Posted to commits@devlake.apache.org by yu...@apache.org on 2023/01/21 08:56:57 UTC

[incubator-devlake-website] branch main updated: feat: freeze version v0.15 (#409)

This is an automated email from the ASF dual-hosted git repository.

yumeng pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/incubator-devlake-website.git


The following commit(s) were added to refs/heads/main by this push:
     new 2c9fd49310 feat: freeze version v0.15 (#409)
2c9fd49310 is described below

commit 2c9fd4931093f2c0c6c6907c7d307d0171207e19
Author: Louis.z <lo...@gmail.com>
AuthorDate: Sat Jan 21 16:56:52 2023 +0800

    feat: freeze version v0.15 (#409)
    
    Co-authored-by: Startrekzky <ka...@merico.dev>
---
 docs/Metrics/PRCodingTime.md                       |   4 +-
 docs/Metrics/PRCount.md                            |  10 +-
 docs/Metrics/PRCycleTime.md                        |   4 +-
 docs/Metrics/PRDeployTime.md                       |   4 +-
 docs/Metrics/PRMergeRate.md                        |  10 +-
 docs/Metrics/PRPickupTime.md                       |   4 +-
 docs/Metrics/PRReviewDepth.md                      |   4 +-
 docs/Metrics/PRReviewTime.md                       |   4 +-
 docs/Metrics/PRSize.md                             |   4 +-
 docs/Metrics/PRTimeToMerge.md                      |   4 +-
 package-lock.json                                  |  15 +
 src/components/Blog/AllPosts.tsx                   |   4 +-
 src/components/Blog/EditorPick.tsx                 |   8 +-
 src/components/Team/Committer.tsx                  |   2 +-
 src/components/Team/Contributor.tsx                |   2 +-
 src/components/Team/PPMC.tsx                       |   2 +-
 .../DataModels/DevLakeDomainLayerSchema.md         | 630 +++++++++++++++++++++
 .../version-v0.15/DataModels/RawLayerSchema.md     |  29 +
 .../version-v0.15/DataModels/SystemTables.md       |  28 +
 .../version-v0.15/DataModels/ToolLayerSchema.md    |  28 +
 .../version-v0.15/DataModels/_category_.json       |   8 +
 .../version-v0.15/DeveloperManuals/DBMigration.md  |  90 +++
 .../version-v0.15/DeveloperManuals/Dal.md          | 173 ++++++
 .../DeveloperManuals/DeveloperSetup.md             | 126 +++++
 .../DeveloperManuals/E2E-Test-Guide.md             | 211 +++++++
 .../DeveloperManuals/Notifications.md              |  32 ++
 .../DeveloperManuals/PluginImplementation.md       | 541 ++++++++++++++++++
 .../version-v0.15/DeveloperManuals/Project.md      | 251 ++++++++
 .../version-v0.15/DeveloperManuals/Release-SOP.md  | 146 +++++
 .../DeveloperManuals/TagNamingConventions.md       |  13 +
 .../version-v0.15/DeveloperManuals/_category_.json |   8 +
 .../version-v0.15/GettingStarted/Authentication.md |  43 ++
 .../GettingStarted/DockerComposeSetup.md           |  41 ++
 .../version-v0.15/GettingStarted/HelmSetup.md      | 157 +++++
 .../GettingStarted/KubernetesSetup.md              |  62 ++
 .../version-v0.15/GettingStarted/RainbondSetup.md  |  39 ++
 .../version-v0.15/GettingStarted/TemporalSetup.md  |  40 ++
 .../version-v0.15/GettingStarted/_category_.json   |   8 +
 .../version-v0.15/Metrics/AddedLinesOfCode.md      |  79 +++
 versioned_docs/version-v0.15/Metrics/BugAge.md     |  77 +++
 .../Metrics/BugCountPer1kLinesOfCode.md            |  88 +++
 versioned_docs/version-v0.15/Metrics/BuildCount.md |  72 +++
 .../version-v0.15/Metrics/BuildDuration.md         |  72 +++
 .../version-v0.15/Metrics/BuildSuccessRate.md      |  89 +++
 versioned_docs/version-v0.15/Metrics/CFR.md        | 149 +++++
 .../version-v0.15/Metrics/CommitAuthorCount.md     |  52 ++
 .../version-v0.15/Metrics/CommitCount.md           |  83 +++
 .../version-v0.15/Metrics/DeletedLinesOfCode.md    |  77 +++
 .../version-v0.15/Metrics/DeploymentFrequency.md   | 169 ++++++
 .../version-v0.15/Metrics/IncidentAge.md           |  76 +++
 .../Metrics/IncidentCountPer1kLinesOfCode.md       |  88 +++
 .../version-v0.15/Metrics/LeadTimeForChanges.md    | 158 ++++++
 versioned_docs/version-v0.15/Metrics/MTTR.md       | 159 ++++++
 .../version-v0.15}/Metrics/PRCodingTime.md         |   4 +-
 .../version-v0.15}/Metrics/PRCount.md              |  10 +-
 .../version-v0.15}/Metrics/PRCycleTime.md          |   4 +-
 .../version-v0.15}/Metrics/PRDeployTime.md         |   4 +-
 .../version-v0.15}/Metrics/PRMergeRate.md          |  10 +-
 .../version-v0.15}/Metrics/PRPickupTime.md         |   4 +-
 .../version-v0.15}/Metrics/PRReviewDepth.md        |   4 +-
 .../version-v0.15}/Metrics/PRReviewTime.md         |   4 +-
 .../version-v0.15}/Metrics/PRSize.md               |   4 +-
 .../version-v0.15}/Metrics/PRTimeToMerge.md        |   4 +-
 .../version-v0.15/Metrics/RequirementCount.md      |  72 +++
 .../Metrics/RequirementDeliveryRate.md             |  88 +++
 .../Metrics/RequirementGranularity.md              |  36 ++
 .../version-v0.15/Metrics/RequirementLeadTime.md   |  79 +++
 .../version-v0.15/Metrics/_category_.json          |   8 +
 .../version-v0.15/Overview/Architecture.md         |  39 ++
 .../version-v0.15/Overview/Introduction.md         |  39 ++
 .../version-v0.15/Overview/KeyConcepts.md          | 110 ++++
 .../version-v0.15/Overview/References.md           |  28 +
 versioned_docs/version-v0.15/Overview/Roadmap.md   |  33 ++
 .../version-v0.15/Overview/SupportedDataSources.md | 179 ++++++
 .../version-v0.15/Overview/_category_.json         |   8 +
 .../version-v0.15/Plugins/_category_.json          |   8 +
 versioned_docs/version-v0.15/Plugins/bitbucket.md  |  77 +++
 versioned_docs/version-v0.15/Plugins/customize.md  |  99 ++++
 versioned_docs/version-v0.15/Plugins/dbt.md        |  67 +++
 versioned_docs/version-v0.15/Plugins/feishu.md     |  71 +++
 versioned_docs/version-v0.15/Plugins/gitee.md      | 106 ++++
 .../version-v0.15/Plugins/gitextractor.md          | 134 +++++
 versioned_docs/version-v0.15/Plugins/github.md     | 141 +++++
 versioned_docs/version-v0.15/Plugins/gitlab.md     |  96 ++++
 versioned_docs/version-v0.15/Plugins/jenkins.md    | 100 ++++
 versioned_docs/version-v0.15/Plugins/jira.md       |  71 +++
 versioned_docs/version-v0.15/Plugins/pagerduty.md  |  78 +++
 versioned_docs/version-v0.15/Plugins/refdiff.md    | 132 +++++
 versioned_docs/version-v0.15/Plugins/tapd.md       |  24 +
 versioned_docs/version-v0.15/Plugins/webhook.md    | 191 +++++++
 versioned_docs/version-v0.15/Plugins/zentao.md     |  24 +
 .../version-v0.15/Troubleshooting/Configuration.md |  74 +++
 .../version-v0.15/Troubleshooting/Dashboard.md     |  13 +
 .../version-v0.15/Troubleshooting/Installation.md  |  12 +
 .../version-v0.15/Troubleshooting/_category_.json  |   8 +
 .../UserManuals/ConfigUI/AdvancedMode.md           | 316 +++++++++++
 .../UserManuals/ConfigUI/BitBucket.md              |  66 +++
 .../version-v0.15/UserManuals/ConfigUI/GitHub.md   | 155 +++++
 .../version-v0.15/UserManuals/ConfigUI/GitLab.md   | 100 ++++
 .../version-v0.15/UserManuals/ConfigUI/Jenkins.md  |  72 +++
 .../version-v0.15/UserManuals/ConfigUI/Jira.md     |  79 +++
 .../version-v0.15/UserManuals/ConfigUI/Tapd.md     |  41 ++
 .../version-v0.15/UserManuals/ConfigUI/Tutorial.md |  93 +++
 .../version-v0.15/UserManuals/ConfigUI/Zentao.md   |  37 ++
 .../UserManuals/ConfigUI/_category_.json           |   4 +
 .../version-v0.15/UserManuals/ConfigUI/webhook.md  |  34 ++
 versioned_docs/version-v0.15/UserManuals/DORA.md   | 187 ++++++
 .../UserManuals/Dashboards/AccessControl.md        |  44 ++
 .../UserManuals/Dashboards/GrafanaUserGuide.md     | 125 ++++
 .../UserManuals/Dashboards/_category_.json         |   4 +
 .../version-v0.15/UserManuals/TeamConfiguration.md | 193 +++++++
 .../version-v0.15/UserManuals/_category_.json      |   8 +
 versioned_sidebars/version-v0.15-sidebars.json     |   8 +
 versions.json                                      |   1 +
 114 files changed, 8080 insertions(+), 61 deletions(-)

diff --git a/docs/Metrics/PRCodingTime.md b/docs/Metrics/PRCodingTime.md
index f9fca08899..7f0ac87f9e 100644
--- a/docs/Metrics/PRCodingTime.md
+++ b/docs/Metrics/PRCodingTime.md
@@ -12,8 +12,8 @@ The time it takes from the first commit until a PR is issued.
 It is recommended that you keep every task on a workable and manageable scale for a reasonably short amount of coding time. The average coding time of most engineering teams is around 3-4 days.
 
 ## Which dashboard(s) does it exist in?
-- [Engineering Throughput and Cycle Time](../../../livedemo/EngineeringLeads/EngineeringThroughputAndCycleTime)
-- [Engineering Throughput and Cycle Time - Team View](../../../livedemo/EngineeringLeads/EngineeringThroughputAndCycleTimeTeamView)
+- [Engineering Throughput and Cycle Time](/livedemo/EngineeringLeads/EngineeringThroughputAndCycleTime)
+- [Engineering Throughput and Cycle Time - Team View](/livedemo/EngineeringLeads/EngineeringThroughputAndCycleTimeTeamView)
 
 
 ## How is it calculated?
diff --git a/docs/Metrics/PRCount.md b/docs/Metrics/PRCount.md
index 367fb8be30..cbef92826c 100644
--- a/docs/Metrics/PRCount.md
+++ b/docs/Metrics/PRCount.md
@@ -14,11 +14,11 @@ The number of pull requests (eg. GitHub PRs, Bitbucket PRs, GitLab MRs) created.
 3. Identify modules with low-quality risks in advance, optimize practices, and precipitate into reusable knowledge and tools to avoid technical debt accumulation
 
 ## Which dashboard(s) does it exist in
-- [GitHub](../../../livedemo/DataSources/GitHub)
-- [GitLab](../../../livedemo/DataSources/GitLab)
-- [Weekly Community Retro](../../../livedemo/OSSMaintainers/WeeklyCommunityRetro)
-- [Engineering Throughput and Cycle Time](../../../livedemo/EngineeringLeads/EngineeringThroughputAndCycleTime)
-- [Engineering Throughput and Cycle Time - Team View](../../../livedemo/EngineeringLeads/EngineeringThroughputAndCycleTimeTeamView)
+- [GitHub](/livedemo/DataSources/GitHub)
+- [GitLab](/livedemo/DataSources/GitLab)
+- [Weekly Community Retro](/livedemo/OSSMaintainers/WeeklyCommunityRetro)
+- [Engineering Throughput and Cycle Time](/livedemo/EngineeringLeads/EngineeringThroughputAndCycleTime)
+- [Engineering Throughput and Cycle Time - Team View](/livedemo/EngineeringLeads/EngineeringThroughputAndCycleTimeTeamView)
 
 
 ## How is it calculated?
diff --git a/docs/Metrics/PRCycleTime.md b/docs/Metrics/PRCycleTime.md
index 3b61a7e3f8..46c7f0cc61 100644
--- a/docs/Metrics/PRCycleTime.md
+++ b/docs/Metrics/PRCycleTime.md
@@ -12,8 +12,8 @@ PR Cycle Time is the sum of PR Coding Time, Pickup Time, Review Time and Deploy
 PR Cycle Time indicates the overall velocity of the delivery progress in terms of PR. 
 
 ## Which dashboard(s) does it exist in?
-- [Engineering Throughput and Cycle Time](../../../livedemo/EngineeringLeads/EngineeringThroughputAndCycleTime)
-- [Engineering Throughput and Cycle Time - Team View](../../../livedemo/EngineeringLeads/EngineeringThroughputAndCycleTimeTeamView)
+- [Engineering Throughput and Cycle Time](/livedemo/EngineeringLeads/EngineeringThroughputAndCycleTime)
+- [Engineering Throughput and Cycle Time - Team View](/livedemo/EngineeringLeads/EngineeringThroughputAndCycleTimeTeamView)
 
 
 ## How is it calculated?
diff --git a/docs/Metrics/PRDeployTime.md b/docs/Metrics/PRDeployTime.md
index ca3046bf1e..077535bfe2 100644
--- a/docs/Metrics/PRDeployTime.md
+++ b/docs/Metrics/PRDeployTime.md
@@ -13,8 +13,8 @@ The time it takes from when a PR is merged to when it is deployed.
 2. Evaluate whether the delivery capacity matches the business phase and demand scale. Identify key bottlenecks and reasonably allocate resources.
 
 ## Which dashboard(s) does it exist in?
-- [Engineering Throughput and Cycle Time](../../../livedemo/EngineeringLeads/EngineeringThroughputAndCycleTime)
-- [Engineering Throughput and Cycle Time - Team View](../../../livedemo/EngineeringLeads/EngineeringThroughputAndCycleTimeTeamView)
+- [Engineering Throughput and Cycle Time](/livedemo/EngineeringLeads/EngineeringThroughputAndCycleTime)
+- [Engineering Throughput and Cycle Time - Team View](/livedemo/EngineeringLeads/EngineeringThroughputAndCycleTimeTeamView)
 
 ## How is it calculated?
 `PR deploy time` is calculated by subtracting a PR's merged_date from its deployed_date. Hence, we should associate PRs/MRs with deployments.
diff --git a/docs/Metrics/PRMergeRate.md b/docs/Metrics/PRMergeRate.md
index 9fa6cb029a..af4e178460 100644
--- a/docs/Metrics/PRMergeRate.md
+++ b/docs/Metrics/PRMergeRate.md
@@ -14,11 +14,11 @@ The ratio of PRs/MRs that get merged.
 3. Identify modules with low-quality risks in advance, optimize practices, and precipitate into reusable knowledge and tools to avoid technical debt accumulation
 
 ## Which dashboard(s) does it exist in
-- [GitHub](../../../livedemo/DataSources/GitHub)
-- [GitLab](../../../livedemo/DataSources/GitLab)
-- [Weekly Community Retro](../../../livedemo/OSSMaintainers/WeeklyCommunityRetro)
-- [Engineering Throughput and Cycle Time](../../../livedemo/EngineeringLeads/EngineeringThroughputAndCycleTime)
-- [Engineering Throughput and Cycle Time - Team View](../../../livedemo/EngineeringLeads/EngineeringThroughputAndCycleTimeTeamView)
+- [GitHub](/livedemo/DataSources/GitHub)
+- [GitLab](/livedemo/DataSources/GitLab)
+- [Weekly Community Retro](/livedemo/OSSMaintainers/WeeklyCommunityRetro)
+- [Engineering Throughput and Cycle Time](/livedemo/EngineeringLeads/EngineeringThroughputAndCycleTime)
+- [Engineering Throughput and Cycle Time - Team View](/livedemo/EngineeringLeads/EngineeringThroughputAndCycleTimeTeamView)
 
 
 ## How is it calculated?
diff --git a/docs/Metrics/PRPickupTime.md b/docs/Metrics/PRPickupTime.md
index d22f77714d..d33a9e46db 100644
--- a/docs/Metrics/PRPickupTime.md
+++ b/docs/Metrics/PRPickupTime.md
@@ -12,8 +12,8 @@ The time it takes from when a PR is issued until the first comment is added to t
 PR Pickup Time shows how engaged your team is in collaborative work by identifying the delay in picking up PRs. 
 
 ## Which dashboard(s) does it exist in?
-- [Engineering Throughput and Cycle Time](../../../livedemo/EngineeringLeads/EngineeringThroughputAndCycleTime)
-- [Engineering Throughput and Cycle Time - Team View](../../../livedemo/EngineeringLeads/EngineeringThroughputAndCycleTimeTeamView)
+- [Engineering Throughput and Cycle Time](/livedemo/EngineeringLeads/EngineeringThroughputAndCycleTime)
+- [Engineering Throughput and Cycle Time - Team View](/livedemo/EngineeringLeads/EngineeringThroughputAndCycleTimeTeamView)
 
 
 ## How is it calculated?
diff --git a/docs/Metrics/PRReviewDepth.md b/docs/Metrics/PRReviewDepth.md
index 7c8c2cc529..4f6a637071 100644
--- a/docs/Metrics/PRReviewDepth.md
+++ b/docs/Metrics/PRReviewDepth.md
@@ -12,8 +12,8 @@ The average number of comments of PRs in the selected time range.
 PR Review Depth (in Comments per PR) is related to the quality of code review, indicating how thoroughly your team reviews PRs.
 
 ## Which dashboard(s) does it exist in?
-- [Engineering Throughput and Cycle Time](../../../livedemo/EngineeringLeads/EngineeringThroughputAndCycleTime)
-- [Engineering Throughput and Cycle Time - Team View](../../../livedemo/EngineeringLeads/EngineeringThroughputAndCycleTimeTeamView)
+- [Engineering Throughput and Cycle Time](/livedemo/EngineeringLeads/EngineeringThroughputAndCycleTime)
+- [Engineering Throughput and Cycle Time - Team View](/livedemo/EngineeringLeads/EngineeringThroughputAndCycleTimeTeamView)
 
 ## How is it calculated?
 This metric is calculated by dividing the total number of PR comments by the total number of PRs in the selected time range.
diff --git a/docs/Metrics/PRReviewTime.md b/docs/Metrics/PRReviewTime.md
index 5754d2555e..e7075db7b2 100644
--- a/docs/Metrics/PRReviewTime.md
+++ b/docs/Metrics/PRReviewTime.md
@@ -14,8 +14,8 @@ Code review should be conducted almost in real-time and usually take less than t
 2. The team is too busy to review code.
 
 ## Which dashboard(s) does it exist in?
-- [Engineering Throughput and Cycle Time](../../../livedemo/EngineeringLeads/EngineeringThroughputAndCycleTime)
-- [Engineering Throughput and Cycle Time - Team View](../../../livedemo/EngineeringLeads/EngineeringThroughputAndCycleTimeTeamView)
+- [Engineering Throughput and Cycle Time](/livedemo/EngineeringLeads/EngineeringThroughputAndCycleTime)
+- [Engineering Throughput and Cycle Time - Team View](/livedemo/EngineeringLeads/EngineeringThroughputAndCycleTimeTeamView)
 
 
 ## How is it calculated?
diff --git a/docs/Metrics/PRSize.md b/docs/Metrics/PRSize.md
index 8e898bdd44..3e24baecc2 100644
--- a/docs/Metrics/PRSize.md
+++ b/docs/Metrics/PRSize.md
@@ -12,8 +12,8 @@ The average code changes (in Lines of Code) of PRs in the selected time range.
 Small PRs can reduce the risk of introducing new bugs and increase code review quality, as problems are often hidden in big chunks of code and difficult to identify.
 
 ## Which dashboard(s) does it exist in?
-- [Engineering Throughput and Cycle Time](../../../livedemo/EngineeringLeads/EngineeringThroughputAndCycleTime)
-- [Engineering Throughput and Cycle Time - Team View](../../../livedemo/EngineeringLeads/EngineeringThroughputAndCycleTimeTeamView)
+- [Engineering Throughput and Cycle Time](/livedemo/EngineeringLeads/EngineeringThroughputAndCycleTime)
+- [Engineering Throughput and Cycle Time - Team View](/livedemo/EngineeringLeads/EngineeringThroughputAndCycleTimeTeamView)
 
 
 ## How is it calculated?
diff --git a/docs/Metrics/PRTimeToMerge.md b/docs/Metrics/PRTimeToMerge.md
index c1bcbeeda1..5a83db129e 100644
--- a/docs/Metrics/PRTimeToMerge.md
+++ b/docs/Metrics/PRTimeToMerge.md
@@ -12,8 +12,8 @@ The time it takes from when a PR is issued to when it is merged. Essentially, PR
 The delay in reviewing and waiting to review PRs has a large impact on delivery speed, while a reasonably short PR Time to Merge can indicate frictionless teamwork. Improving on this metric is key to reducing PR cycle time.
 
 ## Which dashboard(s) does it exist in?
-- [GitHub](../../../livedemo/DataSources/GitHub)
-- [Weekly Community Retro](../../../livedemo/OSSMaintainers/WeeklyCommunityRetro)
+- [GitHub](/livedemo/DataSources/GitHub)
+- [Weekly Community Retro](/livedemo/OSSMaintainers/WeeklyCommunityRetro)
 
 
 ## How is it calculated?
diff --git a/package-lock.json b/package-lock.json
index 2b760ba7a9..3b2f07afde 100644
--- a/package-lock.json
+++ b/package-lock.json
@@ -12,6 +12,7 @@
         "@docusaurus/plugin-content-docs": "^2.0.0-rc.1",
         "@docusaurus/preset-classic": "^2.0.0-rc.1",
         "@mdx-js/react": "^1.6.22",
+        "@tailwindcss/line-clamp": "^0.4.2",
         "autoprefixer": "^10.4.8",
         "clsx": "^1.1.1",
         "dev-website-tailwind-config": "github:merico-dev/dev-website-tailwind-config",
@@ -3106,6 +3107,14 @@
         "node": ">=6"
       }
     },
+    "node_modules/@tailwindcss/line-clamp": {
+      "version": "0.4.2",
+      "resolved": "https://registry.npmmirror.com/@tailwindcss/line-clamp/-/line-clamp-0.4.2.tgz",
+      "integrity": "sha512-HFzAQuqYCjyy/SX9sLGB1lroPzmcnWv1FHkIpmypte10hptf4oPUfucryMKovZh2u0uiS9U5Ty3GghWfEJGwVw==",
+      "peerDependencies": {
+        "tailwindcss": ">=2.0.0 || >=3.0.0 || >=3.0.0-alpha.1"
+      }
+    },
     "node_modules/@trysound/sax": {
       "version": "0.2.0",
       "resolved": "https://registry.npmjs.org/@trysound/sax/-/sax-0.2.0.tgz",
@@ -16076,6 +16085,12 @@
         "defer-to-connect": "^1.0.1"
       }
     },
+    "@tailwindcss/line-clamp": {
+      "version": "0.4.2",
+      "resolved": "https://registry.npmmirror.com/@tailwindcss/line-clamp/-/line-clamp-0.4.2.tgz",
+      "integrity": "sha512-HFzAQuqYCjyy/SX9sLGB1lroPzmcnWv1FHkIpmypte10hptf4oPUfucryMKovZh2u0uiS9U5Ty3GghWfEJGwVw==",
+      "requires": {}
+    },
     "@trysound/sax": {
       "version": "0.2.0",
       "resolved": "https://registry.npmjs.org/@trysound/sax/-/sax-0.2.0.tgz",
diff --git a/src/components/Blog/AllPosts.tsx b/src/components/Blog/AllPosts.tsx
index fe888e0fd2..c341168500 100644
--- a/src/components/Blog/AllPosts.tsx
+++ b/src/components/Blog/AllPosts.tsx
@@ -1,5 +1,5 @@
 import React from "react";
-import BlogInfo from "../../../info/Blog/AllPosts.json";
+import BlogInfo from "/info/Blog/AllPosts.json";
 import { BlogpageBottomBG } from './BlogpageBG';
 import { BlogInfoType } from "./types";
 import dateFormatter from "./utils";
@@ -17,7 +17,7 @@ const ListItem = (props: { cardInfo: BlogInfoType }) => {
     >
       <a href={cardInfo.detailLink}>
         <img
-          src={require(`../../../static/img/Blog/${cardInfo.coverTitle}.png`).default}
+          src={require(`/static/img/Blog/${cardInfo.coverTitle}.png`).default}
           className="
         m-[auto] ml-[88px] sm:ml-[24px] mobile:ml-[0] mobile:mt-4
         w-[400px] sm:w-[310px] mobile:w-[100%] 
diff --git a/src/components/Blog/EditorPick.tsx b/src/components/Blog/EditorPick.tsx
index 3ca99c17e4..f016d29ffa 100644
--- a/src/components/Blog/EditorPick.tsx
+++ b/src/components/Blog/EditorPick.tsx
@@ -1,10 +1,10 @@
 import React from "react";
-import BlogInfo from "../../../info/Blog/EditorPickBlog.json";
+import BlogInfo from "/info/Blog/EditorPickBlog.json";
 import { BlogInfoType } from './types';
 import dateFormatter from "./utils";
-import apacheWelcomesDevLake from '../../../static/img/Blog/apache-welcomes-devLake.png';
-import compatibilityOfApacheDevLakeWithPostgreSQL from '../../../static/img/Blog/compatibility-of-apache-devLake-with-postgreSQL.png';
-import HowDevLakeIsUpAndRunning from '../../../static/img/Blog/How DevLake is up and running.png';
+import apacheWelcomesDevLake from '/static/img/Blog/apache-welcomes-devLake.png';
+import compatibilityOfApacheDevLakeWithPostgreSQL from '/static/img/Blog/compatibility-of-apache-devLake-with-postgreSQL.png';
+import HowDevLakeIsUpAndRunning from '/static/img/Blog/How DevLake is up and running.png';
 
 const coverImgArr = [HowDevLakeIsUpAndRunning, apacheWelcomesDevLake, compatibilityOfApacheDevLakeWithPostgreSQL];
 const Card = function (props: {cardInfo: BlogInfoType, index: number}) {
diff --git a/src/components/Team/Committer.tsx b/src/components/Team/Committer.tsx
index 986f328d81..40ce0bf76a 100644
--- a/src/components/Team/Committer.tsx
+++ b/src/components/Team/Committer.tsx
@@ -1,5 +1,5 @@
 import React from "react";
-import committerInfo from '../../../info/Team/committers.json';
+import committerInfo from '/info/Team/committers.json';
 import { PersonCard } from './PersonCard';
 import { ContributorInfo } from './types';
 
diff --git a/src/components/Team/Contributor.tsx b/src/components/Team/Contributor.tsx
index 43bd2fb019..e082698298 100644
--- a/src/components/Team/Contributor.tsx
+++ b/src/components/Team/Contributor.tsx
@@ -1,5 +1,5 @@
 import React, { useState, useEffect } from "react";
-import Contributors from '../../../info/Team/contributors.json';
+import Contributors from '/info/Team/contributors.json';
 import { PersonCard } from "./PersonCard";
 import { ContributorInfo } from "./types";
 import { TeampageBottomBG } from "./TeampageBG";
diff --git a/src/components/Team/PPMC.tsx b/src/components/Team/PPMC.tsx
index 12cf8d521e..28a1cf0bcf 100644
--- a/src/components/Team/PPMC.tsx
+++ b/src/components/Team/PPMC.tsx
@@ -1,5 +1,5 @@
 import React from "react";
-import ppmcInfo from '../../../info/Team/ppmc.json';
+import ppmcInfo from '/info/Team/ppmc.json';
 import { PersonCard } from './PersonCard';
 import { ContributorInfo } from './types';
 
diff --git a/versioned_docs/version-v0.15/DataModels/DevLakeDomainLayerSchema.md b/versioned_docs/version-v0.15/DataModels/DevLakeDomainLayerSchema.md
new file mode 100644
index 0000000000..3563d90d0d
--- /dev/null
+++ b/versioned_docs/version-v0.15/DataModels/DevLakeDomainLayerSchema.md
@@ -0,0 +1,630 @@
+---
+title: "Domain Layer Schema"
+description: >
+  The data tables to query engineering metrics
+sidebar_position: 1
+---
+
+## Summary
+
+This document describes Apache DevLake's domain layer schema.
+
+Referring to DevLake's [architecture](../Overview/Architecture.md), the data in the domain layer is transformed from the data in the tool layer. The tool layer schema is based on the data from specific tools such as Jira, GitHub, Gitlab, Jenkins, etc. The domain layer schema can be regarded as an abstraction of tool-layer schemas.
+
+<p align="center"><img src="/img/Architecture/arch-dataflow.svg" /></p>
+<p align="center">DevLake Dataflow</p>
+
+The domain layer schema itself includes 2 logical layers: a `DWD` layer and a `DWM` layer. The DWD layer stores detailed data points, while the DWM layer lightly aggregates and processes DWD data to store more organized details or mid-level metrics.
+
+## Use Cases
+
+1. [All metrics](../Metrics) from pre-built dashboards are based on this data schema.
+2. As a user, you can create your own customized dashboards based on this data schema.
+3. As a contributor, you can refer to this data schema while working on the ETL logic when adding/updating data source plugins.
+
+## Data Models
+
+This is the up-to-date domain layer schema for DevLake v0.15.x. Tables (entities) are categorized into 5 domains.
+
+1. Issue tracking domain entities: Jira issues, GitHub issues, GitLab issues, etc.
+2. Source code management domain entities: Git/GitHub/Gitlab commits and refs (tags and branches), etc.
+3. Code review domain entities: GitHub PRs, Gitlab MRs, etc.
+4. CI/CD domain entities: Jenkins jobs & builds, etc.
+5. Cross-domain entities: entities that map entities from different domains to break data isolation.
+
+### Schema Diagram
+
+[![Domain Layer Schema](/img/DomainLayerSchema/schema-diagram.png)](/img/DomainLayerSchema/schema-diagram.png)
+
+When reading the schema, you'll notice that many tables' primary key is called `id`. Unlike auto-increment id or UUID, `id` is a string composed of several parts to uniquely identify similar entities (e.g. repo) from different platforms (e.g. Github/Gitlab) and allow them to co-exist in a single table.
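+
+As an illustration, here is a minimal sketch (TypeScript; the helper name `parseDomainId` is hypothetical) of how such a composite `id` can be split back into its parts:
+
+```typescript
+// Split a domain-layer id such as "jira:JiraIssues:1:10063" into its segments.
+// The first segment names the plugin, the second the tool-layer entity,
+// and the remaining segments are the primary-key parts (PK0, PK1, ...).
+function parseDomainId(id: string): { plugin: string; entity: string; pks: string[] } {
+  const [plugin, entity, ...pks] = id.split(":");
+  return { plugin, entity, pks };
+}
+
+// parseDomainId("github:GithubIssues:1049355647")
+// => { plugin: "github", entity: "GithubIssues", pks: ["1049355647"] }
+```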
+
+Tables that end with WIP are still under development.
+
+### Naming Conventions
+
+1. The name of a table is in plural form. Eg. boards, issues, etc.
+2. The name of a table which describes the relation between 2 entities is in the form of [BigEntity in singular form]\_[SmallEntity in plural form]. Eg. board_issues, sprint_issues, pull_request_comments, etc.
+3. Values of enum-type fields are in capital letters. Eg. [table.issues.type](#issues) has 3 values: REQUIREMENT, BUG, INCIDENT. Values that are phrases, such as 'IN_PROGRESS' of [table.issues.status](#issues), are separated with underscore '\_'.
+
+## How to Customize Data Models
+
+Apache DevLake provides 2 plugins:
+
+- [customize](https://devlake.apache.org/docs/Plugins/customize): to create/delete columns in the domain layer schema with the data extracted from [raw layer tables](https://devlake.apache.org/docs/Overview/Architecture/#dataflow)
+- [dbt](https://devlake.apache.org/docs/Plugins/dbt): to transform data based on the domain layer schema and generate new tables
+
+<br/>
+
+## DWD Entities - (Data Warehouse Detail)
+
+### Domain 1 - Issue Tracking
+
+#### issues
+
+An `issue` is the abstraction of Jira/Github/GitLab/TAPD/... issues.
+
+| **field**                   | **type** | **length** | **description**                                                                                                                                                                                                                                                                                                                                                                                                                                      [...]
+|:----------------------------|:---------|:-----------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- [...]
+| `id`                        | varchar  | 255        | An issue's `id` is composed of "< plugin >:< Entity >:< PK0 >[:PK1]..." <ul><li>For Github issues, a Github issue's id is like "github:GithubIssues:< GithubIssueId >". Eg. 'github:GithubIssues:1049355647'</li> <li>For Jira issues, a Jira issue's id is like "jira:JiraIssues:< JiraSourceId >:< JiraIssueId >". Eg. 'jira:JiraIssues:1:10063'. < JiraSourceId > is used to identify which jira source the issue came from, since DevLake users [...]
+| `issue_key`                 | varchar  | 255        | The key of this issue. For example, the key of this Github [issue](https://github.com/apache/incubator-devlake/issues/1145) is 1145.                                                                                                                                                                                                                                                                                                                 [...]
+| `url`                       | varchar  | 255        | The url of the issue. It's a web address in most cases.                                                                                                                                                                                                                                                                                                                                                                                              [...]
+| `title`                     | varchar  | 255        | The title of an issue                                                                                                                                                                                                                                                                                                                                                                                                                                [...]
+| `description`               | longtext |            | The detailed description/summary of an issue                                                                                                                                                                                                                                                                                                                                                                                                         [...]
+| `type`                      | varchar  | 255        | The standard type of this issue. There're 3 standard types: <ul><li>REQUIREMENT: this issue is a feature</li><li>BUG: this issue is a bug found during test</li><li>INCIDENT: this issue is a bug found after release</li></ul>The 3 standard types are transformed from the original types of an issue. The transformation rule is set in the '.env' file or 'config-ui' before data collection. For issues with an original type that has not mapp [...]
+| `original_type`             | varchar  | 255        | The original type of an issue.                                                                                                                                                                                                                                                                                                                                                                                                                       [...]
+| `status`                    | varchar  | 255        | The standard statuses of this issue. There're 3 standard statuses: <ul><li> TODO: this issue is in backlog or to-do list</li><li>IN_PROGRESS: this issue is in progress</li><li>DONE: this issue is resolved or closed</li></ul>The 3 standard statuses are transformed from the original statuses of an issue. The transformation rule: <ul><li>For Jira issue status: transformed from the Jira issue's `statusCategory`. Jira issue has 3 default [...]
+| `original_status`           | varchar  | 255        | The original status of an issue.                                                                                                                                                                                                                                                                                                                                                                                                                     [...]
+| `story_point`               | int      |            | The story point of this issue. It defaults to an empty value for data sources such as Github issues and Gitlab issues. [...]
+| `priority`                  | varchar  | 255        | The priority of the issue                                                                                                                                                                                                                                                                                                                                                                                                                            [...]
+| `component`                 | varchar  | 255        | The component a bug-issue affects. This field is only supported by the Github plugin for now. The value is transformed from Github issue labels according to the rules users set in .env during DevLake installation. [...]
+| `severity`                  | varchar  | 255        | The severity level of a bug-issue. This field is only supported by the Github plugin for now. The value is transformed from Github issue labels according to the rules users set in .env during DevLake installation. [...]
+| `parent_issue_id`           | varchar  | 255        | The id of its parent issue                                                                                                                                                                                                                                                                                                                                                                                                                           [...]
+| `epic_key`                  | varchar  | 255        | The key of the epic this issue belongs to. For tools with no epic-type issues such as Github and Gitlab, this field defaults to an empty string [...]
+| `original_estimate_minutes` | int      |            | The original estimation of the time allocated for this issue                                                                                                                                                                                                                                                                                                                                                                                         [...]
+| `time_spent_minutes`        | int      |            | The time actually spent on this issue [...]
+| `time_remaining_minutes`    | int      |            | The remaining time to resolve the issue                                                                                                                                                                                                                                                                                                                                                                                                              [...]
+| `creator_id`                | varchar  | 255        | The id of issue creator                                                                                                                                                                                                                                                                                                                                                                                                                              [...]
+| `creator_name`              | varchar  | 255        | The name of the creator                                                                                                                                                                                                                                                                                                                                                                                                                              [...]
+| `assignee_id`               | varchar  | 255        | The id of issue assignee.<ul><li>For Github issues: this is the last assignee of an issue if the issue has multiple assignees</li><li>For Jira issues: this is the assignee of the issue at the time of collection</li></ul>                                                                                                                                                                                                                         [...]
+| `assignee_name`             | varchar  | 255        | The name of the assignee                                                                                                                                                                                                                                                                                                                                                                                                                             [...]
+| `created_date`              | datetime | 3          | The time the issue was created [...]
+| `updated_date`              | datetime | 3          | The last time the issue was updated [...]
+| `resolution_date`           | datetime | 3          | The time the issue changes to 'DONE'.                                                                                                                                                                                                                                                                                                                                                                                                                [...]
+| `lead_time_minutes`         | int      |            | Describes the cycle time from issue creation to issue resolution.<ul><li>For issues whose type = 'REQUIREMENT' and status = 'DONE', lead_time_minutes = resolution_date - created_date. The unit is minute.</li><li>For issues whose type != 'REQUIREMENT' or status != 'DONE', lead_time_minutes is null</li></ul>                                                                                                                                  [...]
+| `original_project`          | varchar  | 255        | The name of the original project. Transformed from a Jira project's name, a TAPD workspace's name, etc.                                                                                                                                                                                                                                                                                                                                              [...]
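+
+The `lead_time_minutes` rule in the table above can be expressed as a small sketch (TypeScript; the function and field names are illustrative, mirroring the table):
+
+```typescript
+// Only resolved requirements get a lead time; everything else yields null,
+// matching the lead_time_minutes definition above.
+function leadTimeMinutes(issue: {
+  type: string;
+  status: string;
+  createdDate: Date;
+  resolutionDate: Date | null;
+}): number | null {
+  if (issue.type !== "REQUIREMENT" || issue.status !== "DONE" || !issue.resolutionDate) {
+    return null;
+  }
+  return Math.round((issue.resolutionDate.getTime() - issue.createdDate.getTime()) / 60000);
+}
+```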
+
+#### issue_labels
+
+This table shows the labels of issues. Multiple entries can exist per issue. This table can be used to filter issues by label name.
+
+| **field**  | **type** | **length** | **description** | **key**      |
+| :--------- | :------- | :--------- | :-------------- | :----------- |
+| `name`     | varchar  | 255        | Label name      |              |
+| `issue_id` | varchar  | 255        | Issue ID        | FK_issues.id |
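+
+For illustration, here is the label filter this table enables (TypeScript with the `mysql2` client; the connection settings are placeholder assumptions, while the table and column names come from the schema above):
+
+```typescript
+import mysql from "mysql2/promise";
+
+// Illustrative only: list issues carrying a given label via the issue_labels join table.
+async function issuesWithLabel(label: string) {
+  const conn = await mysql.createConnection({ host: "localhost", user: "merico", database: "lake" });
+  const [rows] = await conn.query(
+    `SELECT i.issue_key, i.title
+       FROM issues i
+       JOIN issue_labels il ON il.issue_id = i.id
+      WHERE il.name = ?`,
+    [label]
+  );
+  await conn.end();
+  return rows;
+}
+```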
+
+#### issue_comments(WIP)
+
+This table shows the comments of issues. Issues with multiple comments are shown as multiple records. This table can be used to calculate _metric - issue response time_.
+
+| **field**      | **type** | **length** | **description**                            | **key**        |
+| :------------- | :------- | :--------- | :----------------------------------------- | :------------- |
+| `id`           | varchar  | 255        | The unique id of a comment                 | PK             |
+| `issue_id`     | varchar  | 255        | Issue ID                                   | FK_issues.id   |
+| `account_id`   | varchar  | 255        | The id of the account who made the comment | FK_accounts.id |
+| `body`         | longtext |            | The body/detail of the comment             |                |
+| `created_date` | datetime | 3          | The creation date of the comment           |                |
+| `updated_date` | datetime | 3          | The last time the comment was updated      |                |
+
+#### issue_changelogs
+
+This table shows the changelogs of issues. Issues with multiple changelogs are shown as multiple records. This is transformed from Jira or TAPD changelogs.
+
+| **field**             | **type** | **length** | **description**                                                  | **key**        |
+| :-------------------- | :------- | :--------- | :--------------------------------------------------------------- | :------------- |
+| `id`                  | varchar  | 255        | The unique id of an issue changelog                              | PK             |
+| `issue_id`            | varchar  | 255        | Issue ID                                                         | FK_issues.id   |
+| `author_id`           | varchar  | 255        | The id of the user who made the change                           | FK_accounts.id |
+| `author_name`         | varchar  | 255        | The name of the user who made the change                         |                |
+| `field_id`            | varchar  | 255        | The id of the changed field                                      |                |
+| `field_name`          | varchar  | 255        | The name of the changed field                                    |                |
+| `original_from_value` | varchar  | 255        | The original value of the changed field                          |                |
+| `original_to_value`   | varchar  | 255        | The new value of the changed field                               |                |
+| `from_value`          | varchar  | 255        | The transformed/standardized original value of the changed field |                |
+| `to_value`            | varchar  | 255        | The transformed/standardized new value of the changed field      |                |
+| `created_date`        | datetime | 3          | The creation date of the changelog                               |                |
+
+#### issue_worklogs
+
+This table shows the work logged under issues. Usually, an issue has multiple worklogs logged by different developers.
+
+| **field**            | **type** | **length** | **description**                                                                         | **key**        |
+| :------------------- | :------- | :--------- | :-------------------------------------------------------------------------------------- | :------------- |
+| `id`                 | varchar  | 255        | The id of the worklog                                                                   | PK             |
+| `author_id`          | varchar  | 255        | The id of the author who logged the work                                                | FK_accounts.id |
+| `comment`            | longtext | 255        | The comment made while logging the work.                                                |                |
+| `time_spent_minutes` | int      |            | The time logged. The value is normalized to minutes. Eg. 1d => 480, 4h30m => 270        |                |
+| `logged_date`        | datetime | 3          | The time of this logging action                                                         |                |
+| `started_date`       | datetime | 3          | Start time of the worklog                                                               |                |
+| `issue_id`           | varchar  | 255        | Issue ID                                                                                | FK_issues.id   |
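+
+A minimal sketch (TypeScript; the function name is hypothetical) of the normalization behind the examples above, assuming an 8-hour workday so that 1d => 480:
+
+```typescript
+// Convert tracker-style durations ("1d", "4h30m") to minutes, with 1d = 8h = 480m.
+// Real trackers usually make the workday length configurable.
+function toMinutes(spent: string): number {
+  const m = spent.match(/^(?:(\d+)d)?(?:(\d+)h)?(?:(\d+)m)?$/);
+  if (!m) throw new Error(`unrecognized duration: ${spent}`);
+  const [, d = "0", h = "0", min = "0"] = m;
+  return Number(d) * 480 + Number(h) * 60 + Number(min);
+}
+
+// toMinutes("1d") => 480; toMinutes("4h30m") => 270
+```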
+
+#### boards
+
+A `board` is an issue list or a collection of issues. It's the abstraction of a Jira board, a Jira project, a [GitHub issue list](https://github.com/apache/incubator-devlake/issues) or a GitLab issue list. This table can be used to filter issues by the boards they belong to.
+
+| **field**      | **type** | **length** | **description**                                                                                                                                                                                                                                                                                                                                                                      | **key** |
+| :------------- | :------- | :--------- | :----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :------ |
+| `id`           | varchar  | 255        | A board's `id` is composed of "< plugin >:< Entity >:< PK0 >[:PK1]..." <ul><li>For a Github repo's issue list, the board id is like "< github >:< GithubRepos >:< GithubRepoId >". Eg. "github:GithubRepo:384111310"</li> <li>For a Jira Board, the id is like "< jira >:< JiraSourceId >:< JiraBoards >:< JiraBoardsId >". Eg. "jira:1:JiraBoards:12"</li></ul> | PK      |
+| `name`         | varchar  | 255        | The name of the board. Note: the board name of a Github project 'apache/incubator-devlake' is 'apache/incubator-devlake', representing the [default issue list](https://github.com/apache/incubator-devlake/issues).                                                                                                                                                                 |         |
+| `description`  | varchar  | 255        | The description of the board.                                                                                                                                                                                                                                                                                                                                                        |         |
+| `url`          | varchar  | 255        | The url of the board. Eg. https://github.com/apache/incubator-devlake                                                                                                                                                                                                                                                                                                                |         |
+| `created_date` | datetime | 3          | Board creation time                                                                                                                                                                                                                                                                                                                                                                  |         |
+| `type`         | varchar  | 255        | Identify scrum and non-scrum board                                                                                                                                                                                                                                                                                                                                                   |         |
+
+#### board_issues
+
+This table shows the relation between boards and issues. This table can be used to filter issues by board.
+
+| **field**  | **type** | **length** | **description** | **key**      |
+| :--------- | :------- | :--------- | :-------------- | :----------- |
+| `board_id` | varchar  | 255        | Board id        | FK_boards.id |
+| `issue_id` | varchar  | 255        | Issue id        | FK_issues.id |
+
+#### sprints
+
+A `sprint` is the abstraction of Jira sprints, TAPD iterations and GitHub milestones. A sprint contains a list of issues.
+
+| **field**           | **type** | **length** | **description**                                                                                                                                                                                                                                                                                                                                                                                                                                              [...]
+| :------------------ | :------- | :--------- | :----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- [...]
+| `id`                | varchar  | 255        | A sprint's `id` is composed of "< plugin >:< Entity >:< PK0 >[:PK1]..."<ul><li>A sprint in a Github repo is a milestone, so the sprint id is like "< github >:< GithubRepos >:< GithubRepoId >:< milestoneNumber >".<br/>Eg. The id for this [sprint](https://github.com/apache/incubator-devlake/milestone/5) is "github:GithubRepo:384111310:5"</li><li>For a Jira Board, the id is like "< jira >:< JiraSourceId >:< JiraBoards >:< JiraBoardsId >".<br/>Eg. [...]
+| `name`              | varchar  | 255        | The name of sprint.<br/>For Github projects, the sprint name is the milestone name. For instance, 'v0.10.0 - Introduce Temporal to DevLake' is the name of this [sprint](https://github.com/apache/incubator-devlake/milestone/5).                                                                                                                                                                                                                           [...]
+| `url`               | varchar  | 255        | The url of sprint.                                                                                                                                                                                                                                                                                                                                                                                                                                           [...]
+| `status`            | varchar  | 255        | There're 3 statuses of a sprint:<ul><li>CLOSED: a completed sprint</li><li>ACTIVE: a sprint started but not completed</li><li>FUTURE: a sprint that has not started</li></ul>                                                                                                                                                                                                                                                                                [...]
+| `started_date`      | datetime | 3          | The start time of a sprint                                                                                                                                                                                                                                                                                                                                                                                                                                   [...]
+| `ended_date`        | datetime | 3          | The planned/estimated end time of a sprint. It's usually set when planning a sprint.                                                                                                                                                                                                                                                                                                                                                                         [...]
+| `completed_date`    | datetime | 3          | The actual time to complete a sprint.                                                                                                                                                                                                                                                                                                                                                                                                                        [...]
+| `original_board_id` | varchar  | 255        | The id of the board where the sprint was first created. This field is not null only when this entity is transformed from Jira sprints.<br/>In Jira, sprint and board entities have 2 types of relations:<ul><li>A sprint is created based on a specific board. In this case, board(1):(n)sprint. The `original_board_id` is used to show the relation.</li><li>A sprint can be mapped to multiple boards, and a board can also show multiple sprints. In this case, board [...]
+
+#### sprint_issues
+
+This table shows the relation between sprints and issues that have been added to sprints. This table can be used to show metrics such as _'ratio of unplanned issues'_, _'completion rate of sprint issues'_, etc.
+
+| **field**        | **type** | **length** | **description**                                                                                                                                                                                                                                                                                                                                                                                                                                                 [...]
+| :--------------- | :------- | :--------- | :-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- [...]
+| `sprint_id`      | varchar  | 255        | Sprint id                                                                                                                                                                                                                                                                                                                                                                                                                                                       [...]
+| `issue_id`       | varchar  | 255        | Issue id                                                                                                                                                                                                                                                                                                                                                                                                                                                        [...]
+| `is_removed`     | bool     |            | If the issue is removed from this sprint, then TRUE; else FALSE                                                                                                                                                                                                                                                                                                                                                                                                 [...]
+| `added_date`     | datetime | 3          | The time this issue added to the sprint. If an issue is added to a sprint multiple times, the latest time will be the value.                                                                                                                                                                                                                                                                                                                                    [...]
+| `removed_date`   | datetime | 3          | The time this issue gets removed from the sprint. If an issue is removed multiple times, the latest time will be the value.                                                                                                                                                                                                                                                                                                                                     [...]
+| `added_stage`    | varchar  | 255        | The stage when an issue is added to this sprint. There are 3 possible values:<ul><li>BEFORE_SPRINT<br/>Planning before a sprint starts.<br/>Condition: sprint_issues.added_date <= sprints.start_date</li><li>DURING_SPRINT<br/>Planning during a sprint.<br/>Condition: sprints.start_date < sprint_issues.added_date <= sprints.end_date</li><li>AFTER_SPRINT<br/>Planning after a sprint. This is caused by improper operation - adding issues to a completed sprint.< [...]
+| `resolved_stage` | varchar  | 255        | The stage when an issue is resolved (issue status turns to 'DONE'). There are 3 possible values:<ul><li>BEFORE_SPRINT<br/>Condition: issues.resolution_date <= sprints.start_date</li><li>DURING_SPRINT<br/>Condition: sprints.start_date < issues.resolution_date <= sprints.end_date</li><li>AFTER_SPRINT<br/>Condition: issues.resolution_date > sprints.end_date</li></ul> [...]
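+
+For instance, the _'ratio of unplanned issues'_ can be sketched with a query like the one below. This is a minimal sketch, assuming 'unplanned' means an issue was added after its sprint started:
+
+```sql
+-- Hypothetical example: share of issues added to each sprint after it started.
+SELECT
+  sprint_id,
+  SUM(CASE WHEN added_stage <> 'BEFORE_SPRINT' THEN 1 ELSE 0 END) / COUNT(*) AS unplanned_ratio
+FROM sprint_issues
+WHERE is_removed = FALSE
+GROUP BY sprint_id;
+```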
+
+#### board_sprints
+
+| **field**   | **type** | **length** | **description** | **key**       |
+| :---------- | :------- | :--------- | :-------------- | :------------ |
+| `board_id`  | varchar  | 255        | Board id        | FK_boards.id  |
+| `sprint_id` | varchar  | 255        | Sprint id       | FK_sprints.id |
+
+<br/>
+
+### Domain 2 - Source Code Management
+
+#### repos
+
+Information about GitHub or GitLab repositories. A repository is always owned by a user.
+
+| **field**      | **type** | **length** | **description**                                                                                                                                                                                    | **key**        |
+| :------------- | :------- | :--------- | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :------------- |
+| `id`           | varchar  | 255        | A repo's `id` is composed of "< plugin >:< Entity >:< PK0 >[:PK1]..."<br/>For example, a Github repo's id is like "< github >:< GithubRepos >< GithubRepoId >". Eg. 'github:GithubRepos:384111310' | PK             |
+| `name`         | varchar  | 255        | The name of repo.                                                                                                                                                                                  |                |
+| `description`  | varchar  | 255        | The description of repo.                                                                                                                                                                           |                |
+| `url`          | varchar  | 255        | The url of repo. Eg. https://github.com/apache/incubator-devlake                                                                                                                                   |                |
+| `owner_id`     | varchar  | 255        | The id of the owner of repo                                                                                                                                                                        | FK_accounts.id |
+| `language`     | varchar  | 255        | The major language of repo. Eg. The language for apache/incubator-devlake is 'Go'                                                                                                                  |                |
+| `forked_from`  | varchar  | 255        | Empty unless the repo is a fork, in which case it contains the `id` of the repo it is forked from.                                                                                                 |                |
+| `deleted`      | tinyint  | 255        | 0: the repo is active; 1: the repo has been deleted                                                                                                                                                |                |
+| `created_date` | datetime | 3          | Repo creation date                                                                                                                                                                                 |                |
+| `updated_date` | datetime | 3          | The time when the last full update was done for this repo                                                                                                                                          |                |
+
+#### repo_languages(WIP)
+
+Languages that are used in the repository along with byte counts for all files in those languages. This is in line with how GitHub calculates language percentages in a repository. Multiple entries can exist per repo.
+
+The table is filled in when the repo is first inserted or when an update round for all repos is made.
+
+| **field**      | **type** | **length** | **description**                                                                                                                                                                                    | **key** |
+| :------------- | :------- | :--------- | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :------ |
+| `id`           | varchar  | 255        | A repo's `id` is composed of "< plugin >:< Entity >:< PK0 >[:PK1]..."<br/>For example, a Github repo's id is like "< github >:< GithubRepos >< GithubRepoId >". Eg. 'github:GithubRepos:384111310' | PK      |
+| `language`     | varchar  | 255        | The language of repo.<br/>These are the [languages](https://api.github.com/repos/apache/incubator-devlake/languages) for apache/incubator-devlake                                                  |         |
+| `bytes`        | int      |            | The byte counts for all files in those languages                                                                                                                                                   |         |
+| `created_date` | datetime | 3          | The field is filled with the latest timestamp at which the query for a specific `repo_id` was made.                                                                                               |         |
+
+#### repo_commits
+
+The commits that belong to the history of a repository. More than one repo can share the same commits if one is a fork of the other.
+
+| **field**    | **type** | **length** | **description** | **key**        |
+| :----------- | :------- | :--------- | :-------------- | :------------- |
+| `repo_id`    | varchar  | 255        | Repo id         | FK_repos.id    |
+| `commit_sha` | char     | 40         | Commit sha      | FK_commits.sha |
+
+#### refs
+
+A ref is the abstraction of a branch or tag.
+
+| **field**    | **type** | **length** | **description**                                                                                                                                                                                                                                                                                                                                             | **key**     |
+| :----------- | :------- | :--------- | :---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :---------- |
+| `id`         | varchar  | 255        | A ref's `id` is composed of "< plugin >:< Entity >:< PK0 >[:PK1]..."<br/>For example, a Github ref is composed of "github:GithubRepos:< GithubRepoId >:< RefUrl >". Eg. The id of release v5.3.0 of the PingCAP/TiDB project is 'github:GithubRepos:384111310:refs/tags/v5.3.0'                                                                             | PK          |
+| `ref_name`   | varchar  | 255        | The name of ref. Eg. '[refs/tags/v0.9.3](https://github.com/apache/incubator-devlake/tree/v0.9.3)'                                                                                                                                                                                                                                                          |             |
+| `repo_id`    | varchar  | 255        | The id of repo this ref belongs to                                                                                                                                                                                                                                                                                                                          | FK_repos.id |
+| `commit_sha` | char     | 40         | The commit this ref points to at the time of collection                                                                                                                                                                                                                                                                                                     |             |
+| `is_default` | int      |            | <ul><li>1: the ref is the default branch. By the definition of [Github](https://docs.github.com/en/repositories/configuring-branches-and-merges-in-your-repository/managing-branches-in-your-repository/changing-the-default-branch), the default branch is the base branch for pull requests and code commits.</li><li>0: the ref is not the default branch</li></ul> |             |
+| `merge_base` | char     | 40         | The merge base commit of the main ref and the current ref                                                                                                                                                                                                                                                                                                   |             |
+| `ref_type`   | varchar  | 64         | There are 2 typical types:<ul><li>BRANCH</li><li>TAG</li></ul>                                                                                                                                                                                                                                                                                              |             |
+
+#### commits_diffs
+
+This table shows the commits added in a new commit compared to an old commit. This table can be used to support tag-based and deploy-based metrics.
+
+The records of this table are computed by the [RefDiff](https://github.com/apache/incubator-devlake/tree/main/plugins/refdiff) plugin. The computation should be manually triggered after using [GitRepoExtractor](https://github.com/apache/incubator-devlake/tree/main/plugins/gitextractor) to collect commits and refs. The algorithm behind it is similar to [this](https://github.com/apache/incubator-devlake/compare/v0.8.0%E2%80%A6v0.9.0).
+
+| **field**        | **type** | **length** | **description**                                                            | **key** |
+| :--------------- | :------- | :--------- | :------------------------------------------------------------------------- | :------ |
+| `new_commit_sha` | char     | 40         | The commit the new ref/deployment points to at the time of collection      | PK      |
+| `old_commit_sha` | char     | 40         | The commit the old ref/deployment points to at the time of collection      | PK      |
+| `commit_sha`     | char     | 40         | One of the added commits in the new ref compared to the old ref/deployment | PK      |
+| `sorting_index`  | varchar  | 255        | An index for debugging, please skip it                                     |         |
+
+#### finished_commits_diffs
+
+This table records the `new_commit_sha` and `old_commit_sha` pairs from commits_diffs that have been calculated successfully.
+
+| **field**        | **type** | **length** | **description**                                                       | **key** |
+| :--------------- | :------- | :--------- | :-------------------------------------------------------------------- | :------ |
+| `new_commit_sha` | char     | 40         | The commit the new ref/deployment points to at the time of collection | PK      |
+| `old_commit_sha` | char     | 40         | The commit the old ref/deployment points to at the time of collection | PK      |
+
+#### ref_commits
+
+| **field**        | **type** | **length** | **description**                                        | **key** |
+| :--------------- | :------- | :--------- | :----------------------------------------------------- | :------ |
+| `new_ref_id`     | varchar  | 255        | The new ref's id for comparison                        | PK      |
+| `old_ref_id`     | varchar  | 255        | The old ref's id for comparison                        | PK      |
+| `new_commit_sha` | char     | 40         | The commit the new ref points to at the time of collection |         |
+| `old_commit_sha` | char     | 40         | The commit the old ref points to at the time of collection |         |
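+
+As an illustration, joining `ref_commits` with `commits_diffs` counts the commits added between two refs, e.g. two release tags. This is a minimal sketch, not a shipped dashboard query:
+
+```sql
+-- Hypothetical example: number of commits added between each pair of compared refs.
+SELECT
+  rc.old_ref_id,
+  rc.new_ref_id,
+  COUNT(cd.commit_sha) AS added_commits
+FROM ref_commits rc
+JOIN commits_diffs cd
+  ON cd.new_commit_sha = rc.new_commit_sha
+ AND cd.old_commit_sha = rc.old_commit_sha
+GROUP BY rc.old_ref_id, rc.new_ref_id;
+```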
+
+#### commits
+
+| **field**         | **type** | **length** | **description**                                                                                                                                                         | **key**        |
+| :---------------- | :------- | :--------- | :---------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :------------- |
+| `sha`             | char     | 40         | The sha (unique identifier) of the commit                                                                                                                               | PK             |
+| `message`         | varchar  | 255        | Commit message                                                                                                                                                          |                |
+| `author_name`     | varchar  | 255        | The name of the commit author. The value is set with the command `git config user.name xxxxx`                                                                           |                |
+| `author_email`    | varchar  | 255        | The email of the commit author. The value is set with the command `git config user.email xxxxx`                                                                         |                |
+| `authored_date`   | datetime | 3          | The date when this commit was originally made                                                                                                                           |                |
+| `author_id`       | varchar  | 255        | The id of commit author                                                                                                                                                 | FK_accounts.id |
+| `committer_name`  | varchar  | 255        | The name of committer                                                                                                                                                   |                |
+| `committer_email` | varchar  | 255        | The email of committer                                                                                                                                                  |                |
+| `committed_date`  | datetime | 3          | The last time the commit gets modified.<br/>For example, when rebasing the branch where the commit is in on another branch, the committed_date changes.                 |                |
+| `committer_id`    | varchar  | 255        | The id of committer                                                                                                                                                     | FK_accounts.id |
+| `additions`       | int      |            | Added lines of code                                                                                                                                                     |                |
+| `deletions`       | int      |            | Deleted lines of code                                                                                                                                                   |                |
+| `dev_eq`          | int      |            | A metric that quantifies the amount of code contribution. The data can be retrieved from [AE plugin](https://github.com/apache/incubator-devlake/tree/main/plugins/ae). |                |
+
+#### commit_files
+
+The files that have been changed via commits.
+
+| **field**    | **type** | **length** | **description**                                        | **key**        |
+| :----------- | :------- | :--------- | :----------------------------------------------------- | :------------- |
+| `id`         | varchar  | 255        | The `id` is composed of "< Commit_sha >:< file_path >" | PK             |
+| `commit_sha` | char     | 40         | Commit sha                                             | FK_commits.sha |
+| `file_path`  | varchar  | 255        | Path of a changed file in a commit                     |                |
+| `additions`  | int      |            | The added lines of code in this file by the commit     |                |
+| `deletions`  | int      |            | The deleted lines of code in this file by the commit   |                |
+
+#### components
+
+The components of files extracted from the file paths. This can be used to analyze Git metrics by component.
+
+| **field**    | **type** | **length** | **description**                                        | **key**     |
+| :----------- | :------- | :--------- | :----------------------------------------------------- | :---------- |
+| `repo_id`    | varchar  | 255        | The repo id                                            | FK_repos.id |
+| `name`       | varchar  | 255        | The name of component                                  |             |
+| `path_regex` | varchar  | 255        | The regex to extract components from this repo's paths |             |
+
+#### commit_file_components
+
+The relationship between commit_file and component_name.
+
+| **field**        | **type** | **length** | **description**              | **key**            |
+| :--------------- | :------- | :--------- | :--------------------------- | :----------------- |
+| `commit_file_id` | varchar  | 255        | The id of commit file        | FK_commit_files.id |
+| `component_name` | varchar  | 255        | The component name of a file |                    |
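+
+For example, Git metrics can be grouped by component with a join like the following (a minimal sketch):
+
+```sql
+-- Hypothetical example: added/deleted lines of code per component.
+SELECT
+  cfc.component_name,
+  SUM(cf.additions) AS added_lines,
+  SUM(cf.deletions) AS deleted_lines
+FROM commit_file_components cfc
+JOIN commit_files cf ON cf.id = cfc.commit_file_id
+GROUP BY cfc.component_name;
+```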
+
+#### commit_parents
+
+The parent commit(s) for each commit, as specified by Git.
+
+| **field**    | **type** | **length** | **description**   | **key**        |
+| :----------- | :------- | :--------- | :---------------- | :------------- |
+| `commit_sha` | char     | 40         | commit sha        | FK_commits.sha |
+| `parent`     | char     | 40         | Parent commit sha | FK_commits.sha |
+
+<br/>
+
+### Domain 3 - Code Review
+
+#### pull_requests
+
+A pull request is the abstraction of a GitHub pull request or a GitLab merge request.
+
+| **field**          | **type** | **length** | **description**                                                                                                                                                                                                                                                                                                                                                                                | **key**        |
+| :----------------- | :------- | :--------- | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :------------- |
+| `id`               | varchar  | 255        | A pull request's `id` is composed of "< plugin >:< Entity >:< PK0 >[:PK1]...". Eg. 'github:GithubPullRequests:1347'                                                                                                                                                                                                                                                                            | PK             |
+| `title`            | varchar  | 255        | The title of pull request                                                                                                                                                                                                                                                                                                                                                                      |                |
+| `description`      | longtext |            | The body/description of pull request                                                                                                                                                                                                                                                                                                                                                           |                |
+| `status`           | varchar  | 255        | The status of a pull request. For a Github pull request, the status can either be 'open' or 'closed'.                                                                                                                                                                                                                                                                                          |                |
+| `parent_pr_id`     | varchar  | 255        | The id of the parent PR                                                                                                                                                                                                                                                                                                                                                                        |                |
+| `pull_request_key` | varchar  | 255        | The key of the PR. Eg. 1563 is the key of this [PR](https://github.com/apache/incubator-devlake/pull/1563)                                                                                                                                                                                                                                                                                     |                |
+| `base_repo_id`     | varchar  | 255        | The repo that will be updated.                                                                                                                                                                                                                                                                                                                                                                 |                |
+| `head_repo_id`     | varchar  | 255        | The repo containing the changes that will be added to the base. If the head repository is NULL, this means that the corresponding project had been deleted when DevLake processed the pull request.                                                                                                                                                                                            |                |
+| `base_ref`         | varchar  | 255        | The branch name in the base repo that will be updated                                                                                                                                                                                                                                                                                                                                          |                |
+| `head_ref`         | varchar  | 255        | The branch name in the head repo that contains the changes that will be added to the base                                                                                                                                                                                                                                                                                                      |                |
+| `author_name`      | varchar  | 255        | The author's name of the pull request                                                                                                                                                                                                                                                                                                                                                          |                |
+| `author_id`        | varchar  | 255        | The author's id of the pull request                                                                                                                                                                                                                                                                                                                                                            |                |
+| `url`              | varchar  | 255        | The web link of the pull request                                                                                                                                                                                                                                                                                                                                                               |                |
+| `type`             | varchar  | 255        | The work-type of a pull request. For example: feature-development, bug-fix, docs, etc.<br/>The value is transformed from Github pull request labels by configuring `GITHUB_PR_TYPE` in `.env` file during installation.                                                                                                                                                                        |                |
+| `component`        | varchar  | 255        | The component this PR affects.<br/>The value is transformed from Github/Gitlab pull request labels by configuring `GITHUB_PR_COMPONENT` in `.env` file during installation.                                                                                                                                                                                                                    |                |
+| `created_date`     | datetime | 3          | The time when the PR was created.                                                                                                                                                                                                                                                                                                                                                              |                |
+| `merged_date`      | datetime | 3          | The time when the PR was merged. Null when the PR is not merged.                                                                                                                                                                                                                                                                                                                               |                |
+| `closed_date`      | datetime | 3          | The time when the PR was closed. Null when the PR is not closed.                                                                                                                                                                                                                                                                                                                               |                |
+| `merge_commit_sha` | char     | 40         | The merge commit of this PR. By the definition of [Github](https://docs.github.com/en/repositories/configuring-branches-and-merges-in-your-repository/managing-branches-in-your-repository/changing-the-default-branch), when you click the default Merge pull request option on a pull request on Github, all commits from the feature branch are added to the base branch in a merge commit. |                |
+| `base_commit_sha`  | char     | 40         | The base commit of this PR.                                                                                                                                                                                                                                                                                                                                                                    |                |
+| `head_commit_sha`  | char     | 40         | The head commit of this PR.                                                                                                                                                                                                                                                                                                                                                                    |                |
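+
+As an example, a per-repo PR merge rate can be derived directly from this table. This is a minimal sketch, assuming merge rate = merged PRs / all closed PRs:
+
+```sql
+-- Hypothetical example: merged PRs as a share of all closed PRs, per base repo.
+SELECT
+  base_repo_id,
+  SUM(CASE WHEN merged_date IS NOT NULL THEN 1 ELSE 0 END) / COUNT(*) AS merge_rate
+FROM pull_requests
+WHERE status = 'closed'
+GROUP BY base_repo_id;
+```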
+
+#### pull_request_labels
+
+This table shows the labels of pull requests. Multiple entries can exist per pull request. This table can be used to filter pull requests by label name.
+
+| **field**         | **type** | **length** | **description** | **key**             |
+| :---------------- | :------- | :--------- | :-------------- | :------------------ |
+| `name`            | varchar  | 255        | Label name      |                     |
+| `pull_request_id` | varchar  | 255        | Pull request ID | FK_pull_requests.id |
+
+#### pull_request_commits
+
+A commit associated with a pull request
+
+The list is additive. This means if a rebase with commit squashing takes place after the commits of a pull request have been processed, the old commits will not be deleted.
+
+| **field**         | **type** | **length** | **description** | **key**             |
+| :---------------- | :------- | :--------- | :-------------- | :------------------ |
+| `pull_request_id` | varchar  | 255        | Pull request id | FK_pull_requests.id |
+| `commit_sha`      | char     | 40         | Commit sha      | FK_commits.sha      |
+
+#### pull_request_comments
+
+Normal comments, review bodies, reviews' inline comments of GitHub's pull requests or GitLab's merge requests.
+
+| **field**         | **type** | **length** | **description**                                                                                                                                            | **key**             |
+| :---------------- | :------- | :--------- | :--------------------------------------------------------------------------------------------------------------------------------------------------------- | :------------------ |
+| `id`              | varchar  | 255        | Comment id                                                                                                                                                 | PK                  |
+| `pull_request_id` | varchar  | 255        | Pull request id                                                                                                                                            | FK_pull_requests.id |
+| `body`            | longtext |            | The body of the comments                                                                                                                                   |                     |
+| `account_id`      | varchar  | 255        | The account who made the comment                                                                                                                           | FK_accounts.id      |
+| `created_date`    | datetime | 3          | Comment creation time                                                                                                                                      |                     |
+| `position`        | int      |            | Deprecated                                                                                                                                                 |                     |
+| `type`            | varchar  | 255        | - For normal comments: NORMAL<br/> - For review comments, i.e. diff/inline comments: DIFF<br/> - For reviews' body (exists in GitHub but not GitLab): REVIEW |                     |
+| `review_id`       | varchar  | 255        | Review_id of the comment if the type is `REVIEW` or `DIFF`                                                                                                 |                     |
+| `status`          | varchar  | 255        | Status of the comment                                                                                                                                      |                     |
+
+#### pull_request_events(WIP)
+
+Events of pull requests.
+
+| **field**         | **type** | **length** | **description**                                                                                                                                                                                                                                                                                                                                                                                                                                          | **k [...]
+| :---------------- | :------- | :--------- | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :-- [...]
+| `id`              | varchar  | 255        | Event id                                                                                                                                                                                                                                                                                                                                                                                                                                                 | PK  [...]
+| `pull_request_id` | varchar  | 255        | Pull request id                                                                                                                                                                                                                                                                                                                                                                                                                                          | FK_ [...]
+| `action`          | varchar  | 255        | The action to be taken, some values:<ul><li>`opened`: When the pull request has been opened</li><li>`closed`: When the pull request has been closed</li><li>`merged`: When Github detected that the pull request has been merged. No merges outside Github (i.e. Git based) are reported</li><li>`reopened`: When a pull request is opened after being closed</li><li>`synchronize`: When new commits are added to or removed from the head repository</li></ul> | [...]
+| `actor_id`        | varchar  | 255        | The account id of the event performer                                                                                                                                                                                                                                                                                                                                                                                                                    | FK_ [...]
+| `created_date`    | datetime | 3          | Event creation time                                                                                                                                                                                                                                                                                                                                                                                                                                      |     [...]
+
+<br/>
+
+### Domain 4 - CI/CD(WIP)
+
+#### cicd_pipelines
+
+A cicd_pipeline is a series of connected builds, or a standalone build.
+
+| **field**       | **type**        | **length** | **description**                                                                               | **key** |
+| :-------------- | :-------------- | :--------- | :-------------------------------------------------------------------------------------------- | :------ |
+| `id`            | varchar         | 255        | This key is generated based on details from the original plugin                               | PK      |
+| `name`          | varchar         | 255        | GitLab pipelines have no name, so the projectId is used instead; other tools have their own pipeline names |         |
+| `result`        | varchar         | 100        | The result of this pipeline                                                                    |         |
+| `status`        | varchar         | 100        | The status of this pipeline                                                                    |         |
+| `type`          | varchar         | 100        | To indicate if this pipeline is a DEPLOYMENT                                                   |         |
+| `duration_sec`  | bigint unsigned |            | How long this pipeline took                                                                    |         |
+| `started_date`  | datetime        | 3          | When this pipeline started                                                                     |         |
+| `finished_date` | datetime        | 3          | When this pipeline finished                                                                    |         |
+| `environment`   | varchar         | 255        | To indicate the environment in which the pipeline ran                                          |         |
+
+#### cicd_pipeline_commits
+
+| **field**     | **type** | **length** | **description**                                                 | **key** |
+| :------------ | :------- | :--------- | :-------------------------------------------------------------- | :------ |
+| `pipeline_id` | varchar  | 255        | This key is generated based on details from the original plugin | PK      |
+| `commit_sha`  | varchar  | 255        | The commit that triggered this pipeline                          | PK      |
+| `branch`      | varchar  | 255        | The branch that triggered this pipeline                          |         |
+| `repo`        | varchar  | 255        |                                                                 |         |
+| `repo_id`     | varchar  | 255        | The repo that this pipeline belongs to                          |         |
+| `repo_url`    | longtext |            |                                                                 |         |
+
+#### cicd_tasks
+
+A cicd_task is a single CI/CD job.
+
+| **field**       | **type**        | **length** | **description**                                                 | **key** |
+| :-------------- | :-------------- | :--------- | :-------------------------------------------------------------- | :------ |
+| `id`            | varchar         | 255        | This key is generated based on details from the original plugin | PK      |
+| `name`          | varchar         | 255        |                                                                 |         |
+| `pipeline_id`   | varchar         | 255        | The id of pipeline                                              |         |
+| `result`        | varchar         | 100        | The result of this task                                         |         |
+| `status`        | varchar         | 100        | The status of this task                                         |         |
+| `type`          | varchar         | 100        | To indicate if this is a DEPLOYMENT                             |         |
+| `duration_sec`  | bigint unsigned |            | How long this task took                                          |         |
+| `started_date`  | datetime        | 3          | When this task started                                           |         |
+| `finished_date` | datetime        | 3          | When this task finished                                          |         |
+| `environment`   | varchar         | 255        | To indicate the environment in which the task is running        |         |
+
+### Project Metric Entities
+
+#### project_pr_metrics 
+
+| **field**          | **type** | **length** | **description**                                                                        | **key** |
+| :----------------- | :------- | :--------- | :------------------------------------------------------------------------------------- | :------ |
+| `id`               | varchar  | 255        | Id of PR                                                                               | PK      |
+| `project_name`     | varchar  | 100        | The project that this PR belongs to                                                    | PK      |
+| `first_review_id`  | longtext |            | The id of the first review on this PR                                                  |         |
+| `first_commit_sha` | longtext |            | The sha of the first commit                                                            |         |
+| `pr_coding_time`   | bigint   |            | The time it takes from the first commit until a PR is issued                           |         |
+| `pr_pickup_time`   | bigint   |            | The time it takes from when a PR is issued until the first comment is added to that PR |         |
+| `pr_review_time`   | bigint   |            | The time it takes to complete a code review of a PR before it gets merged              |         |
+| `deployment_id`    | longtext |            | The id of the cicd_task which deployed the commits of this PR                          |         |
+| `pr_deploy_time`   | bigint   |            | The time it takes from when a PR is merged to when it is deployed                      |         |
+| `pr_cycle_time`    | bigint   |            | The total time from the first commit to when the PR is deployed                        |         |
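+
+For example, the average PR cycle time per project can be read straight from this table (a minimal sketch; interpret the unit of the duration fields according to what your collection jobs stored):
+
+```sql
+-- Hypothetical example: average PR cycle time per project.
+SELECT
+  project_name,
+  AVG(pr_cycle_time) AS avg_pr_cycle_time
+FROM project_pr_metrics
+WHERE pr_cycle_time IS NOT NULL
+GROUP BY project_name;
+```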
+
+#### project_issue_metrics
+
+| **field**       | **type** | **length** | **description**                                  | **key** |
+| :-------------- | :------- | :--------- | :----------------------------------------------- | :------ |
+| `id`            | varchar  | 255        | Id of Issue                                      | PK      |
+| `project_name`  | varchar  | 100        | The project that this Issue belongs to           | PK      |
+| `deployment_id` | longtext |            | The id of the cicd_task which caused an incident |         |
+
+### Cross-Domain Entities
+
+These entities are used to map entities between different domains. They are the key players to break data isolation.
+
+There are low-level entities such as issue_commits and users, and higher-level cross-domain entities such as board_repos.
+
+#### issue_commits
+
+A low-level mapping between the "issue tracking" and "source code management" domains, linking `issues` and `commits`. Issue(n): Commit(n).
+
+The original connection between these two entities lies in either issue tracking tools like Jira or source code management tools like GitLab; a dedicated integration is required to establish it.
+
+For example, a common method to connect a Jira issue and a GitLab commit is the GitLab plugin [Jira Integration](https://docs.gitlab.com/ee/integration/jira/). With this plugin, the Jira issue key in the commit message written by the committers will be parsed. Then, the plugin will add the commit URLs under this Jira issue. Hence, DevLake's [Jira plugin](https://github.com/apache/incubator-devlake/tree/main/plugins/jira) can get the related commits (including repo, commit_id, url) of an issue.
+
+| **field**    | **type** | **length** | **description** | **key**        |
+| :----------- | :------- | :--------- | :-------------- | :------------- |
+| `issue_id`   | varchar  | 255        | Issue id        | FK_issues.id   |
+| `commit_sha` | char     | 40         | Commit sha      | FK_commits.sha |
+
+#### pull_request_issues
+
+This table shows the issues closed by pull requests. It's a medium-level mapping between the "issue tracking" and "source code management" domains by mapping issues and pull requests. Issue(n): Pull Request(n).
+
+The data is extracted from the body of pull requests conforming to a certain regular expression, which can be defined via `GITHUB_PR_BODY_CLOSE_PATTERN` in the `.env` file.
+
+| **field**             | **type** | **length** | **description**  | **key**             |
+| :-------------------- | :------- | :--------- | :--------------- | :------------------ |
+| `pull_request_id`     | char     | 40         | Pull request id  | FK_pull_requests.id |
+| `issue_id`            | varchar  | 255        | Issue id         | FK_issues.id        |
+| `pull_request_number` | varchar  | 255        | Pull request key |                     |
+| `issue_number`        | varchar  | 255        | Issue key        |                     |
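+
+For instance, the number of issues closed by pull requests per repo can be sketched as follows (a minimal sketch):
+
+```sql
+-- Hypothetical example: distinct issues closed by PRs, per base repo.
+SELECT
+  pr.base_repo_id,
+  COUNT(DISTINCT pri.issue_id) AS issues_closed_by_prs
+FROM pull_request_issues pri
+JOIN pull_requests pr ON pr.id = pri.pull_request_id
+GROUP BY pr.base_repo_id;
+```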
+
+#### board_repos (Deprecated)
+
+A way to link the "issue tracking" and "source code management" domains by mapping `boards` and `repos`. Board(n): Repo(n).
+
+| **field**  | **type** | **length** | **description** | **key**      |
+| :--------- | :------- | :--------- | :-------------- | :----------- |
+| `board_id` | varchar  | 255        | Board id        | FK_boards.id |
+| `repo_id`  | varchar  | 255        | Repo id         | FK_repos.id  |
+
+#### accounts
+
+This table stores the user accounts across different tools such as GitHub, Jira, GitLab, etc. It can be joined to get the metadata of all accounts and to calculate contributor metrics such as _'No. of issues closed by contributor'_ and _'No. of commits by contributor'_ (see the example query after the table).
+
+| **field**      | **type** | **length** | **description**                                                                                                                                                                                                                                                              | **key** |
+| :------------- | :------- | :--------- | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :------ |
+| `id`           | varchar  | 255        | An account's `id` is the identifier of the account of a specific tool. It is composed of "< Plugin >:< Entity >:< PK0 >[:PK1]..."<br/>For example, a Github account's id is composed of "< github >:< GithubAccounts >:< GithubUserId >)". Eg. 'github:GithubUsers:14050754' | PK      |
+| `email`        | varchar  | 255        | Email of the account                                                                                                                                                                                                                                                         |         |
+| `full_name`    | varchar  | 255        | Full name                                                                                                                                                                                                                                                                    |         |
+| `user_name`    | varchar  | 255        | Username, nickname or Github login of an account                                                                                                                                                                                                                             |         |
+| `avatar_url`   | varchar  | 255        |                                                                                                                                                                                                                                                                              |         |
+| `organization` | varchar  | 255        | User's organization(s)                                                                                                                                                                                                                                                       |         |
+| `created_date` | datetime | 3          | User creation time                                                                                                                                                                                                                                                           |         |
+| `status`       | int      |            | 0: default, the user is active. 1: the user is not active                                                                                                                                                                                                                    |         |
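+
+For example, _'No. of commits by contributor'_ can be computed by joining `commits` with this table (a minimal sketch):
+
+```sql
+-- Hypothetical example: commit count per contributor account.
+SELECT
+  a.user_name,
+  COUNT(c.sha) AS commit_count
+FROM commits c
+JOIN accounts a ON a.id = c.author_id
+GROUP BY a.user_name
+ORDER BY commit_count DESC;
+```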
+
+#### users
+
+| **field** | **type** | **length** | **description**               | **key** |
+| --------- | -------- | ---------- | ----------------------------- | ------- |
+| `id`      | varchar  | 255        | id of a person                | PK      |
+| `email`   | varchar  | 255        | the primary email of a person |         |
+| `name`    | varchar  | 255        | name of a person              |         |
+
+#### user_accounts
+
+| **field**    | **type** | **length** | **description** | **key**          |
+| ------------ | -------- | ---------- | --------------- | ---------------- |
+| `user_id`    | varchar  | 255        | users.id        | Composite PK, FK |
+| `account_id` | varchar  | 255        | accounts.id     | Composite PK, FK |
+
+#### teams
+
+| **field**       | **type** | **length** | **description**                                    | **key** |
+| --------------- | -------- | ---------- | -------------------------------------------------- | ------- |
+| `id`            | varchar  | 255        | id from the data sources, decided by DevLake users | PK      |
+| `name`          | varchar  | 255        | name of the team. Eg. team A, team B, etc.         |         |
+| `alias`         | varchar  | 255        | alias or abbreviation of a team                    |         |
+| `parent_id`     | varchar  | 255        | teams.id, default to null                          | FK      |
+| `sorting_index` | int      | 255        | the field to sort team                             |         |
+
+#### team_users
+
+| **field** | **type** | **length** | **description**                                 | **key**          |
+| --------- | -------- | ---------- | ----------------------------------------------- | ---------------- |
+| `team_id` | varchar  | 255        | teams.id                                        | Composite PK, FK |
+| `user_id` | varchar  | 255        | users.id                                        | Composite PK, FK |
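+
+Chaining `teams`, `team_users`, `user_accounts` and `accounts` rolls tool-level activity up to the team level. For example (a minimal sketch):
+
+```sql
+-- Hypothetical example: commit count per team, across all linked tool accounts.
+SELECT
+  t.name AS team_name,
+  COUNT(c.sha) AS commit_count
+FROM commits c
+JOIN accounts a       ON a.id = c.author_id
+JOIN user_accounts ua ON ua.account_id = a.id
+JOIN team_users tu    ON tu.user_id = ua.user_id
+JOIN teams t          ON t.id = tu.team_id
+GROUP BY t.name;
+```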
+
+<br/>
+
+## DWM Entities - (Data Warehouse Middle)
+
+DWM entities are light aggregations and computations on top of DWD entities, storing more organized details or mid-level metrics.
+
+#### refs_issues_diffs
+
+This table shows the issues fixed by commits added in a new ref compared to an old one. The data is computed from [table.commits_diffs](#commits_diffs), [table.finished_commits_diffs](#finished_commits_diffs), [table.pull_requests](#pull_requests), [table.pull_request_commits](#pull_request_commits), and [table.pull_request_issues](#pull_request_issues).
+
+This table can support tag-based analysis, for instance, '_No. of bugs closed in a tag_'.
+
+| **field**            | **type** | **length** | **description**                                        | **key**      |
+| :------------------- | :------- | :--------- | :----------------------------------------------------- | :----------- |
+| `new_ref_id`         | varchar  | 255        | The new ref's id for comparison                        | FK_refs.id   |
+| `old_ref_id`         | varchar  | 255        | The old ref's id for comparison                        | FK_refs.id   |
+| `new_ref_commit_sha` | char     | 40         | The commit new ref points to at the time of collection |              |
+| `old_ref_commit_sha` | char     | 40         | The commit old ref points to at the time of collection |              |
+| `issue_number`       | varchar  | 255        | Issue number                                           |              |
+| `issue_id`           | varchar  | 255        | Issue id                                               | FK_issues.id |
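+
+For instance, _'No. of bugs closed in a tag'_ can be sketched as follows (a minimal sketch, assuming the `issues` table marks bugs with `type` = 'BUG'):
+
+```sql
+-- Hypothetical example: bugs fixed between each pair of compared refs.
+SELECT
+  d.new_ref_id,
+  COUNT(DISTINCT d.issue_id) AS bugs_fixed
+FROM refs_issues_diffs d
+JOIN issues i ON i.id = d.issue_id
+WHERE i.type = 'BUG'
+GROUP BY d.new_ref_id;
+```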
+
+## Get Domain Layer Models in Developer Mode
+
+When developing a new plugin, you need to refer to domain layer models, as all raw data should be transformed to domain layer data to provide standardized metrics across tools. Please use the following method to access the domain data models.
+
+```golang
+import "github.com/apache/incubator-devlake/models/domainlayer/domaininfo"
+
+// name the result differently from the package to avoid shadowing it
+tablesInfo := domaininfo.GetDomainTablesInfo()
+for _, table := range tablesInfo {
+  // placeholder: replace with your own logic for each domain table
+  _ = table
+}
+```
+
+If you want to learn more about plugin models, please visit [PluginImplementation](https://devlake.apache.org/docs/DeveloperManuals/PluginImplementation)
diff --git a/versioned_docs/version-v0.15/DataModels/RawLayerSchema.md b/versioned_docs/version-v0.15/DataModels/RawLayerSchema.md
new file mode 100644
index 0000000000..e2336e5ca2
--- /dev/null
+++ b/versioned_docs/version-v0.15/DataModels/RawLayerSchema.md
@@ -0,0 +1,29 @@
+---
+title: "Raw Layer Schema"
+description: >
+   Caches raw API responses from data source plugins
+sidebar_position: 3
+---
+
+## Summary
+
+This document describes Apache DevLake's raw layer schema.
+
+Referring to DevLake's [architecture](../Overview/Architecture.md), the raw layer stores the API responses from data sources (DevOps tools) in JSON. This saves developers' time if the raw data is to be transformed differently later on. Please note that communicating with data sources' APIs is usually the most time-consuming step.
+
+
+## Use Cases
+
+1. As a user, you can check raw data tables to verify data quality if you have concerns about the [domain layer data](DevLakeDomainLayerSchema.md).
+2. As a developer, you can customize domain layer schema based on raw data tables via [customize](Plugins/customize.md).
+
+
+## Data Models
+
+Raw layer tables start with a prefix `_raw_`. Each plugin contains multiple raw data tables, and the naming convention of these tables is `_raw_{plugin}_{entity}`. For instance,
+- _raw_jira_issues
+- _raw_jira_boards
+- _raw_jira_board_issues
+- ...
+
+Normally, you do not need to use these tables, unless you have one of the above use cases.
diff --git a/versioned_docs/version-v0.15/DataModels/SystemTables.md b/versioned_docs/version-v0.15/DataModels/SystemTables.md
new file mode 100644
index 0000000000..6fea769add
--- /dev/null
+++ b/versioned_docs/version-v0.15/DataModels/SystemTables.md
@@ -0,0 +1,28 @@
+---
+title: "System Tables"
+description: >
+   Stores DevLake's own entities
+sidebar_position: 4
+---
+
+## Summary
+
+This document describes Apache DevLake's data models of its own entities. These tables are used and managed by the DevLake framework.
+
+
+## Use Cases
+
+1. As a user, you can check `_devlake_blueprints` and `_devlake_pipelines` when data collection via DevLake's blueprint fails.
+2. As a contributor, you can check these tables to debug task concurrency or data migration features.
+
+
+## Data Models
+
+These tables start with a prefix `_devlake`. Unlike raw or tool data tables, DevLake contains only one set of system tables. The naming convention of these tables is `_devlake_{entity}`, such as
+- _devlake_blueprints
+- _devlake_pipelines
+- _devlake_tasks
+- _devlake_subtasks
+- ...
+
+Normally, you do not need to use these tables, unless you have one of the above use cases.
diff --git a/versioned_docs/version-v0.15/DataModels/ToolLayerSchema.md b/versioned_docs/version-v0.15/DataModels/ToolLayerSchema.md
new file mode 100644
index 0000000000..889e1a23c8
--- /dev/null
+++ b/versioned_docs/version-v0.15/DataModels/ToolLayerSchema.md
@@ -0,0 +1,28 @@
+---
+title: "Tool Layer Schema"
+description: >
+   Extract raw data into a relational schema for each specific tool
+sidebar_position: 2
+---
+
+## Summary
+
+This document describes Apache DevLake's tool layer schema.
+
+Referring to DevLake's [architecture](../Overview/Architecture.md), the Tool layer extracts raw data from JSONs into a relational schema that's easier to consume by analytical tasks. Each DevOps tool would have a schema that's tailored to its data structure, hence the name, the Tool layer.
+
+
+## Use Cases
+
+As a user, you can check tool data tables to verify data quality if you have concerns about the [domain layer data](DevLakeDomainLayerSchema.md).
+
+
+## Data Models
+
+Tool layer tables start with a prefix `_tool_`. Each plugin contains multiple tool data tables, and the naming convention of these tables is `_tool_{plugin}_{entity}`. For instance,
+- _tool_jira_issues
+- _tool_jira_boards
+- _tool_jira_board_issues
+- ...
+
+Normally, you do not need to use tool layer tables, unless you have one of the above use cases.
diff --git a/versioned_docs/version-v0.15/DataModels/_category_.json b/versioned_docs/version-v0.15/DataModels/_category_.json
new file mode 100644
index 0000000000..ae28c626ea
--- /dev/null
+++ b/versioned_docs/version-v0.15/DataModels/_category_.json
@@ -0,0 +1,8 @@
+{
+  "label": "Data Models",
+  "position": 6,
+  "link":{
+    "type": "generated-index",
+    "slug": "DataModels"
+  }
+}
diff --git a/versioned_docs/version-v0.15/DeveloperManuals/DBMigration.md b/versioned_docs/version-v0.15/DeveloperManuals/DBMigration.md
new file mode 100644
index 0000000000..d12cd68250
--- /dev/null
+++ b/versioned_docs/version-v0.15/DeveloperManuals/DBMigration.md
@@ -0,0 +1,90 @@
+---
+title: "DB Migration"
+description: >
+  DB Migration
+sidebar_position: 3
+---
+
+## Summary
+Starting in v0.10.0, DevLake provides a lightweight migration tool for executing migration scripts.
+Both the framework and the plugins can define their migration scripts in their own migration folder.
+The migration scripts are written with gorm in Golang to support different SQL dialects.
+
+
+## Migration Scripts
+The migration scripts describe how to do database migration and implement the `MigrationScript` interface.
+When DevLake starts, the scripts register themselves to the framework by invoking the `Register` function.
+The method `Up` contains the steps of migration.
+
+```go
+type MigrationScript interface {
+	// this function will contain the business logic of the migration (e.g. DDL logic)
+	Up(basicRes BasicRes) errors.Error
+	// the version number of the migration, typically in date format (YYYYMMDDHHMMSS), e.g. 20220728000001; migrations are executed sequentially based on this number
+	Version() uint64
+	// the name of this migration
+	Name() string
+}
+```
+
+## The Migration Model
+
+For each migration, we define a "snapshot" data model of the model that we wish to perform the migration on.
+The fields on this model shall be identical to the actual model, but unlike the actual one, this one will
+never change in the future. The naming convention of these models is `<ModelName>YYYYMMDD`; they must implement
+the `func TableName() string` method and are consumed by the `Script::Up` method.
+
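+Below is a minimal sketch of such a script. The model, table, and version values are illustrative assumptions; `migrationhelper.AutoMigrateTables` is one of the framework helpers used by existing scripts:
+
+```go
+// snapshot model: a frozen copy of the fields being migrated, named <ModelName>YYYYMMDD
+type user20221018 struct {
+	Name string `gorm:"type:varchar(255)"`
+}
+
+// TableName pins the snapshot model to the real table
+func (user20221018) TableName() string {
+	return "users"
+}
+
+type changeUserNameType struct{}
+
+// Up creates/updates the table according to the snapshot model
+func (*changeUserNameType) Up(basicRes core.BasicRes) errors.Error {
+	return migrationhelper.AutoMigrateTables(basicRes, &user20221018{})
+}
+
+func (*changeUserNameType) Version() uint64 {
+	return 20221018000001
+}
+
+func (*changeUserNameType) Name() string {
+	return "change type of users.name"
+}
+```
+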
+## Table `migration_history`
+
+The table tracks the execution of migration scripts and schema changes, from which DevLake can figure out the current state of database schemas.
+
+## Execution
+
+Each plugin has a `migrationscripts` subpackage that lists all the migrations to be executed for that plugin. You
+will need to add your migration to that list for the framework to pick it up. Similarly, there is a package
+for the framework-only migrations defined under the `models` package.
+
+
+## How It Works
+1. Check the `migration_history` table and determine all the migration scripts that need to be executed.
+2. Sort scripts by `Version` and `Name` in ascending order. Please do NOT change these two values for a script after release for any reason; otherwise, users may fail to upgrade due to duplicated execution.
+3. Execute the scripts.
+4. Save the results in the `migration_history` table.
+
+
+## Best Practices
+
+When you write a new migration script, please pay attention to fault tolerance and side effects. Ideally, a failed script can be safely retried in case something goes wrong during the migration. For this purpose, the migration scripts should be well-designed. For example, if you have created a temporary table in the `Up` method, it should be dropped before exiting, regardless of success or failure.
+
+Suppose we want to change the type of the primary key `name` of table `users` from `int` to `varchar(255)`:
+
+1. Rename `users` to `users_20221018` (stop if error, otherwise define a `defer` to rename back on error)
+2. Create new `users` (stop if error, otherwise define a `defer` to drop the table on error)
+3. Convert data from `users_20221018` to `users` (stop if error)
+4. Drop table `users_20221018`
+
+With these steps, the `defer` functions are executed in reverse order if any error occurs during the migration, so the database rolls back to its original state in most cases. A sketch of this pattern follows.
+
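+Here is a sketch of steps 1-4 with `defer`-based rollback. It assumes `basicRes.GetDal()` returns the Dal handle, reuses the illustrative `user20221018` snapshot model from the sketch above, uses raw SQL for brevity, and wraps plain errors with `errors.Convert`:
+
+```go
+type changeUserPK struct{}
+
+func (*changeUserPK) Up(basicRes core.BasicRes) (errs errors.Error) {
+	db := basicRes.GetDal()
+	// step 1: rename the original table away
+	if err := db.Exec("ALTER TABLE users RENAME TO users_20221018"); err != nil {
+		return errors.Convert(err)
+	}
+	defer func() {
+		if errs != nil {
+			// on failure, restore the original table name
+			_ = db.Exec("ALTER TABLE users_20221018 RENAME TO users")
+		}
+	}()
+	// step 2: create the new `users` table from the snapshot model
+	if err := db.AutoMigrate(&user20221018{}); err != nil {
+		return errors.Convert(err)
+	}
+	defer func() {
+		if errs != nil {
+			// on failure, drop the half-built table
+			_ = db.Exec("DROP TABLE users")
+		}
+	}()
+	// step 3: copy and convert the data
+	if err := db.Exec("INSERT INTO users SELECT * FROM users_20221018"); err != nil {
+		return errors.Convert(err)
+	}
+	// step 4: drop the backup table
+	return errors.Convert(db.Exec("DROP TABLE users_20221018"))
+}
+```
+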
+However, you don't necessarily have to deal with all of this yourself. We have summarized some of the most useful code examples for you to follow:
+
+- [Create new tables](https://github.com/apache/incubator-devlake/blob/main/models/migrationscripts/20220406_add_frame_tables.go)
+- [Rename column](https://github.com/apache/incubator-devlake/blob/main/models/migrationscripts/20220505_rename_pipeline_step_to_stage.go)
+- [Add columns with default value](https://github.com/apache/incubator-devlake/blob/main/models/migrationscripts/20220616_add_blueprint_mode.go)
+- [Change the values(or type) of Primary Key](https://github.com/apache/incubator-devlake/blob/main/models/migrationscripts/20220913_fix_commitfile_id_toolong.go)
+- [Change the values(or type) of Column](https://github.com/apache/incubator-devlake/blob/main/models/migrationscripts/20220903_encrypt_blueprint.go)
+
+The above examples should cover most of the scenarios you may encounter. If you come across other scenarios, feel free to create issues in our GitHub Issue Tracker for discussions. 
+
+
+In order to help others understand the script you have written, there are a couple of rules we suggest following:
+
+- Name your script in a meaningful way. For instance, `renamePipelineStepToStage` is more descriptive than `modifyPipelines`.
+- The script should keep only the targeted `fields` you are attempting to operate on, except when using `migrationhelper.Transform`, which is a full-table transformation that requires the full table definition. If this is the case, add a comment at the end of the fields to indicate which ones are the targets.
+- Add comments to the script when the operation is too complicated to be expressed in plain code.
+
+Other rules to follow when writing a migration script:
+
+- The migration script should only use the interfaces and packages offered by the framework like `core`, `errors` and `migrationhelper`. Do NOT import `gorm` or package from `plugin` directly.
+- The name of a `model struct` defined in your script should be suffixed with the `Version` of the script to distinguish it from other scripts in the same package and keep it self-contained, e.g. `tasks20221018`. Do NOT refer to `struct`s defined in other scripts.
+- All script and model names should be `camelCase` to avoid accidental references from other packages.
+
diff --git a/versioned_docs/version-v0.15/DeveloperManuals/Dal.md b/versioned_docs/version-v0.15/DeveloperManuals/Dal.md
new file mode 100644
index 0000000000..3e1d397e5e
--- /dev/null
+++ b/versioned_docs/version-v0.15/DeveloperManuals/Dal.md
@@ -0,0 +1,173 @@
+---
+title: "Dal"
+sidebar_position: 5
+description: >
+  The Dal (Data Access Layer) is designed to decouple the hard dependency on `gorm` in v0.12
+---
+
+## Summary
+
+The Dal (Data Access Layer) is designed to decouple the hard dependency on `gorm` in v0.12.  The advantages of introducing this isolation are:
+
+ - Unit Test: mocking an interface is easier and more reliable than patching a pointer.
+ - Clean Code: DB operations are more consistent than using `gorm` directly.
+ - Replaceable: it would be easier to replace `gorm` in the future if needed.
+
+## The Dal Interface
+
+```go
+type Dal interface {
+	AutoMigrate(entity interface{}, clauses ...Clause) error
+	Exec(query string, params ...interface{}) error
+	RawCursor(query string, params ...interface{}) (*sql.Rows, error)
+	Cursor(clauses ...Clause) (*sql.Rows, error)
+	Fetch(cursor *sql.Rows, dst interface{}) error
+	All(dst interface{}, clauses ...Clause) error
+	First(dst interface{}, clauses ...Clause) error
+	Count(clauses ...Clause) (int64, error)
+	Pluck(column string, dest interface{}, clauses ...Clause) error
+	Create(entity interface{}, clauses ...Clause) error
+	Update(entity interface{}, clauses ...Clause) error
+	CreateOrUpdate(entity interface{}, clauses ...Clause) error
+	CreateIfNotExist(entity interface{}, clauses ...Clause) error
+	Delete(entity interface{}, clauses ...Clause) error
+	AllTables() ([]string, error)
+}
+```
+
+
+## How to use
+
+### Query
+```go
+// Get a database cursor
+user := &models.User{}
+cursor, err := db.Cursor(
+  dal.From(user),
+  dal.Where("department = ?", "R&D"),
+  dal.Orderby("id DESC"),
+)
+if err != nil {
+  return err
+}
+for cursor.Next() {
+  err = db.Fetch(cursor, user)  // fetch one record at a time
+  ...
+}
+
+// Get a database cursor by raw sql query
+cursor, err := db.RawCursor("SELECT * FROM users")
+
+// USE WITH CAUTION: loading a big table at once is slow and dangerous
+// Load all records from the database at once.
+users := make([]models.Users, 0)
+err := db.All(&users, dal.Where("department = ?", "R&D"))
+
+// Load a column as a scalar or a slice
+var email string
+err := db.Pluck("email", &email, dal.Where("id = ?", 1))
+var emails []string
+err := db.Pluck("email", &emails)
+
+// Execute query
+err := db.Exec("UPDATE users SET department = ? WHERE department = ?", "Research & Development", "R&D")
+```
+
+### Insert
+```go
+err := db.Create(&models.User{
+  Email: "hello@example.com", // assuming this the Primarykey
+  Name: "hello",
+  Department: "R&D",
+})
+```
+
+### Update
+```go
+err := db.Update(&models.User{
+  Email: "hello@example.com", // assuming this is the primary key
+  Name: "hello",
+  Department: "R&D",
+})
+```
+### Insert or Update
+```go
+err := db.CreateOrUpdate(&models.User{
+  Email: "hello@example.com",  // assuming this is the primary key
+  Name: "hello",
+  Department: "R&D",
+})
+```
+
+### Insert if record(by PrimaryKey) didn't exist
+```go
+err := db.CreateIfNotExist(&models.User{
+  Email: "hello@example.com",  // assuming this is the primary key
+  Name: "hello",
+  Department: "R&D",
+})
+```
+
+### Delete
+```go
+err := db.Delete(&models.User{
+  Email: "hello@example.com",  // assuming this is the primary key
+})
+```
+
+### DDL and others
+```go
+// Returns all table names
+allTables, err := db.AllTables()
+
+// Automigrate: create/add missing table/columns
+// Note: it won't delete any existing columns, nor does it update the column definition
+err := db.AutoMigrate(&models.User{})
+```
+
+## How to do Unit Test
+First, run the command `make mock` to generate the mocking stubs; the generated source files should appear in the `mocks` folder.
+```
+mocks
+├── ApiResourceHandler.go
+├── AsyncResponseHandler.go
+├── BasicRes.go
+├── CloseablePluginTask.go
+├── ConfigGetter.go
+├── Dal.go
+├── DataConvertHandler.go
+├── ExecContext.go
+├── InjectConfigGetter.go
+├── InjectLogger.go
+├── Iterator.go
+├── Logger.go
+├── Migratable.go
+├── PluginApi.go
+├── PluginBlueprintV100.go
+├── PluginInit.go
+├── PluginMeta.go
+├── PluginTask.go
+├── RateLimitedApiClient.go
+├── SubTaskContext.go
+├── SubTaskEntryPoint.go
+├── SubTask.go
+└── TaskContext.go
+```
+With these mocking stubs, you may start writing your test cases using `mocks.Dal`.
+```go
+import "github.com/apache/incubator-devlake/mocks"
+
+func TestCreateUser(t *testing.T) {
+    mockDal := new(mocks.Dal)
+    mockDal.On("Create", mock.Anything, mock.Anything).Return(nil).Once()
+    userService := &services.UserService{
+        Dal: mockDal,
+    }
+    userService.Post(map[string]interface{}{
+        "email": "helle@example.com",
+        "name": "hello",
+        "department": "R&D",
+    })
+    mockDal.AssertExpectations(t)
+```
+
diff --git a/versioned_docs/version-v0.15/DeveloperManuals/DeveloperSetup.md b/versioned_docs/version-v0.15/DeveloperManuals/DeveloperSetup.md
new file mode 100644
index 0000000000..064af5109d
--- /dev/null
+++ b/versioned_docs/version-v0.15/DeveloperManuals/DeveloperSetup.md
@@ -0,0 +1,126 @@
+---
+title: "Developer Setup"
+description: >
+  The steps to install DevLake in developer mode.
+sidebar_position: 1
+---
+
+
+## Requirements
+
+- <a href="https://docs.docker.com/get-docker" target="_blank">Docker v19.03.10+</a>
+- <a href="https://golang.org/doc/install" target="_blank">Golang v1.19+</a>
+- <a href="https://www.gnu.org/software/make/" target="_blank">GNU Make</a>
+  - Mac (Preinstalled)
+  - Windows: [Download](http://gnuwin32.sourceforge.net/packages/make.htm)
+  - Ubuntu: `sudo apt-get install build-essential libssl-dev`
+
+## How to setup dev environment
+
+The following guide will walk through how to run DevLake's frontend (`config-ui`) and backend in dev mode.
+
+
+1. Navigate to where you would like to install this project and clone the repository:
+
+   ```sh
+   git clone https://github.com/apache/incubator-devlake
+   cd incubator-devlake
+   ```
+
+2. Install dependencies for plugins:
+
+   - [RefDiff](../Plugins/refdiff.md#development)
+
+3. Install Go packages
+
+    ```sh
+    go get
+    ```
+
+4. Copy the sample config file to new local file:
+
+    ```sh
+    cp .env.example .env
+    ```
+
+5. Update the following variables in the file `.env`:
+
+    * `DB_URL`: Replace `mysql:3306` with `127.0.0.1:3306`
+
+6. Start the MySQL and Grafana containers:
+
+    > Make sure the Docker daemon is running before this step.
+
+    ```sh
+    docker-compose up -d mysql grafana
+    ```
+
+7. Run `devlake` and `config-ui` in dev mode in two separate terminals:
+
+    ```sh
+    # run devlake
+    make dev
+    # run config-ui
+    make configure-dev
+    ```
+
+    For common errors, please see [Troubleshooting](#troubleshooting).
+
+8.  Config UI is running at `localhost:4000`
+    - For how to use Config UI, please refer to our [tutorial](../UserManuals/ConfigUI/Tutorial.md)
+
+## Running Tests
+
+```sh
+# install mockery
+go install github.com/vektra/mockery/v2@latest
+# generate mocking stubs
+make mock
+# run tests
+make test
+```
+
+## DB migrations
+
+Please refer to the [Migration Doc](../DeveloperManuals/DBMigration.md).
+
+## Using DevLake API
+
+All DevLake APIs (core service + plugin API) are documented with swagger. To see the API docs live with swagger:
+
+- Install [swag](https://github.com/swaggo/swag).
+- Run `make swag` to generate the swagger documentation.
+- Visit `http://localhost:8080/swagger/index.html` while `devlake` is running.
+
+
+## Developing dashboards
+
+To access Grafana, click the *View Dashboards* button in the top left corner of Config UI, or visit `localhost:3002` (username: `admin`, password: `admin`).
+
+For provisioning, customizing, and creating dashboards, please refer to our [Grafana Doc](../UserManuals/Dashboards/GrafanaUserGuide.md).
+
+
+## Troubleshooting
+
+
+Q: Running `make dev` yields the error: `libgit2.so.1.3: cannot open shared object file: No such file or directory`
+
+A: `libgit2.so.1.3` is required by the gitextractor plugin. Make sure your program can find `libgit2.so.1.3`. `LD_LIBRARY_PATH` can be assigned like this if your `libgit2.so.1.3` is located at `/usr/local/lib`:
+
+```sh
+export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib
+```
+
+Note that the version has to be pinned to 1.3.0. If you don't have it, you may need to build it manually with CMake from [source](https://github.com/libgit2/libgit2/releases/tag/v1.3.0).
+
+
+## Compiling
+
+- Compile all plugins: `make build-plugin`
+- Compile specific plugins: `PLUGIN=<PLUGIN_NAME> make build-plugin`
+- Compile the server: `make build`
+- Compile the worker: `make build-worker`
+
+## References
+
+To dig deeper into developing and utilizing our built-in functions and have a better developer experience, feel free to dive into our [godoc](https://pkg.go.dev/github.com/apache/incubator-devlake) reference.
diff --git a/versioned_docs/version-v0.15/DeveloperManuals/E2E-Test-Guide.md b/versioned_docs/version-v0.15/DeveloperManuals/E2E-Test-Guide.md
new file mode 100644
index 0000000000..1156e4cd24
--- /dev/null
+++ b/versioned_docs/version-v0.15/DeveloperManuals/E2E-Test-Guide.md
@@ -0,0 +1,211 @@
+---
+title: "E2E Test Guide"
+description: >
+  The steps to write E2E tests for plugins.
+---
+
+# How to write E2E tests for plugins
+
+## Why write E2E tests
+
+E2E testing, as a part of automated testing, generally refers to black-box testing at the file and module level, or unit testing that allows the use of some external services such as databases. The purpose of writing E2E tests is to shield some internal implementation logic and see whether the same external input yields the same output in terms of data. In addition, compared to black-box integration tests, it can avoid some flaky problems caused by the network and other factors.
+In DevLake, E2E testing consists of interface testing and input/output result validation for the plugin Extract/Convert subtasks. This article only describes the process of writing the latter. As the Collectors invoke external
+services, we typically do not write E2E tests for them.
+
+## Preparing data
+
+Let's take a simple plugin - Feishu Meeting Hours Collection as an example here. Its directory structure looks like this.
+![image](https://user-images.githubusercontent.com/3294100/175061114-53404aac-16ca-45d1-a0ab-3f61d84922ca.png)
+Next, we will write the E2E tests of the sub-tasks.
+
+The first step in writing the E2E test is to run the Collect task of the corresponding plugin to complete the data collection; that is, to have the corresponding data saved in the table starting with `_raw_feishu_` in the database.
+This data will be presumed to be the "source of truth" for our tests. Here are the logs and database tables using the DirectRun (cmd) run method.
+```
+$ go run plugins/feishu/main.go --numOfDaysToCollect 2 --connectionId 1 (Note: command may change with version upgrade)
+[2022-06-22 23:03:29] INFO failed to create dir logs: mkdir logs: file exists
+press `c` to send cancel signal
+[2022-06-22 23:03:29]  INFO  [feishu] start plugin
+[2022-06-22 23:03:33]  INFO  [feishu] scheduler for api https://open.feishu.cn/open-apis/vc/v1 worker: 13, request: 10000, duration: 1h0m0s
+[2022-06-22 23:03:33]  INFO  [feishu] total step: 2
+[2022-06-22 23:03:33]  INFO  [feishu] executing subtask collectMeetingTopUserItem
+[2022-06-22 23:03:33]  INFO  [feishu] [collectMeetingTopUserItem] start api collection
+[2022-06-22 23:03:34]  INFO  [feishu] [collectMeetingTopUserItem] finished records: 1
+[2022-06-22 23:03:34]  INFO  [feishu] [collectMeetingTopUserItem] end api collection error: %!w(<nil>)
+[2022-06-22 23:03:34]  INFO  [feishu] finished step: 1 / 2
+[2022-06-22 23:03:34]  INFO  [feishu] executing subtask extractMeetingTopUserItem
+[2022-06-22 23:03:34]  INFO  [feishu] [extractMeetingTopUserItem] get data from _raw_feishu_meeting_top_user_item where params={"connectionId":1} and got 148
+[2022-06-22 23:03:34]  INFO  [feishu] [extractMeetingTopUserItem] finished records: 1
+[2022-06-22 23:03:34]  INFO  [feishu] finished step: 2 / 2
+```
+
+<img width="993" alt="image" src="https://user-images.githubusercontent.com/3294100/175064505-bc2f98d6-3f2e-4ccf-be68-a1cab1e46401.png"/>
+Ok, the data has now been saved to the `_raw_feishu_*` tables, and the `data` column holds the return information from the plugin. Here we only collected data for the last 2 days. There is not much data, but it covers a variety of situations: the same person has data on different days.
+
+It is also worth mentioning that the plugin runs two tasks, `collectMeetingTopUserItem` and `extractMeetingTopUserItem`. The former is the collecting task, which is the one we need to run this time; the latter is the data extraction task. Whether the extractor runs during this data-preparation session doesn't matter.
+
+Next, we need to export the data to .csv format. This step can be done in a variety of different ways - you can show your skills. I will only introduce a few common methods here.
+
+### DevLake Code Generator Export
+
+Run `go run generator/main.go create-e2e-raw` directly and follow the guidelines to complete the export. This solution is the simplest, but has some limitations, such as the exported fields being fixed. You can refer to the next solutions if you need more customisation options.
+
+![usage](https://user-images.githubusercontent.com/3294100/175849225-12af5251-6181-4cd9-ba72-26087b05ee73.gif)
+
+### GoLand Database export
+
+![image](https://user-images.githubusercontent.com/3294100/175067303-7e5e1c4d-2430-4eb5-ad00-e38d86bbd108.png)
+
+This solution is very easy to use and will not cause problems using Postgres or MySQL.
+![image](https://user-images.githubusercontent.com/3294100/175068178-f1c1c290-e043-4672-b43e-54c4b954c685.png)
+The success criterion for csv export is that the Go program can read it without errors, so several points are worth noting:
+
+1. The values in the csv file should be wrapped in double quotes to avoid special symbols, such as commas in the values, breaking the csv format.
+2. Double quotes in csv files are escaped; generally `""` represents one double quote.
+3. Pay attention to whether the column `data` contains the actual value, not a base64- or hex-encoded one.
+
+After exporting, move the .csv file to `plugins/feishu/e2e/raw_tables/_raw_feishu_meeting_top_user_item.csv`.
+
+### MySQL Select Into Outfile
+
+This is MySQL's solution for exporting query results to a file. The MySQL instance currently started by docker-compose.yml comes with the --security parameter, so it does not allow `select ... into outfile`. The first step is to turn off the security parameter, roughly as follows.
+![origin_img_v2_c809c901-01bc-4ec9-b52a-ab4df24c376g](https://user-images.githubusercontent.com/3294100/175070770-9b7d5b75-574b-49ed-9bca-e9f611f60795.jpg)
+After turning it off, use `select ... into outfile` to export the csv file. The export result looks roughly as follows.
+![origin_img_v2_ccfdb260-668f-42b4-b249-6c2dd45816ag](https://user-images.githubusercontent.com/3294100/175070866-2204ae13-c058-4a16-bc20-93ab7c95f832.jpg)
+Notice that the `data` field contains extra hex-encoded values, which need to be manually converted to literal values.
+
+### Vscode Database
+
+This is Vscode's solution for exporting query results to a file, but it is not easy to use. Here is the export result without any configuration changes
+![origin_img_v2_c9eaadaa-afbc-4c06-85bc-e78235f7eb3g](https://user-images.githubusercontent.com/3294100/175071987-760c2537-240c-4314-bbd6-1a0cd85ddc0f.jpg)
+However, it is obvious that the escape symbol does not conform to the csv specification, and the data is not successfully exported. After adjusting the configuration and manually replacing `\"` with `""`, we get the following result.
+![image](https://user-images.githubusercontent.com/3294100/175072314-954c6794-3ebd-45bb-98e7-60ddbb5a7da9.png)
+The `data` field of this file is encoded in base64, so it needs to be decoded manually into literal values before use.
+
+### MySQL workbench
+
+With this tool, you must write the SQL yourself to complete the data export; you can imitate the following SQL.
+```sql
+SELECT id, params, CAST(`data` as char) as data, url, input, created_at FROM _raw_feishu_meeting_top_user_item;
+```
+![image](https://user-images.githubusercontent.com/3294100/175080866-1631a601-cbe6-40c0-9d3a-d23ca3322a50.png)
+Select csv as the save format and export it for use.
+
+### Postgres Copy with csv header
+
+`Copy(SQL statement) to '/var/lib/postgresql/data/raw.csv' with csv header;` is a common export method for PG to export csv, which can also be used here.
+```sql
+COPY (
+SELECT id, params, convert_from(data, 'utf-8') as data, url, input, created_at FROM _raw_feishu_meeting_top_user_item
+) to '/var/lib/postgresql/data/raw.csv' with csv header;
+```
+Use the above statement to complete the export of the file. If pg runs in docker, just use the command `docker cp` to export the file to the host.
+
+## Writing E2E tests
+
+First, create a test environment. For example, let's create `meeting_test.go`.
+![image](https://user-images.githubusercontent.com/3294100/175091380-424974b9-15f3-457b-af5c-03d3b5d17e73.png)
+Then enter the test preparation code as follows. The code creates an instance of the `feishu` plugin and then calls `ImportCsvIntoRawTable` to import the data from the csv file into the `_raw_feishu_meeting_top_user_item` table.
+
+```go
+func TestMeetingDataFlow(t *testing.T) {
+	var plugin impl.Feishu
+	dataflowTester := e2ehelper.NewDataFlowTester(t, "feishu", plugin)
+
+	// import raw data table
+	dataflowTester.ImportCsvIntoRawTable("./raw_tables/_raw_feishu_meeting_top_user_item.csv", "_raw_feishu_meeting_top_user_item")
+}
+```
+The signature of the import function is as follows.
+```func (t *DataFlowTester) ImportCsvIntoRawTable(csvRelPath string, rawTableName string)```
+It has a twin, with only slight differences in parameters.
+```func (t *DataFlowTester) ImportCsvIntoTabler(csvRelPath string, dst schema.Tabler)```
+The former is used to import tables in the raw layer. The latter is used to import arbitrary tables.
+**Note:** These two functions will delete the db table and use `gorm.AutoMigrate` to re-create a new table, clearing any data in it.
+After the data import is complete, run this tester; it must PASS even though it has no test logic at this moment. Then write the logic for calling the extractor task in `TestMeetingDataFlow`.
+
+```go
+func TestMeetingDataFlow(t *testing.T) {
+	var plugin impl.Feishu
+	dataflowTester := e2ehelper.NewDataFlowTester(t, "feishu", plugin)
+
+	taskData := &tasks.FeishuTaskData{
+		Options: &tasks.FeishuOptions{
+			ConnectionId: 1,
+		},
+	}
+
+	// import raw data table
+	dataflowTester.ImportCsvIntoRawTable("./raw_tables/_raw_feishu_meeting_top_user_item.csv", "_raw_feishu_meeting_top_user_item")
+
+	// verify extraction
+	dataflowTester.FlushTabler(&models.FeishuMeetingTopUserItem{})
+	dataflowTester.Subtask(tasks.ExtractMeetingTopUserItemMeta, taskData)
+
+}
+```
+The added code includes a call to `dataflowTester.FlushTabler` to clear the table `_tool_feishu_meeting_top_user_items` and a call to `dataflowTester.Subtask` to simulate the running of the subtask `ExtractMeetingTopUserItemMeta`.
+
+Now run it and see if the subtask `ExtractMeetingTopUserItemMeta` completes without errors. The data for the `extract` run generally comes from the raw table, so the plugin subtask will run correctly if written without errors. We can then check whether the data was successfully parsed into the tool-layer db table; in this case, the `_tool_feishu_meeting_top_user_items` table should contain the correct data.
+
+If the run fails, you may want to troubleshoot the plugin itself before moving on to the next step.
+
+## Verify that the results of the task are correct
+
+Let's continue writing the test and add the following code at the end of the test function
+```go
+func TestMeetingDataFlow(t *testing.T) {
+    ......
+    
+    dataflowTester.VerifyTable(
+      models.FeishuMeetingTopUserItem{},
+      "./snapshot_tables/_tool_feishu_meeting_top_user_items.csv",
+      []string{
+        "meeting_count",
+        "meeting_duration",
+        "user_type",
+        "_raw_data_params",
+        "_raw_data_table",
+        "_raw_data_id",
+        "_raw_data_remark",
+      },
+    )
+}
+```
+Its purpose is to call `dataflowTester.VerifyTable` to complete the validation of the data results. The third parameter lists all the fields of the table that need to be verified.
+The data used for validation lives in `./snapshot_tables/_tool_feishu_meeting_top_user_items.csv`, but of course, this file does not exist yet.
+
+There is a twin, more generalized function, that could be used instead:
+```go
+dataflowTester.VerifyTableWithOptions(
+    models.FeishuMeetingTopUserItem{},
+    e2ehelper.TableOptions{
+        CSVRelPath: "./snapshot_tables/_tool_feishu_meeting_top_user_items.csv",
+    },
+)
+
+```
+The above usage defaults to validating against all fields of the `models.FeishuMeetingTopUserItem` model. There are additional fields on `TableOptions` that can be specified to limit which fields of that model to validate.
+
+To facilitate the generation of the file mentioned above, DevLake has adopted a testing technique called `Snapshot`, which will automatically generate the file based on the run results when the `VerifyTable` or `VerifyTableWithOptions` functions are called without the csv existing.
+
+But note! Please do two things after the snapshot is created: 1. check that the file is generated correctly; 2. re-run the test to make sure there are no differences between the generated results and the re-run results.
+These two operations are critical and directly related to the quality of test writing. We should treat a snapshot file in `.csv` format like a code file.
+
+If there is a problem with this step, there are usually 2 ways to solve it:
+1. The validated fields contain values that change between runs, such as `created_at` timestamps or self-incrementing ids; these cannot be validated repeatedly and should be excluded.
+2. There are `\n`, `\r\n`, or other escape-mismatched fields in the run results. Generally, when this happens while parsing the `httpResponse`, you can follow these solutions:
+    1. modify the field type of the corresponding content in the api model to `json.RawMessage`
+    2. convert it to a string when parsing
+    3. this way, the `\n` symbols are kept intact, avoiding interpretation of line breaks by the database or the operating system
+
+
+For example, in the `github` plugin, this is how it is handled.
+![image](https://user-images.githubusercontent.com/3294100/175098219-c04b810a-deaf-4958-9295-d5ad4ec152e6.png)
+![image](https://user-images.githubusercontent.com/3294100/175098273-e4a18f9a-51c8-4637-a80c-3901a3c2934e.png)
+
+Well, at this point, the E2E test writing is done. We have added a total of 3 new files to complete the testing of the meeting-hours collection task. It's pretty easy.
+![image](https://user-images.githubusercontent.com/3294100/175098574-ae6c7fb7-7123-4d80-aa85-790b492290ca.png)
+
+## Run E2E tests for all plugins like CI
+
+It's straightforward. Just run `make e2e-plugins`, because DevLake has already turned it into a script~
+
diff --git a/versioned_docs/version-v0.15/DeveloperManuals/Notifications.md b/versioned_docs/version-v0.15/DeveloperManuals/Notifications.md
new file mode 100644
index 0000000000..23456b4f1e
--- /dev/null
+++ b/versioned_docs/version-v0.15/DeveloperManuals/Notifications.md
@@ -0,0 +1,32 @@
+---
+title: "Notifications"
+description: >
+  Notifications
+sidebar_position: 4
+---
+
+## Request
+Example request
+```
+POST /lake/notify?nouce=3-FDXxIootApWxEVtz&sign=424c2f6159bd9e9828924a53f9911059433dc14328a031e91f9802f062b495d5
+
+{"TaskID":39,"PluginName":"jenkins","CreatedAt":"2021-09-30T15:28:00.389+08:00","UpdatedAt":"2021-09-30T15:28:00.785+08:00"}
+```
+
+## Configuration
+If you want to use the notification feature, you should add two configuration keys to the `.env` file.
+```shell
+# .env
+# notification request url, e.g.: http://example.com/lake/notify
+NOTIFICATION_ENDPOINT=
+# secret is used to calculate signature
+NOTIFICATION_SECRET=
+```
+
+## Signature
+You should check the signature before accepting the notification request. We use the sha256 algorithm to calculate the checksum.
+```go
+// calculate checksum
+sum := sha256.Sum256([]byte(requestBody + NOTIFICATION_SECRET + nouce))
+return hex.EncodeToString(sum[:])
+```
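+
+For the receiving side, a minimal verification sketch could look like this; only the checksum comparison is shown, and the handler wiring around it is up to you:
+
+```go
+import (
+	"crypto/hmac"
+	"crypto/sha256"
+	"encoding/hex"
+)
+
+// verifySignature recomputes the checksum from the raw request body, your
+// shared secret, and the `nouce` query parameter, then compares it to the
+// `sign` query parameter in constant time.
+func verifySignature(requestBody, nouce, sign, secret string) bool {
+	sum := sha256.Sum256([]byte(requestBody + secret + nouce))
+	expected := hex.EncodeToString(sum[:])
+	return hmac.Equal([]byte(expected), []byte(sign))
+}
+```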
diff --git a/versioned_docs/version-v0.15/DeveloperManuals/PluginImplementation.md b/versioned_docs/version-v0.15/DeveloperManuals/PluginImplementation.md
new file mode 100644
index 0000000000..644dbdb81c
--- /dev/null
+++ b/versioned_docs/version-v0.15/DeveloperManuals/PluginImplementation.md
@@ -0,0 +1,541 @@
+---
+title: "Plugin Implementation"
+sidebar_position: 2
+description: >
+  Plugin Implementation
+---
+
+If your favorite DevOps tool is not yet supported by DevLake, don't worry. It's not difficult to implement a DevLake plugin. In this post, we'll go through the basics of DevLake plugins and build an example plugin from scratch together.
+
+## What is a plugin?
+
+A DevLake plugin is a shared library built with Go's `plugin` package that hooks up to DevLake core at run-time.
+
+A plugin may extend DevLake's capability in three ways:
+
+1. Integrating with new data sources
+2. Transforming/enriching existing data
+3. Exporting DevLake data to other data systems
+
+## Types of plugins
+
+As of now, there is support for two types of plugins:
+
+1. __*Conventional plugins*__: These are the primary type of plugins used by DevLake. They require the developer to write the most code, covering everything from fetching (collecting) data from data sources to converting it into our normalized data models and storing it.
+2. __*Singer-spec plugins*__: These plugins utilize [Singer-taps](https://www.singer.io/) to retrieve data from data-sources thereby eliminating the developer's burden of writing the collection logic. More on them [here](#how-do-singer-spec-plugins-work).
+
+
+## How do conventional plugins work?
+
+A plugin mainly consists of a collection of subtasks that can be executed by DevLake core. For data source plugins, a subtask may be collecting a single entity from the data source (e.g., issues from Jira). Besides the subtasks, there are hooks that a plugin can implement to customize its initialization, migration, and more. See below for a list of the most important interfaces:
+
+1. [PluginMeta](https://github.com/apache/incubator-devlake/blob/main/plugins/core/plugin_meta.go) contains the minimal interface that a plugin should implement, with only two functions (see the sketch after this list):
+   - Description() returns the description of a plugin
+   - RootPkgPath() returns the root package path of a plugin
+2. [PluginInit](https://github.com/apache/incubator-devlake/blob/main/plugins/core/plugin_init.go) allows a plugin to customize its initialization
+3. [PluginTask](https://github.com/apache/incubator-devlake/blob/main/plugins/core/plugin_task.go) enables a plugin to prepare data prior to subtask execution
+4. [PluginApi](https://github.com/apache/incubator-devlake/blob/main/plugins/core/plugin_api.go) lets a plugin expose some self-defined APIs
+5. [PluginMigration](https://github.com/apache/incubator-devlake/blob/main/plugins/core/plugin_migration.go) is where a plugin manages its database migrations 
+6. [PluginModel](https://github.com/apache/incubator-devlake/blob/main/plugins/core/plugin_model.go) allows other plugins to get the model information of all database tables of the current plugin through the GetTablesInfo() method. If you need to access Domain Layer Models, please visit [DomainLayerSchema](https://devlake.apache.org/docs/DataModels/DevLakeDomainLayerSchema/)
+7. [PluginBlueprint](https://github.com/apache/incubator-devlake/blob/main/plugins/core/plugin_blueprint.go) is the foundation for Blueprint and Plugin to collaborate and generate a reasonable Pipeline Plan based on User Settings. For example, a user may declare that he/she wants to collect data from a GitHub Repo, which implies that not only the issues and PRs, but also the git-meta-data including commits history, branches, tags, etc. need to be collected. To do it and do it faster, lev [...]
+
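+As a minimal example, here is a sketch of satisfying `PluginMeta`; the struct name and return values are illustrative:
+
+```go
+type MyPlugin struct{}
+
+// Description returns a human-readable summary of the plugin
+func (MyPlugin) Description() string {
+	return "collects data from MyTool"
+}
+
+// RootPkgPath tells the framework where the plugin's package lives
+func (MyPlugin) RootPkgPath() string {
+	return "github.com/apache/incubator-devlake/plugins/myplugin"
+}
+```
+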
+The diagram below shows the control flow of executing a plugin:
+
+```mermaid
+flowchart TD;
+    subgraph S4[Step4 sub-task extractor running process];
+    direction LR;
+    D4[DevLake];
+    D4 -- "Step4.1 create a new\n ApiExtractor\n and execute it" --> E["ExtractXXXMeta.\nEntryPoint"];
+    E <-- "Step4.2 read from\n raw table" --> E2["RawDataSubTaskArgs\n.Table"];
+    E -- "Step4.3 call with RawData" --> ApiExtractor.Extract;
+    ApiExtractor.Extract -- "decode and return gorm models" --> E
+    end
+    subgraph S3[Step3 sub-task collector running process]
+    direction LR
+    D3[DevLake]
+    D3 -- "Step3.1 create a new\n ApiCollector\n and execute it" --> C["CollectXXXMeta.\nEntryPoint"];
+    C <-- "Step3.2 create\n raw table" --> C2["RawDataSubTaskArgs\n.RAW_BBB_TABLE"];
+    C <-- "Step3.3 build query\n before sending requests" --> ApiCollectorArgs.\nQuery/UrlTemplate;
+    C <-. "Step3.4 send requests by ApiClient \n and return HTTP response" .-> A1["HTTP APIs"];
+    C <-- "Step3.5 call and \nreturn decoded data \nfrom HTTP response" --> ResponseParser;
+    end
+    subgraph S2[Step2 DevLake register custom plugin]
+    direction LR
+    D2[DevLake]
+    D2 <-- "Step2.1 function \`Init\` \nneed to do init jobs" --> plugin.Init;
+    D2 <-- "Step2.2 (Optional) call \nand return migration scripts" --> plugin.MigrationScripts;
+    D2 <-- "Step2.3 (Optional) call \nand return taskCtx" --> plugin.PrepareTaskData;
+    D2 <-- "Step2.4 call and \nreturn subTasks for execting" --> plugin.SubTaskContext;
+    end
+    subgraph S1[Step1 Run DevLake]
+    direction LR
+    main -- "Transfer of control \nby \`runner.DirectRun\`" --> D1[DevLake];
+    end
+    S1-->S2-->S3-->S4
+```
+There's a lot of information in the diagram, but we don't expect you to digest it right away. You can simply use it as a reference when you go through the example below.
+
+## A step-by-step guide toward your first conventional plugin
+
+In this section, we will describe how to create a data collection plugin from scratch. The data to be collected is the information about all Committers and Contributors of the Apache project, in order to check whether they have signed the CLA. We are going to
+
+* request `https://people.apache.org/public/icla-info.json` to get the Committers' information
+* request the `mailing list` to get the Contributors' information
+
+We will focus on demonstrating how to request and cache information about all Committers through the Apache API and extract structured data from it. The collection of Contributors will only be briefly described.
+
+### Step 1: Bootstrap the new plugin
+
+**Note:** Please make sure you have DevLake up and running before proceeding.
+
+> More info about plugin:
+> Generally, we need these folders in plugin folders: `api`, `models` and `tasks`
+> `api` interacts with `config-ui` for test/get/save connection of data source
+>       - connection [example](https://github.com/apache/incubator-devlake/blob/main/plugins/gitlab/api/connection.go)
+>       - connection model [example](https://github.com/apache/incubator-devlake/blob/main/plugins/gitlab/models/connection.go)
+> `models` stores all `data entities` and `data migration scripts`. 
+>       - entity 
+>       - data migrations [template](https://github.com/apache/incubator-devlake/tree/main/generator/template/migrationscripts)
+> `tasks` contains all of our `sub tasks` for a plugin
+>       - task data [template](https://github.com/apache/incubator-devlake/blob/main/generator/template/plugin/tasks/task_data.go-template)
+>       - api client [template](https://github.com/apache/incubator-devlake/blob/main/generator/template/plugin/tasks/task_data_with_api_client.go-template)
+
+Don't worry if you cannot figure out what these concepts mean immediately. We'll explain them one by one later.
+
+Apache DevLake provides a generator to create a plugin conveniently. Let's scaffold our new plugin by running `go run generator/main.go create-plugin icla`, which would ask for `with_api_client` and `Endpoint`.
+
+* `with_api_client` is used for choosing if we need to request HTTP APIs by api_client. 
+* `Endpoint` is the site we will request; in our case, it should be `https://people.apache.org/`.
+
+![](https://i.imgur.com/itzlFg7.png)
+
+Now we have three files in our plugin. `api_client.go` and `task_data.go` are in the subfolder `tasks/`.
+![plugin files](https://i.imgur.com/zon5waf.png)
+
+Try running this plugin via the function `main` in `plugin_main.go`. When you see results like this:
+```
+$go run plugins/icla/plugin_main.go
+[2022-06-02 18:07:30]  INFO failed to create dir logs: mkdir logs: file exists
+press `c` to send cancel signal
+[2022-06-02 18:07:30]  INFO  [icla] start plugin
+invalid ICLA_TOKEN, but ignore this error now
+[2022-06-02 18:07:30]  INFO  [icla] scheduler for api https://people.apache.org/ worker: 25, request: 18000, duration: 1h0m0s
+[2022-06-02 18:07:30]  INFO  [icla] total step: 0
+```
+It works! Plugin 'icla' is defined and initiated. This plugin ONLY contains `plugin_main.go` and `task_data.go`, which is the simplest form of a plugin in Apache DevLake. In the next step, we'll show you how to request HTTP APIs by `api_client.go`.
+
+### Step 2: Create a sub-task for data collection
+Before we start, it is helpful to know how a collection task is executed: 
+1. First, Apache DevLake would call `plugin_main.PrepareTaskData()` to prepare needed data before any sub-tasks. We need to create an API client here.
+2. Then Apache DevLake will call the sub-tasks returned by `plugin_main.SubTaskMetas()`. Sub-task is an independent task to do some job, like requesting API, processing data, etc.
+
+> Each sub-task must be defined as a SubTaskMeta, and implement SubTaskEntryPoint of SubTaskMeta. SubTaskEntryPoint is defined as 
+> ```go
+> type SubTaskEntryPoint func(c SubTaskContext) error
+> ```
+> More info at: https://devlake.apache.org/blog/how-apache-devlake-runs/
+
+#### Step 2.1: Create a sub-task (Collector) for data collection
+
+Let's run `go run generator/main.go create-collector icla committer` and confirm it. This sub-task is activated by registering in `plugin_main.go/SubTaskMetas` automatically.
+
+![](https://i.imgur.com/tkDuofi.png)
+
+> - Collector will collect data from HTTP or other data sources, and save the data into the raw layer. 
+> - Inside the func `SubTaskEntryPoint` of `Collector`, we use `helper.NewApiCollector` to create an object of [ApiCollector](https://github.com/apache/incubator-devlake/blob/main/generator/template/plugin/tasks/api_collector.go-template), then call `execute()` to do the job. 
+
+Now you may notice that `data.ApiClient` is initiated in `plugin_main.go/PrepareTaskData.ApiClient`. `PrepareTaskData` creates a new `ApiClient`, which is the tool Apache DevLake suggests for requesting data from HTTP APIs. This tool supports some valuable features for HTTP APIs, like rate limiting, proxy, and retry. Of course, if you like, you may use the `http` lib instead, but it will be more tedious.
+
+Let's move forward to use it.
+
+1. To collect data from `https://people.apache.org/public/icla-info.json`,
+we have filled `https://people.apache.org/` into `tasks/api_client.go/ENDPOINT` in Step 1.
+
+![](https://i.imgur.com/q8Zltnl.png)
+
+2. Fill `public/icla-info.json` into `UrlTemplate`, delete the unnecessary iterator and add `println("receive data:", res)` in `ResponseParser` to see if collection was successful.
+
+![](https://i.imgur.com/ToLMclH.png)
+
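+The screenshots above correspond roughly to the following sketch. The field names follow the generated scaffold, and `RAW_COMMITTER_TABLE` stands in for the generated raw-table constant:
+
+```go
+collector, err := helper.NewApiCollector(helper.ApiCollectorArgs{
+	RawDataSubTaskArgs: helper.RawDataSubTaskArgs{
+		Ctx:   taskCtx,
+		Table: RAW_COMMITTER_TABLE,
+	},
+	ApiClient:   data.ApiClient,
+	UrlTemplate: "public/icla-info.json",
+	ResponseParser: func(res *http.Response) ([]json.RawMessage, error) {
+		// print the response pointer to confirm the request succeeded
+		println("receive data:", res)
+		return nil, nil
+	},
+})
+if err != nil {
+	return err
+}
+return collector.Execute()
+```
+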
+Ok, now the collector sub-task has been added to the plugin, and we can kick it off by running `main` again. If everything goes smoothly, the output should look like this:
+```bash
+[2022-06-06 12:24:52]  INFO  [icla] start plugin
+invalid ICLA_TOKEN, but ignore this error now
+[2022-06-06 12:24:52]  INFO  [icla] scheduler for api https://people.apache.org/ worker: 25, request: 18000, duration: 1h0m0s
+[2022-06-06 12:24:52]  INFO  [icla] total step: 1
+[2022-06-06 12:24:52]  INFO  [icla] executing subtask CollectCommitter
+[2022-06-06 12:24:52]  INFO  [icla] [CollectCommitter] start api collection
+receive data: 0x140005763f0
+[2022-06-06 12:24:55]  INFO  [icla] [CollectCommitter] finished records: 1
+[2022-06-06 12:24:55]  INFO  [icla] [CollectCommitter] end api collection
+[2022-06-06 12:24:55]  INFO  [icla] finished step: 1 / 1
+```
+
+Great! Now we can see data pulled from the server without any problem. The last step is to decode the response body in `ResponseParser` and return it to the framework, so it can be stored in the database.
+```go
+ResponseParser: func(res *http.Response) ([]json.RawMessage, error) {
+    body := &struct {
+        LastUpdated string          `json:"last_updated"`
+        Committers  json.RawMessage `json:"committers"`
+    }{}
+    err := helper.UnmarshalResponse(res, body)
+    if err != nil {
+        return nil, err
+    }
+    println("receive data:", len(body.Committers))
+    return []json.RawMessage{body.Committers}, nil
+},
+
+```
+Ok, run the function `main` once again. The output should look like the following, and we should be able to see some records show up in the table `_raw_icla_committer`.
+```bash
+……
+receive data: 272956 /* <- the number means 272956 models received */
+[2022-06-06 13:46:57]  INFO  [icla] [CollectCommitter] finished records: 1
+[2022-06-06 13:46:57]  INFO  [icla] [CollectCommitter] end api collection
+[2022-06-06 13:46:57]  INFO  [icla] finished step: 1 / 1
+```
+
+![](https://i.imgur.com/aVYNMRr.png)
+
+#### Step 2.2: Create a sub-task (Extractor) to extract data from the raw layer
+
+> - Extractor will extract data from raw layer and save it into tool db table.
+> - Except for some pre-processing, the main flow is similar to the collector.
+
+We have already collected data from the HTTP API and saved it into the DB table `_raw_XXXX`. In this step, we will extract the names of committers from the raw data. As you may infer from the name, raw tables are temporary and not easy to use directly.
+
+Apache DevLake suggests saving data via [gorm](https://gorm.io/docs/index.html), so we will create a model with gorm and add it to `plugin_main.go/AutoMigrate()`.
+
+plugins/icla/models/committer.go
+```go
+package models
+
+import (
+	"github.com/apache/incubator-devlake/models/common"
+)
+
+type IclaCommitter struct {
+	UserName     string `gorm:"primaryKey;type:varchar(255)"`
+	Name         string `gorm:"primaryKey;type:varchar(255)"`
+	common.NoPKModel
+}
+
+func (IclaCommitter) TableName() string {
+	return "_tool_icla_committer"
+}
+```
+
+plugins/icla/plugin_main.go
+![](https://i.imgur.com/4f0zJty.png)
+
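+Conceptually, the change in the screenshot boils down to registering the new model with gorm's `AutoMigrate`, along the lines of this sketch; the surrounding hook is scaffold-generated, so treat its signature as an assumption:
+
+```go
+// register IclaCommitter so its table is created automatically on startup
+func AutoMigrate(db *gorm.DB) error {
+	return db.AutoMigrate(&models.IclaCommitter{})
+}
+```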
+
+Ok, run the plugin, and table `_tool_icla_committer` will be created automatically just like the snapshot below:
+![](https://i.imgur.com/7Z324IX.png)
+
+Next, let's run `go run generator/main.go create-extractor icla committer` and type in what the command prompt asks for to create a new sub-task.
+
+![](https://i.imgur.com/UyDP9Um.png)
+
+Let's look at the function `extract` in the `committer_extractor.go` created just now, and the code that needs to be written here. Obviously, `resData.data` is the raw data, so we can json-decode each row, create a new `IclaCommitter` for each entry, and save them.
+```go
+Extract: func(resData *helper.RawData) ([]interface{}, error) {
+    names := &map[string]string{}
+    err := json.Unmarshal(resData.Data, names)
+    if err != nil {
+        return nil, err
+    }
+    extractedModels := make([]interface{}, 0)
+    for userName, name := range *names {
+        extractedModels = append(extractedModels, &models.IclaCommitter{
+            UserName: userName,
+            Name:     name,
+        })
+    }
+    return extractedModels, nil
+},
+```
+
+Ok, run it then we get:
+```
+[2022-06-06 15:39:40]  INFO  [icla] start plugin
+invalid ICLA_TOKEN, but ignore this error now
+[2022-06-06 15:39:40]  INFO  [icla] scheduler for api https://people.apache.org/ worker: 25, request: 18000, duration: 1h0m0s
+[2022-06-06 15:39:40]  INFO  [icla] total step: 2
+[2022-06-06 15:39:40]  INFO  [icla] executing subtask CollectCommitter
+[2022-06-06 15:39:40]  INFO  [icla] [CollectCommitter] start api collection
+receive data: 272956
+[2022-06-06 15:39:44]  INFO  [icla] [CollectCommitter] finished records: 1
+[2022-06-06 15:39:44]  INFO  [icla] [CollectCommitter] end api collection
+[2022-06-06 15:39:44]  INFO  [icla] finished step: 1 / 2
+[2022-06-06 15:39:44]  INFO  [icla] executing subtask ExtractCommitter
+[2022-06-06 15:39:46]  INFO  [icla] [ExtractCommitter] finished records: 1
+[2022-06-06 15:39:46]  INFO  [icla] finished step: 2 / 2
+```
+Now the committer data has been saved in `_tool_icla_committer`.
+![](https://i.imgur.com/6svX0N2.png)
+
+#### Step 2.3: Convertor
+
+Notes: the goal of Convertors is to create vendor-agnostic models out of the vendor-dependent ones created by the Extractors.
+They are not strictly necessary, but we encourage writing them because convertors and the domain layer will significantly help with building dashboards. More info about the domain layer [here](https://devlake.apache.org/docs/DataModels/DevLakeDomainLayerSchema/).
+
+In short:
+
+> - Convertor will convert data from the tool layer and save it into the domain layer.
+> - We use `helper.NewDataConverter` to create an object of DataConvertor, then call `execute()`. 
+
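+No convertor is generated for this demo, but a sketch modeled on existing plugins could look like the following. Exact signatures may differ slightly between DevLake versions, and the mapping to `crossdomain.Account` is just one plausible domain-layer target:
+
+```go
+converter, err := helper.NewDataConverter(helper.DataConverterArgs{
+	RawDataSubTaskArgs: helper.RawDataSubTaskArgs{
+		Ctx:   taskCtx,
+		Table: RAW_COMMITTER_TABLE,
+	},
+	InputRowType: reflect.TypeOf(models.IclaCommitter{}),
+	Input:        cursor, // a cursor over the _tool_icla_committer table
+	Convert: func(inputRow interface{}) ([]interface{}, error) {
+		committer := inputRow.(*models.IclaCommitter)
+		// map the vendor-specific model to a vendor-agnostic domain model
+		account := &crossdomain.Account{UserName: committer.UserName}
+		return []interface{}{account}, nil
+	},
+})
+if err != nil {
+	return err
+}
+return converter.Execute()
+```
+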
+#### Step 2.4: Let's try it
+Sometimes an OpenApi will be protected by a token or other auth types, and we need to log in to gain a token to access it. For example, only after logging in to `private@apache.org` could we gather the data about contributors signing the ICLA. Here we briefly introduce how to authorize DevLake to collect data.
+
+Let's look at `api_client.go`. `NewIclaApiClient` loads the config `ICLA_TOKEN` from `.env`, so we can add `ICLA_TOKEN=XXXXXX` to `.env` and use it in `apiClient.SetHeaders()` to mock the login status. Code as below:
+![](https://i.imgur.com/dPxooAx.png)
+
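+The screenshot is roughly equivalent to this sketch; the header name and token format depend on the data source and are assumptions here:
+
+```go
+// read the token loaded from .env (the exact mechanism depends on the scaffold)
+token := config.GetConfig().GetString("ICLA_TOKEN")
+apiClient.SetHeaders(map[string]string{
+	"Authorization": fmt.Sprintf("Bearer %s", token),
+})
+```
+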
+Of course, we can use `username/password` to get a token after mocking the login. Just try it and adjust according to the actual situation.
+
+Look for more related details at https://github.com/apache/incubator-devlake
+
+#### Step 2.5: Implement the GetTablesInfo() method of the PluginModel interface
+
+As shown in the following gitlab plugin example,
+add all models that need to be accessed by external plugins to the return value.
+
+```go
+var _ core.PluginModel = (*Gitlab)(nil)
+
+func (plugin Gitlab) GetTablesInfo() []core.Tabler {
+	return []core.Tabler{
+		&models.GitlabConnection{},
+		&models.GitlabAccount{},
+		&models.GitlabCommit{},
+		&models.GitlabIssue{},
+		&models.GitlabIssueLabel{},
+		&models.GitlabJob{},
+		&models.GitlabMergeRequest{},
+		&models.GitlabMrComment{},
+		&models.GitlabMrCommit{},
+		&models.GitlabMrLabel{},
+		&models.GitlabMrNote{},
+		&models.GitlabPipeline{},
+		&models.GitlabProject{},
+		&models.GitlabProjectCommit{},
+		&models.GitlabReviewer{},
+		&models.GitlabTag{},
+	}
+}
+```
+
+You can use it as follows:
+
+```go
+if pm, ok := plugin.(core.PluginModel); ok {
+    tables := pm.GetTablesInfo()
+    for _, table := range tables {
+        // do something
+    }
+}
+
+```
+
+#### Final step: Submit the code as open source code
+We encourage ideas and contributions ~ Let's use migration scripts, domain layers and the other concepts discussed above to write normative and platform-neutral code. More info [here](https://devlake.apache.org/docs/DataModels/DevLakeDomainLayerSchema), or contact us for help.
+
+
+### Done!
+
+Congratulations! The first plugin has been created! 🎖 
+
+## How do Singer-spec plugins work?
+
+These plugins share a lot in common with [conventional plugins](#how-do-conventional-plugins-work), except the collector stage uses the `Tap` abstraction. You will additionally need to configure JSON files for the [singer-tap](https://www.singer.io/) that
+you are intending to use. These configuration files tell the tap what APIs are available and what schema of data is expected to be returned by each of them.
+
+## A step-by-step guide towards your first Singer-spec plugin
+
+For this demo, we will create a simple GitHub plugin using the Singer-spec. Make sure you have familiarized yourself, at a high-level at least, with the concepts of [conventional plugins](#a-step-by-step-guide-towards-your-first-conventional-plugin) before proceeding.
+
+### Step 1: Singer tap setup
+
+Consult the documentation of the specific tap before getting started. Usually the steps go like this:
+
+*1.1*. Make sure you have Python 3+ with `pip` installed.
+
+*1.2*. Add the python module for the singer tap to the `requirements.txt` in the root Devlake directory.
+
+*1.3*. Run `make dep` to install the tap, as well as any other missing dependencies.
+
+*1.4*. You now have the tap binary installed and available on your $PATH.
+
+### Step 2: Setting up Singer tap config
+
+*2.1*. You will need to determine the structure of the `config.json` required to communicate with the tap. This should be in the documentation of the tap. This file will contain the config
+needed to have the tap make the API calls (e.g. authentication info, endpoint, etc)
+
+*2.2*. In some temp directory, create such a `config.json` file, and then run `<tap-name> -c config.json --discover > properties.json`. This will create a `properties.json` file that contains all the
+discovered "streams" of that tap. Each stream corresponds to a unique API call, and contains multiple fields including the JSON schema of the expected response for that stream.
+
+*2.3*. Place this `properties.json` file under `config/singer` and name it to something more specific, for instance, `github.json`, following our example.
+
+### Step 3: Writing and generating the plugin code
+
+As of now, the generator does not support scaffolding code for these plugins. As a workaround, use the generator to create a regular REST plugin and make the following modifications. We'll assume
+the plugin created for this example is called `github_singer`.
+
+*3.1*. Under `github_singer/models` create a `config.go` file that captures the structure of the `config.json` you used earlier. For this example, it'd look like this:
+```go
+package models
+
+import "time"
+
+// GithubConfig corresponds to the tap config documented at https://github.com/singer-io/tap-github
+type GithubConfig struct {
+    AccessToken    string    `json:"access_token"`
+    Repository     string    `json:"repository"`
+    StartDate      time.Time `json:"start_date"`
+    RequestTimeout int       `json:"request_timeout"`
+    BaseUrl        string    `json:"base_url"`
+}
+```
+
+*3.2*. Modify `github_singer/tasks/task_data.go` to have the options and task data appropriate for the subtasks. It is important that the `TaskData` struct contains a reference to the config struct,
+the connection ID, and a reference to the Tap client. In our example, we could have:
+```go
+type GithubSingerOptions struct {
+    ConnectionId uint64   `json:"connectionId"`
+    Owner        string   `json:"owner"`
+    Tasks        []string `json:"tasks,omitempty"`
+}
+
+type GithubSingerTaskData struct {
+    Options   *GithubSingerOptions `json:"-"`
+    TapConfig *models.GithubConfig
+    TapClient *tap.SingerTap
+}
+
+type GithubApiParams struct {
+    Repo         string
+    Owner        string
+    ConnectionId uint64
+}
+
+func DecodeAndValidateTaskOptions(options map[string]interface{}) (*GithubSingerOptions, errors.Error) {
+    var op GithubSingerOptions
+    if err := helper.Decode(options, &op, nil); err != nil {
+        return nil, err
+    }
+    if op.ConnectionId == 0 {
+        return nil, errors.Default.New("connectionId is invalid")
+    }
+    return &op, nil
+}
+```
+*3.3*. Modify `github_singer/impl/impl.go` so that `PrepareTaskData` creates the TaskData struct from the Options. In our case:
+```go
+func (plugin GithubSinger) PrepareTaskData(taskCtx core.TaskContext, options map[string]interface{}) (interface{}, errors.Error) {
+    op, err := tasks.DecodeAndValidateTaskOptions(options)
+    if err != nil {
+        return nil, err
+    }
+    connectionHelper := helper.NewConnectionHelper(
+        taskCtx,
+        nil,
+    )
+    connection := &models.GithubConnection{}
+    err = connectionHelper.FirstById(connection, op.ConnectionId)
+    if err != nil {
+        return nil, errors.Default.Wrap(err, "unable to get GithubSinger connection by the given connection ID")
+    }
+    endpoint := strings.TrimSuffix(connection.Endpoint, "/")
+    tapClient, err := tap.NewSingerTap(&tap.SingerTapConfig{
+        TapExecutable:        "tap-github",
+        StreamPropertiesFile: "github.json",
+        IsLegacy:             true,
+    })
+    if err != nil {
+        return nil, err
+    }
+    return &tasks.GithubSingerTaskData{
+        Options:   op,
+        TapClient: tapClient,
+        TapConfig: &models.GithubConfig{
+            AccessToken:    connection.Token,
+            Repository:     options["repo"].(string),
+            StartDate:      options["start_date"].(time.Time),
+            RequestTimeout: 300,
+            BaseUrl:        endpoint,
+        },
+    }, nil
+}
+```
+
+Note that the `TapExecutable` field here was set to `"tap-github"`, which is the name of the Python executable for the tap.
+The `StreamPropertiesFile` is the name of the properties file of interest, and is expected to reside in the directory referenced by the environment variable `TAP_PROPERTIES_DIR`. This directory is
+shared by all such JSON files. In our example, this directory is `<devlake-root>/config/tap`.
+Furthermore, observe how we created the `GithubConfig` object: the raw options supplied two variables, "repo" and "start_date", and the remaining fields were derived from the connection instance.
+These details will vary from tap to tap, but the gist will be the same.
+
+*3.4*. Since this is a Singer plugin, the collector will have to be modified to look like this:
+
+```go
+package tasks
+
+import (
+	"github.com/apache/incubator-devlake/errors"
+	"github.com/apache/incubator-devlake/helpers/pluginhelper/tap"
+	"github.com/apache/incubator-devlake/plugins/core"
+	"github.com/apache/incubator-devlake/plugins/helper"
+)
+
+var _ core.SubTaskEntryPoint = CollectIssues
+
+func CollectIssues(taskCtx core.SubTaskContext) errors.Error {
+	data := taskCtx.GetData().(*GithubSingerTaskData)
+	collector, err := tap.NewTapCollector(
+		&tap.CollectorArgs[tap.SingerTapStream]{
+			RawDataSubTaskArgs: helper.RawDataSubTaskArgs{
+				Ctx:   taskCtx,
+				Table: "singer_github_issue",
+				Params: GithubApiParams{
+					Repo:         data.TapConfig.Repository,
+					Owner:        data.Options.Owner,
+					ConnectionId: data.Options.ConnectionId,
+				},
+			},
+			TapClient:    data.TapClient,
+			TapConfig:    data.TapConfig,
+			ConnectionId: data.Options.ConnectionId,
+			StreamName:   "issues",
+		},
+	)
+	if err != nil {
+		return err
+	}
+	return collector.Execute()
+}
+
+var CollectIssuesMeta = core.SubTaskMeta{
+	Name:             "CollectIssues",
+	EntryPoint:       CollectIssues,
+	EnabledByDefault: true,
+	Description:      "Collect singer-tap Github issues",
+}
+```
+
+
+*3.5*. Generate the data models corresponding to the JSON schemas of the streams of interest. These make life easy at the Extractor stage, as we will not need to write "Response" structs by hand.
+We have a custom script that gets this job done: see `scripts/singer-model-generator.sh`. For our example, if we care about
+writing an extractor for GitHub Issues, we'll have to refer to the properties.json (or github.json) file to identify the stream name associated with it. In this case, it is called "issues". Next, we run the following
+command (make sure the script has execution permissions first, e.g. `chmod +x ./scripts/singer-model-generator.sh`):
+
+```sh
+./scripts/singer-model-generator.sh "./config/tap/github.json" "./plugins/github_singer" "issues"
+```
+
+For the sake of convenience, the script supports an `--all` flag in place of the stream, which will generate source files for all streams. Also, see the `tap-models` target in the Makefile for references, and add your invocations
+there.
+
+This will generate Go (raw) data models and place them under `github_singer/models/generated`. Do not modify these files manually.
+
+*3.5.1*. Note: Occasionally, the tap properties will not expose all the supported fields in the JSON schema - you can add them manually in the JSON file. Additionally, you might run into type problems (for instance, IDs coming back as strings but declared as integers). In general, these are rare scenarios, and technically bugs in the tap, that you would discover while testing.
+Either way, if you need to modify these data types, do it in the JSON file.
+
+*3.6*. The remaining steps are just like what you would do for conventional plugins (e.g. the REST APIs, migrations, etc). Again, the generated source files from step *3.5* can be used in the
+extractor for row-data deserialization, as sketched below.
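+
+A rough sketch of such an extractor follows. It mirrors the conventional-plugin extractor pattern covered earlier in this guide; the `generated.Issue` type stands in for a model produced in step *3.5*, and the exact names are assumptions for illustration:
+
+```go
+package tasks
+
+import (
+	"encoding/json"
+
+	"github.com/apache/incubator-devlake/errors"
+	"github.com/apache/incubator-devlake/plugins/core"
+	"github.com/apache/incubator-devlake/plugins/github_singer/models/generated"
+	"github.com/apache/incubator-devlake/plugins/helper"
+)
+
+var _ core.SubTaskEntryPoint = ExtractIssues
+
+func ExtractIssues(taskCtx core.SubTaskContext) errors.Error {
+	data := taskCtx.GetData().(*GithubSingerTaskData)
+	extractor, err := helper.NewApiExtractor(helper.ApiExtractorArgs{
+		RawDataSubTaskArgs: helper.RawDataSubTaskArgs{
+			Ctx: taskCtx,
+			// the same raw table the collector above wrote to
+			Table: "singer_github_issue",
+			Params: GithubApiParams{
+				Repo:         data.TapConfig.Repository,
+				Owner:        data.Options.Owner,
+				ConnectionId: data.Options.ConnectionId,
+			},
+		},
+		Extract: func(row *helper.RawData) ([]interface{}, errors.Error) {
+			// deserialize the raw row into the generated model
+			issue := &generated.Issue{}
+			if err := json.Unmarshal(row.Data, issue); err != nil {
+				return nil, errors.Convert(err)
+			}
+			return []interface{}{issue}, nil
+		},
+	})
+	if err != nil {
+		return err
+	}
+	return extractor.Execute()
+}
+```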
+
+**Final step:** [Submit the code as open source code](#final-step-submit-the-code-as-open-source-code)
+
+### Done!
+
+Congratulations! You have created a Singer-spec plugin!
diff --git a/versioned_docs/version-v0.15/DeveloperManuals/Project.md b/versioned_docs/version-v0.15/DeveloperManuals/Project.md
new file mode 100644
index 0000000000..bd1cc85c0f
--- /dev/null
+++ b/versioned_docs/version-v0.15/DeveloperManuals/Project.md
@@ -0,0 +1,251 @@
+---
+title: "Project"
+sidebar_position: 5
+description: >
+  `Project` is **a set of [Scope](../Overview/KeyConcepts.md#data-scope) from different domains**, a way to group different resources, and it is crucial for some metric calculations like `Dora`.
+---
+
+# Summary
+
+For some metric calculations, such as the `DORA` metrics, we often encounter situations that require comprehensive calculations based on data from multiple data sources.
+
+For example, we may use `GitLab` for code hosting and `Jenkins` for CI/CD. To calculate the PR deployment cycle time, we need to know which `GitLab Projects` and `Jenkins Jobs` are related, for both correctness and performance reasons.
+
+However, in most cases, our Apache DevLake database holds multiple `GitLab Projects` / `Jenkins Jobs` that belong to different teams.
+
+To distinguish them into different groups, `Project` was introduced in v0.15. Essentially, a `project` consists of a set of [Scopes](../Overview/KeyConcepts.md#data-scope), i.e., a couple of `GitLab Projects`, `Jira Boards` or `Jenkins Jobs`, etc.
+
+`Project` is **a set of [Scopes](../Overview/KeyConcepts.md#data-scope) from different domains**, a way to group different resources, and it is crucial for some metric calculations like `DORA`.
+
+Next, let us introduce `Project` in the following order:
+- `Project`-related models
+- Related APIs that can be used to manipulate `Project` models
+- The interfaces that plugins need to implement to support `Project`:
+	- The interface that a `Data Source Plugin` needs to implement
+	- The interface that a `Metric Plugin` needs to implement
+
+# Models
+
+To support `Project`, DevLake includes the following three models:
+ - `projects` describes a project object, including its name, creation and update times, and other basic information
+ - `project_metric_settings` describes which metric plugins a project has enabled.
+ - `project_mapping` describes the mapping relationship between a project and its scopes, including the name of the project, the table name of the [Scope](../Overview/KeyConcepts.md#data-scope), and the row_id in the [Scope](../Overview/KeyConcepts.md#data-scope) table.
+
+## projects
+
+|   **field**   | **type** | **length** | **description**               | **key** |
+| ------------- | -------- | ---------- | ----------------------------- | ------- |
+| `name`        | varchar  | 255        | name for project              | PK      |
+| `description` | longtext |            | description of the project    |         |
+| `created_at`  | datetime | 3          | created time of project       |         |
+| `updated_at`  | datetime | 3          | last updated time of project  |         | 
+
+### example
+
+| **name**  | **description**                      | **created_at**          | **updated_at**          |
+| --------- | ------------------------------------ | ----------------------- | ------------------------|
+| project_1 | this is one of the test projects     | 2022-11-01 01:22:13.000 | 2022-11-01 02:24:15.000 |
+| project_2 | this is another test project         | 2022-11-01 01:23:29.000 | 2022-11-01 02:27:24.000 |
+
+## project_metric_settings
+
+|    **field**    | **type** | **length** | **description**                                            | **key** |
+| --------------- | -------- | ---------- | ---------------------------------------------------------- | ------- |
+| `project_name`  | varchar  | 255        | name for project                                           | PK      |
+| `plugin_name`   | varchar  | 255        | name for plugin                                            | PK      |
+| `plugin_option` | longtext |            | options of the metric plugin for this project, in JSON      |         |
+| `enable`        | tinyint  | 1          | whether the metric plugin is enabled for the project        |         |
+
+### example
+
+| **project_name** | **plugin_name** | **plugin_option** | **enable** |
+| ---------------- | --------------- | ----------------- | ---------- |
+| project_1        |   dora          | {}                | true       |
+| project_2        |   dora          | {}                | false      |
+
+## project_mapping
+
+|   **field**    | **type** | **length** | **description**                                               | **key** |
+| -------------- | -------- | ---------- | ------------------------------------------------------------- | ------- |
+| `project_name` | varchar  | 255        | name for project                                              | PK      |
+| `table`        | varchar  | 255        | the table name of [Scope](../Overview/KeyConcepts.md#data-scope)          | PK      |
+| `row_id`       | varchar  | 255        | the row_id in the [Scope](../Overview/KeyConcepts.md#data-scope) table    | PK      |
+
+###  example
+
+| **project_name** | **table** | **row_id**               |
+| ---------------- | --------- | ------------------------ |
+| project_1        | Repo      | gitlab:GitlabProject:1:lake |
+| project_1        | Board     | jira:JiraBoard:1:lake    |
+| project_2        | Repo      | github:GithubRepo:1:lake |
+
+# How to manage projects via API
+
+For the API specification, please check the swagger doc (by visiting `[Your Config-UI Host]/api/swagger/index.html`).
+Related endpoints (a usage sketch follows the list):
+
+1. /projects
+2. /projects/:projectName/metrics
+3. /plugins
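+
+As a hypothetical sketch (the request shape below is illustrative; consult the swagger doc for the authoritative schema), creating a project and inspecting its metric settings might look like this:
+
+```shell
+# create a project named "project_1" (fields follow the `projects` model above)
+curl -X POST "http://localhost:4000/api/projects" \
+  -H "Content-Type: application/json" \
+  -d '{"name": "project_1", "description": "this is one of the test projects"}'
+
+# list the metric settings of "project_1"
+curl "http://localhost:4000/api/projects/project_1/metrics"
+```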
+
+# The interface that needs to be implemented
+
+We divide plugins into two categories:
+- The first category is `Data Source Plugins`, such as `GitLab`, `GitHub`, `Jira`, `Jenkins`, etc. These plugins collect data from various data sources.
+- The second category is `Metric Plugins`, such as `Dora`. These plugins do not directly contact the data sources; instead, they perform secondary calculations on the collected data after the `Data Source Plugins` have run.
+
+## Data Source Plugin
+
+For example, `GitLab`, `GitHub`, `Jira`, `Jenkins`, etc.
+
+These plugins extract data from various data sources and store it in the database. Because they deal directly with the data sources, we classify them as `Data Source Plugins`.
+
+## The DataSourcePluginBlueprintV200 Interface
+
+A `Data Source Plugin` needs to implement the `DataSourcePluginBlueprintV200` interface to support `Project`.
+
+The interface is defined as follows:
+
+```go
+// DataSourcePluginBlueprintV200 extends the V100 to provide support for
+// Project, so that complex metrics like DORA can be implemented based on a set
+// of Data Scopes
+type DataSourcePluginBlueprintV200 interface {
+	MakeDataSourcePipelinePlanV200(
+		connectionId uint64,
+		scopes []*BlueprintScopeV200,
+		syncPolicy BlueprintSyncPolicy,
+	) (PipelinePlan, []Scope, errors.Error)
+}
+```
+
+The `scopes` input parameter is an array of scope descriptors, each containing an ID, a name, and a list of entities.
+
+The input data format is as follows:
+
+```go
+[]*core.BlueprintScopeV200{
+	{
+		Entities: []string{"CODE", "TICKET",  "CICD"},
+		Id:       "37",
+		Name:     "test",
+	},
+}
+```
+
+The `syncPolicy` input parameter contains some option settings, whose structure is defined as follows:
+
+```go
+type BlueprintSyncPolicy struct {
+	Version          string     `json:"version" validate:"required,semver,oneof=1.0.0"`
+	SkipOnFail       bool       `json:"skipOnFail"`
+	CreatedDateAfter *time.Time `json:"createdDateAfter"`
+}
+```
+
+The `PipelinePlan` output is part of the blueprint JSON.
+
+The output data format is as follows (take the GitLab plugin as an example):
+
+```go
+core.PipelinePlan{
+	{
+		{
+			Plugin: "gitlab",
+			Subtasks: []string{
+				tasks.ConvertProjectMeta.Name,
+				tasks.CollectApiIssuesMeta.Name,
+				tasks.ExtractApiIssuesMeta.Name,
+				tasks.ConvertIssuesMeta.Name,
+				tasks.ConvertIssueLabelsMeta.Name,
+				tasks.CollectApiJobsMeta.Name,
+				tasks.ExtractApiJobsMeta.Name,
+				tasks.CollectApiPipelinesMeta.Name,
+				tasks.ExtractApiPipelinesMeta.Name,
+			},
+			Options: map[string]interface{}{
+				"connectionId": uint64(1),
+				"projectId":    testID,
+			},
+		},
+		{
+			Plugin: "gitextractor",
+			Options: map[string]interface{}{
+				"proxy":  "",
+				"repoId": repoId,
+				"url":    "https://git:nddtf@this_is_cloneUrl",
+			},
+		},
+	},
+	{
+		{
+			Plugin: "refdiff",
+			Options: map[string]interface{}{
+				"tagsLimit":   10,
+				"tagsOrder":   "reverse semver",
+				"tagsPattern": "pattern",
+			},
+		},
+	},
+}
+```
+
+Through this interface, `project` provides the plugin with a specific set of [Scopes](../Overview/KeyConcepts.md#data-scope) for a specific `connection`, and obtains in return all the `plugins` involved in the `PipelineTask`, along with their corresponding parameters. At the same time, the plugin needs to convert entities like `repo` and `board` in the data source into a `scope interface` that `project` can understand.
+
+The corresponding `scope interface` has been implemented in the following files in the framework layer:
+- `models/domainlayer/devops/cicd_scope.go`
+- `models/domainlayer/ticket/board.go`
+- `models/domainlayer/code/repo.go`
+
+In the `plugins/gitlab/impl/impl.go` file, there is a `GitLab` plugin implementation of the above interface, which can be used as a reference.
+
+And the `plugins/gitlab/api/blueprint_v200.go` contains implementation details. 
+
+The following files contain the models that the relevant implementations depend on for reference:
+- `plugins/gitlab/models/project.go`
+- `plugins/gitlab/models/transformation_rule.go`
+
+## Metric Plugins
+
+For example, the `Dora` and `RefDiff` plugins belong to the `Metric Plugins`.
+
+These plugins mainly calculate various metrics. Because they do not directly contact the data sources, we classify them as `Metric Plugins`.
+
+## The PluginMetric Interface
+
+A `Metric Plugin` needs to implement the `PluginMetric` interface to support `Project`.
+
+The interface is defined as follows:
+
+```go
+type PluginMetric interface {
+	// returns a list of required data entities and expected features.
+	// [{ "model": "cicd_tasks", "requiredFields": {"column": "type", "expectedValue": "Deployment"}}, ...]
+	RequiredDataEntities() (data []map[string]interface{}, err errors.Error)
+
+	// returns if the metric depends on Project for calculation.
+	// Currently, only dora would return true.
+	IsProjectMetric() bool
+
+	// indicates which plugins must be executed before executing this one.
+	// declare a set of dependencies with this
+	RunAfter() ([]string, errors.Error)
+
+	// returns an empty pointer of the plugin setting struct.
+	// (no concrete usage at this point)
+	Settings() (p interface{})
+}
+```
+
+`Project` uses `PluginMetric` to know whether a `Metric Plugin` depends on `project`, as well as the tables and fields required in its calculation process.
+ 
+In the `plugins/dora/impl/impl.go` file, there is a `Dora` implementation of the above interface, which can be used as a reference. You can find it by searching for the following signatures (a condensed sketch follows the list):
+- `func (plugin Dora) RequiredDataEntities() (data []map[string]interface{}, err errors.Error)`
+- `func (plugin Dora) IsProjectMetric() bool`
+- `func (plugin Dora) RunAfter() ([]string, errors.Error)`
+- `func (plugin Dora) Settings() interface{}`
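+
+As a minimal sketch (the return values below are illustrative assumptions, not the actual `Dora` implementation), a metric plugin could satisfy the interface like this:
+
+```go
+package impl
+
+import "github.com/apache/incubator-devlake/errors"
+
+// IsProjectMetric marks this plugin as one whose metrics are calculated per project.
+func (plugin Dora) IsProjectMetric() bool {
+	return true
+}
+
+// RunAfter declares the plugins that must finish before this one runs.
+func (plugin Dora) RunAfter() ([]string, errors.Error) {
+	return []string{}, nil
+}
+
+// Settings returns an empty pointer of the plugin setting struct (unused for now).
+func (plugin Dora) Settings() interface{} {
+	return nil
+}
+
+// RequiredDataEntities lists the domain tables and fields the calculation reads.
+func (plugin Dora) RequiredDataEntities() (data []map[string]interface{}, err errors.Error) {
+	return []map[string]interface{}{
+		{
+			"model": "cicd_tasks",
+			"requiredFields": map[string]string{
+				"column":        "type",
+				"expectedValue": "Deployment",
+			},
+		},
+	}, nil
+}
+```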
+
+## References
+
+To dig deeper into developing and utilizing our built-in functions and have a better developer experience, feel free to dive into our [godoc](https://pkg.go.dev/github.com/apache/incubator-devlake) reference.
\ No newline at end of file
diff --git a/versioned_docs/version-v0.15/DeveloperManuals/Release-SOP.md b/versioned_docs/version-v0.15/DeveloperManuals/Release-SOP.md
new file mode 100644
index 0000000000..ac63b62b4b
--- /dev/null
+++ b/versioned_docs/version-v0.15/DeveloperManuals/Release-SOP.md
@@ -0,0 +1,146 @@
+# DevLake Release Guide
+
+**Please make sure your public key is included in https://downloads.apache.org/incubator/devlake/KEYS; if not, please update this file first.**
+
+## How to update KEYS
+
+1. Clone the svn repository
+   ```shell
+   svn co https://dist.apache.org/repos/dist/dev/incubator/devlake
+   ```
+2. Append your public key to the KEYS file
+
+   ```shell
+   cd devlake
+   ```
+
+   - Check if your public key is in the KEYS file
+   - If it is not, create a new [GPG key](https://docs.github.com/en/authentication/managing-commit-signature-verification/generating-a-new-gpg-key), and then run the following command to confirm it was created successfully.
+
+   ```shell
+   gpg --list-sigs <your name>
+   ```
+
+   - Append your public key
+
+   ```shell
+       gpg --armor --export <your name> >> KEYS
+   ```
+
+3. Upload
+   ```shell
+   svn add KEYS
+   svn commit -m "update KEYS"
+   svn cp https://dist.apache.org/repos/dist/dev/incubator/devlake/KEYS https://dist.apache.org/repos/dist/release/incubator/devlake/ -m "update KEYS"
+   ```
+We will use `v0.14.0` as an example to demonstrate the release process.
+
+## ASF Release Policy
+
+- https://www.apache.org/legal/release-policy.html
+- https://incubator.apache.org/guides/releasemanagement.html
+
+## Tools:
+
+- `gpg`: for creating and verifying the signature
+- `shasum`: for creating and verifying the checksum
+- `git`: for checking out and packing the codebase
+- `svn`: for uploading the code to the Apache code hosting server
+
+## Prepare
+
+- Check against the Incubator Release Checklist
+- Create folder `releases/lake-v0.14.0` and put the two files `docker-compose.yml` and `env.example` in there.
+- Update the file `.github/ISSUE_TEMPLATE/bug-report.yml` to include the version `v0.14.0`
+
+## Pack
+
+- Checkout to the branch/commit
+
+```shell
+git clone https://github.com/apache/incubator-devlake.git
+cd incubator-devlake
+git checkout b268d53a48edb26d3c9b73b782798703f068f655
+```
+
+- Tag the commit and push to origin
+
+  ```shell
+  git tag v0.14.0-rc1
+  git push origin v0.14.0-rc1
+  ```
+
+- Pack the code
+  ```shell
+  git archive --format=tar.gz --output="<the-output-dir>/apache-devlake-0.14.0-incubating-src.tar.gz" --prefix="apache-devlake-0.14.0-incubating-src/" v0.14.0-rc1
+  ```
+- Before proceeding to the next step, please make sure your public key was included in the https://downloads.apache.org/incubator/devlake/KEYS
+- Create signature and checksum
+  ```shell
+  cd <the-output-dir>
+  gpg -s --armor --output apache-devlake-0.14.0-incubating-src.tar.gz.asc --detach-sig apache-devlake-0.14.0-incubating-src.tar.gz
+  shasum -a 512  apache-devlake-0.14.0-incubating-src.tar.gz > apache-devlake-0.14.0-incubating-src.tar.gz.sha512
+  ```
+- Verify signature and checksum
+  ```shell
+  gpg --verify apache-devlake-0.14.0-incubating-src.tar.gz.asc apache-devlake-0.14.0-incubating-src.tar.gz
+  shasum -a 512 --check apache-devlake-0.14.0-incubating-src.tar.gz.sha512
+  ```
+
+## Upload
+
+- Clone the svn repository
+  ```shell
+  svn co https://dist.apache.org/repos/dist/dev/incubator/devlake
+  ```
+- Copy the files into the svn local directory
+  ```shell
+  cd devlake
+  mkdir -p 0.14.0-incubating-rc1
+  cp <the-output-dir>/apache-devlake-0.14.0-incubating-src.tar.gz* 0.14.0-incubating-rc1/
+  ```
+- Upload local files
+  ```shell
+  svn add 0.14.0-incubating-rc1
+  svn commit -m "add 0.14.0-incubating-rc1"
+  ```
+
+## Vote
+
+You can check [Incubator Release Checklist](https://cwiki.apache.org/confluence/display/INCUBATOR/Incubator+Release+Checklist) before voting.
+
+1. Devlake community vote:
+
+   - Start the vote by sending an email to <de...@devlake.apache.org>
+     [[VOTE] Release Apache DevLake (Incubating) v0.14.0-rc1](https://lists.apache.org/thread/s6jj2tl5mlyb8jpdd88jmo5woydzhp54)
+   - Announce the vote result:
+     [[RESULT][VOTE] Release Apache DevLake (Incubating) v0.14.0-rc1](https://lists.apache.org/thread/mb5sxdopprqksf1ppfggkvkwxs6110zk)
+
+2. Apache incubator community vote:
+   - Start the vote by sending an email to general@incubator.apache.org
+     [[VOTE] Release Apache DevLake (Incubating) v0.14.0-rc1](https://lists.apache.org/thread/lgfrsv0ymfk1c19ngnkkn46cspkf76lg)
+   - Announce the vote result:
+     [[RESULT][VOTE] Release Apache DevLake (Incubating) v0.14.0-rc1](https://lists.apache.org/thread/2xoqzymgvnrvrbn9dwsby39olotvt6oj)
+
+## Release
+
+### Apache
+
+- Move the release to the ASF content distribution system
+  ```shell
+  svn mv https://dist.apache.org/repos/dist/dev/incubator/devlake/0.14.0-incubating-rc1 https://dist.apache.org/repos/dist/release/incubator/devlake/0.14.0-incubating -m "transfer packages for 0.14.0-incubating-rc1"
+  ```
+- Wait until the directory `https://downloads.apache.org/incubator/devlake/0.14.0-incubating/` has been created
+- Remove the last release from `https://downloads.apache.org/` (according to the Apache release policy, this link should point to the current release)
+  ```shell
+  svn rm https://dist.apache.org/repos/dist/release/incubator/devlake/0.11.0-incubating -m "remove 0.11.0-incubating"
+  ```
+- Announce release by sending an email to general@incubator.apache.org
+  [[ANNOUNCE] Release Apache Devlake(incubating) 0.14.0-incubating](https://lists.apache.org/thread/401p8xm8tcp9tplh2sdht7dnrbs03rht)
+
+### GitHub
+
+- Create tag v0.14.0 and push
+  ```shell
+  git checkout v0.14.0-rc1
+  git tag v0.14.0
+  git push origin v0.14.0
+  ```
+- Open the URL `https://github.com/apache/incubator-devlake/releases/`, draft a new release, fill in the form, and upload the two files `docker-compose.yml` and `env.example`
diff --git a/versioned_docs/version-v0.15/DeveloperManuals/TagNamingConventions.md b/versioned_docs/version-v0.15/DeveloperManuals/TagNamingConventions.md
new file mode 100644
index 0000000000..3417c29b63
--- /dev/null
+++ b/versioned_docs/version-v0.15/DeveloperManuals/TagNamingConventions.md
@@ -0,0 +1,13 @@
+---
+title: "Tag Naming Conventions"
+description: >
+  Tag Naming Conventions
+sidebar_position: 6
+---
+
+Please refer to the following rules when creating a new tag for Apache DevLake:
+- alpha: internal testing/preview, e.g. v0.12.0-alpha1
+- beta: community/customer testing/preview, e.g. v0.12.0-beta1
+- rc: ASF release candidate, e.g. v0.12.0-rc1
+
+
diff --git a/versioned_docs/version-v0.15/DeveloperManuals/_category_.json b/versioned_docs/version-v0.15/DeveloperManuals/_category_.json
new file mode 100644
index 0000000000..f921ae4715
--- /dev/null
+++ b/versioned_docs/version-v0.15/DeveloperManuals/_category_.json
@@ -0,0 +1,8 @@
+{
+  "label": "Developer Manuals",
+  "position": 8,
+  "link":{
+    "type": "generated-index",
+    "slug": "DeveloperManuals"
+  }
+}
diff --git a/versioned_docs/version-v0.15/GettingStarted/Authentication.md b/versioned_docs/version-v0.15/GettingStarted/Authentication.md
new file mode 100644
index 0000000000..f8f27df55d
--- /dev/null
+++ b/versioned_docs/version-v0.15/GettingStarted/Authentication.md
@@ -0,0 +1,43 @@
+---
+title: "Security and Authentication"
+sidebar_position: 8
+description: How to secure your deployment and enable the Authentication
+---
+
+This document explains how you can set up Apache DevLake securely.
+
+First of all, there are 4 services included in the deployment:
+
+- database: `postgres` and `mysql` are supported; you may choose one of them or any other compatible database, including cloud-based services. You should follow the database's own documentation to secure it.
+- grafana: You are likely to use it most of the time, browsing built-in dashboards and creating your own customized metrics. Grafana supports [User Management](https://grafana.com/docs/grafana/latest/administration/user-management/); please follow the official documentation to set it up based on your needs.
+- devlake: This is the core service for data collection and metric calculation. All collected/calculated data is stored in the database and accessed by the `grafana` service. `devlake` itself doesn't support user management of any kind, so we don't recommend exposing its port to the outside world.
+- config-ui: A web interface for setting up `devlake` to do the work. You may set up an automated `blueprint` to collect data. `config-ui` supports `Basic Authentication`: simply set the environment variables `ADMIN_USER` and `ADMIN_PASS` for the container. There are commented lines in the `config-ui.environment` section of our `docker-compose.yml` file for your convenience, as sketched below.
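+
+A minimal sketch of what those lines look like once uncommented (the credentials are placeholders you should change):
+
+```yaml
+  config-ui:
+    environment:
+      # choose your own credentials
+      ADMIN_USER: "devlake"
+      ADMIN_PASS: "merico"
+```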
+
+In general, we suggest that you keep the attack surface as small as possible.
+
+
+### Internal Deployment (Recommended)
+
+- database: Remove the `ports` if you don't need to access the database directly
+- devlake: Remove the `ports` section. If you want to call the API directly, do it via `config-ui/api` endpoint.
+- grafana: We have no choice but to expose the `ports` for people to browse the dashboards. However, you may want to set up User Management and a read-only database account for `grafana`.
+- config-ui: Normally, exposing the `ports` with `Basic Authentication` is sufficient for an internal deployment. You may choose to remove the `ports` and use techniques like `k8s port-forwarding` or `expose-port-when-needed` to enhance security. Keep in mind that config-ui is NOT designed to be used by many people, and it shouldn't be. Do NOT grant access if NOT necessary.
+
+
+### Internet Deployment (NOT Recommended)
+
+THIS IS DANGEROUS, DON'T DO IT. If you insist, here are some suggestions you may follow; please consult a security advisor before doing anything:
+
+- database: Same as above.
+- grafana: Same as above. In addition, set up `HTTPS` for transport.
+- devlake: Same as above.
+- config-ui: Same as above. In addition, use port-forwarding if you are using `k8s`; otherwise, set up `HTTPS` for transport.
+
+
+## Disclaimer
+
+Security is complicated. All suggestions listed above are based on what we have learned so far; Apache DevLake makes no guarantees of any kind. Please consult your security advisor before applying them.
+
+
+## Troubleshooting
+
+If you run into any problem, please check the [Troubleshooting](/Troubleshooting/Installation.md) or [create an issue](https://github.com/apache/incubator-devlake/issues)
diff --git a/versioned_docs/version-v0.15/GettingStarted/DockerComposeSetup.md b/versioned_docs/version-v0.15/GettingStarted/DockerComposeSetup.md
new file mode 100644
index 0000000000..7c592a85f2
--- /dev/null
+++ b/versioned_docs/version-v0.15/GettingStarted/DockerComposeSetup.md
@@ -0,0 +1,41 @@
+---
+title: "Install via Docker Compose"
+description: >
+  The steps to install DevLake via Docker Compose
+sidebar_position: 1
+---
+
+
+## Prerequisites
+
+- [Docker v19.03.10+](https://docs.docker.com/get-docker)
+- [docker-compose v2.2.3+](https://docs.docker.com/compose/install/) (If you have Docker Desktop installed then you already have the Compose plugin installed)
+
+## Launch DevLake
+
+- Commands written `like this` are to be run in your terminal.
+
+1. Download `docker-compose.yml` and `env.example` from [latest release page](https://github.com/apache/incubator-devlake/releases/latest) into a folder.
+2. Rename `env.example` to `.env`. For Mac/Linux users, please run `mv env.example .env` in the terminal. This file contains the environment variables that the DevLake server will use. Additional ones can be found in the compose file(s).
+3. Run `docker-compose up -d` to launch DevLake.
+
+## Collect data and view dashboards
+
+1. Visit `config-ui` at `http://localhost:4000` in your browser to configure DevLake and collect data.
+   - Please follow the [tutorial](UserManuals/ConfigUI/Tutorial.md)
+   - The `devlake` container takes a while to fully boot up. If `config-ui` complains about the API being unreachable, please wait a few seconds and refresh the page (a log-watching sketch follows this list).
+2. To view dashboards, click the *View Dashboards* button in the top left corner, or visit `localhost:3002` (username: `admin`, password: `admin`).
+   - We use [Grafana](https://grafana.com/) to visualize the DevOps [data](/Overview/SupportedDataSources.md) and build dashboards.
+   - For how to customize and provision dashboards, please see our [Grafana doc](../UserManuals/Dashboards/GrafanaUserGuide.md).
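+
+If the API stays unreachable for longer than expected, you can check the containers and watch the server logs while it boots (a quick sketch; service names follow the default compose file):
+
+```shell
+# confirm all services are up
+docker-compose ps
+# follow the devlake server logs until boot-up completes
+docker-compose logs -f devlake
+```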
+
+
+## Upgrade to a newer version
+
+Support for database schema migration was introduced to DevLake in v0.10.0. From v0.10.0 onwards, users can upgrade their instance smoothly to a newer version. However, versions prior to v0.10.0 do not support upgrading to a newer version with a different database schema.
+
+<br/>
+
+
+## Troubleshooting
+
+If you run into any problem, please check the [Troubleshooting](/Troubleshooting/Installation.md) or [create an issue](https://github.com/apache/incubator-devlake/issues)
diff --git a/versioned_docs/version-v0.15/GettingStarted/HelmSetup.md b/versioned_docs/version-v0.15/GettingStarted/HelmSetup.md
new file mode 100644
index 0000000000..f34a8cec0a
--- /dev/null
+++ b/versioned_docs/version-v0.15/GettingStarted/HelmSetup.md
@@ -0,0 +1,157 @@
+---
+title: "Install via Helm"
+description: >
+  The steps to install Apache DevLake via Helm for Kubernetes
+sidebar_position: 2
+---
+
+## Prerequisites
+
+- Helm >= 3.6.0
+- Kubernetes >= 1.19.0
+
+
+## Quick Start
+
+#### You can also check https://github.com/apache/incubator-devlake-helm-chart to make contributions
+
+### Install
+
+To install the chart with release name `devlake`:
+
+```shell
+helm repo add devlake https://apache.github.io/incubator-devlake-helm-chart
+helm repo update
+helm install devlake devlake/devlake --version=v0.15.0-rc4
+```
+
+Then visit your DevLake from the node port (32001 by default):
+
+http://YOUR-NODE-IP:32001
+
+#### Tips:
+If you are using minikube on your Mac, please use the following command to forward the port:
+```shell
+kubectl port-forward service/devlake-ui  30090:4000
+```
+and open another terminal:
+```shell
+kubectl port-forward service/devlake-grafana  30091:3000
+```
+
+Then you can visit:
+- config-ui at `http://YOUR-NODE-IP:30090`
+- grafana at `http://YOUR-NODE-IP:30091`
+
+### Update
+
+```shell
+helm repo update
+helm upgrade --install devlake devlake/devlake --version=v0.15.0-rc4
+```
+
+### Uninstall
+
+To uninstall/delete the `devlake` release:
+
+```shell
+helm uninstall devlake
+```
+
+
+## Some example deployments
+
+### Deploy with NodePort
+
+Conditions:
+ - IP address of the Kubernetes node: 192.168.0.6
+ - We want to visit DevLake on port 30000.
+
+```
+helm install devlake . --set service.uiPort=30000
+```
+
+Once deployed, visit DevLake at http://192.168.0.6:30000
+
+### Deploy with Ingress
+
+Conditions:
+ - I have already configured default ingress for the Kubernetes cluster
+ - I want to use http://devlake.example.com for visiting devlake
+
+```
+helm install devlake . --set "ingress.enabled=true,ingress.hostname=devlake.example.com"
+```
+
+Once deployed, visit DevLake at http://devlake.example.com, and grafana at http://devlake.example.com/grafana
+
+### Deploy with Ingress (Https)
+
+Conditions:
+ - I have already configured an ingress (class: nginx) for the Kubernetes cluster, with HTTPS on port 8443.
+ - I want to use https://devlake-0.example.com:8443 for visiting DevLake.
+ - The HTTPS certificates are generated by letsencrypt.org, and the certificate and key files are `cert.pem` and `key.pem`.
+
+First, create the secret:
+```
+kubectl create secret tls ssl-certificate --cert cert.pem --key key.pem
+```
+
+Then, deploy the devlake:
+```
+helm install devlake . \
+    --set "ingress.enabled=true,ingress.enableHttps=true,ingress.hostname=devlake-0.example.com" \
+    --set "ingress.className=nginx,ingress.httpsPort=8443" \
+    --set "ingress.tlsSecretName=ssl-certificate"
+```
+
+Once deployed, visit DevLake at https://devlake-0.example.com:8443, and grafana at https://devlake-0.example.com:8443/grafana
+
+
+## Parameters
+
+Some useful parameters for the chart; you can also check them in `values.yaml`.
+
+| Parameter | Description | Default |
+|-----------|-------------|----|
+| replicaCount  | Replica Count for devlake, currently not used  | 1  |
+| mysql.useExternal  | Whether to use an external MySQL server; currently not used  |  false |
+| mysql.externalServer  | External mysql server address  | 127.0.0.1 |
+| mysql.externalPort  | External mysql server port  | 3306 |
+| mysql.username  | username for mysql | merico |
+| mysql.password  | password for mysql | merico |
+| mysql.database  | database for mysql | lake |
+| mysql.rootPassword  | root password for mysql | admin |
+| mysql.storage.class  | storage class for mysql's volume | "" |
+| mysql.storage.size  | volume size for mysql's data | 5Gi |
+| mysql.image.repository  | repository for mysql's image | mysql |
+| mysql.image.tag  | image tag for mysql's image | 8  |
+| mysql.image.pullPolicy  | pullPolicy for mysql's image | IfNotPresent |
+| grafana.image.repository  | repository for grafana's image | apache/devlake-dashboard |
+| grafana.image.tag  | image tag for grafana's image | latest |
+| grafana.image.pullPolicy  | pullPolicy for grafana's image | Always |
+| lake.storage.class  | storage class for lake's volume | "" |
+| lake.storage.size  | volume size for lake's data | 100Mi |
+| lake.image.repository  | repository for lake's image | apache/devlake |
+| lake.image.tag  | image tag for lake's image | latest |
+| lake.image.pullPolicy  | pullPolicy for lake's image | Always |
+| lake.loggingDir | the root logging directory of Devlake | /app/logs | 
+| ui.image.repository  | repository for ui's image | apache/devlake-config-ui |
+| ui.image.tag  | image tag for ui's image | latest |
+| ui.image.pullPolicy  | pullPolicy for ui's image | Always |
+| service.type  | Service type for exposed service | NodePort |
+| service.uiPort  | Service port for config ui | 32001 |
+| service.ingress.enabled  | Whether to enable ingress  |  false |
+| service.ingress.enableHttps  | Whether to enable HTTPS  |  false |
+| service.ingress.className  | The class name for the ingressClass. If left empty, the default IngressClass will be used  | "" |
+| service.ingress.hostname  | The hostname/domainname for ingress  | localhost |
+| service.ingress.prefix | The prefix for endpoints, currently not supported due to devlake's implementation  | /  |
+| service.ingress.tlsSecretName  | The secret name for tls's certificate, required when https enabled  | "" |
+| service.ingress.httpPort  | The http port for ingress  | 80 |
+| service.ingress.httpsPort  | The https port for ingress  | 443 |
+| option.localtime  | The host path to mount as /etc/localtime | /etc/localtime |
+
+
+## Troubleshooting
+
+If you run into any problem, please check the [Troubleshooting](/Troubleshooting/Installation.md) or [create an issue](https://github.com/apache/incubator-devlake/issues)
diff --git a/versioned_docs/version-v0.15/GettingStarted/KubernetesSetup.md b/versioned_docs/version-v0.15/GettingStarted/KubernetesSetup.md
new file mode 100644
index 0000000000..ebb7b34c83
--- /dev/null
+++ b/versioned_docs/version-v0.15/GettingStarted/KubernetesSetup.md
@@ -0,0 +1,62 @@
+---
+title: "Install via Kubernetes"
+description: >
+  The steps to install Apache DevLake via Kubernetes
+sidebar_position: 3
+---
+
+:::caution
+
+We highly recommend the [helm approach](./HelmSetup.md); this page is for Advanced Installation only
+
+:::
+
+We provide a sample [k8s-deploy.yaml](https://github.com/apache/incubator-devlake/blob/main/deployment/k8s/k8s-deploy.yaml) to help deploy DevLake to Kubernetes.
+
+[k8s-deploy.yaml](https://github.com/apache/incubator-devlake/blob/main/deployment/k8s/k8s-deploy.yaml) will create a namespace `devlake` on your k8s cluster, and use `nodePort 30004` for `config-ui`,  `nodePort 30002` for `grafana` dashboards. If you would like to use a specific version of Apache DevLake, please update the image tag of `grafana`, `devlake` and `config-ui` deployments.
+
+## Step-by-step guide
+
+1. Download [k8s-deploy.yaml](https://github.com/apache/incubator-devlake/blob/main/deployment/k8s/k8s-deploy.yaml)
+2. Customize the settings (`devlake-config` config map):
+   - Settings shared between `grafana` and `mysql`
+     * `MYSQL_ROOT_PASSWORD`: set root password for `mysql`
+     * `MYSQL_USER`: shared between `mysql` and `grafana`
+     * `MYSQL_PASSWORD`: shared between `mysql` and `grafana`
+     * `MYSQL_DATABASE`: shared between `mysql` and `grafana`
+   - Settings used by `grafana`
+     * `MYSQL_URL`: set MySQL URL for `grafana` in `$HOST:$PORT` format
+     * `GF_SERVER_ROOT_URL`: Public URL to the `grafana`
+   - Settings used by `config-ui`:
+     * `GRAFANA_ENDPOINT`: FQDN of grafana which can be reached within the k8s cluster; normally you don't need to change it unless the namespace was changed
+     * `DEVLAKE_ENDPOINT`: FQDN of devlake which can be reached within the k8s cluster; normally you don't need to change it unless the namespace was changed
+     * `ADMIN_USER`/`ADMIN_PASS`: Not required, but highly recommended
+   - Settings used by `devlake`:
+     * `DB_URL`: update this value if  `MYSQL_USER`, `MYSQL_PASSWORD` or `MYSQL_DATABASE` were changed
+     * `LOGGING_DIR`: the directory of logs for Devlake - you likely don't need to change it.
+3. The `devlake` deployment stores its configuration in `/app/.env`. In our sample yaml, we use a `hostPath` volume, so please make sure the directory `/var/lib/devlake` exists on your k8s workers, or employ other techniques to persist the `/app/.env` file. Please do NOT mount the entire `/app` directory, because plugins are located in the `/app/bin` folder.
+4. Finally, execute the following command and DevLake should be up and running:
+   ```sh
+   kubectl apply -f k8s-deploy.yaml
+   ```
+
+
+## FAQ
+
+1. Can I use a managed cloud database service instead of running the database in Docker?
+
+   Yes, it only takes a few changes in the sample yaml file. Below we'll use MySQL on AWS RDS as an example.
+   1. (Optional) Create a MySQL instance on AWS RDS following this [doc](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_GettingStarted.CreatingConnecting.MySQL.html), skip this step if you'd like to use an existing instance
+   2. Remove the `mysql` deployment and service sections from `k8s-deploy.yaml`
+   3. Update `devlake-config` configmap according to your RDS instance setup:
+     * `MYSQL_ROOT_PASSWORD`: remove this line
+     * `MYSQL_USER`: use your RDS instance's master username
+     * `MYSQL_PASSWORD`: use your RDS instance's password
+     * `MYSQL_DATABASE`: use your RDS instance's DB name, you may need to create a database first with `CREATE DATABASE <DB name>;`
+     * `MYSQL_URL`: set this for `grafana` in `$HOST:$PORT` format, where $HOST and $PORT should be your RDS instance's endpoint and port respectively
+     * `DB_URL`: update the connection string with your RDS instance's info for `devlake` (see the sketch after this list)
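+
+As a rough sketch, a MySQL `DB_URL` typically follows this shape (an assumption based on the default setup; substitute your RDS credentials, endpoint, and database name):
+
+```
+DB_URL: mysql://<user>:<password>@<rds-endpoint>:3306/<db-name>?charset=utf8mb4&parseTime=True
+```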
+
+
+## Troubleshooting
+
+If you run into any problem, please check the [Troubleshooting](/Troubleshooting/Installation.md) or [create an issue](https://github.com/apache/incubator-devlake/issues)
diff --git a/versioned_docs/version-v0.15/GettingStarted/RainbondSetup.md b/versioned_docs/version-v0.15/GettingStarted/RainbondSetup.md
new file mode 100644
index 0000000000..fde8836547
--- /dev/null
+++ b/versioned_docs/version-v0.15/GettingStarted/RainbondSetup.md
@@ -0,0 +1,39 @@
+---
+title: "Install via Rainbond"
+sidebar_position: 7
+description: >
+  The steps to install DevLake in Rainbond.
+---
+
+This tutorial is for users who are not familiar with Kubernetes. [Rainbond](https://www.rainbond.com/) is a cloud-native application management platform built on Kubernetes. It is easy to use and requires no Kubernetes knowledge, so you can deploy applications in Kubernetes effortlessly.
+
+Installing DevLake on Rainbond is one of the easiest ways to get started.
+
+## Requirements
+
+* Rainbond 5.8.x or above
+
+## Deploy DevLake
+
+1. Log in to the Rainbond console, click `Market` in the left menu, switch to the open source app store, search for `DevLake` in the search box, and click the `Install` button.
+
+![](/img/GettingStarted/install-devlake.jpg)
+
+2. Fill in the installation information, and click the `Confirm` button to start the installation.
+  * Team: select a team or create a new team
+  * Cluster: select a cluster
+  * Application: select an application or create a new application
+  * Version: select a version
+
+3. A moment later, DevLake will be installed successfully; use the `Access` button to open DevLake.
+
+![](/img/GettingStarted/topology-devlake.jpg)
+
+## Next Step
+
+Create a Blueprint; see the [Tutorial](/docs/UserManuals/ConfigUI/Tutorial#creating-a-blueprint).
+
+
+## Troubleshooting
+
+If you run into any problem, please check the [Troubleshooting](/Troubleshooting/Installation.md) or [create an issue](https://github.com/apache/incubator-devlake/issues)
diff --git a/versioned_docs/version-v0.15/GettingStarted/TemporalSetup.md b/versioned_docs/version-v0.15/GettingStarted/TemporalSetup.md
new file mode 100644
index 0000000000..1d6ce844a5
--- /dev/null
+++ b/versioned_docs/version-v0.15/GettingStarted/TemporalSetup.md
@@ -0,0 +1,40 @@
+---
+title: "Install via Temporal"
+sidebar_position: 6
+description: >
+  The steps to install DevLake in Temporal mode.
+---
+
+
+Normally, DevLake executes pipelines on a local machine (we call it `local mode`), which is sufficient most of the time. However, when you have too many pipelines that need to be executed in parallel, it can be problematic, as the horsepower and throughput of a single machine are limited.
+
+`temporal mode` was added to support distributed pipeline execution: you can fire up an arbitrary number of workers on multiple machines to carry out those pipelines in parallel and overcome the limitations of a single machine.
+
+But be careful: many API services like JIRA/GitHub have request rate limits. Collecting data in parallel against the same API service with the same identity would most likely hit such limits.
+
+## How it works
+
+1. The DevLake server and workers connect to the same Temporal server by setting up `TEMPORAL_URL` (see the sketch after this list)
+2. The DevLake server sends a `pipeline` to the Temporal server, and one of the workers picks it up and executes it
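+
+A minimal sketch of the shared setting (the host is a placeholder; 7233 is Temporal's default frontend port):
+
+```shell
+# set on both the DevLake server and every worker
+TEMPORAL_URL=temporal.example.com:7233
+```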
+
+
+**IMPORTANT: This feature is in an early stage of development. Please use with caution**
+
+
+## Temporal Demo
+
+### Requirements
+
+- [Docker](https://docs.docker.com/get-docker)
+- [docker-compose](https://docs.docker.com/compose/install/)
+- [temporalio](https://temporal.io/)
+
+### How to setup
+
+1. Clone and fire up the [temporalio](https://temporal.io/) services
+2. Clone this repo, and fire up DevLake with the command `docker-compose -f deployment/temporal/docker-compose-temporal.yml up -d`
+
+
+## Troubleshooting
+
+If you run into any problem, please check the [Troubleshooting](/Troubleshooting/Installation.md) or [create an issue](https://github.com/apache/incubator-devlake/issues)
diff --git a/versioned_docs/version-v0.15/GettingStarted/_category_.json b/versioned_docs/version-v0.15/GettingStarted/_category_.json
new file mode 100644
index 0000000000..063400ae11
--- /dev/null
+++ b/versioned_docs/version-v0.15/GettingStarted/_category_.json
@@ -0,0 +1,8 @@
+{
+  "label": "Getting Started",
+  "position": 2,
+  "link":{
+    "type": "generated-index",
+    "slug": "GettingStarted"
+  }
+}
diff --git a/versioned_docs/version-v0.15/Metrics/AddedLinesOfCode.md b/versioned_docs/version-v0.15/Metrics/AddedLinesOfCode.md
new file mode 100644
index 0000000000..2d9443e58d
--- /dev/null
+++ b/versioned_docs/version-v0.15/Metrics/AddedLinesOfCode.md
@@ -0,0 +1,79 @@
+---
+title: "Added Lines of Code"
+description: >
+  Added Lines of Code
+sidebar_position: 11
+---
+
+## What is this metric? 
+The accumulated number of added lines of code.
+
+## Why is it important?
+1. Identify potential bottlenecks that may affect the output
+2. Encourage the team to implement a development model that matches the business requirements and to develop excellent coding habits
+
+## Which dashboard(s) does it exist in
+N/A
+
+## How is it calculated?
+This metric is calculated by summing the additions of commits in the given data range.
+
+<b>Data Sources Required</b>
+
+This metric relies on `commits` collected from GitHub, GitLab or BitBucket.
+
+<b>Data Transformation Required</b>
+
+N/A
+
+<b>SQL Queries</b>
+
+The following SQL shows how to find the added lines of code in specific repositories, e.g. 'repo-1' and 'repo-2'.
+
+```
+SELECT
+  sum(c.additions) as added_lines_of_code
+FROM 
+  commits c
+  LEFT JOIN repo_commits rc ON c.sha = rc.commit_sha
+  LEFT JOIN repos r ON r.id = rc.repo_id
+WHERE
+  -- please replace the repo ids with your own, or create a '$repo_id' variable in Grafana
+  r.id in ('repo-1','repo-2')
+  and message not like '%Merge%'
+  and $__timeFilter(c.authored_date)
+  -- the following condition will remove the month with incomplete data
+  and c.authored_date >= DATE_ADD(DATE_ADD($__timeFrom(), INTERVAL -DAY($__timeFrom())+1 DAY), INTERVAL +1 MONTH)
+```
+
+
+If you want to measure the monthly trend of `added lines of code` in the screenshot below, please run the following SQL in Grafana.
+
+![](/img/Metrics/added-loc-monthly.png)
+
+```
+WITH _commits as(
+  SELECT
+    DATE_ADD(date(authored_date), INTERVAL -DAY(date(authored_date))+1 DAY) as time,
+    sum(additions) as added_lines_of_code
+  FROM commits
+  WHERE
+    message not like '%Merge%'
+    and $__timeFilter(authored_date)
+    -- the following condition will remove the month with incomplete data
+    and authored_date >= DATE_ADD(DATE_ADD($__timeFrom(), INTERVAL -DAY($__timeFrom())+1 DAY), INTERVAL +1 MONTH)
+  group by 1
+)
+
+SELECT 
+  date_format(time,'%M %Y') as month,
+  added_lines_of_code
+FROM _commits
+ORDER BY time
+```
+
+
+## How to improve?
+1. From the project/team dimension, observe the accumulated change in added lines to assess team activity and the code growth rate
+2. From the version cycle dimension, observe the distribution of code changes over time, and evaluate the effectiveness of the project's development model
+3. From the member dimension, observe the trend and stability of each member's code output, and identify the key factors that affect code output by comparison
diff --git a/versioned_docs/version-v0.15/Metrics/BugAge.md b/versioned_docs/version-v0.15/Metrics/BugAge.md
new file mode 100644
index 0000000000..dddced271b
--- /dev/null
+++ b/versioned_docs/version-v0.15/Metrics/BugAge.md
@@ -0,0 +1,77 @@
+---
+title: "Bug Age"
+description: >
+  Bug Age
+sidebar_position: 5
+---
+
+## What is this metric? 
+The amount of time it takes to fix a bug.
+
+## Why is it important?
+1. Help the team to establish an effective hierarchical response mechanism for bugs. Focus on the resolution of important problems in the backlog.
+2. Improve the team's and individuals' bug-fixing efficiency. Identify good/to-be-improved practices that affect bug age.
+
+## Which dashboard(s) does it exist in
+- [Jira](https://devlake.apache.org/livedemo/DataSources/Jira)
+- [GitHub](https://devlake.apache.org/livedemo/DataSources/GitHub)
+- [Weekly Bug Retro](https://devlake.apache.org/livedemo/QAEngineers/WeeklyBugRetro)
+
+
+## How is it calculated?
+Similar to [requirement lead time](./RequirementLeadTime.md), this metric equals `resolution_date - created_date` of issues in type "BUG".
+
+<b>Data Sources Required</b>
+
+This metric relies on `issues` collected from Jira, GitHub, or TAPD.
+
+<b>Data Transformation Required</b>
+
+This metric relies on the 'type-bug' configuration in Jira, GitHub or TAPD's transformation rules while adding/editing a blueprint. This configuration tells DevLake what issues are `bugs`.
+
+<b>SQL Queries</b>
+
+The following SQL shows how to find the bug age of a specific `bug`.
+```
+-- lead_time_minutes is a pre-calculated field whose value equals 'resolution_date - created_date'
+SELECT
+  lead_time_minutes/1440 as bug_age_in_days
+FROM
+  issues
+WHERE
+  type = 'BUG'
+```
+
+If you want to measure the `mean bug age` in the screenshot below, please run the following SQL in Grafana.
+
+![](/img/Metrics/bug-age-monthly.png)
+
+```
+with _bugs as(
+  SELECT
+    DATE_ADD(date(i.resolution_date), INTERVAL -DAY(date(i.resolution_date))+1 DAY) as time,
+    AVG(i.lead_time_minutes/1440) as issue_lead_time
+  FROM issues i
+  	join board_issues bi on i.id = bi.issue_id
+  	join boards b on bi.board_id = b.id
+  WHERE
+    -- $board_id is a variable defined in Grafana's dashboard settings to filter out issues by boards
+    b.id in ($board_id)
+    and i.status = "DONE"
+    and i.type = 'BUG'
+    and $__timeFilter(i.resolution_date)
+    -- the following condition will remove the month with incomplete data
+    and i.resolution_date >= DATE_ADD(DATE_ADD($__timeFrom(), INTERVAL -DAY($__timeFrom())+1 DAY), INTERVAL +1 MONTH)
+  group by 1
+)
+
+SELECT 
+  date_format(time,'%M %Y') as month,
+  issue_lead_time as "Mean Bug Age in Days"
+FROM _bugs
+ORDER BY time
+```
+
+## How to improve?
+1. Observe the trend of bug age and locate the key reasons.
+2. Compare the age of bugs by severity levels, types (business, functional classification), affected components, etc.
\ No newline at end of file
diff --git a/versioned_docs/version-v0.15/Metrics/BugCountPer1kLinesOfCode.md b/versioned_docs/version-v0.15/Metrics/BugCountPer1kLinesOfCode.md
new file mode 100644
index 0000000000..99e9128901
--- /dev/null
+++ b/versioned_docs/version-v0.15/Metrics/BugCountPer1kLinesOfCode.md
@@ -0,0 +1,88 @@
+---
+title: "Bug Count per 1k Lines of Code"
+description: >
+  Bug Count per 1k Lines of Code
+sidebar_position: 6
+---
+
+## What is this metric? 
+The number of bugs per 1,000 lines of code.
+
+## Why is it important?
+1. Defect drill-down analysis to inform the development of design and code review strategies and to improve the internal QA process
+2. Assist teams to locate projects/modules with higher defect severity and density, and clean up technical debts
+3. Identify good/to-be-improved practices that affect defect count or defect rate, to reduce the number of future defects
+
+## Which dashboard(s) does it exist in
+N/A
+
+
+## How is it calculated?
+The number of bugs divided by the total accumulated lines of code (additions + deletions) in the given data range.
+
+<b>Data sources required</b>
+
+- `issues` collected from Jira, GitHub or TAPD.
+- `commits` collected from GitHub, GitLab or BitBucket.
+
+<b>Data Transformation Required</b>
+
+This metric relies on the 'type-bug' configuration in Jira, GitHub or TAPD's transformation rules while adding/editing a blueprint. This configuration tells DevLake what issues are `bugs`.
+
+<b>SQL Queries</b>
+
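+The following SQL sketches the overall value for a given time range (a minimal sketch derived from the monthly query below; adjust the filters to your needs):
+
+```
+with _loc as(
+  select sum(additions + deletions) as line_count
+  from commits
+  where
+    message not like 'Merge%'
+    and $__timeFilter(authored_date)
+),
+
+_bugs as(
+  select count(*) as bug_count
+  from issues
+  where
+    type = 'Bug'
+    and $__timeFilter(created_date)
+)
+
+select
+  1.0 * b.bug_count / l.line_count * 1000 as bug_count_per_1k_loc
+from _loc l, _bugs b
+where l.line_count > 0;
+```
+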
+If you want to measure the monthly trend of `Bugs per 1k lines of code` in the screenshot below, please run the following SQL in Grafana.
+
+![](/img/Metrics/bug-per-1k-loc-monthly.png)
+
+```
+with _line_of_code as (
+	select 
+	  DATE_ADD(date(authored_date), INTERVAL -DAY(date(authored_date))+1 DAY) as time,
+	  sum(additions + deletions) as line_count
+	from 
+	  commits
+	where 
+	  message not like 'Merge%'
+	  and $__timeFilter(authored_date)
+	group by 1
+),
+
+
+_bug_count as(
+  select 
+    DATE_ADD(date(created_date), INTERVAL -DAY(date(created_date))+1 DAY) as time,
+    count(*) as bug_count
+  from issues i
+  where 
+    type = 'Bug'
+    and $__timeFilter(created_date)
+  group by 1
+),
+
+
+_bug_count_per_1k_loc as(
+  select 
+    loc.time,
+    1.0 * bc.bug_count / loc.line_count * 1000 as bug_count_per_1k_loc
+  from 
+    _line_of_code loc
+    left join _bug_count bc on bc.time = loc.time
+  where
+    bc.bug_count is not null 
+    and loc.line_count is not null 
+    and loc.line_count != 0
+)
+
+select 
+  date_format(time,'%M %Y') as month,
+  bug_count_per_1k_loc as 'Bug Count per 1000 Lines of Code'
+from _bug_count_per_1k_loc 
+order by time;
+```
+
+## How to improve?
+1. From the project or team dimension, observe statistics on the total number of defects, the distribution of defects by severity level/type/owner, the cumulative trend of defects, and the trend of the defect rate per thousand lines of code
+2. From the version cycle dimension, observe the cumulative trend of the defect count/defect rate, which can be used to determine whether the growth of defects is slowing down and converging; this is an important reference for judging the stability of the software version's quality
+3. From the time dimension, analyze the trend of the defect count and defect rate to locate the key items/key points
+4. Evaluate whether the software quality and test plan are reasonable by referring to CMMI standard values
diff --git a/versioned_docs/version-v0.15/Metrics/BuildCount.md b/versioned_docs/version-v0.15/Metrics/BuildCount.md
new file mode 100644
index 0000000000..ff91addda5
--- /dev/null
+++ b/versioned_docs/version-v0.15/Metrics/BuildCount.md
@@ -0,0 +1,72 @@
+---
+title: "Build Count"
+description: >
+  Build Count
+sidebar_position: 23
+---
+
+## What is this metric? 
+The number of successful builds.
+
+## Why is it important?
+1. As a process indicator, it reflects the value-flow efficiency of the upstream development process
+2. Identify excellent/to-be-improved practices that impact builds, and drive the team to build up reusable tools and mechanisms as infrastructure for fast and high-frequency delivery
+
+## Which dashboard(s) does it exist in
+- [Jenkins](https://grafana-lake.demo.devlake.io/grafana/d/W8AiDFQnk/jenkins?orgId=1)
+
+
+## How is it calculated?
+This metric is calculated by counting the number of successful cicd_pipelines, such as Jenkins builds, GitLab pipelines and GitHub workflow runs in the given data range.
+
+<b>Data Sources Required</b>
+
+This metric relies on Jenkins builds, GitLab pipelines or GitHub workflow runs.
+
+<b>Data Transformation Required</b>
+
+N/A
+
+<b>SQL Queries</b>
+
+The following SQL shows how to find the total number of successful CI builds **finished** in the given time range.
+```
+SELECT
+  count(*)
+FROM 
+  cicd_pipelines
+WHERE
+  result = 'SUCCESS'
+  and $__timeFilter(finished_date)
+ORDER BY 1
+```
+
+If you want to measure the monthly trend of the `successful build count` in the screenshot below, please run the following SQL in Grafana.
+
+![](/img/Metrics/build-count-monthly.png)
+
+```
+WITH _builds as(
+  SELECT
+    DATE_ADD(date(finished_date), INTERVAL -DAYOFMONTH(date(finished_date))+1 DAY) as time,
+    count(*) as build_count
+  FROM 
+    cicd_pipelines
+  WHERE
+    result = "SUCCESS"
+    and $__timeFilter(finished_date)
+    -- the following condition will remove the month with incomplete data
+    and finished_date >= DATE_ADD(DATE_ADD($__timeFrom(), INTERVAL -DAY($__timeFrom())+1 DAY), INTERVAL +1 MONTH)
+  GROUP BY 1
+)
+
+SELECT 
+  date_format(time,'%M %Y') as month,
+  build_count as "Build Count"
+FROM _builds
+ORDER BY time
+```
+
+## How to improve?
+1. From the project dimension, compare build counts and success rates in the context of the project phase and task complexity.
+2. From the time dimension, analyze the trend of build counts and success rates to see if they have improved over time.
diff --git a/versioned_docs/version-v0.15/Metrics/BuildDuration.md b/versioned_docs/version-v0.15/Metrics/BuildDuration.md
new file mode 100644
index 0000000000..b431972500
--- /dev/null
+++ b/versioned_docs/version-v0.15/Metrics/BuildDuration.md
@@ -0,0 +1,72 @@
+---
+title: "Build Duration"
+description: >
+  Build Duration
+sidebar_position: 24
+---
+
+## What is this metric? 
+The duration of successful builds.
+
+## Why is it important?
+1. As a process indicator, it reflects the value-flow efficiency of the upstream R&D process
+2. Identify good/to-be-improved practices that impact builds, and drive the team to consolidate reusable tools and mechanisms into infrastructure for fast, high-frequency delivery
+
+## Which dashboard(s) does it exist in
+- [Jenkins](https://grafana-lake.demo.devlake.io/grafana/d/W8AiDFQnk/jenkins?orgId=1)
+
+
+## How is it calculated?
+This metric is calculated by getting the duration of successful cicd_pipelines, such as Jenkins builds, GitLab pipelines and GitHub workflow runs in the given data range.
+
+<b>Data Sources Required</b>
+
+This metric relies on Jenkins builds, GitLab pipelines or GitHub workflow runs.
+
+<b>Data Transformation Required</b>
+
+N/A
+
+<b>SQL Queries</b>
+
+The following SQL shows how to find the mean duration of successful CI builds **finished** in the given time range.
+```
+SELECT
+  avg(duration_sec/60) as duration_in_minutes
+FROM cicd_pipelines
+WHERE
+  result = 'SUCCESS'
+  and $__timeFilter(finished_date)
+ORDER BY 1
+```
+
+If you want to measure the monthly trend of the `mean duration of builds` in the screenshot below, please run the following SQL in Grafana.
+
+![](/img/Metrics/build-duration-monthly.png)
+
+```
+WITH _builds as(
+  SELECT
+    DATE_ADD(date(finished_date), INTERVAL -DAYOFMONTH(date(finished_date))+1 DAY) as time,
+    avg(duration_sec) as mean_duration_sec
+  FROM 
+    cicd_pipelines
+  WHERE
+    $__timeFilter(finished_date)
+    and id like "%jenkins%"
+    and name in ($job_id)
+    -- the following condition will remove the month with incomplete data
+    and finished_date >= DATE_ADD(DATE_ADD($__timeFrom(), INTERVAL -DAY($__timeFrom())+1 DAY), INTERVAL +1 MONTH)
+  GROUP BY 1
+)
+
+SELECT 
+  date_format(time,'%M %Y') as month,
+  mean_duration_sec/60 as mean_duration_minutes
+FROM _builds
+ORDER BY time
+```
+
+## How to improve?
+1. From the project dimension, compare build counts and success rates in the context of the project phase and task complexity.
+2. From the time dimension, analyze the trend of build counts and success rates to see if they have improved over time.
diff --git a/versioned_docs/version-v0.15/Metrics/BuildSuccessRate.md b/versioned_docs/version-v0.15/Metrics/BuildSuccessRate.md
new file mode 100644
index 0000000000..e9ea2e2d74
--- /dev/null
+++ b/versioned_docs/version-v0.15/Metrics/BuildSuccessRate.md
@@ -0,0 +1,89 @@
+---
+title: "Build Success Rate"
+description: >
+  Build Success Rate
+sidebar_position: 25
+---
+
+## What is this metric? 
+The ratio of successful builds to all builds.
+
+## Why is it important?
+1. As a process indicator, it reflects the value-flow efficiency of the upstream R&D process
+2. Identify good/to-be-improved practices that impact builds, and drive the team to consolidate reusable tools and mechanisms into infrastructure for fast, high-frequency delivery
+
+## Which dashboard(s) does it exist in
+- [Jenkins](https://grafana-lake.demo.devlake.io/grafana/d/W8AiDFQnk/jenkins?orgId=1)
+
+## How is it calculated?
+The number of successful builds divided by the total number of builds in the given data range.
+
+<b>Data Sources Required</b>
+
+This metric relies on Jenkins builds, GitLab pipelines or GitHub workflow runs.
+
+<b>Data Transformation Required</b>
+
+N/A
+
+<b>SQL Queries</b>
+
+The following SQL shows how to find the success rate of CI builds **finished** in the given time range.
+```
+SELECT
+  1.0 * sum(case when result = 'SUCCESS' then 1 else 0 end)/ count(*) as "Build Success Rate"
+FROM 
+  cicd_pipelines
+WHERE
+  $__timeFilter(finished_date)
+ORDER BY 1
+```
+
+If you want to measure the distribution of CI build results like the donut chart below, please run the following SQL in Grafana.
+
+![](/img/Metrics/build-result-distribution.png)
+
+```
+SELECT
+  result,
+  count(*) as build_count
+FROM 
+  cicd_pipelines
+WHERE
+  $__timeFilter(finished_date)
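+  -- the following two conditions keep Jenkins builds only; '$job_id' is a variable defined in Grafana's dashboard settings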
+  and id like "%jenkins%"
+  and name in ($job_id)
+  -- the following condition will remove the month with incomplete data
+  and finished_date >= DATE_ADD(DATE_ADD($__timeFrom(), INTERVAL -DAY($__timeFrom())+1 DAY), INTERVAL +1 MONTH)
+GROUP BY 1
+ORDER BY 2 DESC
+```
+
+If you want to measure the `mean build success rate per month` in the screenshot below, please run the following SQL in Grafana.
+
+![](/img/Metrics/build-success-rate-monthly.png)
+
+```
+WITH _build_success_rate as(
+  SELECT
+    DATE_ADD(date(finished_date), INTERVAL -DAYOFMONTH(date(finished_date))+1 DAY) as time,
+    result
+  FROM
+    cicd_pipelines
+  WHERE
+    $__timeFilter(finished_date)
+    -- the following condition will remove the month with incomplete data
+    and finished_date >= DATE_ADD(DATE_ADD($__timeFrom(), INTERVAL -DAY($__timeFrom())+1 DAY), INTERVAL +1 MONTH)
+)
+
+SELECT 
+  date_format(time,'%M %Y') as month,
+  1.0 * sum(case when result = 'SUCCESS' then 1 else 0 end)/ count(*) as "Build Success Rate"
+FROM _build_success_rate
+GROUP BY 1
+ORDER BY 1
+```
+
+## How to improve?
+1. From the project dimension, compare build counts and success rates in the context of the project phase and task complexity.
+2. From the time dimension, analyze the trend of build counts and success rates to see if they have improved over time.
diff --git a/versioned_docs/version-v0.15/Metrics/CFR.md b/versioned_docs/version-v0.15/Metrics/CFR.md
new file mode 100644
index 0000000000..1718a415eb
--- /dev/null
+++ b/versioned_docs/version-v0.15/Metrics/CFR.md
@@ -0,0 +1,149 @@
+---
+title: "DORA - Change Failure Rate"
+description: >
+  DORA - Change Failure Rate
+sidebar_position: 29
+---
+
+## What is this metric? 
+The percentage of changes made to the code that then resulted in incidents, rollbacks, or any other type of production failure.
+
+## Why is it important?
+Unlike Deployment Frequency and Lead Time for Changes, which measure throughput, Change Failure Rate measures the stability and quality of software delivery. A high CFR reflects a bad end-user experience, as production failures are relatively frequent.
+
+## Which dashboard(s) does it exist in
+DORA dashboard. See [live demo](https://grafana-lake.demo.devlake.io/grafana/d/qNo8_0M4z/dora?orgId=1).
+
+
+## How is it calculated?
+The number of failures per the number of deployments. For example, if there are five deployments in a day and one causes a failure, that is a 20% change failure rate.
+
+Below are the benchmarks for different development teams from Google's report. However, it's difficult to tell which group a team falls into when the team's change failure rate is `18%` or `40%`. Therefore, DevLake provides its own benchmarks to address this problem:
+
+| Groups           | Benchmarks      | DevLake Benchmarks |
+| -----------------| ----------------| -------------------|
+| Elite performers | 0%-15%          | 0%-15%             |
+| High performers  | 16%-30%         | 16%-20%            |
+| Medium performers| 16%-30%         | 21%-30%            |
+| Low performers   | 16%-30%         | > 30%              |
+
+<p><i>Source: 2021 Accelerate State of DevOps, Google</i></p>
+
+<b>Data Sources Required</b>
+
+This metric relies on:
+- `Deployments` collected in one of the following ways:
+  - Open APIs of Jenkins, GitLab, GitHub, etc.
+  - Webhook for general CI tools.
+  - Releases and PR/MRs from GitHub, GitLab APIs, etc.
+- `Incidents` collected in one of the following ways:
+  - Issue tracking tools such as Jira, TAPD, GitHub, etc.
+  - Bug or Service Monitoring tools such as PagerDuty, Sentry, etc.
+  - CI pipelines that mark 'failed' deployments.
+
+<b>Transformation Rules Required</b>
+
+This metric relies on:
+- Deployment configuration in Jenkins, GitLab or GitHub transformation rules to let DevLake know what CI builds/jobs can be regarded as `Deployments`.
+- Incident configuration in Jira, GitHub or TAPD transformation rules to let DevLake know what issues can be regarded as `Incidents`.
+
+<b>SQL Queries</b>
+
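+For the plain ratio described above, a minimal sketch (using the same `cicd_tasks` and `issues` tables as the full queries below) could be:
+
+```
+-- a minimal sketch: incidents per successful deployment in the selected time range
+SELECT
+  (SELECT count(distinct id) FROM issues WHERE type = 'INCIDENT' AND $__timeFilter(created_date))
+  / (SELECT count(distinct id) FROM cicd_tasks WHERE type = 'DEPLOYMENT' AND result = 'SUCCESS' AND $__timeFilter(finished_date))
+  AS change_failure_rate
+```
+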
+If you want to measure the monthly trend of change failure rate as the picture shown below, run the following SQL in Grafana.
+
+![](/img/Metrics/cfr-monthly.jpeg)
+
+```
+with _deployments as (
+-- get the deployment count each month
+	SELECT
+		date_format(finished_date,'%y/%m') as month,
+		COUNT(distinct id) AS deployment_count
+	FROM
+		cicd_tasks
+	WHERE
+		type = 'DEPLOYMENT'
+		and result = 'SUCCESS'
+	GROUP BY 1
+),
+
+_incidents as (
+-- get the incident count each month
+	SELECT
+		date_format(created_date,'%y/%m') as month,
+		COUNT(distinct id) AS incident_count
+	FROM
+		issues
+	WHERE
+		type = 'INCIDENT'
+	GROUP BY 1
+),
+
+_calendar_months as(
+-- deal with the month with no incidents
+	SELECT date_format(CAST((SYSDATE()-INTERVAL (month_index) MONTH) AS date), '%y/%m') as month
+	FROM ( SELECT 0 month_index
+			UNION ALL SELECT   1  UNION ALL SELECT   2 UNION ALL SELECT   3
+			UNION ALL SELECT   4  UNION ALL SELECT   5 UNION ALL SELECT   6
+			UNION ALL SELECT   7  UNION ALL SELECT   8 UNION ALL SELECT   9
+			UNION ALL SELECT   10 UNION ALL SELECT  11
+		) month_index
+	WHERE (SYSDATE()-INTERVAL (month_index) MONTH) > SYSDATE()-INTERVAL 6 MONTH	
+)
+
+SELECT 
+	cm.month,
+	case 
+		when d.deployment_count is null or i.incident_count is null then 0 
+		else i.incident_count/d.deployment_count end as change_failure_rate
+FROM 
+	_calendar_months cm
+	left join _incidents i on cm.month = i.month
+	left join _deployments d on cm.month = d.month
+ORDER BY 1
+```
+
+If you want to measure in which category your team falls into as the picture shown below, run the following SQL in Grafana.
+
+![](/img/Metrics/cfr-text.jpeg)
+
+```
+with _deployment_count as (
+-- get the deployment deployed within the selected time period in the top-right corner
+	SELECT
+		COUNT(distinct id) AS deployment_count
+	FROM
+		cicd_tasks
+	WHERE
+		type = 'DEPLOYMENT'
+		and result = 'SUCCESS'
+    and $__timeFilter(finished_date)
+),
+
+_incident_count as (
+-- get the incident created within the selected time period in the top-right corner
+	SELECT
+		COUNT(distinct id) AS incident_count
+	FROM
+		issues
+	WHERE
+		type = 'INCIDENT'
+		and $__timeFilter(created_date)
+)
+
+SELECT 
+	case 
+		when deployment_count is null or incident_count is null or deployment_count = 0 then NULL 
+		when incident_count/deployment_count <= .15 then "0-15%"
+		when incident_count/deployment_count <= .20 then "16%-20%"
+		when incident_count/deployment_count <= .30 then "21%-30%"
+		else "> 30%"
+		end as change_failure_rate
+FROM 
+	_deployment_count, _incident_count
+```
+
+## How to improve?
+- Add unit tests for all new features
+- "Shift left", start QA early and introduce more automated tests
+- Enforce code review if it's not already strictly practiced
diff --git a/versioned_docs/version-v0.15/Metrics/CommitAuthorCount.md b/versioned_docs/version-v0.15/Metrics/CommitAuthorCount.md
new file mode 100644
index 0000000000..46bf55934c
--- /dev/null
+++ b/versioned_docs/version-v0.15/Metrics/CommitAuthorCount.md
@@ -0,0 +1,52 @@
+---
+title: "Commit Author Count"
+description: >
+  Commit Author Count
+sidebar_position: 10
+---
+
+## What is this metric? 
+The number of commit authors who have committed code.
+
+## Why is it important?
+Take inventory of project/team R&D resource inputs, assess the input-output ratio, and rationalize resource allocation.
+
+
+## Which dashboard(s) does it exist in
+N/A
+
+
+## How is it calculated?
+This metric is calculated by counting the number of commit authors in the given data range.
+
+<b>Data Sources Required</b>
+
+This metric relies on commits collected from GitHub, GitLab or BitBucket.
+
+<b>Data Transformation Required</b>
+
+N/A
+
+<b>SQL Queries</b>
+
+The following SQL shows how to find the `commit author count` in specific repositories, e.g. 'repo-1' and 'repo-2'.
+
+```
+SELECT
+  count(distinct c.author_id)
+FROM 
+  commits c
+  LEFT JOIN repo_commits rc ON c.sha = rc.commit_sha
+  LEFT JOIN repos r ON r.id = rc.repo_id
+WHERE
+  -- please replace the repo ids with your own, or create a '$repo_id' variable in Grafana
+  r.id in ('repo-1', 'repo-2')
+  and message not like '%Merge%'
+  and $__timeFilter(c.authored_date)
+  -- the following condition will remove the month with incomplete data
+  and c.authored_date >= DATE_ADD(DATE_ADD($__timeFrom(), INTERVAL -DAY($__timeFrom())+1 DAY), INTERVAL +1 MONTH)
+```
+
+
+## How to improve?
+As a secondary indicator, this helps assess the labor cost of participating in coding.
diff --git a/versioned_docs/version-v0.15/Metrics/CommitCount.md b/versioned_docs/version-v0.15/Metrics/CommitCount.md
new file mode 100644
index 0000000000..336accb720
--- /dev/null
+++ b/versioned_docs/version-v0.15/Metrics/CommitCount.md
@@ -0,0 +1,83 @@
+---
+title: "Commit Count"
+description: >
+  Commit Count
+sidebar_position: 9
+---
+
+## What is this metric? 
+The number of commits created.
+
+## Why is it important?
+1. Identify potential bottlenecks that may affect output
+2. Encourage the R&D practice of committing in small steps and develop good coding habits
+
+## Which dashboard(s) does it exist in
+- GitHub Release Quality and Contribution Analysis
+- Demo-Is this month more productive than last?
+- Demo-Commit Count by Author
+
+## How is it calculated?
+This metric is calculated by counting the number of commits in the given data range.
+
+<b>Data Sources Required</b>
+
+This metric relies on commits collected from GitHub, GitLab or BitBucket.
+
+<b>Data Transformation Required</b>
+
+N/A
+
+<b>SQL Queries</b>
+
+The following SQL shows how to find commits in specific repositories, e.g. 'repo-1' and 'repo-2'.
+```
+SELECT
+  r.id,
+  c.*
+FROM 
+  commits c
+  LEFT JOIN repo_commits rc ON c.sha = rc.commit_sha
+  LEFT JOIN repos r ON r.id = rc.repo_id
+WHERE
+  -- please replace the repo ids with your own, or create a '$repo_id' variable in Grafana
+  r.id in ('repo-1','repo-2')
+  and message not like '%Merge%'
+  and $__timeFilter(c.authored_date)
+  -- the following condition will remove the month with incomplete data
+  and c.authored_date >= DATE_ADD(DATE_ADD($__timeFrom(), INTERVAL -DAY($__timeFrom())+1 DAY), INTERVAL +1 MONTH)
+```
+
+If you want to measure the monthly trend of `commit count` in the screenshot below, please run the following SQL in Grafana.
+
+![](/img/Metrics/commit-count-monthly.png)
+
+```
+with _commits as(
+  SELECT
+    DATE_ADD(date(c.authored_date), INTERVAL -DAY(date(c.authored_date))+1 DAY) as time,
+    count(c.sha) as commit_count
+  FROM 
+    commits c
+    LEFT JOIN repo_commits rc ON c.sha = rc.commit_sha
+    LEFT JOIN repos r ON r.id = rc.repo_id
+  WHERE
+    -- please replace the repo ids with your own, or create a '$repo_id' variable in Grafana
+    r.id in ($repo_id)
+    and message not like '%Merge%'
+    and $__timeFilter(c.authored_date)
+    -- the following condition will remove the month with incomplete data
+    and c.authored_date >= DATE_ADD(DATE_ADD($__timeFrom(), INTERVAL -DAY($__timeFrom())+1 DAY), INTERVAL +1 MONTH)
+  group by 1
+)
+
+SELECT 
+  date_format(time,'%M %Y') as month,
+  commit_count as "Commit Count"
+FROM _commits
+ORDER BY time
+```
+
+## How to improve?
+1. Identify, through comparison, the main reasons for an unusual number of commits and their possible impact on the commit count
+2. Evaluate whether the number of commits is reasonable in conjunction with finer-grained workload metrics (e.g. lines of code or code equivalents)
diff --git a/versioned_docs/version-v0.15/Metrics/DeletedLinesOfCode.md b/versioned_docs/version-v0.15/Metrics/DeletedLinesOfCode.md
new file mode 100644
index 0000000000..963834af1a
--- /dev/null
+++ b/versioned_docs/version-v0.15/Metrics/DeletedLinesOfCode.md
@@ -0,0 +1,77 @@
+---
+title: "Deleted Lines of Code"
+description: >
+  Deleted Lines of Code
+sidebar_position: 12
+---
+
+## What is this metric? 
+The accumulated number of deleted lines of code.
+
+## Why is it important?
+1. Identify potential bottlenecks that may affect the output
+2. Encourage the team to adopt a development model that matches the business requirements, and develop good coding habits
+
+## Which dashboard(s) does it exist in
+N/A
+
+## How is it calculated?
+This metric is calculated by summing the deletions of commits in the given data range.
+
+<b>Data Sources Required</b>
+
+This metric relies on `commits` collected from GitHub, GitLab or BitBucket.
+
+<b>Data Transformation Required</b>
+
+N/A
+
+<b>SQL Queries</b>
+
+The following SQL shows how to find the `deleted lines of code` in specific repositories, e.g. 'repo-1' and 'repo-2'.
+
+```
+SELECT
+  sum(c.deletions) as deleted_lines_of_code
+FROM 
+  commits c
+  LEFT JOIN repo_commits rc ON c.sha = rc.commit_sha
+  LEFT JOIN repos r ON r.id = rc.repo_id
+WHERE
+  -- please replace the repo ids with your own, or create a '$repo_id' variable in Grafana
+  r.id in ('repo-1','repo-2')
+  and message not like '%Merge%'
+  and $__timeFilter(c.authored_date)
+  -- the following condition will remove the month with incomplete data
+  and c.authored_date >= DATE_ADD(DATE_ADD($__timeFrom(), INTERVAL -DAY($__timeFrom())+1 DAY), INTERVAL +1 MONTH)
+```
+
+If you want to measure the monthly trend of `deleted lines of code` in the screenshot below, please run the following SQL in Grafana.
+
+![](/img/Metrics/deleted-loc-monthly.png)
+
+```
+with _commits as(
+  SELECT
+    DATE_ADD(date(authored_date), INTERVAL -DAY(date(authored_date))+1 DAY) as time,
+    sum(deletions) as deleted_lines_of_code
+  FROM commits
+  WHERE
+    message not like '%Merge%'
+    and $__timeFilter(authored_date)
+    -- the following condition will remove the month with incomplete data
+    and authored_date >= DATE_ADD(DATE_ADD($__timeFrom(), INTERVAL -DAY($__timeFrom())+1 DAY), INTERVAL +1 MONTH)
+  group by 1
+)
+
+SELECT 
+  date_format(time,'%M %Y') as month,
+  deleted_lines_of_code
+FROM _commits
+ORDER BY time
+```
+
+## How to improve?
+1. From the project/team dimension, observe the accumulated change in lines of code to assess team activity and the code growth rate
+2. From the version cycle dimension, observe the active time distribution of code changes, and evaluate the effectiveness of the project's development model.
+3. From the member dimension, observe the trend and stability of each member's code output, and identify the key factors that affect code output by comparison.
diff --git a/versioned_docs/version-v0.15/Metrics/DeploymentFrequency.md b/versioned_docs/version-v0.15/Metrics/DeploymentFrequency.md
new file mode 100644
index 0000000000..9cd3c6cbcb
--- /dev/null
+++ b/versioned_docs/version-v0.15/Metrics/DeploymentFrequency.md
@@ -0,0 +1,169 @@
+---
+title: "DORA - Deployment Frequency"
+description: >
+  DORA - Deployment Frequency
+sidebar_position: 26
+---
+
+## What is this metric? 
+How often an organization deploys code to production or releases it to end users.
+
+## Why is it important?
+Deployment frequency reflects the efficiency of a team's deployment. A team that deploys more frequently can deliver the product faster and users' feature requirements can be met faster.
+
+## Which dashboard(s) does it exist in
+DORA dashboard. See [live demo](https://grafana-lake.demo.devlake.io/grafana/d/qNo8_0M4z/dora?orgId=1).
+
+
+## How is it calculated?
+Deployment frequency is calculated based on the number of deployment days, not the number of deployments, e.g., daily, weekly, monthly, yearly.
+
+Below are the benchmarks for different development teams from Google's report. DevLake uses the same benchmarks.
+
+| Groups           | Benchmarks                                    | DevLake Benchmarks                             |
+| -----------------| --------------------------------------------- | ---------------------------------------------- |
+| Elite performers | On-demand (multiple deploys per day)          | On-demand                                      |
+| High performers  | Between once per week and once per month      | Between once per week and once per month       |
+| Medium performers| Between once per month and once every 6 months| Between once per month and once every 6 months |
+| Low performers   | Fewer than once per six months                | Fewer than once per six months                 |
+
+<p><i>Source: 2021 Accelerate State of DevOps, Google</i></p>
+
+
+<b>Data Sources Required</b>
+
+This metric relies on deployments collected in multiple ways:
+- Open APIs of Jenkins, GitLab, GitHub, etc.
+- Webhook for general CI tools.
+- Releases and PR/MRs from GitHub, GitLab APIs, etc.
+
+<b>Transformation Rules Required</b>
+
+This metric relies on the deployment configuration in Jenkins, GitLab or GitHub transformation rules to let DevLake know what CI builds/jobs can be regarded as deployments.
+
+<b>SQL Queries</b>
+
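+Since the metric counts deployment days rather than deployments, a minimal sketch to count the distinct days with at least one successful deployment (same `cicd_tasks` table as the full queries below) could be:
+
+```
+-- a minimal sketch: number of distinct days with at least one successful deployment
+SELECT count(distinct date(finished_date)) AS deployment_days
+FROM cicd_tasks
+WHERE type = 'DEPLOYMENT'
+  AND result = 'SUCCESS'
+  AND $__timeFilter(finished_date)
+```
+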
+If you want to measure the monthly trend of deployment count as the picture shown below, run the following SQL in Grafana.
+
+![](/img/Metrics/deployment-frequency-monthly.jpeg)
+
+```
+with _deployments as (
+-- get the deployment count each month
+	SELECT
+		date_format(finished_date,'%y/%m') as month,
+		COUNT(distinct id) AS deployment_count
+	FROM
+		cicd_tasks
+	WHERE
+		type = 'DEPLOYMENT'
+		and result = 'SUCCESS'
+	GROUP BY 1
+),
+
+_calendar_months as(
+-- deal with the month with no deployments
+	SELECT date_format(CAST((SYSDATE()-INTERVAL (month_index) MONTH) AS date), '%y/%m') as month
+	FROM ( SELECT 0 month_index
+			UNION ALL SELECT   1  UNION ALL SELECT   2 UNION ALL SELECT   3
+			UNION ALL SELECT   4  UNION ALL SELECT   5 UNION ALL SELECT   6
+			UNION ALL SELECT   7  UNION ALL SELECT   8 UNION ALL SELECT   9
+			UNION ALL SELECT   10 UNION ALL SELECT  11
+		) month_index
+	WHERE (SYSDATE()-INTERVAL (month_index) MONTH) > SYSDATE()-INTERVAL 6 MONTH	
+)
+
+SELECT 
+	cm.month, 
+	case when d.deployment_count is null then 0 else d.deployment_count end as deployment_count
+FROM 
+	_calendar_months cm
+	left join _deployments d on cm.month = d.month
+ORDER BY 1
+```
+
+If you want to measure in which category your team falls into as the picture shown below, run the following SQL in Grafana.
+
+![](/img/Metrics/deployment-frequency-text.jpeg)
+
+```
+with last_few_calendar_months as(
+-- get the last few months within the selected time period in the top-right corner
+	SELECT CAST((SYSDATE()-INTERVAL (H+T+U) DAY) AS date) day
+	FROM ( SELECT 0 H
+			UNION ALL SELECT 100 UNION ALL SELECT 200 UNION ALL SELECT 300
+		) H CROSS JOIN ( SELECT 0 T
+			UNION ALL SELECT  10 UNION ALL SELECT  20 UNION ALL SELECT  30
+			UNION ALL SELECT  40 UNION ALL SELECT  50 UNION ALL SELECT  60
+			UNION ALL SELECT  70 UNION ALL SELECT  80 UNION ALL SELECT  90
+		) T CROSS JOIN ( SELECT 0 U
+			UNION ALL SELECT   1 UNION ALL SELECT   2 UNION ALL SELECT   3
+			UNION ALL SELECT   4 UNION ALL SELECT   5 UNION ALL SELECT   6
+			UNION ALL SELECT   7 UNION ALL SELECT   8 UNION ALL SELECT   9
+		) U
+	WHERE
+		(SYSDATE()-INTERVAL (H+T+U) DAY) > $__timeFrom()
+),
+
+_days_weeks_deploy as(
+	SELECT
+			date(DATE_ADD(last_few_calendar_months.day, INTERVAL -WEEKDAY(last_few_calendar_months.day) DAY)) as week,
+			MAX(if(deployments.day is not null, 1, 0)) as week_deployed,
+			COUNT(distinct deployments.day) as days_deployed
+	FROM 
+		last_few_calendar_months
+		LEFT JOIN(
+			SELECT
+				DATE(finished_date) AS day,
+				id
+			FROM cicd_tasks
+			WHERE
+				type = 'DEPLOYMENT'
+				and result = 'SUCCESS') deployments ON deployments.day = last_few_calendar_months.day
+	GROUP BY week
+	),
+
+_monthly_deploy as(
+	SELECT
+			date(DATE_ADD(last_few_calendar_months.day, INTERVAL -DAY(last_few_calendar_months.day)+1 DAY)) as month,
+			MAX(if(deployments.day is not null, 1, 0)) as months_deployed
+	FROM 
+		last_few_calendar_months
+		LEFT JOIN(
+			SELECT
+				DATE(finished_date) AS day,
+				id
+			FROM cicd_tasks
+			WHERE
+				type = 'DEPLOYMENT'
+				and result = 'SUCCESS') deployments ON deployments.day = last_few_calendar_months.day
+	GROUP BY month
+	),
+
+_median_number_of_deployment_days_per_week as (
+	SELECT x.days_deployed as median_number_of_deployment_days_per_week from _days_weeks_deploy x, _days_weeks_deploy y
+	GROUP BY x.days_deployed
+	HAVING SUM(SIGN(1-SIGN(y.days_deployed-x.days_deployed)))/COUNT(*) > 0.5
+	LIMIT 1
+),
+
+_median_number_of_deployment_days_per_month as (
+	SELECT x.months_deployed as median_number_of_deployment_days_per_month from _monthly_deploy x, _monthly_deploy y
+	GROUP BY x.months_deployed
+	HAVING SUM(SIGN(1-SIGN(y.months_deployed-x.months_deployed)))/COUNT(*) > 0.5
+	LIMIT 1
+)
+
+SELECT 
+	CASE  
+		WHEN median_number_of_deployment_days_per_week >= 3 THEN 'On-demand'
+		WHEN median_number_of_deployment_days_per_week >= 1 THEN 'Between once per week and once per month'
+		WHEN median_number_of_deployment_days_per_month >= 1 THEN 'Between once per month and once every 6 months'
+		ELSE 'Fewer than once per six months' END AS 'Deployment Frequency'
+FROM _median_number_of_deployment_days_per_week, _median_number_of_deployment_days_per_month
+```
+
+## How to improve?
+- Trunk-based development: work in small batches and merge work into the shared trunk often.
+- Integrate CI/CD tools for automated deployment
+- Improve automated test coverage
diff --git a/versioned_docs/version-v0.15/Metrics/IncidentAge.md b/versioned_docs/version-v0.15/Metrics/IncidentAge.md
new file mode 100644
index 0000000000..edf0307725
--- /dev/null
+++ b/versioned_docs/version-v0.15/Metrics/IncidentAge.md
@@ -0,0 +1,76 @@
+---
+title: "Incident Age"
+description: >
+  Incident Age
+sidebar_position: 7
+---
+
+## What is this metric? 
+The amount of time it takes to fix an incident.
+
+## Why is it important?
+1. Help the team establish an effective tiered response mechanism for incidents. Focus on resolving important problems in the backlog.
+2. Improve the team's and individuals' incident-fixing efficiency. Identify good/to-be-improved practices that affect incident age
+
+## Which dashboard(s) does it exist in
+- [Jira](https://devlake.apache.org/livedemo/DataSources/Jira)
+- [GitHub](https://devlake.apache.org/livedemo/DataSources/GitHub)
+
+
+## How is it calculated?
+Similar to [requirement lead time](./RequirementLeadTime.md), this metric equals `resolution_date - created_date` of issues in type "INCIDENT".
+
+<b>Data Sources Required</b>
+
+This metric relies on `issues` collected from Jira, GitHub, TAPD, or PagerDuty.
+
+<b>Transformation Rules Required</b>
+
+This metric relies on the 'type-incident' configuration in Jira, GitHub or TAPD's transformation rules while adding/editing a blueprint. This configuration tells DevLake what issues are `incidents`.
+
+<b>SQL Queries</b>
+
+The following SQL shows how to find the incident age of a specific `incident`.
+```
+-- lead_time_minutes is a pre-calculated field whose value equals 'resolution_date - created_date'
+SELECT
+  lead_time_minutes/1440 as incident_age_in_days
+FROM
+  issues
+WHERE
+  type = 'INCIDENT'
+```
+
+If you want to measure the `mean incident age` in the screenshot below, please run the following SQL in Grafana.
+
+![](/img/Metrics/incident-age-monthly.png)
+
+```
+with _incidents as(
+  SELECT
+    DATE_ADD(date(i.resolution_date), INTERVAL -DAY(date(i.resolution_date))+1 DAY) as time,
+    AVG(i.lead_time_minutes/1440) as issue_lead_time
+  FROM issues i
+  	join board_issues bi on i.id = bi.issue_id
+  	join boards b on bi.board_id = b.id
+  WHERE
+    -- $board_id is a variable defined in Grafana's dashboard settings to filter out issues by boards
+    b.id in ($board_id)
+    and i.status = "DONE"
+    and i.type = 'INCIDENT'
+    and $__timeFilter(i.resolution_date)
+    -- the following condition will remove the month with incomplete data
+    and i.resolution_date >= DATE_ADD(DATE_ADD($__timeFrom(), INTERVAL -DAY($__timeFrom())+1 DAY), INTERVAL +1 MONTH)
+  group by 1
+)
+
+SELECT 
+  date_format(time,'%M %Y') as month,
+  issue_lead_time as "Mean Incident Age in Days"
+FROM _incidents
+ORDER BY time
+```
+
+## How to improve?
+1. Observe the trend of incident age and locate the key reasons.
+2. Compare the age of incidents by severity levels, types (business, functional classification), affected components, etc.
\ No newline at end of file
diff --git a/versioned_docs/version-v0.15/Metrics/IncidentCountPer1kLinesOfCode.md b/versioned_docs/version-v0.15/Metrics/IncidentCountPer1kLinesOfCode.md
new file mode 100644
index 0000000000..c789c0667b
--- /dev/null
+++ b/versioned_docs/version-v0.15/Metrics/IncidentCountPer1kLinesOfCode.md
@@ -0,0 +1,88 @@
+---
+title: "Incident Count per 1k Lines of Code"
+description: >
+  Incident Count per 1k Lines of Code
+sidebar_position: 8
+---
+
+## What is this metric? 
+The number of incidents per 1,000 lines of code.
+
+## Why is it important?
+1. Perform defect drill-down analysis to inform design and code review strategies and to improve the internal QA process
+2. Help teams locate projects/modules with higher defect severity and density, and clean up technical debt
+3. Identify good/to-be-improved practices that affect the defect count or defect rate, to reduce the number of future defects
+
+## Which dashboard(s) does it exist in
+N/A
+
+
+## How is it calculated?
+The number of incidents divided by total accumulated lines of code (additions + deletions) in the given data range.
+
+<b>Data Sources Required</b>
+
+- `issues` collected from Jira, GitHub, TAPD or PagerDuty.
+- `commits` collected from GitHub, GitLab or BitBucket.
+
+<b>Data Transformation Required</b>
+
+This metric relies on the 'type-incident' configuration in Jira, GitHub or TAPD's transformation rules while adding/editing a blueprint. This configuration tells DevLake what issues are `incidents`.
+
+<b>SQL Queries</b>
+
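+If you only need a single overall number for the selected time range, a minimal sketch (using the same `issues` and `commits` tables as the query below) could be:
+
+```
+-- a minimal sketch: overall incidents per 1k lines of code in the selected time range
+SELECT
+  1000.0 * (SELECT count(*) FROM issues WHERE type = 'INCIDENT' AND $__timeFilter(created_date))
+  / (SELECT sum(additions + deletions) FROM commits WHERE message NOT LIKE 'Merge%' AND $__timeFilter(authored_date))
+  AS incident_count_per_1k_loc
+```
+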
+If you want to measure the monthly trend of `Incidents per 1k lines of code` in the screenshot below, please run the following SQL in Grafana.
+
+![](/img/Metrics/incident-per-1k-loc-monthly.png)
+
+```
+with _line_of_code as (
+	select 
+	  DATE_ADD(date(authored_date), INTERVAL -DAY(date(authored_date))+1 DAY) as time,
+	  sum(additions + deletions) as line_count
+	from 
+	  commits
+	where 
+	  message not like 'Merge%'
+	  and $__timeFilter(authored_date)
+	group by 1
+),
+
+
+_incident_count as(
+  select 
+    DATE_ADD(date(created_date), INTERVAL -DAY(date(created_date))+1 DAY) as time,
+    count(*) as incident_count
+  from issues i
+  where 
+    type = 'INCIDENT'
+    and $__timeFilter(created_date)
+  group by 1
+),
+
+
+_incident_count_per_1k_loc as(
+  select 
+    loc.time,
+    1.0 * ic.incident_count / loc.line_count * 1000 as incident_count_per_1k_loc
+  from 
+    _line_of_code loc
+    left join _incident_count ic on ic.time = loc.time
+  where
+    ic.incident_count is not null 
+    and loc.line_count is not null 
+    and loc.line_count != 0
+)
+
+select 
+  date_format(time,'%M %Y') as month,
+  incident_count_per_1k_loc as 'Incident Count per 1000 Lines of Code'
+from _incident_count_per_1k_loc 
+order by time;
+```
+
+## How to improve?
+1. From the project or team dimension, observe statistics such as the total number of defects, their distribution by severity level/type/owner, the cumulative defect trend, and the trend of the defect rate per thousand lines of code
+2. From the version cycle dimension, observe the cumulative trend of the defect count/defect rate to judge whether defect growth is slowing down and converging; this is an important reference for assessing the stability of a software version's quality
+3. From the time dimension, analyze the trend of the test defect count and defect rate to locate key items/key points
+4. Evaluate whether the software quality and test plan are reasonable by referring to CMMI standard values
diff --git a/versioned_docs/version-v0.15/Metrics/LeadTimeForChanges.md b/versioned_docs/version-v0.15/Metrics/LeadTimeForChanges.md
new file mode 100644
index 0000000000..efd9fe7c55
--- /dev/null
+++ b/versioned_docs/version-v0.15/Metrics/LeadTimeForChanges.md
@@ -0,0 +1,158 @@
+---
+title: "DORA - Lead Time for Changes"
+description: >
+  DORA - Lead Time for Changes
+sidebar_position: 27
+---
+
+## What is this metric? 
+The median amount of time for a commit to be deployed into production.
+
+## Why is it important?
+This metric measures the time it takes to commit code to the production environment and reflects the speed of software delivery. A shorter lead time for changes means that your team is efficient at coding and deploying your project.
+
+## Which dashboard(s) does it exist in
+DORA dashboard. See [live demo](https://grafana-lake.demo.devlake.io/grafana/d/qNo8_0M4z/dora?orgId=1).
+
+
+## How is it calculated?
+1. Find the deployments whose finished_date falls into the time range that users select
+2. Calculate the commit diff between consecutive deployments using the deployments' commit_sha
+3. Find the PRs mapped to the commits in step 2; now we have the relation Deployment - Deployed_commits - Deployed_PRs
+4. Calculate PR Deploy Time as the deployment's finish_time minus the PR's merge_time
+
+![](/img/Metrics/pr-commit-deploy.jpeg)
+
+PR cycle time is pre-calculated when the dora plugin is triggered. You can connect to DevLake's database and find it in the field `change_timespan` in [table.pull_requests](https://devlake.apache.org/docs/DataModels/DevLakeDomainLayerSchema/#pull_requests).
+
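+A minimal sketch to inspect this pre-calculated field directly could be:
+
+```
+-- a minimal sketch: each merged PR's pre-calculated cycle time, in minutes
+SELECT id, merged_date, change_timespan AS pr_cycle_time_in_minutes
+FROM pull_requests
+WHERE change_timespan IS NOT NULL
+```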
+
+Below are the benchmarks for different development teams from Google's report. However, it's difficult to tell which group a team falls into when the team's median lead time for changes is `between one week and one month`. Therefore, DevLake provides its own benchmarks to address this problem:
+
+| Groups           | Benchmarks                           | DevLake Benchmarks              |
+| -----------------| -------------------------------------| --------------------------------|
+| Elite performers | Less than one hour                   | Less than one hour              |
+| High performers  | Between one day and one week         | Less than one week              |
+| Medium performers| Between one month and six months     | Between one week and six months |
+| Low performers   | More than six months                 | More than six months            |
+
+<p><i>Source: 2021 Accelerate State of DevOps, Google</i></p>
+
+<b>Data Sources Required</b>
+
+This metric relies on deployments collected in multiple ways:
+- Open APIs of Jenkins, GitLab, GitHub, etc.
+- Webhook for general CI tools.
+- Releases and PR/MRs from GitHub, GitLab APIs, etc.
+
+<b>Transformation Rules Required</b>
+
+This metric relies on the deployment configuration in Jenkins, GitLab or GitHub transformation rules to let DevLake know what CI builds/jobs can be regarded as deployments.
+
+<b>SQL Queries</b>
+
+If you want to measure the monthly trend of median lead time for changes as the picture shown below, run the following SQL in Grafana.
+
+![](/img/Metrics/lead-time-for-changes-monthly.jpeg)
+
+```
+with _pr_stats as (
+-- get PRs' cycle lead time in each month
+	SELECT
+		pr.id,
+		date_format(pr.merged_date,'%y/%m') as month,
+		pr.change_timespan as pr_cycle_time
+	FROM
+		pull_requests pr
+	WHERE
+		pr.merged_date is not null
+		and pr.change_timespan is not null
+		and $__timeFilter(pr.merged_date)
+),
+
+_find_median_clt_each_month as (
+	SELECT x.month, x.pr_cycle_time as med_change_lead_time 
+	FROM _pr_stats x JOIN _pr_stats y ON x.month = y.month
+	GROUP BY x.month, x.pr_cycle_time
+	HAVING SUM(SIGN(1-SIGN(y.pr_cycle_time-x.pr_cycle_time)))/COUNT(*) > 0.5
+),
+
+_find_clt_rank_each_month as (
+	SELECT
+		*,
+		rank() over(PARTITION BY month ORDER BY med_change_lead_time) as _rank 
+	FROM
+		_find_median_clt_each_month
+),
+
+_clt as (
+	SELECT
+		month,
+		med_change_lead_time
+	from _find_clt_rank_each_month
+	WHERE _rank = 1
+),
+
+_calendar_months as(
+-- to deal with the month with no incidents
+	SELECT date_format(CAST((SYSDATE()-INTERVAL (month_index) MONTH) AS date), '%y/%m') as month
+	FROM ( SELECT 0 month_index
+			UNION ALL SELECT   1  UNION ALL SELECT   2 UNION ALL SELECT   3
+			UNION ALL SELECT   4  UNION ALL SELECT   5 UNION ALL SELECT   6
+			UNION ALL SELECT   7  UNION ALL SELECT   8 UNION ALL SELECT   9
+			UNION ALL SELECT   10 UNION ALL SELECT  11
+		) month_index
+	WHERE (SYSDATE()-INTERVAL (month_index) MONTH) > SYSDATE()-INTERVAL 6 MONTH	
+)
+
+SELECT 
+	cm.month,
+	case 
+		when _clt.med_change_lead_time is null then 0 
+		else _clt.med_change_lead_time/60 end as med_change_lead_time_in_hour
+FROM 
+	_calendar_months cm
+	left join _clt on cm.month = _clt.month
+ORDER BY 1
+```
+
+If you want to measure in which category your team falls into as the picture shown below, run the following SQL in Grafana.
+
+![](/img/Metrics/lead-time-for-changes-text.jpeg)
+
+```
+with _pr_stats as (
+-- get PRs' cycle time in the selected period
+	SELECT
+		pr.id,
+		pr.change_timespan as pr_cycle_time
+	FROM
+		pull_requests pr
+	WHERE
+		pr.merged_date is not null
+		and pr.change_timespan is not null
+		and $__timeFilter(pr.merged_date)
+),
+
+_median_change_lead_time as (
+-- use median PR cycle time as the median change lead time
+	SELECT x.pr_cycle_time as median_change_lead_time from _pr_stats x, _pr_stats y
+	GROUP BY x.pr_cycle_time
+	HAVING SUM(SIGN(1-SIGN(y.pr_cycle_time-x.pr_cycle_time)))/COUNT(*) > 0.5
+	LIMIT 1
+)
+
+SELECT 
+  CASE
+    WHEN median_change_lead_time < 60 then "Less than one hour"
+    WHEN median_change_lead_time < 7 * 24 * 60 then "Less than one week"
+    WHEN median_change_lead_time < 180 * 24 * 60 then "Between one week and six months"
+    ELSE "More than six months"
+    END as median_change_lead_time
+FROM _median_change_lead_time
+```
+
+## How to improve?
+- Break requirements into smaller, more manageable deliverables
+- Optimize the code review process
+- "Shift left", start QA early and introduce more automated tests
+- Integrate CI/CD tools to automate the deployment process
diff --git a/versioned_docs/version-v0.15/Metrics/MTTR.md b/versioned_docs/version-v0.15/Metrics/MTTR.md
new file mode 100644
index 0000000000..aa5b3e0d1c
--- /dev/null
+++ b/versioned_docs/version-v0.15/Metrics/MTTR.md
@@ -0,0 +1,159 @@
+---
+title: "DORA - Median Time to Restore Service"
+description: >
+  DORA - Median Time to Restore Service
+sidebar_position: 28
+---
+
+## What is this metric? 
+The time it takes to restore service after a service incident, rollback, or any other type of production failure.
+
+## Why is it important?
+This metric is essential to measure the disaster control capability of your team and the robustness of the software.
+
+## Which dashboard(s) does it exist in
+DORA dashboard. See [live demo](https://grafana-lake.demo.devlake.io/grafana/d/qNo8_0M4z/dora?orgId=1).
+
+
+## How is it calculated?
+MTTR = Total [incident age](./IncidentAge.md) (in hours) / number of incidents.
+
+If you have three incidents in the given data range, lasting 1 hour, 2 hours and 3 hours respectively, your MTTR will be: (1 + 2 + 3) / 3 = 2 hours.
+
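+A minimal sketch of this average (using the same `issues` table and pre-calculated `lead_time_minutes` field as the full queries below) could be:
+
+```
+-- a minimal sketch: mean time to restore, in hours
+SELECT avg(lead_time_minutes)/60 AS mean_time_to_restore_in_hours
+FROM issues
+WHERE type = 'INCIDENT'
+  AND $__timeFilter(created_date)
+```
+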
+Below are the benchmarks for different development teams from Google's report. However, it's difficult to tell which group a team falls into when the team's median time to restore service is `between one week and six months`. Therefore, DevLake provides its own benchmarks to address this problem:
+
+| Groups           | Benchmarks                           | DevLake Benchmarks             |
+| -----------------| -------------------------------------| -------------------------------|
+| Elite performers | Less than one hour                   | Less than one hour             |
+| High performers  | Less than one day                    | Less than one day              |
+| Medium performers| Between one day and one week         | Between one day and one week   |
+| Low performers   | More than six months                 | More than one week             |
+
+<p><i>Source: 2021 Accelerate State of DevOps, Google</i></p>
+
+<b>Data Sources Required</b>
+
+This metric relies on:
+- `Deployments` collected in one of the following ways:
+  - Open APIs of Jenkins, GitLab, GitHub, etc.
+  - Webhook for general CI tools.
+  - Releases and PR/MRs from GitHub, GitLab APIs, etc.
+- `Incidents` collected in one of the following ways:
+  - Issue tracking tools such as Jira, TAPD, GitHub, etc.
+  - Bug or Service Monitoring tools such as PagerDuty, Sentry, etc.
+  - CI pipelines that mark 'failed' deployments.
+
+<b>Transformation Rules Required</b>
+
+This metric relies on:
+- Deployment configuration in Jenkins, GitLab or GitHub transformation rules to let DevLake know what CI builds/jobs can be regarded as `Deployments`.
+- Incident configuration in Jira, GitHub or TAPD transformation rules to let DevLake know what issues can be regarded as `Incidents`.
+
+<b>SQL Queries</b>
+
+If you want to measure the monthly trend of median time to restore service as the picture shown below, run the following SQL in Grafana.
+
+![](/img/Metrics/mttr-monthly.jpeg)
+
+```
+with _incidents as (
+-- get the incident count each month
+	SELECT
+		date_format(created_date,'%y/%m') as month,
+		cast(lead_time_minutes as signed) as lead_time_minutes
+	FROM
+		issues
+	WHERE
+		type = 'INCIDENT'
+),
+
+_find_median_mttr_each_month as (
+	SELECT 
+		x.*
+	from _incidents x join _incidents y on x.month = y.month
+	WHERE x.lead_time_minutes is not null and y.lead_time_minutes is not null
+	GROUP BY x.month, x.lead_time_minutes
+	HAVING SUM(SIGN(1-SIGN(y.lead_time_minutes-x.lead_time_minutes)))/COUNT(*) > 0.5
+),
+
+_find_mttr_rank_each_month as (
+	SELECT
+		*,
+		rank() over(PARTITION BY month ORDER BY lead_time_minutes) as _rank 
+	FROM
+		_find_median_mttr_each_month
+),
+
+_mttr as (
+	SELECT
+		month,
+		lead_time_minutes as med_time_to_resolve
+	from _find_mttr_rank_each_month
+	WHERE _rank = 1
+),
+
+_calendar_months as(
+-- deal with the month with no incidents
+	SELECT date_format(CAST((SYSDATE()-INTERVAL (month_index) MONTH) AS date), '%y/%m') as month
+	FROM ( SELECT 0 month_index
+			UNION ALL SELECT   1  UNION ALL SELECT   2 UNION ALL SELECT   3
+			UNION ALL SELECT   4  UNION ALL SELECT   5 UNION ALL SELECT   6
+			UNION ALL SELECT   7  UNION ALL SELECT   8 UNION ALL SELECT   9
+			UNION ALL SELECT   10 UNION ALL SELECT  11
+		) month_index
+	WHERE (SYSDATE()-INTERVAL (month_index) MONTH) > SYSDATE()-INTERVAL 6 MONTH	
+)
+
+SELECT 
+	cm.month,
+	case 
+		when m.med_time_to_resolve is null then 0 
+		else m.med_time_to_resolve/60 end as med_time_to_resolve_in_hour
+FROM 
+	_calendar_months cm
+	left join _mttr m on cm.month = m.month
+ORDER BY 1
+```
+
+If you want to measure in which category your team falls into as the picture shown below, run the following SQL in Grafana.
+
+![](/img/Metrics/mttr-text.jpeg)
+
+``` 
+with _incidents as (
+-- get the incidents created within the selected time period in the top-right corner
+	SELECT
+		cast(lead_time_minutes as signed) as lead_time_minutes
+	FROM
+		issues
+	WHERE
+		type = 'INCIDENT'
+		and $__timeFilter(created_date)
+),
+
+_median_mttr as (
+	SELECT 
+		x.lead_time_minutes as med_time_to_resolve
+	from _incidents x, _incidents y
+	WHERE x.lead_time_minutes is not null and y.lead_time_minutes is not null
+	GROUP BY x.lead_time_minutes
+	HAVING SUM(SIGN(1-SIGN(y.lead_time_minutes-x.lead_time_minutes)))/COUNT(*) > 0.5
+	LIMIT 1
+)
+
+SELECT 
+	case
+		WHEN med_time_to_resolve < 60  then "Less than one hour"
+    WHEN med_time_to_resolve < 24 * 60 then "Less than one Day"
+    WHEN med_time_to_resolve < 7 * 24 * 60  then "Between one day and one week"
+    ELSE "More than one week"
+    END as med_time_to_resolve
+FROM 
+	_median_mttr
+```
+
+## How to improve?
+- Use automated tools to detect and report failures quickly
+- Prioritize recovery when a failure happens
+- Establish a go-to action plan to respond to failures immediately
+- Reduce the time it takes to deploy failure fixes
diff --git a/docs/Metrics/PRCodingTime.md b/versioned_docs/version-v0.15/Metrics/PRCodingTime.md
similarity index 84%
copy from docs/Metrics/PRCodingTime.md
copy to versioned_docs/version-v0.15/Metrics/PRCodingTime.md
index f9fca08899..7f0ac87f9e 100644
--- a/docs/Metrics/PRCodingTime.md
+++ b/versioned_docs/version-v0.15/Metrics/PRCodingTime.md
@@ -12,8 +12,8 @@ The time it takes from the first commit until a PR is issued.
 It is recommended that you keep every task on a workable and manageable scale for a reasonably short amount of coding time. The average coding time of most engineering teams is around 3-4 days.
 
 ## Which dashboard(s) does it exist in?
-- [Engineering Throughput and Cycle Time](../../../livedemo/EngineeringLeads/EngineeringThroughputAndCycleTime)
-- [Engineering Throughput and Cycle Time - Team View](../../../livedemo/EngineeringLeads/EngineeringThroughputAndCycleTimeTeamView)
+- [Engineering Throughput and Cycle Time](/livedemo/EngineeringLeads/EngineeringThroughputAndCycleTime)
+- [Engineering Throughput and Cycle Time - Team View](/livedemo/EngineeringLeads/EngineeringThroughputAndCycleTimeTeamView)
 
 
 ## How is it calculated?
diff --git a/docs/Metrics/PRCount.md b/versioned_docs/version-v0.15/Metrics/PRCount.md
similarity index 86%
copy from docs/Metrics/PRCount.md
copy to versioned_docs/version-v0.15/Metrics/PRCount.md
index 367fb8be30..cbef92826c 100644
--- a/docs/Metrics/PRCount.md
+++ b/versioned_docs/version-v0.15/Metrics/PRCount.md
@@ -14,11 +14,11 @@ The number of pull requests (eg. GitHub PRs, Bitbucket PRs, GitLab MRs) created.
 3. Identify modules with low-quality risks in advance, optimize practices, and precipitate into reusable knowledge and tools to avoid technical debt accumulation
 
 ## Which dashboard(s) does it exist in
-- [GitHub](../../../livedemo/DataSources/GitHub)
-- [GitLab](../../../livedemo/DataSources/GitLab)
-- [Weekly Community Retro](../../../livedemo/OSSMaintainers/WeeklyCommunityRetro)
-- [Engineering Throughput and Cycle Time](../../../livedemo/EngineeringLeads/EngineeringThroughputAndCycleTime)
-- [Engineering Throughput and Cycle Time - Team View](../../../livedemo/EngineeringLeads/EngineeringThroughputAndCycleTimeTeamView)
+- [GitHub](/livedemo/DataSources/GitHub)
+- [GitLab](/livedemo/DataSources/GitLab)
+- [Weekly Community Retro](/livedemo/OSSMaintainers/WeeklyCommunityRetro)
+- [Engineering Throughput and Cycle Time](/livedemo/EngineeringLeads/EngineeringThroughputAndCycleTime)
+- [Engineering Throughput and Cycle Time - Team View](/livedemo/EngineeringLeads/EngineeringThroughputAndCycleTimeTeamView)
 
 
 ## How is it calculated?
diff --git a/docs/Metrics/PRCycleTime.md b/versioned_docs/version-v0.15/Metrics/PRCycleTime.md
similarity index 86%
copy from docs/Metrics/PRCycleTime.md
copy to versioned_docs/version-v0.15/Metrics/PRCycleTime.md
index 3b61a7e3f8..46c7f0cc61 100644
--- a/docs/Metrics/PRCycleTime.md
+++ b/versioned_docs/version-v0.15/Metrics/PRCycleTime.md
@@ -12,8 +12,8 @@ PR Cycle Time is the sum of PR Coding Time, Pickup Time, Review Time and Deploy 
 PR Cycle Time indicates the overall velocity of the delivery progress in terms of PR. 
 
 ## Which dashboard(s) does it exist in?
-- [Engineering Throughput and Cycle Time](../../../livedemo/EngineeringLeads/EngineeringThroughputAndCycleTime)
-- [Engineering Throughput and Cycle Time - Team View](../../../livedemo/EngineeringLeads/EngineeringThroughputAndCycleTimeTeamView)
+- [Engineering Throughput and Cycle Time](/livedemo/EngineeringLeads/EngineeringThroughputAndCycleTime)
+- [Engineering Throughput and Cycle Time - Team View](/livedemo/EngineeringLeads/EngineeringThroughputAndCycleTimeTeamView)
 
 
 ## How is it calculated?
diff --git a/docs/Metrics/PRDeployTime.md b/versioned_docs/version-v0.15/Metrics/PRDeployTime.md
similarity index 90%
copy from docs/Metrics/PRDeployTime.md
copy to versioned_docs/version-v0.15/Metrics/PRDeployTime.md
index ca3046bf1e..077535bfe2 100644
--- a/docs/Metrics/PRDeployTime.md
+++ b/versioned_docs/version-v0.15/Metrics/PRDeployTime.md
@@ -13,8 +13,8 @@ The time it takes from when a PR is merged to when it is deployed.
 2. Evaluate whether the delivery capacity matches the business phase and demand scale. Identify key bottlenecks and reasonably allocate resources.
 
 ## Which dashboard(s) does it exist in?
-- [Engineering Throughput and Cycle Time](../../../livedemo/EngineeringLeads/EngineeringThroughputAndCycleTime)
-- [Engineering Throughput and Cycle Time - Team View](../../../livedemo/EngineeringLeads/EngineeringThroughputAndCycleTimeTeamView)
+- [Engineering Throughput and Cycle Time](/livedemo/EngineeringLeads/EngineeringThroughputAndCycleTime)
+- [Engineering Throughput and Cycle Time - Team View](/livedemo/EngineeringLeads/EngineeringThroughputAndCycleTimeTeamView)
 
 ## How is it calculated?
 `PR deploy time` is calculated by subtracting a PR's merged_date from its deployed_date. Hence, we should associate PR/MRs with deployments.
diff --git a/docs/Metrics/PRMergeRate.md b/versioned_docs/version-v0.15/Metrics/PRMergeRate.md
similarity index 88%
copy from docs/Metrics/PRMergeRate.md
copy to versioned_docs/version-v0.15/Metrics/PRMergeRate.md
index 9fa6cb029a..af4e178460 100644
--- a/docs/Metrics/PRMergeRate.md
+++ b/versioned_docs/version-v0.15/Metrics/PRMergeRate.md
@@ -14,11 +14,11 @@ The ratio of PRs/MRs that get merged.
 3. Identify modules with low-quality risks in advance, optimize practices, and precipitate into reusable knowledge and tools to avoid technical debt accumulation
 
 ## Which dashboard(s) does it exist in
-- [GitHub](../../../livedemo/DataSources/GitHub)
-- [GitLab](../../../livedemo/DataSources/GitLab)
-- [Weekly Community Retro](../../../livedemo/OSSMaintainers/WeeklyCommunityRetro)
-- [Engineering Throughput and Cycle Time](../../../livedemo/EngineeringLeads/EngineeringThroughputAndCycleTime)
-- [Engineering Throughput and Cycle Time - Team View](../../../livedemo/EngineeringLeads/EngineeringThroughputAndCycleTimeTeamView)
+- [GitHub](/livedemo/DataSources/GitHub)
+- [GitLab](/livedemo/DataSources/GitLab)
+- [Weekly Community Retro](/livedemo/OSSMaintainers/WeeklyCommunityRetro)
+- [Engineering Throughput and Cycle Time](/livedemo/EngineeringLeads/EngineeringThroughputAndCycleTime)
+- [Engineering Throughput and Cycle Time - Team View](/livedemo/EngineeringLeads/EngineeringThroughputAndCycleTimeTeamView)
 
 
 ## How is it calculated?
diff --git a/docs/Metrics/PRPickupTime.md b/versioned_docs/version-v0.15/Metrics/PRPickupTime.md
similarity index 85%
copy from docs/Metrics/PRPickupTime.md
copy to versioned_docs/version-v0.15/Metrics/PRPickupTime.md
index d22f77714d..d33a9e46db 100644
--- a/docs/Metrics/PRPickupTime.md
+++ b/versioned_docs/version-v0.15/Metrics/PRPickupTime.md
@@ -12,8 +12,8 @@ The time it takes from when a PR is issued until the first comment is added to t
 PR Pickup Time shows how engaged your team is in collaborative work by identifying the delay in picking up PRs. 
 
 ## Which dashboard(s) does it exist in?
-- [Engineering Throughput and Cycle Time](../../../livedemo/EngineeringLeads/EngineeringThroughputAndCycleTime)
-- [Engineering Throughput and Cycle Time - Team View](../../../livedemo/EngineeringLeads/EngineeringThroughputAndCycleTimeTeamView)
+- [Engineering Throughput and Cycle Time](/livedemo/EngineeringLeads/EngineeringThroughputAndCycleTime)
+- [Engineering Throughput and Cycle Time - Team View](/livedemo/EngineeringLeads/EngineeringThroughputAndCycleTimeTeamView)
 
 
 ## How is it calculated?
diff --git a/docs/Metrics/PRReviewDepth.md b/versioned_docs/version-v0.15/Metrics/PRReviewDepth.md
similarity index 84%
copy from docs/Metrics/PRReviewDepth.md
copy to versioned_docs/version-v0.15/Metrics/PRReviewDepth.md
index 7c8c2cc529..4f6a637071 100644
--- a/docs/Metrics/PRReviewDepth.md
+++ b/versioned_docs/version-v0.15/Metrics/PRReviewDepth.md
@@ -12,8 +12,8 @@ The average number of comments of PRs in the selected time range.
 PR Review Depth (in Comments per PR) is related to the quality of code review, indicating how thoroughly your team reviews PRs.
 
 ## Which dashboard(s) does it exist in?
-- [Engineering Throughput and Cycle Time](../../../livedemo/EngineeringLeads/EngineeringThroughputAndCycleTime)
-- [Engineering Throughput and Cycle Time - Team View](../../../livedemo/EngineeringLeads/EngineeringThroughputAndCycleTimeTeamView)
+- [Engineering Throughput and Cycle Time](/livedemo/EngineeringLeads/EngineeringThroughputAndCycleTime)
+- [Engineering Throughput and Cycle Time - Team View](/livedemo/EngineeringLeads/EngineeringThroughputAndCycleTimeTeamView)
 
 ## How is it calculated?
 This metric is calculated by counting the total number of PR comments divided by the total number of PRs in the selected time range.
diff --git a/docs/Metrics/PRReviewTime.md b/versioned_docs/version-v0.15/Metrics/PRReviewTime.md
similarity index 84%
copy from docs/Metrics/PRReviewTime.md
copy to versioned_docs/version-v0.15/Metrics/PRReviewTime.md
index 5754d2555e..e7075db7b2 100644
--- a/docs/Metrics/PRReviewTime.md
+++ b/versioned_docs/version-v0.15/Metrics/PRReviewTime.md
@@ -14,8 +14,8 @@ Code review should be conducted almost in real-time and usually take less than t
 2. The team is too busy to review code.
 
 ## Which dashboard(s) does it exist in?
-- [Engineering Throughput and Cycle Time](../../../livedemo/EngineeringLeads/EngineeringThroughputAndCycleTime)
-- [Engineering Throughput and Cycle Time - Team View](../../../livedemo/EngineeringLeads/EngineeringThroughputAndCycleTimeTeamView)
+- [Engineering Throughput and Cycle Time](/livedemo/EngineeringLeads/EngineeringThroughputAndCycleTime)
+- [Engineering Throughput and Cycle Time - Team View](/livedemo/EngineeringLeads/EngineeringThroughputAndCycleTimeTeamView)
 
 
 ## How is it calculated?
diff --git a/docs/Metrics/PRSize.md b/versioned_docs/version-v0.15/Metrics/PRSize.md
similarity index 87%
copy from docs/Metrics/PRSize.md
copy to versioned_docs/version-v0.15/Metrics/PRSize.md
index 8e898bdd44..3e24baecc2 100644
--- a/docs/Metrics/PRSize.md
+++ b/versioned_docs/version-v0.15/Metrics/PRSize.md
@@ -12,8 +12,8 @@ The average code changes (in Lines of Code) of PRs in the selected time range.
 Small PRs can reduce the risk of introducing new bugs and increase code review quality, as problems may often be hidden in big chunks of code and difficult to identify.
 
 ## Which dashboard(s) does it exist in?
-- [Engineering Throughput and Cycle Time](../../../livedemo/EngineeringLeads/EngineeringThroughputAndCycleTime)
-- [Engineering Throughput and Cycle Time - Team View](../../../livedemo/EngineeringLeads/EngineeringThroughputAndCycleTimeTeamView)
+- [Engineering Throughput and Cycle Time](/livedemo/EngineeringLeads/EngineeringThroughputAndCycleTime)
+- [Engineering Throughput and Cycle Time - Team View](/livedemo/EngineeringLeads/EngineeringThroughputAndCycleTimeTeamView)
 
 
 ## How is it calculated?
diff --git a/docs/Metrics/PRTimeToMerge.md b/versioned_docs/version-v0.15/Metrics/PRTimeToMerge.md
similarity index 94%
copy from docs/Metrics/PRTimeToMerge.md
copy to versioned_docs/version-v0.15/Metrics/PRTimeToMerge.md
index c1bcbeeda1..5a83db129e 100644
--- a/docs/Metrics/PRTimeToMerge.md
+++ b/versioned_docs/version-v0.15/Metrics/PRTimeToMerge.md
@@ -12,8 +12,8 @@ The time it takes from when a PR is issued to when it is merged. Essentially, PR
 The delay in reviewing and waiting to review PRs has a large impact on delivery speed, while a reasonably short PR Time to Merge can indicate frictionless teamwork. Improving this metric is key to reducing PR cycle time.
 
 ## Which dashboard(s) does it exist in?
-- [GitHub](../../../livedemo/DataSources/GitHub)
-- [Weekly Community Retro](../../../livedemo/OSSMaintainers/WeeklyCommunityRetro)
+- [GitHub](/livedemo/DataSources/GitHub)
+- [Weekly Community Retro](/livedemo/OSSMaintainers/WeeklyCommunityRetro)
 
 
 ## How is it calculated?
diff --git a/versioned_docs/version-v0.15/Metrics/RequirementCount.md b/versioned_docs/version-v0.15/Metrics/RequirementCount.md
new file mode 100644
index 0000000000..f8ea398658
--- /dev/null
+++ b/versioned_docs/version-v0.15/Metrics/RequirementCount.md
@@ -0,0 +1,72 @@
+---
+title: "Requirement Count"
+description: >
+  Requirement Count
+sidebar_position: 1
+---
+
+## What is this metric? 
+The number of delivered requirements or features.
+
+## Why is it important?
+1. Based on historical data, establish a baseline of the delivery capacity of a single iteration to improve the organization and planning of R&D resources.
+2. Evaluate whether the delivery capacity matches the business phase and demand scale. Identify key bottlenecks and reasonably allocate resources.
+
+## Which dashboard(s) does it exist in?
+- [Jira](https://devlake.apache.org/livedemo/DataSources/Jira)
+- [GitHub](https://devlake.apache.org/livedemo/DataSources/GitHub)
+
+
+## How is it calculated?
+This metric is calculated by counting the number of delivered issues of type "REQUIREMENT" in the given time range.
+
+<b>Data Sources Required</b>
+
+This metric relies on the `issues` collected from Jira, GitHub, or TAPD.
+
+<b>Data Transformation Required</b>
+
+This metric relies on the 'type-requirement' configuration in Jira, GitHub or TAPD's transformation rules while adding/editing a blueprint. This configuration tells DevLake what issues are `requirements`.
+
+<b>SQL Queries</b>
+
+The following SQL shows how to find the total count of requirements in specific boards, e.g. 'board-1' and 'board-2'.
+
+```
+select 
+  count(*) as "Requirement Count"
+from issues i
+  join board_issues bi on i.id = bi.issue_id
+where 
+  i.type = 'REQUIREMENT'
+  and i.status = 'DONE'
+  -- please replace the board ids with your own, or create a '$board_id' variable in Grafana
+  and bi.board_id in ('board-1','board-2')
+  and $__timeFilter(i.created_date)
+```
+
+If you want to see the monthly trend of `requirement count`, as shown in the screenshot below, please run the following SQL:
+
+![](/img/Metrics/requirement-count-monthly.png)
+
+```
+SELECT
+  DATE_ADD(date(i.created_date), INTERVAL -DAYOFMONTH(date(i.created_date))+1 DAY) as time,
+  count(distinct case when status != 'DONE' then i.id else null end) as "Number of Open Requirements",
+  count(distinct case when status = 'DONE' then i.id else null end) as "Number of Delivered Requirements"
+FROM issues i
+  join board_issues bi on i.id = bi.issue_id
+  join boards b on bi.board_id = b.id
+WHERE
+  i.type = 'REQUIREMENT'
+  and $__timeFilter(i.created_date)
+  -- please replace the board ids with your own, or create a '$board_id' variable in Grafana
+  and bi.board_id in ($board_id)
+GROUP BY 1
+
+## How to improve?
+1. Analyze the requirement count and delivery rate across different time periods to assess the stability and trend of the development process.
+2. Analyze and compare the requirement count and delivery rate of each project/team, and compare the scale of requirements across projects.
+3. Based on historical data, establish a baseline of the delivery capacity of a single iteration (optimistic, probable and pessimistic values) to provide a reference for iteration estimation.
+4. Drill down to analyze the number and percentage of requirements in different phases of the SDLC. Assess whether the distribution is reasonable and identify requirements stuck in the backlog.
diff --git a/versioned_docs/version-v0.15/Metrics/RequirementDeliveryRate.md b/versioned_docs/version-v0.15/Metrics/RequirementDeliveryRate.md
new file mode 100644
index 0000000000..1c1b245eb8
--- /dev/null
+++ b/versioned_docs/version-v0.15/Metrics/RequirementDeliveryRate.md
@@ -0,0 +1,88 @@
+---
+title: "Requirement Delivery Rate"
+description: >
+  Requirement Delivery Rate
+sidebar_position: 3
+---
+
+## What is this metric? 
+The ratio of delivered requirements to all requirements.
+
+## Why is it important?
+1. Based on historical data, establish a baseline of the delivery capacity of a single iteration to improve the organization and planning of R&D resources.
+2. Evaluate whether the delivery capacity matches the business phase and demand scale. Identify key bottlenecks and reasonably allocate resources.
+
+## Which dashboard(s) does it exist in?
+- [Jira](https://devlake.apache.org/livedemo/DataSources/Jira)
+- [GitHub](https://devlake.apache.org/livedemo/DataSources/GitHub)
+
+
+## How is it calculated?
+The number of delivered requirements divided by the total number of requirements in the given time range.
+
+<b>Data Sources Required</b>
+
+This metric relies on the `issues` collected from Jira, GitHub, or TAPD.
+
+<b>Data Transformation Required</b>
+
+This metric relies on the 'type-requirement' configuration in Jira, GitHub or TAPD's transformation rules while adding/editing a blueprint. This configuration tells DevLake what issues are `requirements`.
+
+<b>SQL Queries</b>
+
+The following SQL shows how to find the `requirement delivery rate` in specific boards, e.g. 'board-1' and 'board-2'.
+
+![](/img/Metrics/requirement-delivery-rate-text.png)
+
+```
+WITH _requirements as(
+  SELECT
+    count(distinct i.id) as total_count,
+    count(distinct case when i.status = 'DONE' then i.id else null end) as delivered_count
+  FROM issues i
+    join board_issues bi on i.id = bi.issue_id
+  WHERE 
+    i.type = 'REQUIREMENT'
+    and $__timeFilter(i.created_date)
+    -- please replace the board ids with your own, or create a '$board_id' variable in Grafana
+    and bi.board_id in ('board-1', 'board-2')
+)
+
+SELECT 
+  now() as time,
+  1.0 * delivered_count/total_count as requirement_delivery_rate
+FROM _requirements
+```
+
+If you want to measure the monthly trend of `requirement delivery rate`, as shown in the screenshot below, please run the following SQL in Grafana:
+
+![](/img/Metrics/requirement-delivery-rate-monthly.png)
+
+```
+WITH _requirements as(
+  SELECT
+    DATE_ADD(date(i.created_date), INTERVAL -DAYOFMONTH(date(i.created_date))+1 DAY) as time,
+    1.0 * count(distinct case when i.status = 'DONE' then i.id else null end)/count(distinct i.id) as delivered_rate
+  FROM issues i
+    join board_issues bi on i.id = bi.issue_id
+  WHERE 
+    i.type = 'REQUIREMENT'
+    and $__timeFilter(i.created_date)
+    -- please replace the board ids with your own, or create a '$board_id' variable in Grafana
+    and bi.board_id in ($board_id)
+  GROUP BY 1
+)
+
+SELECT
+  time,
+  delivered_rate
+FROM _requirements
+ORDER BY time
+```
+
+
+## How to improve?
+1. Analyze the requirement count and delivery rate across different time periods to assess the stability and trend of the development process.
+2. Analyze and compare the requirement count and delivery rate of each project/team, and compare the scale of requirements across projects.
+3. Based on historical data, establish a baseline of the delivery capacity of a single iteration (optimistic, probable and pessimistic values) to provide a reference for iteration estimation.
+4. Drill down to analyze the number and percentage of requirements in different phases of the SDLC. Assess whether the distribution is reasonable and identify requirements stuck in the backlog.
diff --git a/versioned_docs/version-v0.15/Metrics/RequirementGranularity.md b/versioned_docs/version-v0.15/Metrics/RequirementGranularity.md
new file mode 100644
index 0000000000..9747660219
--- /dev/null
+++ b/versioned_docs/version-v0.15/Metrics/RequirementGranularity.md
@@ -0,0 +1,36 @@
+---
+title: "Requirement Granularity"
+description: >
+  Requirement Granularity
+sidebar_position: 4
+---
+
+## What is this metric? 
+The average number of story points per requirement.
+
+## Why is it important?
+1. Encourage product teams to split requirements carefully, improve requirement quality, help developers understand requirements clearly, deliver efficiently and with high quality, and improve the team's project management capability.
+2. Establish a data-supported workload estimation model to help R&D teams calibrate their estimation methods and more accurately assess the granularity of requirements, which helps achieve better issue planning in project management.
+
+## Which dashboard(s) does it exist in?
+- [Jira](https://devlake.apache.org/livedemo/DataSources/Jira)
+- [GitHub](https://devlake.apache.org/livedemo/DataSources/GitHub)
+
+
+## How is it calculated?
+The average number of story points of issues of type "REQUIREMENT" in the given time range.
+
+<b>Data Sources Required</b>
+
+This metric relies on `issues` collected from Jira, GitHub, or TAPD.
+
+<b>Data Transformation Required</b>
+
+This metric relies on the 'type-requirement' configuration in Jira, GitHub or TAPD's transformation rules while adding/editing a blueprint. This configuration tells DevLake what issues are `requirements`.
+
+Besides, if you are importing Jira issues, you also need to configure the 'story_points' field in the transformation rules.
+
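+<b>SQL Queries</b>
+
+The following SQL is a rough sketch rather than an official query; it assumes the `story_point` field on the domain-layer `issues` table is populated by the transformation described above.
+
+```
+select
+  avg(i.story_point) as "Requirement Granularity"
+from issues i
+  join board_issues bi on i.id = bi.issue_id
+where
+  i.type = 'REQUIREMENT'
+  -- story_point is only populated when the transformation above is configured
+  and i.story_point is not null
+  -- please replace the board ids with your own, or create a '$board_id' variable in Grafana
+  and bi.board_id in ('board-1','board-2')
+```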
+
+## How to improve?
+1. Analyze the story points/requirement lead time of requirements to evaluate whether the ticket size, i.e. requirement complexity, is optimal.
+2. Compare the estimated requirement granularity with the actual situation and evaluate whether the difference is reasonable by combining more microscopic workload metrics (e.g. lines of code/code equivalents).
diff --git a/versioned_docs/version-v0.15/Metrics/RequirementLeadTime.md b/versioned_docs/version-v0.15/Metrics/RequirementLeadTime.md
new file mode 100644
index 0000000000..96c64dd6a6
--- /dev/null
+++ b/versioned_docs/version-v0.15/Metrics/RequirementLeadTime.md
@@ -0,0 +1,79 @@
+---
+title: "Requirement Lead Time"
+description: >
+  Requirement Lead Time
+sidebar_position: 2
+---
+
+## What is this metric? 
+The amount of time it takes to deliver a requirement.
+
+## Why is it important?
+1. Analyze key projects and critical points, identify good/to-be-improved practices that affect requirement lead time, and reduce the risk of delays.
+2. Focus on the end-to-end velocity of the value delivery process; coordinate different parts of R&D to avoid efficiency silos; make targeted improvements to bottlenecks.
+
+## Which dashboard(s) does it exist in?
+- [Jira](https://devlake.apache.org/livedemo/DataSources/Jira)
+- [GitHub](https://devlake.apache.org/livedemo/DataSources/GitHub)
+- [Community Experience](https://devlake.apache.org/livedemo/OSSMaintainers/CommunityExperience)
+
+
+## How is it calculated?
+This metric equals `resolution_date - created_date` of issues of type "REQUIREMENT".
+
+<b>Data Sources Required</b>
+
+This metric relies on issues collected from Jira, GitHub, or TAPD.
+
+<b>Data Transformation Required</b>
+
+This metric relies on the 'type-requirement' configuration in Jira, GitHub or TAPD's transformation rules while adding/editing a blueprint. This configuration tells DevLake what issues are `requirements`.
+
+<b>SQL Queries</b>
+
+The following SQL shows how to find the lead time of a specific `requirement`.
+```
+-- lead_time_minutes is a pre-calculated field whose value equals 'resolution_date - created_date'
+SELECT
+  lead_time_minutes/1440 as requirement_lead_time_in_days
+FROM
+  issues
+WHERE
+  type = 'REQUIREMENT'
+```
+
+
+If you want to measure the `mean requirement lead time`, as shown in the screenshot below, please run the following SQL in Grafana:
+
+![](/img/Metrics/requirement-lead-time-monthly.png)
+
+```
+with _issues as(
+  SELECT
+    DATE_ADD(date(i.resolution_date), INTERVAL -DAY(date(i.resolution_date))+1 DAY) as time,
+    AVG(i.lead_time_minutes/1440) as issue_lead_time
+  FROM issues i
+    join board_issues bi on i.id = bi.issue_id
+    join boards b on bi.board_id = b.id
+  WHERE
+    -- $board_id is a variable defined in Grafana's dashboard settings to filter issues by board
+    b.id in ($board_id)
+    and i.type = 'REQUIREMENT'
+    and i.status = 'DONE'
+    and $__timeFilter(i.resolution_date)
+    -- the following condition will remove the month with incomplete data
+    and i.resolution_date >= DATE_ADD(DATE_ADD($__timeFrom(), INTERVAL -DAY($__timeFrom())+1 DAY), INTERVAL +1 MONTH)
+  group by 1
+)
+
+SELECT 
+  date_format(time,'%M %Y') as month,
+  issue_lead_time as "Mean Requirement Lead Time in Days"
+FROM _issues
+ORDER BY time
+```
+
+## How to improve?
+1. Analyze the trend of requirement lead time to observe if it has improved over time.
+2. Compare the requirement lead time of each project/team to identify key projects with abnormal lead time.
+3. Drill down to analyze a requirement's time spent in different phases of the SDLC. Identify the bottlenecks in delivery velocity and improve the workflow.
\ No newline at end of file
diff --git a/versioned_docs/version-v0.15/Metrics/_category_.json b/versioned_docs/version-v0.15/Metrics/_category_.json
new file mode 100644
index 0000000000..e944147d52
--- /dev/null
+++ b/versioned_docs/version-v0.15/Metrics/_category_.json
@@ -0,0 +1,8 @@
+{
+  "label": "Metrics",
+  "position": 5,
+  "link":{
+    "type": "generated-index",
+    "slug": "Metrics"
+  }
+}
diff --git a/versioned_docs/version-v0.15/Overview/Architecture.md b/versioned_docs/version-v0.15/Overview/Architecture.md
new file mode 100755
index 0000000000..66c9fc984a
--- /dev/null
+++ b/versioned_docs/version-v0.15/Overview/Architecture.md
@@ -0,0 +1,39 @@
+---
+title: "Architecture"
+description: >
+  Understand the architecture of Apache DevLake
+sidebar_position: 2
+---
+
+## Overview
+
+<p align="center"><img src="/img/Architecture/arch-component.svg" /></p>
+<p align="center">DevLake Components</p>
+
+A DevLake installation typically consists of the following components:
+
+- Config UI: A handy user interface to create, trigger, and debug Blueprints. A Blueprint specifies the where (data connection), what (data scope), how (transformation rule), and when (sync frequency) of a data pipeline.
+- API Server: The main programmatic interface of DevLake.
+- Runner: The runner does all the heavy lifting for executing tasks. In the default DevLake installation, it runs within the API Server, but DevLake also provides a Temporal-based runner (beta) for production environments.
+- Database: The database stores both DevLake's metadata and user data collected by data pipelines. DevLake supports MySQL and PostgreSQL as of v0.11.
+- Plugins: Plugins enable DevLake to collect and analyze dev data from any DevOps tool with an accessible API. The DevLake community is actively adding plugins for popular DevOps tools, but if your preferred tool is not covered yet, feel free to open a GitHub issue to let us know, or check out our doc on how to build a new plugin by yourself.
+- Dashboards: Dashboards deliver data and insights to DevLake users. A dashboard is simply a collection of SQL queries along with corresponding visualization configurations. DevLake's official dashboard tool is Grafana, and pre-built dashboards are shipped in Grafana's JSON format. Users are welcome to swap in their own choice of dashboard/BI tool if desired.
+
+## Dataflow
+
+<p align="center"><img src="/img/Architecture/arch-dataflow.svg" /></p>
+<p align="center">DevLake Dataflow</p>
+
+A typical plugin's dataflow is illustrated below:
+
+1. The Raw layer stores the API responses from data sources (DevOps tools) in JSON. This saves developers' time if the raw data is to be transformed differently later on. Please note that communicating with data sources' APIs is usually the most time-consuming step.
+2. The Tool layer extracts raw data from the JSON into a relational schema that's easier for analytical tasks to consume. Each DevOps tool has a schema tailored to its data structure, hence the name, the Tool layer (see the sketch after this list).
+3. The Domain layer attempts to build a layer of abstraction on top of the Tool layer so that analytics logics can be re-used across different tools. For example, GitHub's Pull Request (PR) and GitLab's Merge Request (MR) are similar entities. They each have their own table name and schema in the Tool layer, but they're consolidated into a single entity in the Domain layer, so that developers only need to implement metrics like Cycle Time and Code Review Rounds once against the domain la [...]
+
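+As a hedged illustration of this layering (the table and column names below follow DevLake's naming conventions, e.g. the `_tool_` prefix, but may differ across plugins and versions), the same question can be asked of the Tool layer and the Domain layer:
+
+```
+-- Tool layer: a GitHub-specific table with GitHub-specific columns
+select count(*) from _tool_github_pull_requests where merged_at is not null;
+
+-- Domain layer: one consolidated entity covering GitHub PRs and GitLab MRs alike
+select count(*) from pull_requests where merged_date is not null;
+```
+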
+## Principles
+
+1. Extensible: DevLake's plugin system allows users to integrate with any DevOps tool. DevLake also provides a dbt plugin that enables users to define their own data transformation and analysis workflows.
+2. Portable: DevLake has a modular design and provides multiple options for each module. Users of different setups can freely choose the right configuration for themselves.
+3. Robust: DevLake provides an SDK to help plugins efficiently and reliably collect data from data sources while respecting their API rate limits and constraints.
+
+<br/>
diff --git a/versioned_docs/version-v0.15/Overview/Introduction.md b/versioned_docs/version-v0.15/Overview/Introduction.md
new file mode 100755
index 0000000000..6bb1194941
--- /dev/null
+++ b/versioned_docs/version-v0.15/Overview/Introduction.md
@@ -0,0 +1,39 @@
+---
+title: "Introduction"
+description: General introduction of Apache DevLake
+sidebar_position: 1
+---
+
+## What is Apache DevLake?
+Apache DevLake (Incubating) is an open-source dev data platform that ingests, analyzes, and visualizes the fragmented data from DevOps tools to extract insights for engineering excellence, developer experience, and community growth.
+
+Apache DevLake is designed for developer teams looking to make better sense of their development process and to bring a more data-driven approach to their own practices. You can ask Apache DevLake many questions regarding your development process. Just connect and query.
+
+## What can be accomplished with DevLake?
+1. Collect DevOps data across the entire Software Development Life Cycle (SDLC) and connect the siloed data with a standard [data model](../DataModels/DevLakeDomainLayerSchema.md).
+2. Visualize out-of-the-box [engineering metrics](../Metrics) in a series of use-case-driven dashboards.
+3. Easily extend DevLake to support your data sources, metrics, and dashboards with a flexible [framework](Architecture.md) for data collection and ETL (Extract, Transform, Load).
+
+## How do I use DevLake?
+### 1. Set up DevLake
+You can easily set up Apache DevLake by following our step-by-step instructions for [Docker Compose setup](../GettingStarted/DockerComposeSetup.md) or [Kubernetes setup](../GettingStarted/KubernetesSetup.md).
+
+### 2. Create a Blueprint
+The DevLake Configuration UI will guide you through the process of creating a Blueprint, where you define the data connections, data scope, transformation rules and sync frequency of the data you wish to collect.
+
+![img](/img/Introduction/userflow1.svg)
+
+### 3. Track the Blueprint's progress
+You can track the progress of the Blueprint you have just set up.
+
+![img](/img/Introduction/userflow2.svg)
+
+### 4. View the pre-built dashboards
+Once the first run of the Blueprint is completed, you can view the corresponding dashboards.
+
+![img](/img/Introduction/userflow3.png)
+
+### 5. Customize the dashboards with SQL
+If the pre-built dashboards don't cover your use cases, you can always customize existing dashboards or create your own metrics and dashboards with SQL.
+
+![img](/img/Introduction/userflow4.png)
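+
+For example, the hedged snippet below (assuming the domain-layer `pull_requests` table and Grafana's `$__timeFilter` macro, as used in the Metrics docs) would chart merged PRs per month in a new panel:
+
+```
+select
+  -- truncate each merged_date to the first day of its month
+  DATE_ADD(date(merged_date), INTERVAL -DAYOFMONTH(date(merged_date))+1 DAY) as time,
+  count(*) as "Merged PRs"
+from pull_requests
+where
+  merged_date is not null
+  and $__timeFilter(merged_date)
+group by 1
+order by 1
+```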
diff --git a/versioned_docs/version-v0.15/Overview/KeyConcepts.md b/versioned_docs/version-v0.15/Overview/KeyConcepts.md
new file mode 100644
index 0000000000..aa011c1ae6
--- /dev/null
+++ b/versioned_docs/version-v0.15/Overview/KeyConcepts.md
@@ -0,0 +1,110 @@
+---
+sidebar_position: 4
+title: "Key Concepts"
+linkTitle: "KeyConcepts"
+description: >
+  DevLake Key Concepts
+---
+
+*Last updated: May 16 2022*
+
+
+## In Configuration UI (Regular Mode)
+
+The following terms are arranged in the order of their appearance in the actual user workflow.
+
+### Projects
+**A project is a method to group data**. Apache DevLake allows users to view metrics based on projects. A `project` is associated with multiple sets of [Data Scope](#data-scope), such as GitHub/GitLab repositories, Jira boards, Jenkins pipelines, etc. Metrics for a project are calculated based on the [data entities](#data-entities) under the project's data scope.
+
+A project has one [Blueprint](#blueprints) for data collection and metric computation.
+
+For example, when a user associates 'Jenkins Job A' and 'Jira board B' with project 1, then ONLY `deployments` in 'Jenkins Job A' and `incidents` in 'Jira board B' will be used to calculate **Change Failure Rate** for project 1.
+
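+As a hedged sketch of this grouping (assuming the domain-layer `project_mapping` table, which records which rows of which scope tables belong to a project; column names may differ across versions):
+
+```
+-- list every piece of data scope associated with 'project-1'
+select `table`, row_id
+from project_mapping
+where project_name = 'project-1'
+```
+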
+### Blueprints
+**A blueprint is the plan that covers all the work to get your raw data ready for query and metric computation in the dashboards.** Creating a blueprint consists of four steps:
+1. **Adding [Data Connections](#data-connections)**: For each [data source](#data-sources), one or more data connections can be added to a single blueprint, depending on the data you want to sync to DevLake.
+2. **Setting the [Data Scope](#data-scope)**: For each data connection, you need to configure the scope of data, such as GitHub projects, Jira boards, and their corresponding [data entities](#data-entities).
+3. **Adding [Transformation Rules](#transformation-rules) (optional)**: You can optionally apply transformation rules to the data scope you have just selected, in order to view more advanced metrics.
+4. **Setting the Sync Frequency**: You can specify the sync frequency for your blueprint to achieve recurring data syncs and transformation. Alternatively, you can set the frequency to manual if you wish to run the tasks in the blueprint manually.
+
+The relationship among Blueprint, Data Connections, Data Scope and Transformation Rules is explained as follows:
+
+![Blueprint ERD](/img/Glossary/blueprint-erd.svg)
+- Each blueprint can have multiple data connections.
+- Each data connection can have multiple sets of data scope.
+- Each set of data scope only consists of one GitHub/GitLab project or Jira board, along with their corresponding data entities.
+- Each set of data scope can only have one set of transformation rules.
+
+### Data Sources
+**A data source is a specific DevOps tool from which you wish to sync your data, such as GitHub, GitLab, Jira and Jenkins.**
+
+DevLake normally uses one [data plugin](#data-plugins) to pull data for a single data source. However, in some cases, DevLake uses multiple data plugins for one data source to improve sync speed, among other advantages. For instance, when you pull data from GitHub or GitLab, aside from the GitHub or GitLab plugin, Git Extractor is also used to pull data from the repositories. In this case, DevLake still refers to GitHub or GitLab as a single data source.
+
+### Data Connections
+**A data connection is a specific instance of a data source that stores information such as `endpoint` and `auth`.** A single data source can have one or more data connections (e.g. two Jira instances). Currently, DevLake supports one data connection for GitHub, GitLab and Jenkins, and multiple connections for Jira.
+
+You can set up a new data connection either during the first step of creating a blueprint, or in the Connections page that can be accessed from the navigation bar. Because a single data connection can be reused in multiple blueprints, you can update the information of a particular data connection in Connections to ensure all its associated blueprints run properly. For example, you may want to update your GitHub token in a data connection when it expires.
+
+### Data Scope
+**In a blueprint, each data connection can have multiple sets of data scope configurations, including GitHub or GitLab projects, Jira boards and their corresponding [data entities](#data-entities).** The fields for data scope configuration vary according to different data sources.
+
+Each set of data scope refers to one GitHub or GitLab project, or one Jira board and the data entities you would like to sync for them, for the convenience of applying transformation in the next step. For instance, if you wish to sync 5 GitHub projects, you will have 5 sets of data scope for GitHub.
+
+To learn more about the default data scope of all data sources and data plugins, please refer to [Supported Data Sources](./SupportedDataSources.md).
+
+### Data Entities
+**Data entities refer to the data fields from one of the five data domains: Issue Tracking, Source Code Management, Code Review, CI/CD and Cross-Domain.**
+
+For instance, if you wish to pull Source Code Management data from GitHub and Issue Tracking data from Jira, you can check the corresponding data entities when setting the data scope of these two data connections.
+
+To learn more details, please refer to [Domain Layer Schema](/DataModels/DevLakeDomainLayerSchema.md).
+
+### Transformation Rules
+**Transformation rules are a collection of methods that allow you to customize how DevLake normalizes raw data for query and metric computation.** Each set of data scope is strictly accompanied by one set of transformation rules. However, for your convenience, transformation rules can also be duplicated across different sets of data scope.
+
+DevLake uses the values normalized by transformation rules to power more advanced dashboards, such as the Weekly Bug Retro dashboard. Although configuring transformation rules is not mandatory, if you leave the rules blank or do not configure them correctly, only the basic dashboards (e.g. GitHub Basic Metrics) will display as expected, while the advanced dashboards will not.
+
+### Historical Runs
+**A historical run of a blueprint is an actual execution of the data collection and transformation [tasks](#tasks) defined in the blueprint at its creation.** A list of historical runs of a blueprint is the entire running history of that blueprint, whether executed automatically or manually. Historical runs can be triggered in three ways:
+- By the blueprint automatically according to its schedule in the Regular Mode of the Configuration UI
+- By running the JSON in the Advanced Mode of the Configuration UI
+- By calling the API `/pipelines` endpoint manually
+
+However, the name Historical Runs is only used in the Configuration UI. In DevLake API, they are called [pipelines](#pipelines).
+
+## In Configuration UI (Advanced Mode) and API
+
+The following terms do not appear in the Regular Mode of the Configuration UI for simplicity, but they can be very useful if you want to learn about the underlying framework of DevLake or use the Advanced Mode and the DevLake API.
+
+### Data Plugins
+**A data plugin is a specific module that syncs or transforms data.** There are two types of data plugins: Data Collection Plugins and Data Transformation Plugins.
+
+Data Collection Plugins pull data from one or more data sources. DevLake supports 8 data plugins in this category: `ae`, `feishu`, `gitextractor`, `github`, `gitlab`, `jenkins`, `jira` and `tapd`.
+
+Data Transformation Plugins transform the data pulled by other Data Collection Plugins. `refdiff` is currently the only plugin in this category.
+
+Although the names of the data plugins are not displayed in the regular mode of DevLake Configuration UI, they can be used directly in JSON in the Advanced Mode.
+
+For detailed information about the relationship between data sources and data plugins, please refer to [Supported Data Sources](./SupportedDataSources.md).
+
+
+### Pipelines
+**A pipeline is an orchestration of [tasks](#tasks) of data `collection`, `extraction`, `conversion` and `enrichment`, defined in the DevLake API.** A pipeline is composed of one or multiple [stages](#stages) that are executed in sequential order. Any error occurring during the execution of any stage, task or subtask will cause the immediate failure of the pipeline.
+
+The composition of a pipeline is explained as follows:
+![Blueprint ERD](/img/Glossary/pipeline-erd.svg)
+Notice: **You can manually orchestrate the pipeline in the Configuration UI's Advanced Mode and via the DevLake API, whereas in the Configuration UI's Regular Mode, an optimized pipeline orchestration will be generated for you automatically.**
+
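+As a hedged example, past pipeline runs can also be inspected with SQL, assuming DevLake's `_devlake_pipelines` system table (see the System Tables doc for the authoritative schema; the columns used here are illustrative):
+
+```
+-- the ten most recent pipeline runs and their outcomes
+select name, status, finished_tasks, total_tasks, finished_at
+from _devlake_pipelines
+order by id desc
+limit 10
+```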
+
+### Stages
+**A stage is a collection of tasks performed by data plugins.** Stages are executed in sequential order within a pipeline.
+
+### Tasks
+**A task is a collection of [subtasks](#subtasks) that perform any of the `collection`, `extraction`, `conversion` and `enrichment` jobs of a particular data plugin.** Tasks within a stage are executed in parallel.
+
+### Subtasks
+**A subtask is the minimal work unit in a pipeline that performs in any of the four roles: `Collectors`, `Extractors`, `Converters` and `Enrichers`.** Subtasks are executed in sequential order.
+- `Collectors`: Collect raw data from data sources, normally via the data source's API, and store it into the `raw data tables`
+- `Extractors`: Extract data from the `raw data tables` into the `tool layer tables`
+- `Converters`: Convert data from `tool layer tables` into `domain layer tables`
+- `Enrichers`: Enrich data from one domain to other domains. For instance, the Fourier Transformation can examine `issue_changelog` to show time distribution of an issue on every assignee.
diff --git a/versioned_docs/version-v0.15/Overview/References.md b/versioned_docs/version-v0.15/Overview/References.md
new file mode 100644
index 0000000000..32515fb811
--- /dev/null
+++ b/versioned_docs/version-v0.15/Overview/References.md
@@ -0,0 +1,28 @@
+---
+title: "References"
+description: >
+  References
+sidebar_position: 6
+---
+
+
+## RESTful API Reference
+
+For users/developers who wish to interact with Apache DevLake using the RESTful APIs,
+the Swagger document would be very useful. The `devlake` docker image has it packaged, and you may access it from the following URLs.
+If you are using the `devlake` container alone, without `config-ui`:
+```
+http://<DEVLAKE_CONTAINER_HOST>:<PORT>/swagger/index.html
+```
+or, if you are accessing it through `config-ui`:
+```
+http://<CONFIG_UI_CONTAINER_HOST>:<PORT>/api/swagger/index.html
+```
+
+## Source Code Reference
+
+For developers who wish to contribute to, or develop on top of, Apache DevLake,
+[pkg.go.dev](https://pkg.go.dev/github.com/apache/incubator-devlake#section-documentation)
+is a good reference: you can learn the overall structure of the code base or
+look up the definition of a specific function.
+
diff --git a/versioned_docs/version-v0.15/Overview/Roadmap.md b/versioned_docs/version-v0.15/Overview/Roadmap.md
new file mode 100644
index 0000000000..1ff7bfaf37
--- /dev/null
+++ b/versioned_docs/version-v0.15/Overview/Roadmap.md
@@ -0,0 +1,33 @@
+---
+title: "Roadmap"
+description: >
+  The goals and roadmap for DevLake
+sidebar_position: 3
+---
+
+## Goals
+
+DevLake has joined the Apache Incubator and is aiming to become a top-level project. To achieve this goal, the Apache DevLake (Incubating) community will continue to help development teams analyze and improve their engineering productivity. In this roadmap, we have summarized three major goals, followed by a feature breakdown, to invite the broader community to join us and grow together.
+
+1. As a dev data analysis application, discover and implement 3 (or even more!) usage scenarios:
+   - A collection of metrics to track the contribution, quality and growth of open-source projects
+   - DORA metrics for DevOps engineers
+   - To be decided ([let us know](https://join.slack.com/t/devlake-io/shared_invite/zt-17b6vuvps-x98pqseoUagM7EAmKC82xQ) if you have any suggestions!)
+2. As dev data infrastructure, provide robust data collection modules, customizable data models, and data extensibility.
+3. Design better user experience for end-users and contributors.
+
+## Feature Breakdown
+
+Apache DevLake is currently under rapid development. You are more than welcome to use the following table to explore the features you are interested in and make contributions. We deeply appreciate the collective effort of our community to make this project possible!
+
+| Category                                                                                                              | Features                                                                                                                                                                                                                                                                                                                                                                           [...]
+| --------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- [...]
+| More data sources across different DevOps domains (Goal No.1 & 2). See [existing data sources](/docs/Overview/SupportedDataSources.md)        | Plugins in **bold** are of higher priority <br/><br/> Issue/Task Management: <ul><li>**Jira (cloud)** [#886 (Done)](https://github.com/apache/incubator-devlake/issues/886)</li><li>**Jira (server/data center)** [#1687 (Done)](https://github.com/apache/incubator-devlake/issues/1687)</li><li>**GitHub Issues** [#407 (Done)](https://github.com/apach [...]
+| Improved data collection, [data models](../DataModels/DevLakeDomainLayerSchema.md) and data extensibility (Goal No.2) | Data Collection: <br/> <ul><li>Complete the logging system</li><li>Implement a good error handling mechanism during data collection</li></ul> Data Models:<ul><li>Introduce DBT to allow users to create and modify the domain layer schema. [#1479 (Done)](https://github.com/apache/incubator-devlake/issues/1479)</li><li>Design the data models for 6 new domains, please refe [...]
+| Better user experience (Goal No.3)                                                                                    | For new users: <ul><li> Iterate on a clearer step-by-step guide to improve the pre-configuration experience.</li><li>Provide a new Config UI to reduce frictions for data configuration [#1700 (Done)](https://github.com/apache/incubator-devlake/issues/1700)</li><li> Showcase dashboard live demos to let users explore and learn about the dashboards. [#1784 (Done)](https:/ [...]
+
+## How to Influence the Roadmap
+
+A roadmap is only useful when it captures real user needs. We are glad to hear from you if you have specific use cases, feedback, or ideas. You can submit an issue to let us know!
+Also, if you plan to work (or are already working) on a new or existing feature, tell us, so that we can update the roadmap accordingly. We are happy to share knowledge and context to help your feature land successfully.
+<br/><br/><br/>
diff --git a/versioned_docs/version-v0.15/Overview/SupportedDataSources.md b/versioned_docs/version-v0.15/Overview/SupportedDataSources.md
new file mode 100644
index 0000000000..bb08a2d690
--- /dev/null
+++ b/versioned_docs/version-v0.15/Overview/SupportedDataSources.md
@@ -0,0 +1,179 @@
+---
+title: "Supported Data Sources"
+description: >
+  Data sources that DevLake supports
+sidebar_position: 5
+---
+
+## Data Sources and Data Plugins
+
+Apache DevLake (Incubating) supports the following data sources. The data from each data source is collected with one or more plugins. Detailed plugin docs can be found [here](/docs/Plugins).
+
+| Data Source      | Domain(s)                                                                   | Supported Versions                   | Config UI Availability | Triggered Plugins           | Collection Mode                                                |
+|------------------|-----------------------------------------------------------------------------|--------------------------------------|------------------------|-----------------------------|----------------------------------------------------------------|
+| GitHub           | Source Code Management, Code Review, Issue Tracking, CI/CD (GitHub Actions) | Cloud                                | Available              | `github`, `gitextractor`    | Full Refresh                                                   |
+| GitLab           | Source Code Management, Code Review, Issue Tracking, CI/CD (GitLab CI)      | Cloud, Community Edition 13.x+       | Available              | `gitlab`, `gitextractor`    | Full Refresh, Incremental Sync (for `issues`, `MRs`)            |
+| Jira             | Issue Tracking                                                              | Cloud, Server 7.x+, Data Center 7.x+ | Available              | `jira`                      | Full Refresh, Incremental Sync (for `issues` and related)      |
+| Jenkins          | CI/CD                                                                       | 2.263.x+                             | Available              | `jenkins`                   | Full Refresh                                                   |
+| BitBucket (Beta) | Source Code Management, Code Review                                         | Cloud                                | WIP                    | `bitbucket`, `gitextractor` | Full Refresh                                                   |
+| TAPD (Beta)      | Issue Tracking                                                              | Cloud                                | Not Available          | `tapd`                      | Full Refresh, Incremental Sync (for `stories`, `bugs`, `tasks`) |
+| Zentao (Beta)    | Issue Tracking                                                              | Cloud                                | Not Available          | `zentao`                    | Full Refresh                                                   |
+| Gitee (WIP)      | Source Code Management, Code Review, Issue Tracking                         | Cloud                                | Not Available          | `gitee`, `gitextractor`     | Full Refresh, Incremental Sync (for `issues`, `MRs`)            |
+| PagerDuty (WIP)  | Issue Tracking                                                              | Cloud                                | Not Available          | `pagerduty`                 | Full Refresh                                                   |
+| Feishu (WIP)     | Calendar                                                                    | Cloud                                | Not Available          | `feishu`                    | Full Refresh                                                   |
+| AE               | Source Code Management                                                      | On-prem                              | Not Available          | `ae`                        | Full Refresh                                                   |
+
+
+
+## Data Collection Scope By Each Plugin
+
+This table shows the entities collected by each plugin. Domain layer entities in this table are consistent with the entities [here](/DataModels/DevLakeDomainLayerSchema.md).
+✅ : Collected by default.
+💪 : Not collected by default; you need to add the corresponding subtasks to collect these entities in the [advanced mode](../UserManuals/ConfigUI/AdvancedMode.md).
+
+| Domain Layer Entities                                                                       | ae  | dora | gitextractor | incoming webhook | github | gitlab | jenkins | jira | refdiff | tapd |
+| ------------------------------------------------------------------------------------------- | --- | ---- | ------------ | ---------------- | ------ | ------ | ------- | ---- | ------- | ---- |
+| [accounts](../DataModels/DevLakeDomainLayerSchema.md/#accounts)                             |     |      |              |                  | ✅     | ✅     |         | ✅   |         | ✅   |
+| [board_issues](../DataModels/DevLakeDomainLayerSchema.md/#board_issues)                     |     |      |              |                  | ✅     | ✅     |         | ✅   |         | ✅   |
+| [board_repos](../DataModels/DevLakeDomainLayerSchema.md/#board_repos)                       |     |      |              |                  | ✅     | ✅     |         | ✅   |         |      |
+| [board_sprints](../DataModels/DevLakeDomainLayerSchema.md/#board_sprints)                   |     |      |              |                  | ✅     |        |         | ✅   |         | ✅   |
+| [boards](../DataModels/DevLakeDomainLayerSchema.md/#boards)                                 |     |      |              |                  | ✅     | ✅     |         | ✅   |         | ✅   |
+| [cicd_pipeline_commits](../DataModels/DevLakeDomainLayerSchema.md/#cicd_pipeline_commits)   |     | ✅   |              |                  | ✅     | ✅     | ✅      |      |         |      |
+| [cicd_pipelines](../DataModels/DevLakeDomainLayerSchema.md/#cicd_pipelines)                 |     | ✅   |              |                  | ✅     | ✅     | ✅      |      |         |      |
+| [cicd_scopes](../DataModels/DevLakeDomainLayerSchema.md/#cicd_scopes)                       |     | ✅   |              |                  | ✅     | ✅     | ✅      |      |         |      |
+| [cicd_tasks](../DataModels/DevLakeDomainLayerSchema.md/#cicd_tasks)                         |     | ✅   |              | 💪               | ✅     | ✅     | ✅      |      |         |      |
+| [commit_file_components](../DataModels/DevLakeDomainLayerSchema.md/#commit_file_components) |     |      | ✅           |                  |        |        |         |      |         |      |
+| [commit_files](../DataModels/DevLakeDomainLayerSchema.md/#commit_files)                     |     |      | ✅           |                  |        |        |         |      |         |      |
+| [commit_line_change](../DataModels/DevLakeDomainLayerSchema.md/#commit_line_change)         |     |      | ✅           |                  |        |        |         |      |         |      |
+| [commit_parents](../DataModels/DevLakeDomainLayerSchema.md/#commit_parents)                 |     |      | ✅           |                  |        |        |         |      |         |      |
+| [commits](../DataModels/DevLakeDomainLayerSchema.md/#commits)                               | ✅  |      | ✅           |                  | 💪     | 💪     |         |      |         |      |
+| [commits_diffs](../DataModels/DevLakeDomainLayerSchema.md/#commits_diffs)                   |     |      |              |                  |        |        |         |      | ✅      |      |
+| [components](../DataModels/DevLakeDomainLayerSchema.md/#components)                         |     |      |              |                  |        |        |         |      |         |      |
+| [finished_commits_diffs](../DataModels/DevLakeDomainLayerSchema.md/#finished_commits_diffs) |     |      |              |                  |        |        |         |      |         |      |
+| [issue_changelogs](../DataModels/DevLakeDomainLayerSchema.md/#issue_changelogs)             |     |      |              |                  |        |        |         | ✅   |         | ✅   |
+| [issue_comments](../DataModels/DevLakeDomainLayerSchema.md/#issue_commentswip)              |     |      |              |                  | ✅     |        |         |      |         | ✅   |
+| [issue_commits](../DataModels/DevLakeDomainLayerSchema.md/#issue_commits)                   |     |      |              |                  |        |        |         | ✅   |         | ✅   |
+| [issue_labels](../DataModels/DevLakeDomainLayerSchema.md/#issue_labels)                     |     |      |              |                  | ✅     | ✅     |         |      |         | ✅   |
+| [issue_repo_commits](../DataModels/DevLakeDomainLayerSchema.md/#issue_repo_commits)         |     |      |              |                  |        |        |         | ✅   |         |      |
+| [issue_worklogs](../DataModels/DevLakeDomainLayerSchema.md/#issue_worklogs)                 |     |      |              |                  |        |        |         | ✅   |         | ✅   |
+| [issues](../DataModels/DevLakeDomainLayerSchema.md/#issues)                                 |     |      |              |                  | ✅     |        |         | ✅   |         | ✅   |
+| [project_issue_metrics](../DataModels/DevLakeDomainLayerSchema.md/#project_issue_metrics)   |     | ✅   |              |                  | ✅     | ✅     |         | ✅   |         | ✅   |
+| [project_mapping](../DataModels/DevLakeDomainLayerSchema.md/#project_mapping)               |     | ✅   |              |                  | ✅     | ✅     | ✅      | ✅   |         | ✅   |
+| [project_metrics](../DataModels/DevLakeDomainLayerSchema.md/#project_metrics)               |     | ✅   |              |                  | ✅     | ✅     | ✅      | ✅   |         | ✅   |
+| [project_pr_metrics](../DataModels/DevLakeDomainLayerSchema.md/#project_pr_metrics)         |     | ✅   |              |                  | ✅     | ✅     |         |      |         | ✅   |
+| [projects](../DataModels/DevLakeDomainLayerSchema.md/#project)                              |     | ✅   |              |                  | ✅     | ✅     | ✅      | ✅   |         | ✅   |
+| [pull_request_comments](../DataModels/DevLakeDomainLayerSchema.md/#pull_request_comments)   |     |      |              |                  | ✅     | ✅     |         |      |         |      |
+| [pull_request_commits](../DataModels/DevLakeDomainLayerSchema.md/#pull_request_commits)     |     |      |              |                  | ✅     | ✅     |         |      |         |      |
+| [pull_request_issues](../DataModels/DevLakeDomainLayerSchema.md/#pull_request_issues)       |     |      |              |                  | ✅     |        |         |      |         |      |
+| [pull_request_labels](../DataModels/DevLakeDomainLayerSchema.md/#pull_request_labels)       |     |      |              |                  | ✅     | ✅     |         |      |         |      |
+| [pull_requests](../DataModels/DevLakeDomainLayerSchema.md/#pull_requests)                   |     |      |              |                  | ✅     | ✅     |         |      |         |      |
+| [ref_commits](../DataModels/DevLakeDomainLayerSchema.md/#ref_commits)                       |     |      |              |                  |        |        |         |      | ✅      |      |
+| [refs](../DataModels/DevLakeDomainLayerSchema.md/#refs)                                     |     |      | ✅           |                  |        |        |         |      | ✅      |      |
+| [refs_issues_diffs](../DataModels/DevLakeDomainLayerSchema.md/#refs_issues_diffs)           |     |      |              |                  |        |        |         |      | ✅      |      |
+| [ref_pr_cherry_picks](../DataModels/DevLakeDomainLayerSchema.md/#ref_pr_cherry_picks)       |     |      |              |                  |        |        |         |      | ✅      |      |
+| [repo_commits](../DataModels/DevLakeDomainLayerSchema.md/#repo_commits)                     |     |      | ✅           |                  | 💪     | 💪     |         |      |         |      |
+| [repo_snapshot](../DataModels/DevLakeDomainLayerSchema.md/#repo_snapshot)                   |     |      | ✅           |                  |        |        |         |      |         |      |
+| [repos](../DataModels/DevLakeDomainLayerSchema.md/#repos)                                   |     |      |              |                  | ✅     | ✅     |         |      |         |      |
+| [sprint_issues](../DataModels/DevLakeDomainLayerSchema.md/#sprint_issues)                   |     |      |              |                  | ✅     |        |         | ✅   |         | ✅   |
+| [sprints](../DataModels/DevLakeDomainLayerSchema.md/#sprints)                               |     |      |              |                  | ✅     |        |         | ✅   |         | ✅   |
+| [team_users](../DataModels/DevLakeDomainLayerSchema.md/#team_users)                         |     |      |              |                  |        |        |         |      |         |      |
+| [teams](../DataModels/DevLakeDomainLayerSchema.md/#teams)                                   |     |      |              |                  |        |        |         |      |         |      |
+| [user_account](../DataModels/DevLakeDomainLayerSchema.md/#user_accounts)                    |     |      |              |                  |        |        |         |      |         |      |
+| [users](../DataModels/DevLakeDomainLayerSchema.md/#users)                                   |     |      |              |                  |        |        |         | ✅   |         | ✅   |
+
+## Data Sync Policy
+
+**bold:** means the subtask may collect data slowly.
+
+**\*bold\*:** means the subtask may collect data very slowly.
+
+### Jira
+
+| Subtask Name               | Estimated Max Number of Requests | Does It Support Incremental Collection? | Does It Support Time Filter? |
+| -------------------------- | ------------------------------- | --------------------------------------- | ---------------------------- |
+| CollectStatusMeta          | 1                               | -                                       | -                            |
+| CollectProjectsMeta        | <10                             | ❌                                      | -                            |
+| CollectIssueTypesMeta      | <10                             | ❌                                      | -                            |
+| CollectIssuesMeta          | <10^4                           | ✅                                      | ✅                           |
+| CollectIssueChangelogsMeta | 1000~10^5                       | ✅                                      | ✅                           |
+| CollectAccountsMeta        | <10^3                           | ❌                                      | ❌                           |
+| CollectWorklogsMeta        | 1000~10^5                       | ✅                                      | ✅                           |
+| CollectRemotelinksMeta     | 1000~10^5                       | ✅                                      | ✅                           |
+| CollectSprintsMeta         | <100                            | ❌                                      | ❌                           |
+| CollectEpicsMeta           | <100                            | ❌                                      | ✅                           |
+
+### Jenkins
+
+| Subtask Name         | Estimated Max Number of Requests | Does It Support Incremental Collection? | Does It Support Time Filter? |
+| -------------------- | ------------------------------- | --------------------------------------- | ---------------------------- |
+| CollectApiBuildsMeta | ≈100                            | ❌                                      | ❌                           |
+| CollectApiStagesMeta | ≈10^4                           | ❌                                      | ✅                           |
+
+### GitLab
+
+| Subtask Name                | Estimated Max Number of Requests | Does It Support Incremental Collection? | Does It Support Time Filter? |
+| --------------------------- | ------------------------------- | --------------------------------------- | ---------------------------- |
+| CollectApiIssuesMeta        | <10^4                           | ✅                                      | ✅                           |
+| CollectApiMergeRequestsMeta | <10^3                           | ✅                                      | ✅                           |
+| CollectApiMrNotesMeta       | <10^5                           | ❌                                      | ✅                           |
+| CollectApiMrCommitsMeta     | <10^5                           | ❌                                      | ✅                           |
+| **CollectApiPipelinesMeta** | <10^4                           | ✅                                      | ❌                           |
+| CollectApiJobsMeta          | <10^5                           | ❌                                      | ✅                           |
+
+### GitHub
+
+| Subtask Name                       | Estimated Max Number of Requests    | Does It Support Incremental Collection? | Does It Support Time Filter? |
+| ---------------------------------- | ----------------------------------- | --------------------------------------- | ---------------------------- |
+| **Common**                         |                                     |                                          |                              |
+| CollectMilestonesMeta              | ≈10                                 | ✅                                       | ❌                            |
+| CollectRunsMeta                    | <10^4                               | ✅                                       | ✅                            |
+| CollectApiCommentsMeta             | 400 (max page that GitHub supports) | ✅                                       | ✅                            |
+| **CollectApiEventsMeta**           | 400 (max page that GitHub supports) | ❌                                       | ❌                            |
+| CollectApiPullRequestReviewsMeta   | <10^5                               | ✅                                       | ✅                            |
+| **GraphQL Only (Default)**         |                                     |                                          |                              |
+| CollectIssueMeta                   | ≈10^4                               | ❌                                       | ✅                            |
+| CollectPrMeta                      | ≈10^3                               | ❌                                       | ✅                            |
+| CollectCheckRunMeta                | <10^4                               | ❌                                       | ✅                            |
+| CollectAccountMeta                 | ≈10^2                               | ❌                                       | -                            |
+| **RESTful Only (Not by Default)**  |                                     |                                          |                              |
+| CollectApiIssuesMeta               | ≈10^4                               | ✅                                       | ❌                            |
+| CollectApiPullRequestsMeta         | ≈10^2                               | ❌                                       | ❌                            |
+| CollectApiPullRequestCommitsMeta   | ≈10^4                               | ✅                                       | ✅                            |
+| **CollectApiPrReviewCommentsMeta** | ≈10^4                               | ✅                                       | ✅                            |
+| **CollectAccountsMeta**            | ≈10^4                               | ❌                                       | ❌                            |
+| **CollectAccountOrgMeta**          | ≈10^4                               | ❌                                       | ❌                            |
+| CollectJobsMeta                    | <10^6                               | ❌                                       | ✅                            |
+| CollectApiCommitsMeta              | Not enabled                         | -                                       | -                            |
+| CollectApiCommitStatsMeta          | Not enabled                         | -                                       | -                            |
+
+### Feishu
+
+| Subtask Name                  | Estimated Max Number of Requests | Does It Support Incremental Collection? | Does It Support Time Filter? |
+| ----------------------------- | ------------------------------- | --------------------------------------- | ---------------------------- |
+| CollectMeetingTopUserItemMeta | ≈10^3                           | ❌                                      | ✅                           |
+
+### Bitbucket
+
+| Subtask Name                        | Estimated Max Number of Requests | Does It Support Incremental Collection? | Does It Support Time Filter? |
+| ----------------------------------- | ------------------------------- | --------------------------------------- | ---------------------------- |
+| ~~CollectApiRepoMeta~~              | 1                               | ❌                                      | ❌                           |
+| CollectApiPullRequestsMeta          | ≈10^3                           | ❌                                      | ❌                           |
+| **CollectApiIssuesMeta**            | ≈10^4                           | ❌                                      | ❌                           |
+| **CollectApiPrCommentsMeta**        | ≈10^5                           | ❌                                      | ❌                           |
+| **\*CollectApiIssueCommentsMeta\*** | ≈10^6                           | ❌                                      | ❌                           |
+| **CollectApiPipelinesMeta**         | <10^4                           | ❌                                      | ❌                           |
+| CollectApiDeploymentsMeta           | <10^2                           | ❌                                      | ❌                           |
+
+### Gitee
+
+| Subtask Name                         | Estimated Max Number of Requests | Does It Support Incremental Collection? | Does It Support Time Filter? |
+| ------------------------------------ | ------------------------------- | --------------------------------------- | ---------------------------- |
+| ~~CollectApiRepoMeta~~               | 1                               | ❌                                      | ❌                           |
+| CollectApiPullRequestsMeta           | ≈10^3                           | ✅                                      | ❌                           |
+| **CollectApiIssuesMeta**             | ≈10^4                           | ✅                                      | ❌                           |
+| **CollectCommitsMeta?**              | ≈10^4                           | ✅                                      | ❌                           |
+| **CollectApiPrCommentsMeta**         | ≈10^5                           | ❌                                      | ❌                           |
+| **\*CollectApiIssueCommentsMeta\***  | ≈10^6                           | ✅                                      | ❌                           |
+| **CollectApiPullRequestCommitsMeta** | ≈10^5                           | ❌                                      | ❌                           |
+| **CollectApiPullRequestReviewsMeta** | ≈10^5                           | ❌                                      | ❌                           |
+| **\*CollectApiCommitStatsMeta\***    | ≈10^6 (Not enabled)             | ❌                                      | ❌                           |
diff --git a/versioned_docs/version-v0.15/Overview/_category_.json b/versioned_docs/version-v0.15/Overview/_category_.json
new file mode 100644
index 0000000000..3e819ddc4f
--- /dev/null
+++ b/versioned_docs/version-v0.15/Overview/_category_.json
@@ -0,0 +1,8 @@
+{
+  "label": "Overview",
+  "position": 1,
+  "link":{
+    "type": "generated-index",
+    "slug": "Overview"
+  }
+}
diff --git a/versioned_docs/version-v0.15/Plugins/_category_.json b/versioned_docs/version-v0.15/Plugins/_category_.json
new file mode 100644
index 0000000000..bbea8d5910
--- /dev/null
+++ b/versioned_docs/version-v0.15/Plugins/_category_.json
@@ -0,0 +1,8 @@
+{
+  "label": "Plugins",
+  "position": 9,
+  "link":{
+    "type": "generated-index",
+    "slug": "Plugins"
+  }
+}
diff --git a/versioned_docs/version-v0.15/Plugins/bitbucket.md b/versioned_docs/version-v0.15/Plugins/bitbucket.md
new file mode 100644
index 0000000000..c415e3abcd
--- /dev/null
+++ b/versioned_docs/version-v0.15/Plugins/bitbucket.md
@@ -0,0 +1,77 @@
+---
+title: "BitBucket(Beta)"
+description: >
+  BitBucket Plugin
+---
+
+
+
+## Summary
+
+This plugin collects various entities from Bitbucket, including pull requests, issues, comments, pipelines, and git commits.
+
+As of v0.14.2, the `bitbucket` plugin can only be invoked through the DevLake API. Its support in Config-UI is WIP.
+
+
+## Usage via DevLake API
+
+> Note: Please replace the `http://localhost:8080` in the sample requests with your actual DevLake API endpoint. For how to view DevLake API's swagger documentation, please refer to the "Using DevLake API" section of [Developer Setup](../DeveloperManuals/DeveloperSetup.md).
+
+
+1. Create a Bitbucket data connection: `POST /plugins/bitbucket/connections`. Please see a sample request below:
+
+```
+curl --location --request POST 'http://localhost:8080/plugins/bitbucket/connections' \
+--header 'Content-Type: application/json' \
+--data-raw '{
+    "endpoint": "https://api.bitbucket.org/2.0/",
+    "username": "<your username>",
+    "password": "<your app password>",
+    "name": "Bitbucket Cloud"
+}'
+```
+
+2. Create a blueprint to collect data from Bitbucket: `POST /blueprints`. Please see a sample request below:
+
+```
+curl --location --request POST 'http://localhost:8080/blueprints' \
+--header 'Content-Type: application/json' \
+--data-raw '{
+    "enable": true,
+    "mode": "NORMAL",
+    "name": "My Bitbucket Blueprint",
+    "cronConfig": "<cron string of your choice>",
+    "isManual": false,
+    "plan": [[]],
+    "settings": {
+        "connections": [
+            {
+                "plugin": "bitbucket",
+                "connectionId": 1,
+                "scope": [
+                    {
+                        "entities": [
+                            "CODE",
+                            "TICKET",
+                            "CODEREVIEW",
+                            "CROSS"
+                        ],
+                        "options": {
+                            "owner": "<owner of your repo>",
+                            "repo": "<your repo name>"
+                        }
+                    }
+                ]
+            }
+        ],
+        "version": "1.0.0"
+    }
+}'
+```
+
+3. [Optional] Trigger the blueprint manually: `POST /blueprints/{blueprintId}/trigger`. Run this step if you want to trigger the newly created blueprint right away. See an example request below:
+
+```
+curl --location --request POST 'http://localhost:8080/blueprints/<blueprintId>/trigger' \
+--header 'Content-Type: application/json'
+```
diff --git a/versioned_docs/version-v0.15/Plugins/customize.md b/versioned_docs/version-v0.15/Plugins/customize.md
new file mode 100644
index 0000000000..1516160b59
--- /dev/null
+++ b/versioned_docs/version-v0.15/Plugins/customize.md
@@ -0,0 +1,99 @@
+---
+title: "Customize"
+description: >
+  Customize Plugin
+---
+
+
+
+## Summary
+
+This plugin provides the ability to create/delete columns and extract data from certain raw layer tables.
+The columns created with this plugin must start with the prefix `x_`.
+
+**NOTE:** All columns created by this plugin are of the datatype `VARCHAR(255)`.
+
+## Sample Request
+To extract data, switch to `Advanced Mode` on the first step of creating a Blueprint and paste a JSON config like the following:
+
+The example below demonstrates how to extract the status name from the table `_raw_jira_api_issues` and assign it to the `x_test` column of the table `issues`.
+We leverage the package `https://github.com/tidwall/gjson` to extract values from JSON. For the extraction syntax, please refer to the [gjson syntax docs](https://github.com/tidwall/gjson/blob/master/SYNTAX.md).
+
+- `table`: domain layer table name
+- `rawDataTable`: raw layer table, from which we extract values by json path
+- `rawDataParams`: the filter to select records from the raw layer table (**The value should be a string not an object**)
+- `mapping`: the extraction rule; the key is the extension field name and the value is the JSON path
+
+```json
+[
+  [
+    {
+      "plugin":"customize",
+      "options":{
+        "transformationRules":[
+          {
+            "table":"issues", 
+            "rawDataTable":"_raw_jira_api_issues", 
+            "rawDataParams":"{\"ConnectionId\":1,\"BoardId\":8}", 
+            "mapping":{
+              "x_test":"fields.status.name" 
+            }
+          }
+        ]
+      }
+    }
+  ]
+]
+```
+
+You can also trigger data extraction by making a POST request to `/pipelines`.
+```
+curl 'http://localhost:8080/pipelines' \
+--header 'Content-Type: application/json' \
+--data-raw '
+{
+    "name": "extract fields",
+    "plan": [
+        [
+            {
+                "plugin": "customize",
+                "options": {
+                    "transformationRules": [
+                        {
+                            "table": "issues",
+                            "rawDataTable": "_raw_jira_api_issues",
+                            "rawDataParams": "{\"ConnectionId\":1,\"BoardId\":8}",
+                            "mapping": {
+                                "x_test": "fields.status.name"
+                            }
+                        }
+                    ]
+                }
+            }
+        ]
+    ]
+}
+'
+```
+Get all extension columns (those starting with `x_`) of the table `issues`:
+> GET /plugins/customize/issues/fields
+
+Response:
+```json
+[
+    {
+        "columnName": "x_test",
+        "columnType": "VARCHAR(255)"
+    }
+]
+```
+Create the extension column `x_test` for the table `issues`:
+
+> POST /plugins/customize/issues/fields
+```json
+{
+    "name": "x_test"
+}
+```
+Drop the column `x_test` from the table `issues`:
+> DELETE /plugins/customize/issues/fields/x_test
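+
+For reference, here are equivalent curl calls for the three field-management endpoints above (a minimal sketch, assuming your DevLake API runs at `http://localhost:8080`):
+
+```
+# List all extension columns of the table `issues`
+curl 'http://localhost:8080/plugins/customize/issues/fields'
+
+# Create the extension column `x_test` for the table `issues`
+curl 'http://localhost:8080/plugins/customize/issues/fields' \
+--header 'Content-Type: application/json' \
+--data-raw '{"name": "x_test"}'
+
+# Drop the extension column `x_test` from the table `issues`
+curl --request DELETE 'http://localhost:8080/plugins/customize/issues/fields/x_test'
+```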
diff --git a/versioned_docs/version-v0.15/Plugins/dbt.md b/versioned_docs/version-v0.15/Plugins/dbt.md
new file mode 100644
index 0000000000..059bf12c61
--- /dev/null
+++ b/versioned_docs/version-v0.15/Plugins/dbt.md
@@ -0,0 +1,67 @@
+---
+title: "DBT"
+description: >
+  DBT Plugin
+---
+
+
+## Summary
+
+dbt (data build tool) enables analytics engineers to transform data in their warehouses by simply writing select statements. dbt handles turning these select statements into tables and views.
+dbt does the T in ELT (Extract, Load, Transform) processes – it doesn’t extract or load data, but it’s extremely good at transforming data that’s already loaded into your warehouse.
+
+## User setup<a id="user-setup"></a>
+- To use this plugin, you need to set up the following environment first.
+
+#### Required Packages to Install<a id="user-setup-requirements"></a>
+- [python3.7+](https://www.python.org/downloads/)
+- [dbt-mysql](https://pypi.org/project/dbt-mysql/#configuring-your-profile)
+
+#### Commands to run in your terminal and the dbt project<a id="user-setup-commands"></a>
+1. `pip install dbt-mysql`
+2. `dbt init demoapp` (`demoapp` is the project name)
+3. Create your SQL transformations and data models
+
+## Convert Data By DBT
+
+Use the Raw JSON API to manually initiate a run using **cURL** or a graphical API tool such as **Postman**. `POST` the following request to the DevLake API endpoint.
+
+```json
+[
+  [
+    {
+      "plugin": "dbt",
+      "options": {
+          "projectPath": "/Users/abeizn/demoapp",
+          "projectName": "demoapp",
+          "projectTarget": "dev",
+          "selectedModels": ["my_first_dbt_model","my_second_dbt_model"],
+          "projectVars": {
+            "demokey1": "demovalue1",
+            "demokey2": "demovalue2"
+        }
+      }
+    }
+  ]
+]
+```
+
+- `projectPath`: the absolute path of the dbt project. (required)
+- `projectName`: the name of the dbt project. (required)
+- `projectTarget`: this is the default target your dbt project will use. (optional)
+- `selectedModels`: a model is a select statement. Models are defined in `.sql` files, typically in your `models` directory. (required)
+`selectedModels` accepts one or more arguments. Each argument can be one of:
+1. a package name, which runs all models in your project, e.g. `example`
+2. a model name, which runs a specific model, e.g. `my_first_dbt_model`
+3. a fully-qualified path to a directory of models.
+
+- `projectVars`: variables to parametrize dbt models. (optional)
+Example:
+`select * from events where event_type = '{{ var("event_type") }}'`
+To execute this SQL query in your model, you need to set a value for `event_type`.
+
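+For example, you can POST the plan above to the `/pipelines` endpoint with cURL (a sketch following the same pattern as the other plugins' sample requests; replace the project path and model names with your own):
+
+```
+curl 'http://localhost:8080/pipelines' \
+--header 'Content-Type: application/json' \
+--data-raw '
+{
+    "name": "dbt transformation",
+    "plan": [
+        [
+            {
+                "plugin": "dbt",
+                "options": {
+                    "projectPath": "/Users/abeizn/demoapp",
+                    "projectName": "demoapp",
+                    "projectTarget": "dev",
+                    "selectedModels": ["my_first_dbt_model","my_second_dbt_model"]
+                }
+            }
+        ]
+    ]
+}
+'
+```
+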
+### Resources:
+- Learn more about dbt [in the docs](https://docs.getdbt.com/docs/introduction)
+- Check out [Discourse](https://discourse.getdbt.com/) for commonly asked questions and answers
+
+<br/><br/><br/>
diff --git a/versioned_docs/version-v0.15/Plugins/feishu.md b/versioned_docs/version-v0.15/Plugins/feishu.md
new file mode 100644
index 0000000000..6cd596f63f
--- /dev/null
+++ b/versioned_docs/version-v0.15/Plugins/feishu.md
@@ -0,0 +1,71 @@
+---
+title: "Feishu"
+description: >
+  Feishu Plugin
+---
+
+## Summary
+
+This plugin collects Feishu meeting data through [Feishu Openapi](https://open.feishu.cn/document/home/user-identity-introduction/introduction).
+
+## Configuration
+
+In order to fully use this plugin, you will need to get `app_id` and `app_secret` from a Feishu administrator (for help on App info, please see the [official Feishu Docs](https://open.feishu.cn/document/ukTMukTMukTM/ukDNz4SO0MjL5QzM/auth-v3/auth/tenant_access_token_internal)).
+
+A connection should be created before you can collect any data. Currently, this plugin supports creating a connection by requesting the `connections` API:
+
+```
+curl 'http://localhost:8080/plugins/feishu/connections' \
+--header 'Content-Type: application/json' \
+--data-raw '
+{
+    "name": "feishu",
+    "endpoint": "https://open.feishu.cn/open-apis/vc/v1/",
+    "proxy": "http://localhost:1080",
+    "rateLimitPerHour": 20000,
+    "appId": "<YOUR_APP_ID>",
+    "appSecret": "<YOUR_APP_SECRET>"
+}
+'
+```
+
+## Collect data from Feishu
+
+To collect data, select `Advanced Mode` on the `Create Pipeline Run` page and paste a JSON config like the following:
+
+
+```json
+[
+  [
+    {
+      "plugin": "feishu",
+      "options": {
+        "connectionId": 1,
+        "numOfDaysToCollect" : 80
+      }
+    }
+  ]
+]
+```
+
+> `numOfDaysToCollect`: The number of days of data you want to collect
+
+> `rateLimitPerSecond`: The number of requests to send per second (maximum is 8)
+
+You can also trigger data collection by making a POST request to `/pipelines`.
+```
+curl 'http://localhost:8080/pipelines' \
+--header 'Content-Type: application/json' \
+--data-raw '
+{
+    "name": "feishu 20211126",
+    "tasks": [[{
+      "plugin": "feishu",
+      "options": {
+        "connectionId": 1,
+        "numOfDaysToCollect" : 80
+      }
+    }]]
+}
+'
+```
diff --git a/versioned_docs/version-v0.15/Plugins/gitee.md b/versioned_docs/version-v0.15/Plugins/gitee.md
new file mode 100644
index 0000000000..ffed3f537a
--- /dev/null
+++ b/versioned_docs/version-v0.15/Plugins/gitee.md
@@ -0,0 +1,106 @@
+---
+title: "Gitee(WIP)"
+description: >
+  Gitee Plugin
+---
+
+## Summary
+
+This plugin collects `Gitee` data through [Gitee Openapi](https://gitee.com/api/v5/swagger).
+
+## Configuration
+
+In order to fully use this plugin, you will need to get a `token` from the Gitee website.
+
+A connection should be created before you can collect any data. Currently, this plugin supports creating a connection by requesting the `connections` API:
+
+```
+curl 'http://localhost:8080/plugins/gitee/connections' \
+--header 'Content-Type: application/json' \
+--data-raw '
+{
+    "name": "gitee",
+    "endpoint": "https://gitee.com/api/v5/",
+    "proxy": "http://localhost:1080",
+    "rateLimitPerHour": 20000,
+    "token": "<YOUR_TOKEN>"
+}
+'
+```
+
+
+
+## Collect data from Gitee
+
+In order to collect data, you have to compose a JSON config like the following one and send it by selecting `Advanced Mode` on the `Create Pipeline Run` page:
+
+1. Config-UI Mode
+```json
+[
+  [
+    {
+      "plugin": "gitee",
+      "options": {
+        "connectionId": 1,
+        "repo": "lake",
+        "owner": "merico-dev"
+      }
+    }
+  ]
+]
+```
+Or, if you only want to perform certain subtasks:
+```json
+[
+  [
+    {
+      "plugin": "gitee",
+      "subtasks": ["collectXXX", "extractXXX", "convertXXX"],
+      "options": {
+        "connectionId": 1,
+        "repo": "lake",
+        "owner": "merico-dev"
+      }
+    }
+  ]
+]
+```
+
+2. Curl Mode:
+   You can also trigger data collection by making a POST request to `/pipelines`.
+```
+curl 'http://localhost:8080/pipelines' \
+--header 'Content-Type: application/json' \
+--data-raw '
+{
+    "name": "gitee 20211126",
+    "tasks": [[{
+        "plugin": "gitee",
+        "options": {
+            "connectionId": 1,
+            "repo": "lake",
+            "owner": "merico-dev"
+        }
+    }]]
+}
+'
+```
+Or, if you only want to perform certain subtasks:
+```
+curl 'http://localhost:8080/pipelines' \
+--header 'Content-Type: application/json' \
+--data-raw '
+{
+    "name": "gitee 20211126",
+    "tasks": [[{
+        "plugin": "gitee",
+        "subtasks": ["collectXXX", "extractXXX", "convertXXX"],
+        "options": {
+            "connectionId": 1,
+            "repo": "lake",
+            "owner": "merico-dev"
+        }
+    }]]
+}
+'
+```
diff --git a/versioned_docs/version-v0.15/Plugins/gitextractor.md b/versioned_docs/version-v0.15/Plugins/gitextractor.md
new file mode 100644
index 0000000000..a357a845e5
--- /dev/null
+++ b/versioned_docs/version-v0.15/Plugins/gitextractor.md
@@ -0,0 +1,134 @@
+---
+title: "GitExtractor"
+description: >
+  GitExtractor Plugin
+---
+
+## Summary
+This plugin extracts commits and references from a remote or local git repository. It then saves the data into the database or CSV files.
+
+## Steps to make this plugin work
+
+1. Use the Git repo extractor to retrieve data about commits and branches from your repository.
+2. Use the GitHub plugin to retrieve data about GitHub issues and PRs from your repository.
+NOTE: you can run only one issue collection stage as described in the GitHub plugin README.
+3. Use the [RefDiff](./refdiff.md) plugin to calculate the version diff, which will be stored in the `refs_commits_diffs` table.
+
+## Sample Request
+
+```
+curl --location --request POST 'localhost:8080/pipelines' \
+--header 'Content-Type: application/json' \
+--data-raw '
+{
+    "name": "git repo extractor",
+    "tasks": [
+        [
+            {
+                "Plugin": "gitextractor",
+                "Options": {
+                    "url": "https://github.com/merico-dev/lake.git",
+                    "repoId": "github:GithubRepo:384111310"
+                }
+            }
+        ]
+    ]
+}
+'
+```
+- `url`: the location of the git repository. It should start with `http`/`https` for a remote git repository and with `/` for a local one.
+- `repoId`: the value of column `id` in the `repos` table (see the sketch after this list for looking up a GitHub repo id).
+   Note: For GitHub, you can find the repo id by running `$("meta[name=octolytics-dimension-repository_id]").getAttribute('content')` in the browser console.
+- `proxy`: optional, http proxy, e.g. `http://your-proxy-server.com:1080`.
+- `user`: optional, for cloning private repository using HTTP/HTTPS
+- `password`: optional, for cloning private repository using HTTP/HTTPS
+- `privateKey`: optional, for SSH cloning, base64 encoded `PEM` file
+- `passphrase`: optional, passphrase for the private key
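+
+For instance, you can look up a GitHub repo id and prepare the `privateKey` value from your shell (a minimal sketch; the repo and key path below are examples):
+
+```
+# Fetch the numeric repository id from the GitHub API (first "id" field in the response)
+curl -s https://api.github.com/repos/merico-dev/lake | grep '"id"' | head -n 1
+
+# Base64-encode a PEM private key as a single line for `privateKey`
+# (GNU coreutils; on macOS, plain `base64 < file` already outputs one line)
+privateKey=$(base64 -w 0 /path/to/your/deploy_key.pem)
+```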
+
+
+## Standalone Mode
+
+You can also run this plugin in standalone mode, without any DevLake service running, using the following command:
+
+```
+go run plugins/gitextractor/main.go -url https://github.com/merico-dev/lake.git -id github:GithubRepo:384111310 -db "merico:merico@tcp(127.0.0.1:3306)/lake?charset=utf8mb4&parseTime=True"
+```
+
+For more options (e.g., saving to a csv file instead of a db), please read `plugins/gitextractor/main.go`.
+
+## Development
+
+This plugin depends on `libgit2`; you need to install version 1.3.0 to run and debug this plugin on your local machine.
+
+### Linux
+
+```
+1. Install cmake
+[ubuntu]
+apt install cmake -y
+[centos]
+yum install cmake -y
+
+2. Compile and install libgit2
+git clone -b v1.3.0 https://github.com/libgit2/libgit2.git && cd libgit2
+mkdir build && cd build && cmake ..
+make && make install
+
+3. Set PKG_CONFIG_PATH and LD_LIBRARY_PATH
+[centos]
+export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/local/lib64:/usr/local/lib64/pkgconfig
+export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib64
+[ubuntu]
+export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/local/lib:/usr/local/lib/pkgconfig
+export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib
+```
+
+#### Troubleshooting (Linux)
+
+> Q: # pkg-config --cflags -- libgit2 Package libgit2 was not found in the pkg-config search path.
+> Perhaps you should add the directory containing `libgit2.pc` to the PKG_CONFIG_PATH environment variable
+> No package 'libgit2' found pkg-config: exit status 1
+
+> A:
+> Make sure your pkg config path covers the installation:
+> If your `libgit2.pc` is in `/usr/local/lib64/pkgconfig` (as on CentOS):
+>
+> `export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/local/lib64:/usr/local/lib64/pkgconfig`
+>
+> Otherwise, if your `libgit2.pc` is in `/usr/local/lib/pkgconfig` (as on Ubuntu):
+>
+> `export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/local/lib:/usr/local/lib/pkgconfig`
+>
+> Otherwise, consider installing pkg-config or rebuilding libgit2.
+
+### MacOS
+
+NOTE: **Do NOT** install libgit2 via `MacPorts` or `homebrew`; install it from source instead.
+
+```
+brew install cmake
+git clone https://github.com/libgit2/libgit2.git
+cd libgit2
+git checkout v1.3.0
+mkdir build
+cd build
+cmake ..
+make
+make install
+```
+
+#### Troubleshooting (MacOS)
+
+> Q: I got an error saying: `pkg-config: exec: "pkg-config": executable file not found in $PATH`
+
+> A:
+>
+> 1. Make sure you have pkg-config installed:
+>
+> `brew install pkg-config`
+>
+> 2. Make sure your pkg config path covers the installation:
+>    `export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/local/lib:/usr/local/lib/pkgconfig`
+
+<br/><br/><br/>
diff --git a/versioned_docs/version-v0.15/Plugins/github.md b/versioned_docs/version-v0.15/Plugins/github.md
new file mode 100644
index 0000000000..f8874548fb
--- /dev/null
+++ b/versioned_docs/version-v0.15/Plugins/github.md
@@ -0,0 +1,141 @@
+---
+title: "GitHub"
+description: >
+  GitHub Plugin
+---
+
+## Summary
+
+This plugin collects GitHub data through [REST API](https://docs.github.com/en/rest/) and [GraphQL API](https://docs.github.com/en/graphql). It then computes and visualizes various DevOps metrics from the GitHub data, which helps tech leads, QA and DevOps engineers, and project managers to answer questions such as:
+
+- Is this month more productive than last?
+- How fast do we respond to customer requirements?
+- Has our quality improved or not?
+
+## Entities
+
+Check out the [GitHub entities](/Overview/SupportedDataSources.md#data-collection-scope-by-each-plugin) collected by this plugin.
+
+## Data Refresh Policy
+
+Check out the [data refresh policy](/Overview/SupportedDataSources.md#github) of this plugin.
+
+## Metrics
+
+Metrics that can be calculated based on the data collected from GitHub:
+
+- [Requirement Count](/Metrics/RequirementCount.md)
+- [Requirement Lead Time](/Metrics/RequirementLeadTime.md)
+- [Requirement Delivery Rate](/Metrics/RequirementDeliveryRate.md)
+- [Requirement Granularity](/Metrics/RequirementGranularity.md)
+- [Bug Age](/Metrics/BugAge.md)
+- [Bug Count per 1k Lines of Code](/Metrics/BugCountPer1kLinesOfCode.md)
+- [Incident Age](/Metrics/IncidentAge.md)
+- [Incident Count per 1k Lines of Code](/Metrics/IncidentCountPer1kLinesOfCode.md)
+- [Commit Count](/Metrics/CommitCount.md)
+- [Commit Author Count](/Metrics/CommitAuthorCount.md)
+- [Added Lines of Code](/Metrics/AddedLinesOfCode.md)
+- [Deleted Lines of Code](/Metrics/DeletedLinesOfCode.md)
+- [PR Count](/Metrics/PRCount.md)
+- [PR Cycle Time](/Metrics/PRCycleTime.md)
+- [PR Coding Time](/Metrics/PRCodingTime.md)
+- [PR Pickup Time](/Metrics/PRPickupTime.md)
+- [PR Review Time](/Metrics/PRReviewTime.md)
+- [PR Deploy Time](/Metrics/PRDeployTime.md)
+- [PR Time To Merge](/Metrics/PRTimeToMerge.md)
+- [PR Merge Rate](/Metrics/PRMergeRate.md)
+- [PR Review Depth](/Metrics/PRReviewDepth.md)
+- [PR Size](/Metrics/PRSize.md)
+- [Build Count](/Metrics/BuildCount.md)
+- [Build Duration](/Metrics/BuildDuration.md)
+- [Build Success Rate](/Metrics/BuildSuccessRate.md)
+- [DORA - Deployment Frequency](/Metrics/DeploymentFrequency.md)
+- [DORA - Lead Time for Changes](/Metrics/LeadTimeForChanges.md)
+- [DORA - Median Time to Restore Service](/Metrics/MTTR.md)
+- [DORA - Change Failure Rate](/Metrics/CFR.md)
+
+## Configuration
+
+- Configuring GitHub via [Config UI](/UserManuals/ConfigUI/GitHub.md)
+- Configuring GitHub via Config UI's [advanced mode](/UserManuals/ConfigUI/AdvancedMode.md#1-github).
+
+## API Sample Request
+
+You can trigger data collection by making a POST request to `/pipelines`.
+
+```
+curl 'http://localhost:8080/pipelines' \
+--header 'Content-Type: application/json' \
+--data-raw '
+{
+  "name": "project1-BLUEPRINT",
+  "blueprintId": 1,
+  "plan": [
+    [
+      {
+        "plugin": "github",
+        "options": {
+          "connectionId": 1,
+          "scopeId": "384111310",
+          "transformationRules":{
+            "deploymentPattern":"",
+            "productionPattern":"",
+            "issueComponent":"",
+            "issuePriority":"(high|medium|low)$",
+            "issueSeverity":"",
+            "issueTypeBug":"(bug)$",
+            "issueTypeIncident":"",
+            "issueTypeRequirement":"(feature|feature-request)$",
+            "prBodyClosePattern":"",
+            "prComponent":"",
+            "prType":""
+          }
+        }
+      }
+    ]
+  ]
+}
+'
+```
+
+or
+
+```
+curl 'http://localhost:8080/pipelines' \
+--header 'Content-Type: application/json' \
+--data-raw '
+{
+  "name": "project1-BLUEPRINT",
+  "blueprintId": 1,
+  "plan": [
+    [
+      {
+        "plugin": "github",
+        "options": {
+          "connectionId": 1,
+          "owner": "apache",
+          "repo": "incubator-devlake",
+          "transformationRules":{
+            "deploymentPattern":"",
+            "productionPattern":"",
+            "issueComponent":"",
+            "issuePriority":"(high|medium|low)$",
+            "issueSeverity":"",
+            "issueTypeBug":"(bug)$",
+            "issueTypeIncident":"",
+            "issueTypeRequirement":"(feature|feature-request)$",
+            "prBodyClosePattern":"",
+            "prComponent":"",
+            "prType":""
+          }
+        }
+      }
+    ]
+  ]
+}
+'
+```
+
+## References
+
+- [references](/DeveloperManuals/DeveloperSetup.md#references)
diff --git a/versioned_docs/version-v0.15/Plugins/gitlab.md b/versioned_docs/version-v0.15/Plugins/gitlab.md
new file mode 100644
index 0000000000..f4b5663b7a
--- /dev/null
+++ b/versioned_docs/version-v0.15/Plugins/gitlab.md
@@ -0,0 +1,96 @@
+---
+title: "GitLab"
+description: >
+  GitLab Plugin
+---
+
+## Summary
+
+This plugin collects GitLab data through [API](https://docs.gitlab.com/ee/api/). It then computes and visualizes various DevOps metrics from the GitLab data, which helps tech leads, QA and DevOps engineers, and project managers to answer questions such as:
+
+- How long does it take for your code to get merged?
+- How much time is spent on code review?
+
+## Entities
+
+Check out the [GitLab entities](/Overview/SupportedDataSources.md#data-collection-scope-by-each-plugin) collected by this plugin.
+
+## Data Refresh Policy
+
+Check out the [data refresh policy](/Overview/SupportedDataSources.md#gitlab) of this plugin.
+
+## Metrics
+
+Metrics that can be calculated based on the data collected from GitLab:
+
+- [Commit Count](/Metrics/CommitCount.md)
+- [Commit Author Count](/Metrics/CommitAuthorCount.md)
+- [Added Lines of Code](/Metrics/AddedLinesOfCode.md)
+- [Deleted Lines of Code](/Metrics/DeletedLinesOfCode.md)
+- [PR Count](/Metrics/PRCount.md)
+- [PR Cycle Time](/Metrics/PRCycleTime.md)
+- [PR Coding Time](/Metrics/PRCodingTime.md)
+- [PR Pickup Time](/Metrics/PRPickupTime.md)
+- [PR Review Time](/Metrics/PRReviewTime.md)
+- [PR Deploy Time](/Metrics/PRDeployTime.md)
+- [PR Time To Merge](/Metrics/PRTimeToMerge.md)
+- [PR Merge Rate](/Metrics/PRMergeRate.md)
+- [PR Review Depth](/Metrics/PRReviewDepth.md)
+- [PR Size](/Metrics/PRSize.md)
+- [Build Count](/Metrics/BuildCount.md)
+- [Build Duration](/Metrics/BuildDuration.md)
+- [Build Success Rate](/Metrics/BuildSuccessRate.md)
+- [DORA - Deployment Frequency](/Metrics/DeploymentFrequency.md)
+- [DORA - Lead Time for Changes](/Metrics/LeadTimeForChanges.md)
+- [DORA - Median Time to Restore Service](/Metrics/MTTR.md)
+- [DORA - Change Failure Rate](/Metrics/CFR.md)
+
+## Configuration
+
+- Configuring GitLab via [config-ui](/UserManuals/ConfigUI/GitLab.md).
+- Configuring GitLab via Config UI's [advanced mode](/UserManuals/ConfigUI/AdvancedMode.md#2-gitlab).
+
+## API Sample Request
+
+You can trigger data collection by making a POST request to `/pipelines`.
+
+```
+curl 'http://localhost:8080/pipelines' \
+--header 'Content-Type: application/json' \
+--data-raw '
+{
+  "name": "project1-BLUEPRINT",
+  "blueprintId": 1,
+  "plan": [
+    [
+      {
+        "plugin": "gitlab",
+        "options": {
+          "connectionId": 1,
+          "projectId": 33728042,
+          "transformationRules":{
+            "deploymentPattern":"",
+            "productionPattern":"",
+            "issueComponent":"",
+            "issuePriority":"(high|medium|low)$",
+            "issueSeverity":"",
+            "issueTypeBug":"(bug)$",
+            "issueTypeIncident":"",
+            "issueTypeRequirement":"(feature|feature-request)$",
+            "prBodyClosePattern":"",
+            "prComponent":"",
+            "prType":""
+          }
+        }
+      }
+    ]
+  ]
+}
+'
+```
+
+## References
+
+- [references](/DeveloperManuals/DeveloperSetup.md#references)
diff --git a/versioned_docs/version-v0.15/Plugins/jenkins.md b/versioned_docs/version-v0.15/Plugins/jenkins.md
new file mode 100644
index 0000000000..13ab302736
--- /dev/null
+++ b/versioned_docs/version-v0.15/Plugins/jenkins.md
@@ -0,0 +1,100 @@
+---
+title: "Jenkins"
+description: >
+  Jenkins Plugin
+---
+
+## Summary
+
+This plugin collects Jenkins data through [Remote Access API](https://www.jenkins.io/doc/book/using/remote-access-api/). It then computes and visualizes various DevOps metrics from the Jenkins data, which helps tech leads and DevOps engineers to answer questions such as:
+
+- What is the deployment frequency of your team?
+- What is the build success rate?
+- How long does it take for a code change to be deployed into production?
+
+## Entities
+
+Check out the [Jenkins entities](/Overview/SupportedDataSources.md#data-collection-scope-by-each-plugin) collected by this plugin.
+
+## Data Refresh Policy
+
+Check out the [data refresh policy](/Overview/SupportedDataSources.md#jenkins) of this plugin.
+
+## Metrics
+
+Metrics that can be calculated based on the data collected from Jenkins:
+
+- [Build Count](/Metrics/BuildCount.md)
+- [Build Duration](/Metrics/BuildDuration.md)
+- [Build Success Rate](/Metrics/BuildSuccessRate.md)
+- [DORA - Deployment Frequency](/Metrics/DeploymentFrequency.md)
+- [DORA - Lead Time for Changes](/Metrics/LeadTimeForChanges.md)
+- [DORA - Median Time to Restore Service](/Metrics/MTTR.md)
+- [DORA - Change Failure Rate](/Metrics/CFR.md)
+
+## Configuration
+
+- Configuring Jenkins via [Config UI](/UserManuals/ConfigUI/Jenkins.md)
+- Configuring Jenkins via Config UI's [advanced mode](/UserManuals/ConfigUI/AdvancedMode.md#3-jenkins).
+
+## API Sample Request
+
+You can trigger data collection by making a POST request to `/pipelines`.
+
+```
+curl 'http://localhost:8080/pipelines' \
+--header 'Content-Type: application/json' \
+--data-raw '
+{
+  "name": "project1-BLUEPRINT",
+  "blueprintId": 1,
+  "plan": [
+    [
+      {
+        "plugin": "jenkins",
+        "options": {
+          "connectionId": 1,
+          "scopeId": "auto_deploy",
+          "transformationRules":{
+            "deploymentPattern":"",
+            "productionPattern":""
+          }
+        }
+      }
+    ]
+  ]
+}
+'
+```
+
+or
+
+```
+curl 'http://localhost:8080/pipelines' \
+--header 'Content-Type: application/json' \
+--data-raw '
+{
+  "name": "project1-BLUEPRINT",
+  "blueprintId": 2,
+  "plan": [
+    [
+      {
+        "plugin": "jenkins",
+        "options": {
+          "connectionId": 1,
+          "jobFullName": "auto_deploy",
+          "transformationRules":{
+            "deploymentPattern":"",
+            "productionPattern":""
+          }
+        }
+      }
+    ]
+  ]
+}
+'
+```
+
+## References
+
+- [references](/DeveloperManuals/DeveloperSetup.md#references)
diff --git a/versioned_docs/version-v0.15/Plugins/jira.md b/versioned_docs/version-v0.15/Plugins/jira.md
new file mode 100644
index 0000000000..ea2ef56d72
--- /dev/null
+++ b/versioned_docs/version-v0.15/Plugins/jira.md
@@ -0,0 +1,71 @@
+---
+title: "Jira"
+description: >
+  Jira Plugin
+---
+
+## Summary
+
+This plugin collects Jira data through Jira REST API. It then computes and visualizes various engineering metrics from the Jira data.
+
+## Entities
+
+Check out the [Jira entities](/Overview/SupportedDataSources.md#data-collection-scope-by-each-plugin) collected by this plugin.
+
+## Data Refresh Policy
+
+Check out the [data refresh policy](/Overview/SupportedDataSources.md#jira) of this plugin.
+
+## Metrics
+
+Metrics that can be calculated based on the data collected from Jira:
+
+- [Requirement Count](/Metrics/RequirementCount.md)
+- [Requirement Lead Time](/Metrics/RequirementLeadTime.md)
+- [Requirement Delivery Rate](/Metrics/RequirementDeliveryRate.md)
+- [Requirement Granularity](/Metrics/RequirementGranularity.md)
+- [Bug Age](/Metrics/BugAge.md)
+- [Bug Count per 1k Lines of Code](/Metrics/BugCountPer1kLinesOfCode.md)
+- [Incident Age](/Metrics/IncidentAge.md)
+- [Incident Count per 1k Lines of Code](/Metrics/IncidentCountPer1kLinesOfCode.md)
+
+## Configuration
+
+- Configuring Jira via [config-ui](/UserManuals/ConfigUI/Jira.md).
+- Configuring Jira via Config UI's [advanced mode](/UserManuals/ConfigUI/AdvancedMode.md#4-jira).
+
+## API Sample Request
+
+You can trigger data collection by making a POST request to `/pipelines`.
+
+```shell
+curl 'http://localhost:8080/pipelines' \
+--header 'Content-Type: application/json' \
+--data-raw '
+{
+  "name": "MY PIPELINE",
+  "plan": [
+    [
+      {
+        "plugin": "jira",
+        "options": {
+          "connectionId": 1,
+          "boardId": 8,
+          "transformationRules": {
+            "epicKeyField": "",
+            "storyPointField": "",
+            "remotelinkCommitShaPattern": "",
+            "typeMappings": {
+              "10040": {
+                "standardType": "Incident",
+                "statusMappings": null
+              }
+            }
+          }
+        }
+      }
+    ]
+  ]
+}
+'
+```
diff --git a/versioned_docs/version-v0.15/Plugins/pagerduty.md b/versioned_docs/version-v0.15/Plugins/pagerduty.md
new file mode 100644
index 0000000000..485c6cce40
--- /dev/null
+++ b/versioned_docs/version-v0.15/Plugins/pagerduty.md
@@ -0,0 +1,78 @@
+---
+title: "PagerDuty(WIP)"
+description: >
+  PagerDuty Plugin
+---
+
+
+
+## Summary
+
+This plugin collects all incidents from PagerDuty, and uses them to compute incident-type DORA metrics. These include
+[Median time to restore service](/Metrics/MTTR.md) and [Change failure rate](/Metrics/CFR.md).
+
+As of v0.15.x, the `PagerDuty` plugin can only be invoked through the DevLake API. Its support in Config-UI is WIP.
+
+
+## Usage via DevLake API
+
+> Note: Please replace the `http://localhost:8080` in the sample requests with your actual DevLake API endpoint. For how to view DevLake API's swagger documentation, please refer to the "Using DevLake API" section of [Developer Setup](../DeveloperManuals/DeveloperSetup.md).
+
+
+1. Create a PagerDuty data connection: `POST /plugins/pagerduty/connections`. Please see a sample request below:
+
+```
+curl --location --request POST 'http://localhost:8080/plugins/pagerduty/connections' \
+--header 'Content-Type: application/json' \
+--data-raw '{
+    "name": "PagerDuty-test1",
+    "endpoint": "https://api.PagerDuty.com",
+    "token": "<api-access-token>"
+}'
+```
+
+2. Create a blueprint to collect data from PagerDuty: `POST /blueprints`. Please see a sample request below:
+
+```
+curl --location --request POST 'http://localhost:8080/blueprints' \
+--header 'Content-Type: application/json' \
+--data-raw '{
+    "cronConfig": "manual",
+    "enable": true,
+    "isManual": true,
+    "mode": "NORMAL",
+    "name": "test-blueprint",
+    "settings": {
+        "connections": [
+            {
+                "connectionId": 1,
+                "plugin": "PagerDuty",
+                "scope": [
+                    {
+                        "entities": [
+                            "TICKET"
+                        ],
+                        "options": {
+                            "connectionId": 1,
+                            "start_date": "2022-06-01T15:04:05Z"
+                        }
+                    }
+                ]
+            }
+        ],
+        "version": "1.0.0"
+    }
+}'
+```
+
+Here `start_date` is the time since which all created incidents will be collected. The `entities` list may be left blank: the
+only allowed entity is `"TICKET"`, which will be used as the default.
+
+3. [Optional] Trigger the blueprint manually: `POST /blueprints/{blueprintId}/trigger`. Run this step if you want to trigger the newly created blueprint right away. See an example request below:
+
+```
+curl --location --request POST 'http://localhost:8080/blueprints/<blueprintId>/trigger' \
+--header 'Content-Type: application/json'
+```
+
+Note that incidents are extracted from the `issues` table in MySQL with the condition `type = 'INCIDENT'`.
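+
+For example, assuming the default docker-compose MySQL credentials (`merico`/`merico`, database `lake`), you can inspect the collected incidents directly:
+
+```
+mysql -h 127.0.0.1 -P 3306 -u merico -pmerico lake \
+  -e "SELECT * FROM issues WHERE type = 'INCIDENT';"
+```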
diff --git a/versioned_docs/version-v0.15/Plugins/refdiff.md b/versioned_docs/version-v0.15/Plugins/refdiff.md
new file mode 100644
index 0000000000..01be58de5b
--- /dev/null
+++ b/versioned_docs/version-v0.15/Plugins/refdiff.md
@@ -0,0 +1,132 @@
+---
+title: "RefDiff"
+description: >
+  RefDiff Plugin
+---
+
+## Summary
+
+RefDiff is a plugin that performs calculation tasks and has two main purposes:
+
+- Calculate the difference in commits between releases/tags to [analyze the amount of code in each release](https://github.com/apache/incubator-devlake/blob/main/plugins/refdiff/tasks/commit_diff_calculator.go)
+- Calculate the difference in commits between deployments to [calculate DORA metrics](https://github.com/apache/incubator-devlake/blob/main/plugins/refdiff/tasks/project_deployment_commit_diff_calculator.go)
+
+The output of RefDiff is stored in the tables `commits_diffs`, `finished_commits_diffs`, and `ref_commits`.
+
+## Important Note
+
+You need to run `gitextractor` before the `refdiff` plugin. The `gitextractor` plugin should create records in the `refs` table in your database before this plugin can be run.
+
+## Configuration
+
+This is an enrichment plugin based on the domain layer data; no configuration is needed.
+
+## How to use refdiff
+
+To trigger the enrichment, you need to insert a new task into your pipeline.
+
+1. Make sure `commits` and `refs` are collected into your database; the `refs` table should contain records like the following:
+   ```
+   id                                            ref_type
+   github:GithubRepo:1:384111310:refs/tags/0.3.5   TAG
+   github:GithubRepo:1:384111310:refs/tags/0.3.6   TAG
+   github:GithubRepo:1:384111310:refs/tags/0.5.0   TAG
+   github:GithubRepo:1:384111310:refs/tags/v0.0.1  TAG
+   github:GithubRepo:1:384111310:refs/tags/v0.2.0  TAG
+   github:GithubRepo:1:384111310:refs/tags/v0.3.0  TAG
+   github:GithubRepo:1:384111310:refs/tags/v0.4.0  TAG
+   github:GithubRepo:1:384111310:refs/tags/v0.6.0  TAG
+   github:GithubRepo:1:384111310:refs/tags/v0.6.1  TAG
+   ```
+2. If you want to run calculatePrCherryPick, please configure `GITHUB_PR_TITLE_PATTERN` in `.env`; you can check the example in `.env.example` (there is a default value; please make sure your pattern is enclosed in single quotes '')
+3. Then trigger a pipeline in the following format:
+
+```shell
+curl 'http://localhost:8080/pipelines' \
+--header 'Content-Type: application/json' \
+--data-raw '
+{
+    "name": "test-refdiff",
+    "tasks": [
+        [
+            {
+                "plugin": "refdiff",
+                "options": {
+                    "repoId": "github:GithubRepo:1:384111310",
+                    "pairs": [
+                       { "newRef": "refs/tags/v0.6.0", "oldRef": "refs/tags/0.5.0" },
+                       { "newRef": "refs/tags/0.5.0", "oldRef": "refs/tags/0.4.0" }
+                    ],
+                    "tasks": [
+                        "calculateCommitsDiff",
+                        "calculateIssuesDiff",
+                        "calculatePrCherryPick"
+                    ]
+                }
+            }
+        ]
+    ]
+}'
+```
+
+Or, if you prefer calculating the latest releases:
+
+```shell
+curl 'http://localhost:8080/pipelines' \
+--header 'Content-Type: application/json' \
+--data-raw '
+{
+    "name": "test-refdiff",
+    "tasks": [
+        [
+            {
+                "plugin": "refdiff",
+                "options": {
+                    "repoId": "github:GithubRepo:1:384111310",
+                    "tagsPattern": "v\\d+\\.\\d+\\.\\d+",
+                    "tagsLimit": 10,
+                    "tagsOrder": "reverse semver",
+                    "tasks": [
+                        "calculateCommitsDiff",
+                        "calculateIssuesDiff",
+                        "calculatePrCherryPick"
+                    ]
+                }
+            }
+        ]
+    ]
+}'
+```
+
+## How to use refdiff in DORA
+
+RefDiff can be called by the [DORA plugin](https://github.com/apache/incubator-devlake/tree/main/plugins/dora) to support the calculation of [DORA metrics](https://devlake.apache.org/docs/UserManuals/DORA). RefDiff has a subtask called 'calculateProjectDeploymentCommitsDiff'. This subtask takes the `project_name` from task options to calculate the commits diff between two consecutive deployments in this project. That is to say, refdiff will generate the relationship between `deployed com [...]
+
+```shell
+curl 'http://localhost:8080/pipelines' \
+--header 'Content-Type: application/json' \
+--data-raw '
+{
+    "name": "test-refdiff-dora",
+    "tasks": [
+        [
+            {
+                "plugin": "refdiff",
+                "options": {
+                    "projectName": "project_name_1",
+                    "tasks": [
+                        "calculateProjectDeploymentCommitsDiff"
+                    ]
+                }
+            }
+        ]
+    ]
+}'
+```
+
+## Development
+
+This plugin depends on `libgit2`; you need to install version 1.3.0 in order to run and debug this plugin on your local
+machine. [Click here](./gitextractor.md#Development) for a brief guide.
+
+<br/><br/><br/>
diff --git a/versioned_docs/version-v0.15/Plugins/tapd.md b/versioned_docs/version-v0.15/Plugins/tapd.md
new file mode 100644
index 0000000000..691f7a930d
--- /dev/null
+++ b/versioned_docs/version-v0.15/Plugins/tapd.md
@@ -0,0 +1,24 @@
+---
+title: "Tapd(Beta)"
+description: >
+  Tapd Plugin
+---
+
+## Summary
+
+This plugin collects TAPD data through its REST APIs. TAPD is an issue-tracking tool similar to Jira.
+
+## Metrics
+
+Metrics that can be calculated based on the data collected from Tapd:
+
+- [Requirement Count](/Metrics/RequirementCount.md)
+- [Requirement Lead Time](/Metrics/RequirementLeadTime.md)
+- [Requirement Delivery Rate](/Metrics/RequirementDeliveryRate.md)
+- [Bug Age](/Metrics/BugAge.md)
+- [Incident Age](/Metrics/IncidentAge.md)
+
+## Configuration
+
+- Configuring Tapd via [config-ui](/UserManuals/ConfigUI/Tapd.md).
+- Configuring Tapd via Config UI's [advanced mode](/UserManuals/ConfigUI/AdvancedMode.md#6-tapd).
diff --git a/versioned_docs/version-v0.15/Plugins/webhook.md b/versioned_docs/version-v0.15/Plugins/webhook.md
new file mode 100644
index 0000000000..e11d6dd84f
--- /dev/null
+++ b/versioned_docs/version-v0.15/Plugins/webhook.md
@@ -0,0 +1,191 @@
+---
+title: "Webhook"
+description: >
+  Webhook Plugin
+---
+
+## Summary
+
+Incoming Webhooks are your solution to bring data to Apache DevLake when there isn't a specific plugin ready for your DevOps tool. An Incoming Webhook allows users to actively push data to DevLake.
+
+When you create an Incoming Webhook within DevLake, DevLake generates a unique URL. You can then post JSON payloads to this URL to push data directly to your DevLake instance.
+
+In v0.14+, users can push "incidents" and "deployments" required by DORA metrics to DevLake via Incoming Webhooks.
+
+## Entities
+
+Check out the [Incoming Webhooks entities](/Overview/SupportedDataSources.md#data-collection-scope-by-each-plugin) collected by this plugin.
+
+## Metrics
+
+Metrics that can be calculated based on the data collected from Incoming Webhooks:
+
+- [Requirement Delivery Rate](/Metrics/RequirementDeliveryRate.md)
+- [Requirement Granularity](/Metrics/RequirementGranularity.md)
+- [Bug Age](/Metrics/BugAge.md)
+- [Bug Count per 1k Lines of Code](/Metrics/BugCountPer1kLinesOfCode.md)
+- [Incident Age](/Metrics/IncidentAge.md)
+- [Incident Count per 1k Lines of Code](/Metrics/IncidentCountPer1kLinesOfCode.md)
+- [DORA - Deployment Frequency](/Metrics/DeploymentFrequency.md)
+- [DORA - Lead Time for Changes](/Metrics/LeadTimeForChanges.md)
+- [DORA - Median Time to Restore Service](/Metrics/MTTR.md)
+- [DORA - Change Failure Rate](/Metrics/CFR.md)
+
+## Configuration
+
+- Configuring Incoming Webhooks via [Config UI](/UserManuals/ConfigUI/webhook.md)
+
+## API Sample Request
+
+### Deployment
+
+If you want to collect deployment data from your system, you can use the incoming webhooks for deployment.
+
+#### Payload Schema
+
+You can copy the generated deployment curl commands to your CI/CD script to post deployments to Apache DevLake. Below is the detailed payload schema:
+
+|     Key     | Required | Notes                                                                                                                                        |
+| :---------: | :------: | -------------------------------------------------------------------------------------------------------------------------------------------- |
+| commit_sha  |  ✔️ Yes  | the sha of the deployment commit                                                                                                             |
+|  repo_url   |  ✔️ Yes  | the repo URL of the deployment commit                                                                                                        |
+| environment |  ✖️ No   | the environment this deployment happens in, e.g. `PRODUCTION` `STAGING` `TESTING` `DEVELOPMENT`. <br/>The default value is `PRODUCTION`      |
+| start_time  |  ✖️ No   | Time, e.g. 2020-01-01T12:00:00+00:00.<br/>No default value.                                                                                  |
+|  end_time   |  ✖️ No   | Time, e.g. 2020-01-01T12:00:00+00:00.<br/>The default value is the time when DevLake receives the POST request.                              |
+
+#### Register a Deployment - Sample API Calls
+
+Sample CURL to post deployments to DevLake. The following command should be replaced with the actual curl command copied from your Config UI:
+
+```
+curl https://sample-url.com/api/plugins/webhook/1/deployments -X 'POST' -d '{
+    "commit_sha":"015e3d3b480e417aede5a1293bd61de9b0fd051d",
+    "repo_url":"https://github.com/apache/incubator-devlake/",
+    "environment":"PRODUCTION",
+    "start_time":"2020-01-01T12:00:00+00:00",
+    "end_time":"2020-01-02T12:00:00+00:00"
+  }'
+```
+
+If you have set a [username/password](https://devlake.apache.org/docs/UserManuals/Authentication) for Config UI, you'll need to add them to the curl command to register a `deployment`:
+
+```
+curl https://sample-url.com/api/plugins/webhook/1/deployments -X 'POST' -u 'username:password' -d '{
+    "commit_sha":"015e3d3b480e417aede5a1293bd61de9b0fd051d",
+    "repo_url":"https://github.com/apache/incubator-devlake/",
+    "environment":"PRODUCTION",
+    "start_time":"2020-01-01T12:00:00+00:00",
+    "end_time":"2020-01-02T12:00:00+00:00"
+  }'
+```
+
+#### A real-world example - Push CircleCI deployments to DevLake
+
+The following demo shows how to post "deployments" to DevLake from CircleCI. In this example, the CircleCI job 'deploy' is used to manage deployments.
+
+```
+version: 2.1
+
+jobs:
+  build:
+    docker:
+      - image: cimg/base:stable
+    steps:
+      - checkout
+      - run:
+          name: "build"
+          command: |
+            echo Hello, World!
+
+  deploy:
+    docker:
+      - image: cimg/base:stable
+    steps:
+      - checkout
+      - run:
+          name: "deploy"
+          command: |
+            # The time a deploy started
+            start_time=`date '+%Y-%m-%dT%H:%M:%S%z'`
+
+            # Some deployment tasks here ...
+            echo Hello, World!
+
+            # Send the request to DevLake after deploy
+            # The values start with a '$CIRCLE_' are CircleCI's built-in variables
+            curl https://sample-url.com/api/plugins/webhook/1/deployments -X 'POST' -d "{
+              \"commit_sha\":\"$CIRCLE_SHA1\",
+              \"repo_url\":\"$CIRCLE_REPOSITORY_URL\",
+              \"start_time\":\"$start_time\"
+            }"
+
+workflows:
+  build_and_deploy_workflow:
+    jobs:
+      - build
+      - deploy
+```
+
+### Incident / Issue
+
+If you want to collect issue or incident data from your system, you can use the two webhooks for issues.
+
+#### Register Issues - Update or Create Issues
+
+`POST https://sample-url.com/api/plugins/webhook/1/issues`
+
+This endpoint needs to be called when an issue or incident is created. The body should be a JSON object and include columns as follows:
+
+|          Keyname          | Required | Notes                                                         |
+| :-----------------------: | :------: | ------------------------------------------------------------- |
+|         board_key         |  ✔️ Yes  | the board/project this issue belongs to                       |
+|            url            |  ✖️ No   | issue's URL                                                   |
+|         issue_key         |  ✔️ Yes  | issue's key, needs to be unique in a connection               |
+|           title           |  ✔️ Yes  |                                                               |
+|        description        |  ✖️ No   |                                                               |
+|         epic_key          |  ✖️ No   | the epic this issue belongs to                                |
+|           type            |  ✖️ No   | type, such as bug/incident/epic/...                           |
+|          status           |  ✔️ Yes  | issue's status. Must be one of `TODO` `DONE` `IN_PROGRESS`    |
+|      original_status      |  ✔️ Yes  | status in your system, such as created/open/closed/...        |
+|        story_point        |  ✖️ No   |                                                               |
+|      resolution_date      |  ✖️ No   | date, Format should be 2020-01-01T12:00:00+00:00              |
+|       created_date        |  ✔️ Yes  | date, Format should be 2020-01-01T12:00:00+00:00              |
+|       updated_date        |  ✖️ No   | date, Format should be 2020-01-01T12:00:00+00:00              |
+|     lead_time_minutes     |  ✖️ No   | how long it took from this issue being accepted to being delivered, in minutes |
+|     parent_issue_key      |  ✖️ No   |                                                               |
+|         priority          |  ✖️ No   |                                                               |
+| original_estimate_minutes |  ✖️ No   |                                                               |
+|    time_spent_minutes     |  ✖️ No   |                                                               |
+|  time_remaining_minutes   |  ✖️ No   |                                                               |
+|        creator_id         |  ✖️ No   | the user id of the creator                                    |
+|       creator_name        |  ✖️ No   | the user name of the creator, it will just be used to display |
+|        assignee_id        |  ✖️ No   |                                                               |
+|       assignee_name       |  ✖️ No   |                                                               |
+|         severity          |  ✖️ No   |                                                               |
+|         component         |  ✖️ No   | the component this issue belongs to                           |
+
+More information about these columns at [DomainLayerIssueTracking](https://devlake.apache.org/docs/DataModels/DevLakeDomainLayerSchema#domain-1---issue-tracking).
+
+#### Register Issues - Close Issues (Optional)
+
+`POST https://sample-url.com/api/plugins/webhook/1/issue/:boardKey/:issueId/close`
+
+This endpoint needs to be called when an issue or incident is closed. Replace `:boardKey` and `:issueId` with specific strings and keep the body empty.
+
+#### Register Issues - Sample API Calls
+
+Sample CURL for issue creation:
+
+```
+curl https://sample-url.com/api/plugins/webhook/1/issues -X 'POST' -d '{"board_key":"DLK","url":"","issue_key":"DLK-1234","title":"a feature from DLK","description":"","epic_key":"","type":"BUG","status":"TODO","original_status":"created","story_point":0,"resolution_date":null,"created_date":"2020-01-01T12:00:00+00:00","updated_date":null,"lead_time_minutes":0,"parent_issue_key":"DLK-1200","priority":"","original_estimate_minutes":0,"time_spent_minutes":0,"time_remaining_minutes":0,"crea [...]
+```
+
+Sample CURL for Issue Closing:
+
+```
+curl http://127.0.0.1:4000/api/plugins/webhook/1/issue/DLK/DLK-1234/close -X 'POST'
+```
+
+## References
+
+- [references](/DeveloperManuals/DeveloperSetup.md#references)
diff --git a/versioned_docs/version-v0.15/Plugins/zentao.md b/versioned_docs/version-v0.15/Plugins/zentao.md
new file mode 100644
index 0000000000..72d157ae4d
--- /dev/null
+++ b/versioned_docs/version-v0.15/Plugins/zentao.md
@@ -0,0 +1,24 @@
+---
+title: "Zentao(Beta)"
+description: >
+  Zentao Plugin
+---
+
+## Summary
+
+This plugin collects Zentao data through its REST APIs. [Zentao](https://github.com/easysoft/zentaopms) is an issue-tracking tool similar to Jira.
+
+## Metrics
+
+Metrics that can be calculated based on the data collected from Zentao:
+
+- [Requirement Count](/Metrics/RequirementCount.md)
+- [Requirement Lead Time](/Metrics/RequirementLeadTime.md)
+- [Requirement Delivery Rate](/Metrics/RequirementDeliveryRate.md)
+- [Bug Age](/Metrics/BugAge.md)
+- [Incident Age](/Metrics/IncidentAge.md)
+
+## Configuration
+
+- Configuring Zentao via [config-ui](/UserManuals/ConfigUI/Zentao.md).
+- Configuring Zentao via Config UI's [advanced mode](/UserManuals/ConfigUI/AdvancedMode.md#8-zentao).
diff --git a/versioned_docs/version-v0.15/Troubleshooting/Configuration.md b/versioned_docs/version-v0.15/Troubleshooting/Configuration.md
new file mode 100644
index 0000000000..e1978a4525
--- /dev/null
+++ b/versioned_docs/version-v0.15/Troubleshooting/Configuration.md
@@ -0,0 +1,74 @@
+---
+title: "Configuration and Blueprint Troubleshooting"
+sidebar_position: 2
+description: >
+  Debug errors found in Config UI or during data collection.
+---
+
+### Common Error Code while collecting/processing data
+
+| Error code | An example                  | Causes | Solutions |
+| ---------- | ----------------------------|--------|-----------|
+| 429        | subtask collectAPiPipelines ended unexpectedly caused: Error waiting for async Collector execution caused by: retry exceeded 3 times calling projects/{projectId}/pipelines {429} | This error example is caused by GitLab's Pipeline APIs. These APIs are implemented via Cloudflare, which is different from other GitLab entities. | Two ways: <br/> - Enable `fixed rate limit` in the GitLab connection, lower the API rates to 2,000. If it works, you can try increasing the rates to ac [...]
+| 403        | error: preparing task data for gitextractor caused by: unexpected http status code: 403 | This is usually caused by the permissions of your tokens, e.g. you're using an unsupported auth method, or a token without the permissions ticked for certain entities you want to collect. | Find the supported authentication methods and token permissions that should be selected in the corresponding plugin's Config UI manuals, for example, [configuring GitHub](/docs/UserMan [...]
+| 1406       | subtask extractApiBuilds ended unexpectedly caused by: error adding the result to batch caused by: Error 1406: Data too long for column 'full_display_name' at row 138. See bug [#4053](https://github.com/apache/incubator-devlake/issues/4053) | This is usually thrown by MySQL because a certain value is too long. | A workaround is to manually change the field length to varchar(255) or longer in MySQL. Also, please put up a [bug](https://github.com/apache/incubator-devlake/iss [...]
+
+
+### Failed to collect data from the server with a self-signed certificate
+
+There might be two problems when trying to collect data from a private GitLab server with a self-signed certificate:
+
+1. "Test Connection" error. This can be solved by setting the environment variable `IN_SECURE_SKIP_VERIFY=true` for the `devlake` container
+2. "GitExtractor" fails to clone the repository due to certificate verification, sadly, neither gogit nor git2go we are using supports insecure HTTPS.
+
+A better approach would be adding your root CA to the `devlake` container:
+
+1. Mount your `rootCA.crt` into the `devlake` container
+2. Add a `command` node to install the mounted certificate
+
+Here is an example for the `docker-compose` installation; the idea applies to other installation methods.
+```
+  devlake:
+    image: apache/devlake:v...
+    ...
+    volumes:
+      ...
+      - /path/to/your/rootCA.crt:/usr/local/share/ca-certificates/rootCA.crt
+    command: [ "sh", "-c", "update-ca-certificates; lake" ]
+    ...
+```
+
+### GitExtractor task failed in a GitHub/GitLab/BitBucket blueprint
+See bug [#3719](https://github.com/apache/incubator-devlake/issues/3719)
+
+This bug happens occasionally in v0.14.x and previous versions. It was fixed by changing the Docker base image. If you encounter it, please upgrade to v0.15.x.
+
+
+### Pipeline failed with "The total number of locks exceeds the lock table size"
+
+We have had a couple of reports suggesting MySQL InnoDB would fail with this message:
+
+- [Error 1206: The total number of locks exceeds the lock table size · Issue #3849 · apache/incubator-devlake](https://github.com/apache/incubator-devlake/issues/3849)
+- [[Bug][Gitlab] gitlab collectApiJobs task failed for mysql locks error · Issue #3653 · apache/incubator-devlake](https://github.com/apache/incubator-devlake/issues/3653)
+
+The cause of the problem is:
+
+- Before Apache DevLake data collection starts, it must purge expired data in the database.
+- MySQL InnoDB Engine would create locks in memory for the records being deleted.
+- When deleting a huge number of records, the lock memory is exhausted, hence the error.
+
+You are likely to see the error when dealing with a huge repository or board. For MySQL, you can solve it by increasing the `innodb_buffer_pool_size` to a higher value.
+
+Here is an example for the `docker-compose` installation; the idea applies to other installation methods.
+```
+  mysql:
+    image: mysql:8.....
+    ...
+    # add the following line to the mysql container
+    command: --innodb-buffer-pool-size=200M
+```
+
+
+## None of them solve your problem?
+
+Sorry for the inconvenience. Please help us improve by [creating an issue](https://github.com/apache/incubator-devlake/issues).
diff --git a/versioned_docs/version-v0.15/Troubleshooting/Dashboard.md b/versioned_docs/version-v0.15/Troubleshooting/Dashboard.md
new file mode 100644
index 0000000000..1250cb01e4
--- /dev/null
+++ b/versioned_docs/version-v0.15/Troubleshooting/Dashboard.md
@@ -0,0 +1,13 @@
+---
+title: "Dashboard Troubleshooting"
+sidebar_position: 3
+description: >
+  Dashboard Troubleshooting
+---
+
+WIP
+
+
+## None of them solve your problem?
+
+Sorry for the inconvenience. Please help us improve by [creating an issue](https://github.com/apache/incubator-devlake/issues).
diff --git a/versioned_docs/version-v0.15/Troubleshooting/Installation.md b/versioned_docs/version-v0.15/Troubleshooting/Installation.md
new file mode 100644
index 0000000000..65b7b05861
--- /dev/null
+++ b/versioned_docs/version-v0.15/Troubleshooting/Installation.md
@@ -0,0 +1,12 @@
+---
+title: "Installation Troubleshooting"
+sidebar_position: 1
+description: >
+  Installation Troubleshooting
+---
+
+WIP
+
+## None of them solve your problem?
+
+Sorry for the inconvenience. Please help us improve by [creating an issue](https://github.com/apache/incubator-devlake/issues).
diff --git a/versioned_docs/version-v0.15/Troubleshooting/_category_.json b/versioned_docs/version-v0.15/Troubleshooting/_category_.json
new file mode 100644
index 0000000000..ceac2b5c73
--- /dev/null
+++ b/versioned_docs/version-v0.15/Troubleshooting/_category_.json
@@ -0,0 +1,8 @@
+{
+  "label": "Troubleshooting",
+  "position": 10,
+  "link":{
+    "type": "generated-index",
+    "slug": "Troubleshooting"
+  }
+}
diff --git a/versioned_docs/version-v0.15/UserManuals/ConfigUI/AdvancedMode.md b/versioned_docs/version-v0.15/UserManuals/ConfigUI/AdvancedMode.md
new file mode 100644
index 0000000000..3e0c34b3da
--- /dev/null
+++ b/versioned_docs/version-v0.15/UserManuals/ConfigUI/AdvancedMode.md
@@ -0,0 +1,316 @@
+---
+title: "Using Advanced Mode"
+sidebar_position: 7
+description: >
+  Using the advanced mode of Config-UI
+---
+
+## Why advanced mode?
+
+Advanced mode allows users to create any pipeline by writing JSON. This is useful for users who want to:
+
+1. Collect multiple GitHub/GitLab repos or Jira projects within a single pipeline
+2. Have fine-grained control over what entities to collect or what subtasks to run for each plugin
+3. Orchestrate a complex pipeline that consists of multiple stages of plugins.
+
+Advanced mode gives users the utmost flexibility by exposing the JSON API.
+
+## How to use advanced mode to create pipelines?
+
+1. Click on "+ New Blueprint" on the Blueprint page.
+
+![image](/img/AdvancedMode/AdvancedMode1.png)
+
+2. In step 1, click on the "Advanced Mode" link.
+
+![image](/img/AdvancedMode/AdvancedMode2.png)
+
+3. The pipeline editor expects a 2D array of plugins. The first dimension represents the different stages of the pipeline and the second dimension describes the plugins in each stage. Stages run in sequential order and plugins within the same stage run in parallel. We provide some templates for users to get started. Please also see the next section for some examples.
+
+![image](/img/AdvancedMode/AdvancedMode3.png)
+
+4. You can choose how often you would like to sync your data in this step by selecting a sync frequency option or entering a cron expression to specify your preferred schedule. After setting up the Blueprint, you will be redirected to the Blueprint's activity detail page, where you can track the progress of the current run and wait for it to finish before the dashboards become available. You can also view all historical runs of previously created Blueprints from the list on the Blueprint page.
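+
+For example, a standard five-field cron expression like the following would sync the data at midnight every day:
+
+```
+0 0 * * *
+```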
+
+## Examples
+
+### 1. GitHub
+
+Collect multiple GitHub repos sequentially. Below is an example of collecting 2 GitHub repos sequentially. It has 2 stages, each containing a GitHub task.
+
+```
+[
+  [
+    {
+      "Plugin": "github",
+      "Options": {
+        "connectionId": 1,
+        "repo": "incubator-devlake",
+        "owner": "apache"
+      }
+    }
+  ],
+  [
+    {
+      "Plugin": "github",
+      "Options": {
+        "connectionId": 1,
+        "repo": "lake",
+        "owner": "merico-dev"
+      }
+    }
+  ]
+]
+```
+
+GitHub:
+
+- `connectionId`: The ID of your GitHub connection at page http://localhost:4000/connections/github.
+- `owner`: The owner in the repo URL. For https://github.com/apache/incubator-devlake, the owner is `apache`.
+- `repo`: The repo name in the repo URL. For https://github.com/apache/incubator-devlake, the repo is `incubator-devlake`.
+
+### 2. GitLab
+
+Collect multiple GitLab repos sequentially.
+
+> When there are multiple collection tasks against a single data source, we recommend running these tasks sequentially since the collection speed is mostly limited by the API rate limit of the data source.
+> Running multiple tasks against the same data source is unlikely to speed up the process and may overwhelm the data source.
+
+Below is an example of collecting 2 GitLab repos sequentially. It has 2 stages, each containing a GitLab task.
+
+```
+[
+  [
+    {
+      "Plugin": "gitlab",
+      "Options": {
+        "connectionId": 1,
+        "projectId": 152***74
+      }
+    }
+  ],
+  [
+    {
+      "Plugin": "gitlab",
+      "Options": {
+        "connectionId": 2,
+        "projectId": 116***98
+      }
+    }
+  ]
+]
+```
+
+- `connectionId`: The ID of your GitLab connection at page http://localhost:4000/connections/gitlab.
+- `projectId`: GitLab repo's Project ID.
+
+### 3. Jenkins
+
+Collect multiple Jenkins jobs sequentially. Below is an example of collecting 2 Jenkins jobs sequentially. It has 2 stages, each containing a Jenkins task.
+
+```
+[
+    [
+        {
+            "plugin": "jenkins",
+            "options": {
+                "connectionId": 1,
+                "scopeId": "auto_deploy"
+            }
+        }
+    ],
+    [
+        {
+            "plugin": "jenkins",
+            "options": {
+                "connectionId": 2,
+                "scopeId": "Deploy test"
+            }
+        }
+    ]
+]
+```
+
+- `connectionId`: The ID of your Jenkins connection at page http://localhost:4000/connections/jenkins.
+- `scopeId`: Jenkins job name.
+
+### 4. Jira
+
+Collect multiple Jira boards sequentially. Below is an example of collecting 2 Jira boards sequentially. It has 2 stages, each containing a Jira task.
+
+```
+[
+    [
+        {
+            "plugin": "jira",
+            "options": {
+                "boardId": 8,
+                "connectionId": 1
+            }
+        }
+    ],
+    [
+        {
+            "plugin": "jira",
+            "options": {
+                "boardId": 26,
+                "connectionId": 1
+            }
+        }
+    ]
+]
+```
+
+- `connectionId`: The ID of your Jira connection at page http://localhost:4000/connections/jira.
+- `boardId`: The last number in the board URL. The URL should end with something like `RapidBoard.jspa?rapidView=8` or `/projects/xxx/boards/8`; in both cases, `8` is the board ID.
+
+### 5. Jira + GitLab
+
+Below is an example of collecting a GitLab repo and a Jira board in parallel. It has a single stage with a GitLab task and a Jira task. As GitLab and Jira use their own tokens, the two tasks can be executed in parallel.
+
+```
+[
+    [
+        {
+            "plugin":"jira",
+            "options":{
+                "boardId":8,
+                "connectionId":1
+            }
+        },
+        {
+            "plugin":"gitlab",
+            "options":{
+                "connectionId":1,
+                "projectId":116***98
+            }
+        }
+    ]
+]
+```
+
+### 6. TAPD
+
+Below is an example of collecting a TAPD workspace. Since users can configure multiple TAPD connections, a `connectionId` is required for the TAPD task to specify which connection to use.
+
+```
+[
+    [
+        {
+            "plugin": "tapd",
+            "options": {
+                "createdDateAfter": "2006-01-02T15:04:05Z",
+                "workspaceId": 34***66,
+                "connectionId": 1
+            }
+        }
+    ]
+]
+```
+
+- `createdDateAfter`: Only collect data created after the given date.
+- `connectionId`: The ID of your TAPD connection at page http://localhost:4000/connections/tapd.
+- `workspaceId`: TAPD workspace id. You can get it in two ways:
+  - url: ![tapd-workspace-id](/img/ConfigUI/tapd-find-workspace-id.png)
+  - db: you can query the `_tool_tapd_workspaces` table and get all the workspace IDs you want to collect after executing the following JSON in `advanced mode` (a sample SQL lookup follows the snippet):
+    ```json
+    [
+      [
+        {
+          "plugin": "tapd",
+          "options": {
+            "companyId": 558***09,
+            "workspaceId": 1,
+            "connectionId": 1
+          },
+          "subtasks": ["collectCompanies", "extractCompanies"]
+        }
+      ]
+    ]
+    ```
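+    Once those subtasks finish, here is a SQL sketch of the lookup (column names assumed):
+    ```sql
+    SELECT id, name FROM _tool_tapd_workspaces;
+    ```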
+
+### 7. TAPD + GitLab
+
+Below is an example of collecting a TAPD workspace and a GitLab repo in parallel. It has a single stage with a TAPD task and a GitLab task. As TAPD and GitLab are different data sources with their own rate limits, the two tasks can run in parallel.
+
+```
+[
+    [
+        {
+            "plugin": "tapd",
+            "options": {
+                "createdDateAfter": "2006-01-02T15:04:05Z",
+                "workspaceId": 6***14,
+                "connectionId": 1
+            }
+        },
+        {
+            "plugin": "gitlab",
+            "options": {
+                "connectionId": 1,
+                "projectId": 116***98
+            }
+        }
+    ]
+]
+```
+
+### 8. Zentao
+
+Below is an example of collecting a Zentao workspace. Since users can configure multiple Zentao connections, a `connectionId` is required for the Zentao task to specify which connection to use.
+
+```
+[
+  [
+    {
+      "plugin": "zentao",
+      "options": {
+        "connectionId": 1,
+        "productId": 1,
+        "projectId": 1,
+        "executionId": 1
+      }
+    }
+  ]
+]
+```
+
+- `connectionId`: The ID of your Zentao connection at page http://localhost:4000/connections/zentao.
+- `productId`: optional, Zentao product id, see "Find Product Id" for details.
+- `projectId`: optional, Zentao project id, see "Find Project Id" for details.
+- `executionId`: optional, Zentao execution id, see "Find Execution Id" for details.
+
+You must provide at least one of `productId`, `projectId`, or `executionId`.
+
+#### Find Product Id
+
+1. Navigate to the Zentao Product in the browser
+   ![](/img/ConfigUI/zentao-product.png)
+2. Click the area marked by the red square in the picture above
+   ![](/img/ConfigUI/zentao-product-id.png)
+3. The number in the red circle is the `productId`
+
+#### Find Project Id
+
+1. Navigate to the Zentao Project in the browser
+   ![](/img/ConfigUI/zentao-project-id.png)
+2. The number in the red square is the `projectId`
+
+#### Find Execution Id
+
+1. Navigate to the Zentao Execution in the browser
+   ![](/img/ConfigUI/zentao-execution-id.png)
+2. The number in the red square is the `executionId`
+
+## Editing a Blueprint (Advanced Mode)
+
+This section is for editing a Blueprint in the Advanced Mode. To edit in the Normal mode, please refer to [this guide](Tutorial.md#editing-a-blueprint-normal-mode).
+
+To edit a Blueprint created in the Advanced mode, you can simply go to the Configuration page of that Blueprint and edit its configuration.
+
+![img](/img/ConfigUI/BlueprintEditing/blueprint-edit2.png)
+
+## Troubleshooting
+
+If you run into any problem, please check the [Troubleshooting docs](/Troubleshooting/Configuration.md) or [create an issue](https://github.com/apache/incubator-devlake/issues).
diff --git a/versioned_docs/version-v0.15/UserManuals/ConfigUI/BitBucket.md b/versioned_docs/version-v0.15/UserManuals/ConfigUI/BitBucket.md
new file mode 100644
index 0000000000..317be3e8d7
--- /dev/null
+++ b/versioned_docs/version-v0.15/UserManuals/ConfigUI/BitBucket.md
@@ -0,0 +1,66 @@
+---
+title: "Configuring BitBucket(Beta)"
+sidebar_position: 2
+description: Config UI instruction for BitBucket(Cloud)
+---
+
+Visit config-ui: `http://localhost:4000` and go to `Connections` page.
+
+### Step 1 - Add Data Connections
+
+![bitbucket-add-data-connections](/img/ConfigUI/bitbucket-add-data-connections.png)
+
+#### Connection Name
+
+Name your connection.
+
+#### Endpoint URL
+
+This should be a valid REST API endpoint for BitBucket: `https://api.bitbucket.org/2.0/`. The endpoint URL should end with `/`.
+
+DevLake will support BitBucket Server in the future.
+
+#### Authentication
+
+BitBucket `username` and `app password` are required to add a connection. Learn about [how to create a BitBucket app password](https://support.atlassian.com/bitbucket-cloud/docs/create-an-app-password/).
+
+The following permissions are required to collect data from BitBucket repositories:
+
+- Account:Read
+- Workspace membership:Read
+- Projects:Read
+- Repositories:Read
+- Pull requests:Read
+- Issues:Read
+- Pipelines:Read
+- Runners:Read
+
+![bitbucket-app-password-permissions](/img/ConfigUI/bitbucket-app-password-permissions.jpeg)
+
+
+#### Proxy URL (Optional)
+
+If you are behind a corporate firewall or VPN, you may need to utilize a proxy server. Enter a valid proxy server address on your network, e.g. `http://your-proxy-server.com:1080`.
+
+
+#### Fixed Rate Limit (Optional)
+
+DevLake uses a dynamic rate limit to collect BitBucket data. You can adjust the rate limit if you want to increase or lower the speed.
+
+The maximum rate limit for different entities in BitBucket(Cloud) is [60,000 or 1,000 requests/hour](https://support.atlassian.com/bitbucket-cloud/docs/api-request-limits/), depending on the entity. Please do not use a rate that exceeds this number.
+
+
+#### Test and Save Connection
+
+Click `Test Connection`; if the connection is successful, click `Save Connection` to add the connection.
+
+
+### Step 2 - Configure Blueprint
+
+Similar to other beta plugins, BitBucket does not support `project`, which means you can only collect BitBucket data via the blueprint's advanced mode.
+
+Please go to the `Blueprints` page and switch to advanced mode. See how to use advanced mode and JSON [examples](AdvancedMode.md).
+
+### Troubleshooting
+
+If you run into any problem, please check the [Troubleshooting docs](/Troubleshooting/Configuration.md) or [create an issue](https://github.com/apache/incubator-devlake/issues).
diff --git a/versioned_docs/version-v0.15/UserManuals/ConfigUI/GitHub.md b/versioned_docs/version-v0.15/UserManuals/ConfigUI/GitHub.md
new file mode 100644
index 0000000000..4cb53afc1d
--- /dev/null
+++ b/versioned_docs/version-v0.15/UserManuals/ConfigUI/GitHub.md
@@ -0,0 +1,155 @@
+---
+title: "Configuring GitHub"
+sidebar_position: 2
+description: Config UI instruction for GitHub
+---
+
+Visit config-ui: `http://localhost:4000`.
+
+### Step 1 - Add Data Connections
+
+![github-add-data-connections](/img/ConfigUI/github-add-data-connections.png)
+
+#### Connection Name
+
+Name your connection.
+
+#### Endpoint URL
+
+This should be a valid REST API endpoint, eg. `https://api.github.com/`. The URL should end with `/`.
+
+#### Auth Token(s)
+
+You can use one of the following GitHub tokens: personal access tokens (PATs) or fine-grained personal access tokens.
+
+###### GitHub personal access tokens (Recommended)
+
+Learn about [how to create a GitHub personal access token](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/creating-a-personal-access-token). The following permissions are required to collect data from repositories:
+
+- `repo:status`
+- `repo_deployment`
+- `read:user`
+- `read:org`
+
+The data collection speed is restricted by the **rate limit of [5,000 requests](https://docs.github.com/en/rest/overview/resources-in-the-rest-api#rate-limiting) per hour per token** (15,000 requests/hour if you pay for GitHub enterprise). You can accelerate data collection by configuring _multiple_ personal access tokens. Please note that multiple tokens should be created by different GitHub accounts. Tokens belonging to the same GitHub account share the rate limit.
+
+###### Fine-grained personal access tokens
+
+Note: this token doesn't support GraphQL APIs. You have to disable `Use GraphQL APIs` on the connection page if you want to use it. However, this will significantly increase the data collection time.
+
+If you're concerned with giving classic PATs full unrestricted access to your repositories, you can use fine-grained PATs announced by GitHub recently. With fine-grained PATs, GitHub users can create read-only PATs that only have access to repositories under certain GitHub orgs. But in order to do that, org admin needs to enroll that org with fine-grained PATs beta feature first. Please check [this doc](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/creati [...]
+The token should be granted read-only permission for the following entities.
+  - `Actions`
+  - `Contents`
+  - `Discussions`
+  - `Issues`
+  - `Metadata`
+  - `Pull requests`
+
+#### Use GraphQL APIs
+
+If you are using `github.com`, or if your on-premise GitHub version supports GraphQL APIs, toggle on this setting to collect data more quickly.
+
+- GraphQL APIs are 10+ times faster than REST APIs, but they may not be supported in GitHub on-premise versions.
+- Instead of using multiple tokens to collect data, you can use ONLY ONE token because GraphQL APIs are quick enough.
+
+#### Proxy URL (Optional)
+
+If you are behind a corporate firewall or VPN, you may need to utilize a proxy server. Enter a valid proxy server address on your network, e.g. `http://your-proxy-server.com:1080`.
+
+#### Fixed Rate Limit (Optional)
+
+DevLake uses a dynamic rate limit to collect GitHub data. You can adjust the rate limit if you want to increase or lower the speed.
+
+The maximum rate limit for GitHub is **[5,000 requests/hour](https://docs.github.com/en/rest/overview/resources-in-the-rest-api#rate-limiting)** (15,000 requests/hour if you pay for GitHub enterprise). Please do not use a rate that exceeds this number.
+
+#### Test and Save Connection
+
+Click `Test Connection`; if the connection is successful, click `Save Connection` to add the connection.
+
+### Step 2 - Setting Data Scope
+
+![github-set-data-scope](/img/ConfigUI/github-set-data-scope.png)
+
+#### Projects
+
+Enter the GitHub repos to collect. If you want to collect more than 1 repo, please separate the repos with commas. For example, "apache/incubator-devlake,apache/incubator-devlake-website".
+
+#### Data Entities
+
+Usually, you don't have to modify this part. However, if you don't want to collect certain GitHub entities, you can unselect some entities to accelerate the collection speed.
+
+- Issue Tracking: GitHub issues, issue comments, issue labels, etc.
+- Source Code Management: GitHub repos, refs, commits, etc.
+- Code Review: GitHub PRs, PR comments and reviews, etc.
+- CI/CD: GitHub Workflow runs, GitHub Workflow jobs, etc.
+- Cross Domain: GitHub accounts, etc.
+
+### Step 3 - Adding Transformation Rules (Optional)
+
+![github-add-transformation-rules-list](/img/ConfigUI/github-add-transformation-rules-list.png)
+![github-add-transformation-rules](/img/ConfigUI/github-add-transformation-rules.png)
+
+Without adding transformation rules, you can still view the "[GitHub Metrics](/livedemo/DataSources/GitHub)" dashboard. However, if you want to view "[Weekly Bug Retro](/livedemo/QAEngineers/WeeklyBugRetro)", "[Weekly Community Retro](/livedemo/OSSMaintainers/WeeklyCommunityRetro)" or other pre-built dashboards, the following transformation rules, especially "Type/Bug", should be added.<br/>
+
+Each GitHub repo has at most ONE set of transformation rules.
+
+#### Issue Tracking
+
+- Severity: Parse the value of `severity` from issue labels.
+
+  - when your issue labels for severity level are like 'severity/p0', 'severity/p1', 'severity/p2', then input 'severity/(.\*)$'
+  - when your issue labels for severity level are like 'p0', 'p1', 'p2', then input '(p0|p1|p2)$'
+
+- Component: Same as "Severity".
+
+- Priority: Same as "Severity".
+
+- Type/Requirement: The `type` of issues with labels that match the given regular expression will be set to "REQUIREMENT". Unlike "PR.type", submatch does nothing, because for issue management analysis, users tend to focus on 3 kinds of types (Requirement/Bug/Incident). However, the concrete naming varies from repo to repo and from time to time, so we decided to standardize them to make metrics analysis easier.
+
+- Type/Bug: Same as "Type/Requirement", with `type` set to "BUG".
+
+- Type/Incident: Same as "Type/Requirement", with `type` set to "INCIDENT".
+
+#### CI/CD
+
+This set of configurations is used for calculating [DORA metrics](../DORA.md).
+
+If you're using GitHub Actions to conduct `deployments`, please select "Detect Deployment from Jobs in GitHub Action", and input the RegEx in the following fields:
+
+- Deployment: A GitHub Action job with a name that matches the given regEx will be considered as a deployment.
+- Production: A GitHub Action job with a name that matches the given regEx will be considered a job in the production environment.
+
+Using the above two fields, DevLake can identify production deployments among a large number of CI jobs.
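+
+As a sketch, assuming your deployment jobs have hypothetical names like `deploy-prod` or `Deploy to Staging`, a case-insensitive pattern like the following would match both as deployments:
+
+```
+(?i)deploy
+```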
+
+You can also select "Not using Jobs in GitHub Action as Deployments" if you're not using GitHub action to conduct deployments.
+
+#### Code Review
+
+- Type: The `type` of pull requests will be parsed from PR labels by given regular expression. For example:
+
+  - when your labels for PR types are like 'type/feature-development', 'type/bug-fixing' and 'type/docs', please input 'type/(.\*)$'
+  - when your labels for PR types are like 'feature-development', 'bug-fixing' and 'docs', please input '(feature-development|bug-fixing|docs)$'
+
+- Component: The `component` of pull requests will be parsed from PR labels by given regular expression.
+
+#### Additional Settings (Optional)
+
+- Tags Limit: It'll compare the last N pairs of tags to get the "commit diff" and "issue diff" between tags. N defaults to 10.
+
+  - commit diff: new commits for a tag relative to the previous one
+  - issue diff: issues solved by the new commits for a tag relative to the previous one
+
+- Tags Pattern: Only tags that match the given regular expression will be counted.
+
+- Tags Order: Only "reverse semver" order is supported for now.
+
+Please click `Save` to save the transformation rules for the repo. In the data scope list, click `Next Step` to continue configuring.
+
+### Step 4 - Setting Sync Frequency
+
+You can choose how often you would like to sync your data in this step by selecting a sync frequency option or entering a cron expression to specify your preferred schedule.
+
+## Troubleshooting
+
+If you run into any problem, please check the [Troubleshooting docs](/Troubleshooting/Configuration.md) or [create an issue](https://github.com/apache/incubator-devlake/issues).
diff --git a/versioned_docs/version-v0.15/UserManuals/ConfigUI/GitLab.md b/versioned_docs/version-v0.15/UserManuals/ConfigUI/GitLab.md
new file mode 100644
index 0000000000..f1db518f2b
--- /dev/null
+++ b/versioned_docs/version-v0.15/UserManuals/ConfigUI/GitLab.md
@@ -0,0 +1,100 @@
+---
+title: "Configuring GitLab"
+sidebar_position: 3
+description: Config UI instruction for GitLab
+---
+
+Visit config-ui: `http://localhost:4000`.
+
+### Step 1 - Add Data Connections
+
+![gitlab-add-data-connections](/img/ConfigUI/gitlab-add-data-connections.png)
+
+#### Connection Name
+
+Name your connection.
+
+#### Endpoint URL
+
+This should be a valid REST API endpoint.
+
+- If you are using gitlab.com, the endpoint will be `https://gitlab.com/api/v4/`
+- If you are self-hosting GitLab, the endpoint will look like `https://gitlab.example.com/api/v4/`
+
+The endpoint URL should end with `/`.
+
+#### Auth Token(s)
+
+GitLab personal access tokens are required to add a connection. Learn about [how to create a GitLab personal access token](https://docs.gitlab.com/ee/user/profile/personal_access_tokens.html).
+
+###### GitLab personal access tokens
+
+The following permissions are required to collect data from repositories:
+
+- `api`
+- `read_api`
+- `read_user`
+- `read_repository`
+
+You also have to double-check your GitLab user permission settings.
+
+1. Go to the Project information -> Members page of the GitLab projects you wish to collect.
+2. Check your role in this project in the Max role column. Make sure your role is not Guest; otherwise, you will not be able to collect data from this project.
+
+#### Proxy URL (Optional)
+
+If you are behind a corporate firewall or VPN, you may need to utilize a proxy server. Enter a valid proxy server address on your network, e.g. `http://your-proxy-server.com:1080`.
+
+#### Fixed Rate Limit (Optional)
+
+DevLake uses a dynamic rate limit at around 12,000 requests/hour to collect GitLab data. You can adjust the rate limit if you want to increase or lower the speed.
+
+The maximum rate limit for GitLab Cloud is **[120,000 requests/hour](https://docs.gitlab.com/ee/user/gitlab_com/index.html#gitlabcom-specific-rate-limits)**. Tokens under the same IP address share the rate limit, so the actual rate limit for your token will be lower than this number.
+
+For self-managed GitLab rate limiting, please contact your GitLab admin to [get or set the maximum rate limit](https://repository.prace-ri.eu/git/help/security/rate_limits.md) of your GitLab instance. Please do not use a rate that exceeds this number.
+
+#### Test and Save Connection
+
+Click `Test Connection`; if the connection is successful, click `Save Connection` to add the connection.
+
+### Step 2 - Setting Data Scope
+
+![gitlab-set-data-scope](/img/ConfigUI/gitlab-set-data-scope.png)
+
+#### Projects
+
+Choose the GitLab projects to collect. Due to a GitLab API limitation, you need to type more than 2 characters to search.
+
+- If you want to collect public repositories in GitLab, please uncheck "Only search my repositories" to search all repositories.
+
+#### Data Entities
+
+Usually, you don't have to modify this part. However, if you don't want to collect certain GitLab entities, you can unselect some entities to accelerate the collection speed.
+
+- Issue Tracking: GitLab issues, issue comments, issue labels, etc.
+- Source Code Management: GitLab repos, refs, commits, etc.
+- Code Review: GitLab MRs, MR comments and reviews, etc.
+- CI/CD: GitLab pipelines, jobs, etc.
+- Cross Domain: GitLab accounts, etc.
+
+### Step 3 - Adding Transformation Rules (Optional)
+
+#### CI/CD
+
+This set of configurations is used for calculating [DORA metrics](../DORA.md).
+
+If you're using GitLab CI to conduct `deployments`, please select "Detect Deployment from Jobs in GitLab CI", and input the RegEx in the following fields:
+
+- Deployment: A GitLab CI job with a name that matches the given regEx will be considered as a deployment.
+- Production: A GitLab CI job with a name that matches the given regEx will be considered a job in the production environment.
+
+Using the above two fields, DevLake can identify production deployments among a large number of CI jobs.
+
+You can also select "Not using Jobs in GitLab CI as Deployments" if you're not using GitLab CI to conduct deployments.
+
+### Step 4 - Setting Sync Frequency
+
+You can choose how often you would like to sync your data in this step by selecting a sync frequency option or entering a cron expression to specify your preferred schedule.
+
+## Troubleshooting
+
+If you run into any problem, please check the [Troubleshooting docs](/Troubleshooting/Configuration.md) or [create an issue](https://github.com/apache/incubator-devlake/issues).
diff --git a/versioned_docs/version-v0.15/UserManuals/ConfigUI/Jenkins.md b/versioned_docs/version-v0.15/UserManuals/ConfigUI/Jenkins.md
new file mode 100644
index 0000000000..41ba2d6e51
--- /dev/null
+++ b/versioned_docs/version-v0.15/UserManuals/ConfigUI/Jenkins.md
@@ -0,0 +1,72 @@
+---
+title: "Configuring Jenkins"
+sidebar_position: 5
+description: Config UI instruction for Jenkins
+---
+
+Visit config-ui: `http://localhost:4000`.
+
+### Step 1 - Add Data Connections
+
+![jenkins-add-data-connections](/img/ConfigUI/jenkins-add-data-connections.png)
+
+#### Connection Name
+
+Name your connection.
+
+#### Endpoint URL
+
+This should be a valid REST API endpoint, e.g. `https://ci.jenkins.io/`. The endpoint URL should end with `/`.
+
+#### Username (E-mail)
+
+Your user ID for the Jenkins instance.
+
+#### Password
+
+For help on Username and Password, please see Jenkins docs on [using credentials](https://www.jenkins.io/doc/book/using/using-credentials/). You can also use "API Access Token" for this field, which can be generated at `User` -> `Configure` -> `API Token` section on Jenkins.
+
+#### Fixed Rate Limit (Optional)
+
+DevLake uses a dynamic rate limit to collect Jenkins data. You can adjust the rate limit if you want to increase or lower the speed.
+
+We haven't found any documentation on Jenkins rate limiting. Please open an issue if you find any.
+
+#### Test and Save Connection
+
+Click `Test Connection`; if the connection is successful, click `Save Connection` to add the connection.
+
+### Step 2 - Setting Data Scope
+
+![jenkins-set-data-scope](/img/ConfigUI/jenkins-set-data-scope.png)
+
+#### Jobs
+
+Choose the Jenkins jobs. All `Jenkins builds` under these jobs will be collected.
+
+#### Data Entities
+
+Jenkins only supports `CI/CD` domain entities, transformed from Jenkins builds and stages.
+
+- CI/CD: Jenkins builds, stages, etc.
+
+### Step 3 - Adding Transformation Rules (Optional)
+
+This set of configurations is used for calculating [DORA metrics](../DORA.md).
+
+If you're using Jenkins builds to conduct `deployments`, please select "Detect Deployment from Jenkins Builds", and input the RegEx in the following fields:
+
+- Deployment: A Jenkins build with a name that matches the given regEx will be considered as a deployment.
+- Production: A Jenkins build with a name that matches the given regEx will be considered a build in the production environment.
+
+Using the above two fields, DevLake can identify production deployments among a large number of CI jobs.
+
+You can also select "Not using Jenkins builds as Deployments" if you're not using Jenkins to conduct deployments.
+
+### Step 4 - Setting Sync Frequency
+
+You can choose how often you would like to sync your data in this step by selecting a sync frequency option or entering a cron expression to specify your preferred schedule.
+
+## Troubleshooting
+
+If you run into any problem, please check the [Troubleshooting docs](/Troubleshooting/Configuration.md) or [create an issue](https://github.com/apache/incubator-devlake/issues).
diff --git a/versioned_docs/version-v0.15/UserManuals/ConfigUI/Jira.md b/versioned_docs/version-v0.15/UserManuals/ConfigUI/Jira.md
new file mode 100644
index 0000000000..f9aaf2683a
--- /dev/null
+++ b/versioned_docs/version-v0.15/UserManuals/ConfigUI/Jira.md
@@ -0,0 +1,79 @@
+---
+title: "Configuring Jira"
+sidebar_position: 4
+description: Config UI instruction for Jira
+---
+
+Visit config-ui: `http://localhost:4000`.
+
+### Step 1 - Add Data Connections
+![jira-add-data-connections](/img/ConfigUI/jira-add-data-connections.png)
+
+#### Connection Name
+Name your connection.
+
+#### Endpoint URL
+This should be a valid REST API endpoint.
+   - If you are using Jira Cloud, the endpoint will be `https://<mydomain>.atlassian.net/rest/`
+   - If you are self-hosting Jira v8+, the endpoint will look like `https://jira.<mydomain>.com/rest/`
+
+The endpoint URL should end with `/`.
+
+#### Username / Email
+Input the username or email of your Jira account.
+
+#### Password
+- If you are using Jira Cloud, please input the [Jira personal access token](https://confluence.atlassian.com/enterprise/using-personal-access-tokens-1026032365.html).
+- If you are using Jira Server v8+, please input the password of your Jira account.
+
+#### Proxy URL (Optional)
+If you are behind a corporate firewall or VPN, you may need to utilize a proxy server. Enter a valid proxy server address on your network, e.g. `http://your-proxy-server.com:1080`.
+
+#### Fixed Rate Limit (Optional)
+
+DevLake uses a dynamic rate limit to collect Jira data. You can adjust the rate limit if you want to increase or lower the speed. If you encounter a 403 error during data collection, please lower the rate limit.
+
+Jira (Cloud) uses dynamic rate limiting and has no clear rate limit. For Jira Server's rate limiting, please contact your Jira Server admin to [get or set the maximum rate limit](https://repository.prace-ri.eu/git/help/security/rate_limits.md) of your Jira instance. Please do not use a rate that exceeds this number.
+
+
+#### Test and Save Connection
+Click `Test Connection`; if the connection is successful, click `Save Connection` to add the connection.
+
+
+### Step 2 - Setting Data Scope
+![jira-set-data-scope](/img/ConfigUI/jira-set-data-scope.png)
+
+#### Projects
+Choose the Jira boards to collect.
+
+#### Data Entities
+Usually, you don't have to modify this part. However, if you don't want to collect certain Jira entities, you can unselect some entities to accelerate the collection speed.
+- Issue Tracking: Jira issues, issue comments, issue labels, etc.
+- Cross Domain: Jira accounts, etc.
+
+### Step 3 - Adding Transformation Rules (Optional)
+![jira-add-transformation-rules-list](/img/ConfigUI/jira-add-transformation-rules-list.png)
+ 
+Without adding transformation rules, you cannot view all charts in the "Jira" or "Engineering Throughput and Cycle Time" dashboards.<br/>
+
+Each Jira board has at most ONE set of transformation rules.
+
+![jira-add-transformation-rules](/img/ConfigUI/jira-add-transformation-rules.png)
+
+#### Issue Tracking
+
+- Requirement: choose the issue types to be transformed to "REQUIREMENT".
+- Bug: choose the issue types to be transformed to "BUG".
+- Incident: choose the issue types to be transformed to "INCIDENT".
+- Epic Key: choose the custom field that represents Epic key. In most cases, it is "Epic Link".
+- Story Point: choose the custom field that represents story points. In most cases, it is "Story Points".
+
+#### Additional Settings
+- Remotelink Commit SHA: parse the commits from an issue's remote links by the given regular expression so that the relationship between `issues` and `commits` can be created. You can directly use the regular expression `/commit/([0-9a-f]{40})$`.
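+
+For example, a remote link like the following (hypothetical org, repo, and commit SHA) would match that pattern, capturing the 40-character SHA:
+
+```
+https://github.com/your-org/your-repo/commit/0123456789abcdef0123456789abcdef01234567
+```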
+
+### Step 4 - Setting Sync Frequency
+You can choose how often you would like to sync your data in this step by selecting a sync frequency option or entering a cron expression to specify your preferred schedule.
+
+
+## Troubleshooting
+
+If you run into any problem, please check the [Troubleshooting docs](/Troubleshooting/Configuration.md) or [create an issue](https://github.com/apache/incubator-devlake/issues).
diff --git a/versioned_docs/version-v0.15/UserManuals/ConfigUI/Tapd.md b/versioned_docs/version-v0.15/UserManuals/ConfigUI/Tapd.md
new file mode 100644
index 0000000000..c65270d449
--- /dev/null
+++ b/versioned_docs/version-v0.15/UserManuals/ConfigUI/Tapd.md
@@ -0,0 +1,41 @@
+---
+title: "Configuring TAPD(Beta)"
+sidebar_position: 6
+description: Config UI instruction for Tapd
+---
+
+Visit config-ui: `http://localhost:4000` and go to `Connections` page.
+
+### Step 1 - Add Data Connections
+![tapd-add-data-connections](/img/ConfigUI/tapd-add-data-connections.png)
+
+#### Connection Name
+Name your connection.
+
+#### Endpoint URL
+This should be a valid REST API endpoint: `https://api.tapd.cn/`. The endpoint URL should end with `/`.
+
+#### Username / Password
+Input the username and password of your TAPD account. You can follow the steps below.
+![tapd-account](/img/ConfigUI/tapd-account.png)
+
+#### Proxy URL (Optional)
+If you are behind a corporate firewall or VPN, you may need to utilize a proxy server. Enter a valid proxy server address on your network, e.g. `http://your-proxy-server.com:1080`.
+
+#### Rate Limit (Optional)
+For TAPD, we suggest setting the rate limit to 3,500.
+
+#### Test and Save Connection
+Click `Test Connection`; if the connection is successful, click `Save Connection` to add the connection.
+
+### Step 2 - Configure Blueprint
+
+Similar to other beta plugins, TAPD does not support `project`, which means you can only collect TAPD data via the blueprint's advanced mode.
+
+Please go to the `Blueprints` page and switch to advanced mode. See how to use advanced mode and JSON [examples](AdvancedMode.md#6-tapd).
+
+### Troubleshooting
+
+If you run into any problem, please check the [Troubleshooting docs](/Troubleshooting/Configuration.md) or [create an issue](https://github.com/apache/incubator-devlake/issues).
diff --git a/versioned_docs/version-v0.15/UserManuals/ConfigUI/Tutorial.md b/versioned_docs/version-v0.15/UserManuals/ConfigUI/Tutorial.md
new file mode 100644
index 0000000000..00f29ee5e1
--- /dev/null
+++ b/versioned_docs/version-v0.15/UserManuals/ConfigUI/Tutorial.md
@@ -0,0 +1,93 @@
+---
+title: "Tutorial"
+sidebar_position: 1
+description: Config UI instruction
+---
+
+## Overview
+The Apache DevLake Config UI allows you to configure the data you wish to collect through a graphical user interface. Visit config-ui at `http://localhost:4000`.
+
+## Create a Project
+Starting from v0.15, DevLake has introduced the Project feature to allow viewing project-based metrics, such as DORA. To create a project, simply go to Project on the main navigation, click on the "+ New Project" button and fill out the info in the dialog below.
+
+![img](/img/ConfigUI/BlueprintCreation-v0.15/project.png)
+
+## Create a Blueprint
+
+### Introduction
+A Blueprint is a plan that covers all the work to get your raw data ready for query and metric computation in the dashboards. Blueprints can either be used to collect data for a Project or be used alone without being dependent on any Project. To use the Blueprint within a Project, you can create the Blueprint once a Project is created; to use it alone, you can create the Blueprint from the Blueprint page on the main navigation.
+
+For either usage of the Blueprint, creating it consists of four steps:
+
+1. Adding Data Connections: Add new or select from existing data connections for the data you wish to collect
+2. Setting Data Scope: Select the scope of data (e.g. GitHub projects or Jira boards) for your data connections
+3. Adding Transformations (Optional): Add transformation rules for the data scope you have selected in order to view corresponding metrics
+4. Setting the Sync Policies: Set the sync frequency, time range and the skip-on-fail option for your data
+
+### Step 1 - Add Data Connections
+There are two ways to add data connections to your Blueprint: adding them during the creation of a Blueprint and adding them separately on the Data Integrations page. There is no difference between these two ways.
+
+When adding data connections from the Blueprint, you can either create a new or select from existing data connections. 
+
+![img](/img/ConfigUI/BlueprintCreation-v0.15/step1.png)
+
+### Step 2 - Set Data Scope
+After adding data connections, click on "Next Step" and you will be prompted to select the data scope of each data connection. For instance, for a GitHub connection, you will need to select or enter the projects you wish to sync, and for Jira, you will need to select from your boards.
+
+![img](/img/ConfigUI/BlueprintCreation-v0.15/step2-1.png)
+![img](/img/ConfigUI/BlueprintCreation-v0.15/step2-2.png)
+
+### Step 3 - Add Transformations (Optional)
+This step is required for viewing certain metrics (e.g. Bug Age, Bug Count per 1k Lines of Code and DORA) in the pre-built dashboards that require data transformation. We highly recommend adding Transformations for your data for the best display of the metrics, but you can still view the basic metrics if you skip this step.
+
+![img](/img/ConfigUI/BlueprintCreation-v0.15/step3-1.png)
+![img](/img/ConfigUI/BlueprintCreation-v0.15/step3-2.png)
+
+### Step 4 - Set the Sync Policies
+Time Filter: You can select the time range of the data you wish to sync to speed up the collection process.
+
+Frequency: You can choose how often you would like to sync your data in this step by selecting a sync frequency option or entering a cron expression to specify your preferred schedule.
+
+Running Policy: By default, the `Skip failed tasks` option is checked to avoid losing all data when encountering a few bugs during data collection, especially when you are collecting a large volume of data, e.g. 10+ GitHub repositories, Jira boards, etc. For clarity, a task is a unit of a pipeline, which is an execution of a blueprint. By default, when a task fails, the whole pipeline fails and all the data that has been collected is discarded. By skipping failed tasks, the pipeline will continue to r [...]
+
+![img](/img/ConfigUI/BlueprintCreation-v0.15/step4.png)
+
+### View the Blueprint Status and Download Logs for Historical Runs
+After setting up the Blueprint, you will be redirected to the Blueprint's status page, where you can track the progress of the current run and wait for it to finish before the dashboards become available. You can also view all historical runs of previously created Blueprints from the list on the Blueprint page.
+
+If you run into any errors, you can also download the pipeline logs and share them with us on Slack so that our developers can help you debug.
+
+![img](/img/ConfigUI/BlueprintEditing/blueprint-edit3.png)
+
+## Edit a Blueprint (Normal Mode)
+If you switch to the Configuration tab on the Blueprint detail page, you can see the settings of your Blueprint and edit them.
+
+In the current version, the Blueprint editing feature **allows** editing:
+- The Blueprint's name
+- The sync policies
+- The data scope of a connection
+- The data entities of the data scope
+- The transformation rules of any data scope
+
+and currently does **NOT allow**:
+- Adding or deleting connections of an existing Blueprint (will be available in the future)
+- Editing any connections
+
+Please note: 
+If you have created the Blueprint in the Normal Mode, you will only be able to edit it in the Normal Mode; if you have created it in the Advanced Mode, please refer to [this guide](AdvancedMode.md#editing-a-blueprint-advanced-mode) for editing.
+
+![img](/img/ConfigUI/BlueprintEditing/blueprint-edit1.png)
+
+## Create and Manage Data Connections
+
+The Data Connections page allows you to view, create and manage all your data connections in one place.
+![img](/img/ConfigUI/BlueprintCreation-v0.15/connections.png)
+
+## Manage Transformations
+The Transformations page allows you to manage all your transformation rules.
+![img](/img/ConfigUI/BlueprintCreation-v0.15/transformations.png)
+
+
+## Troubleshooting
+
+If you run into any problem, please check the [Troubleshooting docs](/Troubleshooting/Configuration.md), contact us on [Slack](https://join.slack.com/t/devlake-io/shared_invite/zt-17b6vuvps-x98pqseoUagM7EAmKC82xQ) or [create an issue](https://github.com/apache/incubator-devlake/issues).
diff --git a/versioned_docs/version-v0.15/UserManuals/ConfigUI/Zentao.md b/versioned_docs/version-v0.15/UserManuals/ConfigUI/Zentao.md
new file mode 100644
index 0000000000..d8cfde289a
--- /dev/null
+++ b/versioned_docs/version-v0.15/UserManuals/ConfigUI/Zentao.md
@@ -0,0 +1,37 @@
+---
+title: "Configuring Zentao(Beta)"
+sidebar_position: 6
+description: Config UI instruction for Zentao
+---
+
+Visit config-ui: `http://localhost:4000` and go to `Connections` page.
+
+### Step 1 - Add Data Connections
+![zentao-add-data-connections](/img/ConfigUI/zentao-add-data-connections.png)
+
+#### Connection Name
+Name your connection.
+
+#### Endpoint URL
+This should be a valid REST API endpoint
+   - `https://YOUR_DOMAIN:YOUR_PORT/`
+The endpoint url should end with `/`.
+
+#### Username/Password
+Input the username and password of your Zentao account.
+
+#### Proxy URL (Optional)
+If you are behind a corporate firewall or VPN, you may need to utilize a proxy server. Enter a valid proxy server address on your network, e.g. `http://your-proxy-server.com:1080`.
+
+#### Test and Save Connection
+Click `Test Connection`; if the connection is successful, click `Save Connection` to add the connection.
+
+### Step 2 - Configure Blueprint
+
+Similar to other beta plugins, Zentao does not support `project`, which means you can only collect Zentao data via the blueprint's advanced mode.
+
+Please go to the `Blueprints` page and switch to advanced mode. See how to use advanced mode and JSON [examples](AdvancedMode.md#8-zentao).
+
+### Troubleshooting
+
+If you run into any problem, please check the [Troubleshooting docs](/Troubleshooting/Configuration.md) or [create an issue](https://github.com/apache/incubator-devlake/issues).
diff --git a/versioned_docs/version-v0.15/UserManuals/ConfigUI/_category_.json b/versioned_docs/version-v0.15/UserManuals/ConfigUI/_category_.json
new file mode 100644
index 0000000000..62f99d484f
--- /dev/null
+++ b/versioned_docs/version-v0.15/UserManuals/ConfigUI/_category_.json
@@ -0,0 +1,4 @@
+{
+  "label": "Config UI",
+  "position": 4
+}
diff --git a/versioned_docs/version-v0.15/UserManuals/ConfigUI/webhook.md b/versioned_docs/version-v0.15/UserManuals/ConfigUI/webhook.md
new file mode 100644
index 0000000000..9616feab99
--- /dev/null
+++ b/versioned_docs/version-v0.15/UserManuals/ConfigUI/webhook.md
@@ -0,0 +1,34 @@
+---
+title: "Configuring Incoming Webhook"
+sidebar_position: 7
+description: Config UI instruction for Webhook
+---
+
+Visit config-ui: `http://localhost:4000`.
+
+### Step 1 - Add a new incoming webhook
+
+Go to the 'Data Connections' page. Create a webhook.
+
+![webhook-add-data-connections](/img/ConfigUI/webhook-add-data-connections.png)
+
+We recommend that you give your webhook connection a unique name so that you can identify and manage where you have used it later.
+
+### Step 2 - Use Webhooks
+
+Click on Generate POST URL, and you will find four webhook URLs. Copy the ones that suit your usage into your CI or issue-tracking systems. You can always come back to the webhook page to copy the URLs later on.
+
+![webhook-use](/img/ConfigUI/webhook-use.png)
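+
+As a sketch, pushing a deployment to the generated URL could look like the call below (the URL, connection ID, and field values are placeholders; see the [webhook plugin doc](https://devlake.apache.org/docs/Plugins/webhook) for the full payload schema):
+
+```
+curl https://sample-url.com/api/plugins/webhook/1/deployments -X 'POST' -d '{
+  "commit_sha": "e4ee8a9b4a1c0c4e4b2b1e0a8e9d8c7b6a5f4e3d",
+  "repo_url": "https://github.com/your-org/your-repo",
+  "start_time": "2023-01-01T12:00:00+00:00"
+}'
+```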
+
+#### Put webhook on the internet
+
+For the new webhook to work, it needs to be accessible from the DevOps tools from which you would like to push data to DevLake. If DevLake is deployed in your private network and your DevOps tool (e.g. CircleCI) is a cloud service that lives outside of your private network, then you need to make DevLake's webhook accessible to the outside cloud service.
+
+There are many tools for this:
+
+- For testing and quick setup, [ngrok](https://ngrok.com/) is a useful utility that provides a publicly accessible web URL to any locally hosted application. You can put DevLake's webhook on the internet within 5 mins by following ngrok's [Getting Started](https://ngrok.com/docs/getting-started) guide. Note that, when posting to webhook, you may need to replace the `localhost` part in the webhook URL with the forwarding URL that ngrok provides.
+- If you prefer DIY, please check out open-source reverse proxies like [fatedier/frp](https://github.com/fatedier/frp) or go for the classic [nginx](https://www.nginx.com/).
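+
+For instance, assuming DevLake's webhook endpoint is served locally on port 4000 (adjust the port to your setup), ngrok can expose it with a single command:
+
+```
+ngrok http 4000
+```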
+
+## Troubleshooting
+
+If you run into any problem, please check the [Troubleshooting](/Troubleshooting/Configuration.md) or [create an issue](https://github.com/apache/incubator-devlake/issues)
diff --git a/versioned_docs/version-v0.15/UserManuals/DORA.md b/versioned_docs/version-v0.15/UserManuals/DORA.md
new file mode 100644
index 0000000000..81ac826b2b
--- /dev/null
+++ b/versioned_docs/version-v0.15/UserManuals/DORA.md
@@ -0,0 +1,187 @@
+---
+title: "DORA"
+sidebar_position: 7
+description: >
+  DORA Metrics
+---
+
+
+This document describes everything you need to know about DORA, and implementing this powerful and practical framework in DevLake.
+
+## What are DORA metrics?
+Created six years ago by a team of researchers, DORA stands for "DevOps Research & Assessment". It is the product of years of research, examining thousands of teams, in search of a reliable and actionable approach to understanding the performance of software development teams.
+
+DORA has since become a standardized framework focused on the stability and velocity of development processes, one that avoids the more controversial aspects of productivity and individual performance measures.
+
+There are two key clusters of data inside DORA: Velocity and Stability. The DORA framework is focused on keeping them in context with each other, as a whole, rather than as independent variables, making the data more challenging to misinterpret or abuse. 
+
+Within velocity are two core metrics: 
+- [Deployment Frequency](https://devlake.apache.org/docs/Metrics/DeploymentFrequency): The number of successful deployments to production; how rapidly is your team releasing to users?
+- [Lead Time for Changes](https://devlake.apache.org/docs/Metrics/LeadTimeForChanges): How long does it take from commit to the code running in production? This is important, as it reflects how quickly your team can respond to user requirements.
+
+Stability is composed of two core metrics:
+- [Median Time to Restore Service](https://devlake.apache.org/docs/Metrics/MTTR): How long does it take the team to properly recover from a failure once it is identified?
+- [Change Failure Rate](https://devlake.apache.org/docs/Metrics/CFR): How often are your deployments causing a failure?
+
+![](https://i.imgur.com/71EUflb.png)
+
+To make DORA even more actionable, there are well-established benchmarks to determine if you are performing at "Elite", "High", "Medium", or "Low" levels. Inside DevLake, you will find the benchmarking table available to assess and compare your own projects.  
+
+## Why is DORA important?
+DORA metrics help teams and projects measure and improve software development practices to consistently deliver reliable products, and thus happy users!
+
+
+## How to implement DORA metrics with Apache DevLake?
+
+You can set up DORA metrics in DevLake in a few steps:
+- **Install**: [Getting Started](https://devlake.apache.org/docs/GettingStarted)
+- **Collect**: Collect data via blueprint
+    - In the blueprint, select the data you wish to collect, and make sure you have selected the data required for DORA metrics
+    - Configure DORA-related transformation rules to define `deployments` and `incidents`
+    - Select a sync frequency for your data, save and run the blueprint.
+- **Report**: DevLake provides a built-in DORA dashboard. See an example screenshot below or check out our [live demo](https://grafana-lake.demo.devlake.io/grafana/d/qNo8_0M4z/dora?orgId=1).
+![DORA Dashboard](https://i.imgur.com/y1pUIsk.png)
+
+DevLake now supports Jenkins, GitHub Actions and GitLab CI as data sources for `deployments` data; Jira, GitHub issues, and TAPD as the sources for `incidents` data; and GitHub PRs and GitLab MRs as the sources for `changes` data.
+
+If your CI/CD tools are not listed on the [Supported Data Sources](https://devlake.apache.org/docs/SupportedDataSources) page, have no fear! DevLake provides incoming webhooks to push your `deployments` data to DevLake. The webhook configuration doc can be found [here](https://devlake.apache.org/docs/UserManuals/ConfigUI/webhook/).
+
+
+## A real-world example
+
+Let's walk through the DORA implementation process for a team with the following toolchain:
+
+- Code Hosting: GitHub
+- CI/CD: GitHub Actions + CircleCI
+- Issue Tracking: Jira
+
+Calculating DORA metrics requires three key entities: **changes**, **deployments**, and **incidents**. Their exact definitions of course depend on a team's DevOps practice and vary from team to team. For the team in this example, let's assume the following definitions:
+
+- Changes: All pull requests in GitHub.
+- Deployments: GitHub action jobs that have "deploy" in their names and CircleCI's deployment jobs.
+- Incidents: Jira issues whose types are `Crash` or `Incident`.
+
+In the next section, we'll demonstrate how to configure DevLake to implement DORA metrics for the aforementioned example team.
+
+### Collect GitHub & Jira data via `blueprint`
+1. Visit the config-ui at `http://localhost:4000`
+2. Create a `blueprint`, let's name it "Blueprint for DORA", add a Jira and a GitHub connection. Click `Next Step`
+![](https://i.imgur.com/lpPRZ6v.png)
+
+3. Select Jira boards and GitHub repos to collect, click `Next Step`
+![](https://i.imgur.com/Ko38n6J.png)
+
+4. Click `Add Transformation` to configure for DORA metrics
+![](https://i.imgur.com/Lhcu2DE.png)
+
+5. To keep it simple, fields with a ![](https://i.imgur.com/rrLopFx.png) label are DORA-related configurations for each data source. Via these fields, you can define what counts as an "incident" or a "deployment" for each data source. After all data connections have been configured, click `Next Step`
+   - This team uses the Jira issue types `Crash` and `Incident` as "incidents", so choose the two types in the "incident" field. Jira issues of these two types will be transformed to "incidents" in DevLake.
+   - This team uses the GitHub Actions jobs named `deploy` and `build-and-deploy` to deploy, so type in `(?i)deploy` to match these jobs. These jobs will be transformed to "deployments" in DevLake.
+   ![](https://i.imgur.com/1JZA2xn.png)
+   
+   Note: The following example shows where to find GitHub Actions jobs. It's easy to mix them up with GitHub workflows.
+   ![](https://i.imgur.com/Y2hchEh.png)
+   
+
+6. Choose a sync frequency, then click `Save and Run Now` to start data collection. The time to completion varies by data source and depends on the volume of data.
+![](https://i.imgur.com/zPkfzGr.png)
+
+For more details, please refer to our [blueprint manuals](https://devlake.apache.org/docs/UserManuals/ConfigUI/Tutorial).
+
+### Collect CircleCI data via `webhook`
+
+Using CircleCI as an example, we'll demonstrate how to actively push data to DevLake via webhooks, for cases where DevLake doesn't have a tool-specific plugin to pull data from your data source.
+
+7. Visit "Data Connections" page in config-ui and select "Issue/Deployment Incoming Webhook".
+
+8. Click "Add Incoming Webhook", give it a name, and click "Generate POST URL". DevLake will generate URLs that you can send JSON payloads to push `deployments` and `incidents` to Devlake. Copy the `Deployment` curl command.
+![](https://i.imgur.com/jq6lzg1.png)
+![](https://i.imgur.com/jBMQnjt.png)
+
+9. Now head to your CircleCI pipelines page in a new tab. Find your deployment pipeline and click `Configuration File`
+![](https://i.imgur.com/XwPzmyk.png)
+
+10. Paste the curl command copied in step 8 into `config.yml` and change the key-value pairs in the payload. See the full payload schema [here](https://devlake.apache.org/docs/Plugins/webhook/#register-a-deployment).
+  ```
+  version: 2.1
+
+  jobs:
+    build:
+      docker:
+        - image: cimg/base:stable
+      steps:
+        - checkout
+        - run:
+            name: "build"
+            command: |
+              echo Hello, World!
+
+    deploy:
+      docker:
+        - image: cimg/base:stable
+      steps:
+        - checkout
+        - run:
+            name: "deploy"
+            command: |
+              # The time a deploy started
+              start_time=`date '+%Y-%m-%dT%H:%M:%S%z'`
+
+              # Some deployment tasks here ...
+              echo Hello, World!
+
+              # Send the request to DevLake after deploy
+              # Values starting with '$CIRCLE_' are CircleCI's built-in variables
+              curl https://sample-url.com/api/plugins/webhook/1/deployments -X 'POST' -d "{
+                \"commit_sha\":\"$CIRCLE_SHA1\",
+                \"repo_url\":\"$CIRCLE_REPOSITORY_URL\",
+                \"start_time\":\"$start_time\"
+              }"
+
+  workflows:
+    build_and_deploy_workflow:
+      jobs:
+        - build
+        - deploy
+  ```
+  If you have set a [username/password](https://devlake.apache.org/docs/UserManuals/Authentication) for Config UI, you need to add them to the curl to register a deployment:
+
+  ```
+  curl https://sample-url.com/api/plugins/webhook/1/deployments -X 'POST' -u 'username:password' -d "{
+      \"commit_sha\":\"$CIRCLE_SHA1\",
+      \"repo_url\":\"$CIRCLE_REPOSITORY_URL\",
+      \"start_time\":\"$start_time\"
+    }"
+  ```
+
+11. Run the modified CircleCI pipeline. Check to verify that the request has been successfully sent.
+![](https://i.imgur.com/IyneAMn.png)
+
+12. You will find the corresponding `deployments` in the `cicd_tasks` table in DevLake's database.
+![](https://i.imgur.com/6hguCYK.png)
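+
+To double-check from the database side, you can run a quick query. The sketch below assumes the default `cicd_tasks` domain-layer table and its `type`/`started_date` columns; adjust the names if your schema version differs:
+
+```
+SELECT id, name, result, started_date, finished_date
+FROM cicd_tasks
+WHERE type = 'DEPLOYMENT'
+ORDER BY started_date DESC
+LIMIT 10;
+```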
+
+### View and customize DevLake's DORA dashboard 
+
+With all the data collected, DevLake's DORA dashboard is ready to deliver your DORA metrics and benchmarks. You can find the DORA dashboard within the Grafana instance shipped with DevLake, ready for you to put into action.
+
+You can customize the DORA dashboard by editing the underlying SQL query of each panel.
+
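+For instance, a simplified deployment-frequency-style panel query might look like the sketch below. This is an illustration assuming the `cicd_tasks` domain table, not the exact SQL shipped with the built-in dashboard:
+
+```
+SELECT DATE_FORMAT(finished_date, '%Y-%m') AS month,
+       COUNT(*) AS deployment_count
+FROM cicd_tasks
+WHERE type = 'DEPLOYMENT' AND result = 'SUCCESS'
+GROUP BY 1
+ORDER BY 1;
+```
+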
+For a breakdown of each metric's SQL query, please refer to the corresponding metric docs:
+  - [Deployment Frequency](https://devlake.apache.org/docs/Metrics/DeploymentFrequency)
+  - [Lead Time for Changes](https://devlake.apache.org/docs/Metrics/LeadTimeForChanges)
+  - [Median Time to Restore Service](https://devlake.apache.org/docs/Metrics/MTTR)
+  - [Change Failure Rate](https://devlake.apache.org/docs/Metrics/CFR)
+
+If you aren't familiar with Grafana, please refer to our [Grafana doc](./Dashboards/GrafanaUserGuide.md), or jump into Slack for help.
+
+<br/>
+
+:tada::tada::tada: Congratulations! You are now a DevOps Hero, with your own DORA dashboard! 
+
+<br/><br/><br/>
+
+
+
+## Troubleshooting
+
+If you run into any problems, please check the [Troubleshooting docs](/Troubleshooting/Configuration.md) or [create an issue](https://github.com/apache/incubator-devlake/issues).
diff --git a/versioned_docs/version-v0.15/UserManuals/Dashboards/AccessControl.md b/versioned_docs/version-v0.15/UserManuals/Dashboards/AccessControl.md
new file mode 100644
index 0000000000..500fd0d385
--- /dev/null
+++ b/versioned_docs/version-v0.15/UserManuals/Dashboards/AccessControl.md
@@ -0,0 +1,44 @@
+---
+title: "Dashboard Access Control"
+sidebar_position: 2
+description: >
+  Dashboard Access Control
+---
+
+
+# Dashboard Access Control
+
+This tutorial shows how to leverage Grafana's role-based access control (RBAC) to manage what dashboards a user has access to. If you're setting up a single DevLake instance to be shared by multiple teams in your organization, this tutorial can help you achieve data segregation between teams.
+
+## Example solution: one folder for each team
+
+One of the simplest solutions is to create one Grafana folder for each team and assign permissions to teams at the folder level. Below is a step-by-step walk through.
+
+1. Sign in as Grafana admin and create a new folder
+
+![create-new-folder](/img/Grafana/create-new-folder.png)
+
+2. Click "Permissions" tab and remove the default access of "Editor (Role)" and "Viewer (Role)"
+
+![folder-permission](/img/Grafana/folder-permission.png)
+
+After removing default permissions:
+
+![after-remove-default-permissions](/img/Grafana/after-remove-default-permissions.png)
+
+
+3. Add "View" permission to the target team (you'll need to create this team in Grafana first)
+
+![add-team-permission](/img/Grafana/add-team-permission.png)
+
+4. Copy/move dashboards into this folder (you may need to edit a dashboard so that it only shows data that belongs to this team)
+
+## Reference
+
+1. [Manage dashboard permissions by Grafana](https://grafana.com/docs/grafana/latest/administration/user-management/manage-dashboard-permissions/#grant-dashboard-folder-permissions)
+
+
+
+
+
+
diff --git a/versioned_docs/version-v0.15/UserManuals/Dashboards/GrafanaUserGuide.md b/versioned_docs/version-v0.15/UserManuals/Dashboards/GrafanaUserGuide.md
new file mode 100644
index 0000000000..47a19f25de
--- /dev/null
+++ b/versioned_docs/version-v0.15/UserManuals/Dashboards/GrafanaUserGuide.md
@@ -0,0 +1,125 @@
+---
+title: "Grafana User Guide"
+sidebar_position: 2
+description: >
+  Grafana User Guide
+---
+
+
+# Grafana
+
+<img src="https://user-images.githubusercontent.com/3789273/128533901-3107e9bf-c3e3-4320-ba47-879fe2b0ea4d.png" width="450px" />
+
+When you first visit Grafana, you will see a sample dashboard with some basic charts set up from the database.
+
+## Contents
+
+Section | Link
+:------------ | :-------------
+Logging In | [View Section](#logging-in)
+Viewing All Dashboards | [View Section](#viewing-all-dashboards)
+Customizing a Dashboard | [View Section](#customizing-a-dashboard)
+Dashboard Settings | [View Section](#dashboard-settings)
+Provisioning a Dashboard | [View Section](#provisioning-a-dashboard)
+Troubleshooting DB Connection | [View Section](#troubleshooting-db-connection)
+
+## Logging In<a id="logging-in"></a>
+
+Once the app is up and running, visit `http://localhost:3002` to view the Grafana dashboard.
+
+Default login credentials are:
+
+- Username: `admin`
+- Password: `admin`
+
+## Viewing All Dashboards<a id="viewing-all-dashboards"></a>
+
+To see all dashboards created in Grafana visit `/dashboards`
+
+Or, use the sidebar and click on **Manage**:
+
+![Screen Shot 2021-08-06 at 11 27 08 AM](https://user-images.githubusercontent.com/3789273/128534617-1992c080-9385-49d5-b30f-be5c96d5142a.png)
+
+
+## Customizing a Dashboard<a id="customizing-a-dashboard"></a>
+
+When viewing a dashboard, click the top bar of a panel, and go to **edit**
+
+![Screen Shot 2021-08-06 at 11 35 36 AM](https://user-images.githubusercontent.com/3789273/128535505-a56162e0-72ad-46ac-8a94-70f1c7a910ed.png)
+
+**Edit Dashboard Panel Page:**
+
+![grafana-sections](https://user-images.githubusercontent.com/3789273/128540136-ba36ee2f-a544-4558-8282-84a7cb9df27a.png)
+
+### 1. Preview Area
+- **Top Left** is the variable select area (custom dashboard variables, used for switching projects, or grouping data)
+- **Top Right** we have a toolbar with some buttons related to the display of the data:
+  - View data results in a table
+  - Time range selector
+  - Refresh data button
+- **The Main Area** will display the chart and should update in real time
+
+> Note: Data should refresh automatically, but may require a refresh using the button in some cases
+
+### 2. Query Builder
+Here we form the SQL query that pulls data from our database into our chart:
+- Ensure the **Data Source** is the correct database
+
+  ![Screen Shot 2021-08-06 at 10 14 22 AM](https://user-images.githubusercontent.com/3789273/128545278-be4846e0-852d-4bc8-8994-e99b79831d8c.png)
+
+- Use the **Format as Table** and **Edit SQL** buttons to write/edit queries as SQL
+
+  ![Screen Shot 2021-08-06 at 10 17 52 AM](https://user-images.githubusercontent.com/3789273/128545197-a9ff9cb3-f12d-4331-bf6a-39035043667a.png)
+
+- The **Main Area** is where the queries are written, and in the top right is the **Query Inspector** button (to inspect returned data)
+
+  ![Screen Shot 2021-08-06 at 10 18 23 AM](https://user-images.githubusercontent.com/3789273/128545557-ead5312a-e835-4c59-b9ca-dd5c08f2a38b.png)
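+
+For example, a panel query in this editor is plain SQL against the DevLake database, and it can use Grafana macros such as `$__timeFilter` to respect the dashboard's time range picker. The sketch below assumes the domain-layer `pull_requests` table and is an illustration rather than a query from a shipped dashboard:
+
+```
+SELECT DATE(created_date) AS day, COUNT(*) AS pr_count
+FROM pull_requests
+WHERE $__timeFilter(created_date)
+GROUP BY 1
+ORDER BY 1;
+```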
+
+### 3. Main Panel Toolbar
+In the top right of the window are buttons for:
+- Dashboard settings (regarding entire dashboard)
+- Save/apply changes (to specific panel)
+
+### 4. Grafana Parameter Sidebar
+- Change chart style (bar/line/pie chart etc)
+- Edit legends, chart parameters
+- Modify chart styling
+- Other Grafana specific settings
+
+## Dashboard Settings<a id="dashboard-settings"></a>
+
+When viewing a dashboard, click on the settings icon to view dashboard settings. Here are two important sections to use:
+
+![Screen Shot 2021-08-06 at 1 51 14 PM](https://user-images.githubusercontent.com/3789273/128555763-4d0370c2-bd4d-4462-ae7e-4b140c4e8c34.png)
+
+- Variables
+  - Create variables, themselves built on SQL queries, to use throughout the dashboard panels (see the sketch after this list)
+
+  ![Screen Shot 2021-08-06 at 2 02 40 PM](https://user-images.githubusercontent.com/3789273/128553157-a8e33042-faba-4db4-97db-02a29036e27c.png)
+
+- JSON Model
+  - Copy the JSON code here and save it to a new file with a unique name under `/grafana/dashboards/` in the `lake` repo. This will persist the dashboard when the app loads
+
+  ![Screen Shot 2021-08-06 at 2 02 52 PM](https://user-images.githubusercontent.com/3789273/128553176-65a5ae43-742f-4abf-9c60-04722033339e.png)
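+
+As a concrete illustration of a variable query, the sketch below would populate a repository dropdown from the domain-layer `repos` table (the table choice is an assumption for illustration; use whichever table fits your dashboard):
+
+```
+SELECT DISTINCT name FROM repos ORDER BY name;
+```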
+
+## Provisioning a Dashboard<a id="provisioning-a-dashboard"></a>
+
+To save a dashboard in the `lake` repo and load it:
+
+1. Create a dashboard in browser (visit `/dashboard/new`, or use sidebar)
+2. Save dashboard (in top right of screen)
+3. Go to dashboard settings (in top right of screen)
+4. Click on _JSON Model_ in sidebar
+5. Copy code into a new `.json` file in `/grafana/dashboards`
+
+## Troubleshooting DB Connection<a id="troubleshooting-db-connection"></a>
+
+To ensure we have properly connected our database to the data source in Grafana, check database settings in `./grafana/datasources/datasource.yml`, specifically:
+- `database`
+- `user`
+- `secureJsonData/password`
+
+
+## Troubleshooting
+
+If you run into any problems, please check the [Troubleshooting docs](/Troubleshooting/Dashboard.md) or [create an issue](https://github.com/apache/incubator-devlake/issues).
diff --git a/versioned_docs/version-v0.15/UserManuals/Dashboards/_category_.json b/versioned_docs/version-v0.15/UserManuals/Dashboards/_category_.json
new file mode 100644
index 0000000000..0db83c6e9b
--- /dev/null
+++ b/versioned_docs/version-v0.15/UserManuals/Dashboards/_category_.json
@@ -0,0 +1,4 @@
+{
+  "label": "Dashboards",
+  "position": 5
+}
diff --git a/versioned_docs/version-v0.15/UserManuals/TeamConfiguration.md b/versioned_docs/version-v0.15/UserManuals/TeamConfiguration.md
new file mode 100644
index 0000000000..8457fd76a0
--- /dev/null
+++ b/versioned_docs/version-v0.15/UserManuals/TeamConfiguration.md
@@ -0,0 +1,193 @@
+---
+title: "Team Configuration"
+sidebar_position: 9
+description: >
+  Team Configuration
+---
+## What is 'Team Configuration' and how does it work?
+
+To organize and display metrics by `team`, Apache DevLake needs to know about the team configuration in an organization, specifically:
+
+1. What are the teams?
+2. Who are the users (unified identities)?
+3. Which users belong to a team?
+4. Which accounts (identities in specific tools) belong to the same user?
+
+Each of the questions above corresponds to a table in DevLake's schema, illustrated below:
+
+![image](/img/Team/teamflow0.png)
+
+1. `teams` table stores all the teams in the organization.
+2. `users` table stores the organization's roster. An entry in the `users` table corresponds to a person in the org.
+3. `team_users` table stores which users belong to a team.
+4. `user_accounts` table stores which accounts belong to a user. An `account` refers to an identity in a DevOps tool and is automatically created when importing data from that tool. For example, a `user` may have a GitHub `account` as well as a Jira `account`.
+
+Apache DevLake uses a simple heuristic algorithm based on emails and names to automatically map accounts to users and populate the `user_accounts` table.
+When Apache DevLake cannot confidently map an `account` to a `user` due to insufficient information, it allows DevLake users to manually configure the mapping to ensure accuracy and integrity.
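+
+Conceptually, the exact-match pass behaves like the following SQL (a rough illustration only, not DevLake's actual implementation):
+
+```
+SELECT u.id AS user_id, a.id AS account_id
+FROM users u
+JOIN accounts a ON a.email = u.email OR a.full_name = u.name;
+```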
+
+## A step-by-step guide
+
+In the following sections, we'll walk through how to configure teams and create the five aforementioned tables (`teams`, `users`, `team_users`, `accounts`, and `user_accounts`).
+The overall workflow is:
+
+1. Create the `teams` table
+2. Create the `users` and `team_users` table
+3. Populate the `accounts` table via data collection
+4. Run a heuristic algorithm to populate the `user_accounts` table
+5. Manually update `user_accounts` when the algorithm can't catch everything
+
+Note:
+
+1. Please replace `/path/to/*.csv` with the absolute path of the CSV file you'd like to upload.
+2. Please replace `127.0.0.1:4000` with your actual Apache DevLake ConfigUI service IP and port number.
+
+## Step 1 - Create the `teams` table
+
+You can create the `teams` table by sending a PUT request to `/plugins/org/teams.csv` with a `teams.csv` file. To jumpstart the process, you can download a template `teams.csv` from `/plugins/org/teams.csv?fake_data=true`. Below are the detailed instructions:
+
+a. Download the template `teams.csv` file
+
+    i.  GET http://127.0.0.1:4000/api/plugins/org/teams.csv?fake_data=true (pasting the URL into your browser will download the template)
+
+    ii. If you prefer using curl:
+        curl --location --request GET 'http://127.0.0.1:4000/api/plugins/org/teams.csv?fake_data=true'
+    
+
+b. Fill out `teams.csv` file and upload it to DevLake
+
+    i. Fill out `teams.csv` with your org data. Please don't modify the column headers or the file suffix.
+
+    ii. Upload `teams.csv` to DevLake with the following curl command: 
+    curl --location --request PUT 'http://127.0.0.1:4000/api/plugins/org/teams.csv' --form 'file=@"/path/to/teams.csv"'
+
+    iii. The PUT request would populate the `teams` table with data from the `teams.csv` file.
+    You can connect to the database and verify the data in the `teams` table.
+    See Appendix for how to connect to the database.
+
+![image](/img/Team/teamflow3.png)
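+
+For a quick sanity check against the database (assuming the default `teams` table name):
+
+```
+SELECT * FROM teams;
+```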
+
+
+## Step 2 - Create the `users` and `team_users` table
+
+You can create the `users` and `team_users` table by sending a single PUT request to `/plugins/org/users.csv` with a `users.csv` file. To jumpstart the process, you can download a template `users.csv` from `/plugins/org/users.csv?fake_data=true`. Below are the detailed instructions:
+
+a. Download the template `users.csv` file
+
+    i.  GET http://127.0.0.1:4000/api/plugins/org/users.csv?fake_data=true (pasting the URL into your browser will download the template)
+
+    ii. If you prefer using curl:
+    curl --location --request GET 'http://127.0.0.1:4000/api/plugins/org/users.csv?fake_data=true'
+
+
+b. Fill out `users.csv` and upload to DevLake
+
+    i.  Fill out `users.csv` with your org data. Please don't modify the column headers or the file suffix.
+
+    ii. Upload `users.csv` to DevLake with the following curl command:
+    curl --location --request PUT 'http://127.0.0.1:4000/api/plugins/org/users.csv' --form 'file=@"/path/to/users.csv"'
+
+    iii. The PUT request would populate the `users` table along with the `team_users` table with data from the `users.csv` file.
+    You can connect to the database and verify these two tables.
+
+![image](/img/Team/teamflow1.png)
+    
+![image](/img/Team/teamflow2.png)
+
+c. If you ever want to update the `team_users` or `users` tables, simply upload the updated `users.csv` to DevLake again, following step b.
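+
+To verify the membership data end-to-end, a join like the sketch below (assuming the default `teams`, `users`, and `team_users` table and column names) lists each team with its members:
+
+```
+SELECT t.name AS team, u.name AS member
+FROM team_users tu
+JOIN teams t ON tu.team_id = t.id
+JOIN users u ON tu.user_id = u.id;
+```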
+
+## Step 3 - Populate the `accounts` table via data collection
+
+The `accounts` table is automatically populated when you collect data from data sources like GitHub and Jira through DevLake.
+
+For example, the GitHub plugin would create one entry in the `accounts` table for each GitHub user involved in your repository.
+For demo purposes, we'll insert some mock data into the `accounts` table using SQL:
+
+```
+INSERT INTO `accounts` (`id`, `created_at`, `updated_at`, `_raw_data_params`, `_raw_data_table`, `_raw_data_id`, `_raw_data_remark`, `email`, `full_name`, `user_name`, `avatar_url`, `organization`, `created_date`, `status`)
+VALUES
+        ('github:GithubAccount:1:1234', '2022-07-12 10:54:09.632', '2022-07-12 10:54:09.632', '{\"ConnectionId\":1,\"Owner\":\"apache\",\"Repo\":\"incubator-devlake\"}', '_raw_github_api_pull_request_reviews', 28, '', 'TyroneKCummings@teleworm.us', '', 'Tyrone K. Cummings', 'https://avatars.githubusercontent.com/u/101256042?u=a6e460fbaffce7514cbd65ac739a985f5158dabc&v=4', '', NULL, 0),
+        ('jira:JiraAccount:1:629cdf', '2022-07-12 10:54:09.632', '2022-07-12 10:54:09.632', '{\"ConnectionId\":1,\"BoardId\":\"76\"}', '_raw_jira_api_users', 5, '', 'DorothyRUpdegraff@dayrep.com', '', 'Dorothy R. Updegraff', 'https://avatars.jiraxxxx158dabc&v=4', '', NULL, 0);
+
+```
+
+![image](/img/Team/teamflow4.png)
+
+## Step 4 - Run a heuristic algorithm to populate `user_accounts` table
+
+Now that we have data in both the `users` and `accounts` tables, we can tell DevLake to infer the mappings between `users` and `accounts` with a simple heuristic algorithm based on names and emails.
+
+a. Send an API request to DevLake to run the mapping algorithm
+
+```
+curl --location --request POST '127.0.0.1:4000/api/pipelines' \
+--header 'Content-Type: application/json' \
+--data-raw '{
+    "name": "test",
+    "plan":[
+        [
+            {
+                "plugin": "org",
+                "subtasks":["connectUserAccountsExact"],
+                "options":{
+                    "connectionId":1
+                }
+            }
+        ]
+    ]
+}'
+```
+
+b. After successful execution, you can verify the data in `user_accounts` in the database. 
+
+![image](/img/Team/teamflow5.png)
+
+## Step 5 - Manually update `user_accounts` when the algorithm can't catch everything
+
+It is recommended to examine the generated `user_accounts` table after running the algorithm.
+In this section, we'll demonstrate how to manually update `user_accounts` when the mapping is inaccurate or incomplete.
+To make manual verification easier, DevLake provides an API for users to download `user_accounts` as a CSV file.
+Alternatively, you can verify and modify `user_accounts` entirely via SQL; see Appendix B for more info.
+
+a. GET http://127.0.0.1:4000/api/plugins/org/user_account_mapping.csv (pasting the URL into your browser will download the file). If you prefer using curl:
+```
+curl --location --request GET 'http://127.0.0.1:4000/api/plugins/org/user_account_mapping.csv'
+```
+
+![image](/img/Team/teamflow6.png)
+
+b. If you find the mapping inaccurate or incomplete, you can modify the `user_account_mapping.csv` file and then upload it to DevLake.
+For example, here we change the `UserId` of row 'Id=github:GithubAccount:1:1234' in the `user_account_mapping.csv` file to 2.
+Then we upload the updated `user_account_mapping.csv` file with the following curl command:
+
+```
+curl --location --request PUT 'http://127.0.0.1:4000/api/plugins/org/user_account_mapping.csv' --form 'file=@"/path/to/user_account_mapping.csv"'
+```
+
+c. You can verify the data in the `user_accounts` table has been updated.
+
+![image](/img/Team/teamflow7.png)
+
+## Appendix A: how to connect to the database
+
+Here we use MySQL as an example. You can install database management tools like Sequel Ace, DataGrip, MySQL Workbench, etc.
+
+
+Or through the command line:
+
+```
+mysql -h <ip> -u <username> -p -P <port>
+```
+
+## Appendix B: how to examine `user_accounts` via SQL
+
+```
+SELECT a.id AS account_id, a.email, a.user_name AS account_user_name, u.id AS user_id, u.name AS real_name
+FROM accounts a
+JOIN user_accounts ua ON a.id = ua.account_id
+JOIN users u ON ua.user_id = u.id;
+```
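+
+You can also list accounts that the algorithm failed to map, or fix a mapping directly in SQL. The snippet below assumes the default table and column names; the ids in the UPDATE come from the example in Step 5:
+
+```
+-- Accounts with no user mapped yet
+SELECT a.id, a.email, a.user_name
+FROM accounts a
+LEFT JOIN user_accounts ua ON a.id = ua.account_id
+WHERE ua.user_id IS NULL;
+
+-- Reassign an account to a different user
+UPDATE user_accounts SET user_id = '2' WHERE account_id = 'github:GithubAccount:1:1234';
+```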
+
+
+## Troubleshooting
+
+If you run into any problems, please check the [Troubleshooting docs](/Troubleshooting/Installation.md) or [create an issue](https://github.com/apache/incubator-devlake/issues).
diff --git a/versioned_docs/version-v0.15/UserManuals/_category_.json b/versioned_docs/version-v0.15/UserManuals/_category_.json
new file mode 100644
index 0000000000..23ce768a59
--- /dev/null
+++ b/versioned_docs/version-v0.15/UserManuals/_category_.json
@@ -0,0 +1,8 @@
+{
+  "label": "User Manuals",
+  "position": 3,
+  "link":{
+    "type": "generated-index",
+    "slug": "UserManuals"
+  }
+}
diff --git a/versioned_sidebars/version-v0.15-sidebars.json b/versioned_sidebars/version-v0.15-sidebars.json
new file mode 100644
index 0000000000..39332bfe75
--- /dev/null
+++ b/versioned_sidebars/version-v0.15-sidebars.json
@@ -0,0 +1,8 @@
+{
+  "docsSidebar": [
+    {
+      "type": "autogenerated",
+      "dirName": "."
+    }
+  ]
+}
diff --git a/versions.json b/versions.json
index b875137320..72a4bcafc7 100644
--- a/versions.json
+++ b/versions.json
@@ -1,4 +1,5 @@
 [
+  "v0.15",
   "v0.14",
   "v0.13",
   "v0.12",