Posted to commits@devlake.apache.org by zk...@apache.org on 2022/07/13 15:54:26 UTC

[incubator-devlake-website] branch main updated (bd483f0 -> 137b4d6)

This is an automated email from the ASF dual-hosted git repository.

zky pushed a change to branch main
in repository https://gitbox.apache.org/repos/asf/incubator-devlake-website.git


    from bd483f0  feat: team feature user guide
     new bb9b829  docs: updated versioning and tidied up docs
     new b399667  fix: fixed some links
     new 5650f06  fix: udpated versioning again
     new f0008ea  fix: fixed file names
     new f232eca  fix: fixed image path
     new 137b4d6  fix: fixed versioning again

The 6 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../index.md"                                      |   2 +-
 community/Team/team.md                             |  30 +++---
 .../make-contribution/fix-or-create-issues.md      |   4 +-
 .../{02-DataSupport.md => DataSupport.md}          |   7 +-
 ...nLayerSchema.md => DevLakeDomainLayerSchema.md} |   7 +-
 .../DeveloperManuals/DBMigration.md                |   9 +-
 docs/DeveloperManuals/Dal.md                       |   2 +-
 .../DeveloperManuals/DeveloperSetup.md             |  15 +--
 .../DeveloperManuals/Notifications.md              |   3 +-
 .../{PluginCreate.md => PluginImplementation.md}   |   6 +-
 docs/Glossary.md                                   |  10 +-
 docs/Overview/01-WhatIsDevLake.md                  |  41 --------
 .../Overview/Architecture.md                       |   8 +-
 .../Overview/Introduction.md                       |  18 ++--
 docs/Overview/{03-Roadmap.md => Roadmap.md}        |  11 +--
 docs/Plugins/feishu.md                             |   2 -
 docs/Plugins/gitee.md                              |   2 -
 docs/Plugins/gitextractor.md                       |   6 +-
 docs/Plugins/github.md                             |   3 +-
 docs/Plugins/jenkins.md                            |   2 -
 docs/Plugins/refdiff.md                            |   2 -
 docs/Plugins/tapd.md                               |   6 +-
 .../QuickStart/KubernetesSetup.md                  |   7 +-
 .../QuickStart/{01-LocalSetup.md => LocalSetup.md} |  21 ++--
 .../UserManuals/AdvancedMode.md                    |   4 +-
 .../UserManuals/GitHubUserGuide.md                 |   6 +-
 .../UserManuals/GrafanaUserGuide.md                |   4 +-
 .../UserManuals/RecurringPipelines.md              |   4 +-
 ...-feature-user-guide.md => TeamConfiguration.md} |  18 ++--
 .../{03-TemporalSetup.md => TemporalSetup.md}      |   0
 docusaurus.config.js                               |  12 +--
 .../index.md"                                      |   2 +-
 src/components/HomepageFeatures.js                 |   6 +-
 static/img/{ => Architecture}/arch-component.svg   |   0
 static/img/{ => Architecture}/arch-dataflow.svg    |   0
 .../img/Community}/contributors/abhishek.jpeg      | Bin
 .../img/Community}/contributors/anshimin.jpeg      | Bin
 .../img/Community}/contributors/chengeyu.jpeg      | Bin
 .../img/Community}/contributors/jibin.jpeg         | Bin
 .../img/Community}/contributors/keonamini.jpeg     | Bin
 .../img/Community}/contributors/lijiageng.jpeg     | Bin
 .../img/Community}/contributors/lizhenlei.jpeg     | Bin
 .../img/Community}/contributors/nikitakoselec.jpeg | Bin
 .../img/Community}/contributors/prajwalborkar.jpeg | Bin
 .../img/Community}/contributors/songdunyu.jpeg     | Bin
 .../img/Community}/contributors/supeng.jpeg        | Bin
 .../img/Community}/contributors/tanguiping.jpeg    | Bin
 .../img/Community}/contributors/wangdanna.jpeg     | Bin
 .../img/Community}/contributors/wangxiaolei.jpeg   | Bin
 .../img/Community}/contributors/zhangxiangyu.jpeg  | Bin
 .../screenshots/issue_page_screenshot.png          | Bin
 .../img/{ => DomainLayerSchema}/schema-diagram.png | Bin
 static/img/{ => Glossary}/blueprint-erd.svg        |   0
 static/img/{ => Glossary}/pipeline-erd.svg         |   0
 static/img/{ => Homepage}/HighlyFlexible.svg       |   0
 static/img/{ => Homepage}/OutoftheboxAnalysis.svg  |   0
 static/img/{ => Homepage}/SilosConnected.svg       |   0
 static/img/{ => Introduction}/userflow1.svg        |   0
 static/img/{ => Introduction}/userflow2.svg        |   0
 static/img/{ => Introduction}/userflow3.png        | Bin
 static/img/{ => Introduction}/userflow4.png        | Bin
 static/img/{ => Plugins}/github-demo.png           | Bin
 static/img/{ => Plugins}/jenkins-demo.png          | Bin
 static/img/{ => Plugins}/jira-demo.png             | Bin
 static/img/{ => Team}/teamflow1.png                | Bin
 static/img/{ => Team}/teamflow2.png                | Bin
 static/img/{ => Team}/teamflow3.png                | Bin
 static/img/{ => Team}/teamflow4.png                | Bin
 static/img/{ => Team}/teamflow5.png                | Bin
 static/img/{ => Team}/teamflow6.png                | Bin
 static/img/{ => Team}/teamflow7.png                | Bin
 static/img/tutorial/docsVersionDropdown.png        | Bin 25102 -> 0 bytes
 static/img/tutorial/localeDropdown.png             | Bin 30020 -> 0 bytes
 versioned_docs/version-0.11/Glossary.md            | 106 ---------------------
 .../Dashboards/AverageRequirementLeadTime.md       |   0
 .../Dashboards/CommitCountByAuthor.md              |   0
 .../Dashboards/DetailedBugInfo.md                  |   0
 .../Dashboards/GitHubBasic.md                      |   0
 .../GitHubReleaseQualityAndContributionAnalysis.md |   0
 .../Dashboards/Jenkins.md                          |   0
 .../Dashboards/WeeklyBugRetro.md                   |   0
 .../Dashboards/_category_.json                     |   0
 .../DataModels/DataSupport.md}                     |   7 +-
 .../DataModels/DevLakeDomainLayerSchema.md}        |   7 +-
 .../DataModels/_category_.json                     |   0
 .../DeveloperManuals/DBMigration.md                |   9 +-
 .../DeveloperManuals/Dal.md                        |   2 +-
 .../DeveloperManuals/DeveloperSetup.md             |  15 +--
 .../DeveloperManuals/Notifications.md              |   3 +-
 .../DeveloperManuals/PluginImplementation.md}      |   6 +-
 .../DeveloperManuals/_category_.json               |   0
 .../EngineeringMetrics.md                          |   0
 .../version-v0.11.0/Overview/Architecture.md       |  10 +-
 .../version-v0.11.0/Overview/Introduction.md       |  16 ++++
 .../Overview/Roadmap.md}                           |  11 +--
 .../Overview/_category_.json                       |   0
 .../Plugins/_category_.json                        |   0
 .../Plugins/dbt.md                                 |   0
 .../Plugins/feishu.md                              |   2 -
 .../Plugins/gitee.md                               |   2 -
 .../Plugins/gitextractor.md                        |   6 +-
 .../Plugins/github-connection-in-config-ui.png     | Bin
 .../Plugins/github.md                              |   3 +-
 .../Plugins/gitlab-connection-in-config-ui.png     | Bin
 .../Plugins/gitlab.md                              |   0
 .../Plugins/jenkins.md                             |   2 -
 .../Plugins/jira-connection-config-ui.png          | Bin
 .../Plugins/jira-more-setting-in-config-ui.png     | Bin
 .../Plugins/jira.md                                |   0
 .../Plugins/refdiff.md                             |   2 -
 .../Plugins/tapd.md                                |   6 +-
 .../version-v0.11.0/QuickStart/KubernetesSetup.md  |   7 +-
 .../QuickStart/LocalSetup.md}                      |  21 ++--
 .../QuickStart/_category_.json                     |   0
 .../version-v0.11.0/UserManuals/AdvancedMode.md    |   4 +-
 .../version-v0.11.0/UserManuals/GitHubUserGuide.md |   6 +-
 .../UserManuals/GrafanaUserGuide.md                |   4 +-
 .../UserManuals/RecurringPipelines.md              |   4 +-
 .../UserManuals/TeamConfiguration.md               |  18 ++--
 .../UserManuals/TemporalSetup.md}                  |   0
 .../UserManuals/_category_.json                    |   0
 ...sidebars.json => version-v0.11.0-sidebars.json} |   0
 versions.json                                      |   2 +-
 123 files changed, 200 insertions(+), 361 deletions(-)
 rename docs/DataModels/{02-DataSupport.md => DataSupport.md} (98%)
 rename docs/DataModels/{01-DevLakeDomainLayerSchema.md => DevLakeDomainLayerSchema.md} (99%)
 rename versioned_docs/version-0.11/DeveloperManuals/MIGRATIONS.md => docs/DeveloperManuals/DBMigration.md (94%)
 rename versioned_docs/version-0.11/DeveloperManuals/04-DeveloperSetup.md => docs/DeveloperManuals/DeveloperSetup.md (92%)
 rename versioned_docs/version-0.11/DeveloperManuals/NOTIFICATION.md => docs/DeveloperManuals/Notifications.md (97%)
 rename docs/DeveloperManuals/{PluginCreate.md => PluginImplementation.md} (99%)
 delete mode 100755 docs/Overview/01-WhatIsDevLake.md
 rename versioned_docs/version-0.11/Overview/02-Architecture.md => docs/Overview/Architecture.md (93%)
 rename versioned_docs/version-0.11/Overview/01-WhatIsDevLake.md => docs/Overview/Introduction.md (79%)
 rename docs/Overview/{03-Roadmap.md => Roadmap.md} (53%)
 rename versioned_docs/version-0.11/QuickStart/02-KubernetesSetup.md => docs/QuickStart/KubernetesSetup.md (94%)
 rename docs/QuickStart/{01-LocalSetup.md => LocalSetup.md} (79%)
 rename versioned_docs/version-0.11/UserManuals/create-pipeline-in-advanced-mode.md => docs/UserManuals/AdvancedMode.md (97%)
 rename versioned_docs/version-0.11/UserManuals/github-user-guide-v0.10.0.md => docs/UserManuals/GitHubUserGuide.md (97%)
 rename versioned_docs/version-0.11/UserManuals/GRAFANA.md => docs/UserManuals/GrafanaUserGuide.md (99%)
 rename versioned_docs/version-0.11/UserManuals/recurring-pipeline.md => docs/UserManuals/RecurringPipelines.md (91%)
 copy docs/UserManuals/{team-feature-user-guide.md => TeamConfiguration.md} (94%)
 rename docs/UserManuals/{03-TemporalSetup.md => TemporalSetup.md} (100%)
 rename static/img/{ => Architecture}/arch-component.svg (100%)
 rename static/img/{ => Architecture}/arch-dataflow.svg (100%)
 rename {img/community => static/img/Community}/contributors/abhishek.jpeg (100%)
 rename {img/community => static/img/Community}/contributors/anshimin.jpeg (100%)
 rename {img/community => static/img/Community}/contributors/chengeyu.jpeg (100%)
 rename {img/community => static/img/Community}/contributors/jibin.jpeg (100%)
 rename {img/community => static/img/Community}/contributors/keonamini.jpeg (100%)
 rename {img/community => static/img/Community}/contributors/lijiageng.jpeg (100%)
 rename {img/community => static/img/Community}/contributors/lizhenlei.jpeg (100%)
 rename {img/community => static/img/Community}/contributors/nikitakoselec.jpeg (100%)
 rename {img/community => static/img/Community}/contributors/prajwalborkar.jpeg (100%)
 rename {img/community => static/img/Community}/contributors/songdunyu.jpeg (100%)
 rename {img/community => static/img/Community}/contributors/supeng.jpeg (100%)
 rename {img/community => static/img/Community}/contributors/tanguiping.jpeg (100%)
 rename {img/community => static/img/Community}/contributors/wangdanna.jpeg (100%)
 rename {img/community => static/img/Community}/contributors/wangxiaolei.jpeg (100%)
 rename {img/community => static/img/Community}/contributors/zhangxiangyu.jpeg (100%)
 rename {img/community => static/img/Community}/screenshots/issue_page_screenshot.png (100%)
 rename static/img/{ => DomainLayerSchema}/schema-diagram.png (100%)
 rename static/img/{ => Glossary}/blueprint-erd.svg (100%)
 rename static/img/{ => Glossary}/pipeline-erd.svg (100%)
 rename static/img/{ => Homepage}/HighlyFlexible.svg (100%)
 rename static/img/{ => Homepage}/OutoftheboxAnalysis.svg (100%)
 rename static/img/{ => Homepage}/SilosConnected.svg (100%)
 rename static/img/{ => Introduction}/userflow1.svg (100%)
 rename static/img/{ => Introduction}/userflow2.svg (100%)
 rename static/img/{ => Introduction}/userflow3.png (100%)
 rename static/img/{ => Introduction}/userflow4.png (100%)
 rename static/img/{ => Plugins}/github-demo.png (100%)
 rename static/img/{ => Plugins}/jenkins-demo.png (100%)
 rename static/img/{ => Plugins}/jira-demo.png (100%)
 rename static/img/{ => Team}/teamflow1.png (100%)
 rename static/img/{ => Team}/teamflow2.png (100%)
 rename static/img/{ => Team}/teamflow3.png (100%)
 rename static/img/{ => Team}/teamflow4.png (100%)
 rename static/img/{ => Team}/teamflow5.png (100%)
 rename static/img/{ => Team}/teamflow6.png (100%)
 rename static/img/{ => Team}/teamflow7.png (100%)
 delete mode 100644 static/img/tutorial/docsVersionDropdown.png
 delete mode 100644 static/img/tutorial/localeDropdown.png
 delete mode 100644 versioned_docs/version-0.11/Glossary.md
 rename versioned_docs/{version-0.11 => version-v0.11.0}/Dashboards/AverageRequirementLeadTime.md (100%)
 rename versioned_docs/{version-0.11 => version-v0.11.0}/Dashboards/CommitCountByAuthor.md (100%)
 rename versioned_docs/{version-0.11 => version-v0.11.0}/Dashboards/DetailedBugInfo.md (100%)
 rename versioned_docs/{version-0.11 => version-v0.11.0}/Dashboards/GitHubBasic.md (100%)
 rename versioned_docs/{version-0.11 => version-v0.11.0}/Dashboards/GitHubReleaseQualityAndContributionAnalysis.md (100%)
 rename versioned_docs/{version-0.11 => version-v0.11.0}/Dashboards/Jenkins.md (100%)
 rename versioned_docs/{version-0.11 => version-v0.11.0}/Dashboards/WeeklyBugRetro.md (100%)
 rename versioned_docs/{version-0.11 => version-v0.11.0}/Dashboards/_category_.json (100%)
 rename versioned_docs/{version-0.11/DataModels/02-DataSupport.md => version-v0.11.0/DataModels/DataSupport.md} (98%)
 rename versioned_docs/{version-0.11/DataModels/01-DevLakeDomainLayerSchema.md => version-v0.11.0/DataModels/DevLakeDomainLayerSchema.md} (99%)
 rename versioned_docs/{version-0.11 => version-v0.11.0}/DataModels/_category_.json (100%)
 rename docs/DeveloperManuals/MIGRATIONS.md => versioned_docs/version-v0.11.0/DeveloperManuals/DBMigration.md (94%)
 rename versioned_docs/{version-0.11 => version-v0.11.0}/DeveloperManuals/Dal.md (99%)
 rename docs/DeveloperManuals/04-DeveloperSetup.md => versioned_docs/version-v0.11.0/DeveloperManuals/DeveloperSetup.md (92%)
 rename docs/DeveloperManuals/NOTIFICATION.md => versioned_docs/version-v0.11.0/DeveloperManuals/Notifications.md (97%)
 rename versioned_docs/{version-0.11/DeveloperManuals/PluginCreate.md => version-v0.11.0/DeveloperManuals/PluginImplementation.md} (99%)
 rename versioned_docs/{version-0.11 => version-v0.11.0}/DeveloperManuals/_category_.json (100%)
 rename versioned_docs/{version-0.11 => version-v0.11.0}/EngineeringMetrics.md (100%)
 rename docs/Overview/02-Architecture.md => versioned_docs/version-v0.11.0/Overview/Architecture.md (89%)
 create mode 100755 versioned_docs/version-v0.11.0/Overview/Introduction.md
 rename versioned_docs/{version-0.11/Overview/03-Roadmap.md => version-v0.11.0/Overview/Roadmap.md} (53%)
 rename versioned_docs/{version-0.11 => version-v0.11.0}/Overview/_category_.json (100%)
 rename versioned_docs/{version-0.11 => version-v0.11.0}/Plugins/_category_.json (100%)
 rename versioned_docs/{version-0.11 => version-v0.11.0}/Plugins/dbt.md (100%)
 rename versioned_docs/{version-0.11 => version-v0.11.0}/Plugins/feishu.md (99%)
 rename versioned_docs/{version-0.11 => version-v0.11.0}/Plugins/gitee.md (99%)
 rename versioned_docs/{version-0.11 => version-v0.11.0}/Plugins/gitextractor.md (90%)
 rename versioned_docs/{version-0.11 => version-v0.11.0}/Plugins/github-connection-in-config-ui.png (100%)
 rename versioned_docs/{version-0.11 => version-v0.11.0}/Plugins/github.md (98%)
 rename versioned_docs/{version-0.11 => version-v0.11.0}/Plugins/gitlab-connection-in-config-ui.png (100%)
 rename versioned_docs/{version-0.11 => version-v0.11.0}/Plugins/gitlab.md (100%)
 rename versioned_docs/{version-0.11 => version-v0.11.0}/Plugins/jenkins.md (99%)
 rename versioned_docs/{version-0.11 => version-v0.11.0}/Plugins/jira-connection-config-ui.png (100%)
 rename versioned_docs/{version-0.11 => version-v0.11.0}/Plugins/jira-more-setting-in-config-ui.png (100%)
 rename versioned_docs/{version-0.11 => version-v0.11.0}/Plugins/jira.md (100%)
 rename versioned_docs/{version-0.11 => version-v0.11.0}/Plugins/refdiff.md (99%)
 rename versioned_docs/{version-0.11 => version-v0.11.0}/Plugins/tapd.md (84%)
 rename docs/QuickStart/02-KubernetesSetup.md => versioned_docs/version-v0.11.0/QuickStart/KubernetesSetup.md (94%)
 rename versioned_docs/{version-0.11/QuickStart/01-LocalSetup.md => version-v0.11.0/QuickStart/LocalSetup.md} (79%)
 rename versioned_docs/{version-0.11 => version-v0.11.0}/QuickStart/_category_.json (100%)
 rename docs/UserManuals/create-pipeline-in-advanced-mode.md => versioned_docs/version-v0.11.0/UserManuals/AdvancedMode.md (97%)
 rename docs/UserManuals/github-user-guide-v0.10.0.md => versioned_docs/version-v0.11.0/UserManuals/GitHubUserGuide.md (97%)
 rename docs/UserManuals/GRAFANA.md => versioned_docs/version-v0.11.0/UserManuals/GrafanaUserGuide.md (99%)
 rename docs/UserManuals/recurring-pipeline.md => versioned_docs/version-v0.11.0/UserManuals/RecurringPipelines.md (91%)
 rename docs/UserManuals/team-feature-user-guide.md => versioned_docs/version-v0.11.0/UserManuals/TeamConfiguration.md (94%)
 rename versioned_docs/{version-0.11/UserManuals/03-TemporalSetup.md => version-v0.11.0/UserManuals/TemporalSetup.md} (100%)
 rename versioned_docs/{version-0.11 => version-v0.11.0}/UserManuals/_category_.json (100%)
 rename versioned_sidebars/{version-0.11-sidebars.json => version-v0.11.0-sidebars.json} (100%)
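The brace notation git uses in the rename lines above (e.g. `static/img/{ => Team}/teamflow1.png`) expands by substituting each side of `=>` into the shared prefix/suffix. A small illustrative parser of that notation (a hypothetical helper for reading this diffstat, not part of the DevLake repositories):

```javascript
// Expand git's rename shorthand "prefix{old => new}suffix" into full paths.
// Illustrative only; mirrors how git-diff prints renames in --stat output.
function expandRename(line) {
  const m = line.match(/^(.*)\{(.*) => (.*)\}(.*)$/);
  if (!m) return { from: line, to: line }; // plain path, no rename braces
  const [, prefix, oldPart, newPart, suffix] = m;
  // An empty side (e.g. "{ => Team}") leaves a double slash; collapse it,
  // as git does when printing the expanded paths.
  const join = (part) => (prefix + part + suffix).replace(/\/{2,}/g, '/');
  return { from: join(oldPart.trim()), to: join(newPart.trim()) };
}

console.log(expandRename('static/img/{ => Team}/teamflow1.png'));
// { from: 'static/img/teamflow1.png', to: 'static/img/Team/teamflow1.png' }
```

So `versioned_docs/{version-0.11 => version-v0.11.0}/Plugins/jira.md (100%)` records an unmodified file moved between the two version directories.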


[incubator-devlake-website] 03/06: fix: udpated versioning again

Posted by zk...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

zky pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/incubator-devlake-website.git

commit 5650f06e67f08b308143861e2d6cb31c9dcdf144
Author: yumengwang03 <yu...@merico.dev>
AuthorDate: Wed Jul 13 23:27:15 2022 +0800

    fix: udpated versioning again
---
 docusaurus.config.js                               |  50 +-
 .../Dashboards/AverageRequirementLeadTime.md       |   9 -
 .../Dashboards/CommitCountByAuthor.md              |   9 -
 .../version-v0.11.0/Dashboards/DetailedBugInfo.md  |   9 -
 .../version-v0.11.0/Dashboards/GitHubBasic.md      |   9 -
 .../GitHubReleaseQualityAndContributionAnalysis.md |   9 -
 .../version-v0.11.0/Dashboards/Jenkins.md          |   9 -
 .../version-v0.11.0/Dashboards/WeeklyBugRetro.md   |   9 -
 .../version-v0.11.0/Dashboards/_category_.json     |   4 -
 .../version-v0.11.0/DataModels/DataSupport.md      |  59 ---
 .../DataModels/DevLakeDomainLayerSchema.md         | 532 ---------------------
 .../version-v0.11.0/DataModels/_category_.json     |   4 -
 .../DeveloperManuals/DBMigration.md                |  37 --
 .../version-v0.11.0/DeveloperManuals/Dal.md        | 173 -------
 .../DeveloperManuals/DeveloperSetup.md             | 131 -----
 .../DeveloperManuals/Notifications.md              |  32 --
 .../DeveloperManuals/PluginImplementation.md       | 292 -----------
 .../DeveloperManuals/_category_.json               |   4 -
 .../version-v0.11.0/EngineeringMetrics.md          | 195 --------
 .../version-v0.11.0/Overview/Architecture.md       |  39 --
 .../version-v0.11.0/Overview/Introduction.md       |  16 -
 versioned_docs/version-v0.11.0/Overview/Roadmap.md |  33 --
 .../version-v0.11.0/Overview/_category_.json       |   4 -
 versioned_docs/version-v0.11.0/Plugins/Dbt.md      |  67 ---
 versioned_docs/version-v0.11.0/Plugins/Feishu.md   |  64 ---
 .../version-v0.11.0/Plugins/GitExtractor.md        |  63 ---
 versioned_docs/version-v0.11.0/Plugins/GitHub.md   |  95 ----
 versioned_docs/version-v0.11.0/Plugins/GitLab.md   |  94 ----
 versioned_docs/version-v0.11.0/Plugins/Gitee.md    | 112 -----
 versioned_docs/version-v0.11.0/Plugins/Jenkins.md  |  59 ---
 versioned_docs/version-v0.11.0/Plugins/Jira.md     | 253 ----------
 versioned_docs/version-v0.11.0/Plugins/RefDiff.md  | 116 -----
 versioned_docs/version-v0.11.0/Plugins/Tapd.md     |  16 -
 .../version-v0.11.0/Plugins/_category_.json        |   4 -
 .../Plugins/github-connection-in-config-ui.png     | Bin 51159 -> 0 bytes
 .../Plugins/gitlab-connection-in-config-ui.png     | Bin 66616 -> 0 bytes
 .../Plugins/jira-connection-config-ui.png          | Bin 76052 -> 0 bytes
 .../Plugins/jira-more-setting-in-config-ui.png     | Bin 300823 -> 0 bytes
 .../version-v0.11.0/QuickStart/KubernetesSetup.md  |  33 --
 .../version-v0.11.0/QuickStart/LocalSetup.md       |  44 --
 .../version-v0.11.0/QuickStart/_category_.json     |   4 -
 .../version-v0.11.0/UserManuals/AdvancedMode.md    |  89 ----
 .../version-v0.11.0/UserManuals/GitHubUserGuide.md | 118 -----
 .../UserManuals/GrafanaUserGuide.md                | 120 -----
 .../UserManuals/RecurringPipelines.md              |  30 --
 .../UserManuals/TeamConfiguration.md               | 129 -----
 .../version-v0.11.0/UserManuals/TemporalSetup.md   |  35 --
 .../version-v0.11.0/UserManuals/_category_.json    |   4 -
 versioned_sidebars/version-v0.11.0-sidebars.json   |   8 -
 versions.json                                      |   3 -
 50 files changed, 25 insertions(+), 3203 deletions(-)

diff --git a/docusaurus.config.js b/docusaurus.config.js
index 11340ad..4beb65d 100644
--- a/docusaurus.config.js
+++ b/docusaurus.config.js
@@ -1,6 +1,6 @@
 const lightCodeTheme = require('prism-react-renderer/themes/github');
 const darkCodeTheme = require('prism-react-renderer/themes/dracula');
-const versions = require('./versions.json');
+// const versions = require('./versions.json');
 
 
 // With JSDoc @type annotations, IDEs can provide config autocompletion
@@ -26,14 +26,14 @@ const versions = require('./versions.json');
           sidebarPath: require.resolve('./sidebars.js'),
           // set to undefined to remove Edit this Page
           editUrl: 'https://github.com/apache/incubator-devlake-website/edit/main',
-          versions: {
-            current: {
-                path: '',
-            },
-            [versions[0]]: {
-                path: versions[0],
-            }
-          }
+          // versions: {
+          //   current: {
+          //       path: '',
+          //   },
+          //   [versions[0]]: {
+          //       path: versions[0],
+          //   }
+          // }
         },
         blog: {
           showReadingTime: true,
@@ -86,24 +86,24 @@ const versions = require('./versions.json');
         },
         items: [
           {
-            // type: 'docsVersionDropdown',
-            // docId: 'Overview/Introduction',
+            type: 'doc',
+            docId: 'Overview/Introduction',
             position: 'right',
             label: 'Docs',
-            items: [
-              ...versions.slice(0, versions.length - 2).map((version) => ({
-                label: version,
-                to: `docs/${version}/Overview/Introduction`,
-             })),
-             ...versions.slice(versions.length - 2, versions.length).map((version) => ({
-              label: (version === "1.x") ? "1.x(Not Apache Release)" : version,
-              to: `docs/${version}/Overview/Introduction`,
-          })),
-              {
-                  label: "Latest",
-                  to: "/docs/Overview/Introduction",
-              }
-            ]
+          //   items: [
+          //     ...versions.slice(0, versions.length - 2).map((version) => ({
+          //       label: version,
+          //       to: `docs/${version}/Overview/Introduction`,
+          //    })),
+          //    ...versions.slice(versions.length - 2, versions.length).map((version) => ({
+          //     label: (version === "1.x") ? "1.x(Not Apache Release)" : version,
+          //     to: `docs/${version}/Overview/Introduction`,
+          // })),
+          //     {
+          //         label: "Latest",
+          //         to: "/docs/Overview/Introduction",
+          //     }
+          //   ]
           },
          {
             type: 'doc',
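The hunk above disables the version-dropdown navbar entry by commenting it out and replacing it with a plain `doc` link. For context, the commented-out code built the dropdown's entries from `versions.json`; a minimal sketch of that construction (the `versions` contents here are hypothetical, and this simplifies the two `slice` calls in the original into one map):

```javascript
// Sketch of the version menu the diff disables. Assumption: versions.json
// is an array of version strings, newest first; '1.x' pre-dates the Apache
// incubation and is labelled accordingly, as in the original config.
const versions = ['v0.12.0', 'v0.11.0', '0.11', '1.x']; // hypothetical contents

const items = [
  // every version links to its own Overview/Introduction page
  ...versions.map((version) => ({
    label: version === '1.x' ? '1.x(Not Apache Release)' : version,
    to: `docs/${version}/Overview/Introduction`,
  })),
  // plus a fixed entry for the unversioned, current docs
  { label: 'Latest', to: '/docs/Overview/Introduction' },
];

console.log(items.length); // 5
```

With `require('./versions.json')` also commented out at the top of the file, the dropdown had to go as well, or the config would reference an undefined `versions` binding.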
diff --git a/versioned_docs/version-v0.11.0/Dashboards/AverageRequirementLeadTime.md b/versioned_docs/version-v0.11.0/Dashboards/AverageRequirementLeadTime.md
deleted file mode 100644
index 0710335..0000000
--- a/versioned_docs/version-v0.11.0/Dashboards/AverageRequirementLeadTime.md
+++ /dev/null
@@ -1,9 +0,0 @@
----
-sidebar_position: 6
-title: "Average Requirement Lead Time by Assignee"
-description: >
-  DevLake Live Demo
----
-
-# Average Requirement Lead Time by Assignee
-<iframe src="https://grafana-lake.demo.devlake.io/d/q27fk7cnk/demo-average-requirement-lead-time-by-assignee?orgId=1&from=1635945684845&to=1651584084846" width="100%" height="940px"></iframe>
\ No newline at end of file
diff --git a/versioned_docs/version-v0.11.0/Dashboards/CommitCountByAuthor.md b/versioned_docs/version-v0.11.0/Dashboards/CommitCountByAuthor.md
deleted file mode 100644
index 04e029c..0000000
--- a/versioned_docs/version-v0.11.0/Dashboards/CommitCountByAuthor.md
+++ /dev/null
@@ -1,9 +0,0 @@
----
-sidebar_position: 2
-title: "Commit Count by Author"
-description: >
-  DevLake Live Demo
----
-
-# Commit Count by Author
-<iframe src="https://grafana-lake.demo.devlake.io/d/F0iYknc7z/demo-commit-count-by-author?orgId=1&from=1634911190615&to=1650635990615" width="100%" height="820px"></iframe>
diff --git a/versioned_docs/version-v0.11.0/Dashboards/DetailedBugInfo.md b/versioned_docs/version-v0.11.0/Dashboards/DetailedBugInfo.md
deleted file mode 100644
index b777617..0000000
--- a/versioned_docs/version-v0.11.0/Dashboards/DetailedBugInfo.md
+++ /dev/null
@@ -1,9 +0,0 @@
----
-sidebar_position: 4
-title: "Detailed Bug Info"
-description: >
-  DevLake Live Demo
----
-
-# Detailed Bug Info
-<iframe src="https://grafana-lake.demo.devlake.io/d/s48Lzn5nz/demo-detailed-bug-info?orgId=1&from=1635945709579&to=1651584109579" width="100%" height="800px"></iframe>
\ No newline at end of file
diff --git a/versioned_docs/version-v0.11.0/Dashboards/GitHubBasic.md b/versioned_docs/version-v0.11.0/Dashboards/GitHubBasic.md
deleted file mode 100644
index 7ea28cd..0000000
--- a/versioned_docs/version-v0.11.0/Dashboards/GitHubBasic.md
+++ /dev/null
@@ -1,9 +0,0 @@
----
-sidebar_position: 1
-title: "GitHub Basic Metrics"
-description: >
-  DevLake Live Demo
----
-
-# GitHub Basic Metrics
-<iframe src="https://grafana-lake.demo.devlake.io/d/KXWvOFQnz/github_basic_metrics?orgId=1&from=1635945132339&to=1651583532339" width="100%" height="3080px"></iframe>
\ No newline at end of file
diff --git a/versioned_docs/version-v0.11.0/Dashboards/GitHubReleaseQualityAndContributionAnalysis.md b/versioned_docs/version-v0.11.0/Dashboards/GitHubReleaseQualityAndContributionAnalysis.md
deleted file mode 100644
index 61db78f..0000000
--- a/versioned_docs/version-v0.11.0/Dashboards/GitHubReleaseQualityAndContributionAnalysis.md
+++ /dev/null
@@ -1,9 +0,0 @@
----
-sidebar_position: 5
-title: "GitHub Release Quality and Contribution Analysis"
-description: >
-  DevLake Live Demo
----
-
-# GitHub Release Quality and Contribution Analysis
-<iframe src="https://grafana-lake.demo.devlake.io/d/2xuOaQUnk1/github_release_quality_and_contribution_analysis?orgId=1&from=1635945847658&to=1651584247658" width="100%" height="2800px"></iframe>
\ No newline at end of file
diff --git a/versioned_docs/version-v0.11.0/Dashboards/Jenkins.md b/versioned_docs/version-v0.11.0/Dashboards/Jenkins.md
deleted file mode 100644
index 506a3c9..0000000
--- a/versioned_docs/version-v0.11.0/Dashboards/Jenkins.md
+++ /dev/null
@@ -1,9 +0,0 @@
----
-sidebar_position: 7
-title: "Jenkins"
-description: >
-  DevLake Live Demo
----
-
-# Jenkins
-<iframe src="https://grafana-lake.demo.devlake.io/d/W8AiDFQnk/jenkins?orgId=1&from=1635945337632&to=1651583737632" width="100%" height="1060px"></iframe>
\ No newline at end of file
diff --git a/versioned_docs/version-v0.11.0/Dashboards/WeeklyBugRetro.md b/versioned_docs/version-v0.11.0/Dashboards/WeeklyBugRetro.md
deleted file mode 100644
index adbc4e8..0000000
--- a/versioned_docs/version-v0.11.0/Dashboards/WeeklyBugRetro.md
+++ /dev/null
@@ -1,9 +0,0 @@
----
-sidebar_position: 3
-title: "Weekly Bug Retro"
-description: >
-  DevLake Live Demo
----
-
-# Weekly Bug Retro
-<iframe src="https://grafana-lake.demo.devlake.io/d/-5EKA5w7k/weekly-bug-retro?orgId=1&from=1635945873174&to=1651584273174" width="100%" height="2240px"></iframe>
diff --git a/versioned_docs/version-v0.11.0/Dashboards/_category_.json b/versioned_docs/version-v0.11.0/Dashboards/_category_.json
deleted file mode 100644
index b27df44..0000000
--- a/versioned_docs/version-v0.11.0/Dashboards/_category_.json
+++ /dev/null
@@ -1,4 +0,0 @@
-{
-  "label": "Dashboards (Live Demo)",
-  "position": 9
-}
diff --git a/versioned_docs/version-v0.11.0/DataModels/DataSupport.md b/versioned_docs/version-v0.11.0/DataModels/DataSupport.md
deleted file mode 100644
index 4cb4b61..0000000
--- a/versioned_docs/version-v0.11.0/DataModels/DataSupport.md
+++ /dev/null
@@ -1,59 +0,0 @@
----
-title: "Data Support"
-description: >
-  Data sources that DevLake supports
-sidebar_position: 1
----
-
-
-## Data Sources and Data Plugins
-DevLake supports the following data sources. The data from each data source is collected with one or more plugins. There are 9 data plugins in total: `ae`, `feishu`, `gitextractor`, `github`, `gitlab`, `jenkins`, `jira`, `refdiff` and `tapd`.
-
-
-| Data Source | Versions                             | Plugins |
-|-------------|--------------------------------------|-------- |
-| AE          |                                      | `ae`    |
-| Feishu      | Cloud                                |`feishu` |
-| GitHub      | Cloud                                |`github`, `gitextractor`, `refdiff` |
-| Gitlab      | Cloud, Community Edition 13.x+       |`gitlab`, `gitextractor`, `refdiff` |
-| Jenkins     | 2.263.x+                             |`jenkins` |
-| Jira        | Cloud, Server 8.x+, Data Center 8.x+ |`jira` |
-| TAPD        | Cloud                                | `tapd` |
-
-
-
-## Data Collection Scope By Each Plugin
-This table shows the entities collected by each plugin. Domain layer entities in this table are consistent with the entities [here](./DevLakeDomainLayerSchema.md).
-
-| Domain Layer Entities | ae             | gitextractor | github         | gitlab  | jenkins | jira    | refdiff | tapd    |
-| --------------------- | -------------- | ------------ | -------------- | ------- | ------- | ------- | ------- | ------- |
-| commits               | update commits | default      | not-by-default | default |         |         |         |         |
-| commit_parents        |                | default      |                |         |         |         |         |         |
-| commit_files          |                | default      |                |         |         |         |         |         |
-| pull_requests         |                |              | default        | default |         |         |         |         |
-| pull_request_commits  |                |              | default        | default |         |         |         |         |
-| pull_request_comments |                |              | default        | default |         |         |         |         |
-| pull_request_labels   |                |              | default        |         |         |         |         |         |
-| refs                  |                | default      |                |         |         |         |         |         |
-| refs_commits_diffs    |                |              |                |         |         |         | default |         |
-| refs_issues_diffs     |                |              |                |         |         |         | default |         |
-| ref_pr_cherry_picks   |                |              |                |         |         |         | default |         |
-| repos                 |                |              | default        | default |         |         |         |         |
-| repo_commits          |                | default      | default        |         |         |         |         |         |
-| board_repos           |                |              |                |         |         |         |         |         |
-| issue_commits         |                |              |                |         |         |         |         |         |
-| issue_repo_commits    |                |              |                |         |         |         |         |         |
-| pull_request_issues   |                |              |                |         |         |         |         |         |
-| boards                |                |              | default        |         |         | default |         | default |
-| board_issues          |                |              | default        |         |         | default |         | default |
-| issue_changelogs      |                |              |                |         |         | default |         | default |
-| issues                |                |              | default        |         |         | default |         | default |
-| issue_comments        |                |              |                |         |         | default |         | default |
-| issue_labels          |                |              | default        |         |         |         |         |         |
-| sprints               |                |              |                |         |         | default |         | default |
-| issue_worklogs        |                |              |                |         |         | default |         | default |
-| users                 |                |              | default        |         |         | default |         | default |
-| builds                |                |              |                |         | default |         |         |         |
-| jobs                  |                |              |                |         | default |         |         |         |
-
diff --git a/versioned_docs/version-v0.11.0/DataModels/DevLakeDomainLayerSchema.md b/versioned_docs/version-v0.11.0/DataModels/DevLakeDomainLayerSchema.md
deleted file mode 100644
index 996d397..0000000
--- a/versioned_docs/version-v0.11.0/DataModels/DevLakeDomainLayerSchema.md
+++ /dev/null
@@ -1,532 +0,0 @@
----
-title: "Domain Layer Schema"
-description: >
-  DevLake Domain Layer Schema
-sidebar_position: 2
----
-
-## Summary
-
-This document describes the entities in DevLake's domain layer schema and their relationships.
-
-Data in the domain layer is transformed from the data in the tool layer. The tool layer schema is based on the data from specific tools such as Jira, GitHub, Gitlab, Jenkins, etc. The domain layer schema can be regarded as an abstraction of tool-layer schemas.
-
-The domain layer schema itself includes 2 logical layers: a `DWD` layer and a `DWM` layer. The DWD layer stores detailed data points, while the DWM layer stores light aggregations and derivations of DWD data, providing more organized details and mid-level metrics.
-
-
-## Use Cases
-1. Users can make customized Grafana dashboards based on the domain layer schema.
-2. Contributors can refer to this data model to implement the ETL logic when adding new data source plugins.
-
-
-## Data Model
-
-This is the up-to-date domain layer schema for DevLake v0.10.x. Tables (entities) are categorized into 5 domains.
-1. Issue tracking domain entities: Jira issues, GitHub issues, GitLab issues, etc
-2. Source code management domain entities: Git/GitHub/Gitlab commits and refs, etc
-3. Code review domain entities: GitHub PRs, Gitlab MRs, etc
-4. CI/CD domain entities: Jenkins jobs & builds, etc
-5. Cross-domain entities: entities that map entities from different domains to break data isolation
-
-
-### Schema Diagram
-![Domain Layer Schema](/img/DomainLayerSchema/schema-diagram.png)
-
-When reading the schema, you'll notice that many tables' primary key is called `id`. Unlike an auto-increment id or a UUID, `id` is a string composed of several parts to uniquely identify similar entities (e.g. repos) from different platforms (e.g. Github/Gitlab) and allow them to co-exist in a single table.
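As an illustration only (this helper is not part of DevLake), the composition rule above can be sketched as:

```python
def domain_id(plugin, entity, *pks):
    # Hypothetical helper mirroring the "< plugin >:< Entity >:< PK0 >[:PK1]..." rule.
    return ":".join([plugin, entity, *map(str, pks)])

# Single-PK entity, e.g. a Github issue:
domain_id("github", "GithubIssues", 1049355647)  # "github:GithubIssues:1049355647"
# Multi-PK entity, e.g. a Jira issue qualified by its source:
domain_id("jira", "JiraIssues", 1, 10063)        # "jira:JiraIssues:1:10063"
```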
-
-Tables that end with WIP are still under development.
-
-
-### Naming Conventions
-
-1. The name of a table is in plural form. Eg. boards, issues, etc.
-2. The name of a table that describes the relation between 2 entities is in the form of [BigEntity in singular form]\_[SmallEntity in plural form]. Eg. board_issues, sprint_issues, pull_request_comments, etc.
-3. Values of enum-type fields are in capital letters. Eg. [table.issues.type](https://merico.feishu.cn/docs/doccnvyuG9YpVc6lvmWkmmbZtUc#ZDCw9k) has 3 values: REQUIREMENT, BUG, INCIDENT. Values that are phrases, such as 'IN_PROGRESS' of [table.issues.status](https://merico.feishu.cn/docs/doccnvyuG9YpVc6lvmWkmmbZtUc#ZDCw9k), are separated with underscore '\_'.
-
-<br/>
-
-## DWD Entities - (Data Warehouse Detail)
-
-### Domain 1 - Issue Tracking
-
-#### 1. Issues
-
-An `issue` is the abstraction of Jira/Github/GitLab/TAPD/... issues.
-
-| **field**                   | **type** | **length** | **description**                                                                                                                                                                                                                                                                                                                                                                                                                                      [...]
-| :-------------------------- | :------- | :--------- | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- [...]
-| `id`                        | varchar  | 255        | An issue's `id` is composed of "< plugin >:< Entity >:< PK0 >[:PK1]..." <ul><li>For Github issues, a Github issue's id is like "github:GithubIssues:< GithubIssueId >". Eg. 'github:GithubIssues:1049355647'</li> <li>For Jira issues, a Jira issue's id is like "jira:JiraIssues:< JiraSourceId >:< JiraIssueId >". Eg. 'jira:JiraIssues:1:10063'. < JiraSourceId > is used to identify which jira source the issue came from, since DevLake users  [...]
-| `number`                    | varchar  | 255        | The number of this issue. For example, the number of this Github [issue](https://github.com/merico-dev/lake/issues/1145) is 1145.                                                                                                                                                                                                                                                                                                                    [...]
-| `url`                       | varchar  | 255        | The url of the issue. It's a web address in most cases.                                                                                                                                                                                                                                                                                                                                                                                              [...]
-| `title`                     | varchar  | 255        | The title of an issue                                                                                                                                                                                                                                                                                                                                                                                                                                [...]
-| `description`               | longtext |            | The detailed description/summary of an issue                                                                                                                                                                                                                                                                                                                                                                                                         [...]
-| `type`                      | varchar  | 255        | The standard type of this issue. There're 3 standard types: <ul><li>REQUIREMENT: this issue is a feature</li><li>BUG: this issue is a bug found during test</li><li>INCIDENT: this issue is a bug found after release</li></ul>The 3 standard types are transformed from the original types of an issue. The transformation rule is set in the '.env' file or 'config-ui' before data collection. For issues with an original type that has not mapp [...]
-| `status`                    | varchar  | 255        | The standard statuses of this issue. There're 3 standard statuses: <ul><li> TODO: this issue is in backlog or to-do list</li><li>IN_PROGRESS: this issue is in progress</li><li>DONE: this issue is resolved or closed</li></ul>The 3 standard statuses are transformed from the original statuses of an issue. The transformation rule: <ul><li>For Jira issue status: transformed from the Jira issue's `statusCategory`. Jira issue has 3 default [...]
-| `original_status`           | varchar  | 255        | The original status of an issue.                                                                                                                                                                                                                                                                                                                                                                                                                     [...]
-| `story_point`               | int      |            | The story point of this issue. It is left empty for data sources without story points, such as Github issues and Gitlab issues. [...]
-| `priority`                  | varchar  | 255        | The priority of the issue                                                                                                                                                                                                                                                                                                                                                                                                                            [...]
-| `component`                 | varchar  | 255        | The component a bug-issue affects. This field only supports Github plugin for now. The value is transformed from Github issue labels by the rules set according to the user's configuration of .env by end users during DevLake installation.                                                                                                                                                                                                        [...]
-| `severity`                  | varchar  | 255        | The severity level of a bug-issue. This field only supports Github plugin for now. The value is transformed from Github issue labels by the rules set according to the user's configuration of .env by end users during DevLake installation.                                                                                                                                                                                                        [...]
-| `parent_issue_id`           | varchar  | 255        | The id of its parent issue                                                                                                                                                                                                                                                                                                                                                                                                                           [...]
-| `epic_key`                  | varchar  | 255        | The key of the epic this issue belongs to. For tools with no epic-type issues such as Github and Gitlab, this field is default to an empty string                                                                                                                                                                                                                                                                                                    [...]
-| `original_estimate_minutes` | int      |            | The original estimation of the time allocated for this issue [...]
-| `time_spent_minutes`        | int      |            | The time spent on this issue [...]
-| `time_remaining_minutes`     | int      |            | The remaining time to resolve the issue                                                                                                                                                                                                                                                                                                                                                                                                             [...]
-| `creator_id`                 | varchar  | 255        | The id of issue creator                                                                                                                                                                                                                                                                                                                                                                                                                             [...]
-| `assignee_id`               | varchar  | 255        | The id of issue assignee.<ul><li>For Github issues: this is the last assignee of an issue if the issue has multiple assignees</li><li>For Jira issues: this is the assignee of the issue at the time of collection</li></ul>                                                                                                                                                                                                                         [...]
-| `assignee_name`             | varchar  | 255        | The name of the assignee                                                                                                                                                                                                                                                                                                                                                                                                                             [...]
-| `created_date`              | datetime | 3          | The time issue created                                                                                                                                                                                                                                                                                                                                                                                                                               [...]
-| `updated_date`              | datetime | 3          | The last time issue gets updated                                                                                                                                                                                                                                                                                                                                                                                                                     [...]
-| `resolution_date`           | datetime | 3          | The time the issue changes to 'DONE'.                                                                                                                                                                                                                                                                                                                                                                                                                [...]
-| `lead_time_minutes`         | int      |            | Describes the cycle time from issue creation to issue resolution.<ul><li>For issues whose type = 'REQUIREMENT' and status = 'DONE', lead_time_minutes = resolution_date - created_date. The unit is minute.</li><li>For issues whose type != 'REQUIREMENT' or status != 'DONE', lead_time_minutes is null</li></ul>                                                                                                                                  [...]
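The `lead_time_minutes` rule above can be sketched as follows (a minimal illustration, not DevLake's actual implementation):

```python
from datetime import datetime

def lead_time_minutes(issue_type, status, created_date, resolution_date):
    # Per the schema note: only finished requirements get a lead time;
    # all other issues keep a null value.
    if issue_type != "REQUIREMENT" or status != "DONE":
        return None
    return int((resolution_date - created_date).total_seconds() // 60)

# A requirement resolved 2 days after creation:
lead_time_minutes("REQUIREMENT", "DONE",
                  datetime(2022, 7, 1), datetime(2022, 7, 3))  # 2880
```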
-
-#### 2. issue_labels
-
-This table shows the labels of issues. Multiple entries can exist per issue. This table can be used to filter issues by label name.
-
-| **field**  | **type** | **length** | **description** | **key**      |
-| :--------- | :------- | :--------- | :-------------- | :----------- |
-| `name`     | varchar  | 255        | Label name      |              |
-| `issue_id` | varchar  | 255        | Issue ID        | FK_issues.id |
-
-
-#### 3. issue_comments(WIP)
-
-This table shows the comments of issues. Issues with multiple comments are shown as multiple records. This table can be used to calculate _metric - issue response time_.
-
-| **field**      | **type** | **length** | **description**                                                                                                                                                                               | **key**      |
-| :------------- | :------- | :--------- | :-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :----------- |
-| `id`           | varchar  | 255        | The unique id of a comment                                                                                                                                                                    | PK           |
-| `issue_id`     | varchar  | 255        | Issue ID                                                                                                                                                                                      | FK_issues.id |
-| `user_id`      | varchar  | 255        | The id of the user who made the comment                                                                                                                                                       | FK_users.id  |
-| `body`         | longtext |            | The body/detail of the comment                                                                                                                                                                |              |
-| `created_date` | datetime | 3          | The creation date of the comment                                                                                                                                                              |              |
-| `updated_date` | datetime | 3          | The last time the comment was updated                                                                                         |              |
-| `position`     | int      |            | The position of a comment under an issue. It starts from 1. The position is sorted by comment created_date asc.<br/>Eg. If an issue has 5 comments, the position of the 1st created comment is 1. |              |
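The `position` rule can be illustrated with a small sketch (hypothetical helper, not DevLake code):

```python
def comment_positions(comments):
    # comments: list of (comment_id, created_date) tuples.
    # Position starts at 1, ordered by created_date ascending.
    ordered = sorted(comments, key=lambda c: c[1])
    return {comment_id: i + 1 for i, (comment_id, _) in enumerate(ordered)}

# The earliest-created comment gets position 1:
comment_positions([("c2", "2022-07-02"), ("c1", "2022-07-01")])
# {"c1": 1, "c2": 2}
```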
-
-#### 4. issue_changelog(WIP)
-
-This table shows the changelogs of issues. Issues with multiple changelogs are shown as multiple records.
-
-| **field**      | **type** | **length** | **description**                                                       | **key**      |
-| :------------- | :------- | :--------- | :-------------------------------------------------------------------- | :----------- |
-| `id`           | varchar  | 255        | The unique id of an issue changelog                                   | PK           |
-| `issue_id`     | varchar  | 255        | Issue ID                                                              | FK_issues.id |
-| `actor_id`     | varchar  | 255        | The id of the user who made the change                                | FK_users.id  |
-| `field`        | varchar  | 255        | The id of changed field                                               |              |
-| `from`         | varchar  | 255        | The original value of the changed field                               |              |
-| `to`           | varchar  | 255        | The new value of the changed field                                    |              |
-| `created_date` | datetime | 3          | The creation date of the changelog                                    |              |
-
-
-#### 5. issue_worklogs
-
-This table shows the work logged under issues. Usually, an issue has multiple worklogs logged by different developers.
-
-| **field**            | **type** | **length** | **description**                                                                              | **key**      |
-| :------------------- | :------- | :--------- | :------------------------------------------------------------------------------------------- | :----------- |
-| `issue_id`           | varchar  | 255        | Issue ID                                                                                     | FK_issues.id |
-| `author_id`          | varchar  | 255        | The id of the user who logged the work                                                       | FK_users.id  |
-| `comment`            | varchar  | 255        | The comment a user made while logging the work.                                              |              |
-| `time_spent_minutes` | int      |            | The time the user logged, normalized to minutes. Eg. 1d => 480, 4h30m => 270                 |              |
-| `logged_date`        | datetime | 3          | The time of this logging action                                                              |              |
-| `started_date`       | datetime | 3          | Start time of the worklog                                                                    |              |
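The normalization rule for logged time (e.g. 1d => 480, 4h30m => 270) could be sketched like this. Note the 8-hour workday for "d" is inferred from the example above, and this helper is illustrative only:

```python
import re

# Assumed unit values: a "day" of logged work is an 8-hour workday (480 minutes).
UNIT_MINUTES = {"d": 480, "h": 60, "m": 1}

def to_minutes(spec):
    # Parse a duration string such as "1d" or "4h30m" into minutes.
    return sum(int(n) * UNIT_MINUTES[u] for n, u in re.findall(r"(\d+)([dhm])", spec))

to_minutes("1d")     # 480
to_minutes("4h30m")  # 270
```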
-
-
-#### 6. boards
-
-A `board` is an issue list or a collection of issues. It's the abstraction of a Jira board, a Jira project or a [Github issue list](https://github.com/merico-dev/lake/issues). This table can be used to filter issues by the boards they belong to.
-
-| **field**      | **type** | **length** | **description**                                                                                                                                                                                                                                                                                                                                                                      | **key** |
-| :------------- | :------- | :--------- | :----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :------ |
-| `id`           | varchar  | 255        | A board's `id` is composed of "< plugin >:< Entity >:< PK0 >[:PK1]..." <ul><li>For a Github repo's issue list, the board id is like "< github >:< GithubRepos >:< GithubRepoId >". Eg. "github:GithubRepo:384111310"</li> <li>For a Jira board, the board id is like "< jira >:< JiraSourceId >:< JiraBoards >:< JiraBoardId >". Eg. "jira:1:JiraBoards:12"</li></ul> | PK      |
-| `name`           | varchar  | 255        | The name of the board. Note: the board name of a Github project 'merico-dev/lake' is 'merico-dev/lake', representing the [default issue list](https://github.com/merico-dev/lake/issues).                                                                                                                                                                                            |         |
-| `description`  | varchar  | 255        | The description of the board.                                                                                                                                                                                                                                                                                                                                                        |         |
-| `url`          | varchar  | 255        | The url of the board. Eg. https://Github.com/merico-dev/lake                                                                                                                                                                                                                                                                                                                         |         |
-| `created_date` | datetime | 3          | Board creation time                                                                                                                                                                                                                                                                                                                             |         |
-
-#### 7. board_issues
-
-This table shows the relation between boards and issues. This table can be used to filter issues by board.
-
-| **field**  | **type** | **length** | **description** | **key**      |
-| :--------- | :------- | :--------- | :-------------- | :----------- |
-| `board_id` | varchar  | 255        | Board id        | FK_boards.id |
-| `issue_id` | varchar  | 255        | Issue id        | FK_issues.id |
-
-#### 8. sprints
-
-A `sprint` is the abstraction of Jira sprints, TAPD iterations and Github milestones. A sprint contains a list of issues.
-
-| **field**           | **type** | **length** | **description**                                                                                                                                                                                                                                                                                                                                                                                                                                              [...]
-| :------------------ | :------- | :--------- | :----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- [...]
-| `id`                | varchar  | 255        | A sprint's `id` is composed of "< plugin >:< Entity >:< PK0 >[:PK1]..."<ul><li>A sprint in a Github repo is a milestone, the sprint id is like "< github >:< GithubRepos >:< GithubRepoId >:< milestoneNumber >".<br/>Eg. The id for this [sprint](https://github.com/merico-dev/lake/milestone/5) is "github:GithubRepo:384111310:5"</li><li>For a Jira Board, the id is like "< jira >:< JiraSourceId >< JiraBoards >:< JiraBoardsId >".<br/>Eg. "jira:1:J [...]
-| `name`              | varchar  | 255        | The name of sprint.<br/>For Github projects, the sprint name is the milestone name. For instance, 'v0.10.0 - Introduce Temporal to DevLake' is the name of this [sprint](https://github.com/merico-dev/lake/milestone/5).                                                                                                                                                                                                                                    [...]
-| `url`               | varchar  | 255        | The url of sprint.                                                                                                                                                                                                                                                                                                                                                                                                                                           [...]
-| `status`            | varchar  | 255        | There're 3 statuses of a sprint:<ul><li>CLOSED: a completed sprint</li><li>ACTIVE: a sprint started but not completed</li><li>FUTURE: a sprint that has not started</li></ul>                                                                                                                                                                                                                                                                                [...]
-| `started_date`      | datetime | 3          | The start time of a sprint                                                                                                                                                                                                                                                                                                                                                                                                                                   [...]
-| `ended_date`        | datetime | 3          | The planned/estimated end time of a sprint. It's usually set when planning a sprint.                                                                                                                                                                                                                                                                                                                                                                         [...]
-| `completed_date`    | datetime | 3          | The actual time to complete a sprint.                                                                                                                                                                                                                                                                                                                                                                                                                        [...]
-| `original_board_id` | varchar  | 255        | The id of the board where the sprint was first created. This field is not null only when this entity is transformed from Jira sprints.<br/>In Jira, sprint and board entities have 2 types of relation:<ul><li>A sprint is created based on a specific board. In this case, board(1):(n)sprint. The `original_board_id` is used to show this relation.</li><li>A sprint can be mapped to multiple boards, and a board can also show multiple sprints. In this case, board(n):(n)sprint. This relation is shown in the `board_sprints` table.</li></ul> |
-
-#### 9. sprint_issues
-
-This table shows the relation between sprints and the issues that have been added to them. It can be used to show metrics such as _'ratio of unplanned issues'_, _'completion rate of sprint issues'_, etc.
-
-| **field**        | **type** | **length** | **description**                                                                                                                                                                                                                                                                                                                                                                                                                                                 [...]
-| :--------------- | :------- | :--------- | :-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- [...]
-| `sprint_id`      | varchar  | 255        | Sprint id                                                                                                                                                                                                                                                                                                                                                                                                                                                       [...]
-| `issue_id`       | varchar  | 255        | Issue id                                                                                                                                                                                                                                                                                                                                                                                                                                                        [...]
-| `is_removed`     | bool     |            | If the issue is removed from this sprint, then TRUE; else FALSE                                                                                                                                                                                                                                                                                                                                                                                                 [...]
-| `added_date`     | datetime | 3          | The time this issue added to the sprint. If an issue is added to a sprint multiple times, the latest time will be the value.                                                                                                                                                                                                                                                                                                                                    [...]
-| `removed_date`   | datetime | 3          | The time this issue gets removed from the sprint. If an issue is removed multiple times, the latest time will be the value.                                                                                                                                                                                                                                                                                                                                     [...]
-| `added_stage`    | varchar  | 255        | The stage when the issue is added to this sprint. There're 3 possible values:<ul><li>BEFORE_SPRINT<br/>Planning before the sprint starts.<br/>Condition: sprint_issues.added_date <= sprints.start_date</li><li>DURING_SPRINT<br/>Planning during the sprint.<br/>Condition: sprints.start_date < sprint_issues.added_date <= sprints.end_date</li><li>AFTER_SPRINT<br/>Planning after the sprint. This is caused by improper operation - adding issues to a completed sprint.<br/>Condition: sprint_issues.added_date > sprints.end_date</li></ul> |
-| `resolved_stage` | varchar  | 255        | The stage when an issue is resolved (issue status turns to 'DONE'). There're 3 possible values:<ul><li>BEFORE_SPRINT<br/>Condition: issues.resolution_date <= sprints.start_date</li><li>DURING_SPRINT<br/>Condition: sprints.start_date < issues.resolution_date <= sprints.end_date</li><li>AFTER_SPRINT<br/>Condition: issues.resolution_date > sprints.end_date</li></ul> |
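
The _'ratio of unplanned issues'_ metric mentioned above can be derived directly from `added_stage`. Below is a minimal, illustrative sketch using an in-memory SQLite mirror of the table with made-up sprint and issue ids; DevLake's actual warehouse is MySQL, so treat the dialect and sample data as assumptions.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sprint_issues (
    sprint_id   TEXT,
    issue_id    TEXT,
    added_stage TEXT  -- BEFORE_SPRINT / DURING_SPRINT / AFTER_SPRINT
);
INSERT INTO sprint_issues VALUES
    ('jira:JiraSprints:1', 'jira:JiraIssues:101', 'BEFORE_SPRINT'),
    ('jira:JiraSprints:1', 'jira:JiraIssues:102', 'BEFORE_SPRINT'),
    ('jira:JiraSprints:1', 'jira:JiraIssues:103', 'DURING_SPRINT'),
    ('jira:JiraSprints:1', 'jira:JiraIssues:104', 'DURING_SPRINT');
""")
# Issues added after the sprint started count as "unplanned".
ratio = conn.execute("""
    SELECT 1.0 * SUM(added_stage != 'BEFORE_SPRINT') / COUNT(*)
    FROM sprint_issues
    WHERE sprint_id = 'jira:JiraSprints:1'
""").fetchone()[0]
print(ratio)  # 0.5
```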
-
-#### 10. board_sprints
-
-| **field**   | **type** | **length** | **description** | **key**       |
-| :---------- | :------- | :--------- | :-------------- | :------------ |
-| `board_id`  | varchar  | 255        | Board id        | FK_boards.id  |
-| `sprint_id` | varchar  | 255        | Sprint id       | FK_sprints.id |
-
-<br/>
-
-### Domain 2 - Source Code Management
-
-#### 11. repos
-
-Information about Github or Gitlab repositories. A repository is always owned by a user.
-
-| **field**      | **type** | **length** | **description**                                                                                                                                                                                | **key**     |
-| :------------- | :------- | :--------- | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :---------- |
-| `id`           | varchar  | 255        | A repo's `id` is composed of "< plugin >:< Entity >:< PK0 >[:PK1]..."<br/>For example, a Github repo's id is composed of "github:GithubRepos:< GithubRepoId >". Eg. 'github:GithubRepos:384111310' | PK          |
-| `name`         | varchar  | 255        | The name of repo.                                                                                                                                                                              |             |
-| `description`  | varchar  | 255        | The description of repo.                                                                                                                                                                       |             |
-| `url`          | varchar  | 255        | The url of repo. Eg. https://Github.com/merico-dev/lake                                                                                                                                        |             |
-| `owner_id`     | varchar  | 255        | The id of the owner of repo                                                                                                                                                                    | FK_users.id |
-| `language`     | varchar  | 255        | The major language of repo. Eg. The language for merico-dev/lake is 'Go'                                                                                                                       |             |
-| `forked_from`  | varchar  | 255        | Empty unless the repo is a fork, in which case it contains the `id` of the repo it is forked from.                                                                                              |             |
-| `deleted`      | tinyint  | 255        | 0: repo is active 1: repo has been deleted                                                                                                                                                     |             |
-| `created_date` | datetime | 3          | Repo creation date                                                                                                                                                                             |             |
-| `updated_date` | datetime | 3          | Last full update was done for this repo                                                                                                                                                        |             |
-
-#### 12. repo_languages(WIP)
-
-Languages that are used in the repository along with byte counts for all files in those languages. This is in line with how Github calculates language percentages in a repository. Multiple entries can exist per repo.
-
-The table is filled in when the repo is first inserted or when an update round for all repos is made.
-
-| **field**      | **type** | **length** | **description**                                                                                                                                                                                    | **key** |
-| :------------- | :------- | :--------- | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :------ |
-| `id`           | varchar  | 255        | A repo's `id` is composed of "< plugin >:< Entity >:< PK0 >[:PK1]..."<br/>For example, a Github repo's id is composed of "github:GithubRepos:< GithubRepoId >". Eg. 'github:GithubRepos:384111310' | PK      |
-| `language`     | varchar  | 255        | The language of repo.<br/>These are the [languages](https://api.github.com/repos/merico-dev/lake/languages) for merico-dev/lake                                                                    |         |
-| `bytes`        | int      |            | The byte counts for all files in those languages                                                                                                                                                   |         |
-| `created_date` | datetime | 3          | The latest timestamp at which the query for this `repo_id` was made.                                                                                                                              |         |
-
-#### 13. repo_commits
-
-The commits that belong to the history of a repository. More than one repo can share the same commits if one is a fork of the other.
-
-| **field**    | **type** | **length** | **description** | **key**        |
-| :----------- | :------- | :--------- | :-------------- | :------------- |
-| `repo_id`    | varchar  | 255        | Repo id         | FK_repos.id    |
-| `commit_sha` | char     | 40         | Commit sha      | FK_commits.sha |
-
-#### 14. refs
-
-A ref is the abstraction of a branch or tag.
-
-| **field**    | **type** | **length** | **description**                                                                                                                                                                                                                                                                                                                                             | **key**     |
-| :----------- | :------- | :--------- | :---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :---------- |
-| `id`         | varchar  | 255        | A ref's `id` is composed of "< plugin >:< Entity >:< PK0 >[:PK1]..."<br/>For example, a Github ref's id is composed of "github:GithubRepos:< GithubRepoId >:< RefUrl >". Eg. The id of release v5.3.0 of the PingCAP/TiDB project is 'github:GithubRepos:384111310:refs/tags/v5.3.0' | PK          |
-| `ref_name`   | varchar  | 255        | The name of ref. Eg. '[refs/tags/v0.9.3](https://github.com/merico-dev/lake/tree/v0.9.3)'                                                                                                                                                                                                                                                                   |             |
-| `repo_id`    | varchar  | 255        | The id of repo this ref belongs to                                                                                                                                                                                                                                                                                                                          | FK_repos.id |
-| `commit_sha` | char     | 40         | The commit this ref points to at the time of collection                                                                                                                                                                                                                                                                                                     |             |
-| `is_default` | int      |            | <ul><li>0: the ref is the default branch. By the definition of [Github](https://docs.github.com/en/repositories/configuring-branches-and-merges-in-your-repository/managing-branches-in-your-repository/changing-the-default-branch), the default branch is the base branch for pull requests and code commits.</li><li>1: not the default branch</li></ul> |             |
-| `merge_base` | char     | 40         | The merge base commit of the main ref and the current ref                                                                                                                                                                                                                                                                                                   |             |
-| `ref_type`   | varchar  | 64         | There're 2 typical types:<ul><li>BRANCH</li><li>TAG</li></ul>                                                                                                                                                                                                                                                                                               |             |
-
-#### 15. refs_commits_diffs
-
-This table shows the commits added in a new ref compared to an old ref. It can be used to support tag-based analysis, for instance, '_No. of commits of a tag_', '_No. of merged pull requests of a tag_', etc.
-
-The records of this table are computed by the [RefDiff](https://github.com/merico-dev/lake/tree/main/plugins/refdiff) plugin. The computation should be manually triggered after using [GitRepoExtractor](https://github.com/merico-dev/lake/tree/main/plugins/gitextractor) to collect commits and refs. The underlying algorithm is similar to [this](https://github.com/merico-dev/lake/compare/v0.8.0%E2%80%A6v0.9.0).
-
-| **field**            | **type** | **length** | **description**                                                 | **key**        |
-| :------------------- | :------- | :--------- | :-------------------------------------------------------------- | :------------- |
-| `commit_sha`         | char     | 40         | One of the added commits in the new ref compared to the old ref | FK_commits.sha |
-| `new_ref_id`         | varchar  | 255        | The new ref's id for comparison                                 | FK_refs.id     |
-| `old_ref_id`         | varchar  | 255        | The old ref's id for comparison                                 | FK_refs.id     |
-| `new_ref_commit_sha` | char     | 40         | The commit new ref points to at the time of collection          |                |
-| `old_ref_commit_sha` | char     | 40         | The commit old ref points to at the time of collection          |                |
-| `sorting_index`      | varchar  | 255        | An index for debugging, please skip it                          |                |
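
The _'No. of commits of a tag'_ metric described above reduces to counting rows of this table for a given ref pair. Below is a minimal sketch against an in-memory SQLite copy of `refs_commits_diffs` with fabricated shas and ref ids (illustrative only; the real warehouse is MySQL).

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE refs_commits_diffs (
    commit_sha TEXT,
    new_ref_id TEXT,
    old_ref_id TEXT
);
INSERT INTO refs_commits_diffs VALUES
    ('c1', 'github:GithubRepos:1:refs/tags/v0.9.0', 'github:GithubRepos:1:refs/tags/v0.8.0'),
    ('c2', 'github:GithubRepos:1:refs/tags/v0.9.0', 'github:GithubRepos:1:refs/tags/v0.8.0'),
    ('c3', 'github:GithubRepos:1:refs/tags/v0.9.0', 'github:GithubRepos:1:refs/tags/v0.8.0');
""")
# Count the commits added in v0.9.0 relative to v0.8.0.
n = conn.execute("""
    SELECT COUNT(DISTINCT commit_sha)
    FROM refs_commits_diffs
    WHERE new_ref_id LIKE '%refs/tags/v0.9.0'
      AND old_ref_id LIKE '%refs/tags/v0.8.0'
""").fetchone()[0]
print(n)  # 3
```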
-
-#### 16. commits
-
-| **field**         | **type** | **length** | **description**                                                                                                                                                  | **key**        |
-| :---------------- | :------- | :--------- | :--------------------------------------------------------------------------------------------------------------------------------------------------------------- | :------------- |
-| `sha`             | char     | 40         | The sha of the commit                                                                                                                                            | PK             |
-| `message`         | varchar  | 255        | Commit message                                                                                                                                                   |                |
-| `author_name`     | varchar  | 255        | The name of the commit author, as set with the command `git config user.name <name>`                                                                             |                |
-| `author_email`    | varchar  | 255        | The email of the commit author, as set with the command `git config user.email <email>`                                                                          |                |
-| `authored_date`   | datetime | 3          | The date when this commit was originally made                                                                                                                    |                |
-| `author_id`       | varchar  | 255        | The id of commit author                                                                                                                                          | FK_users.id    |
-| `committer_name`  | varchar  | 255        | The name of committer                                                                                                                                            |                |
-| `committer_email` | varchar  | 255        | The email of committer                                                                                                                                           |                |
-| `committed_date`  | datetime | 3          | The last time the commit was modified.<br/>For example, when a branch containing the commit is rebased onto another branch, the committed_date changes.          |                |
-| `committer_id`    | varchar  | 255        | The id of committer                                                                                                                                              | FK_users.id    |
-| `additions`       | int      |            | Added lines of code                                                                                                                                              |                |
-| `deletions`       | int      |            | Deleted lines of code                                                                                                                                            |                |
-| `dev_eq`          | int      |            | A metric that quantifies the amount of code contribution. The data can be retrieved from [AE plugin](https://github.com/merico-dev/lake/tree/v0.9.3/plugins/ae). |                |
-
-
-#### 17. commit_files
-
-The files that have been changed via commits. Multiple entries can exist per commit.
-
-| **field**    | **type** | **length** | **description**                        | **key**        |
-| :----------- | :------- | :--------- | :------------------------------------- | :------------- |
-| `commit_sha` | char     | 40         | Commit sha                             | FK_commits.sha |
-| `file_path`  | varchar  | 255        | Path of a changed file in a commit     |                |
-| `additions`  | int      |            | The added lines of code in this file   |                |
-| `deletions`  | int      |            | The deleted lines of code in this file |                |
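A common use of `commit_files` is computing code churn (added plus deleted lines) per file. Below is a minimal sketch on an in-memory SQLite copy of the table with invented shas and file paths (illustrative only; the real warehouse is MySQL).

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE commit_files (
    commit_sha TEXT,
    file_path  TEXT,
    additions  INTEGER,
    deletions  INTEGER
);
INSERT INTO commit_files VALUES
    ('c1', 'plugins/github/api.go', 10, 2),
    ('c2', 'plugins/github/api.go', 5,  1),
    ('c2', 'README.md',             3,  0);
""")
# Total churn per file, most-changed files first.
churn = conn.execute("""
    SELECT file_path, SUM(additions + deletions) AS churn
    FROM commit_files
    GROUP BY file_path
    ORDER BY churn DESC
""").fetchall()
print(churn)  # [('plugins/github/api.go', 18), ('README.md', 3)]
```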
-
-#### 18. commit_comments(WIP)
-
-Code review comments on commits. These are comments on individual commits. If a commit is associated with a pull request, then its comments are in the [pull_request_comments](https://merico.feishu.cn/docs/doccnvyuG9YpVc6lvmWkmmbZtUc#xt2lv4) table.
-
-| **field**      | **type** | **length** | **description**                     | **key**        |
-| :------------- | :------- | :--------- | :---------------------------------- | :------------- |
-| `id`           | varchar  | 255        | Unique comment id                   |                |
-| `commit_sha`   | char     | 40         | Commit sha                          | FK_commits.sha |
-| `user_id`      | varchar  | 255        | Id of the user who made the comment |                |
-| `created_date` | datetime | 3          | Comment creation time               |                |
-| `body`         | longtext |            | Comment body/detail                 |                |
-| `line`         | int      |            |                                     |                |
-| `position`     | int      |            |                                     |                |
-
-#### 19. commit_parents
-
-The parent commit(s) for each commit, as specified by Git.
-
-| **field**    | **type** | **length** | **description**   | **key**        |
-| :----------- | :------- | :--------- | :---------------- | :------------- |
-| `commit_sha` | char     | 40         | commit sha        | FK_commits.sha |
-| `parent`     | char     | 40         | Parent commit sha | FK_commits.sha |
-
-<br/>
-
-### Domain 3 - Code Review
-
-#### 20. pull_requests
-
-A pull request is the abstraction of a Github pull request and a Gitlab merge request.
-
-| **field**          | **type** | **length** | **description**                                                                                                                                                                                                                                                                                                                                                                                | **key**        |
-| :----------------- | :------- | :--------- | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :------------- |
-| `id`               | varchar  | 255        | A pull request's `id` is composed of "< plugin >:< Entity >:< PK0 >[:PK1]...". Eg. 'github:GithubPullRequests:1347'                                                                                                                                                                                                                                                                            | PK             |
-| `title`            | varchar  | 255        | The title of pull request                                                                                                                                                                                                                                                                                                                                                                      |                |
-| `description`      | longtext |            | The body/description of pull request                                                                                                                                                                                                                                                                                                                                                           |                |
-| `status`           | varchar  | 255        | The status of the pull request. For a Github pull request, the status can either be 'open' or 'closed'.                                                                                                                                                                                                                                                                                        |                |
-| `number`           | varchar  | 255        | The number of the PR. Eg. 1563 is the number of this [PR](https://github.com/merico-dev/lake/pull/1563)                                                                                                                                                                                                                                                                                        |                |
-| `base_repo_id`     | varchar  | 255        | The repo that will be updated.                                                                                                                                                                                                                                                                                                                                                                 |                |
-| `head_repo_id`     | varchar  | 255        | The repo containing the changes that will be added to the base. If the head repository is NULL, this means that the corresponding project had been deleted when DevLake processed the pull request.                                                                                                                                                                                            |                |
-| `base_ref`         | varchar  | 255        | The branch name in the base repo that will be updated                                                                                                                                                                                                                                                                                                                                          |                |
-| `head_ref`         | varchar  | 255        | The branch name in the head repo that contains the changes that will be added to the base                                                                                                                                                                                                                                                                                                      |                |
-| `author_name`      | varchar  | 255        | The creator's name of the pull request                                                                                                                                                                                                                                                                                                                                                         |                |
-| `author_id`        | varchar  | 255        | The creator's id of the pull request                                                                                                                                                                                                                                                                                                                                                           |                |
-| `url`              | varchar  | 255        | The web link of the pull request                                                                                                                                                                                                                                                                                                                                                               |                |
-| `type`             | varchar  | 255        | The work-type of a pull request. For example: feature-development, bug-fix, docs, etc.<br/>The value is transformed from Github pull request labels by configuring `GITHUB_PR_TYPE` in `.env` file during installation.                                                                                                                                                                        |                |
-| `component`        | varchar  | 255        | The component this PR affects.<br/>The value is transformed from Github/Gitlab pull request labels by configuring `GITHUB_PR_COMPONENT` in `.env` file during installation.                                                                                                                                                                                                                    |                |
-| `created_date`     | datetime | 3          | The time PR created.                                                                                                                                                                                                                                                                                                                                                                           |                |
-| `merged_date`      | datetime | 3          | The time PR gets merged. Null when the PR is not merged.                                                                                                                                                                                                                                                                                                                                       |                |
-| `closed_date`      | datetime | 3          | The time PR closed. Null when the PR is not closed.                                                                                                                                                                                                                                                                                                                                            |                |
-| `merge_commit_sha` | char     | 40         | The merge commit of this PR. By the definition of [Github](https://docs.github.com/en/repositories/configuring-branches-and-merges-in-your-repository/managing-branches-in-your-repository/changing-the-default-branch), when you click the default Merge pull request option on a pull request on Github, all commits from the feature branch are added to the base branch in a merge commit. |                |
-
-#### 21. pull_request_labels
-
-This table shows the labels of pull requests. Multiple entries can exist per pull request. It can be used to filter pull requests by label name.
-
-| **field**         | **type** | **length** | **description** | **key**             |
-| :---------------- | :------- | :--------- | :-------------- | :------------------ |
-| `name`            | varchar  | 255        | Label name      |                     |
-| `pull_request_id` | varchar  | 255        | Pull request ID | FK_pull_requests.id |
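
Filtering pull requests by label, as described above, is a join on `pull_request_labels.pull_request_id`. Below is a minimal sketch on an in-memory SQLite copy of the two tables with made-up ids, titles, and labels (illustrative only; the real warehouse is MySQL).

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE pull_requests (id TEXT, title TEXT);
CREATE TABLE pull_request_labels (name TEXT, pull_request_id TEXT);
INSERT INTO pull_requests VALUES
    ('github:GithubPullRequests:1', 'fix: null pointer'),
    ('github:GithubPullRequests:2', 'feat: new dashboard');
INSERT INTO pull_request_labels VALUES
    ('bug',     'github:GithubPullRequests:1'),
    ('feature', 'github:GithubPullRequests:2');
""")
# Select pull requests carrying the 'bug' label via the FK join.
titles = [row[0] for row in conn.execute("""
    SELECT pr.title
    FROM pull_requests pr
    JOIN pull_request_labels l ON l.pull_request_id = pr.id
    WHERE l.name = 'bug'
""")]
print(titles)  # ['fix: null pointer']
```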
-
-#### 22. pull_request_commits
-
-A commit associated with a pull request
-
-The list is additive. This means if a rebase with commit squashing takes place after the commits of a pull request have been processed, the old commits will not be deleted.
-
-| **field**         | **type** | **length** | **description** | **key**             |
-| :---------------- | :------- | :--------- | :-------------- | :------------------ |
-| `pull_request_id` | varchar  | 255        | Pull request id | FK_pull_requests.id |
-| `commit_sha`      | char     | 40         | Commit sha      | FK_commits.sha      |
-
-#### 23. pull_request_comments(WIP)
-
-A code review comment on a commit associated with a pull request
-
-The list is additive. If commits are squashed on the head repo, the comments remain intact.
-
-| **field**         | **type** | **length** | **description**                                                                                                                                                                                     | **key**             |
-| :---------------- | :------- | :--------- | :-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :------------------ |
-| `id`              | varchar  | 255        | Comment id                                                                                                                                                                                          | PK                  |
-| `pull_request_id` | varchar  | 255        | Pull request id                                                                                                                                                                                     | FK_pull_requests.id |
-| `user_id`         | varchar  | 255        | Id of user who made the comment                                                                                                                                                                     | FK_users.id         |
-| `created_date`    | datetime | 3          | Comment creation time                                                                                                                                                                               |                     |
-| `body`            | longtext |            | The body of the comment                                                                                                                                                                             |                     |
-| `position`        | int      |            | The position of a comment under a pull request. It starts from 1. The position is sorted by comment created_date asc.<br/>Eg. If a PR has 5 comments, the position of the 1st created comment is 1. |                     |
-
-#### 24. pull_request_events(WIP)
-
-Events of pull requests.
-
-| **field**         | **type** | **length** | **description**                                                                                                                                                                                                                                                                                                                                                                                                                                          | **k [...]
-| :---------------- | :------- | :--------- | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :-- [...]
-| `id`              | varchar  | 255        | Event id                                                                                                                                                                                                                                                                                                                                                                                                                                                 | PK  [...]
-| `pull_request_id` | varchar  | 255        | Pull request id                                                                                                                                                                                                                                                                                                                                                                                                                                          | FK_ [...]
-| `action`          | varchar  | 255        | The action taken; some values:<ul><li>`opened`: when the pull request has been opened</li><li>`closed`: when the pull request has been closed</li><li>`merged`: when GitHub detected that the pull request has been merged. Merges outside GitHub (i.e. Git-based) are not reported</li><li>`reopened`: when a pull request is opened after being closed</li><li>`synchronize`: when new commits are added to or removed from the head repository</li></ul> |     [...]
-| `actor_id`        | varchar  | 255        | The user id of the event performer                                                                                                                                                                                                                                                                                                                                                                                                                       | FK_ [...]
-| `created_date`    | datetime | 3          | Event creation time                                                                                                                                                                                                                                                                                                                                                                                                                                      |     [...]
-
-<br/>
-
-### Domain 4 - CI/CD(WIP)
-
-#### 25. jobs
-
-A job is the definition or schedule of a CI/CD pipeline, not a specific execution of it.
-
-| **field** | **type** | **length** | **description** | **key** |
-| :-------- | :------- | :--------- | :-------------- | :------ |
-| `id`      | varchar  | 255        | Job id          | PK      |
-| `name`    | varchar  | 255        | Name of job     |         |
-
-#### 26. builds
-
-A build is an execution of a job.
-
-| **field**      | **type** | **length** | **description**                                                  | **key**    |
-| :------------- | :------- | :--------- | :--------------------------------------------------------------- | :--------- |
-| `id`           | varchar  | 255        | Build id                                                         | PK         |
-| `job_id`       | varchar  | 255        | Id of the job this build belongs to                              | FK_jobs.id |
-| `name`         | varchar  | 255        | Name of build                                                    |            |
-| `duration_sec` | bigint   |            | The duration of build in seconds                                 |            |
-| `started_date` | datetime | 3          | Started time of the build                                        |            |
-| `status`       | varchar  | 255        | The result of the build. The values may be 'success', 'failed', etc. |            |
-| `commit_sha`   | char     | 40         | The specific commit being built on. Nullable.                    |            |
-
-
-### Cross-Domain Entities
-
-These entities are used to map entities between different domains. They are the key to breaking data isolation.
-
-There are low-level entities such as `issue_commits` and `users`, and higher-level cross-domain entities such as `board_repos`.
-
-#### 27. issue_commits
-
-A low-level mapping between the "issue tracking" and "source code management" domains by mapping `issues` and `commits`. Issue(n): Commit(n).
-
-The original connection between these two entities is recorded in either issue tracking tools like Jira or source code management tools like GitLab; establishing it requires tool support.
-
-For example, a common way to connect Jira issues and GitLab commits is the GitLab plugin [Jira Integration](https://docs.gitlab.com/ee/integration/jira/). With this plugin, the Jira issue key in a commit message written by the committer is parsed, and the plugin adds the commit URL under that Jira issue. Hence, DevLake's [Jira plugin](https://github.com/merico-dev/lake/tree/main/plugins/jira) can get the related commits (including repo, commit_id, and url) of an issue.
-
-| **field**    | **type** | **length** | **description** | **key**        |
-| :----------- | :------- | :--------- | :-------------- | :------------- |
-| `issue_id`   | varchar  | 255        | Issue id        | FK_issues.id   |
-| `commit_sha` | char     | 40         | Commit sha      | FK_commits.sha |
-
-#### 28. pull_request_issues
-
-This table shows the issues closed by pull requests. It's a medium-level mapping between the "issue tracking" and "source code management" domains, linking `pull_requests` and `issues`. PullRequest(n): Issue(n).
-
-The data is extracted from the bodies of pull requests that match a certain regular expression, which can be defined via `GITHUB_PR_BODY_CLOSE_PATTERN` in the `.env` file.
-
-| **field**             | **type** | **length** | **description**     | **key**             |
-| :-------------------- | :------- | :--------- | :------------------ | :------------------ |
-| `pull_request_id`     | char     | 40         | Pull request id     | FK_pull_requests.id |
-| `issue_id`            | varchar  | 255        | Issue id            | FK_issues.id        |
-| `pull_request_number` | varchar  | 255        | Pull request number |                     |
-| `issue_number`        | varchar  | 255        | Issue number        |                     |
-
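As an illustration of how such extraction works, the hypothetical Go snippet below parses issue numbers out of a PR body with a closing-keyword pattern. The pattern and function name are assumptions made for this sketch; the real expression is whatever `GITHUB_PR_BODY_CLOSE_PATTERN` is configured to.

```go
package main

import (
	"fmt"
	"regexp"
)

// A hypothetical pattern resembling common closing keywords; the actual
// expression is whatever GITHUB_PR_BODY_CLOSE_PATTERN is set to in .env.
var closePattern = regexp.MustCompile(`(?i)\b(?:close[sd]?|fix(?:e[sd])?|resolve[sd]?)\s+#(\d+)`)

// extractClosedIssueNumbers returns the issue numbers a PR body claims to close.
func extractClosedIssueNumbers(prBody string) []string {
	var numbers []string
	for _, m := range closePattern.FindAllStringSubmatch(prBody, -1) {
		numbers = append(numbers, m[1])
	}
	return numbers
}

func main() {
	fmt.Println(extractClosedIssueNumbers("Fixes #42 and closes #7")) // [42 7]
}
```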
-#### 29. board_repo(WIP)
-
-A rough way to link the "issue tracking" and "source code management" domains by mapping `boards` and `repos`. Board(n): Repo(n).
-
-The mapping logic is under development.
-
-| **field**  | **type** | **length** | **description** | **key**      |
-| :--------- | :------- | :--------- | :-------------- | :----------- |
-| `board_id` | varchar  | 255        | Board id        | FK_boards.id |
-| `repo_id`  | varchar  | 255        | Repo id         | FK_repos.id  |
-
-#### 30. users(WIP)
-
-This is the table to unify user identities across tools. It can be used for all user-based metrics, such as _'No. of issues closed by contributor'_ and _'No. of commits by contributor'_.
-
-| **field**      | **type** | **length** | **description**                                                                                                                                                                                         | **key** |
-| :------------- | :------- | :--------- | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | :------ |
-| `id`           | varchar  | 255        | A user's `id` is composed of "&lt;Plugin&gt;:&lt;Entity&gt;:&lt;PK0&gt;[:PK1]..."<br/>For example, a GitHub user's id is composed of "github:GithubUsers:&lt;GithubUserId&gt;", e.g. 'github:GithubUsers:14050754' | PK      |
-| `user_name`    | varchar  | 255        | Username/GitHub login of a user                                                                                                                                                                         |         |
-| `fullname`     | varchar  | 255        | User's full name                                                                                                                                                                                        |         |
-| `email`        | varchar  | 255        | Email                                                                                                                                                                                                   |         |
-| `avatar_url`   | varchar  | 255        |                                                                                                                                                                                                         |         |
-| `organization` | varchar  | 255        | User's organization or company name                                                                                                                                                                     |         |
-| `created_date` | datetime | 3          | User creation time                                                                                                                                                                                      |         |
-| `deleted`      | tinyint  |            | 0 (default): the user is active; 1: the user is no longer active                                                                                                                                        |         |
-
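A minimal sketch of the id composition rule described above; the helper name is hypothetical:

```go
package main

import (
	"fmt"
	"strings"
)

// domainUserId composes the cross-domain user id
// "<Plugin>:<Entity>:<PK0>[:PK1]..." from its parts.
func domainUserId(plugin, entity string, pks ...string) string {
	return strings.Join(append([]string{plugin, entity}, pks...), ":")
}

func main() {
	fmt.Println(domainUserId("github", "GithubUsers", "14050754"))
	// github:GithubUsers:14050754
}
```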
-<br/>
-
-## DWM Entities - (Data Warehouse Middle)
-
-DWM entities are light aggregations and transformations of DWD entities that store more organized details or middle-level metrics.
-
-#### 31. issue_status_history
-
-This table shows the history of 'status change' of issues. It can be used to break down _'issue lead time'_ into _'issue staying time in each status'_ to identify the bottleneck of the delivery workflow.
-
-| **field**         | **type** | **length** | **description**                 | **key**         |
-| :---------------- | :------- | :--------- | :------------------------------ | :-------------- |
-| `issue_id`        | varchar  | 255        | Issue id                        | PK, FK_issue.id |
-| `original_status` | varchar  | 255        | The original status of an issue |                 |
-| `start_date`      | datetime | 3          | The start time of the status    |                 |
-| `end_date`        | datetime | 3          | The end time of the status      |                 |
-
-#### 32. issue_assignee_history
-
-This table shows the 'assignee change history' of issues. This table can be used to identify _'the actual developer of an issue',_ or _'contributor involved in an issue'_ for contribution analysis.
-
-| **field**    | **type** | **length** | **description**                                    | **key**         |
-| :----------- | :------- | :--------- | :------------------------------------------------- | :-------------- |
-| `issue_id`   | varchar  | 255        | Issue id                                           | PK, FK_issue.id |
-| `assignee`   | varchar  | 255        | The name of assignee of an issue                   |                 |
-| `start_date` | datetime | 3          | The time when the issue is assigned to an assignee |                 |
-| `end_date`   | datetime | 3          | The time when the assignee changes                 |                 |
-
-#### 33. issue_sprints_history
-
-This table shows the 'scope change history' of sprints. It can be used to analyze _'how much and how frequently a team changes plans'_.
-
-| **field**    | **type** | **length** | **description**                                    | **key**         |
-| :----------- | :------- | :--------- | :------------------------------------------------- | :-------------- |
-| `issue_id`   | varchar  | 255        | Issue id                                           | PK, FK_issue.id |
-| `sprint_id`  | varchar  | 255        | Sprint id                                          | FK_sprints.id   |
-| `start_date` | datetime | 3          | The time when the issue added to a sprint          |                 |
-| `end_date`   | datetime | 3          | The time when the issue gets removed from a sprint |                 |
-
-#### 34. refs_issues_diffs
-
-This table shows the issues fixed by commits added in a new ref compared to an old one. The data is computed from [table.ref_commits_diff](https://merico.feishu.cn/docs/doccnvyuG9YpVc6lvmWkmmbZtUc#yJOyqa), [table.pull_requests](https://merico.feishu.cn/docs/doccnvyuG9YpVc6lvmWkmmbZtUc#Uc849c), [table.pull_request_commits](https://merico.feishu.cn/docs/doccnvyuG9YpVc6lvmWkmmbZtUc#G9cPfj), and [table.pull_request_issues](https://merico.feishu.cn/docs/doccnvyuG9YpVc6lvmWkmmbZtUc#we6Uac).
-
-This table can support tag-based analysis, for instance, '_No. of bugs closed in a tag_'.
-
-| **field**            | **type** | **length** | **description**                                        | **key**      |
-| :------------------- | :------- | :--------- | :----------------------------------------------------- | :----------- |
-| `new_ref_id`         | varchar  | 255        | The new ref's id for comparison                        | FK_refs.id   |
-| `old_ref_id`         | varchar  | 255        | The old ref's id for comparison                        | FK_refs.id   |
-| `new_ref_commit_sha` | char     | 40         | The commit new ref points to at the time of collection |              |
-| `old_ref_commit_sha` | char     | 40         | The commit old ref points to at the time of collection |              |
-| `issue_number`       | varchar  | 255        | Issue number                                           |              |
-| `issue_id`           | varchar  | 255        | Issue id                                               | FK_issues.id |
diff --git a/versioned_docs/version-v0.11.0/DataModels/_category_.json b/versioned_docs/version-v0.11.0/DataModels/_category_.json
deleted file mode 100644
index e678e71..0000000
--- a/versioned_docs/version-v0.11.0/DataModels/_category_.json
+++ /dev/null
@@ -1,4 +0,0 @@
-{
-  "label": "Data Models",
-  "position": 5
-}
diff --git a/versioned_docs/version-v0.11.0/DeveloperManuals/DBMigration.md b/versioned_docs/version-v0.11.0/DeveloperManuals/DBMigration.md
deleted file mode 100644
index 9530237..0000000
--- a/versioned_docs/version-v0.11.0/DeveloperManuals/DBMigration.md
+++ /dev/null
@@ -1,37 +0,0 @@
----
-title: "DB Migration"
-description: >
-  DB Migration
-sidebar_position: 3
----
-
-## Summary
-Starting in v0.10.0, DevLake provides a lightweight migration tool for executing migration scripts.
-Both the framework itself and plugins define their migration scripts in their own migration folders.
-The migration scripts are written with gorm in Golang to support different SQL dialects.
-
-
-## Migration Script
-A migration script describes how to perform a database migration.
-Scripts implement the `Script` interface.
-When DevLake starts, scripts register themselves with the framework by invoking the `Register` function:
-
-```go
-type Script interface {
-	Up(ctx context.Context, db *gorm.DB) error
-	Version() uint64
-	Name() string
-}
-```
-
-## Table `migration_history`
-
-This table tracks the execution of migration scripts and the resulting schema changes,
-from which DevLake can figure out the current state of the database schema.
-
-
-## How It Works
-1. Check the `migration_history` table and determine which migration scripts need to be executed.
-2. Sort scripts by Version in ascending order.
-3. Execute scripts.
-4. Save results in the `migration_history` table.
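The four steps above can be sketched as follows. The `script` interface here is a simplified stand-in for the real `Script` interface (which also takes a context and a `*gorm.DB` in `Up`), and the in-memory map stands in for the `migration_history` table:

```go
package main

import (
	"fmt"
	"sort"
)

// script is a simplified stand-in for DevLake's Script interface.
type script interface {
	Up() error
	Version() uint64
	Name() string
}

// runMigrations sketches the four steps: filter out already-executed
// versions, sort ascending by Version, execute, and record each success.
func runMigrations(scripts []script, executed map[uint64]bool) ([]uint64, error) {
	var pending []script
	for _, s := range scripts {
		if !executed[s.Version()] {
			pending = append(pending, s)
		}
	}
	sort.Slice(pending, func(i, j int) bool { return pending[i].Version() < pending[j].Version() })
	var ran []uint64
	for _, s := range pending {
		if err := s.Up(); err != nil {
			return ran, err
		}
		executed[s.Version()] = true // persisted to migration_history in reality
		ran = append(ran, s.Version())
	}
	return ran, nil
}

// noopScript is a trivial script used only for demonstration.
type noopScript struct{ v uint64 }

func (s noopScript) Up() error       { return nil }
func (s noopScript) Version() uint64 { return s.v }
func (s noopScript) Name() string    { return fmt.Sprintf("migration-%d", s.v) }

func main() {
	ran, _ := runMigrations(
		[]script{noopScript{20220301}, noopScript{20220101}, noopScript{20220201}},
		map[uint64]bool{20220101: true},
	)
	fmt.Println(ran) // [20220201 20220301]
}
```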
diff --git a/versioned_docs/version-v0.11.0/DeveloperManuals/Dal.md b/versioned_docs/version-v0.11.0/DeveloperManuals/Dal.md
deleted file mode 100644
index 9b08542..0000000
--- a/versioned_docs/version-v0.11.0/DeveloperManuals/Dal.md
+++ /dev/null
@@ -1,173 +0,0 @@
----
-title: "Dal"
-sidebar_position: 5
-description: >
-  The Dal (Data Access Layer) is designed to decouple the hard dependency on `gorm` in v0.12
----
-
-## Summary
-
-The Dal (Data Access Layer) is designed to decouple the hard dependency on `gorm` in v0.12.  The advantages of introducing this isolation are:
-
- - Unit Test: Mocking an Interface is easier and more reliable than Patching a Pointer.
- - Clean Code: DB operations are more consistent than using `gorm` directly.
- - Replaceable: It would be easier to replace `gorm` in the future if needed.
-
-## The Dal Interface
-
-```go
-type Dal interface {
-	AutoMigrate(entity interface{}, clauses ...Clause) error
-	Exec(query string, params ...interface{}) error
-	RawCursor(query string, params ...interface{}) (*sql.Rows, error)
-	Cursor(clauses ...Clause) (*sql.Rows, error)
-	Fetch(cursor *sql.Rows, dst interface{}) error
-	All(dst interface{}, clauses ...Clause) error
-	First(dst interface{}, clauses ...Clause) error
-	Count(clauses ...Clause) (int64, error)
-	Pluck(column string, dest interface{}, clauses ...Clause) error
-	Create(entity interface{}, clauses ...Clause) error
-	Update(entity interface{}, clauses ...Clause) error
-	CreateOrUpdate(entity interface{}, clauses ...Clause) error
-	CreateIfNotExist(entity interface{}, clauses ...Clause) error
-	Delete(entity interface{}, clauses ...Clause) error
-	AllTables() ([]string, error)
-}
-```
-
-
-## How to use
-
-### Query
-```go
-// Get a database cursor
-user := &models.User{}
-cursor, err := db.Cursor(
-  dal.From(user),
-  dal.Where("department = ?", "R&D"),
-  dal.Orderby("id DESC"),
-)
-if err != nil {
-  return err
-}
-for cursor.Next() {
-  err = db.Fetch(cursor, user)  // fetch one record at a time
-  ...
-}
-
-// Get a database cursor by raw sql query
-cursor, err := db.RawCursor("SELECT * FROM users")
-
-// USE WITH CAUTION: loading a big table at once is slow and dangerous
-// Load all records from database at once. 
-users := make([]models.Users, 0)
-err := db.All(&users, dal.Where("department = ?", "R&D"))
-
-// Load a column as a scalar or slice
-var email string
-err := db.Pluck("email", &email, dal.Where("id = ?", 1))
-var emails []string
-err := db.Pluck("email", &emails)
-
-// Execute query
-err := db.Exec("UPDATE users SET department = ? WHERE department = ?", "Research & Development", "R&D")
-```
-
-### Insert
-```go
-err := db.Create(&models.User{
-  Email: "hello@example.com", // assuming this is the primary key
-  Name: "hello",
-  Department: "R&D",
-})
-```
-
-### Update
-```go
-err := db.Update(&models.User{
-  Email: "hello@example.com", // assuming this is the primary key
-  Name: "hello",
-  Department: "R&D",
-})
-```
-### Insert or Update
-```go
-err := db.CreateOrUpdate(&models.User{
-  Email: "hello@example.com",  // assuming this is the primary key
-  Name: "hello",
-  Department: "R&D",
-})
-```
-
-### Insert if a record (by primary key) doesn't exist
-```go
-err := db.CreateIfNotExist(&models.User{
-  Email: "hello@example.com",  // assuming this is the primary key
-  Name: "hello",
-  Department: "R&D",
-})
-```
-
-### Delete
-```go
-err := db.Delete(&models.User{
-  Email: "hello@example.com",  // assuming this is the primary key
-})
-```
-
-### DDL and others
-```go
-// Returns all table names
-allTables, err := db.AllTables()
-
-// AutoMigrate: create missing tables and add missing columns
-// Note: it won't delete any existing columns, nor does it update column definitions
-err := db.AutoMigrate(&models.User{})
-```
-
-## How to do Unit Test
-First, run the command `make mock` to generate the mocking stubs; the generated source files should appear in the `mocks` folder.
-```
-mocks
-├── ApiResourceHandler.go
-├── AsyncResponseHandler.go
-├── BasicRes.go
-├── CloseablePluginTask.go
-├── ConfigGetter.go
-├── Dal.go
-├── DataConvertHandler.go
-├── ExecContext.go
-├── InjectConfigGetter.go
-├── InjectLogger.go
-├── Iterator.go
-├── Logger.go
-├── Migratable.go
-├── PluginApi.go
-├── PluginBlueprintV100.go
-├── PluginInit.go
-├── PluginMeta.go
-├── PluginTask.go
-├── RateLimitedApiClient.go
-├── SubTaskContext.go
-├── SubTaskEntryPoint.go
-├── SubTask.go
-└── TaskContext.go
-```
-With these mocking stubs, you may start writing your test cases using `mocks.Dal`.
-```go
-import (
-    "testing"
-
-    "github.com/apache/incubator-devlake/mocks"
-    "github.com/stretchr/testify/mock"
-)
-
-func TestCreateUser(t *testing.T) {
-    mockDal := new(mocks.Dal)
-    mockDal.On("Create", mock.Anything, mock.Anything).Return(nil).Once()
-    userService := &services.UserService{
-        Dal: mockDal,
-    }
-    userService.Post(map[string]interface{}{
-        "email": "helle@example.com",
-        "name": "hello",
-        "department": "R&D",
-    })
-    mockDal.AssertExpectations(t)
-}
-```
-
diff --git a/versioned_docs/version-v0.11.0/DeveloperManuals/DeveloperSetup.md b/versioned_docs/version-v0.11.0/DeveloperManuals/DeveloperSetup.md
deleted file mode 100644
index 4b05c11..0000000
--- a/versioned_docs/version-v0.11.0/DeveloperManuals/DeveloperSetup.md
+++ /dev/null
@@ -1,131 +0,0 @@
----
-title: "Developer Setup"
-description: >
-  The steps to install DevLake in developer mode.
-sidebar_position: 1
----
-
-
-## Requirements
-
-- <a href="https://docs.docker.com/get-docker" target="_blank">Docker v19.03.10+</a>
-- <a href="https://golang.org/doc/install" target="_blank">Golang v1.17+</a>
-- Make
-  - Mac (Already installed)
-  - Windows: [Download](http://gnuwin32.sourceforge.net/packages/make.htm)
-  - Ubuntu: `sudo apt-get install build-essential libssl-dev`
-
-## How to setup dev environment
-1. Navigate to where you would like to install this project and clone the repository:
-
-   ```sh
-   git clone https://github.com/apache/incubator-devlake
-   cd incubator-devlake
-   ```
-
-2. Install dependencies for plugins:
-
-   - [RefDiff](../Plugins/RefDiff.md#development)
-
-3. Install Go packages
-
-    ```sh
-    go get
-    ```
-
-4. Copy the sample config file to new local file:
-
-    ```sh
-    cp .env.example .env
-    ```
-
-5. Update the following variables in the file `.env`:
-
-    * `DB_URL`: Replace `mysql:3306` with `127.0.0.1:3306`
-
-6. Start the MySQL and Grafana containers:
-
-    > Make sure the Docker daemon is running before this step.
-
-    ```sh
-    docker-compose up -d mysql grafana
-    ```
-
-7. Run lake and config UI in dev mode in two separate terminals:
-
-    ```sh
-    # install mockery
-    go install github.com/vektra/mockery/v2@latest
-    # generate mocking stubs
-    make mock
-    # run lake
-    make dev
-    # run config UI
-    make configure-dev
-    ```
-
-    Q: I got an error saying: `libgit2.so.1.3: cannot open shared object file: No such file or directory`
-
-    A: Make sure your program can find `libgit2.so.1.3`. `LD_LIBRARY_PATH` can be assigned like this if your `libgit2.so.1.3` is located at `/usr/local/lib`:
-
-    ```sh
-    export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib
-    ```
-
-8. Visit config UI at `localhost:4000` to configure data connections.
-    - Navigate to desired plugins pages on the Integrations page
-    - Enter the required information for the plugins you intend to use.
-    - Refer to the following for more details on how to configure each one:
-        - [Jira](../Plugins/Jira.md)
-        - [GitLab](../Plugins/GitLab.md)
-        - [Jenkins](../Plugins/Jenkins.md)
-        - [GitHub](../Plugins/GitHub.md): For users who'd like to collect GitHub data, we recommend reading our [GitHub data collection guide](../UserManuals/GitHubUserGuide.md) which covers the following steps in detail.
-    - Submit the form to update the values by clicking on the **Save Connection** button on each form page
-
-9. Visit `localhost:4000/pipelines/create` to RUN a Pipeline and trigger data collection.
-
-
-   Pipeline runs can be initiated from the new "Create Run" interface. Simply enable the **Data Connection Providers** you wish to run collection for, and specify the data you want to collect, for instance, **Project ID** for GitLab and **Repository Name** for GitHub.
-
-   Once a valid pipeline configuration has been created, press **Create Run** to start/run the pipeline.
-   After the pipeline starts, you will be automatically redirected to the **Pipeline Activity** screen to monitor collection activity.
-
-   **Pipelines** is accessible from the main menu of the config-ui for easy access.
-
-   - Manage All Pipelines: `http://localhost:4000/pipelines`
-   - Create Pipeline RUN: `http://localhost:4000/pipelines/create`
-   - Track Pipeline Activity: `http://localhost:4000/pipelines/activity/[RUN_ID]`
-
-   For advanced use cases and complex pipelines, please use the Raw JSON API to manually initiate a run using **cURL** or a graphical API tool such as **Postman**. `POST` the following request to the DevLake API Endpoint.
-
-    ```json
-    [
-        [
-            {
-                "plugin": "github",
-                "options": {
-                    "repo": "lake",
-                    "owner": "merico-dev"
-                }
-            }
-        ]
-    ]
-    ```
-
-   Please refer to [Pipeline Advanced Mode](../UserManuals/AdvancedMode.md) for in-depth explanation.
-
-
-10. Click *View Dashboards* button in the top left when done, or visit `localhost:3002` (username: `admin`, password: `admin`).
-
-   We use <a href="https://grafana.com/" target="_blank">Grafana</a> as a visualization tool to build charts for the <a href="https://github.com/merico-dev/lake/wiki/DataModel.Domain-layer-schema">data stored in our database</a>. Using SQL queries, we can add panels to build, save, and edit customized dashboards.
-
-   All the details on provisioning and customizing a dashboard can be found in the [Grafana Doc](../UserManuals/GrafanaUserGuide.md).
-
-11. (Optional) To run the tests:
-
-    ```sh
-    make test
-    ```
-
-12. For DB migrations, please refer to [Migration Doc](../DeveloperManuals/DBMigration.md).
-
diff --git a/versioned_docs/version-v0.11.0/DeveloperManuals/Notifications.md b/versioned_docs/version-v0.11.0/DeveloperManuals/Notifications.md
deleted file mode 100644
index 23456b4..0000000
--- a/versioned_docs/version-v0.11.0/DeveloperManuals/Notifications.md
+++ /dev/null
@@ -1,32 +0,0 @@
----
-title: "Notifications"
-description: >
-  Notifications
-sidebar_position: 4
----
-
-## Request
-Example request
-```
-POST /lake/notify?nouce=3-FDXxIootApWxEVtz&sign=424c2f6159bd9e9828924a53f9911059433dc14328a031e91f9802f062b495d5
-
-{"TaskID":39,"PluginName":"jenkins","CreatedAt":"2021-09-30T15:28:00.389+08:00","UpdatedAt":"2021-09-30T15:28:00.785+08:00"}
-```
-
-## Configuration
-If you want to use the notification feature, you should add two configuration keys to the `.env` file.
-```shell
-# .env
-# notification request url, e.g.: http://example.com/lake/notify
-NOTIFICATION_ENDPOINT=
-# secret is used to calculate signature
-NOTIFICATION_SECRET=
-```
-
-## Signature
-You should check the signature before accepting a notification request. We use the SHA-256 algorithm to calculate the checksum.
-```go
-// calculate checksum
-sum := sha256.Sum256([]byte(requestBody + NOTIFICATION_SECRET + nouce))
-return hex.EncodeToString(sum[:])
-```
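A self-contained sketch of computing and verifying that checksum; the function names are illustrative, and `nouce` mirrors the query parameter name shown in the request example:

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// notificationSign reproduces the checksum above: SHA-256 over the
// concatenation of the request body, the shared secret, and the nouce
// query parameter.
func notificationSign(requestBody, secret, nouce string) string {
	sum := sha256.Sum256([]byte(requestBody + secret + nouce))
	return hex.EncodeToString(sum[:])
}

// verify compares the sign query parameter against a locally computed
// checksum using a constant-time comparison.
func verify(requestBody, secret, nouce, sign string) bool {
	return hmac.Equal([]byte(notificationSign(requestBody, secret, nouce)), []byte(sign))
}

func main() {
	sig := notificationSign(`{"TaskID":39}`, "my-secret", "3-FDXxIootApWxEVtz")
	fmt.Println(verify(`{"TaskID":39}`, "my-secret", "3-FDXxIootApWxEVtz", sig)) // true
}
```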
diff --git a/versioned_docs/version-v0.11.0/DeveloperManuals/PluginImplementation.md b/versioned_docs/version-v0.11.0/DeveloperManuals/PluginImplementation.md
deleted file mode 100644
index e3457c9..0000000
--- a/versioned_docs/version-v0.11.0/DeveloperManuals/PluginImplementation.md
+++ /dev/null
@@ -1,292 +0,0 @@
----
-title: "Plugin Implementation"
-sidebar_position: 2
-description: >
-  Plugin Implementation
----
-
-## How to Implement a DevLake plugin?
-
-If your favorite DevOps tool is not yet supported by DevLake, don't worry. It's not difficult to implement a DevLake plugin. In this post, we'll go through the basics of DevLake plugins and build an example plugin from scratch together.
-
-## What is a plugin?
-
-A DevLake plugin is a shared library built with Go's `plugin` package that hooks up to DevLake core at run-time.
-
-A plugin may extend DevLake's capability in three ways:
-
-1. Integrating with new data sources
-2. Transforming/enriching existing data
-3. Exporting DevLake data to other data systems
-
-
-## How do plugins work?
-
-A plugin mainly consists of a collection of subtasks that can be executed by DevLake core. For data source plugins, a subtask may collect a single entity from the data source (e.g., issues from Jira). Besides subtasks, there are hooks that a plugin can implement to customize its initialization, migration, and more. See below for a list of the most important interfaces:
-
-1. [PluginMeta](https://github.com/apache/incubator-devlake/blob/main/plugins/core/plugin_meta.go) contains the minimal interface that a plugin should implement, with only two functions:
-   - `Description()` returns the description of a plugin
-   - `RootPkgPath()` returns the root package path of a plugin
-2. [PluginInit](https://github.com/apache/incubator-devlake/blob/main/plugins/core/plugin_init.go) allows a plugin to customize its initialization
-3. [PluginTask](https://github.com/apache/incubator-devlake/blob/main/plugins/core/plugin_task.go) enables a plugin to prepare data prior to subtask execution
-4. [PluginApi](https://github.com/apache/incubator-devlake/blob/main/plugins/core/plugin_api.go) lets a plugin expose self-defined APIs
-5. [Migratable](https://github.com/apache/incubator-devlake/blob/main/plugins/core/plugin_db_migration.go) is where a plugin manages its database migrations 
-
-The diagram below shows the control flow of executing a plugin:
-
-```mermaid
-flowchart TD;
-    subgraph S4[Step4 sub-task extractor running process];
-    direction LR;
-    D4[DevLake];
-    D4 -- Step4.1 create a new\n ApiExtractor\n and execute it --> E["ExtractXXXMeta.\nEntryPoint"];
-    E <-- Step4.2 read from\n raw table --> RawDataSubTaskArgs.\nTable;
-    E -- "Step4.3 call with RawData" --> ApiExtractor.Extract;
-    ApiExtractor.Extract -- "decode and return gorm models" --> E
-    end
-    subgraph S3[Step3 sub-task collector running process]
-    direction LR
-    D3[DevLake]
-    D3 -- Step3.1 create a new\n ApiCollector\n and execute it --> C["CollectXXXMeta.\nEntryPoint"];
-    C <-- Step3.2 create\n raw table --> RawDataSubTaskArgs.\nRAW_BBB_TABLE;
-    C <-- Step3.3 build query\n before sending requests --> ApiCollectorArgs.\nQuery/UrlTemplate;
-    C <-. Step3.4 send requests by ApiClient \n and return HTTP response.-> A1["HTTP APIs"];
-    C <-- "Step3.5 call and \nreturn decoded data \nfrom HTTP response" --> ResponseParser;
-    end
-    subgraph S2[Step2 DevLake register custom plugin]
-    direction LR
-    D2[DevLake]
-    D2 <-- "Step2.1 function `Init` \nneeds to do init jobs" --> plugin.Init;
-    D2 <-- "Step2.2 (Optional) call \nand return migration scripts" --> plugin.MigrationScripts;
-    D2 <-- "Step2.3 (Optional) call \nand return taskCtx" --> plugin.PrepareTaskData;
-    D2 <-- "Step2.4 call and \nreturn subTasks for executing" --> plugin.SubTaskContext;
-    end
-    subgraph S1[Step1 Run DevLake]
-    direction LR
-    main -- Transfer of control \nby `runner.DirectRun` --> D1[DevLake];
-    end
-    S1-->S2-->S3-->S4
-```
-There's a lot of information in the diagram, but we don't expect you to digest it right away; simply use it as a reference when you go through the example below.
-
-## A step-by-step guide towards your first plugin
-
-In this guide, we'll walk through how to create a data source plugin from scratch. 
-
-The example in this tutorial comes from DevLake's own need to manage [CLAs](https://en.wikipedia.org/wiki/Contributor_License_Agreement). Whenever DevLake receives a new PR on GitHub, we need to check if the author has signed a CLA by referencing `https://people.apache.org/public/icla-info.json`. This guide will demonstrate how to collect the ICLA info from the Apache API, cache the raw response, and extract the raw data into a relational table ready to be queried.
-
-### Step 1: Bootstrap the new plugin
-
-**Note:** Please make sure you have DevLake up and running before proceeding.
-
-> More info about plugins:
-> Generally, a plugin needs these folders: `api`, `models` and `tasks`
-> `api` interacts with `config-ui` for testing/getting/saving connections to the data source
->       - connection [example](https://github.com/apache/incubator-devlake/blob/main/plugins/gitlab/api/connection.go)
->       - connection model [example](https://github.com/apache/incubator-devlake/blob/main/plugins/gitlab/models/connection.go)
-> `models` stores all `data entities` and `data migration scripts`. 
->       - entity 
->       - data migrations [template](https://github.com/apache/incubator-devlake/tree/main/generator/template/migrationscripts)
-> `tasks` contains all of our `sub tasks` for a plugin
->       - task data [template](https://github.com/apache/incubator-devlake/blob/main/generator/template/plugin/tasks/task_data.go-template)
->       - api client [template](https://github.com/apache/incubator-devlake/blob/main/generator/template/plugin/tasks/task_data_with_api_client.go-template)
-
-Don't worry if you cannot figure out what these concepts mean immediately. We'll explain them one by one later. 
-
-DevLake provides a generator to create a plugin conveniently. Let's scaffold our new plugin by running `go run generator/main.go create-plugin icla`, which would ask for `with_api_client` and `Endpoint`.
-
-* `with_api_client` determines whether the plugin needs an API client to request HTTP APIs.
-* `Endpoint` is the base URL the plugin will request; in our case, it should be `https://people.apache.org/`.
-
-![create plugin](https://i.imgur.com/itzlFg7.png)
-
-Now we have three files in our plugin. `api_client.go` and `task_data.go` are in the subfolder `tasks/`.
-![plugin files](https://i.imgur.com/zon5waf.png)
-
-Try running this plugin via the function `main` in `plugin_main.go`. You should see a result like this:
-```
-$go run plugins/icla/plugin_main.go
-[2022-06-02 18:07:30]  INFO failed to create dir logs: mkdir logs: file exists
-press `c` to send cancel signal
-[2022-06-02 18:07:30]  INFO  [icla] start plugin
-invalid ICLA_TOKEN, but ignore this error now
-[2022-06-02 18:07:30]  INFO  [icla] scheduler for api https://people.apache.org/ worker: 25, request: 18000, duration: 1h0m0s
-[2022-06-02 18:07:30]  INFO  [icla] total step: 0
-```
-How exciting. It works! The plugin defined and initialized in `plugin_main.go` uses some options from `task_data.go`. Together they make up the most straightforward plugin in Apache DevLake, and `api_client.go` will be used in the next step to request HTTP APIs.
-
-### Step 2: Create a sub-task for data collection
-Before we start, it is helpful to know how a collection task is executed:
-1. First, Apache DevLake calls `plugin_main.PrepareTaskData()` to prepare the data needed before any sub-tasks run. We need to create an API client here.
-2. Then Apache DevLake calls the sub-tasks returned by `plugin_main.SubTaskMetas()`. A sub-task is an independent unit of work, such as requesting an API or processing data.
-
-> Each sub-task must be defined as a SubTaskMeta, and implement SubTaskEntryPoint of SubTaskMeta. SubTaskEntryPoint is defined as 
-> ```go
-> type SubTaskEntryPoint func(c SubTaskContext) error
-> ```
-> More info at: https://devlake.apache.org/blog/how-apache-devlake-runs/
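To make the shape of a sub-task concrete, here is a framework-free sketch. The types below are simplified local stand-ins for DevLake's `core` interfaces, not the real definitions, and the committer sub-task here is only a placeholder for what we will build in the next steps.

```go
package main

import "fmt"

// Simplified local stand-ins for DevLake's core types (the real ones live in
// plugins/core); this sketch only illustrates the shape of a sub-task.
type SubTaskContext interface {
	GetName() string
}

type SubTaskEntryPoint func(c SubTaskContext) error

type SubTaskMeta struct {
	Name             string
	EntryPoint       SubTaskEntryPoint
	EnabledByDefault bool
	Description      string
}

type taskCtx struct{ name string }

func (t taskCtx) GetName() string { return t.name }

// CollectCommitter is a placeholder entry point; a real one would run an ApiCollector.
func CollectCommitter(c SubTaskContext) error {
	fmt.Println("running sub-task:", c.GetName())
	return nil
}

// A sub-task is registered by declaring a SubTaskMeta like this one.
var CollectCommitterMeta = SubTaskMeta{
	Name:             "CollectCommitter",
	EntryPoint:       CollectCommitter,
	EnabledByDefault: true,
	Description:      "collect ICLA committers",
}

func main() {
	// The framework would iterate over SubTaskMetas and invoke each entry point.
	_ = CollectCommitterMeta.EntryPoint(taskCtx{name: CollectCommitterMeta.Name})
}
```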
-
-#### Step 2.1 Create a sub-task(Collector) for data collection
-
-Let's run `go run generator/main.go create-collector icla committer` and confirm it. This sub-task is automatically activated by being registered in `plugin_main.go/SubTaskMetas`.
-
-![](https://i.imgur.com/tkDuofi.png)
-
-> - Collector will collect data from HTTP or other data sources, and save the data into the raw layer. 
-> - Inside the func `SubTaskEntryPoint` of `Collector`, we use `helper.NewApiCollector` to create an object of [ApiCollector](https://github.com/apache/incubator-devlake/blob/main/generator/template/plugin/tasks/api_collector.go-template), then call `execute()` to do the job. 
-
-Notice that `data.ApiClient` is initialized in `plugin_main.go/PrepareTaskData.ApiClient`. `PrepareTaskData` creates a new `ApiClient`, the tool Apache DevLake recommends for requesting data from HTTP APIs. It supports valuable features such as rate limiting, proxies, and retries. Of course, you may use the standard `http` library instead, but it would be more tedious.
-
-Let's move forward to use it.
-
-1. To collect data from `https://people.apache.org/public/icla-info.json`,
-we have filled `https://people.apache.org/` into `tasks/api_client.go/ENDPOINT` in Step 1.
-
-![](https://i.imgur.com/q8Zltnl.png)
-
-2. Then fill `public/icla-info.json` into `UrlTemplate`, delete the unnecessary iterator, and add `println("receive data:", res)` in `ResponseParser` to check whether the collection succeeds.
-
-![](https://i.imgur.com/ToLMclH.png)
-
-Ok, now the collector sub-task has been added to the plugin, and we can kick it off by running `main` again. If everything goes smoothly, the output should look like this:
-```bash
-[2022-06-06 12:24:52]  INFO  [icla] start plugin
-invalid ICLA_TOKEN, but ignore this error now
-[2022-06-06 12:24:52]  INFO  [icla] scheduler for api https://people.apache.org/ worker: 25, request: 18000, duration: 1h0m0s
-[2022-06-06 12:24:52]  INFO  [icla] total step: 1
-[2022-06-06 12:24:52]  INFO  [icla] executing subtask CollectCommitter
-[2022-06-06 12:24:52]  INFO  [icla] [CollectCommitter] start api collection
-receive data: 0x140005763f0
-[2022-06-06 12:24:55]  INFO  [icla] [CollectCommitter] finished records: 1
-[2022-06-06 12:24:55]  INFO  [icla] [CollectCommitter] end api collection
-[2022-06-06 12:24:55]  INFO  [icla] finished step: 1 / 1
-```
-
-Great! Now we can see data pulled from the server without any problem. The last step is to decode the response body in `ResponseParser` and return it to the framework, so it can be stored in the database.
-```go
-ResponseParser: func(res *http.Response) ([]json.RawMessage, error) {
-    body := &struct {
-        LastUpdated string          `json:"last_updated"`
-        Committers  json.RawMessage `json:"committers"`
-    }{}
-    err := helper.UnmarshalResponse(res, body)
-    if err != nil {
-        return nil, err
-    }
-    println("receive data:", len(body.Committers))
-    return []json.RawMessage{body.Committers}, nil
-},
-
-```
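Extracted from the framework, the parsing logic above boils down to this runnable sketch. The function name `parseBody` and the sample payload are ours; in the real plugin, `helper.UnmarshalResponse` does the decoding from the HTTP response.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// parseBody mirrors the ResponseParser above: keep "committers" as raw JSON
// and defer full decoding to the extractor sub-task.
func parseBody(raw []byte) (json.RawMessage, error) {
	body := &struct {
		LastUpdated string          `json:"last_updated"`
		Committers  json.RawMessage `json:"committers"`
	}{}
	if err := json.Unmarshal(raw, body); err != nil {
		return nil, err
	}
	return body.Committers, nil
}

func main() {
	sample := []byte(`{"last_updated":"2022-06-06","committers":{"alice":"Alice A."}}`)
	committers, err := parseBody(sample)
	if err != nil {
		panic(err)
	}
	// The byte length of the raw committers value is what the println above reports.
	fmt.Println("receive data:", len(committers))
}
```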
-Ok, run the function `main` once again; the output should look like this, and some records should show up in the table `_raw_icla_committer`.
-```bash
-……
-receive data: 272956 /* <- the number means 272956 models received */
-[2022-06-06 13:46:57]  INFO  [icla] [CollectCommitter] finished records: 1
-[2022-06-06 13:46:57]  INFO  [icla] [CollectCommitter] end api collection
-[2022-06-06 13:46:57]  INFO  [icla] finished step: 1 / 1
-```
-
-![](https://i.imgur.com/aVYNMRr.png)
-
-#### Step 2.2 Create a sub-task(Extractor) to extract data from the raw layer
-
-> - Extractor will extract data from raw layer and save it into tool db table.
-> - Except for some pre-processing, the main flow is similar to the collector.
-
-We have already collected data from HTTP API and saved them into the DB table `_raw_XXXX`. In this step, we will extract the names of committers from the raw data. As you may infer from the name, raw tables are temporary and not easy to use directly.
-
-Apache DevLake recommends saving data with [gorm](https://gorm.io/docs/index.html), so we will create a gorm model and add it to `plugin_main.go/AutoSchemas.Up()`.
-
-plugins/icla/models/committer.go
-```go
-package models
-
-import (
-	"github.com/apache/incubator-devlake/models/common"
-)
-
-type IclaCommitter struct {
-	UserName     string `gorm:"primaryKey;type:varchar(255)"`
-	Name         string `gorm:"primaryKey;type:varchar(255)"`
-	common.NoPKModel
-}
-
-func (IclaCommitter) TableName() string {
-	return "_tool_icla_committer"
-}
-```
-
-plugins/icla/plugin_main.go
-![](https://i.imgur.com/4f0zJty.png)
-
-
-Ok, run the plugin, and the table `_tool_icla_committer` will be created automatically, just like the snapshot below:
-![](https://i.imgur.com/7Z324IX.png)
-
-Next, let's run `go run generator/main.go create-extractor icla committer` and type in what the command prompt asks for.
-
-![](https://i.imgur.com/UyDP9Um.png)
-
-Let's look at the function `Extract` in the newly created `committer_extractor.go`; some code needs to be written here. `resData.Data` is the raw data, so we can decode it from JSON and create new `IclaCommitter` records to save.
-```go
-Extract: func(resData *helper.RawData) ([]interface{}, error) {
-    names := &map[string]string{}
-    err := json.Unmarshal(resData.Data, names)
-    if err != nil {
-        return nil, err
-    }
-    extractedModels := make([]interface{}, 0)
-    for userName, name := range *names {
-        extractedModels = append(extractedModels, &models.IclaCommitter{
-            UserName: userName,
-            Name:     name,
-        })
-    }
-    return extractedModels, nil
-},
-```
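The same extraction logic can be exercised outside the framework. In this sketch the model is a simplified local stand-in (the real `models.IclaCommitter` embeds `common.NoPKModel`), and `extractCommitters` is our name for the body of the `Extract` callback above.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Simplified local stand-in for models.IclaCommitter.
type IclaCommitter struct {
	UserName string
	Name     string
}

// extractCommitters mirrors the Extract function above, minus the framework:
// the raw data is a JSON object mapping user name -> full name.
func extractCommitters(raw []byte) ([]IclaCommitter, error) {
	names := map[string]string{}
	if err := json.Unmarshal(raw, &names); err != nil {
		return nil, err
	}
	committers := make([]IclaCommitter, 0, len(names))
	for userName, name := range names {
		committers = append(committers, IclaCommitter{UserName: userName, Name: name})
	}
	return committers, nil
}

func main() {
	committers, err := extractCommitters([]byte(`{"alice":"Alice A.","bob":"Bob B."}`))
	if err != nil {
		panic(err)
	}
	fmt.Println(len(committers)) // 2
}
```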
-
-Ok, run it, and we get:
-```
-[2022-06-06 15:39:40]  INFO  [icla] start plugin
-invalid ICLA_TOKEN, but ignore this error now
-[2022-06-06 15:39:40]  INFO  [icla] scheduler for api https://people.apache.org/ worker: 25, request: 18000, duration: 1h0m0s
-[2022-06-06 15:39:40]  INFO  [icla] total step: 2
-[2022-06-06 15:39:40]  INFO  [icla] executing subtask CollectCommitter
-[2022-06-06 15:39:40]  INFO  [icla] [CollectCommitter] start api collection
-receive data: 272956
-[2022-06-06 15:39:44]  INFO  [icla] [CollectCommitter] finished records: 1
-[2022-06-06 15:39:44]  INFO  [icla] [CollectCommitter] end api collection
-[2022-06-06 15:39:44]  INFO  [icla] finished step: 1 / 2
-[2022-06-06 15:39:44]  INFO  [icla] executing subtask ExtractCommitter
-[2022-06-06 15:39:46]  INFO  [icla] [ExtractCommitter] finished records: 1
-[2022-06-06 15:39:46]  INFO  [icla] finished step: 2 / 2
-```
-Now the committer data has been saved in `_tool_icla_committer`.
-![](https://i.imgur.com/6svX0N2.png)
-
-#### Step 2.3 Convertor
-
-Note: there are two paths here, depending on whether you open source the plugin or only use it yourself. This step is optional, but we encourage it because convertors and the domain layer significantly help with building dashboards. More info about the domain layer at: https://devlake.apache.org/docs/DataModels/DevLakeDomainLayerSchema/
-
-> - Convertor will convert data from the tool layer and save it into the domain layer.
-> - We use `helper.NewDataConverter` to create a converter object, then call `execute()`.
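As a rough illustration of what a convertor does, here is a framework-free sketch mapping a tool-layer record to a hypothetical domain-layer record. The type `DomainUser`, the `convert` function, and the id scheme are all our assumptions; in the real flow, `helper.NewDataConverter` wraps this kind of per-record mapping.

```go
package main

import "fmt"

// Tool-layer record (simplified stand-in for models.IclaCommitter).
type IclaCommitter struct{ UserName, Name string }

// Hypothetical, simplified domain-layer record.
type DomainUser struct {
	Id   string
	Name string
}

// convert maps one tool-layer record to the domain layer. Prefixing the id
// with the plugin name keeps ids unique across plugins.
func convert(tool IclaCommitter) DomainUser {
	return DomainUser{Id: "icla:" + tool.UserName, Name: tool.Name}
}

func main() {
	user := convert(IclaCommitter{UserName: "alice", Name: "Alice A."})
	fmt.Println(user.Id) // icla:alice
}
```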
-
-#### Step 2.4 Let's try it
-Sometimes an open API is protected by a token or another auth type, and we need to log in to obtain a token before visiting it. For example, only after logging in to `private@apache.com` could we gather the data about contributors signing the ICLA. Here we briefly introduce how to authorize DevLake to collect data.
-
-Let's look at `api_client.go`. `NewIclaApiClient` loads the config `ICLA_TOKEN` from `.env`, so we can add `ICLA_TOKEN=XXXXXX` to `.env` and use it in `apiClient.SetHeaders()` to mock the login status. Code as below:
-![](https://i.imgur.com/dPxooAx.png)
-
-Alternatively, we could use a `username/password` login to obtain a token. Adjust according to the actual situation.
-
-Look for more related details at https://github.com/apache/incubator-devlake
-
-#### Final step: Submit the code as open source code
-Good ideas are welcome, and we encourage contributions! Learn about migration scripts and domain layers to write standardized, platform-neutral code. More info at https://devlake.apache.org/docs/DataModels/DevLakeDomainLayerSchema, or contact us for help.
-
-
-## Done!
-
-Congratulations! The first plugin has been created! 🎖 
\ No newline at end of file
diff --git a/versioned_docs/version-v0.11.0/DeveloperManuals/_category_.json b/versioned_docs/version-v0.11.0/DeveloperManuals/_category_.json
deleted file mode 100644
index fe67a68..0000000
--- a/versioned_docs/version-v0.11.0/DeveloperManuals/_category_.json
+++ /dev/null
@@ -1,4 +0,0 @@
-{
-  "label": "Developer Manuals",
-  "position": 4
-}
diff --git a/versioned_docs/version-v0.11.0/EngineeringMetrics.md b/versioned_docs/version-v0.11.0/EngineeringMetrics.md
deleted file mode 100644
index 2d9a42a..0000000
--- a/versioned_docs/version-v0.11.0/EngineeringMetrics.md
+++ /dev/null
@@ -1,195 +0,0 @@
----
-sidebar_position: 06
-title: "Engineering Metrics"
-linkTitle: "Engineering Metrics"
-tags: []
-description: >
-  The definition, values and data required for the 20+ engineering metrics supported by DevLake.
----
-
-<table>
-    <tr>
-        <th><b>Category</b></th>
-        <th><b>Metric Name</b></th>
-        <th><b>Definition</b></th>
-        <th><b>Data Required</b></th>
-        <th style={{width:'70%'}}><b>Use Scenarios and Recommended Practices</b></th>
-        <th><b>Value&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</b></th>
-    </tr>
-    <tr>
-        <td rowspan="10">Delivery Velocity</td>
-        <td>Requirement Count</td>
-        <td>Number of issues in type "Requirement"</td>
-        <td>Issue/Task Management entities: <a href="https://github.com/merico-dev/lake/blob/main/plugins/jira/README.md">Jira issues</a>, <a href="https://github.com/merico-dev/lake/blob/main/plugins/github/README.md">GitHub issues</a>, etc</td>
-        <td rowspan="2">
-1. Analyze the number of requirements and delivery rate of different time cycles to find the stability and trend of the development process.
-<br/>2. Analyze and compare the number of requirements delivered and delivery rate of each project/team, and compare the scale of requirements of different projects.
-<br/>3. Based on historical data, establish a baseline of the delivery capacity of a single iteration (optimistic, probable and pessimistic values) to provide a reference for iteration estimation.
-<br/>4. Drill down to analyze the number and percentage of requirements in different phases of SDLC. Analyze rationality and identify the requirements stuck in the backlog.</td>
-        <td rowspan="2">1. Based on historical data, establish a baseline of the delivery capacity of a single iteration to improve the organization and planning of R&D resources.
-<br/>2. Evaluate whether the delivery capacity matches the business phase and demand scale. Identify key bottlenecks and reasonably allocate resources.</td>
-    </tr>
-    <tr>
-        <td>Requirement Delivery Rate</td>
-        <td>Ratio of delivered requirements to all requirements</td>
-        <td>Issue/Task Management entities: <a href="https://github.com/merico-dev/lake/blob/main/plugins/jira/README.md">Jira issues</a>, <a href="https://github.com/merico-dev/lake/blob/main/plugins/github/README.md">GitHub issues</a>, etc</td>
-    </tr>
-    <tr>
-        <td>Requirement Lead Time</td>
-        <td>Lead time of issues with type "Requirement"</td>
-        <td>Issue/Task Management entities: <a href="https://github.com/merico-dev/lake/blob/main/plugins/jira/README.md">Jira issues</a>, <a href="https://github.com/merico-dev/lake/blob/main/plugins/github/README.md">GitHub issues</a>, etc</td>
-        <td>
-1. Analyze the trend of requirement lead time to observe if it has improved over time.
-<br/>2. Analyze and compare the requirement lead time of each project/team to identify key projects with abnormal lead time.
-<br/>3. Drill down to analyze a requirement's staying time in different phases of SDLC. Analyze the bottleneck of delivery velocity and improve the workflow.</td>
-        <td>1. Analyze key projects and critical points, identify good/to-be-improved practices that affect requirement lead time, and reduce the risk of delays
-<br/>2. Focus on the end-to-end velocity of value delivery process; coordinate different parts of R&D to avoid efficiency shafts; make targeted improvements to bottlenecks.</td>
-    </tr>
-    <tr>
-        <td>Requirement Granularity</td>
-        <td>Number of story points associated with an issue</td>
-        <td>Issue/Task Management entities: <a href="https://github.com/merico-dev/lake/blob/main/plugins/jira/README.md">Jira issues</a>, <a href="https://github.com/merico-dev/lake/blob/main/plugins/github/README.md">GitHub issues</a>, etc</td>
-        <td>
-1. Analyze the story points/requirement lead time of requirements to evaluate whether the ticket size, ie. requirement complexity is optimal.
-<br/>2. Compare the estimated requirement granularity with the actual situation and evaluate whether the difference is reasonable by combining more microscopic workload metrics (e.g. lines of code/code equivalents)</td>
-        <td>1. Promote product teams to split requirements carefully, improve requirements quality, help developers understand requirements clearly, deliver efficiently and with high quality, and improve the project management capability of the team.
-<br/>2. Establish a data-supported workload estimation model to help R&D teams calibrate their estimation methods and more accurately assess the granularity of requirements, which is useful to achieve better issue planning in project management.</td>
-    </tr>
-    <tr>
-        <td>Commit Count</td>
-        <td>Number of Commits</td>
-        <td>Source Code Management entities: <a href="https://github.com/merico-dev/lake/blob/main/plugins/gitextractor/README.md">Git</a>/<a href="https://github.com/merico-dev/lake/blob/main/plugins/github/README.md">GitHub</a>/<a href="https://github.com/merico-dev/lake/blob/main/plugins/gitlab/README.md">GitLab</a> commits</td>
-        <td>
-1. Identify the main reasons for the unusual number of commits and the possible impact on the number of commits through comparison
-<br/>2. Evaluate whether the number of commits is reasonable in conjunction with more microscopic workload metrics (e.g. lines of code/code equivalents)</td>
-        <td>1. Identify potential bottlenecks that may affect output
-<br/>2. Encourage R&D practices of small step submissions and develop excellent coding habits</td>
-    </tr>
-    <tr>
-        <td>Added Lines of Code</td>
-        <td>Accumulated number of added lines of code</td>
-        <td>Source Code Management entities: <a href="https://github.com/merico-dev/lake/blob/main/plugins/gitextractor/README.md">Git</a>/<a href="https://github.com/merico-dev/lake/blob/main/plugins/github/README.md">GitHub</a>/<a href="https://github.com/merico-dev/lake/blob/main/plugins/gitlab/README.md">GitLab</a> commits</td>
-        <td rowspan="2">
-1. From the project/team dimension, observe the accumulated change in Added lines to assess the team activity and code growth rate
-<br/>2. From version cycle dimension, observe the active time distribution of code changes, and evaluate the effectiveness of project development model.
-<br/>3. From the member dimension, observe the trend and stability of code output of each member, and identify the key points that affect code output by comparison.</td>
-        <td rowspan="2">1. Identify potential bottlenecks that may affect the output
-<br/>2. Encourage the team to implement a development model that matches the business requirements; develop excellent coding habits</td>
-    </tr>
-    <tr>
-        <td>Deleted Lines of Code</td>
-        <td>Accumulated number of deleted lines of code</td>
-        <td>Source Code Management entities: <a href="https://github.com/merico-dev/lake/blob/main/plugins/gitextractor/README.md">Git</a>/<a href="https://github.com/merico-dev/lake/blob/main/plugins/github/README.md">GitHub</a>/<a href="https://github.com/merico-dev/lake/blob/main/plugins/gitlab/README.md">GitLab</a> commits</td>
-    </tr>
-    <tr>
-        <td>Pull Request Review Time</td>
-        <td>Time from Pull/Merge created time until merged</td>
-        <td>Source Code Management entities: <a href="https://github.com/merico-dev/lake/blob/main/plugins/github/README.md">GitHub</a> PRs, <a href="https://github.com/merico-dev/lake/blob/main/plugins/gitlab/README.md">GitLab</a> MRs, etc</td>
-        <td>
-1. Observe the mean and distribution of code review time from the project/team/individual dimension to assess the rationality of the review time</td>
-        <td>1. Take inventory of project/team code review resources to avoid lack of resources and backlog of review sessions, resulting in long waiting time
-<br/>2. Encourage teams to implement an efficient and responsive code review mechanism</td>
-    </tr>
-    <tr>
-        <td>Bug Age</td>
-        <td>Lead time of issues in type "Bug"</td>
-        <td>Issue/Task Management entities: <a href="https://github.com/merico-dev/lake/blob/main/plugins/jira/README.md">Jira issues</a>, <a href="https://github.com/merico-dev/lake/blob/main/plugins/github/README.md">GitHub issues</a>, etc</td>
-        <td rowspan="2">
-1. Observe the trend of bug age and locate the key reasons.<br/>
-2. According to the severity level, type (business, functional classification), affected module, source of bugs, count and observe the length of bug and incident age.</td>
-        <td rowspan="2">1. Help the team to establish an effective hierarchical response mechanism for bugs and incidents. Focus on the resolution of important problems in the backlog.<br/>
-2. Improve team's and individual's bug/incident fixing efficiency. Identify good/to-be-improved practices that affect bug age or incident age</td>
-    </tr>
-    <tr>
-        <td>Incident Age</td>
-        <td>Lead time of issues in type "Incident"</td>
-        <td>Issue/Task Management entities: <a href="https://github.com/merico-dev/lake/blob/main/plugins/jira/README.md">Jira issues</a>, <a href="https://github.com/merico-dev/lake/blob/main/plugins/github/README.md">GitHub issues</a>, etc</td>
-    </tr>
-    <tr>
-        <td rowspan="8">Delivery Quality</td>
-        <td>Pull Request Count</td>
-        <td>Number of Pull/Merge Requests</td>
-        <td>Source Code Management entities: <a href="https://github.com/merico-dev/lake/blob/main/plugins/github/README.md">GitHub</a> PRs, <a href="https://github.com/merico-dev/lake/blob/main/plugins/gitlab/README.md">GitLab</a> MRs, etc</td>
-        <td rowspan="3">
-1. From the developer dimension, we evaluate the code quality of developers by combining the task complexity with the metrics related to the number of review passes and review rounds.<br/>
-2. From the reviewer dimension, we observe the reviewer's review style by taking into account the task complexity, the number of passes and the number of review rounds.<br/>
-3. From the project/team dimension, we combine the project phase and team task complexity to aggregate the metrics related to the number of review passes and review rounds, and identify the modules with abnormal code review process and possible quality risks.</td>
-        <td rowspan="3">1. Code review metrics are process indicators to provide quick feedback on developers' code quality<br/>
-2. Promote the team to establish a unified coding specification and standardize the code review criteria<br/>
-3. Identify modules with low-quality risks in advance, optimize practices, and precipitate into reusable knowledge and tools to avoid technical debt accumulation</td>
-    </tr>
-    <tr>
-        <td>Pull Request Pass Rate</td>
-        <td>Ratio of merged Pull/Merge Requests to all Pull/Merge Requests</td>
-        <td>Source Code Management entities: <a href="https://github.com/merico-dev/lake/blob/main/plugins/github/README.md">GitHub</a> PRs, <a href="https://github.com/merico-dev/lake/blob/main/plugins/gitlab/README.md">GitLab</a> MRs, etc</td>
-    </tr>
-    <tr>
-        <td>Pull Request Review Rounds</td>
-        <td>Number of cycles of commits followed by comments/final merge</td>
-        <td>Source Code Management entities: <a href="https://github.com/merico-dev/lake/blob/main/plugins/github/README.md">GitHub</a> PRs, <a href="https://github.com/merico-dev/lake/blob/main/plugins/gitlab/README.md">GitLab</a> MRs, etc</td>
-    </tr>
-    <tr>
-        <td>Pull Request Review Count</td>
-        <td>Number of Pull/Merge Reviewers</td>
-        <td>Source Code Management entities: <a href="https://github.com/merico-dev/lake/blob/main/plugins/github/README.md">GitHub</a> PRs, <a href="https://github.com/merico-dev/lake/blob/main/plugins/gitlab/README.md">GitLab</a> MRs, etc</td>
-        <td>1. As a secondary indicator, assess the cost of labor invested in the code review process</td>
-        <td>1. Take inventory of project/team code review resources to avoid long waits for review sessions due to insufficient resource input</td>
-    </tr>
-    <tr>
-        <td>Bug Count</td>
-        <td>Number of bugs found during testing</td>
-        <td>Issue/Task Management entities: <a href="https://github.com/merico-dev/lake/blob/main/plugins/jira/README.md">Jira issues</a>, <a href="https://github.com/merico-dev/lake/blob/main/plugins/github/README.md">GitHub issues</a>, etc</td>
-        <td rowspan="4">
-1. From the project or team dimension, observe the statistics on the total number of defects, the distribution of the number of defects in each severity level/type/owner, the cumulative trend of defects, and the change trend of the defect rate in thousands of lines, etc.<br/>
-2. From version cycle dimension, observe the statistics on the cumulative trend of the number of defects/defect rate, which can be used to determine whether the growth rate of defects is slowing down, showing a flat convergence trend, and is an important reference for judging the stability of software version quality<br/>
-3. From the time dimension, analyze the trend of the number of test defects, defect rate to locate the key items/key points<br/>
-4. Evaluate whether the software quality and test plan are reasonable by referring to CMMI standard values</td>
-        <td rowspan="4">1. Defect drill-down analysis to inform the development of design and code review strategies and to improve the internal QA process<br/>
-2. Assist teams to locate projects/modules with higher defect severity and density, and clean up technical debts<br/>
-3. Analyze critical points, identify good/to-be-improved practices that affect defect count or defect rate, to reduce the amount of future defects</td>
-    </tr>
-    <tr>
-        <td>Incident Count</td>
-        <td>Number of Incidents found after shipping</td>
-        <td>Source Code Management entities: <a href="https://github.com/merico-dev/lake/blob/main/plugins/github/README.md">GitHub</a> PRs, <a href="https://github.com/merico-dev/lake/blob/main/plugins/gitlab/README.md">GitLab</a> MRs, etc</td>
-    </tr>
-    <tr>
-        <td>Bugs Count per 1k Lines of Code</td>
-        <td>Amount of bugs per 1,000 lines of code</td>
-        <td>Source Code Management entities: <a href="https://github.com/merico-dev/lake/blob/main/plugins/github/README.md">GitHub</a> PRs, <a href="https://github.com/merico-dev/lake/blob/main/plugins/gitlab/README.md">GitLab</a> MRs, etc</td>
-    </tr>
-    <tr>
-        <td>Incidents Count per 1k Lines of Code</td>
-        <td>Amount of incidents per 1,000 lines of code</td>
-        <td>Source Code Management entities: <a href="https://github.com/merico-dev/lake/blob/main/plugins/github/README.md">GitHub</a> PRs, <a href="https://github.com/merico-dev/lake/blob/main/plugins/gitlab/README.md">GitLab</a> MRs, etc</td>
-    </tr>
-    <tr>
-        <td>Delivery Cost</td>
-        <td>Commit Author Count</td>
-        <td>Number of Contributors who have committed code</td>
-        <td>Source Code Management entities: <a href="https://github.com/merico-dev/lake/blob/main/plugins/gitextractor/README.md">Git</a>/<a href="https://github.com/merico-dev/lake/blob/main/plugins/github/README.md">GitHub</a>/<a href="https://github.com/merico-dev/lake/blob/main/plugins/gitlab/README.md">GitLab</a> commits</td>
-        <td>1. As a secondary indicator, this helps assess the labor cost of participating in coding</td>
-        <td>1. Take inventory of project/team R&D resource inputs, assess input-output ratio, and rationalize resource deployment</td>
-    </tr>
-    <tr>
-        <td rowspan="3">Delivery Capability</td>
-        <td>Build Count</td>
-        <td>The number of builds started</td>
-        <td>CI/CD entities: <a href="https://github.com/merico-dev/lake/blob/main/plugins/jenkins/README.md">Jenkins</a> builds, <a href="https://github.com/merico-dev/lake/blob/main/plugins/gitlab/README.md">GitLab CI</a> pipelines, etc</td>
-        <td rowspan="3">1. From the project dimension, compare the number of builds and success rate by combining the project phase and the complexity of tasks<br/>
-2. From the time dimension, analyze the trend of the number of builds and success rate to see if it has improved over time</td>
-        <td rowspan="3">1. As a process indicator, it reflects the value flow efficiency of upstream production and research links<br/>
-2. Identify excellent/to-be-improved practices that impact the build, and drive the team to precipitate reusable tools and mechanisms to build infrastructure for fast and high-frequency delivery</td>
-    </tr>
-    <tr>
-        <td>Build Duration</td>
-        <td>The duration of successful builds</td>
-        <td>CI/CD entities: <a href="https://github.com/merico-dev/lake/blob/main/plugins/jenkins/README.md">Jenkins</a> builds, <a href="https://github.com/merico-dev/lake/blob/main/plugins/gitlab/README.md">GitLab CI</a> pipelines, etc</td>
-    </tr>
-    <tr>
-        <td>Build Success Rate</td>
-        <td>The percentage of successful builds</td>
-        <td>CI/CD entities: <a href="https://github.com/merico-dev/lake/blob/main/plugins/jenkins/README.md">Jenkins</a> builds, <a href="https://github.com/merico-dev/lake/blob/main/plugins/gitlab/README.md">GitLab CI</a> pipelines, etc</td>
-    </tr>
-</table>
-<br/><br/><br/>
diff --git a/versioned_docs/version-v0.11.0/Overview/Architecture.md b/versioned_docs/version-v0.11.0/Overview/Architecture.md
deleted file mode 100755
index 2d780a5..0000000
--- a/versioned_docs/version-v0.11.0/Overview/Architecture.md
+++ /dev/null
@@ -1,39 +0,0 @@
----
-title: "Architecture"
-description: >
-  Understand the architecture of Apache DevLake
-sidebar_position: 2
----
-
-## Architecture Overview
-
-<p align="center"><img src="/img/Architecture/arch-component.svg" /></p>
-<p align="center">DevLake Components</p>
-
-A DevLake installation typically consists of the following components:
-
-- Config UI: A handy user interface to create, trigger, and debug data pipelines.
-- API Server: The main programmatic interface of DevLake.
-- Runner: The runner does all the heavy lifting for executing tasks. In the default DevLake installation, it runs within the API Server, but DevLake also provides a Temporal-based runner (beta) for production environments.
-- Database: The database stores both DevLake's metadata and user data collected by data pipelines. DevLake supports MySQL and PostgreSQL as of v0.11.
-- Plugins: Plugins enable DevLake to collect and analyze dev data from any DevOps tool with an accessible API. The DevLake community is actively adding plugins for popular DevOps tools, but if your preferred tool is not covered yet, feel free to open a GitHub issue to let us know, or check out our docs on how to build a new plugin by yourself.
-- Dashboards: Dashboards deliver data and insights to DevLake users. A dashboard is simply a collection of SQL queries along with corresponding visualization configurations. DevLake's official dashboard tool is Grafana and pre-built dashboards are shipped in Grafana's JSON format. Users are welcome to swap for their own choice of dashboard/BI tool if desired.
-
-## Dataflow
-
-<p align="center"><img src="/img/Architecture/arch-dataflow.svg" /></p>
-<p align="center">DevLake Dataflow</p>
-
-A typical plugin's dataflow is illustrated below:
-
-1. The Raw layer stores the API responses from data sources (DevOps tools) in JSON. This saves developers' time if the raw data is to be transformed differently later on. Please note that communicating with data sources' APIs is usually the most time-consuming step.
-2. The Tool layer extracts raw data from JSONs into a relational schema that's easier to consume by analytical tasks. Each DevOps tool would have a schema that's tailored to their data structure, hence the name, the Tool layer.
-3. The Domain layer attempts to build a layer of abstraction on top of the Tool layer so that analytics logics can be re-used across different tools. For example, GitHub's Pull Request (PR) and GitLab's Merge Request (MR) are similar entities. They each have their own table name and schema in the Tool layer, but they're consolidated into a single entity in the Domain layer, so that developers only need to implement metrics like Cycle Time and Code Review Rounds once against the domain la [...]
-
-## Principles
-
-1. Extensible: DevLake's plugin system allows users to integrate with any DevOps tool. DevLake also provides a dbt plugin that enables users to define their own data transformation and analysis workflows.
-2. Portable: DevLake has a modular design and provides multiple options for each module. Users of different setups can freely choose the right configuration for themselves.
-3. Robust: DevLake provides an SDK to help plugins efficiently and reliably collect data from data sources while respecting their API rate limits and constraints.
-
-<br/>
diff --git a/versioned_docs/version-v0.11.0/Overview/Introduction.md b/versioned_docs/version-v0.11.0/Overview/Introduction.md
deleted file mode 100755
index c8aacd9..0000000
--- a/versioned_docs/version-v0.11.0/Overview/Introduction.md
+++ /dev/null
@@ -1,16 +0,0 @@
----
-title: "Introduction"
-description: General introduction of Apache DevLake
-sidebar_position: 1
----
-
-## What is Apache DevLake?
-Apache DevLake is an open-source dev data platform that ingests, analyzes, and visualizes the fragmented data from DevOps tools to distill insights for engineering productivity.
-
-Apache DevLake is designed for developer teams looking to make better sense of their development process and to bring a more data-driven approach to their own practices. You can ask Apache DevLake many questions regarding your development process. Just connect and query.
-
-## What can be accomplished with DevLake?
-1. Collect DevOps data across the entire Software Development Life Cycle (SDLC) and connect the siloed data with a standard [data model](../DataModels/DevLakeDomainLayerSchema.md).
-2. Visualize out-of-the-box engineering [metrics](../EngineeringMetrics.md) in a series of use-case driven dashboards
-3. Easily extend DevLake to support your data sources, metrics, and dashboards with a flexible [framework](Architecture.md) for data collection and ETL.
-
diff --git a/versioned_docs/version-v0.11.0/Overview/Roadmap.md b/versioned_docs/version-v0.11.0/Overview/Roadmap.md
deleted file mode 100644
index 9dcf0b3..0000000
--- a/versioned_docs/version-v0.11.0/Overview/Roadmap.md
+++ /dev/null
@@ -1,33 +0,0 @@
----
-title: "Roadmap"
-description: >
-  The goals and roadmap for DevLake in 2022
-sidebar_position: 3
----
-
-
-## Goals
-DevLake has joined the Apache Incubator and is aiming to become a top-level project. To achieve this goal, the Apache DevLake (Incubating) community will continue to make efforts in helping development teams to analyze and improve their engineering productivity. In the 2022 Roadmap, we have summarized three major goals followed by the feature breakdown to invite the broader community to join us and grow together.
-
-1. As a dev data analysis application, discover and implement 3 (or even more!) usage scenarios:
-   - A collection of metrics to track the contribution, quality and growth of open-source projects
-   - DORA metrics for DevOps engineers
-   - To be decided ([let us know](https://join.slack.com/t/devlake-io/shared_invite/zt-17b6vuvps-x98pqseoUagM7EAmKC82xQ) if you have any suggestions!)
-2. As dev data infrastructure, provide robust data collection modules, customizable data models, and data extensibility.
-3. Design a better user experience for end-users and contributors.
-
-## Feature Breakdown
-Apache DevLake is currently under rapid development. You are more than welcome to use the following table to explore the features you are interested in and make contributions. We deeply appreciate the collective effort of our community to make this project possible!
-
-| Category | Features|
-| --- | --- |
-| More data sources across different [DevOps domains](../DataModels/DevLakeDomainLayerSchema.md) (Goal No.1 & 2)| Features in **bold** are of higher priority <br/><br/> Issue/Task Management: <ul><li>**Jira server** [#886 (closed)](https://github.com/apache/incubator-devlake/issues/886)</li><li>**Jira data center** [#1687 (closed)](https://github.com/apache/incubator-devlake/issues/1687)</li><li>GitLab Issues [#715 (closed)](https://github.com/apache/incubator-devlake/issues/715)</li><li [...]
-| Improved data collection, [data models](../DataModels/DevLakeDomainLayerSchema.md) and data extensibility (Goal No.2)| Data Collection: <br/> <ul><li>Complete the logging system</li><li>Implement a good error handling mechanism during data collection</li></ul> Data Models:<ul><li>Introduce DBT to allow users to create and modify the domain layer schema. [#1479 (closed)](https://github.com/apache/incubator-devlake/issues/1479)</li><li>Design the data models for 5 new domains, please ref [...]
-| Better user experience (Goal No.3) | For new users: <ul><li> Iterate on a clearer step-by-step guide to improve the pre-configuration experience.</li><li>Provide a new Config UI to reduce frictions for data configuration [#1700 (in-progress)](https://github.com/apache/incubator-devlake/issues/1700)</li><li> Showcase dashboard live demos to let users explore and learn about the dashboards. [#1784 (open)](https://github.com/apache/incubator-devlake/issues/1784)</li></ul>For returning use [...]
-
-
-## How to Influence the Roadmap
-A roadmap is only useful when it captures real user needs. We are glad to hear from you if you have specific use cases, feedback, or ideas. You can submit an issue to let us know!
-Also, if you plan to work (or are already working) on a new or existing feature, tell us, so that we can update the roadmap accordingly. We are happy to share knowledge and context to help your feature land successfully.
-<br/><br/><br/>
-
diff --git a/versioned_docs/version-v0.11.0/Overview/_category_.json b/versioned_docs/version-v0.11.0/Overview/_category_.json
deleted file mode 100644
index e224ed8..0000000
--- a/versioned_docs/version-v0.11.0/Overview/_category_.json
+++ /dev/null
@@ -1,4 +0,0 @@
-{
-  "label": "Overview",
-  "position": 1
-}
diff --git a/versioned_docs/version-v0.11.0/Plugins/Dbt.md b/versioned_docs/version-v0.11.0/Plugins/Dbt.md
deleted file mode 100644
index 059bf12..0000000
--- a/versioned_docs/version-v0.11.0/Plugins/Dbt.md
+++ /dev/null
@@ -1,67 +0,0 @@
----
-title: "DBT"
-description: >
-  DBT Plugin
----
-
-
-## Summary
-
-dbt (data build tool) enables analytics engineers to transform data in their warehouses by simply writing select statements. dbt handles turning these select statements into tables and views.
-dbt does the T in ELT (Extract, Load, Transform) processes – it doesn’t extract or load data, but it’s extremely good at transforming data that’s already loaded into your warehouse.
-
-## User setup<a id="user-setup"></a>
-- To use this plugin, you need to set up the following environment first.
-
-#### Required Packages to Install<a id="user-setup-requirements"></a>
-- [python3.7+](https://www.python.org/downloads/)
-- [dbt-mysql](https://pypi.org/project/dbt-mysql/#configuring-your-profile)
-
-#### Commands to run in your terminal to set up the dbt project<a id="user-setup-commands"></a>
-1. `pip install dbt-mysql`
-2. `dbt init demoapp` (`demoapp` is the project name)
-3. Create your SQL transformations and data models
-
-## Convert Data By DBT
-
-Use the Raw JSON API to manually initiate a run using **cURL** or a graphical API tool such as **Postman**. `POST` the following request to the DevLake API endpoint.
-
-```json
-[
-  [
-    {
-      "plugin": "dbt",
-      "options": {
-          "projectPath": "/Users/abeizn/demoapp",
-          "projectName": "demoapp",
-          "projectTarget": "dev",
-          "selectedModels": ["my_first_dbt_model","my_second_dbt_model"],
-          "projectVars": {
-            "demokey1": "demovalue1",
-            "demokey2": "demovalue2"
-        }
-      }
-    }
-  ]
-]
-```
-
-- `projectPath`: the absolute path of the dbt project. (required)
-- `projectName`: the name of the dbt project. (required)
-- `projectTarget`: this is the default target your dbt project will use. (optional)
-- `selectedModels`: the models to run. A model is a select statement, defined in a .sql file, typically in your models directory. (required)
-`selectedModels` accepts one or more arguments. Each argument can be one of:
-1. a package name, which runs all models in your project, e.g. `example`
-2. a model name, which runs a specific model, e.g. `my_first_dbt_model`
-3. a fully-qualified path to a directory of models.
-
-- `projectVars`: variables to parametrize dbt models. (optional)
-For example, given the model:
-`select * from events where event_type = '{{ var("event_type") }}'`
-you need to set a value for `event_type` before this model can run.
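-
-Putting it together, a pipeline request that passes a value for the `event_type` variable might look like the sketch below (the value `"click"` is purely illustrative, not part of the plugin):
-
-```json
-[
-  [
-    {
-      "plugin": "dbt",
-      "options": {
-          "projectPath": "/Users/abeizn/demoapp",
-          "projectName": "demoapp",
-          "selectedModels": ["my_first_dbt_model"],
-          "projectVars": { "event_type": "click" }
-      }
-    }
-  ]
-]
-```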
-
-### Resources:
-- Learn more about dbt [in the docs](https://docs.getdbt.com/docs/introduction)
-- Check out [Discourse](https://discourse.getdbt.com/) for commonly asked questions and answers
-
-<br/><br/><br/>
diff --git a/versioned_docs/version-v0.11.0/Plugins/Feishu.md b/versioned_docs/version-v0.11.0/Plugins/Feishu.md
deleted file mode 100644
index c3e0eb6..0000000
--- a/versioned_docs/version-v0.11.0/Plugins/Feishu.md
+++ /dev/null
@@ -1,64 +0,0 @@
----
-title: "Feishu"
-description: >
-  Feishu Plugin
----
-
-## Summary
-
-This plugin collects Feishu meeting data through [Feishu Openapi](https://open.feishu.cn/document/home/user-identity-introduction/introduction).
-
-## Configuration
-
-In order to fully use this plugin, you will need to get app_id and app_secret from a Feishu administrator (for help on App info, please see [official Feishu Docs](https://open.feishu.cn/document/ukTMukTMukTM/ukDNz4SO0MjL5QzM/auth-v3/auth/tenant_access_token_internal)),
-then set these two parameters via DevLake's `.env` file.
-
-### By `.env`
-
-The connection aspect of the configuration screen requires the following key fields to connect to the Feishu API. As Feishu is a single-source data provider at the moment, the connection name is read-only as there is only one instance to manage. As we continue our development roadmap we may enable multi-source connections for Feishu in the future.
-
-```
-FEISHU_APPID=app_id
-FEISHU_APPSCRECT=app_secret
-```
-
-## Collect data from Feishu
-
-To collect data, select `Advanced Mode` on the `Create Pipeline Run` page and paste a JSON config like the following:
-
-
-```json
-[
-  [
-    {
-      "plugin": "feishu",
-      "options": {
-        "numOfDaysToCollect" : 80,
-        "rateLimitPerSecond" : 5
-      }
-    }
-  ]
-]
-```
-
-> `numOfDaysToCollect`: The number of days of data you want to collect
-
-> `rateLimitPerSecond`: The number of requests to send per second (the maximum is 8)
-
-You can also trigger data collection by making a POST request to `/pipelines`.
-```
-curl --location --request POST 'localhost:8080/pipelines' \
---header 'Content-Type: application/json' \
---data-raw '
-{
-    "name": "feishu 20211126",
-    "tasks": [[{
-      "plugin": "feishu",
-      "options": {
-        "numOfDaysToCollect" : 80,
-        "rateLimitPerSecond" : 5
-      }
-    }]]
-}
-'
-```
\ No newline at end of file
diff --git a/versioned_docs/version-v0.11.0/Plugins/GitExtractor.md b/versioned_docs/version-v0.11.0/Plugins/GitExtractor.md
deleted file mode 100644
index b40cede..0000000
--- a/versioned_docs/version-v0.11.0/Plugins/GitExtractor.md
+++ /dev/null
@@ -1,63 +0,0 @@
----
-title: "GitExtractor"
-description: >
-  GitExtractor Plugin
----
-
-## Summary
-This plugin extracts commits and references from a remote or local git repository. It then saves the data into the database or csv files.
-
-## Steps to make this plugin work
-
-1. Use the Git repo extractor to retrieve data about commits and branches from your repository.
-2. Use the GitHub plugin to retrieve data about GitHub issues and PRs from your repository.
-NOTE: you can run only one issue collection stage, as described in the GitHub Plugin README.
-3. Use the [RefDiff](RefDiff.md) plugin to calculate version diff, which will be stored in `refs_commits_diffs` table.
-
-## Sample Request
-
-```
-curl --location --request POST 'localhost:8080/pipelines' \
---header 'Content-Type: application/json' \
---data-raw '
-{
-    "name": "git repo extractor",
-    "tasks": [
-        [
-            {
-                "Plugin": "gitextractor",
-                "Options": {
-                    "url": "https://github.com/merico-dev/lake.git",
-                    "repoId": "github:GithubRepo:384111310"
-                }
-            }
-        ]
-    ]
-}
-'
-```
-- `url`: the location of the git repository. It should start with `http`/`https` for a remote git repository and with `/` for a local one.
-- `repoId`: column `id` of the `repos` table.
-- `proxy`: optional, HTTP proxy, e.g. `http://your-proxy-server.com:1080`.
-- `user`: optional, for cloning a private repository via HTTP/HTTPS
-- `password`: optional, for cloning a private repository via HTTP/HTTPS
-- `privateKey`: optional, for SSH cloning, a base64-encoded `PEM` file
-- `passphrase`: optional, the passphrase for the private key
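-
-As a sketch of how the optional fields combine, a request that clones a private repository over HTTPS might look like the following (the URL, `repoId`, and credentials are placeholders, not real values):
-
-```
-curl --location --request POST 'localhost:8080/pipelines' \
---header 'Content-Type: application/json' \
---data-raw '
-{
-    "name": "private repo extractor",
-    "tasks": [
-        [
-            {
-                "Plugin": "gitextractor",
-                "Options": {
-                    "url": "https://example.com/your-org/your-repo.git",
-                    "repoId": "github:GithubRepo:123456789",
-                    "user": "your-username",
-                    "password": "your-token-or-password"
-                }
-            }
-        ]
-    ]
-}
-'
-```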
-
-
-## Standalone Mode
-
-You can also run this plugin in standalone mode, without any DevLake service running, using the following command:
-
-```
-go run plugins/gitextractor/main.go -url https://github.com/merico-dev/lake.git -id github:GithubRepo:384111310 -db "merico:merico@tcp(127.0.0.1:3306)/lake?charset=utf8mb4&parseTime=True"
-```
-
-For more options (e.g., saving to a csv file instead of a db), please read `plugins/gitextractor/main.go`.
-
-## Development
-
-This plugin depends on `libgit2`; you need to install version 1.3.0 in order to run and debug this plugin on your local
-machine. [Click here](RefDiff.md#Development) for a brief guide.
-
-<br/><br/><br/>
diff --git a/versioned_docs/version-v0.11.0/Plugins/GitHub.md b/versioned_docs/version-v0.11.0/Plugins/GitHub.md
deleted file mode 100644
index cca87b7..0000000
--- a/versioned_docs/version-v0.11.0/Plugins/GitHub.md
+++ /dev/null
@@ -1,95 +0,0 @@
----
-title: "GitHub"
-description: >
-  GitHub Plugin
----
-
-
-
-## Summary
-
-This plugin gathers data from `GitHub` to display information to the user in `Grafana`. We can help tech leaders answer such questions as:
-
-- Is this month more productive than last?
-- How fast do we respond to customer requirements?
-- Was our quality improved or not?
-
-## Metrics
-
-Here are some example metrics using `GitHub` data:
-- Avg Requirement Lead Time By Assignee
-- Bug Count per 1k Lines of Code
-- Commit Count over Time
-
-## Screenshot
-
-![image](/img/Plugins/github-demo.png)
-
-
-## Configuration
-
-### Provider (Datasource) Connection
-The connection section of the configuration screen requires the following key fields to connect to the **GitHub API**.
-
-![connection-in-config-ui](github-connection-in-config-ui.png)
-
-- **Connection Name** [`READONLY`]
-  - ⚠️ Defaults to "**Github**" and may not be changed. As GitHub is a _single-source data provider_ at the moment, the connection name is read-only as there is only one instance to manage. As we advance on our development roadmap we may enable _multi-source_ connections for GitHub in the future.
-- **Endpoint URL** (REST URL, starts with `https://` or `http://`)
-  - This should be a valid REST API Endpoint eg. `https://api.github.com/`
-  - ⚠️ URL should end with `/`
-- **Auth Token(s)** (Personal Access Token)
-  - For help on **Creating a personal access token**, please see official [GitHub Docs on Personal Tokens](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/creating-a-personal-access-token)
-  - Provide at least one token for Authentication.
-  - This field accepts a comma-separated list of values for multiple tokens. The data collection will take longer for GitHub since they have a **rate limit of [5,000 requests](https://docs.github.com/en/rest/overview/resources-in-the-rest-api#rate-limiting) per hour** (15,000 requests/hour if you pay for `GitHub` enterprise). You can accelerate the process by configuring _multiple_ personal access tokens.
-
-Click **Save Connection** to update connection settings.
-
-
-### Provider (Datasource) Settings
-Manage additional settings and options for the GitHub Datasource Provider. Currently, there is only one **optional** setting, *Proxy URL*. If you are behind a corporate firewall or VPN, you may need to use a proxy server.
-
-- **GitHub Proxy URL [`Optional`]**
-Enter a valid proxy server address on your Network, e.g. `http://your-proxy-server.com:1080`
-
-Click **Save Settings** to update additional settings.
-
-### Regular Expression Configuration
-Define regex patterns in `.env`:
-- `GITHUB_PR_BODY_CLOSE_PATTERN`: Defines the keywords that associate an issue with a PR body; please check the example in `.env.example`
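-
-As an illustrative sketch only (the simplified pattern below is an assumption; check `.env.example` for the authoritative default), an entry could look like:
-
-```
-GITHUB_PR_BODY_CLOSE_PATTERN=(?mi)(fix|close|resolve)s?\s+#(\d+)
-```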
-
-## Sample Request
-To collect data, select `Advanced Mode` on the `Create Pipeline Run` page and paste a JSON config like the following:
-
-```json
-[
-  [
-    {
-      "plugin": "github",
-      "options": {
-        "repo": "lake",
-        "owner": "merico-dev"
-      }
-    }
-  ]
-]
-```
-
-You can also trigger data collection by making a POST request to `/pipelines`.
-```
-curl --location --request POST 'localhost:8080/pipelines' \
---header 'Content-Type: application/json' \
---data-raw '
-{
-    "name": "github 20211126",
-    "tasks": [[{
-        "plugin": "github",
-        "options": {
-            "repo": "lake",
-            "owner": "merico-dev"
-        }
-    }]]
-}
-'
-```
-<br/><br/><br/>
diff --git a/versioned_docs/version-v0.11.0/Plugins/GitLab.md b/versioned_docs/version-v0.11.0/Plugins/GitLab.md
deleted file mode 100644
index 21a86d7..0000000
--- a/versioned_docs/version-v0.11.0/Plugins/GitLab.md
+++ /dev/null
@@ -1,94 +0,0 @@
----
-title: "GitLab"
-description: >
-  GitLab Plugin
----
-
-
-## Metrics
-
-| Metric Name                 | Description                                                  |
-|:----------------------------|:-------------------------------------------------------------|
-| Pull Request Count          | Number of Pull/Merge Requests                                |
-| Pull Request Pass Rate      | Ratio of merged Pull/Merge Requests to all Pull/Merge Requests |
-| Pull Request Reviewer Count | Number of Pull/Merge Reviewers                               |
-| Pull Request Review Time    | Time from Pull/Merge created time until merged               |
-| Commit Author Count         | Number of Contributors                                       |
-| Commit Count                | Number of Commits                                            |
-| Added Lines                 | Accumulated Number of New Lines                              |
-| Deleted Lines               | Accumulated Number of Removed Lines                          |
-| Pull Request Review Rounds  | Number of cycles of commits followed by comments/final merge |
-
-## Configuration
-
-### Provider (Datasource) Connection
-The connection section of the configuration screen requires the following key fields to connect to the **GitLab API**.
-
-![connection-in-config-ui](gitlab-connection-in-config-ui.png)
-
-- **Connection Name** [`READONLY`]
-  - ⚠️ Defaults to "**GitLab**" and may not be changed. As GitLab is a _single-source data provider_ at the moment, the connection name is read-only as there is only one instance to manage. As we advance on our development roadmap we may enable _multi-source_ connections for GitLab in the future.
-- **Endpoint URL** (REST URL, starts with `https://` or `http://`)
-  - This should be a valid REST API Endpoint eg. `https://gitlab.example.com/api/v4/`
-  - ⚠️ URL should end with `/`
-- **Personal Access Token** (HTTP Basic Auth)
-  - Login to your GitLab Account and create a **Personal Access Token** to authenticate with the API using HTTP Basic Authentication. The token must be 20 characters long. Save the personal access token somewhere safe. After you leave the page, you no longer have access to the token.
-
-    1. In the top-right corner, select your **avatar**.
-    2. Click on **Edit profile**.
-    3. On the left sidebar, select **Access Tokens**.
-    4. Enter a **name** and optional **expiry date** for the token.
-    5. Select the desired **scopes**.
-    6. Click on **Create personal access token**.
-
-    For help on **Creating a personal access token**, please see official [GitLab Docs on Personal Tokens](https://docs.gitlab.com/ee/user/profile/personal_access_tokens.html).
-    For an overview of the **GitLab REST API**, please see official [GitLab Docs on REST](https://docs.gitlab.com/ee/development/documentation/restful_api_styleguide.html#restful-api)
-
-Click **Save Connection** to update connection settings.
-
-### Provider (Datasource) Settings
-There are no additional settings for the GitLab Datasource Provider at this time.
-
-> NOTE: `GitLab Project ID` Mappings feature has been deprecated.
-
-## Gathering Data with GitLab
-
-To collect data, you can make a POST request to `/pipelines`
-
-```
-curl --location --request POST 'localhost:8080/pipelines' \
---header 'Content-Type: application/json' \
---data-raw '
-{
-    "name": "gitlab 20211126",
-    "tasks": [[{
-        "plugin": "gitlab",
-        "options": {
-            "projectId": <Your gitlab project id>
-        }
-    }]]
-}
-'
-```
-
-## Finding Project Id
-
-To get the project id for a specific `GitLab` repository:
-- Visit the repository page on GitLab
-- Find the project id just below the title
-
-  ![Screen Shot 2021-08-06 at 4 32 53 PM](https://user-images.githubusercontent.com/3789273/128568416-a47b2763-51d8-4a6a-8a8b-396512bffb03.png)
-
-> Use this project id in your requests to collect data from this project.
-
-## ⚠️ (WIP) Create a GitLab API Token <a id="gitlab-api-token"></a>
-
-1. When logged into `GitLab` visit `https://gitlab.com/-/profile/personal_access_tokens`
-2. Give the token any name, no expiration date, and all scopes (excluding write access)
-
-    ![Screen Shot 2021-08-06 at 4 44 01 PM](https://user-images.githubusercontent.com/3789273/128569148-96f50d4e-5b3b-4110-af69-a68f8d64350a.png)
-
-3. Click the **Create Personal Access Token** button
-4. Save the API token into the `.env` file via `config-ui` or edit the file directly.
-
-<br/><br/><br/>
diff --git a/versioned_docs/version-v0.11.0/Plugins/Gitee.md b/versioned_docs/version-v0.11.0/Plugins/Gitee.md
deleted file mode 100644
index 6066fd2..0000000
--- a/versioned_docs/version-v0.11.0/Plugins/Gitee.md
+++ /dev/null
@@ -1,112 +0,0 @@
----
-title: "Gitee(WIP)"
-description: >
-  Gitee Plugin
----
-
-## Summary
-
-## Configuration
-
-### Provider (Datasource) Connection
-The connection aspect of the configuration screen requires the following key fields to connect to the **Gitee API**. As Gitee is a _single-source data provider_ at the moment, the connection name is read-only as there is only one instance to manage. As we continue our development roadmap, we may enable _multi-source_ connections for Gitee in the future.
-
-- **Connection Name** [`READONLY`]
-    - ⚠️ Defaults to "**Gitee**" and may not be changed.
-- **Endpoint URL** (REST URL, starts with `https://` or `http://`)
-    - This should be a valid REST API Endpoint eg. `https://gitee.com/api/v5/`
-    - ⚠️ URL should end with `/`
-- **Auth Token(s)** (Personal Access Token)
-    - For help on **Creating a personal access token**, please see the official Gitee docs.
-    - Provide at least one token for authentication. This field accepts a comma-separated list of values for multiple tokens. The data collection will take longer for Gitee since they have a **rate limit of 2,000 requests per hour**. You can accelerate the process by configuring _multiple_ personal access tokens.
-
-If you need a higher API rate limit, you can set multiple tokens in the config file and all of them will be used.
-
-For an overview of the **Gitee REST API**, please see the official [Gitee Docs on REST](https://gitee.com/api/v5/swagger).
-
-Click **Save Connection** to update connection settings.
-
-
-### Provider (Datasource) Settings
-Manage additional settings and options for the Gitee Datasource Provider. Currently, there is only one **optional** setting, *Proxy URL*. If you are behind a corporate firewall or VPN, you may need to use a proxy server.
-
-**Gitee Proxy URL [`Optional`]**
-Enter a valid proxy server address on your Network, e.g. `http://your-proxy-server.com:1080`
-
-Click **Save Settings** to update additional settings.
-
-### Regular Expression Configuration
-Define regex patterns in `.env`:
-- `GITEE_PR_BODY_CLOSE_PATTERN`: Defines the keywords that associate an issue with a PR body; please check the example in `.env.example`
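-
-As an illustrative sketch only (the simplified pattern below is an assumption; check `.env.example` for the authoritative default), an entry could look like:
-
-```
-GITEE_PR_BODY_CLOSE_PATTERN=(?mi)(fix|close|resolve)s?\s+#(\d+)
-```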
-
-## Sample Request
-To collect data, compose a JSON config like the one below and send it by selecting `Advanced Mode` on the `Create Pipeline Run` page:
-1. Config-UI Mode
-```json
-[
-  [
-    {
-      "plugin": "gitee",
-      "options": {
-        "repo": "lake",
-        "owner": "merico-dev"
-      }
-    }
-  ]
-]
-```
-If you only want to perform certain subtasks, specify them explicitly:
-```json
-[
-  [
-    {
-      "plugin": "gitee",
-      "subtasks": ["collectXXX", "extractXXX", "convertXXX"],
-      "options": {
-        "repo": "lake",
-        "owner": "merico-dev"
-      }
-    }
-  ]
-]
-```
-
-2. Curl Mode:
-   You can also trigger data collection by making a POST request to `/pipelines`.
-```
-curl --location --request POST 'localhost:8080/pipelines' \
---header 'Content-Type: application/json' \
---data-raw '
-{
-    "name": "gitee 20211126",
-    "tasks": [[{
-        "plugin": "gitee",
-        "options": {
-            "repo": "lake",
-            "owner": "merico-dev"
-        }
-    }]]
-}
-'
-```
-If you only want to perform certain subtasks, specify them explicitly:
-```
-curl --location --request POST 'localhost:8080/pipelines' \
---header 'Content-Type: application/json' \
---data-raw '
-{
-    "name": "gitee 20211126",
-    "tasks": [[{
-        "plugin": "gitee",
-        "subtasks": ["collectXXX", "extractXXX", "convertXXX"],
-        "options": {
-            "repo": "lake",
-            "owner": "merico-dev"
-        }
-    }]]
-}
-'
-```
diff --git a/versioned_docs/version-v0.11.0/Plugins/Jenkins.md b/versioned_docs/version-v0.11.0/Plugins/Jenkins.md
deleted file mode 100644
index 792165d..0000000
--- a/versioned_docs/version-v0.11.0/Plugins/Jenkins.md
+++ /dev/null
@@ -1,59 +0,0 @@
----
-title: "Jenkins"
-description: >
-  Jenkins Plugin
----
-
-## Summary
-
-This plugin collects Jenkins data through [Remote Access API](https://www.jenkins.io/doc/book/using/remote-access-api/). It then computes and visualizes various DevOps metrics from the Jenkins data.
-
-![image](https://user-images.githubusercontent.com/61080/141943122-dcb08c35-cb68-4967-9a7c-87b63c2d6988.png)
-
-## Metrics
-
-| Metric Name        | Description                         |
-|:-------------------|:------------------------------------|
-| Build Count        | The number of builds created        |
-| Build Success Rate | The percentage of successful builds |
-
-## Configuration
-
-In order to fully use this plugin, you will need to set various configurations via DevLake's `config-ui`.
-
-### By `config-ui`
-
-The connection section of the configuration screen requires the following key fields to connect to the Jenkins API.
-
-- Connection Name [READONLY]
-  - ⚠️ Defaults to "Jenkins" and may not be changed. As Jenkins is a _single-source data provider_ at the moment, the connection name is read-only as there is only one instance to manage. As we advance on our development roadmap we may enable multi-source connections for Jenkins in the future.
-- Endpoint URL (REST URL; starts with `https://` or `http://`, ends with `/`)
-  - This should be a valid REST API endpoint, e.g. `https://ci.jenkins.io/`
-- Username (E-mail)
-  - Your User ID for the Jenkins Instance.
-- Password (Secret Phrase or API Access Token)
-  - Secret password for common credentials.
-  - For help on Username and Password, please see official Jenkins Docs on Using Credentials
-  - Alternatively, you can use an **API Access Token** for this field, which can be generated in the `User` -> `Configure` -> `API Token` section on Jenkins.
-
-Click Save Connection to update connection settings.
-
-## Collect Data From Jenkins
-
-To collect data, select `Advanced Mode` on the `Create Pipeline Run` page and paste a JSON config like the following:
-
-```json
-[
-  [
-    {
-      "plugin": "jenkins",
-      "options": {}
-    }
-  ]
-]
-```
-
-## Relationship between job and build
-
-A build is a snapshot of a job: running a job creates a new build each time.
-<br/><br/><br/>
diff --git a/versioned_docs/version-v0.11.0/Plugins/Jira.md b/versioned_docs/version-v0.11.0/Plugins/Jira.md
deleted file mode 100644
index 8ac28d6..0000000
--- a/versioned_docs/version-v0.11.0/Plugins/Jira.md
+++ /dev/null
@@ -1,253 +0,0 @@
----
-title: "Jira"
-description: >
-  Jira Plugin
----
-
-
-## Summary
-
-This plugin collects Jira data through Jira Cloud REST API. It then computes and visualizes various engineering metrics from the Jira data.
-
-<img width="2035" alt="jira metric display" src="https://user-images.githubusercontent.com/2908155/132926143-7a31d37f-22e1-487d-92a3-cf62e402e5a8.png" />
-
-## Project Metrics This Covers
-
-| Metric Name                         | Description                                                                                       |
-|:------------------------------------|:--------------------------------------------------------------------------------------------------|
-| Requirement Count	                  | Number of issues with type "Requirement"                                                          |
-| Requirement Lead Time	              | Lead time of issues with type "Requirement"                                                       |
-| Requirement Delivery Rate           | Ratio of delivered requirements to all requirements                                               |
-| Requirement Granularity             | Number of story points associated with an issue                                                   |
-| Bug Count	                          | Number of issues with type "Bug"<br/><i>bugs are found during testing</i>                         |
-| Bug Age	                          | Lead time of issues with type "Bug"                                               |
-| Bugs Count per 1k Lines of Code     | Amount of bugs per 1000 lines of code<br/><i>both new and deleted lines count</i> |
-| Incident Count                      | Number of issues with type "Incident"<br/><i>incidents are found when running in production</i>   |
-| Incident Age                        | Lead time of issues with type "Incident"                                                          |
-| Incident Count per 1k Lines of Code | Amount of incidents per 1000 lines of code                                                        |
-
-## Configuration
-
-In order to fully use this plugin, you will need to set various configurations via DevLake's `config-ui` service. Open `config-ui` in your browser (by default at http://localhost:4000), then go to the **Data Integrations / JIRA** page. The JIRA plugin currently supports multiple data connections: here you can **add** a new JIRA connection or **update** the settings of an existing one if needed.
-
-For each connection, you will need to set up the following items first:
-
-![connection at config ui](jira-connection-config-ui.png)
-
-- Connection Name: This allows you to distinguish different connections.
-- Endpoint URL: The JIRA instance API endpoint, for JIRA Cloud Service: `https://<mydomain>.atlassian.net/rest`. DevLake officially supports JIRA Cloud Service on atlassian.net; it may or may not work for JIRA Server instances.
-- Basic Auth Token: First, generate a **JIRA API TOKEN** for your JIRA account on the JIRA console (see [Generating API token](#generating-api-token)). Then, in `config-ui`, click the KEY icon on the right side of the input to generate a full `HTTP BASIC AUTH` token for you.
-- Proxy URL: Only needed when you want to collect data through a VPN or proxy.
-
-### More custom configuration
-If you want to add more custom configuration, you can click "Settings" to change the options below.
-![More config in config ui](jira-more-setting-in-config-ui.png)
-- Issue Type Mapping: JIRA is highly customizable; each JIRA instance may have a different set of issue types than others. In order to compute and visualize metrics across different instances, you need to map your issue types to standard ones. See [Issue Type Mapping](#issue-type-mapping) for details.
-- Epic Key: Unfortunately, the epic relationship in JIRA is implemented via a `custom field`, which varies from instance to instance. Please see [Find Out Custom Fields](#find-out-custom-fields).
-- Story Point Field: Same as Epic Key.
-- Remotelink Commit SHA: A regular expression that matches commit links, used to determine whether an external link points to a commit. Taking GitLab as an example, to match all commits similar to https://gitlab.com/merico-dev/ce/example-repository/-/commit/8ab8fb319930dbd8615830276444b8545fd0ad24, you can use the regular expression **/commit/([0-9a-f]{40})$**
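As a quick sanity check (an illustration only; DevLake applies the regex internally), the commit-link pattern above can be tested against the sample link from a shell:

```shell
# Hypothetical check of the commit-link pattern documented above.
pattern='/commit/([0-9a-f]{40})$'
url='https://gitlab.com/merico-dev/ce/example-repository/-/commit/8ab8fb319930dbd8615830276444b8545fd0ad24'
if echo "$url" | grep -qE "$pattern"; then
  echo "matched"    # the link is recognized as a commit link
fi
```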
-
-
-### Generating API token
-1. Once logged into Jira, visit the url `https://id.atlassian.com/manage-profile/security/api-tokens`
-2. Click the **Create API Token** button, and give it any label name
-![image](https://user-images.githubusercontent.com/27032263/129363611-af5077c9-7a27-474a-a685-4ad52366608b.png)
-
-
-### Issue Type Mapping
-
-DevLake supports 3 standard types; all metrics are computed based on them:
-
- - `Bug`: Problems found during the `test` phase, before they can reach the production environment.
- - `Incident`: Problems that made it through the `test` phase and were found after deployment to the production environment.
- - `Requirement`: Normally, it would be `Story` on your instance if you adopted SCRUM.
-
-You can map arbitrary **YOUR OWN ISSUE TYPE** to a single **STANDARD ISSUE TYPE**. Normally, one would map `Story` to `Requirement`, but you could map both `Story` and `Task` to `Requirement` if that was your case. Unspecified types are copied directly for your convenience, so you don't need to map your `Bug` to standard `Bug`.
-
-Type mapping is critical for some metrics, like **Requirement Count**; make sure to map your custom types correctly.
-
-### Find Out Custom Field
-
-Please follow this guide: [How to find the custom field ID in Jira?](https://github.com/apache/incubator-devlake/wiki/How-to-find-the-custom-field-ID-in-Jira)
-
-
-## Collect Data From JIRA
-
-To collect data, select `Advanced Mode` on the `Create Pipeline Run` page and paste a JSON config like the following:
-
-> <font color="#ED6A45">Warning: Data collection only supports single-task execution, and the results of concurrent multi-task execution may not meet expectations.</font>
-
-```
-[
-  [
-    {
-      "plugin": "jira",
-      "options": {
-          "connectionId": 1,
-          "boardId": 8,
-          "since": "2006-01-02T15:04:05Z"
-      }
-    }
-  ]
-]
-```
-
-- `connectionId`: The `ID` field from **JIRA Integration** page.
-- `boardId`: JIRA board id, see "Find Board Id" for details.
-- `since`: optional, download data since a specified date only.
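The `since` value is an ISO-8601 UTC timestamp. As a sketch (GNU `date` shown; assumptions: a Linux shell — macOS users would use `date -u -v-90d` instead), you could generate a value covering the last 90 days:

```shell
# Generate an ISO-8601 UTC timestamp 90 days in the past for the `since` option.
# Assumes GNU date; on macOS use: date -u -v-90d '+%Y-%m-%dT%H:%M:%SZ'
since=$(date -u -d '90 days ago' '+%Y-%m-%dT%H:%M:%SZ')
echo "$since"
```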
-
-
-### Find Board Id
-
-1. Navigate to the Jira board in the browser
-2. In the URL bar, get the board id from the parameter `?rapidView=`
-
-**Example:**
-
-`https://{your_jira_endpoint}/secure/RapidBoard.jspa?rapidView=51`
-
-![Screenshot](https://user-images.githubusercontent.com/27032263/129363083-df0afa18-e147-4612-baf9-d284a8bb7a59.png)
-
-Your board id is used in all REST requests to Apache DevLake. You do not need to configure this at the data connection level.
-
-
-
-## API
-
-### Data Connections
-
-1. Get all data connections
-
-```GET /plugins/jira/connections
-[
-  {
-    "ID": 14,
-    "CreatedAt": "2021-10-11T11:49:19.029Z",
-    "UpdatedAt": "2021-10-11T11:49:19.029Z",
-    "name": "test-jira-connection",
-    "endpoint": "https://merico.atlassian.net/rest",
-    "basicAuthEncoded": "basicAuth",
-    "epicKeyField": "epicKeyField",
-      "storyPointField": "storyPointField"
-  }
-]
-```
-
-2. Create a new data connection
-
-```POST /plugins/jira/connections
-{
-	"name": "jira data connection name",
-	"endpoint": "jira api endpoint, i.e. https://merico.atlassian.net/rest",
-    "basicAuthEncoded": "generated by `echo -n {jira login email}:{jira token} | base64`",
-	"epicKeyField": "name of customfield of epic key",
-	"storyPointField": "name of customfield of story point",
-	"typeMappings": { // optional, send empty object to delete all typeMappings of the data connection
-		"userType": {
-			"standardType": "devlake standard type"
-		}
-	}
-}
-```
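The `basicAuthEncoded` field above is just the base64 of `email:token`. A minimal sketch (the credentials are placeholders, not real values):

```shell
# Generate basicAuthEncoded from placeholder Jira credentials.
email='user@example.com'        # your Jira login e-mail (placeholder)
token='my-jira-api-token'       # the API token generated earlier (placeholder)
basic_auth=$(echo -n "${email}:${token}" | base64)
echo "$basic_auth"
```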
-
-
-3. Update data connection
-
-```PUT /plugins/jira/connections/:connectionId
-{
-	"name": "jira data connection name",
-	"endpoint": "jira api endpoint, i.e. https://merico.atlassian.net/rest",
-    "basicAuthEncoded": "generated by `echo -n {jira login email}:{jira token} | base64`",
-	"epicKeyField": "name of customfield of epic key",
-	"storyPointField": "name of customfield of story point",
-	"typeMappings": { // optional, send empty object to delete all typeMappings of the data connection
-		"userType": {
-			"standardType": "devlake standard type"
-		}
-	}
-}
-```
-
-4. Get data connection detail
-```GET /plugins/jira/connections/:connectionId
-{
-	"name": "jira data connection name",
-	"endpoint": "jira api endpoint, i.e. https://merico.atlassian.net/rest",
-    "basicAuthEncoded": "generated by `echo -n {jira login email}:{jira token} | base64`",
-	"epicKeyField": "name of customfield of epic key",
-	"storyPointField": "name of customfield of story point",
-	"typeMappings": { // optional, send empty object to delete all typeMappings of the data connection
-		"userType": {
-			"standardType": "devlake standard type"
-		}
-	}
-}
-```
-
-5. Delete data connection
-
-```DELETE /plugins/jira/connections/:connectionId
-```
-
-
-### Type mappings
-
-1. Get all type mappings
-```GET /plugins/jira/connections/:connectionId/type-mappings
-[
-  {
-    "jiraConnectionId": 16,
-    "userType": "userType",
-    "standardType": "standardType"
-  }
-]
-```
-
-2. Create a new type mapping
-
-```POST /plugins/jira/connections/:connectionId/type-mappings
-{
-    "userType": "userType",
-    "standardType": "standardType"
-}
-```
-
-3. Update type mapping
-
-```PUT /plugins/jira/connections/:connectionId/type-mapping/:userType
-{
-    "standardType": "standardTypeUpdated"
-}
-```
-
-
-4. Delete type mapping
-
-```DELETE /plugins/jira/connections/:connectionId/type-mapping/:userType
-```
-
-5. API forwarding
-For example:
-Requests to `http://your_devlake_host/plugins/jira/connections/1/proxy/rest/agile/1.0/board/8/sprint`
-would be forwarded to `https://your_jira_host/rest/agile/1.0/board/8/sprint`
-
-```GET /plugins/jira/connections/:connectionId/proxy/rest/*path
-{
-    "maxResults": 1,
-    "startAt": 0,
-    "isLast": false,
-    "values": [
-        {
-            "id": 7,
-            "self": "https://merico.atlassian.net/rest/agile/1.0/sprint/7",
-            "state": "closed",
-            "name": "EE Sprint 7",
-            "startDate": "2020-06-12T00:38:51.882Z",
-            "endDate": "2020-06-26T00:38:00.000Z",
-            "completeDate": "2020-06-22T05:59:58.980Z",
-            "originBoardId": 8,
-            "goal": ""
-        }
-    ]
-}
-```
diff --git a/versioned_docs/version-v0.11.0/Plugins/RefDiff.md b/versioned_docs/version-v0.11.0/Plugins/RefDiff.md
deleted file mode 100644
index 12950f4..0000000
--- a/versioned_docs/version-v0.11.0/Plugins/RefDiff.md
+++ /dev/null
@@ -1,116 +0,0 @@
----
-title: "RefDiff"
-description: >
-  RefDiff Plugin
----
-
-
-## Summary
-
-For development workload analysis, we often need to know how many commits were created between 2 releases. This plugin calculates which commits differ between 2 refs (branch/tag), and the result is stored back into the database for further analysis.
-
-## Important Note
-
-You need to run the gitextractor plugin before the refdiff plugin. The gitextractor plugin creates the records in the `refs` table of your DB that this plugin depends on.
-
-## Configuration
-
-This is an enrichment plugin based on Domain Layer data; no configuration is needed.
-
-## How to use
-
-In order to trigger the enrichment, you need to insert a new task into your pipeline.
-
-1. Make sure `commits` and `refs` are collected into your database; the `refs` table should contain records like the following:
-```
-id                                            ref_type
-github:GithubRepo:384111310:refs/tags/0.3.5   TAG
-github:GithubRepo:384111310:refs/tags/0.3.6   TAG
-github:GithubRepo:384111310:refs/tags/0.5.0   TAG
-github:GithubRepo:384111310:refs/tags/v0.0.1  TAG
-github:GithubRepo:384111310:refs/tags/v0.2.0  TAG
-github:GithubRepo:384111310:refs/tags/v0.3.0  TAG
-github:GithubRepo:384111310:refs/tags/v0.4.0  TAG
-github:GithubRepo:384111310:refs/tags/v0.6.0  TAG
-github:GithubRepo:384111310:refs/tags/v0.6.1  TAG
-```
-2. If you want to run calculateIssuesDiff, configure GITHUB_PR_BODY_CLOSE_PATTERN in .env; you can check the example in .env.example (a default value is provided; make sure your pattern is enclosed in single quotes '')
-3. If you want to run calculatePrCherryPick, configure GITHUB_PR_TITLE_PATTERN in .env; you can check the example in .env.example (a default value is provided; make sure your pattern is enclosed in single quotes '')
-4. Then trigger a pipeline like the following. You can also define subtasks: calculateRefDiff calculates the commits between two refs, and creatRefBugStats creates a table showing the bug list between two refs:
-```
-curl -v -XPOST http://localhost:8080/pipelines --data @- <<'JSON'
-{
-    "name": "test-refdiff",
-    "tasks": [
-        [
-            {
-                "plugin": "refdiff",
-                "options": {
-                    "repoId": "github:GithubRepo:384111310",
-                    "pairs": [
-                       { "newRef": "refs/tags/v0.6.0", "oldRef": "refs/tags/0.5.0" },
-                       { "newRef": "refs/tags/0.5.0", "oldRef": "refs/tags/0.4.0" }
-                    ],
-                    "tasks": [
-                        "calculateCommitsDiff",
-                        "calculateIssuesDiff",
-                        "calculatePrCherryPick"
-                    ]
-                }
-            }
-        ]
-    ]
-}
-JSON
-```
-
-## Development
-
-This plugin depends on `libgit2`, you need to install version 1.3.0 in order to run and debug this plugin on your local
-machine.
-
-### Ubuntu
-
-```
-apt install cmake
-git clone https://github.com/libgit2/libgit2.git
-cd libgit2
-git checkout v1.3.0
-mkdir build
-cd build
-cmake ..
-make
-make install
-```
-
-### MacOS
-1. Install via [MacPorts](https://guide.macports.org/#introduction):
-```
-port install libgit2@1.3.0
-```
-2. Or build from source:
-```
-brew install cmake
-git clone https://github.com/libgit2/libgit2.git
-cd libgit2
-git checkout v1.3.0
-mkdir build
-cd build
-cmake ..
-make
-make install
-```
-
-#### Troubleshooting (MacOS)
-
-> Q: I got an error saying: `pkg-config: exec: "pkg-config": executable file not found in $PATH`
-
-> A:
-> 1. Make sure you have pkg-config installed:
->
-> `brew install pkg-config`
->
-> 2. Make sure your pkg config path covers the installation:
-> `export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/local/lib:/usr/local/lib/pkgconfig`
-
-<br/><br/><br/>
diff --git a/versioned_docs/version-v0.11.0/Plugins/Tapd.md b/versioned_docs/version-v0.11.0/Plugins/Tapd.md
deleted file mode 100644
index b8db89f..0000000
--- a/versioned_docs/version-v0.11.0/Plugins/Tapd.md
+++ /dev/null
@@ -1,16 +0,0 @@
----
-title: "TAPD"
-description: >
-  TAPD Plugin
----
-
-## Summary
-
-This plugin collects TAPD data.
-
-This plugin is still under development, so its settings cannot yet be modified in config-ui.
-
-## Configuration
-
-In order to fully use this plugin, you will need to obtain the endpoint, basic_auth_encoded and rate_limit values and insert them into the table `_tool_tapd_connections`.
-
diff --git a/versioned_docs/version-v0.11.0/Plugins/_category_.json b/versioned_docs/version-v0.11.0/Plugins/_category_.json
deleted file mode 100644
index 534bad8..0000000
--- a/versioned_docs/version-v0.11.0/Plugins/_category_.json
+++ /dev/null
@@ -1,4 +0,0 @@
-{
-  "label": "Plugins",
-  "position": 7
-}
diff --git a/versioned_docs/version-v0.11.0/Plugins/github-connection-in-config-ui.png b/versioned_docs/version-v0.11.0/Plugins/github-connection-in-config-ui.png
deleted file mode 100644
index 5359fb1..0000000
Binary files a/versioned_docs/version-v0.11.0/Plugins/github-connection-in-config-ui.png and /dev/null differ
diff --git a/versioned_docs/version-v0.11.0/Plugins/gitlab-connection-in-config-ui.png b/versioned_docs/version-v0.11.0/Plugins/gitlab-connection-in-config-ui.png
deleted file mode 100644
index 7aacee8..0000000
Binary files a/versioned_docs/version-v0.11.0/Plugins/gitlab-connection-in-config-ui.png and /dev/null differ
diff --git a/versioned_docs/version-v0.11.0/Plugins/jira-connection-config-ui.png b/versioned_docs/version-v0.11.0/Plugins/jira-connection-config-ui.png
deleted file mode 100644
index df2e8e3..0000000
Binary files a/versioned_docs/version-v0.11.0/Plugins/jira-connection-config-ui.png and /dev/null differ
diff --git a/versioned_docs/version-v0.11.0/Plugins/jira-more-setting-in-config-ui.png b/versioned_docs/version-v0.11.0/Plugins/jira-more-setting-in-config-ui.png
deleted file mode 100644
index dffb0c9..0000000
Binary files a/versioned_docs/version-v0.11.0/Plugins/jira-more-setting-in-config-ui.png and /dev/null differ
diff --git a/versioned_docs/version-v0.11.0/QuickStart/KubernetesSetup.md b/versioned_docs/version-v0.11.0/QuickStart/KubernetesSetup.md
deleted file mode 100644
index e4faeba..0000000
--- a/versioned_docs/version-v0.11.0/QuickStart/KubernetesSetup.md
+++ /dev/null
@@ -1,33 +0,0 @@
----
-title: "Kubernetes Setup"
-description: >
-  The steps to install Apache DevLake in Kubernetes
-sidebar_position: 2
----
-
-
-We provide a sample [k8s-deploy.yaml](https://github.com/apache/incubator-devlake/blob/main/k8s-deploy.yaml) for users interested in deploying Apache DevLake on a k8s cluster.
-
-[k8s-deploy.yaml](https://github.com/apache/incubator-devlake/blob/main/k8s-deploy.yaml) will create a namespace `devlake` on your k8s cluster, and use `nodePort 30004` for `config-ui` and `nodePort 30002` for `grafana` dashboards. If you would like to use a specific version of Apache DevLake, please update the image tags of the `grafana`, `devlake` and `config-ui` services to pin versions like `v0.10.1`.
-
-## Step-by-step guide
-
-1. Download [k8s-deploy.yaml](https://github.com/apache/incubator-devlake/blob/main/k8s-deploy.yaml) to your local machine
-2. Some key points:
-   - `config-ui` deployment:
-     * `GRAFANA_ENDPOINT`: FQDN of grafana service which can be reached from user's browser
-     * `DEVLAKE_ENDPOINT`: FQDN of devlake service which can be reached within k8s cluster, normally you don't need to change it unless namespace was changed
-     * `ADMIN_USER`/`ADMIN_PASS`: Not required, but highly recommended
-   - `devlake-config` config map:
-     * `MYSQL_USER`: shared between `mysql` and `grafana` service
-     * `MYSQL_PASSWORD`: shared between `mysql` and `grafana` service
-     * `MYSQL_DATABASE`: shared between `mysql` and `grafana` service
-     * `MYSQL_ROOT_PASSWORD`: set root password for `mysql`  service
-   - `devlake` deployment:
-     * `DB_URL`: update this value if  `MYSQL_USER`, `MYSQL_PASSWORD` or `MYSQL_DATABASE` were changed
-3. The `devlake` deployment stores its configuration in `/app/.env`. In our sample yaml, we use a `hostPath` volume, so please make sure the directory `/var/lib/devlake` exists on your k8s workers, or employ other techniques to persist the `/app/.env` file. Please do NOT mount the entire `/app` directory, because plugins are located in the `/app/bin` folder.
-4. Finally, execute the following command, and Apache DevLake should be up and running:
-    ```sh
-    kubectl apply -f k8s-deploy.yaml
-    ```
-<br/><br/><br/>
diff --git a/versioned_docs/version-v0.11.0/QuickStart/LocalSetup.md b/versioned_docs/version-v0.11.0/QuickStart/LocalSetup.md
deleted file mode 100644
index 8e56a65..0000000
--- a/versioned_docs/version-v0.11.0/QuickStart/LocalSetup.md
+++ /dev/null
@@ -1,44 +0,0 @@
----
-title: "Local Setup"
-description: >
-  The steps to install DevLake locally
-sidebar_position: 1
----
-
-
-## Prerequisites
-
-- [Docker v19.03.10+](https://docs.docker.com/get-docker)
-- [docker-compose v2.2.3+](https://docs.docker.com/compose/install/)
-
-## Launch DevLake
-
-- Commands written `like this` are to be run in your terminal.
-
-1. Download `docker-compose.yml` and `env.example` from [latest release page](https://github.com/apache/incubator-devlake/releases/latest) into a folder.
-2. Rename `env.example` to `.env`. For Mac/Linux users, please run `mv env.example .env` in the terminal.
-3. Run `docker-compose up -d` to launch DevLake.
-
-## Configure data connections and collect data
-
-1. Visit `config-ui` at `http://localhost:4000` in your browser to configure data connections.
-   - Navigate to desired plugins on the Integrations page
-   - Please reference the following for more details on how to configure each one:<br/>
-      - [Jira](../Plugins/Jira.md)
-      - [GitHub](../Plugins/GitHub.md): For users who'd like to collect GitHub data, we recommend reading our [GitHub data collection guide](../UserManuals/GitHubUserGuide.md) which covers the following steps in detail.
-      - [GitLab](../Plugins/GitLab.md)
-      - [Jenkins](../Plugins/Jenkins.md)
-   - Submit the form to update the values by clicking on the **Save Connection** button on each form page
-   - `devlake` takes a while to fully boot up. If `config-ui` complains about the API being unreachable, please wait a few seconds and try refreshing the page.
-2. Create pipelines to trigger data collection in `config-ui`
-3. Click *View Dashboards* button in the top left when done, or visit `localhost:3002` (username: `admin`, password: `admin`).
-   - We use [Grafana](https://grafana.com/) as a visualization tool to build charts for the [data](../DataModels/DataSupport.md) stored in our database.
-   - Using SQL queries, we can add panels to build, save, and edit customized dashboards.
-   - All the details on provisioning and customizing a dashboard can be found in the [Grafana Doc](../UserManuals/GrafanaUserGuide.md).
-4. To synchronize data periodically, users can set up recurring pipelines; see DevLake's [pipeline blueprint](../UserManuals/RecurringPipelines.md) for details.
-
-## Upgrade to a newer version
-
-Support for database schema migration was introduced to DevLake in v0.10.0. From v0.10.0 onwards, users can smoothly upgrade their instance to a newer version. However, versions prior to v0.10.0 do not support upgrading to a newer version with a different database schema; we recommend that such users deploy a new instance if needed.
-
-<br/>
diff --git a/versioned_docs/version-v0.11.0/QuickStart/_category_.json b/versioned_docs/version-v0.11.0/QuickStart/_category_.json
deleted file mode 100644
index 133c30f..0000000
--- a/versioned_docs/version-v0.11.0/QuickStart/_category_.json
+++ /dev/null
@@ -1,4 +0,0 @@
-{
-  "label": "Quick Start",
-  "position": 2
-}
diff --git a/versioned_docs/version-v0.11.0/UserManuals/AdvancedMode.md b/versioned_docs/version-v0.11.0/UserManuals/AdvancedMode.md
deleted file mode 100644
index 4323133..0000000
--- a/versioned_docs/version-v0.11.0/UserManuals/AdvancedMode.md
+++ /dev/null
@@ -1,89 +0,0 @@
----
-title: "Advanced Mode"
-sidebar_position: 2
-description: >
-  Advanced Mode
----
-
-
-## Why advanced mode?
-
-Advanced mode allows users to create any pipeline by writing JSON. This is useful for users who want to:
-
-1. Collect multiple GitHub/GitLab repos or Jira projects within a single pipeline
-2. Have fine-grained control over what entities to collect or what subtasks to run for each plugin
-3. Orchestrate a complex pipeline that consists of multiple stages of plugins.
-
-Advanced mode gives the most flexibility to users by exposing the JSON API.
-
-## How to use advanced mode to create pipelines?
-
-1. Visit the "Create Pipeline Run" page on `config-ui`
-
-![image](https://user-images.githubusercontent.com/2908155/164569669-698da2f2-47c1-457b-b7da-39dfa7963e09.png)
-
-2. Scroll to the bottom and toggle on the "Advanced Mode" button
-
-![image](https://user-images.githubusercontent.com/2908155/164570039-befb86e2-c400-48fe-8867-da44654194bd.png)
-
-3. The pipeline editor expects a 2D array of plugins. The first dimension represents the different stages of the pipeline and the second dimension describes the plugins in each stage. Stages run in sequential order and plugins within the same stage run in parallel. We provide some templates for users to get started; please also see the next section for examples.
-
-![image](https://user-images.githubusercontent.com/2908155/164576122-fc015fea-ca4a-48f2-b2f5-6f1fae1ab73c.png)
-
-## Examples
-
-1. Collect multiple GitLab repos sequentially.
-
->When there are multiple collection tasks against a single data source, we recommend running them sequentially, since collection speed is mostly limited by the API rate limit of the data source.
->Running multiple tasks against the same data source is unlikely to speed up the process and may overwhelm it.
-
-
-Below is an example of collecting 2 GitLab repos sequentially. It has 2 stages, each containing a GitLab task.
-
-
-```
-[
-  [
-    {
-      "Plugin": "gitlab",
-      "Options": {
-        "projectId": 15238074
-      }
-    }
-  ],
-  [
-    {
-      "Plugin": "gitlab",
-      "Options": {
-        "projectId": 11624398
-      }
-    }
-  ]
-]
-```
-
-
-2. Collect a GitHub repo and a Jira board in parallel
-
-Below is an example of collecting a GitHub repo and a Jira board in parallel. It has a single stage with a GitHub task and a Jira task. Since users can configure multiple Jira connections, the Jira task requires a `connectionId` to specify which connection to use.
-
-```
-[
-  [
-    {
-      "Plugin": "github",
-      "Options": {
-        "repo": "lake",
-        "owner": "merico-dev"
-      }
-    },
-    {
-      "Plugin": "jira",
-      "Options": {
-        "connectionId": 1,
-        "boardId": 76
-      }
-    }
-  ]
-]
-```
diff --git a/versioned_docs/version-v0.11.0/UserManuals/GitHubUserGuide.md b/versioned_docs/version-v0.11.0/UserManuals/GitHubUserGuide.md
deleted file mode 100644
index fa67456..0000000
--- a/versioned_docs/version-v0.11.0/UserManuals/GitHubUserGuide.md
+++ /dev/null
@@ -1,118 +0,0 @@
----
-title: "GitHub User Guide"
-sidebar_position: 4
-description: >
-  GitHub User Guide
----
-
-## Summary
-
-GitHub has a rate limit of 5,000 API calls per hour for their REST API.
-As a result, it may take hours to collect commits data from GitHub API for a repo that has 10,000+ commits.
-To accelerate the process, DevLake introduces GitExtractor, a new plugin that collects git data by cloning the git repo instead of by calling GitHub APIs.
-
-Starting from v0.10.0, DevLake will collect GitHub data in 2 separate plugins:
-
-- GitHub plugin (via GitHub API): collect repos, issues, pull requests
-- GitExtractor (via cloning repos):  collect commits, refs
-
-Note that GitLab plugin still collects commits via API by default since GitLab has a much higher API rate limit.
-
-This doc details the process of collecting GitHub data in v0.10.0. We're working on simplifying this process in the next releases.
-
-Before you start, please make sure all services are up and running.
-
-## GitHub Data Collection Procedure
-
-There are 3 main steps, plus an optional fourth:
-
-1. Configure GitHub connection
-2. Create a pipeline to run GitHub plugin
-3. Create a pipeline to run GitExtractor plugin
-4. [Optional] Set up a recurring pipeline to keep data fresh
-
-### Step 1 - Configure GitHub connection
-
-1. Visit `config-ui` at `http://localhost:4000` and click the GitHub icon
-
-2. Click the default connection 'Github' in the list
-    ![image](https://user-images.githubusercontent.com/14050754/163591959-11d83216-057b-429f-bb35-a9d845b3de5a.png)
-
-3. Configure connection by providing your GitHub API endpoint URL and your personal access token(s).
-    ![image](https://user-images.githubusercontent.com/14050754/163592015-b3294437-ce39-45d6-adf6-293e620d3942.png)
-
-- Endpoint URL: Leave this unchanged if you're using github.com. Otherwise replace it with your own GitHub instance's REST API endpoint URL. This URL should end with '/'.
-- Auth Token(s): Fill in your personal access tokens(s). For how to generate personal access tokens, please see GitHub's [official documentation](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/creating-a-personal-access-token).
-You can provide multiple tokens to speed up the data collection process; simply concatenate the tokens with commas.
-- GitHub Proxy URL: This is optional. Enter a valid proxy server address on your Network, e.g. http://your-proxy-server.com:1080
-
-4. Click 'Test Connection' and see it's working, then click 'Save Connection'.
-
-5. [Optional] Help DevLake understand your GitHub data by customizing data enrichment rules shown below.
-    ![image](https://user-images.githubusercontent.com/14050754/163592506-1873bdd1-53cb-413b-a528-7bda440d07c5.png)
-
-   1. Pull Request Enrichment Options
-
-      1. `Type`: For PRs with a label matching the given regular expression, the `type` property will be set to the value of the first submatch. For example, with Type set to `type/(.*)$`, a PR labeled `type/bug` gets its `type` set to `bug`; one labeled `type/doc` gets `doc`.
-      2. `Component`: Same as above, but for the `component` property.
-
-   2. Issue Enrichment Options
-
-      1. `Severity`: Same as above, but for `issue.severity`.
-
-      2. `Component`: Same as above.
-
-      3. `Priority`: Same as above.
-
-      4. **Requirement**: For issues with a label matching the given regular expression, the `type` property will be set to `REQUIREMENT`. Unlike `PR.type`, the submatch does nothing here, because for issue management analysis people tend to focus on 3 types (Requirement/Bug/Incident); since the concrete naming varies from repo to repo and over time, we decided to standardize them to help analysts build general-purpose metrics.
-
-      5. **Bug**: Same as above, with `type` set to `BUG`
-
-      6. **Incident**: Same as above, with `type` set to `INCIDENT`
-
-6. Click 'Save Settings'
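The `Type` rule from step 5 can be illustrated with a one-liner (a sketch of the submatch behavior, not DevLake's actual implementation):

```shell
# With the pattern 'type/(.*)$', the first submatch of a matching label
# becomes the PR's `type` property.
label='type/bug'
echo "$label" | sed -nE 's|type/(.*)$|\1|p'   # prints: bug
```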
-
-### Step 2 - Create a pipeline to collect GitHub data
-
-1. Select 'Pipelines > Create Pipeline Run' from `config-ui`
-
-![image](https://user-images.githubusercontent.com/14050754/163592542-8b9d86ae-4f16-492c-8f90-12f1e90c5772.png)
-
-2. Toggle on GitHub plugin, enter the repo you'd like to collect data from.
-
-![image](https://user-images.githubusercontent.com/14050754/163592606-92141c7e-e820-4644-b2c9-49aa44f10871.png)
-
-3. Click 'Run Pipeline'
-
-You'll be redirected to the newly created pipeline:
-
-![image](https://user-images.githubusercontent.com/14050754/163592677-268e6b77-db3f-4eec-8a0e-ced282f5a361.png)
-
-
-Wait until the pipeline finishes (progress 100%):
-
-![image](https://user-images.githubusercontent.com/14050754/163592709-cce0d502-92e9-4c19-8504-6eb521b76169.png)
-
-### Step 3 - Create a pipeline to run GitExtractor plugin
-
-1. Enable the `GitExtractor` plugin, enter your `Git URL`, and select the `Repository ID` from the dropdown menu.
-
-![image](https://user-images.githubusercontent.com/2908155/164125950-37822d7f-6ee3-425d-8523-6f6b6213cb89.png)
-
-2. Click 'Run Pipeline' and wait until it's finished.
-
-3. Click `View Dashboards` on the top left corner of `config-ui`; the default Grafana username and password are both `admin`.
-
-![image](https://user-images.githubusercontent.com/61080/163666814-e48ac68d-a0cc-4413-bed7-ba123dd291c8.png)
-
-4. See dashboards populated with GitHub data.
-
-### Step 4 - [Optional] Set up a recurring pipeline to keep data fresh
-
-Please see [How to create recurring pipelines](./RecurringPipelines.md) for details.
-
-
-
-
-
-
diff --git a/versioned_docs/version-v0.11.0/UserManuals/GrafanaUserGuide.md b/versioned_docs/version-v0.11.0/UserManuals/GrafanaUserGuide.md
deleted file mode 100644
index e475702..0000000
--- a/versioned_docs/version-v0.11.0/UserManuals/GrafanaUserGuide.md
+++ /dev/null
@@ -1,120 +0,0 @@
----
-title: "Grafana User Guide"
-sidebar_position: 1
-description: >
-  Grafana User Guide
----
-
-
-# Grafana
-
-<img src="https://user-images.githubusercontent.com/3789273/128533901-3107e9bf-c3e3-4320-ba47-879fe2b0ea4d.png" width="450px" />
-
-When first visiting Grafana, you will see a sample dashboard with some basic charts set up from the database.
-
-## Contents
-
-Section | Link
-:------------ | :-------------
-Logging In | [View Section](#logging-in)
-Viewing All Dashboards | [View Section](#viewing-all-dashboards)
-Customizing a Dashboard | [View Section](#customizing-a-dashboard)
-Dashboard Settings | [View Section](#dashboard-settings)
-Provisioning a Dashboard | [View Section](#provisioning-a-dashboard)
-Troubleshooting DB Connection | [View Section](#troubleshooting-db-connection)
-
-## Logging In<a id="logging-in"></a>
-
-Once the app is up and running, visit `http://localhost:3002` to view the Grafana dashboard.
-
-Default login credentials are:
-
-- Username: `admin`
-- Password: `admin`
-
-## Viewing All Dashboards<a id="viewing-all-dashboards"></a>
-
-To see all dashboards created in Grafana, visit `/dashboards`
-
-Or, use the sidebar and click on **Manage**:
-
-![Screen Shot 2021-08-06 at 11 27 08 AM](https://user-images.githubusercontent.com/3789273/128534617-1992c080-9385-49d5-b30f-be5c96d5142a.png)
-
-
-## Customizing a Dashboard<a id="customizing-a-dashboard"></a>
-
-When viewing a dashboard, click the top bar of a panel, and go to **edit**
-
-![Screen Shot 2021-08-06 at 11 35 36 AM](https://user-images.githubusercontent.com/3789273/128535505-a56162e0-72ad-46ac-8a94-70f1c7a910ed.png)
-
-**Edit Dashboard Panel Page:**
-
-![grafana-sections](https://user-images.githubusercontent.com/3789273/128540136-ba36ee2f-a544-4558-8282-84a7cb9df27a.png)
-
-### 1. Preview Area
-- **Top Left** is the variable select area (custom dashboard variables, used for switching projects, or grouping data)
-- **Top Right** we have a toolbar with some buttons related to the display of the data:
-  - View data results in a table
-  - Time range selector
-  - Refresh data button
-- **The Main Area** will display the chart and should update in real time
-
-> Note: Data should refresh automatically, but may require a refresh using the button in some cases
-
-### 2. Query Builder
-Here we form the SQL query that pulls data from our database into the chart
-- Ensure the **Data Source** is the correct database
-
-  ![Screen Shot 2021-08-06 at 10 14 22 AM](https://user-images.githubusercontent.com/3789273/128545278-be4846e0-852d-4bc8-8994-e99b79831d8c.png)
-
-- Use the **Format as Table** and **Edit SQL** buttons to write/edit queries as SQL
-
-  ![Screen Shot 2021-08-06 at 10 17 52 AM](https://user-images.githubusercontent.com/3789273/128545197-a9ff9cb3-f12d-4331-bf6a-39035043667a.png)
-
-- The **Main Area** is where the queries are written, and in the top right is the **Query Inspector** button (to inspect returned data)
-
-  ![Screen Shot 2021-08-06 at 10 18 23 AM](https://user-images.githubusercontent.com/3789273/128545557-ead5312a-e835-4c59-b9ca-dd5c08f2a38b.png)
-
-### 3. Main Panel Toolbar
-In the top right of the window are buttons for:
-- Dashboard settings (regarding entire dashboard)
-- Save/apply changes (to specific panel)
-
-### 4. Grafana Parameter Sidebar
-- Change chart style (bar/line/pie chart etc)
-- Edit legends, chart parameters
-- Modify chart styling
-- Other Grafana specific settings
-
-## Dashboard Settings<a id="dashboard-settings"></a>
-
-When viewing a dashboard, click on the settings icon to view dashboard settings. Here are two important sections to use:
-
-![Screen Shot 2021-08-06 at 1 51 14 PM](https://user-images.githubusercontent.com/3789273/128555763-4d0370c2-bd4d-4462-ae7e-4b140c4e8c34.png)
-
-- Variables
-  - Create variables to use throughout the dashboard panels, that are also built on SQL queries
-
-  ![Screen Shot 2021-08-06 at 2 02 40 PM](https://user-images.githubusercontent.com/3789273/128553157-a8e33042-faba-4db4-97db-02a29036e27c.png)
-
-- JSON Model
-  - Copy `json` code here and save it to a new file in `/grafana/dashboards/` with a unique name in the `lake` repo. This will allow us to persist dashboards when we load the app
-
-  ![Screen Shot 2021-08-06 at 2 02 52 PM](https://user-images.githubusercontent.com/3789273/128553176-65a5ae43-742f-4abf-9c60-04722033339e.png)
-
-## Provisioning a Dashboard<a id="provisioning-a-dashboard"></a>
-
-To save a dashboard in the `lake` repo and load it:
-
-1. Create a dashboard in browser (visit `/dashboard/new`, or use sidebar)
-2. Save dashboard (in top right of screen)
-3. Go to dashboard settings (in top right of screen)
-4. Click on _JSON Model_ in sidebar
-5. Copy code into a new `.json` file in `/grafana/dashboards`
-
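Steps 4 and 5 can be scripted; a minimal sketch (the filename and JSON content below are placeholders for the real JSON Model, assuming `python3` is available for validation):

```shell
# Save a copied JSON Model under grafana/dashboards and sanity-check it
mkdir -p grafana/dashboards
cat > grafana/dashboards/my-dashboard.json <<'EOF'
{ "title": "My Dashboard", "panels": [] }
EOF

# Validate the JSON before committing it to the repo
python3 -m json.tool grafana/dashboards/my-dashboard.json > /dev/null && echo "valid JSON"
```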
-## Troubleshooting DB Connection<a id="troubleshooting-db-connection"></a>
-
-To ensure we have properly connected our database to the data source in Grafana, check database settings in `./grafana/datasources/datasource.yml`, specifically:
-- `database`
-- `user`
-- `secureJsonData/password`
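For reference, a sketch that writes a placeholder `datasource.yml` and checks those three fields; all values below are made up for illustration, not DevLake's actual defaults:

```shell
# Write a placeholder datasource.yml (values are examples only) and
# confirm the three fields the guide tells you to check are present
mkdir -p grafana/datasources
cat > grafana/datasources/datasource.yml <<'EOF'
apiVersion: 1
datasources:
  - name: mysql
    type: mysql
    url: mysql:3306
    database: lake
    user: root
    secureJsonData:
      password: example
EOF
grep -cE 'database:|user:|password:' grafana/datasources/datasource.yml   # prints: 3
```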
diff --git a/versioned_docs/version-v0.11.0/UserManuals/RecurringPipelines.md b/versioned_docs/version-v0.11.0/UserManuals/RecurringPipelines.md
deleted file mode 100644
index ce82b1e..0000000
--- a/versioned_docs/version-v0.11.0/UserManuals/RecurringPipelines.md
+++ /dev/null
@@ -1,30 +0,0 @@
----
-title: "Recurring Pipelines"
-sidebar_position: 3
-description: >
-  Recurring Pipelines
----
-
-## How to create recurring pipelines?
-
-Once you've verified that a pipeline works, most likely you'll want to run it periodically to keep data fresh, and DevLake's pipeline blueprint feature has you covered.
-
-
-1. Click 'Create Pipeline Run' and
-  - Toggle the plugins you'd like to run; here we use the GitHub and GitExtractor plugins as an example
-  - Toggle on Automate Pipeline
-    ![image](https://user-images.githubusercontent.com/14050754/163596590-484e4300-b17e-4119-9818-52463c10b889.png)
-
-
-2. Click 'Add Blueprint'. Fill in the form and 'Save Blueprint'.
-
-    - **NOTE**: The schedule syntax is standard unix cron syntax; [Crontab.guru](https://crontab.guru/) is a useful reference
-    - **IMPORTANT**: The scheduler runs in the `UTC` timezone. If you want data collection to happen at 3 AM New York time (UTC-04:00) every day, use **Custom Schedule** and set it to `0 7 * * *`
-
-    ![image](https://user-images.githubusercontent.com/14050754/163596655-db59e154-405f-4739-89f2-7dceab7341fe.png)
-
-3. Click 'Save Blueprint'.
-
-4. Click 'Pipeline Blueprints', you can view and edit the new blueprint in the blueprint list.
-
-    ![image](https://user-images.githubusercontent.com/14050754/163596773-4fb4237e-e3f2-4aef-993f-8a1499ca30e2.png)
\ No newline at end of file
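The UTC conversion from the note in step 2 can be computed instead of done by hand; a sketch assuming GNU `date` (the resulting hour shifts between 07 and 08 depending on daylight saving time):

```shell
# Convert "3 AM America/New_York" into the UTC hour for the cron schedule
utc_hour=$(TZ=UTC date -d 'TZ="America/New_York" 03:00' +%H)
echo "0 ${utc_hour} * * *"   # e.g. "0 07 * * *" while daylight saving time is in effect
```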
diff --git a/versioned_docs/version-v0.11.0/UserManuals/TeamConfiguration.md b/versioned_docs/version-v0.11.0/UserManuals/TeamConfiguration.md
deleted file mode 100644
index 4646ffa..0000000
--- a/versioned_docs/version-v0.11.0/UserManuals/TeamConfiguration.md
+++ /dev/null
@@ -1,129 +0,0 @@
----
-title: "Team Configuration"
-sidebar_position: 6
-description: >
-  Team Configuration
----
-## Summary
-This is a brief step-by-step guide to using the team feature.
-
-Notes: 
-1. Please replace /xxxpath/*.csv with the absolute path of the csv file you want to upload.
-2. Please replace the 127.0.0.1:8080 in the text with the actual IP and port.
-
-## Step 1 - Construct the teams table
-a. API request example; you can generate sample data.
-
-    i.  GET request: http://127.0.0.1:8080/plugins/org/teams.csv?fake_data=true (paste into the browser to download the corresponding csv file)
-
-    ii. The corresponding curl command:
-        curl --location --request GET 'http://127.0.0.1:8080/plugins/org/teams.csv?fake_data=true'
-    
-
-b. The actual API request.
-
-    i.  Create the corresponding teams file: teams.csv
-    (Notes: 1. The table field names should have initial capital letters. 2. Be careful not to change the file suffix when opening csv files through a tool.)
-
-    ii. The corresponding curl command (on macOS, copy a file path quickly with Option + Command + C):
-    curl --location --request PUT 'http://127.0.0.1:8080/plugins/org/teams.csv' --form 'file=@"/xxxpath/teams.csv"'
-
-    iii. After successful execution, the teams table is generated and the data can be seen in the database table teams.
-    (Notes: connect to the database using the host, port, username and password, with a SQL tool such as Sequel Ace or DataGrip; you can also connect from the command line with mysql -h `ip` -u `username` -p -P `port`)
-
-![image](/img/Team/teamflow3.png)
-
-
-## Step 2 - Construct user tables (roster)
-a. API request example; you can generate sample data.
-
-    i.  GET request: http://127.0.0.1:8080/plugins/org/users.csv?fake_data=true (paste into the browser to download the corresponding csv file).
-
-    ii. The corresponding curl command:
-    curl --location --request GET 'http://127.0.0.1:8080/plugins/org/users.csv?fake_data=true'
-
-
-b. The actual API request.
-
-    i.  Create the csv file (roster) (Notes: the table header is in capital letters: Id,Email,Name).
-
-    ii. The corresponding curl command:
-    curl --location --request PUT 'http://127.0.0.1:8080/plugins/org/users.csv' --form 'file=@"/xxxpath/users.csv"'
-
-    iii. After successful execution, the users table is generated and the data can be seen in the database table users.
-
-![image](/img/Team/teamflow1.png)
-    
-    iv. The team_users table is also generated; you can see the data in the team_users table.
-
-![image](/img/Team/teamflow2.png)
-
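The roster steps above can be sketched end-to-end in shell; the rows are made-up sample data, and the upload command is commented out since it needs a running DevLake instance:

```shell
# Create a minimal users.csv roster; the header must be capitalized exactly: Id,Email,Name
cat > users.csv <<'EOF'
Id,Email,Name
1,alice@example.com,Alice
2,bob@example.com,Bob
EOF

# Verify the header before uploading
head -1 users.csv   # prints: Id,Email,Name

# Upload (replace 127.0.0.1:8080 and the file path with your own):
# curl --location --request PUT 'http://127.0.0.1:8080/plugins/org/users.csv' --form 'file=@"users.csv"'
```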
-## Step 3 - Update users if you need
-If there is a problem with the team_users association or with the data in users, simply re-upload via the users API (i.e. b in Step 2 above).
-
-## Step 4 - Collect accounts
-The accounts table is populated when you run data collection through DevLake. You can see the accounts table information in the database.
-
-![image](/img/Team/teamflow4.png)
-
-## Step 5 - Automatically match existing accounts and users through api requests
-
-a. API request: the name of the plugin is "org"; connectionId is included to stay consistent with other plugins.
-
-```
-curl --location --request POST '127.0.0.1:8080/pipelines' \
---header 'Content-Type: application/json' \
---data-raw '{
-    "name": "test",
-    "plan":[
-        [
-            {
-                "plugin": "org",
-                "subtasks":["connectUserAccountsExact"],
-                "options":{
-                    "connectionId":1
-                }
-            }
-        ]
-    ]
-}'
-```
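The payload above can be validated locally before POSTing it; a sketch assuming `python3` is available:

```shell
# Write the pipeline payload to a file and check it is valid JSON before POSTing
cat > pipeline.json <<'EOF'
{
    "name": "test",
    "plan": [
        [
            {
                "plugin": "org",
                "subtasks": ["connectUserAccountsExact"],
                "options": { "connectionId": 1 }
            }
        ]
    ]
}
EOF
python3 -m json.tool pipeline.json > /dev/null && echo "payload OK"

# Then: curl --location --request POST '127.0.0.1:8080/pipelines' \
#       --header 'Content-Type: application/json' --data @pipeline.json
```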
-
-b. After successful execution, the user_accounts table is generated, and you can see the data in table user_accounts.
-
-![image](/img/Team/teamflow5.png)
-
-## Step 6 - Get the user_accounts relationship
-After generating the user_accounts relationship, you can fetch the associated data with a GET request to confirm that users and accounts match correctly and that the matched accounts are complete.
-
-a. http://127.0.0.1:8080/plugins/org/user_account_mapping.csv (put into the browser to download the file directly)
-
-b. The corresponding curl command:
-```
-curl --location --request GET 'http://127.0.0.1:8080/plugins/org/user_account_mapping.csv'
-```
-
-![image](/img/Team/teamflow6.png)
-
-c. You can also check with SQL; here is a statement for reference only.
-```
-SELECT a.id as account_id, a.email, a.user_name as account_user_name, u.id as user_id, u.name as real_name
-FROM accounts a 
-        join user_accounts ua on a.id = ua.account_id
-        join users u on ua.user_id = u.id
-```
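A local spot-check of the mapping can also be scripted; the rows below are a made-up sample (the real file comes from the GET request above), assuming the CSV carries at least Id and UserId columns as shown in Step 7:

```shell
# Made-up sample of user_account_mapping.csv, for illustration only
cat > user_account_mapping.csv <<'EOF'
Id,UserId
github:GithubAccount:1:1234,1
github:GithubAccount:1:5678,2
EOF

# Count how many accounts are mapped to user 1 (skip the header row)
awk -F, 'NR > 1 && $2 == "1"' user_account_mapping.csv | wc -l
```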
-
-## Step 7 - Update user_accounts if you need
-If the association between a user and an account is not as expected, you can change the user_account_mapping.csv file. For example, change the UserId in the line with Id=github:GithubAccount:1:1234 of user_account_mapping.csv to 2, then upload the file through the API.
-
-a. The corresponding curl command:
-```
-curl --location --request PUT 'http://127.0.0.1:8080/plugins/org/user_account_mapping.csv' --form 'file=@"/xxxpath/user_account_mapping.csv"'
-```
-
-b. You can see that the data in the user_accounts table has been updated.
-
-![image](/img/Team/teamflow7.png)
-
-
-**The above is the whole workflow of the team feature.**
diff --git a/versioned_docs/version-v0.11.0/UserManuals/TemporalSetup.md b/versioned_docs/version-v0.11.0/UserManuals/TemporalSetup.md
deleted file mode 100644
index f893a83..0000000
--- a/versioned_docs/version-v0.11.0/UserManuals/TemporalSetup.md
+++ /dev/null
@@ -1,35 +0,0 @@
----
-title: "Temporal Setup"
-sidebar_position: 5
-description: >
-  The steps to install DevLake in Temporal mode.
----
-
-
-Normally, DevLake executes pipelines on a local machine (we call it `local mode`), which is sufficient most of the time. However, when you have too many pipelines that need to be executed in parallel, it can be problematic, as the horsepower and throughput of a single machine is limited.
-
-`temporal mode` was added to support distributed pipeline execution: you can fire up arbitrary workers on multiple machines to carry out those pipelines in parallel and overcome the limitations of a single machine.
-
-But be careful: many API services like Jira/GitHub enforce request rate limits. Collecting data in parallel against the same API service with the same identity would most likely hit such a limit.
-
-## How it works
-
-1. DevLake Server and Workers connect to the same temporal server by setting up `TEMPORAL_URL`
-2. DevLake Server sends a `pipeline` to the temporal server, and one of the Workers picks it up and executes it
-
-
-**IMPORTANT: This feature is at an early stage of development. Please use with caution**
-
-
-## Temporal Demo
-
-### Requirements
-
-- [Docker](https://docs.docker.com/get-docker)
-- [docker-compose](https://docs.docker.com/compose/install/)
-- [temporalio](https://temporal.io/)
-
-### How to setup
-
-1. Clone and fire up [temporalio](https://temporal.io/) services
-2. Clone this repo, and fire up DevLake with command `docker-compose -f docker-compose-temporal.yml up -d`
\ No newline at end of file
diff --git a/versioned_docs/version-v0.11.0/UserManuals/_category_.json b/versioned_docs/version-v0.11.0/UserManuals/_category_.json
deleted file mode 100644
index b47bdfd..0000000
--- a/versioned_docs/version-v0.11.0/UserManuals/_category_.json
+++ /dev/null
@@ -1,4 +0,0 @@
-{
-  "label": "User Manuals",
-  "position": 3
-}
diff --git a/versioned_sidebars/version-v0.11.0-sidebars.json b/versioned_sidebars/version-v0.11.0-sidebars.json
deleted file mode 100644
index 39332bf..0000000
--- a/versioned_sidebars/version-v0.11.0-sidebars.json
+++ /dev/null
@@ -1,8 +0,0 @@
-{
-  "docsSidebar": [
-    {
-      "type": "autogenerated",
-      "dirName": "."
-    }
-  ]
-}
diff --git a/versions.json b/versions.json
deleted file mode 100644
index 909d780..0000000
--- a/versions.json
+++ /dev/null
@@ -1,3 +0,0 @@
-[
-  "v0.11.0"
-]


[incubator-devlake-website] 04/06: fix: fixed file names

Posted by zk...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

zky pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/incubator-devlake-website.git

commit f0008ea5e6fa44fff415bc6ba3c73696f93b4735
Author: yumengwang03 <yu...@merico.dev>
AuthorDate: Wed Jul 13 23:33:12 2022 +0800

    fix: fixed file names
---
 docs/DeveloperManuals/DeveloperSetup.md | 10 +++++-----
 docs/Plugins/gitextractor.md            |  4 ++--
 docs/QuickStart/LocalSetup.md           |  8 ++++----
 3 files changed, 11 insertions(+), 11 deletions(-)

diff --git a/docs/DeveloperManuals/DeveloperSetup.md b/docs/DeveloperManuals/DeveloperSetup.md
index 4b05c11..2a462de 100644
--- a/docs/DeveloperManuals/DeveloperSetup.md
+++ b/docs/DeveloperManuals/DeveloperSetup.md
@@ -25,7 +25,7 @@ sidebar_position: 1
 
 2. Install dependencies for plugins:
 
-   - [RefDiff](../Plugins/RefDiff.md#development)
+   - [RefDiff](../Plugins/refdiff.md#development)
 
 3. Install Go packages
 
@@ -76,10 +76,10 @@ sidebar_position: 1
     - Navigate to desired plugins pages on the Integrations page
     - Enter the required information for the plugins you intend to use.
     - Refer to the following for more details on how to configure each one:
-        - [Jira](../Plugins/Jira.md)
-        - [GitLab](../Plugins/GitLab.md)
-        - [Jenkins](../Plugins/Jenkins.md)
-        - [GitHub](../Plugins/GitHub.md): For users who'd like to collect GitHub data, we recommend reading our [GitHub data collection guide](../UserManuals/GitHubUserGuide.md) which covers the following steps in detail.
+        - [Jira](../Plugins/jira.md)
+        - [GitLab](../Plugins/gitlab.md)
+        - [Jenkins](../Plugins/jenkins.md)
+        - [GitHub](../Plugins/github.md): For users who'd like to collect GitHub data, we recommend reading our [GitHub data collection guide](../UserManuals/GitHubUserGuide.md) which covers the following steps in detail.
     - Submit the form to update the values by clicking on the **Save Connection** button on each form page
 
 9. Visit `localhost:4000/pipelines/create` to RUN a Pipeline and trigger data collection.
diff --git a/docs/Plugins/gitextractor.md b/docs/Plugins/gitextractor.md
index b40cede..ae3fecb 100644
--- a/docs/Plugins/gitextractor.md
+++ b/docs/Plugins/gitextractor.md
@@ -12,7 +12,7 @@ This plugin extracts commits and references from a remote or local git repositor
 1. Use the Git repo extractor to retrieve data about commits and branches from your repository.
 2. Use the GitHub plugin to retrieve data about Github issues and PRs from your repository.
 NOTE: you can run only one issue collection stage as described in the Github Plugin README.
-3. Use the [RefDiff](RefDiff.md) plugin to calculate version diff, which will be stored in `refs_commits_diffs` table.
+3. Use the [RefDiff](./refdiff.md) plugin to calculate version diff, which will be stored in `refs_commits_diffs` table.
 
 ## Sample Request
 
@@ -58,6 +58,6 @@ For more options (e.g., saving to a csv file instead of a db), please read `plug
 ## Development
 
 This plugin depends on `libgit2`, you need to install version 1.3.0 in order to run and debug this plugin on your local
-machine. [Click here](RefDiff.md#Development) for a brief guide.
+machine. [Click here](./refdiff.md#Development) for a brief guide.
 
 <br/><br/><br/>
diff --git a/docs/QuickStart/LocalSetup.md b/docs/QuickStart/LocalSetup.md
index 8e56a65..5ae0e0e 100644
--- a/docs/QuickStart/LocalSetup.md
+++ b/docs/QuickStart/LocalSetup.md
@@ -24,10 +24,10 @@ sidebar_position: 1
 1. Visit `config-ui` at `http://localhost:4000` in your browser to configure data connections.
    - Navigate to desired plugins on the Integrations page
    - Please reference the following for more details on how to configure each one:<br/>
-      - [Jira](../Plugins/Jira.md)
-      - [GitHub](../Plugins/GitHub.md): For users who'd like to collect GitHub data, we recommend reading our [GitHub data collection guide](../UserManuals/GitHubUserGuide.md) which covers the following steps in detail.
-      - [GitLab](../Plugins/GitLab.md)
-      - [Jenkins](../Plugins/Jenkins.md)
+      - [Jira](../Plugins/jira.md)
+      - [GitHub](../Plugins/github.md): For users who'd like to collect GitHub data, we recommend reading our [GitHub data collection guide](../UserManuals/GitHubUserGuide.md) which covers the following steps in detail.
+      - [GitLab](../Plugins/gitlab.md)
+      - [Jenkins](../Plugins/jenkins.md)
    - Submit the form to update the values by clicking on the **Save Connection** button on each form page
   - `devlake` takes a while to fully boot up. If `config-ui` complains about the API being unreachable, please wait a few seconds and try refreshing the page.
 2. Create pipelines to trigger data collection in `config-ui`


[incubator-devlake-website] 05/06: fix: fixed image path

Posted by zk...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

zky pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/incubator-devlake-website.git

commit f232eca62d1e62a7d79f5d0b4e020de2ddf38cd6
Author: yumengwang03 <yu...@merico.dev>
AuthorDate: Wed Jul 13 23:37:37 2022 +0800

    fix: fixed image path
---
 .../index.md"                                                           | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git "a/i18n/zh/docusaurus-plugin-content-blog/2022-05-20-\345\246\202\344\275\225\350\264\241\347\214\256issues/index.md" "b/i18n/zh/docusaurus-plugin-content-blog/2022-05-20-\345\246\202\344\275\225\350\264\241\347\214\256issues/index.md"
index 232e181..8c2355d 100644
--- "a/i18n/zh/docusaurus-plugin-content-blog/2022-05-20-\345\246\202\344\275\225\350\264\241\347\214\256issues/index.md"
+++ "b/i18n/zh/docusaurus-plugin-content-blog/2022-05-20-\345\246\202\344\275\225\350\264\241\347\214\256issues/index.md"
@@ -12,7 +12,7 @@
 ### 怎么做呢?这很简单!
 
 进入我们的[问题页面](https://github.com/apache/incubator-devlake/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22),然后点击这里。我们所有的Good First Issue都列在这里!
-![good first issue](../../../../img/community/screenshots/issue_page_screenshot.png)
+![good first issue](/img/Community/screenshots/issue_page_screenshot.png)
 
 - 首先,寻找现有的issues,找到一个你喜欢的。
   你可以通过评论"I'll take it!"来预订它。


[incubator-devlake-website] 01/06: docs: updated versioning and tidied up docs

Posted by zk...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

zky pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/incubator-devlake-website.git

commit bb9b82908bf313c2b00af007ec2f2708f188f5d6
Author: yumengwang03 <yu...@merico.dev>
AuthorDate: Wed Jul 13 22:45:59 2022 +0800

    docs: updated versioning and tidied up docs
---
 .../index.md"                                      |   2 +-
 community/Team/team.md                             |  30 +++---
 .../make-contribution/fix-or-create-issues.md      |   4 +-
 .../{02-DataSupport.md => DataSupport.md}          |   7 +-
 .../DataModels/DevLakeDomainLayerSchema.md         |   7 +-
 .../DeveloperManuals/DBMigration.md                |   9 +-
 docs/DeveloperManuals/Dal.md                       |   2 +-
 .../DeveloperManuals/DeveloperSetup.md             |  23 ++---
 .../{NOTIFICATION.md => Notifications.md}          |   3 +-
 .../{PluginCreate.md => PluginImplementation.md}   |   6 +-
 docs/Glossary.md                                   |  10 +-
 docs/Overview/01-WhatIsDevLake.md                  |  41 --------
 .../Overview/Architecture.md                       |   8 +-
 .../Overview/Introduction.md                       |  18 ++--
 docs/Overview/{03-Roadmap.md => Roadmap.md}        |  11 +--
 docs/Plugins/feishu.md                             |   2 -
 docs/Plugins/gitee.md                              |   2 -
 docs/Plugins/gitextractor.md                       |   6 +-
 docs/Plugins/github.md                             |   3 +-
 docs/Plugins/jenkins.md                            |   2 -
 docs/Plugins/refdiff.md                            |   2 -
 docs/Plugins/tapd.md                               |   6 +-
 .../{02-KubernetesSetup.md => KubernetesSetup.md}  |   7 +-
 .../QuickStart/{01-LocalSetup.md => LocalSetup.md} |  27 +++---
 .../UserManuals/AdvancedMode.md                    |   4 +-
 .../UserManuals/GitHubUserGuide.md                 |   6 +-
 .../UserManuals/GrafanaUserGuide.md                |   4 +-
 ...recurring-pipeline.md => RecurringPipelines.md} |   4 +-
 ...-feature-user-guide.md => TeamConfiguration.md} |  18 ++--
 .../{03-TemporalSetup.md => TemporalSetup.md}      |   0
 docusaurus.config.js                               |  10 +-
 src/components/HomepageFeatures.js                 |   6 +-
 static/img/{ => Architecture}/arch-component.svg   |   0
 static/img/{ => Architecture}/arch-dataflow.svg    |   0
 .../img/Community}/contributors/abhishek.jpeg      | Bin
 .../img/Community}/contributors/anshimin.jpeg      | Bin
 .../img/Community}/contributors/chengeyu.jpeg      | Bin
 .../img/Community}/contributors/jibin.jpeg         | Bin
 .../img/Community}/contributors/keonamini.jpeg     | Bin
 .../img/Community}/contributors/lijiageng.jpeg     | Bin
 .../img/Community}/contributors/lizhenlei.jpeg     | Bin
 .../img/Community}/contributors/nikitakoselec.jpeg | Bin
 .../img/Community}/contributors/prajwalborkar.jpeg | Bin
 .../img/Community}/contributors/songdunyu.jpeg     | Bin
 .../img/Community}/contributors/supeng.jpeg        | Bin
 .../img/Community}/contributors/tanguiping.jpeg    | Bin
 .../img/Community}/contributors/wangdanna.jpeg     | Bin
 .../img/Community}/contributors/wangxiaolei.jpeg   | Bin
 .../img/Community}/contributors/zhangxiangyu.jpeg  | Bin
 .../screenshots/issue_page_screenshot.png          | Bin
 .../img/{ => DomainLayerSchema}/schema-diagram.png | Bin
 static/img/{ => Glossary}/blueprint-erd.svg        |   0
 static/img/{ => Glossary}/pipeline-erd.svg         |   0
 static/img/{ => Homepage}/HighlyFlexible.svg       |   0
 static/img/{ => Homepage}/OutoftheboxAnalysis.svg  |   0
 static/img/{ => Homepage}/SilosConnected.svg       |   0
 static/img/{ => Introduction}/userflow1.svg        |   0
 static/img/{ => Introduction}/userflow2.svg        |   0
 static/img/{ => Introduction}/userflow3.png        | Bin
 static/img/{ => Introduction}/userflow4.png        | Bin
 static/img/{ => Plugins}/github-demo.png           | Bin
 static/img/{ => Plugins}/jenkins-demo.png          | Bin
 static/img/{ => Plugins}/jira-demo.png             | Bin
 static/img/{ => Team}/teamflow1.png                | Bin
 static/img/{ => Team}/teamflow2.png                | Bin
 static/img/{ => Team}/teamflow3.png                | Bin
 static/img/{ => Team}/teamflow4.png                | Bin
 static/img/{ => Team}/teamflow5.png                | Bin
 static/img/{ => Team}/teamflow6.png                | Bin
 static/img/{ => Team}/teamflow7.png                | Bin
 static/img/tutorial/docsVersionDropdown.png        | Bin 25102 -> 0 bytes
 static/img/tutorial/localeDropdown.png             | Bin 30020 -> 0 bytes
 versioned_docs/version-0.11/Glossary.md            | 106 ---------------------
 .../Dashboards/AverageRequirementLeadTime.md       |   0
 .../Dashboards/CommitCountByAuthor.md              |   0
 .../Dashboards/DetailedBugInfo.md                  |   0
 .../Dashboards/GitHubBasic.md                      |   0
 .../GitHubReleaseQualityAndContributionAnalysis.md |   0
 .../Dashboards/Jenkins.md                          |   0
 .../Dashboards/WeeklyBugRetro.md                   |   0
 .../Dashboards/_category_.json                     |   0
 .../DataModels/DataSupport.md}                     |   7 +-
 .../DataModels/DevLakeDomainLayerSchema.md         |   7 +-
 .../DataModels/_category_.json                     |   0
 .../DeveloperManuals/DBMigration.md                |   9 +-
 .../DeveloperManuals/Dal.md                        |   2 +-
 .../DeveloperManuals/DeveloperSetup.md             |  23 ++---
 .../DeveloperManuals/Notifications.md}             |   3 +-
 .../DeveloperManuals/PluginImplementation.md}      |   6 +-
 .../DeveloperManuals/_category_.json               |   0
 .../EngineeringMetrics.md                          |   0
 .../version-v0.11.0/Overview/Architecture.md       |  10 +-
 .../version-v0.11.0/Overview/Introduction.md       |  16 ++++
 .../Overview/Roadmap.md}                           |  11 +--
 .../Overview/_category_.json                       |   0
 .../dbt.md => version-v0.11.0/Plugins/Dbt.md}      |   0
 .../Plugins/Feishu.md}                             |   2 -
 .../Plugins/GitExtractor.md}                       |   6 +-
 .../Plugins/GitHub.md}                             |   3 +-
 .../Plugins/GitLab.md}                             |   0
 .../gitee.md => version-v0.11.0/Plugins/Gitee.md}  |   2 -
 .../Plugins/Jenkins.md}                            |   2 -
 .../jira.md => version-v0.11.0/Plugins/Jira.md}    |   0
 .../Plugins/RefDiff.md}                            |   2 -
 .../tapd.md => version-v0.11.0/Plugins/Tapd.md}    |   6 +-
 .../Plugins/_category_.json                        |   0
 .../Plugins/github-connection-in-config-ui.png     | Bin
 .../Plugins/gitlab-connection-in-config-ui.png     | Bin
 .../Plugins/jira-connection-config-ui.png          | Bin
 .../Plugins/jira-more-setting-in-config-ui.png     | Bin
 .../QuickStart/KubernetesSetup.md}                 |   7 +-
 .../QuickStart/LocalSetup.md}                      |  27 +++---
 .../QuickStart/_category_.json                     |   0
 .../version-v0.11.0/UserManuals/AdvancedMode.md    |   4 +-
 .../version-v0.11.0/UserManuals/GitHubUserGuide.md |   6 +-
 .../UserManuals/GrafanaUserGuide.md                |   4 +-
 .../UserManuals/RecurringPipelines.md}             |   4 +-
 .../UserManuals/TeamConfiguration.md               |  18 ++--
 .../UserManuals/TemporalSetup.md}                  |   0
 .../UserManuals/_category_.json                    |   0
 ...sidebars.json => version-v0.11.0-sidebars.json} |   0
 versions.json                                      |   2 +-
 122 files changed, 212 insertions(+), 373 deletions(-)

diff --git "a/blog/2022-05-20-\345\246\202\344\275\225\350\264\241\347\214\256issues/index.md" "b/blog/2022-05-20-\345\246\202\344\275\225\350\264\241\347\214\256issues/index.md"
index 9975a38..0e77e64 100644
--- "a/blog/2022-05-20-\345\246\202\344\275\225\350\264\241\347\214\256issues/index.md"
+++ "b/blog/2022-05-20-\345\246\202\344\275\225\350\264\241\347\214\256issues/index.md"
@@ -19,7 +19,7 @@ tags: [devlake, apache]
 ### 怎么做呢?这很简单!
 
 进入我们的[问题页面](https://github.com/apache/incubator-devlake/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22),然后点击这里。我们所有的Good First Issue都列在这里!
-![good first issue](../../img/community/screenshots/issue_page_screenshot.png)
+![good first issue](/img/Community/screenshots/issue_page_screenshot.png)
 
 - 首先,寻找现有的issues,找到一个你喜欢的。
   你可以通过评论"I'll take it!"来预订它。
diff --git a/community/Team/team.md b/community/Team/team.md
index b3e8069..f8cd190 100644
--- a/community/Team/team.md
+++ b/community/Team/team.md
@@ -59,19 +59,19 @@ get merged into the codebase. We deeply appreciate your contribution!
 
 ---
 
-![Zhenlei Li](../../img/community/contributors/lizhenlei.jpeg)
+![Zhenlei Li](/img/Community/contributors/lizhenlei.jpeg)
 
 #### Danna Wang
 
 ---
 
-![Zhenlei Li](../../img/community/contributors/wangdanna.jpeg)
+![Zhenlei Li](/img/Community/contributors/wangdanna.jpeg)
 
 #### Geyu Chen
 
 ---
 
-![Geyu Chen](../../img/community/contributors/chengeyu.jpeg)
+![Geyu Chen](/img/Community/contributors/chengeyu.jpeg)
 
 ### New Contributors May 2022
 
@@ -79,70 +79,70 @@ get merged into the codebase. We deeply appreciate your contribution!
 
 ---
 
-![Jiageng Li](../../img/community/contributors/lijiageng.jpeg)
+![Jiageng Li](/img/Community/contributors/lijiageng.jpeg)
 
 #### Xiangyu Zhang
 
 ---
 
-![Xiangyu Zhang](../../img/community/contributors/zhangxiangyu.jpeg)
+![Xiangyu Zhang](/img/Community/contributors/zhangxiangyu.jpeg)
 
 #### Xiaolei Wang
 
 ---
 
-![Xiaolei Wang](../../img/community/contributors/wangxiaolei.jpeg)
+![Xiaolei Wang](/img/Community/contributors/wangxiaolei.jpeg)
 
 #### Peng Su
 
 ---
 
-![Peng Su](../../img/community/contributors/supeng.jpeg)
+![Peng Su](/img/Community/contributors/supeng.jpeg)
 
 #### Dunyu Song
 
 ---
 
-![Dunyu Song](../../img/community/contributors/songdunyu.jpeg)
+![Dunyu Song](/img/Community/contributors/songdunyu.jpeg)
 
 #### Nikita Koselev
 
 ---
 
-![Nikita Koselev](../../img/community/contributors/nikitakoselec.jpeg)
+![Nikita Koselev](/img/Community/contributors/nikitakoselec.jpeg)
 
 #### Shimin An
 
 ---
 
-![Shimin An](../../img/community/contributors/anshimin.jpeg)
+![Shimin An](/img/Community/contributors/anshimin.jpeg)
 
 #### Abhishek KM
 
 ---
 
-![Abhishek KM](../../img/community/contributors/abhishek.jpeg)
+![Abhishek KM](/img/Community/contributors/abhishek.jpeg)
 
 #### Guiping Tan
 
 ---
 
-![Guiping Tan](../../img/community/contributors/tanguiping.jpeg)
+![Guiping Tan](/img/Community/contributors/tanguiping.jpeg)
 
 #### Bin Ji
 
 ---
 
-![jibin](../../img/community/contributors/jibin.jpeg)
+![jibin](/img/Community/contributors/jibin.jpeg)
 
 #### Prajwal Borkar
 
 ---
 
-![Prajwal Borkar](../../img/community/contributors/prajwalborkar.jpeg)
+![Prajwal Borkar](/img/Community/contributors/prajwalborkar.jpeg)
 
 #### Keon Amini
 
 ---
 
-![Keon Amini](../../img/community/contributors/keonamini.jpeg)
+![Keon Amini](/img/Community/contributors/keonamini.jpeg)
diff --git a/community/make-contribution/fix-or-create-issues.md b/community/make-contribution/fix-or-create-issues.md
index 8cfce72..2ff9398 100644
--- a/community/make-contribution/fix-or-create-issues.md
+++ b/community/make-contribution/fix-or-create-issues.md
@@ -2,8 +2,6 @@
 sidebar_position: 02
 title: "Contributing to Issues"
 ---
-# Contributing to Issues
-> @Klesh
 
 Last week (2022-05-12), we had 2 designated Good First Issues listed out for everyone
 in a First Come, First Served manner, which was fun, and they were taken almost instantly...
@@ -14,7 +12,7 @@ you like from our github issue pages, or even create your own one if no more lef
 We are a community after all!
 
 Now, how do we proceed? It's simple! Go to our [issues page](https://github.com/apache/incubator-devlake/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22), and then click here. All our Good First Issues are listed out here.
-![good first issue](../../img/community/screenshots/issue_page_screenshot.png)
+![good first issue](/img/Community/screenshots/issue_page_screenshot.png)
 
 - Firstly, go for existing issues if any, and find one that you like; 
 you can claim it by sending a comment like "I'll take it", 
diff --git a/docs/DataModels/02-DataSupport.md b/docs/DataModels/DataSupport.md
similarity index 98%
rename from docs/DataModels/02-DataSupport.md
rename to docs/DataModels/DataSupport.md
index 7067da1..4cb4b61 100644
--- a/docs/DataModels/02-DataSupport.md
+++ b/docs/DataModels/DataSupport.md
@@ -1,11 +1,8 @@
 ---
 title: "Data Support"
-linkTitle: "Data Support"
-tags: []
-categories: []
-weight: 2
 description: >
   Data sources that DevLake supports
+sidebar_position: 1
 ---
 
 
@@ -26,7 +23,7 @@ DevLake supports the following data sources. The data from each data source is c
 
 
 ## Data Collection Scope By Each Plugin
-This table shows the entities collected by each plugin. Domain layer entities in this table are consistent with the entities [here](./01-DevLakeDomainLayerSchema.md).
+This table shows the entities collected by each plugin. Domain layer entities in this table are consistent with the entities [here](./DevLakeDomainLayerSchema.md).
 
 | Domain Layer Entities | ae             | gitextractor | github         | gitlab  | jenkins | jira    | refdiff | tapd    |
 | --------------------- | -------------- | ------------ | -------------- | ------- | ------- | ------- | ------- | ------- |
diff --git a/versioned_docs/version-0.11/DataModels/01-DevLakeDomainLayerSchema.md b/docs/DataModels/DevLakeDomainLayerSchema.md
similarity index 99%
rename from versioned_docs/version-0.11/DataModels/01-DevLakeDomainLayerSchema.md
rename to docs/DataModels/DevLakeDomainLayerSchema.md
index 2ffa512..996d397 100644
--- a/versioned_docs/version-0.11/DataModels/01-DevLakeDomainLayerSchema.md
+++ b/docs/DataModels/DevLakeDomainLayerSchema.md
@@ -1,11 +1,8 @@
 ---
 title: "Domain Layer Schema"
-linkTitle: "Domain Layer Schema"
-tags: []
-categories: []
-weight: 50000
 description: >
   DevLake Domain Layer Schema
+sidebar_position: 2
 ---
 
 ## Summary
@@ -33,7 +30,7 @@ This is the up-to-date domain layer schema for DevLake v0.10.x. Tables (entities
 
 
 ### Schema Diagram
-![Domain Layer Schema](/img/schema-diagram.png)
+![Domain Layer Schema](/img/DomainLayerSchema/schema-diagram.png)
 
 When reading the schema, you'll notice that many tables' primary key is called `id`. Unlike auto-increment id or UUID, `id` is a string composed of several parts to uniquely identify similar entities (e.g. repo) from different platforms (e.g. Github/Gitlab) and allow them to co-exist in a single table.
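The composite-`id` convention described above can be sketched in Go. The segment layout below (platform, entity type, platform-scoped identifiers joined by `:`) is illustrative and assumed, not DevLake's canonical format:

```go
package main

import (
	"fmt"
	"strings"
)

// domainID joins a platform name, an entity type, and the
// platform-scoped identifiers into a single string key, so that
// similar entities from different platforms can share one table.
// The exact segment layout here is hypothetical.
func domainID(platform, entity string, parts ...string) string {
	return platform + ":" + entity + ":" + strings.Join(parts, ":")
}

func main() {
	// Two repos with the same numeric id on different platforms
	// still get distinct primary keys.
	fmt.Println(domainID("github", "GithubRepo", "1", "134018330"))
	fmt.Println(domainID("gitlab", "GitlabProject", "1", "134018330"))
}
```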
 
diff --git a/versioned_docs/version-0.11/DeveloperManuals/MIGRATIONS.md b/docs/DeveloperManuals/DBMigration.md
similarity index 94%
rename from versioned_docs/version-0.11/DeveloperManuals/MIGRATIONS.md
rename to docs/DeveloperManuals/DBMigration.md
index edab4ca..9530237 100644
--- a/versioned_docs/version-0.11/DeveloperManuals/MIGRATIONS.md
+++ b/docs/DeveloperManuals/DBMigration.md
@@ -2,17 +2,16 @@
 title: "DB Migration"
 description: >
   DB Migration
+sidebar_position: 3
 ---
 
-# Migrations (Database)
-
 ## Summary
 Starting in v0.10.0, DevLake provides a lightweight migration tool for executing migration scripts.
 Both the framework itself and plugins define their migration scripts in their own migration folders.
 The migration scripts are written with gorm in Golang to support different SQL dialects.
 
 
-## Migration script
+## Migration Script
 Migration scripts describe how to do database migrations.
 They implement the `Script` interface.
 When DevLake starts, scripts register themselves to the framework by invoking the `Register` function
@@ -29,7 +28,9 @@ type Script interface {
 
 The table tracks migration script execution and schema changes,
 from which DevLake can figure out the current state of the database schema.
-## How it Works
+
+
+## How It Works
 1. Check the `migration_history` table and calculate which migration scripts need to be executed.
 2. Sort scripts by Version in ascending order.
 3. Execute scripts.
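The hunk above truncates the `Script` interface, so here is a self-contained Go sketch of the three steps — skip already-executed versions, sort ascending, execute. The method set (`Up`, `Version`, `Name`) and the script below are simplified assumptions for illustration, not copied from DevLake's source:

```go
package main

import (
	"fmt"
	"sort"
)

// Script is a simplified stand-in for DevLake's migration script
// interface; the real method set may differ.
type Script interface {
	Up() error       // apply the schema change
	Version() uint64 // e.g. 20220601000001, used for ordering
	Name() string
}

// addIndexScript is a hypothetical migration script.
type addIndexScript struct{}

func (s addIndexScript) Up() error       { fmt.Println("applying", s.Name()); return nil }
func (s addIndexScript) Version() uint64 { return 20220601000001 }
func (s addIndexScript) Name() string    { return "add commit index" }

// runMigrations mirrors the steps above: drop versions already
// recorded (as in migration_history), sort the rest ascending by
// Version, then execute each script and record it.
func runMigrations(scripts []Script, executed map[uint64]bool) error {
	var pending []Script
	for _, s := range scripts {
		if !executed[s.Version()] {
			pending = append(pending, s)
		}
	}
	sort.Slice(pending, func(i, j int) bool {
		return pending[i].Version() < pending[j].Version()
	})
	for _, s := range pending {
		if err := s.Up(); err != nil {
			return err
		}
		executed[s.Version()] = true // record in migration_history
	}
	return nil
}

func main() {
	executed := map[uint64]bool{}
	_ = runMigrations([]Script{addIndexScript{}}, executed)
}
```

Running `runMigrations` a second time with the same `executed` map is a no-op, which is what makes migrations safe to re-run on startup.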
diff --git a/docs/DeveloperManuals/Dal.md b/docs/DeveloperManuals/Dal.md
index da27a55..9b08542 100644
--- a/docs/DeveloperManuals/Dal.md
+++ b/docs/DeveloperManuals/Dal.md
@@ -1,6 +1,6 @@
 ---
 title: "Dal"
-sidebar_position: 4
+sidebar_position: 5
 description: >
   The Dal (Data Access Layer) is designed to decouple the hard dependency on `gorm` in v0.12
 ---
diff --git a/versioned_docs/version-0.11/DeveloperManuals/04-DeveloperSetup.md b/docs/DeveloperManuals/DeveloperSetup.md
similarity index 87%
rename from versioned_docs/version-0.11/DeveloperManuals/04-DeveloperSetup.md
rename to docs/DeveloperManuals/DeveloperSetup.md
index cb27440..4b05c11 100644
--- a/versioned_docs/version-0.11/DeveloperManuals/04-DeveloperSetup.md
+++ b/docs/DeveloperManuals/DeveloperSetup.md
@@ -2,10 +2,11 @@
 title: "Developer Setup"
 description: >
  The steps to install DevLake in developer mode.
+sidebar_position: 1
 ---
 
 
-#### Requirements
+## Requirements
 
 - <a href="https://docs.docker.com/get-docker" target="_blank">Docker v19.03.10+</a>
 - <a href="https://golang.org/doc/install" target="_blank">Golang v1.17+</a>
@@ -14,7 +15,7 @@ description: >
   - Windows: [Download](http://gnuwin32.sourceforge.net/packages/make.htm)
   - Ubuntu: `sudo apt-get install build-essential libssl-dev`
 
-#### How to setup dev environment
+## How to set up the dev environment
 1. Navigate to where you would like to install this project and clone the repository:
 
    ```sh
@@ -24,7 +25,7 @@ description: >
 
 2. Install dependencies for plugins:
 
-   - [RefDiff](../Plugins/refdiff.md#development)
+   - [RefDiff](../Plugins/RefDiff.md#development)
 
 3. Install Go packages
 
@@ -75,10 +76,10 @@ description: >
     - Navigate to desired plugins pages on the Integrations page
     - Enter the required information for the plugins you intend to use.
     - Refer to the following for more details on how to configure each one:
-        - [Jira](../Plugins/jira.md)
-        - [GitLab](../Plugins/gitlab.md)
-        - [Jenkins](../Plugins/jenkins.md)
-        - [GitHub](../Plugins/github.md): For users who'd like to collect GitHub data, we recommend reading our [GitHub data collection guide](../UserManuals/github-user-guide-v0.10.0.md) which covers the following steps in detail.
+        - [Jira](../Plugins/Jira.md)
+        - [GitLab](../Plugins/GitLab.md)
+        - [Jenkins](../Plugins/Jenkins.md)
+        - [GitHub](../Plugins/GitHub.md): For users who'd like to collect GitHub data, we recommend reading our [GitHub data collection guide](../UserManuals/GitHubUserGuide.md) which covers the following steps in detail.
     - Submit the form to update the values by clicking on the **Save Connection** button on each form page
 
 9. Visit `localhost:4000/pipelines/create` to RUN a Pipeline and trigger data collection.
@@ -111,14 +112,14 @@ description: >
     ]
     ```
 
-   Please refer to [Pipeline Advanced Mode](../UserManuals/create-pipeline-in-advanced-mode.md) for in-depth explanation.
+   Please refer to [Pipeline Advanced Mode](../UserManuals/AdvancedMode.md) for an in-depth explanation.
 
 
 10. Click *View Dashboards* button in the top left when done, or visit `localhost:3002` (username: `admin`, password: `admin`).
 
    We use <a href="https://grafana.com/" target="_blank">Grafana</a> as a visualization tool to build charts for the <a href="https://github.com/merico-dev/lake/wiki/DataModel.Domain-layer-schema">data stored in our database</a>. Using SQL queries, we can add panels to build, save, and edit customized dashboards.
 
-   All the details on provisioning and customizing a dashboard can be found in the [Grafana Doc](../UserManuals/GRAFANA.md).
+   All the details on provisioning and customizing a dashboard can be found in the [Grafana Doc](../UserManuals/GrafanaUserGuide.md).
 
 11. (Optional) To run the tests:
 
@@ -126,5 +127,5 @@ description: >
     make test
     ```
 
-12. For DB migrations, please refer to [Migration Doc](../DeveloperManuals/MIGRATIONS.md).
-<br/><br/><br/>
+12. For DB migrations, please refer to [Migration Doc](../DeveloperManuals/DBMigration.md).
+
diff --git a/docs/DeveloperManuals/NOTIFICATION.md b/docs/DeveloperManuals/Notifications.md
similarity index 97%
rename from docs/DeveloperManuals/NOTIFICATION.md
rename to docs/DeveloperManuals/Notifications.md
index d5ebd2b..23456b4 100644
--- a/docs/DeveloperManuals/NOTIFICATION.md
+++ b/docs/DeveloperManuals/Notifications.md
@@ -2,10 +2,9 @@
 title: "Notifications"
 description: >
   Notifications
+sidebar_position: 4
 ---
 
-# Notification
-
 ## Request
 Example request
 ```
diff --git a/docs/DeveloperManuals/PluginCreate.md b/docs/DeveloperManuals/PluginImplementation.md
similarity index 99%
rename from docs/DeveloperManuals/PluginCreate.md
rename to docs/DeveloperManuals/PluginImplementation.md
index 3f2a4ce..e3457c9 100644
--- a/docs/DeveloperManuals/PluginCreate.md
+++ b/docs/DeveloperManuals/PluginImplementation.md
@@ -1,8 +1,8 @@
 ---
-title: "How to Implement a DevLake plugin?"
-sidebar_position: 1
+title: "Plugin Implementation"
+sidebar_position: 2
 description: >
-  How to Implement a DevLake plugin.
+  Plugin Implementation
 ---
 
 ## How to Implement a DevLake plugin?
diff --git a/docs/Glossary.md b/docs/Glossary.md
index 4ca3117..9ed93e3 100644
--- a/docs/Glossary.md
+++ b/docs/Glossary.md
@@ -25,7 +25,7 @@ The following terms are arranged in the order of their appearance in the actual
 
 The relationship among Blueprint, Data Connections, Data Scope and Transformation Rules is explained as follows:
 
-![Blueprint ERD](/img/blueprint-erd.svg)
+![Blueprint ERD](/img/Glossary/blueprint-erd.svg)
 - Each blueprint can have multiple data connections.
 - Each data connection can have multiple sets of data scope.
 - Each set of data scope only consists of one GitHub/GitLab project or Jira board, along with their corresponding data entities.
@@ -46,14 +46,14 @@ You can set up a new data connection either during the first step of creating a
 
 Each set of data scope refers to one GitHub or GitLab project, or one Jira board and the data entities you would like to sync for them, for the convenience of applying transformation in the next step. For instance, if you wish to sync 5 GitHub projects, you will have 5 sets of data scope for GitHub.
 
-To learn more about the default data scope of all data sources and data plugins, please refer to [Data Support](./DataModels/02-DataSupport.md).
+To learn more about the default data scope of all data sources and data plugins, please refer to [Data Support](./DataModels/DataSupport.md).
 
 ### Data Entities
 **Data entities refer to the data fields from one of the five data domains: Issue Tracking, Source Code Management, Code Review, CI/CD and Cross-Domain.**
 
 For instance, if you wish to pull Source Code Management data from GitHub and Issue Tracking data from Jira, you can check the corresponding data entities during setting the data scope of these two data connections.
 
-To learn more details, please refer to [Domain Layer Schema](./DataModels/01-DevLakeDomainLayerSchema.md).
+To learn more details, please refer to [Domain Layer Schema](./DataModels/DevLakeDomainLayerSchema.md).
 
 ### Transformation Rules
 **Transformation rules are a collection of methods that allow you to customize how DevLake normalizes raw data for query and metric computation.** Each set of data scope is strictly accompanied with one set of transformation rules. However, for your convenience, transformation rules can also be duplicated across different sets of data scope.
@@ -81,14 +81,14 @@ Data Transformation Plugins transform the data pulled by other Data Collection P
 
 Although the names of the data plugins are not displayed in the regular mode of DevLake Configuration UI, they can be used directly in JSON in the Advanced Mode.
 
-For detailed information about the relationship between data sources and data plugins, please refer to [Data Support](./DataModels/02-DataSupport.md).
+For detailed information about the relationship between data sources and data plugins, please refer to [Data Support](./DataModels/DataSupport.md).
 
 
 ### Pipelines
 **A pipeline is an orchestration of [tasks](Glossary.md#tasks) of data `collection`, `extraction`, `conversion` and `enrichment`, defined in the DevLake API.** A pipeline is composed of one or multiple [stages](Glossary.md#stages) that are executed in sequential order. Any error occurring during the execution of any stage, task or subtask will cause the immediate failure of the pipeline.
 
 The composition of a pipeline is explained as follows:
-![Blueprint ERD](/img/pipeline-erd.svg)
+![Blueprint ERD](/img/Glossary/pipeline-erd.svg)
 Notice: **You can manually orchestrate the pipeline in Configuration UI Advanced Mode and the DevLake API; whereas in Configuration UI regular mode, an optimized pipeline orchestration will be automatically generated for you.**
 
 
diff --git a/docs/Overview/01-WhatIsDevLake.md b/docs/Overview/01-WhatIsDevLake.md
deleted file mode 100755
index 75c64a1..0000000
--- a/docs/Overview/01-WhatIsDevLake.md
+++ /dev/null
@@ -1,41 +0,0 @@
----
-title: "Introduction"
-description: General introduction of Apache DevLake
-sidebar_position: 01
----
-
-## What is Apache DevLake?
-Apache DevLake is an open-source dev data platform that ingests, analyzes, and visualizes the fragmented data from DevOps tools to distill insights for engineering productivity.
-
-Apache DevLake is designed for developer teams looking to make better sense of their development process and to bring a more data-driven approach to their own practices. You can ask Apache DevLake many questions regarding your development process. Just connect and query.
-
-## What can be accomplished with DevLake?
-1. Collect DevOps data across the entire Software Development Life Cycle (SDLC) and connect the siloed data with a standard [data model](../DataModels/01-DevLakeDomainLayerSchema.md).
-2. Visualize out-of-the-box engineering [metrics](../EngineeringMetrics.md) in a series of use-case driven dashboards
-3. Easily extend DevLake to support your data sources, metrics, and dashboards with a flexible [framework](02-Architecture.md) for data collection and ETL.
-
-## How do I use DevLake?
-### 1. Set up DevLake
-You can easily set up Apache DevLake by following our step-by step instruction for [local setup](../QuickStart/01-LocalSetup.md) or [Kubernetes setup](../QuickStart/02-KubernetesSetup.md).
-
-### 2. Create a Blueprint
-The DevLake Configuration UI will guide you through the process (a Blueprint) to define the data connections, data scope, transformation and sync frequency of the data you wish to collect.
-
-![img](/img/userflow1.svg)
-
-### 3. Track the Blueprint's progress
-You can track the progress of the Blueprint you have just set up.
-
-![img](/img/userflow2.svg)
-
-### 4. View the pre-built dashboards
-Once the first run of the Blueprint is completed, you can view the corresponding dashboards.
-
-![img](/img/userflow3.png)
-
-### 5. Customize the dahsboards with SQL
-If the pre-built dashboards are limited for your use cases, you can always customize or create your own metrics or dashboards with SQL.
-
-![img](/img/userflow4.png)
-
-
diff --git a/versioned_docs/version-0.11/Overview/02-Architecture.md b/docs/Overview/Architecture.md
similarity index 93%
rename from versioned_docs/version-0.11/Overview/02-Architecture.md
rename to docs/Overview/Architecture.md
index 8daa859..d4c6a9c 100755
--- a/versioned_docs/version-0.11/Overview/02-Architecture.md
+++ b/docs/Overview/Architecture.md
@@ -1,13 +1,13 @@
 ---
 title: "Architecture"
-linkTitle: "Architecture"
 description: >
-  Understand the architecture of Apache DevLake.
+  Understand the architecture of Apache DevLake
+sidebar_position: 2
 ---
 
 ## Architecture Overview
 
-<p align="center"><img src="/img/arch-component.svg" /></p>
+<p align="center"><img src="/img/Architecture/arch-component.svg" /></p>
 <p align="center">DevLake Components</p>
 
 A DevLake installation typically consists of the following components:
@@ -21,7 +21,7 @@ A DevLake installation typically consists of the following components:
 
 ## Dataflow
 
-<p align="center"><img src="/img/arch-dataflow.svg" /></p>
+<p align="center"><img src="/img/Architecture/arch-dataflow.svg" /></p>
 <p align="center">DevLake Dataflow</p>
 
 A typical plugin's dataflow is illustrated below:
diff --git a/versioned_docs/version-0.11/Overview/01-WhatIsDevLake.md b/docs/Overview/Introduction.md
similarity index 79%
rename from versioned_docs/version-0.11/Overview/01-WhatIsDevLake.md
rename to docs/Overview/Introduction.md
index 75c64a1..9219984 100755
--- a/versioned_docs/version-0.11/Overview/01-WhatIsDevLake.md
+++ b/docs/Overview/Introduction.md
@@ -1,7 +1,7 @@
 ---
 title: "Introduction"
 description: General introduction of Apache DevLake
-sidebar_position: 01
+sidebar_position: 1
 ---
 
 ## What is Apache DevLake?
@@ -10,32 +10,30 @@ Apache DevLake is an open-source dev data platform that ingests, analyzes, and v
 Apache DevLake is designed for developer teams looking to make better sense of their development process and to bring a more data-driven approach to their own practices. You can ask Apache DevLake many questions regarding your development process. Just connect and query.
 
 ## What can be accomplished with DevLake?
-1. Collect DevOps data across the entire Software Development Life Cycle (SDLC) and connect the siloed data with a standard [data model](../DataModels/01-DevLakeDomainLayerSchema.md).
+1. Collect DevOps data across the entire Software Development Life Cycle (SDLC) and connect the siloed data with a standard [data model](../DataModels/DevLakeDomainLayerSchema.md).
 2. Visualize out-of-the-box engineering [metrics](../EngineeringMetrics.md) in a series of use-case driven dashboards
-3. Easily extend DevLake to support your data sources, metrics, and dashboards with a flexible [framework](02-Architecture.md) for data collection and ETL.
+3. Easily extend DevLake to support your data sources, metrics, and dashboards with a flexible [framework](Architecture.md) for data collection and ETL.
 
 ## How do I use DevLake?
 ### 1. Set up DevLake
-You can easily set up Apache DevLake by following our step-by step instruction for [local setup](../QuickStart/01-LocalSetup.md) or [Kubernetes setup](../QuickStart/02-KubernetesSetup.md).
+You can easily set up Apache DevLake by following our step-by-step instructions for [local setup](../QuickStart/LocalSetup.md) or [Kubernetes setup](../QuickStart/KubernetesSetup.md).
 
 ### 2. Create a Blueprint
 The DevLake Configuration UI will guide you through the process (a Blueprint) to define the data connections, data scope, transformation and sync frequency of the data you wish to collect.
 
-![img](/img/userflow1.svg)
+![img](/img/Introduction/userflow1.svg)
 
 ### 3. Track the Blueprint's progress
 You can track the progress of the Blueprint you have just set up.
 
-![img](/img/userflow2.svg)
+![img](/img/Introduction/userflow2.svg)
 
 ### 4. View the pre-built dashboards
 Once the first run of the Blueprint is completed, you can view the corresponding dashboards.
 
-![img](/img/userflow3.png)
+![img](/img/Introduction/userflow3.png)
 
 ### 5. Customize the dashboards with SQL
 If the pre-built dashboards are limited for your use cases, you can always customize or create your own metrics or dashboards with SQL.
 
-![img](/img/userflow4.png)
-
-
+![img](/img/Introduction/userflow4.png)
\ No newline at end of file
diff --git a/docs/Overview/03-Roadmap.md b/docs/Overview/Roadmap.md
similarity index 53%
rename from docs/Overview/03-Roadmap.md
rename to docs/Overview/Roadmap.md
index f10b62e..9dcf0b3 100644
--- a/docs/Overview/03-Roadmap.md
+++ b/docs/Overview/Roadmap.md
@@ -1,11 +1,8 @@
 ---
 title: "Roadmap"
-linkTitle: "Roadmap"
-tags: []
-categories: []
-weight: 3
 description: >
-  The goals and roadmap for DevLake in 2022.
+  The goals and roadmap for DevLake in 2022
+sidebar_position: 3
 ---
 
 
@@ -24,8 +21,8 @@ Apache DevLake is currently under rapid development. You are more than welcome t
 
 | Category | Features|
 | --- | --- |
-| More data sources across different [DevOps domains](../DataModels/01-DevLakeDomainLayerSchema.md) (Goal No.1 & 2)| Features in **bold** are of higher priority <br/><br/> Issue/Task Management: <ul><li>**Jira server** [#886 (closed)](https://github.com/apache/incubator-devlake/issues/886)</li><li>**Jira data center** [#1687 (closed)](https://github.com/apache/incubator-devlake/issues/1687)</li><li>GitLab Issues [#715 (closed)](https://github.com/apache/incubator-devlake/issues/715)</li> [...]
-| Improved data collection, [data models](../DataModels/01-DevLakeDomainLayerSchema.md) and data extensibility (Goal No.2)| Data Collection: <br/> <ul><li>Complete the logging system</li><li>Implement a good error handling mechanism during data collection</li></ul> Data Models:<ul><li>Introduce DBT to allow users to create and modify the domain layer schema. [#1479 (closed)](https://github.com/apache/incubator-devlake/issues/1479)</li><li>Design the data models for 5 new domains, please  [...]
+| More data sources across different [DevOps domains](../DataModels/DevLakeDomainLayerSchema.md) (Goal No.1 & 2)| Features in **bold** are of higher priority <br/><br/> Issue/Task Management: <ul><li>**Jira server** [#886 (closed)](https://github.com/apache/incubator-devlake/issues/886)</li><li>**Jira data center** [#1687 (closed)](https://github.com/apache/incubator-devlake/issues/1687)</li><li>GitLab Issues [#715 (closed)](https://github.com/apache/incubator-devlake/issues/715)</li><li [...]
+| Improved data collection, [data models](../DataModels/DevLakeDomainLayerSchema.md) and data extensibility (Goal No.2)| Data Collection: <br/> <ul><li>Complete the logging system</li><li>Implement a good error handling mechanism during data collection</li></ul> Data Models:<ul><li>Introduce DBT to allow users to create and modify the domain layer schema. [#1479 (closed)](https://github.com/apache/incubator-devlake/issues/1479)</li><li>Design the data models for 5 new domains, please ref [...]
 | Better user experience (Goal No.3) | For new users: <ul><li> Iterate on a clearer step-by-step guide to improve the pre-configuration experience.</li><li>Provide a new Config UI to reduce frictions for data configuration [#1700 (in-progress)](https://github.com/apache/incubator-devlake/issues/1700)</li><li> Showcase dashboard live demos to let users explore and learn about the dashboards. [#1784 (open)](https://github.com/apache/incubator-devlake/issues/1784)</li></ul>For returning use [...]
 
 
diff --git a/docs/Plugins/feishu.md b/docs/Plugins/feishu.md
index f19e4b0..c3e0eb6 100644
--- a/docs/Plugins/feishu.md
+++ b/docs/Plugins/feishu.md
@@ -4,8 +4,6 @@ description: >
   Feishu Plugin
 ---
 
-# Feishu
-
 ## Summary
 
 This plugin collects Feishu meeting data through [Feishu Openapi](https://open.feishu.cn/document/home/user-identity-introduction/introduction).
diff --git a/docs/Plugins/gitee.md b/docs/Plugins/gitee.md
index 0c4307a..6066fd2 100644
--- a/docs/Plugins/gitee.md
+++ b/docs/Plugins/gitee.md
@@ -4,8 +4,6 @@ description: >
   Gitee Plugin
 ---
 
-# Gitee
-
 ## Summary
 
 ## Configuration
diff --git a/docs/Plugins/gitextractor.md b/docs/Plugins/gitextractor.md
index ac97fa3..d154e9e 100644
--- a/docs/Plugins/gitextractor.md
+++ b/docs/Plugins/gitextractor.md
@@ -4,8 +4,6 @@ description: >
   GitExtractor Plugin
 ---
 
-# Git Repo Extractor
-
 ## Summary
 This plugin extracts commits and references from a remote or local git repository. It then saves the data into the database or csv files.
 
@@ -14,7 +12,7 @@ This plugin extracts commits and references from a remote or local git repositor
 1. Use the Git repo extractor to retrieve data about commits and branches from your repository.
 2. Use the GitHub plugin to retrieve data about Github issues and PRs from your repository.
 NOTE: you can run only one issue collection stage as described in the Github Plugin README.
-3. Use the [RefDiff](./refdiff.md#development) plugin to calculate version diff, which will be stored in `refs_commits_diffs` table.
+3. Use the [RefDiff](./RefDiff.md#development) plugin to calculate version diff, which will be stored in `refs_commits_diffs` table.
 
 ## Sample Request
 
@@ -60,6 +58,6 @@ For more options (e.g., saving to a csv file instead of a db), please read `plug
 ## Development
 
 This plugin depends on `libgit2`; you need to install version 1.3.0 in order to run and debug this plugin on your local
-machine. [Click here](./refdiff.md#development) for a brief guide.
+machine. [Click here](./RefDiff.md#development) for a brief guide.
 
 <br/><br/><br/>
diff --git a/docs/Plugins/github.md b/docs/Plugins/github.md
index 463f9de..cca87b7 100644
--- a/docs/Plugins/github.md
+++ b/docs/Plugins/github.md
@@ -4,7 +4,6 @@ description: >
   GitHub Plugin
 ---
 
-# Github
 
 
 ## Summary
@@ -24,7 +23,7 @@ Here are some examples metrics using `GitHub` data:
 
 ## Screenshot
 
-![image](/img/github-demo.png)
+![image](/img/Plugins/github-demo.png)
 
 
 ## Configuration
diff --git a/docs/Plugins/jenkins.md b/docs/Plugins/jenkins.md
index 26e72a6..792165d 100644
--- a/docs/Plugins/jenkins.md
+++ b/docs/Plugins/jenkins.md
@@ -4,8 +4,6 @@ description: >
   Jenkins Plugin
 ---
 
-# Jenkins
-
 ## Summary
 
 This plugin collects Jenkins data through [Remote Access API](https://www.jenkins.io/doc/book/using/remote-access-api/). It then computes and visualizes various DevOps metrics from the Jenkins data.
diff --git a/docs/Plugins/refdiff.md b/docs/Plugins/refdiff.md
index 35d3049..12950f4 100644
--- a/docs/Plugins/refdiff.md
+++ b/docs/Plugins/refdiff.md
@@ -4,8 +4,6 @@ description: >
   RefDiff Plugin
 ---
 
-# RefDiff
-
 
 ## Summary
 
diff --git a/docs/Plugins/tapd.md b/docs/Plugins/tapd.md
index fc93539..b8db89f 100644
--- a/docs/Plugins/tapd.md
+++ b/docs/Plugins/tapd.md
@@ -1,4 +1,8 @@
-# TAPD
+---
+title: "TAPD"
+description: >
+  TAPD Plugin
+---
 
 ## Summary
 
diff --git a/docs/QuickStart/02-KubernetesSetup.md b/docs/QuickStart/KubernetesSetup.md
similarity index 94%
rename from docs/QuickStart/02-KubernetesSetup.md
rename to docs/QuickStart/KubernetesSetup.md
index 19bdc4d..e4faeba 100644
--- a/docs/QuickStart/02-KubernetesSetup.md
+++ b/docs/QuickStart/KubernetesSetup.md
@@ -1,7 +1,8 @@
 ---
-title: "Deploy to Kubernetes"
+title: "Kubernetes Setup"
 description: >
-  The steps to install Apache DevLake in Kubernetes.
+  The steps to install Apache DevLake in Kubernetes
+sidebar_position: 2
 ---
 
 
@@ -9,7 +10,7 @@ We provide a sample [k8s-deploy.yaml](https://github.com/apache/incubator-devlak
 
 [k8s-deploy.yaml](https://github.com/apache/incubator-devlake/blob/main/k8s-deploy.yaml) will create a namespace `devlake` on your k8s cluster, and use `nodePort 30004` for `config-ui` and `nodePort 30002` for `grafana` dashboards. If you would like to use a certain version of Apache DevLake, please update the image tag of the `grafana`, `devlake` and `config-ui` services to specify versions like `v0.10.1`.
 
-Here's the step-by-step guide:
+## Step-by-step guide
 
 1. Download [k8s-deploy.yaml](https://github.com/apache/incubator-devlake/blob/main/k8s-deploy.yaml) to local machine
 2. Some key points:
diff --git a/docs/QuickStart/01-LocalSetup.md b/docs/QuickStart/LocalSetup.md
similarity index 72%
rename from docs/QuickStart/01-LocalSetup.md
rename to docs/QuickStart/LocalSetup.md
index 9b81bc9..8e56a65 100644
--- a/docs/QuickStart/01-LocalSetup.md
+++ b/docs/QuickStart/LocalSetup.md
@@ -1,16 +1,17 @@
 ---
-title: "Deploy Locally"
+title: "Local Setup"
 description: >
-  The steps to install DevLake locally.
+  The steps to install DevLake locally
+sidebar_position: 1
 ---
 
 
-#### Prerequisites
+## Prerequisites
 
 - [Docker v19.03.10+](https://docs.docker.com/get-docker)
 - [docker-compose v2.2.3+](https://docs.docker.com/compose/install/)
 
-#### Launch DevLake
+## Launch DevLake
 
 - Commands written `like this` are to be run in your terminal.
 
@@ -18,25 +19,25 @@ description: >
 2. Rename `env.example` to `.env`. For Mac/Linux users, please run `mv env.example .env` in the terminal.
 3. Run `docker-compose up -d` to launch DevLake.
 
-#### Configure data connections and collect data
+## Configure data connections and collect data
 
 1. Visit `config-ui` at `http://localhost:4000` in your browser to configure data connections.
    - Navigate to desired plugins on the Integrations page
    - Please reference the following for more details on how to configure each one:<br/>
-      - [Jira](../Plugins/jira.md)
-      - [GitHub](../Plugins/github.md): For users who'd like to collect GitHub data, we recommend reading our [GitHub data collection guide](../UserManuals/github-user-guide-v0.10.0.md) which covers the following steps in detail.
-      - [GitLab](../Plugins/gitlab.md)
-      - [Jenkins](../Plugins/jenkins.md)
+      - [Jira](../Plugins/Jira.md)
+      - [GitHub](../Plugins/GitHub.md): For users who'd like to collect GitHub data, we recommend reading our [GitHub data collection guide](../UserManuals/GitHubUserGuide.md) which covers the following steps in detail.
+      - [GitLab](../Plugins/GitLab.md)
+      - [Jenkins](../Plugins/Jenkins.md)
    - Submit the form to update the values by clicking on the **Save Connection** button on each form page
   - `devlake` takes a while to fully boot up. If `config-ui` complains about the API being unreachable, please wait a few seconds and try refreshing the page.
 2. Create pipelines to trigger data collection in `config-ui`
 3. Click *View Dashboards* button in the top left when done, or visit `localhost:3002` (username: `admin`, password: `admin`).
-   - We use [Grafana](https://grafana.com/) as a visualization tool to build charts for the [data](../DataModels/02-DataSupport.md) stored in our database.
+   - We use [Grafana](https://grafana.com/) as a visualization tool to build charts for the [data](../DataModels/DataSupport.md) stored in our database.
    - Using SQL queries, we can add panels to build, save, and edit customized dashboards.
-   - All the details on provisioning and customizing a dashboard can be found in the [Grafana Doc](../UserManuals/GRAFANA.md).
-4. To synchronize data periodically, users can set up recurring pipelines with DevLake's [pipeline blueprint](../UserManuals/recurring-pipeline.md) for details.
+   - All the details on provisioning and customizing a dashboard can be found in the [Grafana Doc](../UserManuals/GrafanaUserGuide.md).
+4. To synchronize data periodically, set up recurring pipelines; see DevLake's [pipeline blueprint](../UserManuals/RecurringPipelines.md) for details.
 
-#### Upgrade to a newer version
+## Upgrade to a newer version
 
 Support for database schema migration was introduced to DevLake in v0.10.0. From v0.10.0 onwards, users can upgrade their instance smoothly to a newer version. However, versions prior to v0.10.0 do not support upgrading to a newer version with a different database schema. We recommend users to deploy a new instance if needed.
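
For a Docker Compose deployment on v0.10.0 or later, an upgrade can be sketched as follows (the exact file layout may differ; schema migrations run automatically on startup):

```shell
# Stop the stack; named volumes (and therefore your data) are preserved
docker-compose down

# Edit docker-compose.yml and change the image tags of the
# devlake, config-ui and grafana services to the new version

# Pull the new images and restart
docker-compose pull
docker-compose up -d
```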
 
diff --git a/versioned_docs/version-0.11/UserManuals/create-pipeline-in-advanced-mode.md b/docs/UserManuals/AdvancedMode.md
similarity index 97%
rename from versioned_docs/version-0.11/UserManuals/create-pipeline-in-advanced-mode.md
rename to docs/UserManuals/AdvancedMode.md
index 14afd01..4323133 100644
--- a/versioned_docs/version-0.11/UserManuals/create-pipeline-in-advanced-mode.md
+++ b/docs/UserManuals/AdvancedMode.md
@@ -1,8 +1,8 @@
 ---
-title: "Create Pipeline in Advanced Mode"
+title: "Advanced Mode"
 sidebar_position: 2
 description: >
-  Create Pipeline in Advanced Mode
+  Advanced Mode
 ---
 
 
diff --git a/versioned_docs/version-0.11/UserManuals/github-user-guide-v0.10.0.md b/docs/UserManuals/GitHubUserGuide.md
similarity index 97%
rename from versioned_docs/version-0.11/UserManuals/github-user-guide-v0.10.0.md
rename to docs/UserManuals/GitHubUserGuide.md
index 9a9014b..fa67456 100644
--- a/versioned_docs/version-0.11/UserManuals/github-user-guide-v0.10.0.md
+++ b/docs/UserManuals/GitHubUserGuide.md
@@ -1,8 +1,8 @@
 ---
-title: "GitHub User Guide v0.10.0"
+title: "GitHub User Guide"
 sidebar_position: 4
 description: >
-  GitHub User Guide v0.10.0
+  GitHub User Guide
 ---
 
 ## Summary
@@ -109,7 +109,7 @@ See the pipeline finishes (progress 100%):
 
 ### Step 4 - [Optional] Set up a recurring pipeline to keep data fresh
 
-Please see [How to create recurring pipelines](./recurring-pipeline.md) for details.
+Please see [How to create recurring pipelines](./RecurringPipelines.md) for details.
 
 
 
diff --git a/versioned_docs/version-0.11/UserManuals/GRAFANA.md b/docs/UserManuals/GrafanaUserGuide.md
similarity index 99%
rename from versioned_docs/version-0.11/UserManuals/GRAFANA.md
rename to docs/UserManuals/GrafanaUserGuide.md
index bd81651..e475702 100644
--- a/versioned_docs/version-0.11/UserManuals/GRAFANA.md
+++ b/docs/UserManuals/GrafanaUserGuide.md
@@ -1,8 +1,8 @@
 ---
-title: "How to use Grafana"
+title: "Grafana User Guide"
 sidebar_position: 1
 description: >
-  How to use Grafana
+  Grafana User Guide
 ---
 
 
diff --git a/docs/UserManuals/recurring-pipeline.md b/docs/UserManuals/RecurringPipelines.md
similarity index 91%
rename from docs/UserManuals/recurring-pipeline.md
rename to docs/UserManuals/RecurringPipelines.md
index 3e92349..ce82b1e 100644
--- a/docs/UserManuals/recurring-pipeline.md
+++ b/docs/UserManuals/RecurringPipelines.md
@@ -1,8 +1,8 @@
 ---
-title: "Create Recurring Pipelines"
+title: "Recurring Pipelines"
 sidebar_position: 3
 description: >
-  Create Recurring Pipelines
+  Recurring Pipelines
 ---
 
 ## How to create recurring pipelines?
diff --git a/docs/UserManuals/team-feature-user-guide.md b/docs/UserManuals/TeamConfiguration.md
similarity index 94%
copy from docs/UserManuals/team-feature-user-guide.md
copy to docs/UserManuals/TeamConfiguration.md
index 07a080b..4646ffa 100644
--- a/docs/UserManuals/team-feature-user-guide.md
+++ b/docs/UserManuals/TeamConfiguration.md
@@ -1,8 +1,8 @@
 ---
-title: "Team Feature User Guide"
+title: "Team Configuration"
 sidebar_position: 6
 description: >
-  Team Feature User Guide
+  Team Configuration
 ---
 ## Summary
 This is a brief step-by-step guide to using the team feature.
@@ -31,7 +31,7 @@ b. The actual api request.
    iii. After successful execution, the teams table is generated and its data can be seen in the database table `teams`.
    (Note: to connect to the database, use the host, port, username and password with a SQL tool such as Sequel Ace or DataGrip; you can also connect from the command line with mysql -h `ip` -u `username` -p -P `port`)
 
-![image](/img/teamflow3.png)
+![image](/img/Team/teamflow3.png)
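
As a sanity check, the note above can be followed from the command line (host, port, user and database name are placeholders; substitute the values from your own deployment):

```shell
# Connect to DevLake's MySQL instance (placeholder credentials)
mysql -h 127.0.0.1 -P 3306 -u <username> -p

# Then, inside the mysql shell, verify the table was populated:
#   USE <database>;
#   SELECT * FROM teams LIMIT 10;
```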
 
 
 ## Step 2 - Construct user tables (roster)
@@ -52,11 +52,11 @@ b. The actual api request.
 
     iii. After successful execution, the users table is generated and the data can be seen in the database table users.
 
-![image](/img/teamflow1.png)
+![image](/img/Team/teamflow1.png)
     
    iv. The team_users table is also generated; you can see its data in the team_users table.
 
-![image](/img/teamflow2.png)
+![image](/img/Team/teamflow2.png)
 
 ## Step 3 - Update users if needed
 If there is a problem with the team_users association or with the data in users, simply re-send the users API request (i.e. b in Step 2 above).
@@ -64,7 +64,7 @@ If there is a problem with team_users association or data in users, just re-put
 ## Step 4 - Collect accounts
 The accounts table is populated when users collect data through DevLake. You can see the accounts information in the database.
 
-![image](/img/teamflow4.png)
+![image](/img/Team/teamflow4.png)
 
 ## Step 5 - Automatically match existing accounts and users through API requests
 
@@ -91,7 +91,7 @@ curl --location --request POST '127.0.0.1:8080/pipelines' \
 
 b. After successful execution, the user_accounts table is generated, and you can see the data in table user_accounts.
 
-![image](/img/teamflow5.png)
+![image](/img/Team/teamflow5.png)
 
 ## Step 6 - Get the user_accounts relationship
 After generating the user_accounts relationship, you can fetch the associated data via a GET request to confirm whether users and accounts are matched correctly and whether the matched accounts are complete.
@@ -103,7 +103,7 @@ b. The corresponding curl command:
 curl --location --request GET 'http://127.0.0.1:8080/plugins/org/user_account_mapping.csv'
 ```
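
If you prefer to inspect the mapping locally, the same GET request can write the CSV to a file (the filename is arbitrary):

```shell
# Save the user/account mapping for offline review
curl --location --request GET \
  'http://127.0.0.1:8080/plugins/org/user_account_mapping.csv' \
  -o user_account_mapping.csv

# Peek at the first few rows
head -n 5 user_account_mapping.csv
```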
 
-![image](/img/teamflow6.png)
+![image](/img/Team/teamflow6.png)
 
 c. You can also verify with SQL statements; here is one provided for reference only.
 ```
@@ -123,7 +123,7 @@ curl --location --request PUT 'http://127.0.0.1:8080/plugins/org/user_account_ma
 
 b. You can see that the data in the user_accounts table has been updated.
 
-![image](/img/teamflow7.png)
+![image](/img/Team/teamflow7.png)
 
 
 **The above is the complete user flow for the team feature.**
diff --git a/docs/UserManuals/03-TemporalSetup.md b/docs/UserManuals/TemporalSetup.md
similarity index 100%
rename from docs/UserManuals/03-TemporalSetup.md
rename to docs/UserManuals/TemporalSetup.md
diff --git a/docusaurus.config.js b/docusaurus.config.js
index 0046c50..11340ad 100644
--- a/docusaurus.config.js
+++ b/docusaurus.config.js
@@ -87,21 +87,21 @@ const versions = require('./versions.json');
         items: [
           {
             // type: 'docsVersionDropdown',
-            // docId: 'Overview/WhatIsDevLake',
+            // docId: 'Overview/Introduction',
             position: 'right',
             label: 'Docs',
             items: [
               ...versions.slice(0, versions.length - 2).map((version) => ({
                 label: version,
-                to: `docs/${version}/Overview/WhatIsDevLake`,
+                to: `docs/${version}/Overview/Introduction`,
              })),
              ...versions.slice(versions.length - 2, versions.length).map((version) => ({
               label: (version === "1.x") ? "1.x(Not Apache Release)" : version,
-              to: `docs/${version}/Overview/WhatIsDevLake`,
+              to: `docs/${version}/Overview/Introduction`,
           })),
               {
-                  label: "Next",
-                  to: "/docs/Overview/WhatIsDevLake",
+                  label: "Latest",
+                  to: "/docs/Overview/Introduction",
               }
             ]
           },
diff --git a/src/components/HomepageFeatures.js b/src/components/HomepageFeatures.js
index 5bdfa41..2914a93 100644
--- a/src/components/HomepageFeatures.js
+++ b/src/components/HomepageFeatures.js
@@ -5,7 +5,7 @@ import styles from './HomepageFeatures.module.css';
 const FeatureList = [
   {
     title: 'Data Silos Connected',
-    Svg: require('../../static/img/SilosConnected.svg').default,
+    Svg: require('../../static/img/Homepage/SilosConnected.svg').default,
     description: (
       <>
         Collect DevOps data across the entire Software Development LifeCycle (SDLC) and connect siloed data with a standard data model
@@ -14,7 +14,7 @@ const FeatureList = [
   },
   {
     title: 'Out-of-the-box Analysis',
-    Svg: require('../../static/img/OutoftheboxAnalysis.svg').default,
+    Svg: require('../../static/img/Homepage/OutoftheboxAnalysis.svg').default,
     description: (
       <>
         Visualize out-of-the-box engineering metrics in a series of use-case driven dashboards
@@ -23,7 +23,7 @@ const FeatureList = [
   },
   {
     title: 'A Highly Flexible Framework',
-    Svg: require('../../static/img/HighlyFlexible.svg').default,
+    Svg: require('../../static/img/Homepage/HighlyFlexible.svg').default,
     description: (
       <>
         Easily extend DevLake to support your data sources, metrics, and dashboards
diff --git a/static/img/arch-component.svg b/static/img/Architecture/arch-component.svg
similarity index 100%
rename from static/img/arch-component.svg
rename to static/img/Architecture/arch-component.svg
diff --git a/static/img/arch-dataflow.svg b/static/img/Architecture/arch-dataflow.svg
similarity index 100%
rename from static/img/arch-dataflow.svg
rename to static/img/Architecture/arch-dataflow.svg
diff --git a/img/community/contributors/abhishek.jpeg b/static/img/Community/contributors/abhishek.jpeg
similarity index 100%
rename from img/community/contributors/abhishek.jpeg
rename to static/img/Community/contributors/abhishek.jpeg
diff --git a/img/community/contributors/anshimin.jpeg b/static/img/Community/contributors/anshimin.jpeg
similarity index 100%
rename from img/community/contributors/anshimin.jpeg
rename to static/img/Community/contributors/anshimin.jpeg
diff --git a/img/community/contributors/chengeyu.jpeg b/static/img/Community/contributors/chengeyu.jpeg
similarity index 100%
rename from img/community/contributors/chengeyu.jpeg
rename to static/img/Community/contributors/chengeyu.jpeg
diff --git a/img/community/contributors/jibin.jpeg b/static/img/Community/contributors/jibin.jpeg
similarity index 100%
rename from img/community/contributors/jibin.jpeg
rename to static/img/Community/contributors/jibin.jpeg
diff --git a/img/community/contributors/keonamini.jpeg b/static/img/Community/contributors/keonamini.jpeg
similarity index 100%
rename from img/community/contributors/keonamini.jpeg
rename to static/img/Community/contributors/keonamini.jpeg
diff --git a/img/community/contributors/lijiageng.jpeg b/static/img/Community/contributors/lijiageng.jpeg
similarity index 100%
rename from img/community/contributors/lijiageng.jpeg
rename to static/img/Community/contributors/lijiageng.jpeg
diff --git a/img/community/contributors/lizhenlei.jpeg b/static/img/Community/contributors/lizhenlei.jpeg
similarity index 100%
rename from img/community/contributors/lizhenlei.jpeg
rename to static/img/Community/contributors/lizhenlei.jpeg
diff --git a/img/community/contributors/nikitakoselec.jpeg b/static/img/Community/contributors/nikitakoselec.jpeg
similarity index 100%
rename from img/community/contributors/nikitakoselec.jpeg
rename to static/img/Community/contributors/nikitakoselec.jpeg
diff --git a/img/community/contributors/prajwalborkar.jpeg b/static/img/Community/contributors/prajwalborkar.jpeg
similarity index 100%
rename from img/community/contributors/prajwalborkar.jpeg
rename to static/img/Community/contributors/prajwalborkar.jpeg
diff --git a/img/community/contributors/songdunyu.jpeg b/static/img/Community/contributors/songdunyu.jpeg
similarity index 100%
rename from img/community/contributors/songdunyu.jpeg
rename to static/img/Community/contributors/songdunyu.jpeg
diff --git a/img/community/contributors/supeng.jpeg b/static/img/Community/contributors/supeng.jpeg
similarity index 100%
rename from img/community/contributors/supeng.jpeg
rename to static/img/Community/contributors/supeng.jpeg
diff --git a/img/community/contributors/tanguiping.jpeg b/static/img/Community/contributors/tanguiping.jpeg
similarity index 100%
rename from img/community/contributors/tanguiping.jpeg
rename to static/img/Community/contributors/tanguiping.jpeg
diff --git a/img/community/contributors/wangdanna.jpeg b/static/img/Community/contributors/wangdanna.jpeg
similarity index 100%
rename from img/community/contributors/wangdanna.jpeg
rename to static/img/Community/contributors/wangdanna.jpeg
diff --git a/img/community/contributors/wangxiaolei.jpeg b/static/img/Community/contributors/wangxiaolei.jpeg
similarity index 100%
rename from img/community/contributors/wangxiaolei.jpeg
rename to static/img/Community/contributors/wangxiaolei.jpeg
diff --git a/img/community/contributors/zhangxiangyu.jpeg b/static/img/Community/contributors/zhangxiangyu.jpeg
similarity index 100%
rename from img/community/contributors/zhangxiangyu.jpeg
rename to static/img/Community/contributors/zhangxiangyu.jpeg
diff --git a/img/community/screenshots/issue_page_screenshot.png b/static/img/Community/screenshots/issue_page_screenshot.png
similarity index 100%
rename from img/community/screenshots/issue_page_screenshot.png
rename to static/img/Community/screenshots/issue_page_screenshot.png
diff --git a/static/img/schema-diagram.png b/static/img/DomainLayerSchema/schema-diagram.png
similarity index 100%
rename from static/img/schema-diagram.png
rename to static/img/DomainLayerSchema/schema-diagram.png
diff --git a/static/img/blueprint-erd.svg b/static/img/Glossary/blueprint-erd.svg
similarity index 100%
rename from static/img/blueprint-erd.svg
rename to static/img/Glossary/blueprint-erd.svg
diff --git a/static/img/pipeline-erd.svg b/static/img/Glossary/pipeline-erd.svg
similarity index 100%
rename from static/img/pipeline-erd.svg
rename to static/img/Glossary/pipeline-erd.svg
diff --git a/static/img/HighlyFlexible.svg b/static/img/Homepage/HighlyFlexible.svg
similarity index 100%
rename from static/img/HighlyFlexible.svg
rename to static/img/Homepage/HighlyFlexible.svg
diff --git a/static/img/OutoftheboxAnalysis.svg b/static/img/Homepage/OutoftheboxAnalysis.svg
similarity index 100%
rename from static/img/OutoftheboxAnalysis.svg
rename to static/img/Homepage/OutoftheboxAnalysis.svg
diff --git a/static/img/SilosConnected.svg b/static/img/Homepage/SilosConnected.svg
similarity index 100%
rename from static/img/SilosConnected.svg
rename to static/img/Homepage/SilosConnected.svg
diff --git a/static/img/userflow1.svg b/static/img/Introduction/userflow1.svg
similarity index 100%
rename from static/img/userflow1.svg
rename to static/img/Introduction/userflow1.svg
diff --git a/static/img/userflow2.svg b/static/img/Introduction/userflow2.svg
similarity index 100%
rename from static/img/userflow2.svg
rename to static/img/Introduction/userflow2.svg
diff --git a/static/img/userflow3.png b/static/img/Introduction/userflow3.png
similarity index 100%
rename from static/img/userflow3.png
rename to static/img/Introduction/userflow3.png
diff --git a/static/img/userflow4.png b/static/img/Introduction/userflow4.png
similarity index 100%
rename from static/img/userflow4.png
rename to static/img/Introduction/userflow4.png
diff --git a/static/img/github-demo.png b/static/img/Plugins/github-demo.png
similarity index 100%
rename from static/img/github-demo.png
rename to static/img/Plugins/github-demo.png
diff --git a/static/img/jenkins-demo.png b/static/img/Plugins/jenkins-demo.png
similarity index 100%
rename from static/img/jenkins-demo.png
rename to static/img/Plugins/jenkins-demo.png
diff --git a/static/img/jira-demo.png b/static/img/Plugins/jira-demo.png
similarity index 100%
rename from static/img/jira-demo.png
rename to static/img/Plugins/jira-demo.png
diff --git a/static/img/teamflow1.png b/static/img/Team/teamflow1.png
similarity index 100%
rename from static/img/teamflow1.png
rename to static/img/Team/teamflow1.png
diff --git a/static/img/teamflow2.png b/static/img/Team/teamflow2.png
similarity index 100%
rename from static/img/teamflow2.png
rename to static/img/Team/teamflow2.png
diff --git a/static/img/teamflow3.png b/static/img/Team/teamflow3.png
similarity index 100%
rename from static/img/teamflow3.png
rename to static/img/Team/teamflow3.png
diff --git a/static/img/teamflow4.png b/static/img/Team/teamflow4.png
similarity index 100%
rename from static/img/teamflow4.png
rename to static/img/Team/teamflow4.png
diff --git a/static/img/teamflow5.png b/static/img/Team/teamflow5.png
similarity index 100%
rename from static/img/teamflow5.png
rename to static/img/Team/teamflow5.png
diff --git a/static/img/teamflow6.png b/static/img/Team/teamflow6.png
similarity index 100%
rename from static/img/teamflow6.png
rename to static/img/Team/teamflow6.png
diff --git a/static/img/teamflow7.png b/static/img/Team/teamflow7.png
similarity index 100%
rename from static/img/teamflow7.png
rename to static/img/Team/teamflow7.png
diff --git a/static/img/tutorial/docsVersionDropdown.png b/static/img/tutorial/docsVersionDropdown.png
deleted file mode 100644
index ff1cbe6..0000000
Binary files a/static/img/tutorial/docsVersionDropdown.png and /dev/null differ
diff --git a/static/img/tutorial/localeDropdown.png b/static/img/tutorial/localeDropdown.png
deleted file mode 100644
index d7163f9..0000000
Binary files a/static/img/tutorial/localeDropdown.png and /dev/null differ
diff --git a/versioned_docs/version-0.11/Glossary.md b/versioned_docs/version-0.11/Glossary.md
deleted file mode 100644
index 4ca3117..0000000
--- a/versioned_docs/version-0.11/Glossary.md
+++ /dev/null
@@ -1,106 +0,0 @@
----
-sidebar_position: 8
-title: "Glossary"
-linkTitle: "Glossary"
-tags: []
-categories: []
-weight: 6
-description: >
-  DevLake Glossary
----
-
-*Last updated: May 16 2022*
-
-
-## In Configuration UI (Regular Mode)
-
-The following terms are arranged in the order of their appearance in the actual user workflow.
-
-### Blueprints
-**A blueprint is the plan that covers all the work to get your raw data ready for query and metric computation in the dashboards.** Creating a blueprint consists of four steps:
-1. **Adding [Data Connections](Glossary.md#data-connections)**: For each [data source](Glossary.md#data-sources), one or more data connections can be added to a single blueprint, depending on the data you want to sync to DevLake.
-2. **Setting the [Data Scope](Glossary.md#data-scope)**: For each data connection, you need to configure the scope of data, such as GitHub projects, Jira boards, and their corresponding [data entities](Glossary.md#data-entities).
-3. **Adding [Transformation Rules](Glossary.md#transformation-rules) (optional)**: You can optionally apply transformation for the data scope you have just selected, in order to view more advanced metrics.
-3. **Setting the Sync Frequency**: You can specify the sync frequency for your blueprint to achieve recurring data syncs and transformation. Alternatively, you can set the frequency to manual if you wish to run the tasks in the blueprint manually.
-
-The relationship among Blueprint, Data Connections, Data Scope and Transformation Rules is explained as follows:
-
-![Blueprint ERD](/img/blueprint-erd.svg)
-- Each blueprint can have multiple data connections.
-- Each data connection can have multiple sets of data scope.
-- Each set of data scope only consists of one GitHub/GitLab project or Jira board, along with their corresponding data entities.
-- Each set of data scope can only have one set of transformation rules.
-
-### Data Sources
-**A data source is a specific DevOps tool from which you wish to sync your data, such as GitHub, GitLab, Jira and Jenkins.**
-
-DevLake normally uses one [data plugin](Glossary.md#data-plugins) to pull data for a single data source. However, in some cases, DevLake uses multiple data plugins for one data source for the purpose of improved sync speed, among many other advantages. For instance, when you pull data from GitHub or GitLab, aside from the GitHub or GitLab plugin, Git Extractor is also used to pull data from the repositories. In this case, DevLake still refers GitHub or GitLab as a single data source.
-
-### Data Connections
-**A data connection is a specific instance of a data source that stores information such as `endpoint` and `auth`.** A single data source can have one or more data connections (e.g. two Jira instances). Currently, DevLake supports one data connection for GitHub, GitLab and Jenkins, and multiple connections for Jira.
-
-You can set up a new data connection either during the first step of creating a blueprint, or in the Connections page that can be accessed from the navigation bar. Because one single data connection can be reused in multiple blueprints, you can update the information of a particular data connection in Connections, to ensure all its associated blueprints will run properly. For example, you may want to update your GitHub token in a data connection if it goes expired.
-
-### Data Scope
-**In a blueprint, each data connection can have multiple sets of data scope configurations, including GitHub or GitLab projects, Jira boards and their corresponding[data entities](Glossary.md#data-entities).** The fields for data scope configuration vary according to different data sources.
-
-Each set of data scope refers to one GitHub or GitLab project, or one Jira board and the data entities you would like to sync for them, for the convenience of applying transformation in the next step. For instance, if you wish to sync 5 GitHub projects, you will have 5 sets of data scope for GitHub.
-
-To learn more about the default data scope of all data sources and data plugins, please refer to [Data Support](./DataModels/02-DataSupport.md).
-
-### Data Entities
-**Data entities refer to the data fields from one of the five data domains: Issue Tracking, Source Code Management, Code Review, CI/CD and Cross-Domain.**
-
-For instance, if you wish to pull Source Code Management data from GitHub and Issue Tracking data from Jira, you can check the corresponding data entities during setting the data scope of these two data connections.
-
-To learn more details, please refer to [Domain Layer Schema](./DataModels/01-DevLakeDomainLayerSchema.md).
-
-### Transformation Rules
-**Transformation rules are a collection of methods that allow you to customize how DevLake normalizes raw data for query and metric computation.** Each set of data scope is strictly accompanied with one set of transformation rules. However, for your convenience, transformation rules can also be duplicated across different sets of data scope.
-
-DevLake uses these normalized values in the transformation to design more advanced dashboards, such as the Weekly Bug Retro dashboard. Although configuring transformation rules is not mandatory, if you leave the rules blank or have not configured correctly, only the basic dashboards (e.g. GitHub Basic Metrics) will be displayed as expected, while the advanced dashboards will not.
-
-### Historical Runs
-**A historical run of a blueprint is an actual execution of the data collection and transformation [tasks](Glossary.md#tasks) defined in the blueprint at its creation.** A list of historical runs of a blueprint is the entire running history of that blueprint, whether executed automatically or manually. Historical runs can be triggered in three ways:
-- By the blueprint automatically according to its schedule in the Regular Mode of the Configuration UI
-- By running the JSON in the Advanced Mode of the Configuration UI
-- By calling the API `/pipelines` endpoint manually
-
-However, the name Historical Runs is only used in the Configuration UI. In DevLake API, they are called [pipelines](Glossary.md#pipelines).
-
-## In Configuration UI (Advanced Mode) and API
-
-The following terms have not appeared in the Regular Mode of Configuration UI for simplification, but can be very useful if you want to learn about the underlying framework of Devalke or use Advanced Mode and the DevLake API.
-
-### Data Plugins
-**A data plugin is a specific module that syncs or transforms data.** There are two types of data plugins: Data Collection Plugins and Data Transformation Plugins.
-
-Data Collection Plugins pull data from one or more data sources. DevLake supports 8 data plugins in this category: `ae`, `feishu`, `gitextractor`, `github`, `gitlab`, `jenkins`, `jira` and `tapd`.
-
-Data Transformation Plugins transform the data pulled by other Data Collection Plugins. `refdiff` is currently the only plugin in this category.
-
-Although the names of the data plugins are not displayed in the regular mode of DevLake Configuration UI, they can be used directly in JSON in the Advanced Mode.
-
-For detailed information about the relationship between data sources and data plugins, please refer to [Data Support](./DataModels/02-DataSupport.md).
-
-
-### Pipelines
-**A pipeline is an orchestration of [tasks](Glossary.md#tasks) of data `collection`, `extraction`, `conversion` and `enrichment`, defined in the DevLake API.** A pipeline is composed of one or multiple [stages](Glossary.md#stages) that are executed in a sequential order. Any error occurring during the execution of any stage, task or subtask will cause the immediate fail of the pipeline.
-
-The composition of a pipeline is explained as follows:
-![Blueprint ERD](/img/pipeline-erd.svg)
-Notice: **You can manually orchestrate the pipeline in Configuration UI Advanced Mode and the DevLake API; whereas in Configuration UI regular mode, an optimized pipeline orchestration will be automatically generated for you.**
-
-
-### Stages
-**A stages is a collection of tasks performed by data plugins.** Stages are executed in a sequential order in a pipeline.
-
-### Tasks
-**A task is a collection of [subtasks](Glossary.md#subtasks) that perform any of the `collection`, `extraction`, `conversion` and `enrichment` jobs of a particular data plugin.** Tasks are executed in a parallel order in any stages.
-
-### Subtasks
-**A subtask is the minimal work unit in a pipeline that performs in any of the four roles: `Collectors`, `Extractors`, `Converters` and `Enrichers`.** Subtasks are executed in sequential orders.
-- `Collectors`: Collect raw data from data sources, normally via DevLake API and stored into `raw data table`
-- `Extractors`: Extract data from `raw data table` to `domain layer tables`
-- `Converters`: Convert data from `tool layer tables` into `domain layer tables`
-- `Enrichers`: Enrich data from one domain to other domains. For instance, the Fourier Transformation can examine `issue_changelog` to show time distribution of an issue on every assignee.
diff --git a/versioned_docs/version-0.11/Dashboards/AverageRequirementLeadTime.md b/versioned_docs/version-v0.11.0/Dashboards/AverageRequirementLeadTime.md
similarity index 100%
rename from versioned_docs/version-0.11/Dashboards/AverageRequirementLeadTime.md
rename to versioned_docs/version-v0.11.0/Dashboards/AverageRequirementLeadTime.md
diff --git a/versioned_docs/version-0.11/Dashboards/CommitCountByAuthor.md b/versioned_docs/version-v0.11.0/Dashboards/CommitCountByAuthor.md
similarity index 100%
rename from versioned_docs/version-0.11/Dashboards/CommitCountByAuthor.md
rename to versioned_docs/version-v0.11.0/Dashboards/CommitCountByAuthor.md
diff --git a/versioned_docs/version-0.11/Dashboards/DetailedBugInfo.md b/versioned_docs/version-v0.11.0/Dashboards/DetailedBugInfo.md
similarity index 100%
rename from versioned_docs/version-0.11/Dashboards/DetailedBugInfo.md
rename to versioned_docs/version-v0.11.0/Dashboards/DetailedBugInfo.md
diff --git a/versioned_docs/version-0.11/Dashboards/GitHubBasic.md b/versioned_docs/version-v0.11.0/Dashboards/GitHubBasic.md
similarity index 100%
rename from versioned_docs/version-0.11/Dashboards/GitHubBasic.md
rename to versioned_docs/version-v0.11.0/Dashboards/GitHubBasic.md
diff --git a/versioned_docs/version-0.11/Dashboards/GitHubReleaseQualityAndContributionAnalysis.md b/versioned_docs/version-v0.11.0/Dashboards/GitHubReleaseQualityAndContributionAnalysis.md
similarity index 100%
rename from versioned_docs/version-0.11/Dashboards/GitHubReleaseQualityAndContributionAnalysis.md
rename to versioned_docs/version-v0.11.0/Dashboards/GitHubReleaseQualityAndContributionAnalysis.md
diff --git a/versioned_docs/version-0.11/Dashboards/Jenkins.md b/versioned_docs/version-v0.11.0/Dashboards/Jenkins.md
similarity index 100%
rename from versioned_docs/version-0.11/Dashboards/Jenkins.md
rename to versioned_docs/version-v0.11.0/Dashboards/Jenkins.md
diff --git a/versioned_docs/version-0.11/Dashboards/WeeklyBugRetro.md b/versioned_docs/version-v0.11.0/Dashboards/WeeklyBugRetro.md
similarity index 100%
rename from versioned_docs/version-0.11/Dashboards/WeeklyBugRetro.md
rename to versioned_docs/version-v0.11.0/Dashboards/WeeklyBugRetro.md
diff --git a/versioned_docs/version-0.11/Dashboards/_category_.json b/versioned_docs/version-v0.11.0/Dashboards/_category_.json
similarity index 100%
rename from versioned_docs/version-0.11/Dashboards/_category_.json
rename to versioned_docs/version-v0.11.0/Dashboards/_category_.json
diff --git a/versioned_docs/version-0.11/DataModels/02-DataSupport.md b/versioned_docs/version-v0.11.0/DataModels/DataSupport.md
similarity index 98%
rename from versioned_docs/version-0.11/DataModels/02-DataSupport.md
rename to versioned_docs/version-v0.11.0/DataModels/DataSupport.md
index 7067da1..4cb4b61 100644
--- a/versioned_docs/version-0.11/DataModels/02-DataSupport.md
+++ b/versioned_docs/version-v0.11.0/DataModels/DataSupport.md
@@ -1,11 +1,8 @@
 ---
 title: "Data Support"
-linkTitle: "Data Support"
-tags: []
-categories: []
-weight: 2
 description: >
   Data sources that DevLake supports
+sidebar_position: 1
 ---
 
 
@@ -26,7 +23,7 @@ DevLake supports the following data sources. The data from each data source is c
 
 
 ## Data Collection Scope By Each Plugin
-This table shows the entities collected by each plugin. Domain layer entities in this table are consistent with the entities [here](./01-DevLakeDomainLayerSchema.md).
+This table shows the entities collected by each plugin. Domain layer entities in this table are consistent with the entities [here](./DevLakeDomainLayerSchema.md).
 
 | Domain Layer Entities | ae             | gitextractor | github         | gitlab  | jenkins | jira    | refdiff | tapd    |
 | --------------------- | -------------- | ------------ | -------------- | ------- | ------- | ------- | ------- | ------- |
diff --git a/docs/DataModels/01-DevLakeDomainLayerSchema.md b/versioned_docs/version-v0.11.0/DataModels/DevLakeDomainLayerSchema.md
similarity index 99%
rename from docs/DataModels/01-DevLakeDomainLayerSchema.md
rename to versioned_docs/version-v0.11.0/DataModels/DevLakeDomainLayerSchema.md
index 2ffa512..996d397 100644
--- a/docs/DataModels/01-DevLakeDomainLayerSchema.md
+++ b/versioned_docs/version-v0.11.0/DataModels/DevLakeDomainLayerSchema.md
@@ -1,11 +1,8 @@
 ---
 title: "Domain Layer Schema"
-linkTitle: "Domain Layer Schema"
-tags: []
-categories: []
-weight: 50000
 description: >
   DevLake Domain Layer Schema
+sidebar_position: 2
 ---
 
 ## Summary
@@ -33,7 +30,7 @@ This is the up-to-date domain layer schema for DevLake v0.10.x. Tables (entities
 
 
 ### Schema Diagram
-![Domain Layer Schema](/img/schema-diagram.png)
+![Domain Layer Schema](/img/DomainLayerSchema/schema-diagram.png)
 
 When reading the schema, you'll notice that many tables' primary key is called `id`. Unlike auto-increment id or UUID, `id` is a string composed of several parts to uniquely identify similar entities (e.g. repo) from different platforms (e.g. Github/Gitlab) and allow them to co-exist in a single table.
 
diff --git a/versioned_docs/version-0.11/DataModels/_category_.json b/versioned_docs/version-v0.11.0/DataModels/_category_.json
similarity index 100%
rename from versioned_docs/version-0.11/DataModels/_category_.json
rename to versioned_docs/version-v0.11.0/DataModels/_category_.json
diff --git a/docs/DeveloperManuals/MIGRATIONS.md b/versioned_docs/version-v0.11.0/DeveloperManuals/DBMigration.md
similarity index 94%
rename from docs/DeveloperManuals/MIGRATIONS.md
rename to versioned_docs/version-v0.11.0/DeveloperManuals/DBMigration.md
index edab4ca..9530237 100644
--- a/docs/DeveloperManuals/MIGRATIONS.md
+++ b/versioned_docs/version-v0.11.0/DeveloperManuals/DBMigration.md
@@ -2,17 +2,16 @@
 title: "DB Migration"
 description: >
   DB Migration
+sidebar_position: 3
 ---
 
-# Migrations (Database)
-
 ## Summary
 Starting in v0.10.0, DevLake provides a lightweight migration tool for executing migration scripts.
 Both framework itself and plugins define their migration scripts in their own migration folder.
 The migration scripts are written with gorm in Golang to support different SQL dialects.
 
 
-## Migration script
+## Migration Script
 Migration script describes how to do database migration.
 They implement the `Script` interface.
 When DevLake starts, scripts register themselves to the framework by invoking the `Register` function
@@ -29,7 +28,9 @@ type Script interface {
 
 The table tracks migration scripts execution and schemas changes.
 From which, DevLake could figure out the current state of database schemas.
-## How it Works
+
+
+## How It Works
 1. Check `migration_history` table, calculate all the migration scripts need to be executed.
 2. Sort scripts by Version in ascending order.
 3. Execute scripts.
diff --git a/versioned_docs/version-0.11/DeveloperManuals/Dal.md b/versioned_docs/version-v0.11.0/DeveloperManuals/Dal.md
similarity index 99%
rename from versioned_docs/version-0.11/DeveloperManuals/Dal.md
rename to versioned_docs/version-v0.11.0/DeveloperManuals/Dal.md
index da27a55..9b08542 100644
--- a/versioned_docs/version-0.11/DeveloperManuals/Dal.md
+++ b/versioned_docs/version-v0.11.0/DeveloperManuals/Dal.md
@@ -1,6 +1,6 @@
 ---
 title: "Dal"
-sidebar_position: 4
+sidebar_position: 5
 description: >
   The Dal (Data Access Layer) is designed to decouple the hard dependency on `gorm` in v0.12
 ---
diff --git a/docs/DeveloperManuals/04-DeveloperSetup.md b/versioned_docs/version-v0.11.0/DeveloperManuals/DeveloperSetup.md
similarity index 87%
rename from docs/DeveloperManuals/04-DeveloperSetup.md
rename to versioned_docs/version-v0.11.0/DeveloperManuals/DeveloperSetup.md
index cb27440..4b05c11 100644
--- a/docs/DeveloperManuals/04-DeveloperSetup.md
+++ b/versioned_docs/version-v0.11.0/DeveloperManuals/DeveloperSetup.md
@@ -2,10 +2,11 @@
 title: "Developer Setup"
 description: >
  The steps to install DevLake in developer mode.
+sidebar_position: 1
 ---
 
 
-#### Requirements
+## Requirements
 
 - <a href="https://docs.docker.com/get-docker" target="_blank">Docker v19.03.10+</a>
 - <a href="https://golang.org/doc/install" target="_blank">Golang v1.17+</a>
@@ -14,7 +15,7 @@ description: >
   - Windows: [Download](http://gnuwin32.sourceforge.net/packages/make.htm)
   - Ubuntu: `sudo apt-get install build-essential libssl-dev`
 
-#### How to setup dev environment
+## How to setup dev environment
 1. Navigate to where you would like to install this project and clone the repository:
 
    ```sh
@@ -24,7 +25,7 @@ description: >
 
 2. Install dependencies for plugins:
 
-   - [RefDiff](../Plugins/refdiff.md#development)
+   - [RefDiff](../Plugins/RefDiff.md#development)
 
 3. Install Go packages
 
@@ -75,10 +76,10 @@ description: >
     - Navigate to desired plugins pages on the Integrations page
     - Enter the required information for the plugins you intend to use.
     - Refer to the following for more details on how to configure each one:
-        - [Jira](../Plugins/jira.md)
-        - [GitLab](../Plugins/gitlab.md)
-        - [Jenkins](../Plugins/jenkins.md)
-        - [GitHub](../Plugins/github.md): For users who'd like to collect GitHub data, we recommend reading our [GitHub data collection guide](../UserManuals/github-user-guide-v0.10.0.md) which covers the following steps in detail.
+        - [Jira](../Plugins/Jira.md)
+        - [GitLab](../Plugins/GitLab.md)
+        - [Jenkins](../Plugins/Jenkins.md)
+        - [GitHub](../Plugins/GitHub.md): For users who'd like to collect GitHub data, we recommend reading our [GitHub data collection guide](../UserManuals/GitHubUserGuide.md) which covers the following steps in detail.
     - Submit the form to update the values by clicking on the **Save Connection** button on each form page
 
 9. Visit `localhost:4000/pipelines/create` to RUN a Pipeline and trigger data collection.
@@ -111,14 +112,14 @@ description: >
     ]
     ```
 
-   Please refer to [Pipeline Advanced Mode](../UserManuals/create-pipeline-in-advanced-mode.md) for in-depth explanation.
+   Please refer to [Pipeline Advanced Mode](../UserManuals/AdvancedMode.md) for in-depth explanation.
 
 
 10. Click *View Dashboards* button in the top left when done, or visit `localhost:3002` (username: `admin`, password: `admin`).
 
    We use <a href="https://grafana.com/" target="_blank">Grafana</a> as a visualization tool to build charts for the <a href="https://github.com/merico-dev/lake/wiki/DataModel.Domain-layer-schema">data stored in our database</a>. Using SQL queries, we can add panels to build, save, and edit customized dashboards.
 
-   All the details on provisioning and customizing a dashboard can be found in the [Grafana Doc](../UserManuals/GRAFANA.md).
+   All the details on provisioning and customizing a dashboard can be found in the [Grafana Doc](../UserManuals/GrafanaUserGuide.md).
 
 11. (Optional) To run the tests:
 
@@ -126,5 +127,5 @@ description: >
     make test
     ```
 
-12. For DB migrations, please refer to [Migration Doc](../DeveloperManuals/MIGRATIONS.md).
-<br/><br/><br/>
+12. For DB migrations, please refer to [Migration Doc](../DeveloperManuals/DBMigration.md).
+
diff --git a/versioned_docs/version-0.11/DeveloperManuals/NOTIFICATION.md b/versioned_docs/version-v0.11.0/DeveloperManuals/Notifications.md
similarity index 97%
rename from versioned_docs/version-0.11/DeveloperManuals/NOTIFICATION.md
rename to versioned_docs/version-v0.11.0/DeveloperManuals/Notifications.md
index d5ebd2b..23456b4 100644
--- a/versioned_docs/version-0.11/DeveloperManuals/NOTIFICATION.md
+++ b/versioned_docs/version-v0.11.0/DeveloperManuals/Notifications.md
@@ -2,10 +2,9 @@
 title: "Notifications"
 description: >
   Notifications
+sidebar_position: 4
 ---
 
-# Notification
-
 ## Request
 Example request
 ```
diff --git a/versioned_docs/version-0.11/DeveloperManuals/PluginCreate.md b/versioned_docs/version-v0.11.0/DeveloperManuals/PluginImplementation.md
similarity index 99%
rename from versioned_docs/version-0.11/DeveloperManuals/PluginCreate.md
rename to versioned_docs/version-v0.11.0/DeveloperManuals/PluginImplementation.md
index 3f2a4ce..e3457c9 100644
--- a/versioned_docs/version-0.11/DeveloperManuals/PluginCreate.md
+++ b/versioned_docs/version-v0.11.0/DeveloperManuals/PluginImplementation.md
@@ -1,8 +1,8 @@
 ---
-title: "How to Implement a DevLake plugin?"
-sidebar_position: 1
+title: "Plugin Implementation"
+sidebar_position: 2
 description: >
-  How to Implement a DevLake plugin.
+  Plugin Implementation
 ---
 
 ## How to Implement a DevLake plugin?
diff --git a/versioned_docs/version-0.11/DeveloperManuals/_category_.json b/versioned_docs/version-v0.11.0/DeveloperManuals/_category_.json
similarity index 100%
rename from versioned_docs/version-0.11/DeveloperManuals/_category_.json
rename to versioned_docs/version-v0.11.0/DeveloperManuals/_category_.json
diff --git a/versioned_docs/version-0.11/EngineeringMetrics.md b/versioned_docs/version-v0.11.0/EngineeringMetrics.md
similarity index 100%
rename from versioned_docs/version-0.11/EngineeringMetrics.md
rename to versioned_docs/version-v0.11.0/EngineeringMetrics.md
diff --git a/docs/Overview/02-Architecture.md b/versioned_docs/version-v0.11.0/Overview/Architecture.md
similarity index 89%
rename from docs/Overview/02-Architecture.md
rename to versioned_docs/version-v0.11.0/Overview/Architecture.md
index 8daa859..2d780a5 100755
--- a/docs/Overview/02-Architecture.md
+++ b/versioned_docs/version-v0.11.0/Overview/Architecture.md
@@ -1,18 +1,18 @@
 ---
 title: "Architecture"
-linkTitle: "Architecture"
 description: >
-  Understand the architecture of Apache DevLake.
+  Understand the architecture of Apache DevLake
+sidebar_position: 2
 ---
 
 ## Architecture Overview
 
-<p align="center"><img src="/img/arch-component.svg" /></p>
+<p align="center"><img src="/img/Architecture/arch-component.svg" /></p>
 <p align="center">DevLake Components</p>
 
 A DevLake installation typically consists of the following components:
 
-- Config UI: A handy user interface to create, trigger, and debug Blueprints. A Blueprint specifies the where (data connection), what (data scope), how (transformation rule), and when (sync frequency) of a data pipeline.
+- Config UI: A handy user interface to create, trigger, and debug data pipelines.
 - API Server: The main programmatic interface of DevLake.
 - Runner: The runner does all the heavy-lifting for executing tasks. In the default DevLake installation, it runs within the API Server, but DevLake provides a temporal-based runner (beta) for production environments.
 - Database: The database stores both DevLake's metadata and user data collected by data pipelines. DevLake supports MySQL and PostgreSQL as of v0.11.
@@ -21,7 +21,7 @@ A DevLake installation typically consists of the following components:
 
 ## Dataflow
 
-<p align="center"><img src="/img/arch-dataflow.svg" /></p>
+<p align="center"><img src="/img/Architecture/arch-dataflow.svg" /></p>
 <p align="center">DevLake Dataflow</p>
 
 A typical plugin's dataflow is illustrated below:
diff --git a/versioned_docs/version-v0.11.0/Overview/Introduction.md b/versioned_docs/version-v0.11.0/Overview/Introduction.md
new file mode 100755
index 0000000..c8aacd9
--- /dev/null
+++ b/versioned_docs/version-v0.11.0/Overview/Introduction.md
@@ -0,0 +1,16 @@
+---
+title: "Introduction"
+description: General introduction of Apache DevLake
+sidebar_position: 1
+---
+
+## What is Apache DevLake?
+Apache DevLake is an open-source dev data platform that ingests, analyzes, and visualizes the fragmented data from DevOps tools to distill insights for engineering productivity.
+
+Apache DevLake is designed for developer teams looking to make better sense of their development process and to bring a more data-driven approach to their own practices. You can ask Apache DevLake many questions regarding your development process. Just connect and query.
+
+## What can be accomplished with DevLake?
+1. Collect DevOps data across the entire Software Development Life Cycle (SDLC) and connect the siloed data with a standard [data model](../DataModels/DevLakeDomainLayerSchema.md).
+2. Visualize out-of-the-box engineering [metrics](../EngineeringMetrics.md) in a series of use-case driven dashboards
+3. Easily extend DevLake to support your data sources, metrics, and dashboards with a flexible [framework](Architecture.md) for data collection and ETL.
+
diff --git a/versioned_docs/version-0.11/Overview/03-Roadmap.md b/versioned_docs/version-v0.11.0/Overview/Roadmap.md
similarity index 53%
rename from versioned_docs/version-0.11/Overview/03-Roadmap.md
rename to versioned_docs/version-v0.11.0/Overview/Roadmap.md
index f10b62e..9dcf0b3 100644
--- a/versioned_docs/version-0.11/Overview/03-Roadmap.md
+++ b/versioned_docs/version-v0.11.0/Overview/Roadmap.md
@@ -1,11 +1,8 @@
 ---
 title: "Roadmap"
-linkTitle: "Roadmap"
-tags: []
-categories: []
-weight: 3
 description: >
-  The goals and roadmap for DevLake in 2022.
+  The goals and roadmap for DevLake in 2022
+sidebar_position: 3
 ---
 
 
@@ -24,8 +21,8 @@ Apache DevLake is currently under rapid development. You are more than welcome t
 
 | Category | Features|
 | --- | --- |
-| More data sources across different [DevOps domains](../DataModels/01-DevLakeDomainLayerSchema.md) (Goal No.1 & 2)| Features in **bold** are of higher priority <br/><br/> Issue/Task Management: <ul><li>**Jira server** [#886 (closed)](https://github.com/apache/incubator-devlake/issues/886)</li><li>**Jira data center** [#1687 (closed)](https://github.com/apache/incubator-devlake/issues/1687)</li><li>GitLab Issues [#715 (closed)](https://github.com/apache/incubator-devlake/issues/715)</li> [...]
-| Improved data collection, [data models](../DataModels/01-DevLakeDomainLayerSchema.md) and data extensibility (Goal No.2)| Data Collection: <br/> <ul><li>Complete the logging system</li><li>Implement a good error handling mechanism during data collection</li></ul> Data Models:<ul><li>Introduce DBT to allow users to create and modify the domain layer schema. [#1479 (closed)](https://github.com/apache/incubator-devlake/issues/1479)</li><li>Design the data models for 5 new domains, please  [...]
+| More data sources across different [DevOps domains](../DataModels/DevLakeDomainLayerSchema.md) (Goal No.1 & 2)| Features in **bold** are of higher priority <br/><br/> Issue/Task Management: <ul><li>**Jira server** [#886 (closed)](https://github.com/apache/incubator-devlake/issues/886)</li><li>**Jira data center** [#1687 (closed)](https://github.com/apache/incubator-devlake/issues/1687)</li><li>GitLab Issues [#715 (closed)](https://github.com/apache/incubator-devlake/issues/715)</li><li [...]
+| Improved data collection, [data models](../DataModels/DevLakeDomainLayerSchema.md) and data extensibility (Goal No.2)| Data Collection: <br/> <ul><li>Complete the logging system</li><li>Implement a good error handling mechanism during data collection</li></ul> Data Models:<ul><li>Introduce DBT to allow users to create and modify the domain layer schema. [#1479 (closed)](https://github.com/apache/incubator-devlake/issues/1479)</li><li>Design the data models for 5 new domains, please ref [...]
 | Better user experience (Goal No.3) | For new users: <ul><li> Iterate on a clearer step-by-step guide to improve the pre-configuration experience.</li><li>Provide a new Config UI to reduce frictions for data configuration [#1700 (in-progress)](https://github.com/apache/incubator-devlake/issues/1700)</li><li> Showcase dashboard live demos to let users explore and learn about the dashboards. [#1784 (open)](https://github.com/apache/incubator-devlake/issues/1784)</li></ul>For returning use [...]
 
 
diff --git a/versioned_docs/version-0.11/Overview/_category_.json b/versioned_docs/version-v0.11.0/Overview/_category_.json
similarity index 100%
rename from versioned_docs/version-0.11/Overview/_category_.json
rename to versioned_docs/version-v0.11.0/Overview/_category_.json
diff --git a/versioned_docs/version-0.11/Plugins/dbt.md b/versioned_docs/version-v0.11.0/Plugins/Dbt.md
similarity index 100%
rename from versioned_docs/version-0.11/Plugins/dbt.md
rename to versioned_docs/version-v0.11.0/Plugins/Dbt.md
diff --git a/versioned_docs/version-0.11/Plugins/feishu.md b/versioned_docs/version-v0.11.0/Plugins/Feishu.md
similarity index 99%
rename from versioned_docs/version-0.11/Plugins/feishu.md
rename to versioned_docs/version-v0.11.0/Plugins/Feishu.md
index f19e4b0..c3e0eb6 100644
--- a/versioned_docs/version-0.11/Plugins/feishu.md
+++ b/versioned_docs/version-v0.11.0/Plugins/Feishu.md
@@ -4,8 +4,6 @@ description: >
   Feishu Plugin
 ---
 
-# Feishu
-
 ## Summary
 
 This plugin collects Feishu meeting data through [Feishu Openapi](https://open.feishu.cn/document/home/user-identity-introduction/introduction).
diff --git a/versioned_docs/version-0.11/Plugins/gitextractor.md b/versioned_docs/version-v0.11.0/Plugins/GitExtractor.md
similarity index 93%
rename from versioned_docs/version-0.11/Plugins/gitextractor.md
rename to versioned_docs/version-v0.11.0/Plugins/GitExtractor.md
index ac97fa3..d154e9e 100644
--- a/versioned_docs/version-0.11/Plugins/gitextractor.md
+++ b/versioned_docs/version-v0.11.0/Plugins/GitExtractor.md
@@ -4,8 +4,6 @@ description: >
   GitExtractor Plugin
 ---
 
-# Git Repo Extractor
-
 ## Summary
 This plugin extracts commits and references from a remote or local git repository. It then saves the data into the database or csv files.
 
@@ -14,7 +12,7 @@ This plugin extracts commits and references from a remote or local git repositor
 1. Use the Git repo extractor to retrieve data about commits and branches from your repository.
 2. Use the GitHub plugin to retrieve data about Github issues and PRs from your repository.
 NOTE: you can run only one issue collection stage as described in the Github Plugin README.
-3. Use the [RefDiff](./refdiff.md#development) plugin to calculate version diff, which will be stored in `refs_commits_diffs` table.
+3. Use the [RefDiff](./RefDiff.md#development) plugin to calculate version diff, which will be stored in `refs_commits_diffs` table.
 
 ## Sample Request
 
@@ -60,6 +58,6 @@ For more options (e.g., saving to a csv file instead of a db), please read `plug
 ## Development
 
 This plugin depends on `libgit2`, you need to install version 1.3.0 in order to run and debug this plugin on your local
-machine. [Click here](./refdiff.md#development) for a brief guide.
+machine. [Click here](./RefDiff.md#development) for a brief guide.
 
 <br/><br/><br/>
diff --git a/versioned_docs/version-0.11/Plugins/github.md b/versioned_docs/version-v0.11.0/Plugins/GitHub.md
similarity index 98%
rename from versioned_docs/version-0.11/Plugins/github.md
rename to versioned_docs/version-v0.11.0/Plugins/GitHub.md
index 463f9de..cca87b7 100644
--- a/versioned_docs/version-0.11/Plugins/github.md
+++ b/versioned_docs/version-v0.11.0/Plugins/GitHub.md
@@ -4,7 +4,6 @@ description: >
   GitHub Plugin
 ---
 
-# Github
 
 
 ## Summary
@@ -24,7 +23,7 @@ Here are some examples metrics using `GitHub` data:
 
 ## Screenshot
 
-![image](/img/github-demo.png)
+![image](/img/Plugins/github-demo.png)
 
 
 ## Configuration
diff --git a/versioned_docs/version-0.11/Plugins/gitlab.md b/versioned_docs/version-v0.11.0/Plugins/GitLab.md
similarity index 100%
rename from versioned_docs/version-0.11/Plugins/gitlab.md
rename to versioned_docs/version-v0.11.0/Plugins/GitLab.md
diff --git a/versioned_docs/version-0.11/Plugins/gitee.md b/versioned_docs/version-v0.11.0/Plugins/Gitee.md
similarity index 99%
rename from versioned_docs/version-0.11/Plugins/gitee.md
rename to versioned_docs/version-v0.11.0/Plugins/Gitee.md
index 0c4307a..6066fd2 100644
--- a/versioned_docs/version-0.11/Plugins/gitee.md
+++ b/versioned_docs/version-v0.11.0/Plugins/Gitee.md
@@ -4,8 +4,6 @@ description: >
   Gitee Plugin
 ---
 
-# Gitee
-
 ## Summary
 
 ## Configuration
diff --git a/versioned_docs/version-0.11/Plugins/jenkins.md b/versioned_docs/version-v0.11.0/Plugins/Jenkins.md
similarity index 99%
rename from versioned_docs/version-0.11/Plugins/jenkins.md
rename to versioned_docs/version-v0.11.0/Plugins/Jenkins.md
index 26e72a6..792165d 100644
--- a/versioned_docs/version-0.11/Plugins/jenkins.md
+++ b/versioned_docs/version-v0.11.0/Plugins/Jenkins.md
@@ -4,8 +4,6 @@ description: >
   Jenkins Plugin
 ---
 
-# Jenkins
-
 ## Summary
 
 This plugin collects Jenkins data through [Remote Access API](https://www.jenkins.io/doc/book/using/remote-access-api/). It then computes and visualizes various DevOps metrics from the Jenkins data.
diff --git a/versioned_docs/version-0.11/Plugins/jira.md b/versioned_docs/version-v0.11.0/Plugins/Jira.md
similarity index 100%
rename from versioned_docs/version-0.11/Plugins/jira.md
rename to versioned_docs/version-v0.11.0/Plugins/Jira.md
diff --git a/versioned_docs/version-0.11/Plugins/refdiff.md b/versioned_docs/version-v0.11.0/Plugins/RefDiff.md
similarity index 99%
rename from versioned_docs/version-0.11/Plugins/refdiff.md
rename to versioned_docs/version-v0.11.0/Plugins/RefDiff.md
index 35d3049..12950f4 100644
--- a/versioned_docs/version-0.11/Plugins/refdiff.md
+++ b/versioned_docs/version-v0.11.0/Plugins/RefDiff.md
@@ -4,8 +4,6 @@ description: >
   RefDiff Plugin
 ---
 
-# RefDiff
-
 
 ## Summary
 
diff --git a/versioned_docs/version-0.11/Plugins/tapd.md b/versioned_docs/version-v0.11.0/Plugins/Tapd.md
similarity index 84%
rename from versioned_docs/version-0.11/Plugins/tapd.md
rename to versioned_docs/version-v0.11.0/Plugins/Tapd.md
index fc93539..b8db89f 100644
--- a/versioned_docs/version-0.11/Plugins/tapd.md
+++ b/versioned_docs/version-v0.11.0/Plugins/Tapd.md
@@ -1,4 +1,8 @@
-# TAPD
+---
+title: "TAPD"
+description: >
+  TAPD Plugin
+---
 
 ## Summary
 
diff --git a/versioned_docs/version-0.11/Plugins/_category_.json b/versioned_docs/version-v0.11.0/Plugins/_category_.json
similarity index 100%
rename from versioned_docs/version-0.11/Plugins/_category_.json
rename to versioned_docs/version-v0.11.0/Plugins/_category_.json
diff --git a/versioned_docs/version-0.11/Plugins/github-connection-in-config-ui.png b/versioned_docs/version-v0.11.0/Plugins/github-connection-in-config-ui.png
similarity index 100%
rename from versioned_docs/version-0.11/Plugins/github-connection-in-config-ui.png
rename to versioned_docs/version-v0.11.0/Plugins/github-connection-in-config-ui.png
diff --git a/versioned_docs/version-0.11/Plugins/gitlab-connection-in-config-ui.png b/versioned_docs/version-v0.11.0/Plugins/gitlab-connection-in-config-ui.png
similarity index 100%
rename from versioned_docs/version-0.11/Plugins/gitlab-connection-in-config-ui.png
rename to versioned_docs/version-v0.11.0/Plugins/gitlab-connection-in-config-ui.png
diff --git a/versioned_docs/version-0.11/Plugins/jira-connection-config-ui.png b/versioned_docs/version-v0.11.0/Plugins/jira-connection-config-ui.png
similarity index 100%
rename from versioned_docs/version-0.11/Plugins/jira-connection-config-ui.png
rename to versioned_docs/version-v0.11.0/Plugins/jira-connection-config-ui.png
diff --git a/versioned_docs/version-0.11/Plugins/jira-more-setting-in-config-ui.png b/versioned_docs/version-v0.11.0/Plugins/jira-more-setting-in-config-ui.png
similarity index 100%
rename from versioned_docs/version-0.11/Plugins/jira-more-setting-in-config-ui.png
rename to versioned_docs/version-v0.11.0/Plugins/jira-more-setting-in-config-ui.png
diff --git a/versioned_docs/version-0.11/QuickStart/02-KubernetesSetup.md b/versioned_docs/version-v0.11.0/QuickStart/KubernetesSetup.md
similarity index 94%
rename from versioned_docs/version-0.11/QuickStart/02-KubernetesSetup.md
rename to versioned_docs/version-v0.11.0/QuickStart/KubernetesSetup.md
index 19bdc4d..e4faeba 100644
--- a/versioned_docs/version-0.11/QuickStart/02-KubernetesSetup.md
+++ b/versioned_docs/version-v0.11.0/QuickStart/KubernetesSetup.md
@@ -1,7 +1,8 @@
 ---
-title: "Deploy to Kubernetes"
+title: "Kubernetes Setup"
 description: >
-  The steps to install Apache DevLake in Kubernetes.
+  The steps to install Apache DevLake in Kubernetes
+sidebar_position: 2
 ---
 
 
@@ -9,7 +10,7 @@ We provide a sample [k8s-deploy.yaml](https://github.com/apache/incubator-devlak
 
 [k8s-deploy.yaml](https://github.com/apache/incubator-devlake/blob/main/k8s-deploy.yaml) will create a namespace `devlake` on your k8s cluster, and use `nodePort 30004` for `config-ui`,  `nodePort 30002` for `grafana` dashboards. If you would like to use certain version of Apache DevLake, please update the image tag of `grafana`, `devlake` and `config-ui` services to specify versions like `v0.10.1`.
 
-Here's the step-by-step guide:
+## Step-by-step guide
 
 1. Download [k8s-deploy.yaml](https://github.com/apache/incubator-devlake/blob/main/k8s-deploy.yaml) to local machine
 2. Some key points:
diff --git a/versioned_docs/version-0.11/QuickStart/01-LocalSetup.md b/versioned_docs/version-v0.11.0/QuickStart/LocalSetup.md
similarity index 72%
rename from versioned_docs/version-0.11/QuickStart/01-LocalSetup.md
rename to versioned_docs/version-v0.11.0/QuickStart/LocalSetup.md
index 9b81bc9..8e56a65 100644
--- a/versioned_docs/version-0.11/QuickStart/01-LocalSetup.md
+++ b/versioned_docs/version-v0.11.0/QuickStart/LocalSetup.md
@@ -1,16 +1,17 @@
 ---
-title: "Deploy Locally"
+title: "Local Setup"
 description: >
-  The steps to install DevLake locally.
+  The steps to install DevLake locally
+sidebar_position: 1
 ---
 
 
-#### Prerequisites
+## Prerequisites
 
 - [Docker v19.03.10+](https://docs.docker.com/get-docker)
 - [docker-compose v2.2.3+](https://docs.docker.com/compose/install/)
 
-#### Launch DevLake
+## Launch DevLake
 
 - Commands written `like this` are to be run in your terminal.
 
@@ -18,25 +19,25 @@ description: >
 2. Rename `env.example` to `.env`. For Mac/Linux users, please run `mv env.example .env` in the terminal.
 3. Run `docker-compose up -d` to launch DevLake.
 
-#### Configure data connections and collect data
+## Configure data connections and collect data
 
 1. Visit `config-ui` at `http://localhost:4000` in your browser to configure data connections.
    - Navigate to desired plugins on the Integrations page
    - Please reference the following for more details on how to configure each one:<br/>
-      - [Jira](../Plugins/jira.md)
-      - [GitHub](../Plugins/github.md): For users who'd like to collect GitHub data, we recommend reading our [GitHub data collection guide](../UserManuals/github-user-guide-v0.10.0.md) which covers the following steps in detail.
-      - [GitLab](../Plugins/gitlab.md)
-      - [Jenkins](../Plugins/jenkins.md)
+      - [Jira](../Plugins/Jira.md)
+      - [GitHub](../Plugins/GitHub.md): For users who'd like to collect GitHub data, we recommend reading our [GitHub data collection guide](../UserManuals/GitHubUserGuide.md) which covers the following steps in detail.
+      - [GitLab](../Plugins/GitLab.md)
+      - [Jenkins](../Plugins/Jenkins.md)
    - Submit the form to update the values by clicking on the **Save Connection** button on each form page
   - `devlake` takes a while to fully boot up. If `config-ui` complains about the API being unreachable, please wait a few seconds and try refreshing the page.
 2. Create pipelines to trigger data collection in `config-ui`
 3. Click *View Dashboards* button in the top left when done, or visit `localhost:3002` (username: `admin`, password: `admin`).
-   - We use [Grafana](https://grafana.com/) as a visualization tool to build charts for the [data](../DataModels/02-DataSupport.md) stored in our database.
+   - We use [Grafana](https://grafana.com/) as a visualization tool to build charts for the [data](../DataModels/DataSupport.md) stored in our database.
    - Using SQL queries, we can add panels to build, save, and edit customized dashboards.
-   - All the details on provisioning and customizing a dashboard can be found in the [Grafana Doc](../UserManuals/GRAFANA.md).
-4. To synchronize data periodically, users can set up recurring pipelines with DevLake's [pipeline blueprint](../UserManuals/recurring-pipeline.md) for details.
+   - All the details on provisioning and customizing a dashboard can be found in the [Grafana Doc](../UserManuals/GrafanaUserGuide.md).
+4. To synchronize data periodically, users can set up recurring pipelines; see DevLake's [pipeline blueprint](../UserManuals/RecurringPipelines.md) for details.
 
-#### Upgrade to a newer version
+## Upgrade to a newer version
 
 Support for database schema migration was introduced to DevLake in v0.10.0. From v0.10.0 onwards, users can upgrade their instance smoothly to a newer version. However, versions prior to v0.10.0 do not support upgrading to a newer version with a different database schema. We recommend users to deploy a new instance if needed.
 
diff --git a/versioned_docs/version-0.11/QuickStart/_category_.json b/versioned_docs/version-v0.11.0/QuickStart/_category_.json
similarity index 100%
rename from versioned_docs/version-0.11/QuickStart/_category_.json
rename to versioned_docs/version-v0.11.0/QuickStart/_category_.json
diff --git a/docs/UserManuals/create-pipeline-in-advanced-mode.md b/versioned_docs/version-v0.11.0/UserManuals/AdvancedMode.md
similarity index 97%
rename from docs/UserManuals/create-pipeline-in-advanced-mode.md
rename to versioned_docs/version-v0.11.0/UserManuals/AdvancedMode.md
index 14afd01..4323133 100644
--- a/docs/UserManuals/create-pipeline-in-advanced-mode.md
+++ b/versioned_docs/version-v0.11.0/UserManuals/AdvancedMode.md
@@ -1,8 +1,8 @@
 ---
-title: "Create Pipeline in Advanced Mode"
+title: "Advanced Mode"
 sidebar_position: 2
 description: >
-  Create Pipeline in Advanced Mode
+  Advanced Mode
 ---
 
 
diff --git a/docs/UserManuals/github-user-guide-v0.10.0.md b/versioned_docs/version-v0.11.0/UserManuals/GitHubUserGuide.md
similarity index 97%
rename from docs/UserManuals/github-user-guide-v0.10.0.md
rename to versioned_docs/version-v0.11.0/UserManuals/GitHubUserGuide.md
index 9a9014b..fa67456 100644
--- a/docs/UserManuals/github-user-guide-v0.10.0.md
+++ b/versioned_docs/version-v0.11.0/UserManuals/GitHubUserGuide.md
@@ -1,8 +1,8 @@
 ---
-title: "GitHub User Guide v0.10.0"
+title: "GitHub User Guide"
 sidebar_position: 4
 description: >
-  GitHub User Guide v0.10.0
+  GitHub User Guide
 ---
 
 ## Summary
@@ -109,7 +109,7 @@ See the pipeline finishes (progress 100%):
 
 ### Step 4 - [Optional] Set up a recurring pipeline to keep data fresh
 
-Please see [How to create recurring pipelines](./recurring-pipeline.md) for details.
+Please see [How to create recurring pipelines](./RecurringPipelines.md) for details.
 
 
 
diff --git a/docs/UserManuals/GRAFANA.md b/versioned_docs/version-v0.11.0/UserManuals/GrafanaUserGuide.md
similarity index 99%
rename from docs/UserManuals/GRAFANA.md
rename to versioned_docs/version-v0.11.0/UserManuals/GrafanaUserGuide.md
index bd81651..e475702 100644
--- a/docs/UserManuals/GRAFANA.md
+++ b/versioned_docs/version-v0.11.0/UserManuals/GrafanaUserGuide.md
@@ -1,8 +1,8 @@
 ---
-title: "How to use Grafana"
+title: "Grafana User Guide"
 sidebar_position: 1
 description: >
-  How to use Grafana
+  Grafana User Guide
 ---
 
 
diff --git a/versioned_docs/version-0.11/UserManuals/recurring-pipeline.md b/versioned_docs/version-v0.11.0/UserManuals/RecurringPipelines.md
similarity index 91%
rename from versioned_docs/version-0.11/UserManuals/recurring-pipeline.md
rename to versioned_docs/version-v0.11.0/UserManuals/RecurringPipelines.md
index 3e92349..ce82b1e 100644
--- a/versioned_docs/version-0.11/UserManuals/recurring-pipeline.md
+++ b/versioned_docs/version-v0.11.0/UserManuals/RecurringPipelines.md
@@ -1,8 +1,8 @@
 ---
-title: "Create Recurring Pipelines"
+title: "Recurring Pipelines"
 sidebar_position: 3
 description: >
-  Create Recurring Pipelines
+  Recurring Pipelines
 ---
 
 ## How to create recurring pipelines?
diff --git a/docs/UserManuals/team-feature-user-guide.md b/versioned_docs/version-v0.11.0/UserManuals/TeamConfiguration.md
similarity index 94%
rename from docs/UserManuals/team-feature-user-guide.md
rename to versioned_docs/version-v0.11.0/UserManuals/TeamConfiguration.md
index 07a080b..4646ffa 100644
--- a/docs/UserManuals/team-feature-user-guide.md
+++ b/versioned_docs/version-v0.11.0/UserManuals/TeamConfiguration.md
@@ -1,8 +1,8 @@
 ---
-title: "Team Feature User Guide"
+title: "Team Configuration"
 sidebar_position: 6
 description: >
-  Team Feature User Guide
+  Team Configuration
 ---
 ## Summary
 This is a brief step-by-step guide to using the team feature.
@@ -31,7 +31,7 @@ b. The actual api request.
     iii. After successful execution, the teams table is generated and the data can be seen in the database table teams. 
    (Note: connect to the database with the host, port, username and password, using a SQL tool such as Sequel Ace or DataGrip, or via the command line: mysql -h `ip` -u `username` -p -P `port`)
 
-![image](/img/teamflow3.png)
+![image](/img/Team/teamflow3.png)
 
 
 ## Step 2 - Construct user tables (roster)
@@ -52,11 +52,11 @@ b. The actual api request.
 
     iii. After successful execution, the users table is generated and the data can be seen in the database table users.
 
-![image](/img/teamflow1.png)
+![image](/img/Team/teamflow1.png)
     
    iv. The team_users table is also generated; you can see the data in the team_users table.
 
-![image](/img/teamflow2.png)
+![image](/img/Team/teamflow2.png)
 
## Step 3 - Update users if needed
If there is a problem with the team_users associations or with the data in users, simply re-send the users API request (i.e. b in Step 2 above).
@@ -64,7 +64,7 @@ If there is a problem with team_users association or data in users, just re-put
## Step 4 - Collect accounts
The accounts table is collected through DevLake's data collection pipelines. You can see the accounts table information in the database.
 
-![image](/img/teamflow4.png)
+![image](/img/Team/teamflow4.png)
 
 ## Step 5 - Automatically match existing accounts and users through api requests
 
@@ -91,7 +91,7 @@ curl --location --request POST '127.0.0.1:8080/pipelines' \
 
 b. After successful execution, the user_accounts table is generated, and you can see the data in table user_accounts.
 
-![image](/img/teamflow5.png)
+![image](/img/Team/teamflow5.png)
 
## Step 6 - Get user_accounts relationships
After the user_accounts relationships are generated, you can fetch them with a GET request to confirm that users and accounts are matched correctly and that the matched accounts are complete.
@@ -103,7 +103,7 @@ b. The corresponding curl command:
 curl --location --request GET 'http://127.0.0.1:8080/plugins/org/user_account_mapping.csv'
 ```
 
-![image](/img/teamflow6.png)
+![image](/img/Team/teamflow6.png)
 
c. You can also check with SQL statements; the following SQL statement is provided for reference only.
 ```
@@ -123,7 +123,7 @@ curl --location --request PUT 'http://127.0.0.1:8080/plugins/org/user_account_ma
 
 b. You can see that the data in the user_accounts table has been updated.
 
-![image](/img/teamflow7.png)
+![image](/img/Team/teamflow7.png)
 
 
**The above is the complete workflow of the team feature.**
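As a rough illustration of what the automatic matching in Step 5 produces: `users` and `accounts` get paired into `user_accounts` rows. The sketch below is an assumption-laden illustration only (the field names and the email-based heuristic are invented for this example, not DevLake's actual matching algorithm):

```javascript
// Illustrative sketch only: pair users with accounts by email.
// The field names and the email heuristic are assumptions for illustration,
// not DevLake's actual matching algorithm.
function matchUserAccounts(users, accounts) {
  const byEmail = new Map(users.map(u => [u.email, u.id]));
  return accounts
    .filter(a => byEmail.has(a.email))
    .map(a => ({ user_id: byEmail.get(a.email), account_id: a.id }));
}

const users = [{ id: 'org:User:1', email: 'alice@example.com' }];
const accounts = [
  { id: 'github:Account:2', email: 'alice@example.com' },
  { id: 'github:Account:3', email: 'bob@example.com' },
];
console.log(matchUserAccounts(users, accounts));
// [{ user_id: 'org:User:1', account_id: 'github:Account:2' }]
```

Unmatched accounts (like `bob@example.com` above) are the ones you would fix by hand via the CSV upload in Step 7.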
diff --git a/versioned_docs/version-0.11/UserManuals/03-TemporalSetup.md b/versioned_docs/version-v0.11.0/UserManuals/TemporalSetup.md
similarity index 100%
rename from versioned_docs/version-0.11/UserManuals/03-TemporalSetup.md
rename to versioned_docs/version-v0.11.0/UserManuals/TemporalSetup.md
diff --git a/versioned_docs/version-0.11/UserManuals/_category_.json b/versioned_docs/version-v0.11.0/UserManuals/_category_.json
similarity index 100%
rename from versioned_docs/version-0.11/UserManuals/_category_.json
rename to versioned_docs/version-v0.11.0/UserManuals/_category_.json
diff --git a/versioned_sidebars/version-0.11-sidebars.json b/versioned_sidebars/version-v0.11.0-sidebars.json
similarity index 100%
rename from versioned_sidebars/version-0.11-sidebars.json
rename to versioned_sidebars/version-v0.11.0-sidebars.json
diff --git a/versions.json b/versions.json
index fff9bee..909d780 100644
--- a/versions.json
+++ b/versions.json
@@ -1,3 +1,3 @@
 [
-  "0.11"
+  "v0.11.0"
 ]

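The rename above touches three places at once: entries in `versions.json`, directory names under `versioned_docs/`, and file names under `versioned_sidebars/`. All three embed the same label, which is why changing `0.11` to `v0.11.0` fans out across the whole patch. A small sketch of that naming relation (the helper function is hypothetical, not part of the repo):

```javascript
// Hypothetical helper, for illustration only: given a version label from
// versions.json, derive the two artifact paths that must use the same label.
function versionArtifacts(label) {
  return {
    docsDir: `versioned_docs/version-${label}`,
    sidebarsFile: `versioned_sidebars/version-${label}-sidebars.json`,
  };
}

console.log(versionArtifacts('v0.11.0').docsDir);
// versioned_docs/version-v0.11.0
console.log(versionArtifacts('v0.11.0').sidebarsFile);
// versioned_sidebars/version-v0.11.0-sidebars.json
```

If any one of the three is renamed without the others, the versioned docs fail to resolve, which is consistent with the string of "fixed versioning again" commits in this push.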

[incubator-devlake-website] 06/06: fix: fixed versioning again

Posted by zk...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

zky pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/incubator-devlake-website.git

commit 137b4d6dab574017da365a3716d1270c018d4b28
Author: yumengwang03 <yu...@merico.dev>
AuthorDate: Wed Jul 13 23:46:34 2022 +0800

    fix: fixed versioning again
---
 docusaurus.config.js                               |  50 +-
 .../Dashboards/AverageRequirementLeadTime.md       |   9 +
 .../Dashboards/CommitCountByAuthor.md              |   9 +
 .../version-v0.11.0/Dashboards/DetailedBugInfo.md  |   9 +
 .../version-v0.11.0/Dashboards/GitHubBasic.md      |   9 +
 .../GitHubReleaseQualityAndContributionAnalysis.md |   9 +
 .../version-v0.11.0/Dashboards/Jenkins.md          |   9 +
 .../version-v0.11.0/Dashboards/WeeklyBugRetro.md   |   9 +
 .../version-v0.11.0/Dashboards/_category_.json     |   4 +
 .../version-v0.11.0/DataModels/DataSupport.md      |  59 +++
 .../DataModels/DevLakeDomainLayerSchema.md         | 532 +++++++++++++++++++++
 .../version-v0.11.0/DataModels/_category_.json     |   4 +
 .../DeveloperManuals/DBMigration.md                |  37 ++
 .../version-v0.11.0/DeveloperManuals/Dal.md        | 173 +++++++
 .../DeveloperManuals/DeveloperSetup.md             | 131 +++++
 .../DeveloperManuals/Notifications.md              |  32 ++
 .../DeveloperManuals/PluginImplementation.md       | 292 +++++++++++
 .../DeveloperManuals/_category_.json               |   4 +
 .../version-v0.11.0/EngineeringMetrics.md          | 195 ++++++++
 .../version-v0.11.0/Overview/Architecture.md       |  39 ++
 .../version-v0.11.0/Overview/Introduction.md       |  16 +
 versioned_docs/version-v0.11.0/Overview/Roadmap.md |  33 ++
 .../version-v0.11.0/Overview/_category_.json       |   4 +
 .../version-v0.11.0/Plugins/_category_.json        |   4 +
 versioned_docs/version-v0.11.0/Plugins/dbt.md      |  67 +++
 versioned_docs/version-v0.11.0/Plugins/feishu.md   |  64 +++
 versioned_docs/version-v0.11.0/Plugins/gitee.md    | 112 +++++
 .../version-v0.11.0/Plugins/gitextractor.md        |  63 +++
 .../Plugins/github-connection-in-config-ui.png     | Bin 0 -> 51159 bytes
 versioned_docs/version-v0.11.0/Plugins/github.md   |  95 ++++
 .../Plugins/gitlab-connection-in-config-ui.png     | Bin 0 -> 66616 bytes
 versioned_docs/version-v0.11.0/Plugins/gitlab.md   |  94 ++++
 versioned_docs/version-v0.11.0/Plugins/jenkins.md  |  59 +++
 .../Plugins/jira-connection-config-ui.png          | Bin 0 -> 76052 bytes
 .../Plugins/jira-more-setting-in-config-ui.png     | Bin 0 -> 300823 bytes
 versioned_docs/version-v0.11.0/Plugins/jira.md     | 253 ++++++++++
 versioned_docs/version-v0.11.0/Plugins/refdiff.md  | 116 +++++
 versioned_docs/version-v0.11.0/Plugins/tapd.md     |  16 +
 .../version-v0.11.0/QuickStart/KubernetesSetup.md  |  33 ++
 .../version-v0.11.0/QuickStart/LocalSetup.md       |  44 ++
 .../version-v0.11.0/QuickStart/_category_.json     |   4 +
 .../version-v0.11.0/UserManuals/AdvancedMode.md    |  89 ++++
 .../version-v0.11.0/UserManuals/GitHubUserGuide.md | 118 +++++
 .../UserManuals/GrafanaUserGuide.md                | 120 +++++
 .../UserManuals/RecurringPipelines.md              |  30 ++
 .../UserManuals/TeamConfiguration.md               | 129 +++++
 .../version-v0.11.0/UserManuals/TemporalSetup.md   |  35 ++
 .../version-v0.11.0/UserManuals/_category_.json    |   4 +
 versioned_sidebars/version-v0.11.0-sidebars.json   |   8 +
 versions.json                                      |   3 +
 50 files changed, 3203 insertions(+), 25 deletions(-)

diff --git a/docusaurus.config.js b/docusaurus.config.js
index 4beb65d..a0d36c2 100644
--- a/docusaurus.config.js
+++ b/docusaurus.config.js
@@ -1,6 +1,6 @@
 const lightCodeTheme = require('prism-react-renderer/themes/github');
 const darkCodeTheme = require('prism-react-renderer/themes/dracula');
-// const versions = require('./versions.json');
+const versions = require('./versions.json');
 
 
 // With JSDoc @type annotations, IDEs can provide config autocompletion
@@ -26,14 +26,14 @@ const darkCodeTheme = require('prism-react-renderer/themes/dracula');
           sidebarPath: require.resolve('./sidebars.js'),
           // set to undefined to remove Edit this Page
           editUrl: 'https://github.com/apache/incubator-devlake-website/edit/main',
-          // versions: {
-          //   current: {
-          //       path: '',
-          //   },
-          //   [versions[0]]: {
-          //       path: versions[0],
-          //   }
-          // }
+          versions: {
+            current: {
+                path: '',
+            },
+            [versions[0]]: {
+                path: versions[0],
+            }
+          }
         },
         blog: {
           showReadingTime: true,
@@ -86,24 +86,24 @@ const darkCodeTheme = require('prism-react-renderer/themes/dracula');
         },
         items: [
           {
-            type: 'doc',
-            docId: 'Overview/Introduction',
+            // type: 'doc',
+            // docId: 'Overview/Introduction',
             position: 'right',
             label: 'Docs',
-          //   items: [
-          //     ...versions.slice(0, versions.length - 2).map((version) => ({
-          //       label: version,
-          //       to: `docs/${version}/Overview/Introduction`,
-          //    })),
-          //    ...versions.slice(versions.length - 2, versions.length).map((version) => ({
-          //     label: (version === "1.x") ? "1.x(Not Apache Release)" : version,
-          //     to: `docs/${version}/Overview/Introduction`,
-          // })),
-          //     {
-          //         label: "Latest",
-          //         to: "/docs/Overview/Introduction",
-          //     }
-          //   ]
+            items: [
+              ...versions.slice(0, versions.length - 2).map((version) => ({
+                label: version,
+                to: `docs/${version}/Overview/Introduction`,
+             })),
+             ...versions.slice(versions.length - 2, versions.length).map((version) => ({
+              label: (version === "1.x") ? "1.x(Not Apache Release)" : version,
+              to: `docs/${version}/Overview/Introduction`,
+          })),
+              {
+                  label: "Latest",
+                  to: "/docs/Overview/Introduction",
+              }
+            ]
           },
          {
             type: 'doc',
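The navbar dropdown re-enabled in the hunk above is built from `versions.json` via two `slice` calls. With the single-entry `["v0.11.0"]` that this patch ships, the first slice is empty and the second yields the lone version, so the dropdown shows just `v0.11.0` plus `Latest`. A standalone walk-through of the two expressions (plain Node, no Docusaurus needed):

```javascript
// Reproduce the two slice expressions from docusaurus.config.js with the
// single-entry versions.json this patch ships.
const versions = ['v0.11.0'];

// slice(0, -1) on a one-element array selects nothing.
const older = versions.slice(0, versions.length - 2);
// slice(-1, 1) starts at index 0 and ends before index 1.
const newest = versions.slice(versions.length - 2, versions.length);

console.log(older);  // []
console.log(newest); // ['v0.11.0']
```

The negative start/end indices only behave this way because `Array.prototype.slice` clamps them; once more versions are added, the first slice begins returning the older entries.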
diff --git a/versioned_docs/version-v0.11.0/Dashboards/AverageRequirementLeadTime.md b/versioned_docs/version-v0.11.0/Dashboards/AverageRequirementLeadTime.md
new file mode 100644
index 0000000..0710335
--- /dev/null
+++ b/versioned_docs/version-v0.11.0/Dashboards/AverageRequirementLeadTime.md
@@ -0,0 +1,9 @@
+---
+sidebar_position: 6
+title: "Average Requirement Lead Time by Assignee"
+description: >
+  DevLake Live Demo
+---
+
+# Average Requirement Lead Time by Assignee
+<iframe src="https://grafana-lake.demo.devlake.io/d/q27fk7cnk/demo-average-requirement-lead-time-by-assignee?orgId=1&from=1635945684845&to=1651584084846" width="100%" height="940px"></iframe>
\ No newline at end of file
diff --git a/versioned_docs/version-v0.11.0/Dashboards/CommitCountByAuthor.md b/versioned_docs/version-v0.11.0/Dashboards/CommitCountByAuthor.md
new file mode 100644
index 0000000..04e029c
--- /dev/null
+++ b/versioned_docs/version-v0.11.0/Dashboards/CommitCountByAuthor.md
@@ -0,0 +1,9 @@
+---
+sidebar_position: 2
+title: "Commit Count by Author"
+description: >
+  DevLake Live Demo
+---
+
+# Commit Count by Author
+<iframe src="https://grafana-lake.demo.devlake.io/d/F0iYknc7z/demo-commit-count-by-author?orgId=1&from=1634911190615&to=1650635990615" width="100%" height="820px"></iframe>
diff --git a/versioned_docs/version-v0.11.0/Dashboards/DetailedBugInfo.md b/versioned_docs/version-v0.11.0/Dashboards/DetailedBugInfo.md
new file mode 100644
index 0000000..b777617
--- /dev/null
+++ b/versioned_docs/version-v0.11.0/Dashboards/DetailedBugInfo.md
@@ -0,0 +1,9 @@
+---
+sidebar_position: 4
+title: "Detailed Bug Info"
+description: >
+  DevLake Live Demo
+---
+
+# Detailed Bug Info
+<iframe src="https://grafana-lake.demo.devlake.io/d/s48Lzn5nz/demo-detailed-bug-info?orgId=1&from=1635945709579&to=1651584109579" width="100%" height="800px"></iframe>
\ No newline at end of file
diff --git a/versioned_docs/version-v0.11.0/Dashboards/GitHubBasic.md b/versioned_docs/version-v0.11.0/Dashboards/GitHubBasic.md
new file mode 100644
index 0000000..7ea28cd
--- /dev/null
+++ b/versioned_docs/version-v0.11.0/Dashboards/GitHubBasic.md
@@ -0,0 +1,9 @@
+---
+sidebar_position: 1
+title: "GitHub Basic Metrics"
+description: >
+  DevLake Live Demo
+---
+
+# GitHub Basic Metrics
+<iframe src="https://grafana-lake.demo.devlake.io/d/KXWvOFQnz/github_basic_metrics?orgId=1&from=1635945132339&to=1651583532339" width="100%" height="3080px"></iframe>
\ No newline at end of file
diff --git a/versioned_docs/version-v0.11.0/Dashboards/GitHubReleaseQualityAndContributionAnalysis.md b/versioned_docs/version-v0.11.0/Dashboards/GitHubReleaseQualityAndContributionAnalysis.md
new file mode 100644
index 0000000..61db78f
--- /dev/null
+++ b/versioned_docs/version-v0.11.0/Dashboards/GitHubReleaseQualityAndContributionAnalysis.md
@@ -0,0 +1,9 @@
+---
+sidebar_position: 5
+title: "GitHub Release Quality and Contribution Analysis"
+description: >
+  DevLake Live Demo
+---
+
+# GitHub Release Quality and Contribution Analysis
+<iframe src="https://grafana-lake.demo.devlake.io/d/2xuOaQUnk1/github_release_quality_and_contribution_analysis?orgId=1&from=1635945847658&to=1651584247658" width="100%" height="2800px"></iframe>
\ No newline at end of file
diff --git a/versioned_docs/version-v0.11.0/Dashboards/Jenkins.md b/versioned_docs/version-v0.11.0/Dashboards/Jenkins.md
new file mode 100644
index 0000000..506a3c9
--- /dev/null
+++ b/versioned_docs/version-v0.11.0/Dashboards/Jenkins.md
@@ -0,0 +1,9 @@
+---
+sidebar_position: 7
+title: "Jenkins"
+description: >
+  DevLake Live Demo
+---
+
+# Jenkins
+<iframe src="https://grafana-lake.demo.devlake.io/d/W8AiDFQnk/jenkins?orgId=1&from=1635945337632&to=1651583737632" width="100%" height="1060px"></iframe>
\ No newline at end of file
diff --git a/versioned_docs/version-v0.11.0/Dashboards/WeeklyBugRetro.md b/versioned_docs/version-v0.11.0/Dashboards/WeeklyBugRetro.md
new file mode 100644
index 0000000..adbc4e8
--- /dev/null
+++ b/versioned_docs/version-v0.11.0/Dashboards/WeeklyBugRetro.md
@@ -0,0 +1,9 @@
+---
+sidebar_position: 3
+title: "Weekly Bug Retro"
+description: >
+  DevLake Live Demo
+---
+
+# Weekly Bug Retro
+<iframe src="https://grafana-lake.demo.devlake.io/d/-5EKA5w7k/weekly-bug-retro?orgId=1&from=1635945873174&to=1651584273174" width="100%" height="2240px"></iframe>
diff --git a/versioned_docs/version-v0.11.0/Dashboards/_category_.json b/versioned_docs/version-v0.11.0/Dashboards/_category_.json
new file mode 100644
index 0000000..b27df44
--- /dev/null
+++ b/versioned_docs/version-v0.11.0/Dashboards/_category_.json
@@ -0,0 +1,4 @@
+{
+  "label": "Dashboards (Live Demo)",
+  "position": 9
+}
diff --git a/versioned_docs/version-v0.11.0/DataModels/DataSupport.md b/versioned_docs/version-v0.11.0/DataModels/DataSupport.md
new file mode 100644
index 0000000..4cb4b61
--- /dev/null
+++ b/versioned_docs/version-v0.11.0/DataModels/DataSupport.md
@@ -0,0 +1,59 @@
+---
+title: "Data Support"
+description: >
+  Data sources that DevLake supports
+sidebar_position: 1
+---
+
+
+## Data Sources and Data Plugins
+DevLake supports the following data sources. The data from each data source is collected with one or more plugins. There are 9 data plugins in total: `ae`, `feishu`, `gitextractor`, `github`, `gitlab`, `jenkins`, `jira`, `refdiff` and `tapd`.
+
+
+| Data Source | Versions                             | Plugins |
+|-------------|--------------------------------------|-------- |
+| AE          |                                      | `ae`    |
+| Feishu      | Cloud                                |`feishu` |
+| GitHub      | Cloud                                |`github`, `gitextractor`, `refdiff` |
+| Gitlab      | Cloud, Community Edition 13.x+       |`gitlab`, `gitextractor`, `refdiff` |
+| Jenkins     | 2.263.x+                             |`jenkins` |
+| Jira        | Cloud, Server 8.x+, Data Center 8.x+ |`jira` |
+| TAPD        | Cloud                                | `tapd` |
+
+
+
+## Data Collection Scope By Each Plugin
+This table shows the entities collected by each plugin. Domain layer entities in this table are consistent with the entities [here](./DevLakeDomainLayerSchema.md).
+
+| Domain Layer Entities | ae             | gitextractor | github         | gitlab  | jenkins | jira    | refdiff | tapd    |
+| --------------------- | -------------- | ------------ | -------------- | ------- | ------- | ------- | ------- | ------- |
+| commits               | update commits | default      | not-by-default | default |         |         |         |         |
+| commit_parents        |                | default      |                |         |         |         |         |         |
+| commit_files          |                | default      |                |         |         |         |         |         |
+| pull_requests         |                |              | default        | default |         |         |         |         |
+| pull_request_commits  |                |              | default        | default |         |         |         |         |
+| pull_request_comments |                |              | default        | default |         |         |         |         |
+| pull_request_labels   |                |              | default        |         |         |         |         |         |
+| refs                  |                | default      |                |         |         |         |         |         |
+| refs_commits_diffs    |                |              |                |         |         |         | default |         |
+| refs_issues_diffs     |                |              |                |         |         |         | default |         |
+| ref_pr_cherry_picks   |                |              |                |         |         |         | default |         |
+| repos                 |                |              | default        | default |         |         |         |         |
+| repo_commits          |                | default      | default        |         |         |         |         |         |
+| board_repos           |                |              |                |         |         |         |         |         |
+| issue_commits         |                |              |                |         |         |         |         |         |
+| issue_repo_commits    |                |              |                |         |         |         |         |         |
+| pull_request_issues   |                |              |                |         |         |         |         |         |
+| refs_issues_diffs     |                |              |                |         |         |         |         |         |
+| boards                |                |              | default        |         |         | default |         | default |
+| board_issues          |                |              | default        |         |         | default |         | default |
+| issue_changelogs      |                |              |                |         |         | default |         | default |
+| issues                |                |              | default        |         |         | default |         | default |
+| issue_comments        |                |              |                |         |         | default |         | default |
+| issue_labels          |                |              | default        |         |         |         |         |         |
+| sprints               |                |              |                |         |         | default |         | default |
+| issue_worklogs        |                |              |                |         |         | default |         | default |
+| users                 |                |              | default        |         |         | default |         | default |
+| builds                |                |              |                |         | default |         |         |         |
+| jobs                  |                |              |                |         | default |         |         |         |
+
diff --git a/versioned_docs/version-v0.11.0/DataModels/DevLakeDomainLayerSchema.md b/versioned_docs/version-v0.11.0/DataModels/DevLakeDomainLayerSchema.md
new file mode 100644
index 0000000..996d397
--- /dev/null
+++ b/versioned_docs/version-v0.11.0/DataModels/DevLakeDomainLayerSchema.md
@@ -0,0 +1,532 @@
+---
+title: "Domain Layer Schema"
+description: >
+  DevLake Domain Layer Schema
+sidebar_position: 2
+---
+
+## Summary
+
+This document describes the entities in DevLake's domain layer schema and their relationships.
+
+Data in the domain layer is transformed from the data in the tool layer. The tool layer schema is based on the data from specific tools such as Jira, GitHub, Gitlab, Jenkins, etc. The domain layer schema can be regarded as an abstraction of tool-layer schemas.
+
+The domain layer schema itself includes 2 logical layers: a `DWD` layer and a `DWM` layer. The DWD layer stores detailed data points, while the DWM layer lightly aggregates and processes DWD data to store more organized details or mid-level metrics.
+
+
+## Use Cases
+1. Users can make customized Grafana dashboards based on the domain layer schema.
+2. Contributors can refer to this data model to complete the ETL logic when adding new data source plugins.
+
+
+## Data Model
+
+This is the up-to-date domain layer schema for DevLake v0.10.x. Tables (entities) are categorized into 5 domains.
+1. Issue tracking domain entities: Jira issues, GitHub issues, GitLab issues, etc
+2. Source code management domain entities: Git/GitHub/Gitlab commits and refs, etc
+3. Code review domain entities: GitHub PRs, Gitlab MRs, etc
+4. CI/CD domain entities: Jenkins jobs & builds, etc
+5. Cross-domain entities: entities that map entities from different domains to break data isolation
+
+
+### Schema Diagram
+![Domain Layer Schema](/img/DomainLayerSchema/schema-diagram.png)
+
+When reading the schema, you'll notice that many tables' primary key is called `id`. Unlike an auto-increment id or a UUID, `id` is a string composed of several parts to uniquely identify similar entities (e.g. repo) from different platforms (e.g. Github/Gitlab) and allow them to co-exist in a single table.
+
+Tables that end with WIP are still under development.
+
+
+### Naming Conventions
+
+1. The name of a table is in plural form. Eg. boards, issues, etc.
+2. The name of a table that describes the relation between 2 entities is in the form of [BigEntity in singular form]\_[SmallEntity in plural form]. Eg. board_issues, sprint_issues, pull_request_comments, etc.
+3. Values of enum-type fields are in capital letters. Eg. [table.issues.type](https://merico.feishu.cn/docs/doccnvyuG9YpVc6lvmWkmmbZtUc#ZDCw9k) has 3 values: REQUIREMENT, BUG, INCIDENT. Values that are phrases, such as 'IN_PROGRESS' of [table.issues.status](https://merico.feishu.cn/docs/doccnvyuG9YpVc6lvmWkmmbZtUc#ZDCw9k), are separated with underscore '\_'.
+
+<br/>
+
+## DWD Entities - (Data Warehouse Detail)
+
+### Domain 1 - Issue Tracking
+
+#### 1. Issues
+
+An `issue` is the abstraction of Jira/Github/GitLab/TAPD/... issues.
+
+| **field**                   | **type** | **length** | **description**                                                                                                                                                                                                                                                                                                                                                                                                                                      [...]
+| :-------------------------- | :------- | :--------- | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- [...]
+| `id`                        | varchar  | 255        | An issue's `id` is composed of < plugin >:< Entity >:< PK0 >[:PK1]..." <ul><li>For Github issues, a Github issue's id is like "github:GithubIssues:< GithubIssueId >". Eg. 'github:GithubIssues:1049355647'</li> <li>For Jira issues, a Github repo's id is like "jira:JiraIssues:< JiraSourceId >:< JiraIssueId >". Eg. 'jira:JiraIssues:1:10063'. < JiraSourceId > is used to identify which jira source the issue came from, since DevLake users  [...]
+| `number`                    | varchar  | 255        | The number of this issue. For example, the number of this Github [issue](https://github.com/merico-dev/lake/issues/1145) is 1145.                                                                                                                                                                                                                                                                                                                    [...]
+| `url`                       | varchar  | 255        | The url of the issue. It's a web address in most cases.                                                                                                                                                                                                                                                                                                                                                                                              [...]
+| `title`                     | varchar  | 255        | The title of an issue                                                                                                                                                                                                                                                                                                                                                                                                                                [...]
+| `description`               | longtext |            | The detailed description/summary of an issue                                                                                                                                                                                                                                                                                                                                                                                                         [...]
+| `type`                      | varchar  | 255        | The standard type of this issue. There're 3 standard types: <ul><li>REQUIREMENT: this issue is a feature</li><li>BUG: this issue is a bug found during test</li><li>INCIDENT: this issue is a bug found after release</li></ul>The 3 standard types are transformed from the original types of an issue. The transformation rule is set in the '.env' file or 'config-ui' before data collection. For issues with an original type that has not mapp [...]
+| `status`                    | varchar  | 255        | The standard statuses of this issue. There're 3 standard statuses: <ul><li> TODO: this issue is in backlog or to-do list</li><li>IN_PROGRESS: this issue is in progress</li><li>DONE: this issue is resolved or closed</li></ul>The 3 standard statuses are transformed from the original statuses of an issue. The transformation rule: <ul><li>For Jira issue status: transformed from the Jira issue's `statusCategory`. Jira issue has 3 default [...]
+| `original_status`           | varchar  | 255        | The original status of an issue.                                                                                                                                                                                                                                                                                                                                                                                                                     [...]
+| `story_point`               | int      |            | The story point of this issue. It defaults to an empty value for data sources such as Github issues and Gitlab issues.                                                                                                                                                                                                                                                                                                                             [...]
+| `priority`                  | varchar  | 255        | The priority of the issue                                                                                                                                                                                                                                                                                                                                                                                                                            [...]
+| `component`                 | varchar  | 255        | The component a bug-issue affects. This field only supports Github plugin for now. The value is transformed from Github issue labels by the rules set according to the user's configuration of .env by end users during DevLake installation.                                                                                                                                                                                                        [...]
+| `severity`                  | varchar  | 255        | The severity level of a bug-issue. This field only supports Github plugin for now. The value is transformed from Github issue labels by the rules set according to the user's configuration of .env by end users during DevLake installation.                                                                                                                                                                                                        [...]
+| `parent_issue_id`           | varchar  | 255        | The id of its parent issue                                                                                                                                                                                                                                                                                                                                                                                                                           [...]
+| `epic_key`                  | varchar  | 255        | The key of the epic this issue belongs to. For tools with no epic-type issues such as Github and Gitlab, this field is default to an empty string                                                                                                                                                                                                                                                                                                    [...]
+| `original_estimate_minutes` | int      |            | The original estimation of the time allocated for this issue                                                                                                                                                                                                                                                                                                                                                                                         [...]
+| `time_spent_minutes`        | int      |            | The actual time spent on this issue                                                                                                                                                                                                                                                                                                                                                                                                                  [...]
+| `time_remaining_minutes`     | int      |            | The remaining time to resolve the issue                                                                                                                                                                                                                                                                                                                                                                                                             [...]
+| `creator_id`                 | varchar  | 255        | The id of issue creator                                                                                                                                                                                                                                                                                                                                                                                                                             [...]
+| `assignee_id`               | varchar  | 255        | The id of issue assignee.<ul><li>For Github issues: this is the last assignee of an issue if the issue has multiple assignees</li><li>For Jira issues: this is the assignee of the issue at the time of collection</li></ul>                                                                                                                                                                                                                         [...]
+| `assignee_name`             | varchar  | 255        | The name of the assignee                                                                                                                                                                                                                                                                                                                                                                                                                             [...]
+| `created_date`              | datetime | 3          | The time issue created                                                                                                                                                                                                                                                                                                                                                                                                                               [...]
+| `updated_date`              | datetime | 3          | The last time issue gets updated                                                                                                                                                                                                                                                                                                                                                                                                                     [...]
+| `resolution_date`           | datetime | 3          | The time the issue changes to 'DONE'.                                                                                                                                                                                                                                                                                                                                                                                                                [...]
+| `lead_time_minutes`         | int      |            | Describes the cycle time from issue creation to issue resolution.<ul><li>For issues whose type = 'REQUIREMENT' and status = 'DONE', lead_time_minutes = resolution_date - created_date. The unit is minute.</li><li>For issues whose type != 'REQUIREMENT' or status != 'DONE', lead_time_minutes is null</li></ul>                                                                                                                                  [...]
+
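The `lead_time_minutes` rule above can be sketched as a small helper (hypothetical Python, not part of DevLake; names follow the table columns):

```python
from datetime import datetime

def lead_time_minutes(issue_type, status, created_date, resolution_date):
    """Lead time rule from the `issues` table: only issues with
    type = 'REQUIREMENT' and status = 'DONE' get a lead time;
    all other issues store null."""
    if issue_type == "REQUIREMENT" and status == "DONE" and resolution_date:
        return int((resolution_date - created_date).total_seconds() // 60)
    return None  # persisted as null

# A requirement resolved exactly 2 days after creation: 2 * 24 * 60 = 2880
created = datetime(2022, 7, 1, 9, 0)
resolved = datetime(2022, 7, 3, 9, 0)
print(lead_time_minutes("REQUIREMENT", "DONE", created, resolved))  # 2880
print(lead_time_minutes("BUG", "DONE", created, resolved))          # None
```
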
+#### 2. issue_labels
+
+This table shows the labels of issues. Multiple entries can exist per issue. This table can be used to filter issues by label name.
+
+| **field**  | **type** | **length** | **description** | **key**      |
+| :--------- | :------- | :--------- | :-------------- | :----------- |
+| `name`     | varchar  | 255        | Label name      |              |
+| `issue_id` | varchar  | 255        | Issue ID        | FK_issues.id |
+
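As a sketch of the label-filtering use case, the join below runs against a minimal in-memory copy of the two tables (table and column names follow the schema above; the sample ids and labels are made up):

```python
import sqlite3

# Minimal in-memory versions of `issues` and `issue_labels`
# (schema simplified; column names follow the tables above).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE issues (id TEXT PRIMARY KEY, title TEXT);
CREATE TABLE issue_labels (issue_id TEXT, name TEXT);
INSERT INTO issues VALUES ('github:GithubIssue:1', 'Fix crash'),
                          ('github:GithubIssue:2', 'Add docs');
INSERT INTO issue_labels VALUES ('github:GithubIssue:1', 'bug'),
                                ('github:GithubIssue:2', 'documentation');
""")

# Filter issues by label name via the FK_issues.id relation
rows = conn.execute("""
    SELECT i.id, i.title
    FROM issues i
    JOIN issue_labels l ON l.issue_id = i.id
    WHERE l.name = 'bug'
""").fetchall()
print(rows)  # [('github:GithubIssue:1', 'Fix crash')]
```
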
+
+#### 3. issue_comments(WIP)
+
+This table shows the comments of issues. Issues with multiple comments are shown as multiple records. This table can be used to calculate _metric - issue response time_.
+
+| **field**      | **type** | **length** | **description**                                                                                                                                                                               | **key**      |
+| :------------- | :------- | :--------- | :-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :----------- |
+| `id`           | varchar  | 255        | The unique id of a comment                                                                                                                                                                    | PK           |
+| `issue_id`     | varchar  | 255        | Issue ID                                                                                                                                                                                      | FK_issues.id |
+| `user_id`      | varchar  | 255        | The id of the user who made the comment                                                                                                                                                       | FK_users.id  |
+| `body`         | longtext |            | The body/detail of the comment                                                                                                                                                                |              |
+| `created_date` | datetime | 3          | The creation date of the comment                                                                                                                                                              |              |
+| `updated_date` | datetime | 3          | The last time comment gets updated                                                                                                                                                            |              |
+| `position`     | int      |            | The position of a comment under an issue. It starts from 1. The position is sorted by comment created_date asc.<br/>Eg. If an issue has 5 comments, the position of the 1st created comment is 1. |              |
+
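The `position` rule can be illustrated in plain Python (hypothetical sample data; positions are assigned by `created_date` ascending, starting from 1):

```python
# Assign the `position` field: comments under one issue, sorted by
# created_date ascending, numbered from 1.
comments = [
    {"id": "c3", "created_date": "2022-07-03"},
    {"id": "c1", "created_date": "2022-07-01"},
    {"id": "c2", "created_date": "2022-07-02"},
]
for pos, c in enumerate(sorted(comments, key=lambda c: c["created_date"]), start=1):
    c["position"] = pos

print(sorted((c["id"], c["position"]) for c in comments))
# [('c1', 1), ('c2', 2), ('c3', 3)]
```
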
+#### 4. issue_changelog(WIP)
+
+This table shows the changelogs of issues. Issues with multiple changelogs are shown as multiple records.
+
+| **field**      | **type** | **length** | **description**                                                       | **key**      |
+| :------------- | :------- | :--------- | :-------------------------------------------------------------------- | :----------- |
+| `id`           | varchar  | 255        | The unique id of an issue changelog                                   | PK           |
+| `issue_id`     | varchar  | 255        | Issue ID                                                              | FK_issues.id |
+| `actor_id`     | varchar  | 255        | The id of the user who made the change                                | FK_users.id  |
+| `field`        | varchar  | 255        | The id of changed field                                               |              |
+| `from`         | varchar  | 255        | The original value of the changed field                               |              |
+| `to`           | varchar  | 255        | The new value of the changed field                                    |              |
+| `created_date` | datetime | 3          | The creation date of the changelog                                    |              |
+
+
+#### 5. issue_worklogs
+
+This table shows the work logged under issues. Usually, an issue has multiple worklogs logged by different developers.
+
+| **field**            | **type** | **length** | **description**                                                                              | **key**      |
+| :------------------- | :------- | :--------- | :------------------------------------------------------------------------------------------- | :----------- |
+| `issue_id`           | varchar  | 255        | Issue ID                                                                                     | FK_issues.id |
+| `author_id`          | varchar  | 255        | The id of the user who logged the work                                                       | FK_users.id  |
+| `comment`            | varchar  | 255        | The comment a user made while logging the work.                                               |              |
+| `time_spent_minutes` | int      |            | The time the user logged. The unit of value is normalized to minutes. Eg. 1d => 480, 4h30m => 270 |              |
+| `logged_date`        | datetime | 3          | The time of this logging action                                                              |              |
+| `started_date`       | datetime | 3          | Start time of the worklog                                                                    |              |
+
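A possible normalization helper for the `time_spent_minutes` values (hypothetical; the 1d = 480 mapping assumes an 8-hour workday, matching the examples above):

```python
import re

# Normalize Jira-style duration strings to minutes. The day unit
# assumes an 8-hour workday (1d => 480), per the examples above.
UNIT_MINUTES = {"d": 480, "h": 60, "m": 1}

def to_minutes(duration: str) -> int:
    total = 0
    for value, unit in re.findall(r"(\d+)([dhm])", duration):
        total += int(value) * UNIT_MINUTES[unit]
    return total

print(to_minutes("1d"))     # 480
print(to_minutes("4h30m"))  # 270
```
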
+
+#### 6. boards
+
+A `board` is an issue list or a collection of issues. It's the abstraction of a Jira board, a Jira project or a [Github issue list](https://github.com/merico-dev/lake/issues). This table can be used to filter issues by the boards they belong to.
+
+| **field**      | **type** | **length** | **description**                                                                                                                                                                                                                                                                                                                                                                      | **key** |
+| :------------- | :------- | :--------- | :----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :------ |
+| `id`           | varchar  | 255        | A board's `id` is composed of "< plugin >:< Entity >:< PK0 >[:PK1]..." <ul><li>For a Github repo's issue list, the board id is like "< github >:< GithubRepos >:< GithubRepoId >". Eg. "github:GithubRepo:384111310"</li> <li>For a Jira board, the id is like "< jira >:< JiraSourceId >:< JiraBoards >:< JiraBoardsId >". Eg. "jira:1:JiraBoards:12"</li></ul> | PK      |
+| `name`           | varchar  | 255        | The name of the board. Note: the board name of a Github project 'merico-dev/lake' is 'merico-dev/lake', representing the [default issue list](https://github.com/merico-dev/lake/issues).                                                                                                                                                                                            |         |
+| `description`  | varchar  | 255        | The description of the board.                                                                                                                                                                                                                                                                                                                                                        |         |
+| `url`          | varchar  | 255        | The url of the board. Eg. https://Github.com/merico-dev/lake                                                                                                                                                                                                                                                                                                                         |         |
+| `created_date` | datetime | 3          | Board creation time                                                                                                                                                                                                                                                                                                                             |         |
+
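The id convention used throughout these tables — "< plugin >:< Entity >:< PK0 >[:PK1]..." — amounts to colon-joining the segments. A minimal illustrative sketch (DevLake itself builds these ids in Go; this helper is not part of the project):

```python
def domain_id(*parts) -> str:
    """Join id segments with ':' to form a domain-layer id."""
    return ":".join(str(p) for p in parts)

print(domain_id("github", "GithubRepo", 384111310))  # github:GithubRepo:384111310
print(domain_id("jira", 1, "JiraBoards", 12))        # jira:1:JiraBoards:12
```
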
+#### 7. board_issues
+
+This table shows the relation between boards and issues. This table can be used to filter issues by board.
+
+| **field**  | **type** | **length** | **description** | **key**      |
+| :--------- | :------- | :--------- | :-------------- | :----------- |
+| `board_id` | varchar  | 255        | Board id        | FK_boards.id |
+| `issue_id` | varchar  | 255        | Issue id        | FK_issues.id |
+
+#### 8. sprints
+
+A `sprint` is the abstraction of Jira sprints, TAPD iterations and Github milestones. A sprint contains a list of issues.
+
+| **field**           | **type** | **length** | **description**                                                                                                                                                                                                                                                                                                                                                                                                                                              [...]
+| :------------------ | :------- | :--------- | :----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- [...]
+| `id`                | varchar  | 255        | A sprint's `id` is composed of "< plugin >:< Entity >:< PK0 >[:PK1]..."<ul><li>A sprint in a Github repo is a milestone, so the sprint id is like "< github >:< GithubRepos >:< GithubRepoId >:< milestoneNumber >".<br/>Eg. The id for this [sprint](https://github.com/merico-dev/lake/milestone/5) is "github:GithubRepo:384111310:5"</li><li>For a Jira board, the id is like "< jira >:< JiraSourceId >:< JiraBoards >:< JiraBoardsId >".<br/>Eg. "jira:1:J [...]
+| `name`              | varchar  | 255        | The name of sprint.<br/>For Github projects, the sprint name is the milestone name. For instance, 'v0.10.0 - Introduce Temporal to DevLake' is the name of this [sprint](https://github.com/merico-dev/lake/milestone/5).                                                                                                                                                                                                                                    [...]
+| `url`               | varchar  | 255        | The url of sprint.                                                                                                                                                                                                                                                                                                                                                                                                                                           [...]
+| `status`            | varchar  | 255        | There're 3 statuses of a sprint:<ul><li>CLOSED: a completed sprint</li><li>ACTIVE: a sprint started but not completed</li><li>FUTURE: a sprint that has not started</li></ul>                                                                                                                                                                                                                                                                                [...]
+| `started_date`      | datetime | 3          | The start time of a sprint                                                                                                                                                                                                                                                                                                                                                                                                                                   [...]
+| `ended_date`        | datetime | 3          | The planned/estimated end time of a sprint. It's usually set when planning a sprint.                                                                                                                                                                                                                                                                                                                                                                         [...]
+| `completed_date`    | datetime | 3          | The actual time to complete a sprint.                                                                                                                                                                                                                                                                                                                                                                                                                        [...]
+| `original_board_id` | varchar  | 255        | The id of the board where the sprint was first created. This field is not null only when this entity is transformed from Jira sprints.<br/>In Jira, sprint and board entities have 2 types of relation:<ul><li>A sprint is created based on a specific board. In this case, board(1):(n)sprint. The `original_board_id` is used to show the relation.</li><li>A sprint can be mapped to multiple boards, a board can also show multiple sprints. In this case, boar [...]
+
+#### 9. sprint_issues
+
+This table shows the relation between sprints and the issues that have been added to them. This table can be used to show metrics such as _'ratio of unplanned issues'_ and _'completion rate of sprint issues'_.
+
+| **field**        | **type** | **length** | **description**                                                                                                                                                                                                                                                                                                                                                                                                                                                 [...]
+| :--------------- | :------- | :--------- | :-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- [...]
+| `sprint_id`      | varchar  | 255        | Sprint id                                                                                                                                                                                                                                                                                                                                                                                                                                                       [...]
+| `issue_id`       | varchar  | 255        | Issue id                                                                                                                                                                                                                                                                                                                                                                                                                                                        [...]
+| `is_removed`     | bool     |            | If the issue is removed from this sprint, then TRUE; else FALSE                                                                                                                                                                                                                                                                                                                                                                                                 [...]
+| `added_date`     | datetime | 3          | The time this issue added to the sprint. If an issue is added to a sprint multiple times, the latest time will be the value.                                                                                                                                                                                                                                                                                                                                    [...]
+| `removed_date`   | datetime | 3          | The time this issue gets removed from the sprint. If an issue is removed multiple times, the latest time will be the value.                                                                                                                                                                                                                                                                                                                                     [...]
+| `added_stage`    | varchar  | 255        | The stage when an issue is added to this sprint. There're 3 possible values:<ul><li>BEFORE_SPRINT<br/>Planning before the sprint starts.<br/>Condition: sprint_issues.added_date <= sprints.start_date</li><li>DURING_SPRINT<br/>Planning during a sprint.<br/>Condition: sprints.start_date < sprint_issues.added_date <= sprints.end_date</li><li>AFTER_SPRINT<br/>Planning after a sprint. This is caused by improper operation - adding issues to a completed sprint.< [...]
+| `resolved_stage` | varchar  | 255        | The stage when an issue is resolved (issue status turns to 'DONE'). There're 3 possible values:<ul><li>BEFORE_SPRINT<br/>Condition: issues.resolution_date <= sprints.start_date</li><li>DURING_SPRINT<br/>Condition: sprints.start_date < issues.resolution_date <= sprints.end_date</li><li>AFTER_SPRINT<br/>Condition: issues.resolution_date > sprints.end_date</li></ul>                                                                                   [...]
+
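The `added_stage` conditions above can be sketched as one classifier (hypothetical Python; the same comparison pattern applies to `resolved_stage` using `issues.resolution_date`):

```python
from datetime import datetime

def added_stage(added_date, sprint_start, sprint_end):
    """Classify when an issue was added to a sprint, per the rules above."""
    if added_date <= sprint_start:
        return "BEFORE_SPRINT"
    if added_date <= sprint_end:
        return "DURING_SPRINT"
    return "AFTER_SPRINT"

start = datetime(2022, 7, 1)
end = datetime(2022, 7, 14)
print(added_stage(datetime(2022, 6, 30), start, end))  # BEFORE_SPRINT
print(added_stage(datetime(2022, 7, 5), start, end))   # DURING_SPRINT
print(added_stage(datetime(2022, 7, 20), start, end))  # AFTER_SPRINT
```
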
+#### 10. board_sprints
+
+| **field**   | **type** | **length** | **description** | **key**       |
+| :---------- | :------- | :--------- | :-------------- | :------------ |
+| `board_id`  | varchar  | 255        | Board id        | FK_boards.id  |
+| `sprint_id` | varchar  | 255        | Sprint id       | FK_sprints.id |
+
+<br/>
+
+### Domain 2 - Source Code Management
+
+#### 11. repos
+
+Information about Github or Gitlab repositories. A repository is always owned by a user.
+
+| **field**      | **type** | **length** | **description**                                                                                                                                                                                | **key**     |
+| :------------- | :------- | :--------- | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :---------- |
+| `id`           | varchar  | 255        | A repo's `id` is composed of "< plugin >:< Entity >:< PK0 >[:PK1]..."<br/>For example, a Github repo's id is like "< github >:< GithubRepos >:< GithubRepoId >". Eg. 'github:GithubRepos:384111310' | PK          |
+| `name`         | varchar  | 255        | The name of repo.                                                                                                                                                                              |             |
+| `description`  | varchar  | 255        | The description of repo.                                                                                                                                                                       |             |
+| `url`          | varchar  | 255        | The url of repo. Eg. https://Github.com/merico-dev/lake                                                                                                                                        |             |
+| `owner_id`     | varchar  | 255        | The id of the owner of repo                                                                                                                                                                    | FK_users.id |
+| `language`     | varchar  | 255        | The major language of repo. Eg. The language for merico-dev/lake is 'Go'                                                                                                                       |             |
+| `forked_from`  | varchar  | 255        | Empty unless the repo is a fork, in which case it contains the `id` of the repo it was forked from.                                                                                            |             |
+| `deleted`      | tinyint  | 255        | <ul><li>0: the repo is active</li><li>1: the repo has been deleted</li></ul>                                                                                                                   |             |
+| `created_date` | datetime | 3          | Repo creation date                                                                                                                                                                             |             |
+| `updated_date` | datetime | 3          | Last full update was done for this repo                                                                                                                                                        |             |
+
+#### 12. repo_languages(WIP)
+
+Languages that are used in the repository along with byte counts for all files in those languages. This is in line with how Github calculates language percentages in a repository. Multiple entries can exist per repo.
+
+The table is filled in when the repo is first inserted or when an update round for all repos is made.
+
+| **field**      | **type** | **length** | **description**                                                                                                                                                                                    | **key** |
+| :------------- | :------- | :--------- | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :------ |
+| `id`           | varchar  | 255        | A repo's `id` is composed of "< plugin >:< Entity >:< PK0 >[:PK1]..."<br/>For example, a Github repo's id is like "< github >:< GithubRepos >:< GithubRepoId >". Eg. 'github:GithubRepos:384111310' | PK      |
+| `language`     | varchar  | 255        | The language of repo.<br/>These are the [languages](https://api.github.com/repos/merico-dev/lake/languages) for merico-dev/lake                                                                    |         |
+| `bytes`        | int      |            | The byte counts for all files in those languages                                                                                                                                                   |         |
+| `created_date` | datetime | 3          | The field is filled in with the latest timestamp the query for a specific `repo_id` was done.                                                                                                      |         |
+
+#### 13. repo_commits
+
+Commits that belong to the history of a repository. More than one repo can share the same commits if one is a fork of the other.
+
+| **field**    | **type** | **length** | **description** | **key**        |
+| :----------- | :------- | :--------- | :-------------- | :------------- |
+| `repo_id`    | varchar  | 255        | Repo id         | FK_repos.id    |
+| `commit_sha` | char     | 40         | Commit sha      | FK_commits.sha |
+
+#### 14. refs
+
+A ref is the abstraction of a branch or tag.
+
+| **field**    | **type** | **length** | **description**                                                                                                                                                                                                                                                                                                                                             | **key**     |
+| :----------- | :------- | :--------- | :---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :---------- |
+| `id`         | varchar  | 255        | A ref's `id` is composed of "< plugin >:< Entity >:< PK0 >[:PK1]..."<br/>For example, a Github ref is composed of "github:GithubRepos:< GithubRepoId >:< RefUrl >". Eg. The id of release v5.3.0 of the PingCAP/TiDB project is 'github:GithubRepos:384111310:refs/tags/v5.3.0'                                                                              | PK          |
+| `ref_name`   | varchar  | 255        | The name of ref. Eg. '[refs/tags/v0.9.3](https://github.com/merico-dev/lake/tree/v0.9.3)'                                                                                                                                                                                                                                                                   |             |
+| `repo_id`    | varchar  | 255        | The id of repo this ref belongs to                                                                                                                                                                                                                                                                                                                          | FK_repos.id |
+| `commit_sha` | char     | 40         | The commit this ref points to at the time of collection                                                                                                                                                                                                                                                                                                     |             |
+| `is_default` | int      |            | <ul><li>0: the ref is the default branch. By the definition of [Github](https://docs.github.com/en/repositories/configuring-branches-and-merges-in-your-repository/managing-branches-in-your-repository/changing-the-default-branch), the default branch is the base branch for pull requests and code commits.</li><li>1: not the default branch</li></ul> |             |
+| `merge_base` | char     | 40         | The merge base commit of the main ref and the current ref                                                                                                                                                                                                                                                                                                   |             |
+| `ref_type`   | varchar  | 64         | There are two typical types:<ul><li>BRANCH</li><li>TAG</li></ul>                                                                                                                                                                                                                                                                                            |             |
+
+#### 15. refs_commits_diffs
+
+This table shows the commits added in a new ref compared to an old ref. It can be used to support tag-based analysis, for instance, '_No. of commits of a tag_', '_No. of merged pull requests of a tag_', etc.
+
+The records of this table are computed by the [RefDiff](https://github.com/merico-dev/lake/tree/main/plugins/refdiff) plugin. The computation should be manually triggered after using [GitExtractor](https://github.com/merico-dev/lake/tree/main/plugins/gitextractor) to collect commits and refs. The algorithm behind it is similar to [this](https://github.com/merico-dev/lake/compare/v0.8.0%E2%80%A6v0.9.0).
+
+| **field**            | **type** | **length** | **description**                                                 | **key**        |
+| :------------------- | :------- | :--------- | :-------------------------------------------------------------- | :------------- |
+| `commit_sha`         | char     | 40         | One of the added commits in the new ref compared to the old ref | FK_commits.sha |
+| `new_ref_id`         | varchar  | 255        | The new ref's id for comparison                                 | FK_refs.id     |
+| `old_ref_id`         | varchar  | 255        | The old ref's id for comparison                                 | FK_refs.id     |
+| `new_ref_commit_sha` | char     | 40         | The commit new ref points to at the time of collection          |                |
+| `old_ref_commit_sha` | char     | 40         | The commit old ref points to at the time of collection          |                |
+| `sorting_index`      | varchar  | 255        | An index for debugging, please skip it                          |                |
+
+#### 16. commits
+
+| **field**         | **type** | **length** | **description**                                                                                                                                                  | **key**        |
+| :---------------- | :------- | :--------- | :--------------------------------------------------------------------------------------------------------------------------------------------------------------- | :------------- |
+| `sha`             | char     | 40         | Commit sha                                                                                                                                                       | PK             |
+| `message`         | varchar  | 255        | Commit message                                                                                                                                                   |                |
+| `author_name`     | varchar  | 255        | The name of the commit author, as set by `git config user.name`                                                                                                  |                |
+| `author_email`    | varchar  | 255        | The email of the commit author, as set by `git config user.email`                                                                                                |                |
+| `authored_date`   | datetime | 3          | The date when this commit was originally made                                                                                                                    |                |
+| `author_id`       | varchar  | 255        | The id of commit author                                                                                                                                          | FK_users.id    |
+| `committer_name`  | varchar  | 255        | The name of committer                                                                                                                                            |                |
+| `committer_email` | varchar  | 255        | The email of committer                                                                                                                                           |                |
+| `committed_date`  | datetime | 3          | The last time the commit gets modified.<br/>For example, when rebasing the branch where the commit is in on another branch, the committed_date changes.          |                |
+| `committer_id`    | varchar  | 255        | The id of committer                                                                                                                                              | FK_users.id    |
+| `additions`       | int      |            | Added lines of code                                                                                                                                              |                |
+| `deletions`       | int      |            | Deleted lines of code                                                                                                                                            |                |
+| `dev_eq`          | int      |            | A metric that quantifies the amount of code contribution. The data can be retrieved from [AE plugin](https://github.com/merico-dev/lake/tree/v0.9.3/plugins/ae). |                |
+
+
+#### 17. commit_files
+
+The files changed in commits. Multiple entries can exist per commit.
+
+| **field**    | **type** | **length** | **description**                        | **key**        |
+| :----------- | :------- | :--------- | :------------------------------------- | :------------- |
+| `commit_sha` | char     | 40         | Commit sha                             | FK_commits.sha |
+| `file_path`  | varchar  | 255        | Path of a changed file in a commit     |                |
+| `additions`  | int      |            | The added lines of code in this file   |                |
+| `deletions`  | int      |            | The deleted lines of code in this file |                |
+
+#### 18. commit_comments(WIP)
+
+Code review comments on commits. These are comments made on individual commits. If a commit is associated with a pull request, its comments are in the pull_request_comments table (section 23).
+
+| **field**      | **type** | **length** | **description**                     | **key**        |
+| :------------- | :------- | :--------- | :---------------------------------- | :------------- |
+| `id`           | varchar  | 255        | Unique comment id                   |                |
+| `commit_sha`   | char     | 40         | Commit sha                          | FK_commits.sha |
+| `user_id`      | varchar  | 255        | Id of the user who made the comment |                |
+| `created_date` | datetime | 3          | Comment creation time               |                |
+| `body`         | longtext |            | Comment body/detail                 |                |
+| `line`         | int      |            |                                     |                |
+| `position`     | int      |            |                                     |                |
+
+#### 19. commit_parents
+
+The parent commit(s) for each commit, as specified by Git.
+
+| **field**    | **type** | **length** | **description**   | **key**        |
+| :----------- | :------- | :--------- | :---------------- | :------------- |
+| `commit_sha` | char     | 40         | commit sha        | FK_commits.sha |
+| `parent`     | char     | 40         | Parent commit sha | FK_commits.sha |
+
+<br/>
+
+### Domain 3 - Code Review
+
+#### 20. pull_requests
+
+A pull request is the abstraction of a Github pull request or a Gitlab merge request.
+
+| **field**          | **type** | **length** | **description**                                                                                                                                                                                                                                                                                                                                                                                | **key**        |
+| :----------------- | :------- | :--------- | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :------------- |
+| `id`               | char     | 40         | A pull request's `id` is composed of "< plugin >:< Entity >:< PK0 >[:PK1]...". Eg. 'github:GithubPullRequests:1347'                                                                                                                                                                                                                                                                            | PK             |
+| `title`            | varchar  | 255        | The title of pull request                                                                                                                                                                                                                                                                                                                                                                      |                |
+| `description`      | longtext |            | The body/description of pull request                                                                                                                                                                                                                                                                                                                                                           |                |
+| `status`           | varchar  | 255        | The status of the pull request. For a Github pull request, the status can either be 'open' or 'closed'.                                                                                                                                                                                                                                                                                        |                |
+| `number`           | varchar  | 255        | The number of the PR. Eg. 1563 is the number of this [PR](https://github.com/merico-dev/lake/pull/1563)                                                                                                                                                                                                                                                                                        |                |
+| `base_repo_id`     | varchar  | 255        | The repo that will be updated.                                                                                                                                                                                                                                                                                                                                                                 |                |
+| `head_repo_id`     | varchar  | 255        | The repo containing the changes that will be added to the base. If the head repository is NULL, the corresponding project had been deleted by the time DevLake processed the pull request.                                                                                                                                                                                                      |                |
+| `base_ref`         | varchar  | 255        | The branch name in the base repo that will be updated                                                                                                                                                                                                                                                                                                                                          |                |
+| `head_ref`         | varchar  | 255        | The branch name in the head repo that contains the changes that will be added to the base                                                                                                                                                                                                                                                                                                      |                |
+| `author_name`      | varchar  | 255        | The creator's name of the pull request                                                                                                                                                                                                                                                                                                                                                         |                |
+| `author_id`        | varchar  | 255        | The creator's id of the pull request                                                                                                                                                                                                                                                                                                                                                           |                |
+| `url`              | varchar  | 255        | The web link of the pull request                                                                                                                                                                                                                                                                                                                                                               |                |
+| `type`             | varchar  | 255        | The work-type of a pull request. For example: feature-development, bug-fix, docs, etc.<br/>The value is transformed from Github pull request labels by configuring `GITHUB_PR_TYPE` in `.env` file during installation.                                                                                                                                                                        |                |
+| `component`        | varchar  | 255        | The component this PR affects.<br/>The value is transformed from Github/Gitlab pull request labels by configuring `GITHUB_PR_COMPONENT` in `.env` file during installation.                                                                                                                                                                                                                    |                |
+| `created_date`     | datetime | 3          | The time PR created.                                                                                                                                                                                                                                                                                                                                                                           |                |
+| `merged_date`      | datetime | 3          | The time PR gets merged. Null when the PR is not merged.                                                                                                                                                                                                                                                                                                                                       |                |
+| `closed_date`      | datetime | 3          | The time PR closed. Null when the PR is not closed.                                                                                                                                                                                                                                                                                                                                            |                |
+| `merge_commit_sha` | char     | 40         | The merge commit of this PR. By the definition of [Github](https://docs.github.com/en/repositories/configuring-branches-and-merges-in-your-repository/managing-branches-in-your-repository/changing-the-default-branch), when you click the default Merge pull request option on a pull request on Github, all commits from the feature branch are added to the base branch in a merge commit. |                |
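
As a usage sketch, a metric such as 'average time to merge' can be derived directly from `created_date` and `merged_date`. The snippet below runs such a query against an in-memory SQLite copy of a subset of the table; the column names follow the schema above, while the sample rows and the SQLite backend are only assumptions for illustration (DevLake itself stores the data in MySQL):

```python
import sqlite3

# A minimal, illustrative subset of the pull_requests table;
# the sample rows are made up.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE pull_requests (
        id TEXT PRIMARY KEY,
        status TEXT,
        created_date TEXT,
        merged_date TEXT
    )""")
conn.executemany(
    "INSERT INTO pull_requests VALUES (?, ?, ?, ?)",
    [
        ("github:GithubPullRequests:1", "closed", "2022-07-01 10:00:00", "2022-07-03 10:00:00"),
        ("github:GithubPullRequests:2", "open",   "2022-07-02 09:00:00", None),
    ],
)

# Average time-to-merge in days, over merged PRs only (merged_date is
# NULL for unmerged PRs, so they are excluded by the WHERE clause).
row = conn.execute("""
    SELECT AVG(julianday(merged_date) - julianday(created_date))
    FROM pull_requests
    WHERE merged_date IS NOT NULL
""").fetchone()
print(row[0])  # 2.0 for the sample data
```
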
+
+#### 21. pull_request_labels
+
+This table shows the labels of pull requests. Multiple entries can exist per pull request. This table can be used to filter pull requests by label name.
+
+| **field**         | **type** | **length** | **description** | **key**             |
+| :---------------- | :------- | :--------- | :-------------- | :------------------ |
+| `name`            | varchar  | 255        | Label name      |                     |
+| `pull_request_id` | varchar  | 255        | Pull request ID | FK_pull_requests.id |
+
+#### 22. pull_request_commits
+
+A commit associated with a pull request
+
+The list is additive. This means if a rebase with commit squashing takes place after the commits of a pull request have been processed, the old commits will not be deleted.
+
+| **field**         | **type** | **length** | **description** | **key**             |
+| :---------------- | :------- | :--------- | :-------------- | :------------------ |
+| `pull_request_id` | varchar  | 255        | Pull request id | FK_pull_requests.id |
+| `commit_sha`      | char     | 40         | Commit sha      | FK_commits.sha      |
+
+#### 23. pull_request_comments(WIP)
+
+A code review comment on a commit associated with a pull request
+
+The list is additive. If commits are squashed on the head repo, the comments remain intact.
+
+| **field**         | **type** | **length** | **description**                                                                                                                                                                                     | **key**             |
+| :---------------- | :------- | :--------- | :-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :------------------ |
+| `id`              | varchar  | 255        | Comment id                                                                                                                                                                                          | PK                  |
+| `pull_request_id` | varchar  | 255        | Pull request id                                                                                                                                                                                     | FK_pull_requests.id |
+| `user_id`         | varchar  | 255        | Id of user who made the comment                                                                                                                                                                     | FK_users.id         |
+| `created_date`    | datetime | 3          | Comment creation time                                                                                                                                                                               |                     |
+| `body`            | longtext |            | The body of the comment                                                                                                                                                                             |                     |
+| `position`        | int      |            | The position of a comment under a pull request. It starts from 1. The position is sorted by comment created_date asc.<br/>Eg. If a PR has 5 comments, the position of the 1st created comment is 1. |                     |
+
+#### 24. pull_request_events(WIP)
+
+Events of pull requests.
+
+| **field**         | **type** | **length** | **description**                                                                                                                                                                                                                                                                                                                                                                                                                                         | **key**             |
+| :---------------- | :------- | :--------- | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | :------------------ |
+| `id`              | varchar  | 255        | Event id                                                                                                                                                                                                                                                                                                                                                                                                                                                 | PK                  |
+| `pull_request_id` | varchar  | 255        | Pull request id                                                                                                                                                                                                                                                                                                                                                                                                                                          | FK_pull_requests.id |
+| `action`          | varchar  | 255        | The action to be taken, some values:<ul><li>`opened`: When the pull request has been opened</li><li>`closed`: When the pull request has been closed</li><li>`merged`: When Github detected that the pull request has been merged. Merges made outside Github (i.e. via Git itself) are not reported</li><li>`reopened`: When a pull request is reopened after being closed</li><li>`synchronize`: When new commits are added to/removed from the head repository</li></ul> |                     |
+| `actor_id`        | varchar  | 255        | The user id of the event performer                                                                                                                                                                                                                                                                                                                                                                                                                       | FK_users.id         |
+| `created_date`    | datetime | 3          | Event creation time                                                                                                                                                                                                                                                                                                                                                                                                                                      |                     |
+
+<br/>
+
+### Domain 4 - CI/CD(WIP)
+
+#### 25. jobs
+
+A job is the definition or schedule of a CI/CD pipeline, not a specific execution.
+
+| **field** | **type** | **length** | **description** | **key** |
+| :-------- | :------- | :--------- | :-------------- | :------ |
+| `id`      | varchar  | 255        | Job id          | PK      |
+| `name`    | varchar  | 255        | Name of job     |         |
+
+#### 26. builds
+
+A build is an execution of a job.
+
+| **field**      | **type** | **length** | **description**                                                  | **key**    |
+| :------------- | :------- | :--------- | :--------------------------------------------------------------- | :--------- |
+| `id`           | varchar  | 255        | Build id                                                         | PK         |
+| `job_id`       | varchar  | 255        | Id of the job this build belongs to                              | FK_jobs.id |
+| `name`         | varchar  | 255        | Name of build                                                    |            |
+| `duration_sec` | bigint   |            | The duration of build in seconds                                 |            |
+| `started_date` | datetime | 3          | Started time of the build                                        |            |
+| `status`       | varchar  | 255        | The result of build. The values may be 'success', 'failed', etc. |            |
+| `commit_sha`   | char     | 40         | The specific commit being built on. Nullable.                    |            |
+
+
+### Cross-Domain Entities
+
+These entities are used to map entities between different domains. They are the key to breaking data isolation.
+
+There are low-level entities such as issue_commits and users, as well as higher-level cross-domain entities such as board_repos.
+
+#### 27. issue_commits
+
+A low-level mapping between "issue tracking" and "source code management" domain by mapping `issues` and `commits`. Issue(n): Commit(n).
+
+The original connection between these two entities is established in either issue tracking tools like Jira or source code management tools like GitLab; such a tool-side integration is required for DevLake to collect the mapping.
+
+For example, a common method to connect a Jira issue and a GitLab commit is the GitLab plugin [Jira Integration](https://docs.gitlab.com/ee/integration/jira/). With this plugin, the Jira issue key in the commit message written by the committer is parsed, and the plugin then adds the commit urls under the Jira issue. Hence, DevLake's [Jira plugin](https://github.com/merico-dev/lake/tree/main/plugins/jira) can collect the related commits (including repo, commit_id, and url) of an issue.
+
+| **field**    | **type** | **length** | **description** | **key**        |
+| :----------- | :------- | :--------- | :-------------- | :------------- |
+| `issue_id`   | varchar  | 255        | Issue id        | FK_issues.id   |
+| `commit_sha` | char     | 40         | Commit sha      | FK_commits.sha |
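
The issue-key parsing step described above can be illustrated with a small sketch. The pattern below matches typical Jira keys such as 'LAKE-123'; it is an assumption for illustration, not the exact pattern the Jira Integration uses:

```python
import re

# A typical Jira issue key looks like "PROJ-123": an uppercase project
# key, a hyphen, and a numeric issue number.
JIRA_KEY = re.compile(r"\b[A-Z][A-Z0-9]+-\d+\b")

def issue_keys(commit_message):
    """Return the Jira issue keys mentioned in a commit message."""
    return JIRA_KEY.findall(commit_message)
```

For a commit message like `"LAKE-123 fix: handle empty refs (see DEV-7)"`, `issue_keys` returns both `'LAKE-123'` and `'DEV-7'`, which is the raw material for issue_commits rows.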
+
+#### 28. pull_request_issues
+
+This table shows the issues closed by pull requests. It's a medium-level mapping between "issue tracking" and "source code management" domain by mapping pull requests and issues. Pull Request(n): Issue(n).
+
+The data is extracted from the bodies of pull requests that conform to a certain regular expression. The regular expression can be defined via `GITHUB_PR_BODY_CLOSE_PATTERN` in the `.env` file.
+
+| **field**             | **type** | **length** | **description**     | **key**             |
+| :-------------------- | :------- | :--------- | :------------------ | :------------------ |
+| `pull_request_id`     | char     | 40         | Pull request id     | FK_pull_requests.id |
+| `issue_id`            | varchar  | 255        | Issue id            | FK_issues.id        |
+| `pull_request_number` | varchar  | 255        | Pull request number |                     |
+| `issue_number`        | varchar  | 255        | Issue number        |                     |
+
+#### 29. board_repo(WIP)
+
+A rough way to link "issue tracking" and "source code management" domain by mapping `boards` and `repos`. Board(n): Repo(n).
+
+The mapping logic is under development.
+
+| **field**  | **type** | **length** | **description** | **key**      |
+| :--------- | :------- | :--------- | :-------------- | :----------- |
+| `board_id` | varchar  | 255        | Board id        | FK_boards.id |
+| `repo_id`  | varchar  | 255        | Repo id         | FK_repos.id  |
+
+#### 30. users(WIP)
+
+This is the table to unify user identities across tools. This table can be used for all user-based metrics, such as _'No. of issues closed by contributor'_, _'No. of commits by contributor'_, etc.
+
+| **field**      | **type** | **length** | **description**                                                                                                                                                                                         | **key** |
+| :------------- | :------- | :--------- | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | :------ |
+| `id`           | varchar  | 255        | A user's `id` is composed of "< Plugin >:< Entity >:< PK0 >[:PK1]..."<br/>For example, a Github user's id is composed of "github:GithubUsers:< GithubUserId >". Eg. 'github:GithubUsers:14050754'        | PK      |
+| `user_name`    | varchar  | 255        | Username/Github login of a user                                                                                                                                                                         |         |
+| `fullname`     | varchar  | 255        | User's full name                                                                                                                                                                                        |         |
+| `email`        | varchar  | 255        | Email                                                                                                                                                                                                   |         |
+| `avatar_url`   | varchar  | 255        |                                                                                                                                                                                                         |         |
+| `organization` | varchar  | 255        | User's organization or company name                                                                                                                                                                     |         |
+| `created_date` | datetime | 3          | User creation time                                                                                                                                                                                      |         |
+| `deleted`      | tinyint  |            | 0 (default): the user is active; 1: the user is no longer active                                                                                                                                        |         |
+
+<br/>
+
+## DWM Entities - (Data Warehouse Middle)
+
+DWM entities are light aggregations and operations over DWD entities, storing more organized details or middle-level metrics.
+
+#### 31. issue_status_history
+
+This table shows the history of 'status change' of issues. This table can be used to break down _'issue lead time'_ to _'issue staying time in each status'_ to identify the bottleneck of the delivery workflow.
+
+| **field**         | **type** | **length** | **description**                 | **key**         |
+| :---------------- | :------- | :--------- | :------------------------------ | :-------------- |
+| `issue_id`        | varchar  | 255        | Issue id                        | PK, FK_issue.id |
+| `original_status` | varchar  | 255        | The original status of an issue |                 |
+| `start_date`      | datetime | 3          | The start time of the status    |                 |
+| `end_date`        | datetime | 3          | The end time of the status      |                 |
+
+#### 32. issue_assignee_history
+
+This table shows the 'assignee change history' of issues. This table can be used to identify _'the actual developer of an issue',_ or _'contributor involved in an issue'_ for contribution analysis.
+
+| **field**    | **type** | **length** | **description**                                    | **key**         |
+| :----------- | :------- | :--------- | :------------------------------------------------- | :-------------- |
+| `issue_id`   | varchar  | 255        | Issue id                                           | PK, FK_issue.id |
+| `assignee`   | varchar  | 255        | The name of assignee of an issue                   |                 |
+| `start_date` | datetime | 3          | The time when the issue is assigned to an assignee |                 |
+| `end_date`   | datetime | 3          | The time when the assignee changes                 |                 |
+
+#### 33. issue_sprints_history
+
+This table shows the 'scope change history' of sprints. It can be used to analyze _'how often and how much a team changes its plans'_.
+
+| **field**    | **type** | **length** | **description**                                    | **key**         |
+| :----------- | :------- | :--------- | :------------------------------------------------- | :-------------- |
+| `issue_id`   | varchar  | 255        | Issue id                                           | PK, FK_issue.id |
+| `sprint_id`  | varchar  | 255        | Sprint id                                          | FK_sprints.id   |
+| `start_date` | datetime | 3          | The time when the issue is added to a sprint       |                 |
+| `end_date`   | datetime | 3          | The time when the issue gets removed from a sprint |                 |
+
+#### 34. refs_issues_diffs
+
+This table shows the issues fixed by commits added in a new ref compared to an old one. The data is computed from [table.ref_commits_diff](https://merico.feishu.cn/docs/doccnvyuG9YpVc6lvmWkmmbZtUc#yJOyqa), [table.pull_requests](https://merico.feishu.cn/docs/doccnvyuG9YpVc6lvmWkmmbZtUc#Uc849c), [table.pull_request_commits](https://merico.feishu.cn/docs/doccnvyuG9YpVc6lvmWkmmbZtUc#G9cPfj), and [table.pull_request_issues](https://merico.feishu.cn/docs/doccnvyuG9YpVc6lvmWkmmbZtUc#we6Uac).
+
+This table can support tag-based analysis, for instance, '_No. of bugs closed in a tag_'.
+
+| **field**            | **type** | **length** | **description**                                        | **key**      |
+| :------------------- | :------- | :--------- | :----------------------------------------------------- | :----------- |
+| `new_ref_id`         | varchar  | 255        | The new ref's id for comparison                        | FK_refs.id   |
+| `old_ref_id`         | varchar  | 255        | The old ref's id for comparison                        | FK_refs.id   |
+| `new_ref_commit_sha` | char     | 40         | The commit new ref points to at the time of collection |              |
+| `old_ref_commit_sha` | char     | 40         | The commit old ref points to at the time of collection |              |
+| `issue_number`       | varchar  | 255        | Issue number                                           |              |
+| `issue_id`           | varchar  | 255        | Issue id                                               | FK_issues.id |
diff --git a/versioned_docs/version-v0.11.0/DataModels/_category_.json b/versioned_docs/version-v0.11.0/DataModels/_category_.json
new file mode 100644
index 0000000..e678e71
--- /dev/null
+++ b/versioned_docs/version-v0.11.0/DataModels/_category_.json
@@ -0,0 +1,4 @@
+{
+  "label": "Data Models",
+  "position": 5
+}
diff --git a/versioned_docs/version-v0.11.0/DeveloperManuals/DBMigration.md b/versioned_docs/version-v0.11.0/DeveloperManuals/DBMigration.md
new file mode 100644
index 0000000..9530237
--- /dev/null
+++ b/versioned_docs/version-v0.11.0/DeveloperManuals/DBMigration.md
@@ -0,0 +1,37 @@
+---
+title: "DB Migration"
+description: >
+  DB Migration
+sidebar_position: 3
+---
+
+## Summary
+Starting in v0.10.0, DevLake provides a lightweight migration tool for executing migration scripts.
+Both the framework itself and its plugins define migration scripts in their own migration folders.
+The migration scripts are written with gorm in Golang to support different SQL dialects.
+
+
+## Migration Script
+Migration scripts describe how to migrate the database.
+They implement the `Script` interface.
+When DevLake starts, scripts register themselves with the framework by invoking the `Register` function:
+
+```go
+type Script interface {
+	Up(ctx context.Context, db *gorm.DB) error
+	Version() uint64
+	Name() string
+}
+```
+
+## Table `migration_history`
+
+This table tracks the execution of migration scripts and the resulting schema changes.
+From it, DevLake can figure out the current state of the database schema.
+
+
+## How It Works
+1. Check the `migration_history` table and determine which migration scripts need to be executed.
+2. Sort scripts by Version in ascending order.
+3. Execute scripts.
+4. Save results in the `migration_history` table.
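The four steps above can be sketched in plain Go. This is a simplified model only: the real framework passes a `context.Context` and `*gorm.DB` to `Up` and persists results in the `migration_history` table, whereas here the interface is trimmed and history is an in-memory map for illustration.

```go
package main

import (
	"fmt"
	"sort"
)

// Script mirrors the migration interface shown above, minus the gorm/context parameters.
type Script interface {
	Up() error
	Version() uint64
	Name() string
}

type demoScript struct {
	version uint64
	name    string
}

var executed []string // records execution order for the demo

func (s demoScript) Up() error       { executed = append(executed, s.name); return nil }
func (s demoScript) Version() uint64 { return s.version }
func (s demoScript) Name() string    { return s.name }

// migrate runs every script whose version is not yet in history,
// in ascending version order, and records each success.
func migrate(scripts []Script, history map[uint64]bool) error {
	pending := make([]Script, 0)
	for _, s := range scripts {
		if !history[s.Version()] { // step 1: skip already-executed scripts
			pending = append(pending, s)
		}
	}
	sort.Slice(pending, func(i, j int) bool { // step 2: sort by Version, ascending
		return pending[i].Version() < pending[j].Version()
	})
	for _, s := range pending { // step 3: execute
		if err := s.Up(); err != nil {
			return err
		}
		history[s.Version()] = true // step 4: save the result
	}
	return nil
}

func main() {
	history := map[uint64]bool{20220101: true} // initSchemas already ran
	scripts := []Script{
		demoScript{20220301, "addCommitIndex"},
		demoScript{20220101, "initSchemas"},
		demoScript{20220201, "addUserColumn"},
	}
	if err := migrate(scripts, history); err != nil {
		panic(err)
	}
	fmt.Println(executed) // [addUserColumn addCommitIndex]
}
```

Note how the already-recorded `initSchemas` script is skipped, and the remaining scripts run in version order regardless of registration order.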
diff --git a/versioned_docs/version-v0.11.0/DeveloperManuals/Dal.md b/versioned_docs/version-v0.11.0/DeveloperManuals/Dal.md
new file mode 100644
index 0000000..9b08542
--- /dev/null
+++ b/versioned_docs/version-v0.11.0/DeveloperManuals/Dal.md
@@ -0,0 +1,173 @@
+---
+title: "Dal"
+sidebar_position: 5
+description: >
+  The Dal (Data Access Layer) is designed to decouple the hard dependency on `gorm` in v0.12
+---
+
+## Summary
+
+The Dal (Data Access Layer) is designed to decouple the hard dependency on `gorm` in v0.12.  The advantages of introducing this isolation are:
+
+ - Unit Test: mocking an interface is easier and more reliable than patching a pointer.
+ - Clean Code: DB operations are more consistent than using `gorm` directly.
+ - Replaceable: it would be easier to replace `gorm` in the future if needed.
+
+## The Dal Interface
+
+```go
+type Dal interface {
+	AutoMigrate(entity interface{}, clauses ...Clause) error
+	Exec(query string, params ...interface{}) error
+	RawCursor(query string, params ...interface{}) (*sql.Rows, error)
+	Cursor(clauses ...Clause) (*sql.Rows, error)
+	Fetch(cursor *sql.Rows, dst interface{}) error
+	All(dst interface{}, clauses ...Clause) error
+	First(dst interface{}, clauses ...Clause) error
+	Count(clauses ...Clause) (int64, error)
+	Pluck(column string, dest interface{}, clauses ...Clause) error
+	Create(entity interface{}, clauses ...Clause) error
+	Update(entity interface{}, clauses ...Clause) error
+	CreateOrUpdate(entity interface{}, clauses ...Clause) error
+	CreateIfNotExist(entity interface{}, clauses ...Clause) error
+	Delete(entity interface{}, clauses ...Clause) error
+	AllTables() ([]string, error)
+}
+```
+
+
+## How to use
+
+### Query
+```go
+// Get a database cursor
+user := &models.User{}
+cursor, err := db.Cursor(
+  dal.From(user),
+  dal.Where("department = ?", "R&D"),
+  dal.Orderby("id DESC"),
+)
+if err != nil {
+  return err
+}
+for cursor.Next() {
+  err = db.Fetch(cursor, user)  // fetch one record at a time
+  ...
+}
+
+// Get a database cursor from a raw SQL query
+cursor, err := db.RawCursor("SELECT * FROM users")
+
+// USE WITH CAUTION: loading a big table at once is slow and dangerous
+// Load all records from the database at once.
+users := make([]models.Users, 0)
+err := db.All(&users, dal.Where("department = ?", "R&D"))
+
+// Load a column as Scalar or Slice
+var email string
+err := db.Pluck("email", &email, dal.Where("id = ?", 1))
+var emails []string
+err := db.Pluck("email", &emails)
+
+// Execute query
+err := db.Exec("UPDATE users SET department = ? WHERE department = ?", "Research & Development", "R&D")
+```
+
+### Insert
+```go
+err := db.Create(&models.User{
+  Email: "hello@example.com", // assuming this is the primary key
+  Name: "hello",
+  Department: "R&D",
+})
+```
+
+### Update
+```go
+err := db.Update(&models.User{
+  Email: "hello@example.com", // assuming this is the primary key
+  Name: "hello",
+  Department: "R&D",
+})
+```
+### Insert or Update
+```go
+err := db.CreateOrUpdate(&models.User{
+  Email: "hello@example.com",  // assuming this is the primary key
+  Name: "hello",
+  Department: "R&D",
+})
+```
+
+### Insert if the record (by primary key) doesn't exist
+```go
+err := db.CreateIfNotExist(&models.User{
+  Email: "hello@example.com",  // assuming this is the primary key
+  Name: "hello",
+  Department: "R&D",
+})
+```
+
+### Delete
+```go
+err := db.Delete(&models.User{
+  Email: "hello@example.com",  // assuming this is the primary key
+})
+```
+
+### DDL and others
+```go
+// Returns all table names
+allTables, err := db.AllTables()
+
+// Automigrate: create/add missing table/columns
+// Note: it won't delete any existing columns, nor does it update the column definition
+err := db.AutoMigrate(&models.User{})
+```
+
+## How to do Unit Test
+First, run the command `make mock` to generate the mocking stubs; the generated source files will appear in the `mocks` folder.
+```
+mocks
+├── ApiResourceHandler.go
+├── AsyncResponseHandler.go
+├── BasicRes.go
+├── CloseablePluginTask.go
+├── ConfigGetter.go
+├── Dal.go
+├── DataConvertHandler.go
+├── ExecContext.go
+├── InjectConfigGetter.go
+├── InjectLogger.go
+├── Iterator.go
+├── Logger.go
+├── Migratable.go
+├── PluginApi.go
+├── PluginBlueprintV100.go
+├── PluginInit.go
+├── PluginMeta.go
+├── PluginTask.go
+├── RateLimitedApiClient.go
+├── SubTaskContext.go
+├── SubTaskEntryPoint.go
+├── SubTask.go
+└── TaskContext.go
+```
+With these mocking stubs, you can start writing your test cases using `mocks.Dal`.
+```go
+import "github.com/apache/incubator-devlake/mocks"
+
+func TestCreateUser(t *testing.T) {
+    mockDal := new(mocks.Dal)
+    mockDal.On("Create", mock.Anything, mock.Anything).Return(nil).Once()
+    userService := &services.UserService{
+        Dal: mockDal,
+    }
+    userService.Post(map[string]interface{}{
+        "email": "hello@example.com",
+        "name": "hello",
+        "department": "R&D",
+    })
+    mockDal.AssertExpectations(t)
+}
+```
+
diff --git a/versioned_docs/version-v0.11.0/DeveloperManuals/DeveloperSetup.md b/versioned_docs/version-v0.11.0/DeveloperManuals/DeveloperSetup.md
new file mode 100644
index 0000000..2a462de
--- /dev/null
+++ b/versioned_docs/version-v0.11.0/DeveloperManuals/DeveloperSetup.md
@@ -0,0 +1,131 @@
+---
+title: "Developer Setup"
+description: >
+  The steps to install DevLake in developer mode.
+sidebar_position: 1
+---
+
+
+## Requirements
+
+- <a href="https://docs.docker.com/get-docker" target="_blank">Docker v19.03.10+</a>
+- <a href="https://golang.org/doc/install" target="_blank">Golang v1.17+</a>
+- Make
+  - Mac (Already installed)
+  - Windows: [Download](http://gnuwin32.sourceforge.net/packages/make.htm)
+  - Ubuntu: `sudo apt-get install build-essential libssl-dev`
+
+## How to setup dev environment
+1. Navigate to where you would like to install this project and clone the repository:
+
+   ```sh
+   git clone https://github.com/apache/incubator-devlake
+   cd incubator-devlake
+   ```
+
+2. Install dependencies for plugins:
+
+   - [RefDiff](../Plugins/refdiff.md#development)
+
+3. Install Go packages
+
+    ```sh
+	go get
+    ```
+
+4. Copy the sample config file to new local file:
+
+    ```sh
+    cp .env.example .env
+    ```
+
+5. Update the following variables in the file `.env`:
+
+    * `DB_URL`: Replace `mysql:3306` with `127.0.0.1:3306`
+
+6. Start the MySQL and Grafana containers:
+
+    > Make sure the Docker daemon is running before this step.
+
+    ```sh
+    docker-compose up -d mysql grafana
+    ```
+
+7. Run lake and config UI in dev mode in two separate terminals:
+
+    ```sh
+    # install mockery
+    go install github.com/vektra/mockery/v2@latest
+    # generate mocking stubs
+    make mock
+    # run lake
+    make dev
+    # run config UI
+    make configure-dev
+    ```
+
+    Q: I got an error saying: `libgit2.so.1.3: cannot open shared object file: No such file or directory`
+
+    A: Make sure your program can find `libgit2.so.1.3`. `LD_LIBRARY_PATH` can be assigned like this if your `libgit2.so.1.3` is located at `/usr/local/lib`:
+
+    ```sh
+    export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib
+    ```
+
+8. Visit config UI at `localhost:4000` to configure data connections.
+    - Navigate to desired plugins pages on the Integrations page
+    - Enter the required information for the plugins you intend to use.
+    - Refer to the following for more details on how to configure each one:
+        - [Jira](../Plugins/jira.md)
+        - [GitLab](../Plugins/gitlab.md)
+        - [Jenkins](../Plugins/jenkins.md)
+        - [GitHub](../Plugins/github.md): For users who'd like to collect GitHub data, we recommend reading our [GitHub data collection guide](../UserManuals/GitHubUserGuide.md) which covers the following steps in detail.
+    - Submit the form to update the values by clicking on the **Save Connection** button on each form page
+
+9. Visit `localhost:4000/pipelines/create` to RUN a Pipeline and trigger data collection.
+
+
+   Pipeline runs can be initiated from the new "Create Run" interface. Simply enable the **Data Connection Providers** you wish to run collection for, and specify the data you want to collect, for instance, **Project ID** for GitLab and **Repository Name** for GitHub.
+
+   Once a valid pipeline configuration has been created, press **Create Run** to start/run the pipeline.
+   After the pipeline starts, you will be automatically redirected to the **Pipeline Activity** screen to monitor collection activity.
+
+   **Pipelines** is accessible from the main menu of the config-ui for easy access.
+
+   - Manage All Pipelines: `http://localhost:4000/pipelines`
+   - Create Pipeline RUN: `http://localhost:4000/pipelines/create`
+   - Track Pipeline Activity: `http://localhost:4000/pipelines/activity/[RUN_ID]`
+
+   For advanced use cases and complex pipelines, please use the Raw JSON API to manually initiate a run using **cURL** or graphical API tool such as **Postman**. `POST` the following request to the DevLake API Endpoint.
+
+    ```json
+    [
+        [
+            {
+                "plugin": "github",
+                "options": {
+                    "repo": "lake",
+                    "owner": "merico-dev"
+                }
+            }
+        ]
+    ]
+    ```
+
+   Please refer to [Pipeline Advanced Mode](../UserManuals/AdvancedMode.md) for in-depth explanation.
+
+
+10. Click *View Dashboards* button in the top left when done, or visit `localhost:3002` (username: `admin`, password: `admin`).
+
+   We use <a href="https://grafana.com/" target="_blank">Grafana</a> as a visualization tool to build charts for the <a href="https://github.com/merico-dev/lake/wiki/DataModel.Domain-layer-schema">data stored in our database</a>. Using SQL queries, we can add panels to build, save, and edit customized dashboards.
+
+   All the details on provisioning and customizing a dashboard can be found in the [Grafana Doc](../UserManuals/GrafanaUserGuide.md).
+
+11. (Optional) To run the tests:
+
+    ```sh
+    make test
+    ```
+
+12. For DB migrations, please refer to [Migration Doc](../DeveloperManuals/DBMigration.md).
+
diff --git a/versioned_docs/version-v0.11.0/DeveloperManuals/Notifications.md b/versioned_docs/version-v0.11.0/DeveloperManuals/Notifications.md
new file mode 100644
index 0000000..23456b4
--- /dev/null
+++ b/versioned_docs/version-v0.11.0/DeveloperManuals/Notifications.md
@@ -0,0 +1,32 @@
+---
+title: "Notifications"
+description: >
+  Notifications
+sidebar_position: 4
+---
+
+## Request
+Example request
+```
+POST /lake/notify?nouce=3-FDXxIootApWxEVtz&sign=424c2f6159bd9e9828924a53f9911059433dc14328a031e91f9802f062b495d5
+
+{"TaskID":39,"PluginName":"jenkins","CreatedAt":"2021-09-30T15:28:00.389+08:00","UpdatedAt":"2021-09-30T15:28:00.785+08:00"}
+```
+
+## Configuration
+If you want to use the notification feature, you should add two configuration keys to the `.env` file.
+```shell
+# .env
+# notification request url, e.g.: http://example.com/lake/notify
+NOTIFICATION_ENDPOINT=
+# secret is used to calculate signature
+NOTIFICATION_SECRET=
+```
+
+## Signature
+You should verify the signature before accepting a notification request. We use the sha256 algorithm to calculate the checksum.
+```go
+// calculate checksum
+sum := sha256.Sum256([]byte(requestBody + NOTIFICATION_SECRET + nouce))
+return hex.EncodeToString(sum[:])
+```
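Putting the snippet above into a complete receiving-side check might look like this. This is a sketch: `requestBody`, the secret, and the `nouce` value stand in for what your server reads from the incoming request and its own configuration.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// sign reproduces the checksum DevLake sends: sha256(body + secret + nouce), hex-encoded.
func sign(requestBody, secret, nouce string) string {
	sum := sha256.Sum256([]byte(requestBody + secret + nouce))
	return hex.EncodeToString(sum[:])
}

// verify recomputes the checksum and compares it with the `sign` query parameter.
func verify(requestBody, secret, nouce, got string) bool {
	return sign(requestBody, secret, nouce) == got
}

func main() {
	body := `{"TaskID":39,"PluginName":"jenkins"}`
	sig := sign(body, "my-secret", "3-FDXxIootApWxEVtz")
	fmt.Println("valid:", verify(body, "my-secret", "3-FDXxIootApWxEVtz", sig))     // valid: true
	fmt.Println("tampered:", verify(body+"x", "my-secret", "3-FDXxIootApWxEVtz", sig)) // tampered: false
}
```

In production code you may prefer a constant-time comparison (e.g. `crypto/subtle.ConstantTimeCompare`) over `==` to avoid timing side channels.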
diff --git a/versioned_docs/version-v0.11.0/DeveloperManuals/PluginImplementation.md b/versioned_docs/version-v0.11.0/DeveloperManuals/PluginImplementation.md
new file mode 100644
index 0000000..e3457c9
--- /dev/null
+++ b/versioned_docs/version-v0.11.0/DeveloperManuals/PluginImplementation.md
@@ -0,0 +1,292 @@
+---
+title: "Plugin Implementation"
+sidebar_position: 2
+description: >
+  Plugin Implementation
+---
+
+## How to Implement a DevLake plugin?
+
+If your favorite DevOps tool is not yet supported by DevLake, don't worry. It's not difficult to implement a DevLake plugin. In this post, we'll go through the basics of DevLake plugins and build an example plugin from scratch together.
+
+## What is a plugin?
+
+A DevLake plugin is a shared library built with Go's `plugin` package that hooks up to DevLake core at run-time.
+
+A plugin may extend DevLake's capability in three ways:
+
+1. Integrating with new data sources
+2. Transforming/enriching existing data
+3. Exporting DevLake data to other data systems
+
+
+## How do plugins work?
+
+A plugin mainly consists of a collection of subtasks that can be executed by DevLake core. For data source plugins, a subtask may collect a single entity from the data source (e.g., issues from Jira). Besides the subtasks, there are hooks that a plugin can implement to customize its initialization, migration, and more. The most important interfaces are listed below:
+
+1. [PluginMeta](https://github.com/apache/incubator-devlake/blob/main/plugins/core/plugin_meta.go) contains the minimal interface that a plugin should implement, with only two functions:
+   - `Description()` returns the description of a plugin
+   - `RootPkgPath()` returns the root package path of a plugin
+2. [PluginInit](https://github.com/apache/incubator-devlake/blob/main/plugins/core/plugin_init.go) allows a plugin to customize its initialization
+3. [PluginTask](https://github.com/apache/incubator-devlake/blob/main/plugins/core/plugin_task.go) enables a plugin to prepare data prior to subtask execution
+4. [PluginApi](https://github.com/apache/incubator-devlake/blob/main/plugins/core/plugin_api.go) lets a plugin expose self-defined APIs
+5. [Migratable](https://github.com/apache/incubator-devlake/blob/main/plugins/core/plugin_db_migration.go) is where a plugin manages its database migrations 
+
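For a sense of how small the minimal contract is, here is a sketch of a type satisfying a `PluginMeta`-shaped interface. The interface is restated locally so the snippet is self-contained (the real one lives in `plugins/core/plugin_meta.go`), and the `Icla` plugin shown is the hypothetical example built later in this guide.

```go
package main

import "fmt"

// PluginMeta restates the minimal plugin contract locally, mirroring plugins/core.
type PluginMeta interface {
	Description() string
	RootPkgPath() string
}

// Icla is a hypothetical plugin implementing the minimal contract.
type Icla struct{}

func (Icla) Description() string { return "collect ICLA committer data from people.apache.org" }
func (Icla) RootPkgPath() string { return "github.com/apache/incubator-devlake/plugins/icla" }

func main() {
	var p PluginMeta = Icla{} // any type with both methods satisfies the interface
	fmt.Println(p.Description())
	fmt.Println(p.RootPkgPath())
}
```

Everything else (`PluginInit`, `PluginTask`, and so on) is optional and layered on top of this minimal contract.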
+The diagram below shows the control flow of executing a plugin:
+
+```mermaid
+flowchart TD;
+    subgraph S4[Step4 sub-task extractor running process];
+    direction LR;
+    D4[DevLake];
+    D4 -- Step4.1 create a new\n ApiExtractor\n and execute it --> E["ExtractXXXMeta.\nEntryPoint"];
+    E <-- Step4.2 read from\n raw table --> RawDataSubTaskArgs.\nTable;
+    E -- "Step4.3 call with RawData" --> ApiExtractor.Extract;
+    ApiExtractor.Extract -- "decode and return gorm models" --> E
+    end
+    subgraph S3[Step3 sub-task collector running process]
+    direction LR
+    D3[DevLake]
+    D3 -- Step3.1 create a new\n ApiCollector\n and execute it --> C["CollectXXXMeta.\nEntryPoint"];
+    C <-- Step3.2 create\n raw table --> RawDataSubTaskArgs.\nRAW_BBB_TABLE;
+    C <-- Step3.3 build query\n before sending requests --> ApiCollectorArgs.\nQuery/UrlTemplate;
+    C <-. Step3.4 send requests by ApiClient \n and return HTTP response.-> A1["HTTP APIs"];
+    C <-- "Step3.5 call and \nreturn decoded data \nfrom HTTP response" --> ResponseParser;
+    end
+    subgraph S2[Step2 DevLake register custom plugin]
+    direction LR
+    D2[DevLake]
+    D2 <-- "Step2.1 function `Init` \nneed to do init jobs" --> plugin.Init;
+    D2 <-- "Step2.2 (Optional) call \nand return migration scripts" --> plugin.MigrationScripts;
+    D2 <-- "Step2.3 (Optional) call \nand return taskCtx" --> plugin.PrepareTaskData;
+    D2 <-- "Step2.4 call and \nreturn subTasks for execting" --> plugin.SubTaskContext;
+    end
+    subgraph S1[Step1 Run DevLake]
+    direction LR
+    main -- Transfer of control \nby `runner.DirectRun` --> D1[DevLake];
+    end
+    S1-->S2-->S3-->S4
+```
+There's a lot of information in the diagram, but we don't expect you to digest it right away; simply use it as a reference as you go through the example below.
+
+## A step-by-step guide towards your first plugin
+
+In this guide, we'll walk through how to create a data source plugin from scratch. 
+
+The example in this tutorial comes from DevLake's own needs of managing [CLAs](https://en.wikipedia.org/wiki/Contributor_License_Agreement). Whenever DevLake receives a new PR on GitHub, we need to check if the author has signed a CLA by referencing `https://people.apache.org/public/icla-info.json`. This guide will demonstrate how to collect the ICLA info from Apache API, cache the raw response, and extract the raw data into a relational table ready to be queried.
+
+### Step 1: Bootstrap the new plugin
+
+**Note:** Please make sure you have DevLake up and running before proceeding.
+
+> More info about plugin:
+> Generally, a plugin needs three folders: `api`, `models` and `tasks`
+> `api` interacts with `config-ui` to test/get/save the connection of a data source
+>       - connection [example](https://github.com/apache/incubator-devlake/blob/main/plugins/gitlab/api/connection.go)
+>       - connection model [example](https://github.com/apache/incubator-devlake/blob/main/plugins/gitlab/models/connection.go)
+> `models` stores all `data entities` and `data migration scripts`. 
+>       - entity 
+>       - data migrations [template](https://github.com/apache/incubator-devlake/tree/main/generator/template/migrationscripts)
+> `tasks` contains all the `sub-tasks` of a plugin
+>       - task data [template](https://github.com/apache/incubator-devlake/blob/main/generator/template/plugin/tasks/task_data.go-template)
+>       - api client [template](https://github.com/apache/incubator-devlake/blob/main/generator/template/plugin/tasks/task_data_with_api_client.go-template)
+
+Don't worry if you cannot figure out what these concepts mean immediately. We'll explain them one by one later. 
+
+DevLake provides a generator to create a plugin conveniently. Let's scaffold our new plugin by running `go run generator/main.go create-plugin icla`, which would ask for `with_api_client` and `Endpoint`.
+
+* `with_api_client` determines whether the plugin needs to request HTTP APIs via an api_client.
+* `Endpoint` is the base URL the plugin will request; in our case, it should be `https://people.apache.org/`.
+
+![create plugin](https://i.imgur.com/itzlFg7.png)
+
+Now we have three files in our plugin. `api_client.go` and `task_data.go` are in subfolder `tasks/`.
+![plugin files](https://i.imgur.com/zon5waf.png)
+
+Try running this plugin via the `main` function in `plugin_main.go`. You should see a result like this:
+```
+$go run plugins/icla/plugin_main.go
+[2022-06-02 18:07:30]  INFO failed to create dir logs: mkdir logs: file exists
+press `c` to send cancel signal
+[2022-06-02 18:07:30]  INFO  [icla] start plugin
+invalid ICLA_TOKEN, but ignore this error now
+[2022-06-02 18:07:30]  INFO  [icla] scheduler for api https://people.apache.org/ worker: 25, request: 18000, duration: 1h0m0s
+[2022-06-02 18:07:30]  INFO  [icla] total step: 0
+```
+How exciting, it works! The plugin defined and initiated in `plugin_main.go` uses some options from `task_data.go`. Together they make up the most straightforward plugin in Apache DevLake, and `api_client.go` will be used in the next step to request HTTP APIs.
+
+### Step 2: Create a sub-task for data collection
+Before we start, it is helpful to know how a collection task is executed:
+1. First, Apache DevLake calls `plugin_main.PrepareTaskData()` to prepare any data needed before running sub-tasks. We need to create an API client here.
+2. Then Apache DevLake calls the sub-tasks returned by `plugin_main.SubTaskMetas()`. A sub-task is an independent unit of work, such as requesting an API or processing data.
+
+> Each sub-task must be defined as a `SubTaskMeta` and implement the `SubTaskEntryPoint` of `SubTaskMeta`. `SubTaskEntryPoint` is defined as:
+> ```go
+> type SubTaskEntryPoint func(c SubTaskContext) error
+> ```
+> More info at: https://devlake.apache.org/blog/how-apache-devlake-runs/
+
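The two-phase flow described above, prepare shared task data first, then run each sub-task entry point in order, can be sketched with local stand-ins. Only the shape of `SubTaskEntryPoint` comes from the framework; `SubTaskContext` and `SubTaskMeta` here are bare illustrative stubs, not the real types.

```go
package main

import "fmt"

// SubTaskContext is a bare stand-in for the framework's task context.
type SubTaskContext struct {
	Data map[string]string // prepared once by PrepareTaskData, shared by all sub-tasks
}

// SubTaskEntryPoint mirrors the shape of the framework's entry-point signature.
type SubTaskEntryPoint func(c SubTaskContext) error

// SubTaskMeta pairs a sub-task name with its entry point, like the framework's metadata.
type SubTaskMeta struct {
	Name       string
	EntryPoint SubTaskEntryPoint
}

func main() {
	// Phase 1: prepare shared task data (e.g. an API client) before any sub-task runs.
	ctx := SubTaskContext{Data: map[string]string{"endpoint": "https://people.apache.org/"}}

	// Phase 2: run each registered sub-task in order.
	subtasks := []SubTaskMeta{
		{"CollectCommitter", func(c SubTaskContext) error {
			fmt.Println("collecting from", c.Data["endpoint"])
			return nil
		}},
		{"ExtractCommitter", func(c SubTaskContext) error {
			fmt.Println("extracting raw records")
			return nil
		}},
	}
	for i, st := range subtasks {
		fmt.Printf("executing subtask %s (%d/%d)\n", st.Name, i+1, len(subtasks))
		if err := st.EntryPoint(ctx); err != nil {
			panic(err)
		}
	}
}
```

This mirrors the `executing subtask CollectCommitter` / `finished step: 1 / 2` log lines you will see when running the real plugin below.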
+#### Step 2.1 Create a sub-task(Collector) for data collection
+
+Let's run `go run generator/main.go create-collector icla committer` and confirm it. The new sub-task is automatically activated by registering itself in `plugin_main.go/SubTaskMetas`.
+
+![](https://i.imgur.com/tkDuofi.png)
+
+> - The collector collects data from HTTP or other data sources and saves it into the raw layer.
+> - Inside the `SubTaskEntryPoint` of the collector, we use `helper.NewApiCollector` to create an [ApiCollector](https://github.com/apache/incubator-devlake/blob/main/generator/template/plugin/tasks/api_collector.go-template) object, then call `execute()` to do the job.
+
+Note that `data.ApiClient` is initialized in `plugin_main.go/PrepareTaskData.ApiClient`. `PrepareTaskData` creates a new `ApiClient`, the tool Apache DevLake recommends for requesting data from HTTP APIs. It supports useful features such as rate limiting, proxies, and retries. Of course, you may use the standard `http` library instead, but it will be more tedious.
+
+Let's move forward to use it.
+
+1. To collect data from `https://people.apache.org/public/icla-info.json`,
+we have filled `https://people.apache.org/` into `tasks/api_client.go/ENDPOINT` in Step 1.
+
+![](https://i.imgur.com/q8Zltnl.png)
+
+2. Fill `public/icla-info.json` into `UrlTemplate`, delete the unnecessary iterator, and add `println("receive data:", res)` in `ResponseParser` to check whether the collection succeeds.
+
+![](https://i.imgur.com/ToLMclH.png)
+
+Ok, now the collector sub-task has been added to the plugin, and we can kick it off by running `main` again. If everything goes smoothly, the output should look like this:
+```bash
+[2022-06-06 12:24:52]  INFO  [icla] start plugin
+invalid ICLA_TOKEN, but ignore this error now
+[2022-06-06 12:24:52]  INFO  [icla] scheduler for api https://people.apache.org/ worker: 25, request: 18000, duration: 1h0m0s
+[2022-06-06 12:24:52]  INFO  [icla] total step: 1
+[2022-06-06 12:24:52]  INFO  [icla] executing subtask CollectCommitter
+[2022-06-06 12:24:52]  INFO  [icla] [CollectCommitter] start api collection
+receive data: 0x140005763f0
+[2022-06-06 12:24:55]  INFO  [icla] [CollectCommitter] finished records: 1
+[2022-06-06 12:24:55]  INFO  [icla] [CollectCommitter] end api collection
+[2022-06-06 12:24:55]  INFO  [icla] finished step: 1 / 1
+```
+
+Great! Now we can see data pulled from the server without any problem. The last step is to decode the response body in `ResponseParser` and return it to the framework, so it can be stored in the database.
+```go
+ResponseParser: func(res *http.Response) ([]json.RawMessage, error) {
+    body := &struct {
+        LastUpdated string          `json:"last_updated"`
+        Committers  json.RawMessage `json:"committers"`
+    }{}
+    err := helper.UnmarshalResponse(res, body)
+    if err != nil {
+        return nil, err
+    }
+    println("receive data:", len(body.Committers))
+    return []json.RawMessage{body.Committers}, nil
+},
+
+```
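Stripped of the framework helper, the decoding step is plain `encoding/json`. The sketch below assumes `helper.UnmarshalResponse` does roughly this, read the body and unmarshal it, plus error wrapping; only the payload shape (`last_updated` plus a `committers` object) comes from the actual API response.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// parse pulls the raw "committers" object out of the API payload,
// mirroring what the ResponseParser above returns to the framework.
func parse(payload []byte) ([]json.RawMessage, error) {
	body := &struct {
		LastUpdated string          `json:"last_updated"`
		Committers  json.RawMessage `json:"committers"`
	}{}
	if err := json.Unmarshal(payload, body); err != nil {
		return nil, err
	}
	// Keep committers as raw JSON; the extractor sub-task will decode it later.
	return []json.RawMessage{body.Committers}, nil
}

func main() {
	payload := []byte(`{"last_updated":"2022-06-06","committers":{"zky":"Z"}}`)
	msgs, err := parse(payload)
	if err != nil {
		panic(err)
	}
	fmt.Println(len(msgs), "raw message(s):", string(msgs[0]))
}
```

Returning `json.RawMessage` (rather than a fully decoded struct) is deliberate: the raw layer stores payloads as-is, and decoding is deferred to the extractor.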
+Ok, run `main` once again. The output should look like the following, and some records should show up in the table `_raw_icla_committer`.
+```bash
+……
+receive data: 272956 /* <- the number means 272956 models received */
+[2022-06-06 13:46:57]  INFO  [icla] [CollectCommitter] finished records: 1
+[2022-06-06 13:46:57]  INFO  [icla] [CollectCommitter] end api collection
+[2022-06-06 13:46:57]  INFO  [icla] finished step: 1 / 1
+```
+
+![](https://i.imgur.com/aVYNMRr.png)
+
+#### Step 2.2 Create a sub-task(Extractor) to extract data from the raw layer
+
+> - Extractor will extract data from raw layer and save it into tool db table.
+> - Except for some pre-processing, the main flow is similar to the collector.
+
+We have already collected data from the HTTP API and saved it into the DB table `_raw_icla_committer`. In this step, we will extract the committers' names from the raw data. As the name implies, raw tables are temporary and not easy to use directly.
+
+Apache DevLake recommends saving data via [gorm](https://gorm.io/docs/index.html), so we will create a gorm model and add it to `plugin_main.go/AutoSchemas.Up()`.
+
+plugins/icla/models/committer.go
+```go
+package models
+
+import (
+	"github.com/apache/incubator-devlake/models/common"
+)
+
+type IclaCommitter struct {
+	UserName     string `gorm:"primaryKey;type:varchar(255)"`
+	Name         string `gorm:"primaryKey;type:varchar(255)"`
+	common.NoPKModel
+}
+
+func (IclaCommitter) TableName() string {
+	return "_tool_icla_committer"
+}
+```
+
+plugins/icla/plugin_main.go
+![](https://i.imgur.com/4f0zJty.png)
+
+
+Ok, run the plugin, and the table `_tool_icla_committer` will be created automatically, just like the snapshot below:
+![](https://i.imgur.com/7Z324IX.png)
+
+Next, let's run `go run generator/main.go create-extractor icla committer` and type in what the command prompt asks for.
+
+![](https://i.imgur.com/UyDP9Um.png)
+
+Let's look at the `Extract` function in the newly created `committer_extractor.go`; some code still needs to be written here. Clearly, `resData.Data` is the raw data, so we can decode it as JSON and create new `IclaCommitter` records to save.
+```go
+Extract: func(resData *helper.RawData) ([]interface{}, error) {
+    names := &map[string]string{}
+    err := json.Unmarshal(resData.Data, names)
+    if err != nil {
+        return nil, err
+    }
+    extractedModels := make([]interface{}, 0)
+    for userName, name := range *names {
+        extractedModels = append(extractedModels, &models.IclaCommitter{
+            UserName: userName,
+            Name:     name,
+        })
+    }
+    return extractedModels, nil
+},
+```
+
+OK, run it, and we get:
+```
+[2022-06-06 15:39:40]  INFO  [icla] start plugin
+invalid ICLA_TOKEN, but ignore this error now
+[2022-06-06 15:39:40]  INFO  [icla] scheduler for api https://people.apache.org/ worker: 25, request: 18000, duration: 1h0m0s
+[2022-06-06 15:39:40]  INFO  [icla] total step: 2
+[2022-06-06 15:39:40]  INFO  [icla] executing subtask CollectCommitter
+[2022-06-06 15:39:40]  INFO  [icla] [CollectCommitter] start api collection
+receive data: 272956
+[2022-06-06 15:39:44]  INFO  [icla] [CollectCommitter] finished records: 1
+[2022-06-06 15:39:44]  INFO  [icla] [CollectCommitter] end api collection
+[2022-06-06 15:39:44]  INFO  [icla] finished step: 1 / 2
+[2022-06-06 15:39:44]  INFO  [icla] executing subtask ExtractCommitter
+[2022-06-06 15:39:46]  INFO  [icla] [ExtractCommitter] finished records: 1
+[2022-06-06 15:39:46]  INFO  [icla] finished step: 2 / 2
+```
+Now the committer data has been saved in `_tool_icla_committer`.
+![](https://i.imgur.com/6svX0N2.png)
+
+#### Step 2.3 Convertor
+
+Note: whether you need this step depends on whether you plan to open-source the plugin or only use it yourself. It is optional, but we encourage it, because convertors and the domain layer significantly help with building dashboards. More info about the domain layer at: https://devlake.apache.org/docs/DataModels/DevLakeDomainLayerSchema/
+
+> - The convertor converts data from the tool layer and saves it into the domain layer.
+> - We use `helper.NewDataConverter` to create a DataConverter object, then call `execute()`.
+
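+As a reference, the body of a convertor generated by `go run generator/main.go create-convertor icla committer` would look roughly like the sketch below; the `user.User` domain entity and the exact field mapping are assumptions for illustration:
+
+```go
+Convert: func(inputRow interface{}) ([]interface{}, error) {
+	committer := inputRow.(*models.IclaCommitter)
+	// Map the tool-layer record onto a domain-layer entity so that
+	// dashboards can query it without knowing about the icla plugin.
+	domainUser := &user.User{
+		Name: committer.Name,
+	}
+	return []interface{}{domainUser}, nil
+},
+```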
+#### Step 2.4 Let's try it
+Sometimes an open API is protected by a token or another auth mechanism, and we need to log in to obtain a token before visiting it. For example, only after logging in to `private@apache.com` can we gather the data about contributors signing the ICLA. Here we briefly introduce how to authorize DevLake to collect data.
+
+Let's look at `api_client.go`. `NewIclaApiClient` loads the config `ICLA_TOKEN` from `.env`, so we can add `ICLA_TOKEN=XXXXXX` to `.env` and use it in `apiClient.SetHeaders()` to mock the login status. Code as below:
+![](https://i.imgur.com/dPxooAx.png)
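+For readers of the plain-text version, the header-setting code in the screenshot is roughly equivalent to the sketch below; the cookie name and value format are assumptions — check how the target site actually authenticates:
+
+```go
+// Sketch: mock the login status by sending the token loaded
+// from .env with every request.
+apiClient.SetHeaders(map[string]string{
+	"Cookie": fmt.Sprintf("session=%s", config.GetString("ICLA_TOKEN")),
+})
+```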
+
+Of course, we can also log in with a `username/password` to obtain a token. Try it and adjust according to your actual situation.
+
+Find more related details at https://github.com/apache/incubator-devlake
+
+#### Final step: Submit the code as open source
+Good ideas are always welcome, and we encourage contributions! Learn about migration scripts and domain layers to write standardized, platform-neutral code. More info at https://devlake.apache.org/docs/DataModels/DevLakeDomainLayerSchema, or contact us if you need help.
+
+
+## Done!
+
+Congratulations! The first plugin has been created! 🎖 
\ No newline at end of file
diff --git a/versioned_docs/version-v0.11.0/DeveloperManuals/_category_.json b/versioned_docs/version-v0.11.0/DeveloperManuals/_category_.json
new file mode 100644
index 0000000..fe67a68
--- /dev/null
+++ b/versioned_docs/version-v0.11.0/DeveloperManuals/_category_.json
@@ -0,0 +1,4 @@
+{
+  "label": "Developer Manuals",
+  "position": 4
+}
diff --git a/versioned_docs/version-v0.11.0/EngineeringMetrics.md b/versioned_docs/version-v0.11.0/EngineeringMetrics.md
new file mode 100644
index 0000000..2d9a42a
--- /dev/null
+++ b/versioned_docs/version-v0.11.0/EngineeringMetrics.md
@@ -0,0 +1,195 @@
+---
+sidebar_position: 06
+title: "Engineering Metrics"
+linkTitle: "Engineering Metrics"
+tags: []
+description: >
+  The definition, values and data required for the 20+ engineering metrics supported by DevLake.
+---
+
+<table>
+    <tr>
+        <th><b>Category</b></th>
+        <th><b>Metric Name</b></th>
+        <th><b>Definition</b></th>
+        <th><b>Data Required</b></th>
+        <th style={{width:'70%'}}><b>Use Scenarios and Recommended Practices</b></th>
+        <th><b>Value&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</b></th>
+    </tr>
+    <tr>
+        <td rowspan="10">Delivery Velocity</td>
+        <td>Requirement Count</td>
+        <td>Number of issues of type "Requirement"</td>
+        <td>Issue/Task Management entities: <a href="https://github.com/merico-dev/lake/blob/main/plugins/jira/README.md">Jira issues</a>, <a href="https://github.com/merico-dev/lake/blob/main/plugins/github/README.md">GitHub issues</a>, etc</td>
+        <td rowspan="2">
+1. Analyze the number of requirements and delivery rate of different time cycles to find the stability and trend of the development process.
+<br/>2. Analyze and compare the number of requirements delivered and delivery rate of each project/team, and compare the scale of requirements of different projects.
+<br/>3. Based on historical data, establish a baseline of the delivery capacity of a single iteration (optimistic, probable and pessimistic values) to provide a reference for iteration estimation.
+<br/>4. Drill down to analyze the number and percentage of requirements in different phases of SDLC. Analyze rationality and identify the requirements stuck in the backlog.</td>
+        <td rowspan="2">1. Based on historical data, establish a baseline of the delivery capacity of a single iteration to improve the organization and planning of R&D resources.
+<br/>2. Evaluate whether the delivery capacity matches the business phase and demand scale. Identify key bottlenecks and reasonably allocate resources.</td>
+    </tr>
+    <tr>
+        <td>Requirement Delivery Rate</td>
+        <td>Ratio of delivered requirements to all requirements</td>
+        <td>Issue/Task Management entities: <a href="https://github.com/merico-dev/lake/blob/main/plugins/jira/README.md">Jira issues</a>, <a href="https://github.com/merico-dev/lake/blob/main/plugins/github/README.md">GitHub issues</a>, etc</td>
+    </tr>
+    <tr>
+        <td>Requirement Lead Time</td>
+        <td>Lead time of issues of type "Requirement"</td>
+        <td>Issue/Task Management entities: <a href="https://github.com/merico-dev/lake/blob/main/plugins/jira/README.md">Jira issues</a>, <a href="https://github.com/merico-dev/lake/blob/main/plugins/github/README.md">GitHub issues</a>, etc</td>
+        <td>
+1. Analyze the trend of requirement lead time to observe if it has improved over time.
+<br/>2. Analyze and compare the requirement lead time of each project/team to identify key projects with abnormal lead time.
+<br/>3. Drill down to analyze a requirement's staying time in different phases of SDLC. Analyze the bottleneck of delivery velocity and improve the workflow.</td>
+        <td>1. Analyze key projects and critical points, identify good/to-be-improved practices that affect requirement lead time, and reduce the risk of delays
+<br/>2. Focus on the end-to-end velocity of value delivery process; coordinate different parts of R&D to avoid efficiency shafts; make targeted improvements to bottlenecks.</td>
+    </tr>
+    <tr>
+        <td>Requirement Granularity</td>
+        <td>Number of story points associated with an issue</td>
+        <td>Issue/Task Management entities: <a href="https://github.com/merico-dev/lake/blob/main/plugins/jira/README.md">Jira issues</a>, <a href="https://github.com/merico-dev/lake/blob/main/plugins/github/README.md">GitHub issues</a>, etc</td>
+        <td>
+1. Analyze the story points/requirement lead time of requirements to evaluate whether the ticket size, ie. requirement complexity is optimal.
+<br/>2. Compare the estimated requirement granularity with the actual situation and evaluate whether the difference is reasonable by combining more microscopic workload metrics (e.g. lines of code/code equivalents)</td>
+        <td>1. Promote product teams to split requirements carefully, improve requirements quality, help developers understand requirements clearly, deliver efficiently and with high quality, and improve the project management capability of the team.
+<br/>2. Establish a data-supported workload estimation model to help R&D teams calibrate their estimation methods and more accurately assess the granularity of requirements, which is useful to achieve better issue planning in project management.</td>
+    </tr>
+    <tr>
+        <td>Commit Count</td>
+        <td>Number of Commits</td>
+        <td>Source Code Management entities: <a href="https://github.com/merico-dev/lake/blob/main/plugins/gitextractor/README.md">Git</a>/<a href="https://github.com/merico-dev/lake/blob/main/plugins/github/README.md">GitHub</a>/<a href="https://github.com/merico-dev/lake/blob/main/plugins/gitlab/README.md">GitLab</a> commits</td>
+        <td>
+1. Identify the main reasons for the unusual number of commits and the possible impact on the number of commits through comparison
+<br/>2. Evaluate whether the number of commits is reasonable in conjunction with more microscopic workload metrics (e.g. lines of code/code equivalents)</td>
+        <td>1. Identify potential bottlenecks that may affect output
+<br/>2. Encourage R&D practices of small step submissions and develop excellent coding habits</td>
+    </tr>
+    <tr>
+        <td>Added Lines of Code</td>
+        <td>Accumulated number of added lines of code</td>
+        <td>Source Code Management entities: <a href="https://github.com/merico-dev/lake/blob/main/plugins/gitextractor/README.md">Git</a>/<a href="https://github.com/merico-dev/lake/blob/main/plugins/github/README.md">GitHub</a>/<a href="https://github.com/merico-dev/lake/blob/main/plugins/gitlab/README.md">GitLab</a> commits</td>
+        <td rowspan="2">
+1. From the project/team dimension, observe the accumulated change in Added lines to assess the team activity and code growth rate
+<br/>2. From version cycle dimension, observe the active time distribution of code changes, and evaluate the effectiveness of project development model.
+<br/>3. From the member dimension, observe the trend and stability of code output of each member, and identify the key points that affect code output by comparison.</td>
+        <td rowspan="2">1. identify potential bottlenecks that may affect the output
+<br/>2. Encourage the team to implement a development model that matches the business requirements; develop excellent coding habits</td>
+    </tr>
+    <tr>
+        <td>Deleted Lines of Code</td>
+        <td>Accumulated number of deleted lines of code</td>
+        <td>Source Code Management entities: <a href="https://github.com/merico-dev/lake/blob/main/plugins/gitextractor/README.md">Git</a>/<a href="https://github.com/merico-dev/lake/blob/main/plugins/github/README.md">GitHub</a>/<a href="https://github.com/merico-dev/lake/blob/main/plugins/gitlab/README.md">GitLab</a> commits</td>
+    </tr>
+    <tr>
+        <td>Pull Request Review Time</td>
+        <td>Time from Pull/Merge created time until merged</td>
+        <td>Source Code Management entities: <a href="https://github.com/merico-dev/lake/blob/main/plugins/github/README.md">GitHub</a> PRs, <a href="https://github.com/merico-dev/lake/blob/main/plugins/gitlab/README.md">GitLab</a> MRs, etc</td>
+        <td>
+1. Observe the mean and distribution of code review time from the project/team/individual dimension to assess the rationality of the review time</td>
+        <td>1. Take inventory of project/team code review resources to avoid lack of resources and backlog of review sessions, resulting in long waiting time
+<br/>2. Encourage teams to implement an efficient and responsive code review mechanism</td>
+    </tr>
+    <tr>
+        <td>Bug Age</td>
+        <td>Lead time of issues of type "Bug"</td>
+        <td>Issue/Task Management entities: <a href="https://github.com/merico-dev/lake/blob/main/plugins/jira/README.md">Jira issues</a>, <a href="https://github.com/merico-dev/lake/blob/main/plugins/github/README.md">GitHub issues</a>, etc</td>
+        <td rowspan="2">
+1. Observe the trend of bug age and locate the key reasons.<br/>
+2. According to the severity level, type (business, functional classification), affected module, source of bugs, count and observe the length of bug and incident age.</td>
+        <td rowspan="2">1. Help the team to establish an effective hierarchical response mechanism for bugs and incidents. Focus on the resolution of important problems in the backlog.<br/>
+2. Improve team's and individual's bug/incident fixing efficiency. Identify good/to-be-improved practices that affect bug age or incident age</td>
+    </tr>
+    <tr>
+        <td>Incident Age</td>
+        <td>Lead time of issues of type "Incident"</td>
+        <td>Issue/Task Management entities: <a href="https://github.com/merico-dev/lake/blob/main/plugins/jira/README.md">Jira issues</a>, <a href="https://github.com/merico-dev/lake/blob/main/plugins/github/README.md">GitHub issues</a>, etc</td>
+    </tr>
+    <tr>
+        <td rowspan="8">Delivery Quality</td>
+        <td>Pull Request Count</td>
+        <td>Number of Pull/Merge Requests</td>
+        <td>Source Code Management entities: <a href="https://github.com/merico-dev/lake/blob/main/plugins/github/README.md">GitHub</a> PRs, <a href="https://github.com/merico-dev/lake/blob/main/plugins/gitlab/README.md">GitLab</a> MRs, etc</td>
+        <td rowspan="3">
+1. From the developer dimension, we evaluate the code quality of developers by combining the task complexity with the metrics related to the number of review passes and review rounds.<br/>
+2. From the reviewer dimension, we observe the reviewer's review style by taking into account the task complexity, the number of passes and the number of review rounds.<br/>
+3. From the project/team dimension, we combine the project phase and team task complexity to aggregate the metrics related to the number of review passes and review rounds, and identify the modules with abnormal code review process and possible quality risks.</td>
+        <td rowspan="3">1. Code review metrics are process indicators to provide quick feedback on developers' code quality<br/>
+2. Promote the team to establish a unified coding specification and standardize the code review criteria<br/>
+3. Identify modules with low-quality risks in advance, optimize practices, and precipitate into reusable knowledge and tools to avoid technical debt accumulation</td>
+    </tr>
+    <tr>
+        <td>Pull Request Pass Rate</td>
+        <td>Ratio of merged Pull/Merge Requests to all Pull/Merge Requests</td>
+        <td>Source Code Management entities: <a href="https://github.com/merico-dev/lake/blob/main/plugins/github/README.md">GitHub</a> PRs, <a href="https://github.com/merico-dev/lake/blob/main/plugins/gitlab/README.md">GitLab</a> MRs, etc</td>
+    </tr>
+    <tr>
+        <td>Pull Request Review Rounds</td>
+        <td>Number of cycles of commits followed by comments/final merge</td>
+        <td>Source Code Management entities: <a href="https://github.com/merico-dev/lake/blob/main/plugins/github/README.md">GitHub</a> PRs, <a href="https://github.com/merico-dev/lake/blob/main/plugins/gitlab/README.md">GitLab</a> MRs, etc</td>
+    </tr>
+    <tr>
+        <td>Pull Request Review Count</td>
+        <td>Number of Pull/Merge Reviewers</td>
+        <td>Source Code Management entities: <a href="https://github.com/merico-dev/lake/blob/main/plugins/github/README.md">GitHub</a> PRs, <a href="https://github.com/merico-dev/lake/blob/main/plugins/gitlab/README.md">GitLab</a> MRs, etc</td>
+        <td>1. As a secondary indicator, assess the cost of labor invested in the code review process</td>
+        <td>1. Take inventory of project/team code review resources to avoid long waits for review sessions due to insufficient resource input</td>
+    </tr>
+    <tr>
+        <td>Bug Count</td>
+        <td>Number of bugs found during testing</td>
+        <td>Issue/Task Management entities: <a href="https://github.com/merico-dev/lake/blob/main/plugins/jira/README.md">Jira issues</a>, <a href="https://github.com/merico-dev/lake/blob/main/plugins/github/README.md">GitHub issues</a>, etc</td>
+        <td rowspan="4">
+1. From the project or team dimension, observe the statistics on the total number of defects, the distribution of the number of defects in each severity level/type/owner, the cumulative trend of defects, and the change trend of the defect rate in thousands of lines, etc.<br/>
+2. From version cycle dimension, observe the statistics on the cumulative trend of the number of defects/defect rate, which can be used to determine whether the growth rate of defects is slowing down, showing a flat convergence trend, and is an important reference for judging the stability of software version quality<br/>
+3. From the time dimension, analyze the trend of the number of test defects, defect rate to locate the key items/key points<br/>
+4. Evaluate whether the software quality and test plan are reasonable by referring to CMMI standard values</td>
+        <td rowspan="4">1. Defect drill-down analysis to inform the development of design and code review strategies and to improve the internal QA process<br/>
+2. Assist teams to locate projects/modules with higher defect severity and density, and clean up technical debts<br/>
+3. Analyze critical points, identify good/to-be-improved practices that affect defect count or defect rate, to reduce the amount of future defects</td>
+    </tr>
+    <tr>
+        <td>Incident Count</td>
+        <td>Number of Incidents found after shipping</td>
+        <td>Source Code Management entities: <a href="https://github.com/merico-dev/lake/blob/main/plugins/github/README.md">GitHub</a> PRs, <a href="https://github.com/merico-dev/lake/blob/main/plugins/gitlab/README.md">GitLab</a> MRs, etc</td>
+    </tr>
+    <tr>
+        <td>Bugs Count per 1k Lines of Code</td>
+        <td>Amount of bugs per 1,000 lines of code</td>
+        <td>Source Code Management entities: <a href="https://github.com/merico-dev/lake/blob/main/plugins/github/README.md">GitHub</a> PRs, <a href="https://github.com/merico-dev/lake/blob/main/plugins/gitlab/README.md">GitLab</a> MRs, etc</td>
+    </tr>
+    <tr>
+        <td>Incidents Count per 1k Lines of Code</td>
+        <td>Amount of incidents per 1,000 lines of code</td>
+        <td>Source Code Management entities: <a href="https://github.com/merico-dev/lake/blob/main/plugins/github/README.md">GitHub</a> PRs, <a href="https://github.com/merico-dev/lake/blob/main/plugins/gitlab/README.md">GitLab</a> MRs, etc</td>
+    </tr>
+    <tr>
+        <td>Delivery Cost</td>
+        <td>Commit Author Count</td>
+        <td>Number of Contributors who have committed code</td>
+        <td>Source Code Management entities: <a href="https://github.com/merico-dev/lake/blob/main/plugins/gitextractor/README.md">Git</a>/<a href="https://github.com/merico-dev/lake/blob/main/plugins/github/README.md">GitHub</a>/<a href="https://github.com/merico-dev/lake/blob/main/plugins/gitlab/README.md">GitLab</a> commits</td>
+        <td>1. As a secondary indicator, this helps assess the labor cost of participating in coding</td>
+        <td>1. Take inventory of project/team R&D resource inputs, assess input-output ratio, and rationalize resource deployment</td>
+    </tr>
+    <tr>
+        <td rowspan="3">Delivery Capability</td>
+        <td>Build Count</td>
+        <td>The number of builds started</td>
+        <td>CI/CD entities: <a href="https://github.com/merico-dev/lake/blob/main/plugins/jenkins/README.md">Jenkins</a> builds, <a href="https://github.com/merico-dev/lake/blob/main/plugins/gitlab/README.md">GitLab CI</a> pipelines, etc</td>
+        <td rowspan="3">1. From the project dimension, compare the number of builds and success rate by combining the project phase and the complexity of tasks<br/>
+2. From the time dimension, analyze the trend of the number of builds and success rate to see if it has improved over time</td>
+        <td rowspan="3">1. As a process indicator, it reflects the value flow efficiency of upstream production and research links<br/>
+2. Identify excellent/to-be-improved practices that impact the build, and drive the team to precipitate reusable tools and mechanisms to build infrastructure for fast and high-frequency delivery</td>
+    </tr>
+    <tr>
+        <td>Build Duration</td>
+        <td>The duration of successful builds</td>
+        <td>CI/CD entities: <a href="https://github.com/merico-dev/lake/blob/main/plugins/jenkins/README.md">Jenkins</a> builds, <a href="https://github.com/merico-dev/lake/blob/main/plugins/gitlab/README.md">GitLab CI</a> pipelines, etc</td>
+    </tr>
+    <tr>
+        <td>Build Success Rate</td>
+        <td>The percentage of successful builds</td>
+        <td>CI/CD entities: <a href="https://github.com/merico-dev/lake/blob/main/plugins/jenkins/README.md">Jenkins</a> builds, <a href="https://github.com/merico-dev/lake/blob/main/plugins/gitlab/README.md">GitLab CI</a> pipelines, etc</td>
+    </tr>
+</table>
+<br/><br/><br/>
diff --git a/versioned_docs/version-v0.11.0/Overview/Architecture.md b/versioned_docs/version-v0.11.0/Overview/Architecture.md
new file mode 100755
index 0000000..2d780a5
--- /dev/null
+++ b/versioned_docs/version-v0.11.0/Overview/Architecture.md
@@ -0,0 +1,39 @@
+---
+title: "Architecture"
+description: >
+  Understand the architecture of Apache DevLake
+sidebar_position: 2
+---
+
+## Architecture Overview
+
+<p align="center"><img src="/img/Architecture/arch-component.svg" /></p>
+<p align="center">DevLake Components</p>
+
+A DevLake installation typically consists of the following components:
+
+- Config UI: A handy user interface to create, trigger, and debug data pipelines.
+- API Server: The main programmatic interface of DevLake.
+- Runner: The runner does all the heavy lifting for executing tasks. In the default DevLake installation, it runs within the API Server, but DevLake also provides a Temporal-based runner (beta) for production environments.
+- Database: The database stores both DevLake's metadata and user data collected by data pipelines. DevLake supports MySQL and PostgreSQL as of v0.11.
+- Plugins: Plugins enable DevLake to collect and analyze dev data from any DevOps tools with an accessible API. DevLake community is actively adding plugins for popular DevOps tools, but if your preferred tool is not covered yet, feel free to open a GitHub issue to let us know or check out our doc on how to build a new plugin by yourself.
+- Dashboards: Dashboards deliver data and insights to DevLake users. A dashboard is simply a collection of SQL queries along with corresponding visualization configurations. DevLake's official dashboard tool is Grafana and pre-built dashboards are shipped in Grafana's JSON format. Users are welcome to swap for their own choice of dashboard/BI tool if desired.
+
+## Dataflow
+
+<p align="center"><img src="/img/Architecture/arch-dataflow.svg" /></p>
+<p align="center">DevLake Dataflow</p>
+
+A typical plugin's dataflow is illustrated below:
+
+1. The Raw layer stores the API responses from data sources (DevOps tools) in JSON. This saves developers' time if the raw data is to be transformed differently later on. Please note that communicating with data sources' APIs is usually the most time-consuming step.
+2. The Tool layer extracts raw data from JSONs into a relational schema that's easier to consume by analytical tasks. Each DevOps tool would have a schema that's tailored to their data structure, hence the name, the Tool layer.
+3. The Domain layer attempts to build a layer of abstraction on top of the Tool layer so that analytics logics can be re-used across different tools. For example, GitHub's Pull Request (PR) and GitLab's Merge Request (MR) are similar entities. They each have their own table name and schema in the Tool layer, but they're consolidated into a single entity in the Domain layer, so that developers only need to implement metrics like Cycle Time and Code Review Rounds once against the domain la [...]
+
+## Principles
+
+1. Extensible: DevLake's plugin system allows users to integrate with any DevOps tool. DevLake also provides a dbt plugin that enables users to define their own data transformation and analysis workflows.
+2. Portable: DevLake has a modular design and provides multiple options for each module. Users of different setups can freely choose the right configuration for themselves.
+3. Robust: DevLake provides an SDK to help plugins efficiently and reliably collect data from data sources while respecting their API rate limits and constraints.
+
+<br/>
diff --git a/versioned_docs/version-v0.11.0/Overview/Introduction.md b/versioned_docs/version-v0.11.0/Overview/Introduction.md
new file mode 100755
index 0000000..c8aacd9
--- /dev/null
+++ b/versioned_docs/version-v0.11.0/Overview/Introduction.md
@@ -0,0 +1,16 @@
+---
+title: "Introduction"
+description: General introduction of Apache DevLake
+sidebar_position: 1
+---
+
+## What is Apache DevLake?
+Apache DevLake is an open-source dev data platform that ingests, analyzes, and visualizes the fragmented data from DevOps tools to distill insights for engineering productivity.
+
+Apache DevLake is designed for developer teams looking to make better sense of their development process and to bring a more data-driven approach to their own practices. You can ask Apache DevLake many questions regarding your development process. Just connect and query.
+
+## What can be accomplished with DevLake?
+1. Collect DevOps data across the entire Software Development Life Cycle (SDLC) and connect the siloed data with a standard [data model](../DataModels/DevLakeDomainLayerSchema.md).
+2. Visualize out-of-the-box engineering [metrics](../EngineeringMetrics.md) in a series of use-case driven dashboards
+3. Easily extend DevLake to support your data sources, metrics, and dashboards with a flexible [framework](Architecture.md) for data collection and ETL.
+
diff --git a/versioned_docs/version-v0.11.0/Overview/Roadmap.md b/versioned_docs/version-v0.11.0/Overview/Roadmap.md
new file mode 100644
index 0000000..9dcf0b3
--- /dev/null
+++ b/versioned_docs/version-v0.11.0/Overview/Roadmap.md
@@ -0,0 +1,33 @@
+---
+title: "Roadmap"
+description: >
+  The goals and roadmap for DevLake in 2022
+sidebar_position: 3
+---
+
+
+## Goals
+DevLake has joined the Apache Incubator and is aiming to become a top-level project. To achieve this goal, the Apache DevLake (Incubating) community will continue to make efforts in helping development teams to analyze and improve their engineering productivity. In the 2022 Roadmap, we have summarized three major goals followed by the feature breakdown to invite the broader community to join us and grow together.
+
+1. As a dev data analysis application, discover and implement 3 (or even more!) usage scenarios:
+   - A collection of metrics to track the contribution, quality and growth of open-source projects
+   - DORA metrics for DevOps engineers
+   - To be decided ([let us know](https://join.slack.com/t/devlake-io/shared_invite/zt-17b6vuvps-x98pqseoUagM7EAmKC82xQ) if you have any suggestions!)
+2. As dev data infrastructure, provide robust data collection modules, customizable data models, and data extensibility.
+3. Design a better user experience for end-users and contributors.
+
+## Feature Breakdown
+Apache DevLake is currently under rapid development. You are more than welcome to use the following table to explore the features you are interested in and make contributions. We deeply appreciate the collective effort of our community to make this project possible!
+
+| Category | Features|
+| --- | --- |
+| More data sources across different [DevOps domains](../DataModels/DevLakeDomainLayerSchema.md) (Goal No.1 & 2)| Features in **bold** are of higher priority <br/><br/> Issue/Task Management: <ul><li>**Jira server** [#886 (closed)](https://github.com/apache/incubator-devlake/issues/886)</li><li>**Jira data center** [#1687 (closed)](https://github.com/apache/incubator-devlake/issues/1687)</li><li>GitLab Issues [#715 (closed)](https://github.com/apache/incubator-devlake/issues/715)</li><li [...]
+| Improved data collection, [data models](../DataModels/DevLakeDomainLayerSchema.md) and data extensibility (Goal No.2)| Data Collection: <br/> <ul><li>Complete the logging system</li><li>Implement a good error handling mechanism during data collection</li></ul> Data Models:<ul><li>Introduce DBT to allow users to create and modify the domain layer schema. [#1479 (closed)](https://github.com/apache/incubator-devlake/issues/1479)</li><li>Design the data models for 5 new domains, please ref [...]
+| Better user experience (Goal No.3) | For new users: <ul><li> Iterate on a clearer step-by-step guide to improve the pre-configuration experience.</li><li>Provide a new Config UI to reduce frictions for data configuration [#1700 (in-progress)](https://github.com/apache/incubator-devlake/issues/1700)</li><li> Showcase dashboard live demos to let users explore and learn about the dashboards. [#1784 (open)](https://github.com/apache/incubator-devlake/issues/1784)</li></ul>For returning use [...]
+
+
+## How to Influence the Roadmap
+A roadmap is only useful when it captures real user needs. We are glad to hear from you if you have specific use cases, feedback, or ideas. You can submit an issue to let us know!
+Also, if you plan to work (or are already working) on a new or existing feature, tell us, so that we can update the roadmap accordingly. We are happy to share knowledge and context to help your feature land successfully.
+<br/><br/><br/>
+
diff --git a/versioned_docs/version-v0.11.0/Overview/_category_.json b/versioned_docs/version-v0.11.0/Overview/_category_.json
new file mode 100644
index 0000000..e224ed8
--- /dev/null
+++ b/versioned_docs/version-v0.11.0/Overview/_category_.json
@@ -0,0 +1,4 @@
+{
+  "label": "Overview",
+  "position": 1
+}
diff --git a/versioned_docs/version-v0.11.0/Plugins/_category_.json b/versioned_docs/version-v0.11.0/Plugins/_category_.json
new file mode 100644
index 0000000..534bad8
--- /dev/null
+++ b/versioned_docs/version-v0.11.0/Plugins/_category_.json
@@ -0,0 +1,4 @@
+{
+  "label": "Plugins",
+  "position": 7
+}
diff --git a/versioned_docs/version-v0.11.0/Plugins/dbt.md b/versioned_docs/version-v0.11.0/Plugins/dbt.md
new file mode 100644
index 0000000..059bf12
--- /dev/null
+++ b/versioned_docs/version-v0.11.0/Plugins/dbt.md
@@ -0,0 +1,67 @@
+---
+title: "DBT"
+description: >
+  DBT Plugin
+---
+
+
+## Summary
+
+dbt (data build tool) enables analytics engineers to transform data in their warehouses by simply writing select statements. dbt handles turning these select statements into tables and views.
+dbt does the T in ELT (Extract, Load, Transform) processes – it doesn’t extract or load data, but it’s extremely good at transforming data that’s already loaded into your warehouse.
+
+## User setup<a id="user-setup"></a>
+- If you plan to use this plugin, you need to install a few prerequisites first.
+
+#### Required Packages to Install<a id="user-setup-requirements"></a>
+- [python3.7+](https://www.python.org/downloads/)
+- [dbt-mysql](https://pypi.org/project/dbt-mysql/#configuring-your-profile)
+
+#### Commands to run or create in your terminal and the dbt project<a id="user-setup-commands"></a>
+1. `pip install dbt-mysql`
+2. `dbt init demoapp` (`demoapp` is the project name)
+3. Create your SQL transformations and data models
+
+## Convert Data By DBT
+
+Use the Raw JSON API to manually initiate a run using **cURL** or a graphical API tool such as **Postman**. `POST` the following request to the DevLake API Endpoint.
+
+```json
+[
+  [
+    {
+      "plugin": "dbt",
+      "options": {
+          "projectPath": "/Users/abeizn/demoapp",
+          "projectName": "demoapp",
+          "projectTarget": "dev",
+          "selectedModels": ["my_first_dbt_model","my_second_dbt_model"],
+          "projectVars": {
+            "demokey1": "demovalue1",
+            "demokey2": "demovalue2"
+        }
+      }
+    }
+  ]
+]
+```
+
+- `projectPath`: the absolute path of the dbt project. (required)
+- `projectName`: the name of the dbt project. (required)
+- `projectTarget`: this is the default target your dbt project will use. (optional)
+- `selectedModels`: a model is a select statement. Models are defined in `.sql` files, typically in your models directory. (required)
+`selectedModels` accepts one or more arguments. Each argument can be one of:
+1. a package name: runs all models in your project, e.g. `example`
+2. a model name: runs a specific model, e.g. `my_first_dbt_model`
+3. a fully-qualified path to a directory of models
+
+- `projectVars`: variables to parametrize dbt models. (optional)
+For example, given a model such as:
+`select * from events where event_type = '{{ var("event_type") }}'`
+you need to set a value for `event_type` in order to execute this SQL query in your model.
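+
+A hedged sketch of the corresponding `projectVars` entry (the key `event_type` and its value `page_view` are hypothetical, chosen only to match the model above):
+
+```
+"projectVars": {
+    "event_type": "page_view"
+}
+```
+
+With an entry like this in place, dbt would resolve `{{ var("event_type") }}` to `page_view` at run time.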
+
+### Resources:
+- Learn more about dbt [in the docs](https://docs.getdbt.com/docs/introduction)
+- Check out [Discourse](https://discourse.getdbt.com/) for commonly asked questions and answers
+
+<br/><br/><br/>
diff --git a/versioned_docs/version-v0.11.0/Plugins/feishu.md b/versioned_docs/version-v0.11.0/Plugins/feishu.md
new file mode 100644
index 0000000..c3e0eb6
--- /dev/null
+++ b/versioned_docs/version-v0.11.0/Plugins/feishu.md
@@ -0,0 +1,64 @@
+---
+title: "Feishu"
+description: >
+  Feishu Plugin
+---
+
+## Summary
+
+This plugin collects Feishu meeting data through [Feishu Openapi](https://open.feishu.cn/document/home/user-identity-introduction/introduction).
+
+## Configuration
+
+In order to fully use this plugin, you will need to get app_id and app_secret from a Feishu administrator (for help on App info, please see [official Feishu Docs](https://open.feishu.cn/document/ukTMukTMukTM/ukDNz4SO0MjL5QzM/auth-v3/auth/tenant_access_token_internal)),
+then set these two parameters via DevLake's `.env` file.
+
+### By `.env`
+
+The connection aspect of the configuration screen requires the following key fields to connect to the Feishu API. As Feishu is a single-source data provider at the moment, the connection name is read-only as there is only one instance to manage. As we continue our development roadmap we may enable multi-source connections for Feishu in the future.
+
+```
+FEISHU_APPID=app_id
+FEISHU_APPSCRECT=app_secret
+```
+
+## Collect data from Feishu
+
+To collect data, select `Advanced Mode` on the `Create Pipeline Run` page and paste a JSON config like the following:
+
+
+```json
+[
+  [
+    {
+      "plugin": "feishu",
+      "options": {
+        "numOfDaysToCollect" : 80,
+        "rateLimitPerSecond" : 5
+      }
+    }
+  ]
+]
+```
+
+> `numOfDaysToCollect`: The number of days of data you want to collect
+
+> `rateLimitPerSecond`: The number of requests to send per second (maximum is 8)
+
+You can also trigger data collection by making a POST request to `/pipelines`.
+```
+curl --location --request POST 'localhost:8080/pipelines' \
+--header 'Content-Type: application/json' \
+--data-raw '
+{
+    "name": "feishu 20211126",
+    "tasks": [[{
+      "plugin": "feishu",
+      "options": {
+        "numOfDaysToCollect" : 80,
+        "rateLimitPerSecond" : 5
+      }
+    }]]
+}
+'
+```
\ No newline at end of file
diff --git a/versioned_docs/version-v0.11.0/Plugins/gitee.md b/versioned_docs/version-v0.11.0/Plugins/gitee.md
new file mode 100644
index 0000000..6066fd2
--- /dev/null
+++ b/versioned_docs/version-v0.11.0/Plugins/gitee.md
@@ -0,0 +1,112 @@
+---
+title: "Gitee(WIP)"
+description: >
+  Gitee Plugin
+---
+
+## Summary
+
+## Configuration
+
+### Provider (Datasource) Connection
+The connection aspect of the configuration screen requires the following key fields to connect to the **Gitee API**. As gitee is a _single-source data provider_ at the moment, the connection name is read-only as there is only one instance to manage. As we continue our development roadmap we may enable _multi-source_ connections for gitee in the future.
+
+- **Connection Name** [`READONLY`]
+    - ⚠️ Defaults to "**Gitee**" and may not be changed.
+- **Endpoint URL** (REST URL, starts with `https://` or `http://`)
+    - This should be a valid REST API endpoint, e.g. `https://gitee.com/api/v5/`
+    - ⚠️ The URL should end with `/`
+- **Auth Token(s)** (Personal Access Token)
+    - For help on **creating a personal access token**, please refer to Gitee's documentation.
+    - Provide at least one token for authentication. This field accepts a comma-separated list of values for multiple tokens. Data collection will take longer for Gitee since it has a **rate limit of 2k requests per hour**. You can accelerate the process by configuring _multiple_ personal access tokens.
+
+For API requests, use `Basic Authentication` or `OAuth`.
+
+If you need a higher API rate limit, you can set multiple tokens in the config file and all of them will be used.
+
+For an overview of the **Gitee REST API**, please see the official [Gitee Docs on REST](https://gitee.com/api/v5/swagger)
+
+Click **Save Connection** to update connection settings.
+
+
+### Provider (Datasource) Settings
+Manage additional settings and options for the gitee Datasource Provider. Currently there is only one **optional** setting, *Proxy URL*. If you are behind a corporate firewall or VPN you may need to utilize a proxy server.
+
+**gitee Proxy URL [ `Optional`]**
+Enter a valid proxy server address on your Network, e.g. `http://your-proxy-server.com:1080`
+
+Click **Save Settings** to update additional settings.
+
+### Regular Expression Configuration
+Define a regex pattern in `.env`:
+- `GITEE_PR_BODY_CLOSE_PATTERN`: defines the keyword(s) used to associate issues in the PR body; please check the example in `.env.example`
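+
+A sketch of such an entry in `.env` (the pattern below is illustrative only — it matches phrases like `fixes #123` — and the canonical value lives in `.env.example`):
+
+```
+GITEE_PR_BODY_CLOSE_PATTERN=(?mi)(fix|close|resolve)[sd]?\s+#(\d+)
+```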
+
+## Sample Request
+In order to collect data, you have to compose a JSON payload like the following one and send it by selecting `Advanced Mode` on the `Create Pipeline Run` page:
+1. Configure-UI Mode
+```json
+[
+  [
+    {
+      "plugin": "gitee",
+      "options": {
+        "repo": "lake",
+        "owner": "merico-dev"
+      }
+    }
+  ]
+]
+```
+If you only want to perform certain subtasks:
+```json
+[
+  [
+    {
+      "plugin": "gitee",
+      "subtasks": ["collectXXX", "extractXXX", "convertXXX"],
+      "options": {
+        "repo": "lake",
+        "owner": "merico-dev"
+      }
+    }
+  ]
+]
+```
+
+2. Curl Mode:
+   You can also trigger data collection by making a POST request to `/pipelines`.
+```
+curl --location --request POST 'localhost:8080/pipelines' \
+--header 'Content-Type: application/json' \
+--data-raw '
+{
+    "name": "gitee 20211126",
+    "tasks": [[{
+        "plugin": "gitee",
+        "options": {
+            "repo": "lake",
+            "owner": "merico-dev"
+        }
+    }]]
+}
+'
+```
+If you only want to perform certain subtasks:
+```
+curl --location --request POST 'localhost:8080/pipelines' \
+--header 'Content-Type: application/json' \
+--data-raw '
+{
+    "name": "gitee 20211126",
+    "tasks": [[{
+        "plugin": "gitee",
+        "subtasks": ["collectXXX", "extractXXX", "convertXXX"],
+        "options": {
+            "repo": "lake",
+            "owner": "merico-dev"
+        }
+    }]]
+}
+'
+```
diff --git a/versioned_docs/version-v0.11.0/Plugins/gitextractor.md b/versioned_docs/version-v0.11.0/Plugins/gitextractor.md
new file mode 100644
index 0000000..ae3fecb
--- /dev/null
+++ b/versioned_docs/version-v0.11.0/Plugins/gitextractor.md
@@ -0,0 +1,63 @@
+---
+title: "GitExtractor"
+description: >
+  GitExtractor Plugin
+---
+
+## Summary
+This plugin extracts commits and references from a remote or local git repository. It then saves the data into the database or CSV files.
+
+## Steps to make this plugin work
+
+1. Use the Git repo extractor to retrieve data about commits and branches from your repository.
+2. Use the GitHub plugin to retrieve data about Github issues and PRs from your repository.
+NOTE: you can run only one issue collection stage as described in the Github Plugin README.
+3. Use the [RefDiff](./refdiff.md) plugin to calculate version diff, which will be stored in `refs_commits_diffs` table.
+
+## Sample Request
+
+```
+curl --location --request POST 'localhost:8080/pipelines' \
+--header 'Content-Type: application/json' \
+--data-raw '
+{
+    "name": "git repo extractor",
+    "tasks": [
+        [
+            {
+                "Plugin": "gitextractor",
+                "Options": {
+                    "url": "https://github.com/merico-dev/lake.git",
+                    "repoId": "github:GithubRepo:384111310"
+                }
+            }
+        ]
+    ]
+}
+'
+```
+- `url`: the location of the git repository. It should start with `http`/`https` for a remote git repository and with `/` for a local one.
+- `repoId`: the value of column `id` in the `repos` table.
+- `proxy`: optional, http proxy, e.g. `http://your-proxy-server.com:1080`.
+- `user`: optional, for cloning private repository using HTTP/HTTPS
+- `password`: optional, for cloning private repository using HTTP/HTTPS
+- `privateKey`: optional, for SSH cloning, base64 encoded `PEM` file
+- `passphrase`: optional, passphrase for the private key
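+
+As a hedged sketch, a task cloning a private repository over HTTPS through a proxy might combine these options as follows (all credential and ID values below are placeholders):
+
+```json
+{
+    "Plugin": "gitextractor",
+    "Options": {
+        "url": "https://github.com/your-org/your-private-repo.git",
+        "repoId": "github:GithubRepo:123456789",
+        "user": "your-username",
+        "password": "your-personal-access-token",
+        "proxy": "http://your-proxy-server.com:1080"
+    }
+}
+```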
+
+
+## Standalone Mode
+
+You can also run this plugin in standalone mode, without any DevLake service running, using the following command:
+
+```
+go run plugins/gitextractor/main.go -url https://github.com/merico-dev/lake.git -id github:GithubRepo:384111310 -db "merico:merico@tcp(127.0.0.1:3306)/lake?charset=utf8mb4&parseTime=True"
+```
+
+For more options (e.g., saving to a csv file instead of a db), please read `plugins/gitextractor/main.go`.
+
+## Development
+
+This plugin depends on `libgit2`; you need to install version 1.3.0 in order to run and debug this plugin on your local
+machine. [Click here](./refdiff.md#Development) for a brief guide.
+
+<br/><br/><br/>
diff --git a/versioned_docs/version-v0.11.0/Plugins/github-connection-in-config-ui.png b/versioned_docs/version-v0.11.0/Plugins/github-connection-in-config-ui.png
new file mode 100644
index 0000000..5359fb1
Binary files /dev/null and b/versioned_docs/version-v0.11.0/Plugins/github-connection-in-config-ui.png differ
diff --git a/versioned_docs/version-v0.11.0/Plugins/github.md b/versioned_docs/version-v0.11.0/Plugins/github.md
new file mode 100644
index 0000000..cca87b7
--- /dev/null
+++ b/versioned_docs/version-v0.11.0/Plugins/github.md
@@ -0,0 +1,95 @@
+---
+title: "GitHub"
+description: >
+  GitHub Plugin
+---
+
+
+
+## Summary
+
+This plugin gathers data from `GitHub` to display information to the user in `Grafana`. We can help tech leaders answer such questions as:
+
+- Is this month more productive than last?
+- How fast do we respond to customer requirements?
+- Was our quality improved or not?
+
+## Metrics
+
+Here are some example metrics using `GitHub` data:
+- Avg Requirement Lead Time By Assignee
+- Bug Count per 1k Lines of Code
+- Commit Count over Time
+
+## Screenshot
+
+![image](/img/Plugins/github-demo.png)
+
+
+## Configuration
+
+### Provider (Datasource) Connection
+The connection section of the configuration screen requires the following key fields to connect to the **GitHub API**.
+
+![connection-in-config-ui](github-connection-in-config-ui.png)
+
+- **Connection Name** [`READONLY`]
+  - ⚠️ Defaults to "**Github**" and may not be changed. As GitHub is a _single-source data provider_ at the moment, the connection name is read-only as there is only one instance to manage. As we advance on our development roadmap we may enable _multi-source_ connections for GitHub in the future.
+- **Endpoint URL** (REST URL, starts with `https://` or `http://`)
+  - This should be a valid REST API endpoint, e.g. `https://api.github.com/`
+  - ⚠️ The URL should end with `/`
+- **Auth Token(s)** (Personal Access Token)
+  - For help on **Creating a personal access token**, please see official [GitHub Docs on Personal Tokens](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/creating-a-personal-access-token)
+  - Provide at least one token for Authentication.
+  - This field accepts a comma-separated list of values for multiple tokens. The data collection will take longer for GitHub since they have a **rate limit of [5,000 requests](https://docs.github.com/en/rest/overview/resources-in-the-rest-api#rate-limiting) per hour** (15,000 requests/hour if you pay for `GitHub` enterprise). You can accelerate the process by configuring _multiple_ personal access tokens.
+
+Click **Save Connection** to update connection settings.
+
+
+### Provider (Datasource) Settings
+Manage additional settings and options for the GitHub Datasource Provider. Currently there is only one **optional** setting, *Proxy URL*. If you are behind a corporate firewall or VPN you may need to utilize a proxy server.
+
+- **GitHub Proxy URL [`Optional`]**
+Enter a valid proxy server address on your Network, e.g. `http://your-proxy-server.com:1080`
+
+Click **Save Settings** to update additional settings.
+
+### Regular Expression Configuration
+Define a regex pattern in `.env`:
+- `GITHUB_PR_BODY_CLOSE_PATTERN`: defines the keyword(s) used to associate issues in the PR body; please check the example in `.env.example`
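+
+A sketch of such an entry in `.env` (the pattern below is illustrative only — it matches phrases like `closes #42` — and the canonical value lives in `.env.example`):
+
+```
+GITHUB_PR_BODY_CLOSE_PATTERN=(?mi)(fix|close|resolve)[sd]?\s+#(\d+)
+```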
+
+## Sample Request
+To collect data, select `Advanced Mode` on the `Create Pipeline Run` page and paste a JSON config like the following:
+
+```json
+[
+  [
+    {
+      "plugin": "github",
+      "options": {
+        "repo": "lake",
+        "owner": "merico-dev"
+      }
+    }
+  ]
+]
+```
+
+You can also trigger data collection by making a POST request to `/pipelines`.
+```
+curl --location --request POST 'localhost:8080/pipelines' \
+--header 'Content-Type: application/json' \
+--data-raw '
+{
+    "name": "github 20211126",
+    "tasks": [[{
+        "plugin": "github",
+        "options": {
+            "repo": "lake",
+            "owner": "merico-dev"
+        }
+    }]]
+}
+'
+```
+<br/><br/><br/>
diff --git a/versioned_docs/version-v0.11.0/Plugins/gitlab-connection-in-config-ui.png b/versioned_docs/version-v0.11.0/Plugins/gitlab-connection-in-config-ui.png
new file mode 100644
index 0000000..7aacee8
Binary files /dev/null and b/versioned_docs/version-v0.11.0/Plugins/gitlab-connection-in-config-ui.png differ
diff --git a/versioned_docs/version-v0.11.0/Plugins/gitlab.md b/versioned_docs/version-v0.11.0/Plugins/gitlab.md
new file mode 100644
index 0000000..21a86d7
--- /dev/null
+++ b/versioned_docs/version-v0.11.0/Plugins/gitlab.md
@@ -0,0 +1,94 @@
+---
+title: "GitLab"
+description: >
+  GitLab Plugin
+---
+
+
+## Metrics
+
+| Metric Name                 | Description                                                  |
+|:----------------------------|:-------------------------------------------------------------|
+| Pull Request Count          | Number of Pull/Merge Requests                                |
+| Pull Request Pass Rate      | Ratio of Pull/Merge Review requests to merged                |
+| Pull Request Reviewer Count | Number of Pull/Merge Reviewers                               |
+| Pull Request Review Time    | Time from Pull/Merge created time until merged               |
+| Commit Author Count         | Number of Contributors                                       |
+| Commit Count                | Number of Commits                                            |
+| Added Lines                 | Accumulated Number of New Lines                              |
+| Deleted Lines               | Accumulated Number of Removed Lines                          |
+| Pull Request Review Rounds  | Number of cycles of commits followed by comments/final merge |
+
+## Configuration
+
+### Provider (Datasource) Connection
+The connection section of the configuration screen requires the following key fields to connect to the **GitLab API**.
+
+![connection-in-config-ui](gitlab-connection-in-config-ui.png)
+
+- **Connection Name** [`READONLY`]
+  - ⚠️ Defaults to "**GitLab**" and may not be changed. As GitLab is a _single-source data provider_ at the moment, the connection name is read-only as there is only one instance to manage. As we advance on our development roadmap we may enable _multi-source_ connections for GitLab in the future.
+- **Endpoint URL** (REST URL, starts with `https://` or `http://`)
+  - This should be a valid REST API endpoint, e.g. `https://gitlab.example.com/api/v4/`
+  - ⚠️ The URL should end with `/`
+- **Personal Access Token** (HTTP Basic Auth)
+  - Login to your GitLab Account and create a **Personal Access Token** to authenticate with the API using HTTP Basic Authentication. The token must be 20 characters long. Save the personal access token somewhere safe. After you leave the page, you no longer have access to the token.
+
+    1. In the top-right corner, select your **avatar**.
+    2. Click on **Edit profile**.
+    3. On the left sidebar, select **Access Tokens**.
+    4. Enter a **name** and optional **expiry date** for the token.
+    5. Select the desired **scopes**.
+    6. Click on **Create personal access token**.
+
+    For help on **Creating a personal access token**, please see official [GitLab Docs on Personal Tokens](https://docs.gitlab.com/ee/user/profile/personal_access_tokens.html).
+    For an overview of the **GitLab REST API**, please see official [GitLab Docs on REST](https://docs.gitlab.com/ee/development/documentation/restful_api_styleguide.html#restful-api)
+
+Click **Save Connection** to update connection settings.
+
+### Provider (Datasource) Settings
+There are no additional settings for the GitLab Datasource Provider at this time.
+
+> NOTE: `GitLab Project ID` Mappings feature has been deprecated.
+
+## Gathering Data with GitLab
+
+To collect data, you can make a POST request to `/pipelines`
+
+```
+curl --location --request POST 'localhost:8080/pipelines' \
+--header 'Content-Type: application/json' \
+--data-raw '
+{
+    "name": "gitlab 20211126",
+    "tasks": [[{
+        "plugin": "gitlab",
+        "options": {
+            "projectId": <Your gitlab project id>
+        }
+    }]]
+}
+'
+```
+
+## Finding Project Id
+
+To get the project id for a specific `GitLab` repository:
+- Visit the repository page on GitLab
+- Find the project id just below the title
+
+  ![Screen Shot 2021-08-06 at 4 32 53 PM](https://user-images.githubusercontent.com/3789273/128568416-a47b2763-51d8-4a6a-8a8b-396512bffb03.png)
+
+> Use this project id in your requests, to collect data from this project
+
+## ⚠️ (WIP) Create a GitLab API Token <a id="gitlab-api-token"></a>
+
+1. When logged into `GitLab` visit `https://gitlab.com/-/profile/personal_access_tokens`
+2. Give the token any name, no expiration date and all scopes (excluding write access)
+
+    ![Screen Shot 2021-08-06 at 4 44 01 PM](https://user-images.githubusercontent.com/3789273/128569148-96f50d4e-5b3b-4110-af69-a68f8d64350a.png)
+
+3. Click the **Create Personal Access Token** button
+4. Save the API token into the `.env` file via `config-ui` or edit the file directly.
+
+<br/><br/><br/>
diff --git a/versioned_docs/version-v0.11.0/Plugins/jenkins.md b/versioned_docs/version-v0.11.0/Plugins/jenkins.md
new file mode 100644
index 0000000..792165d
--- /dev/null
+++ b/versioned_docs/version-v0.11.0/Plugins/jenkins.md
@@ -0,0 +1,59 @@
+---
+title: "Jenkins"
+description: >
+  Jenkins Plugin
+---
+
+## Summary
+
+This plugin collects Jenkins data through [Remote Access API](https://www.jenkins.io/doc/book/using/remote-access-api/). It then computes and visualizes various DevOps metrics from the Jenkins data.
+
+![image](https://user-images.githubusercontent.com/61080/141943122-dcb08c35-cb68-4967-9a7c-87b63c2d6988.png)
+
+## Metrics
+
+| Metric Name        | Description                         |
+|:-------------------|:------------------------------------|
+| Build Count        | The number of builds created        |
+| Build Success Rate | The percentage of successful builds |
+
+## Configuration
+
+In order to fully use this plugin, you will need to set various configurations via DevLake's `config-ui`.
+
+### By `config-ui`
+
+The connection section of the configuration screen requires the following key fields to connect to the Jenkins API.
+
+- Connection Name [READONLY]
+  - ⚠️ Defaults to "Jenkins" and may not be changed. As Jenkins is a _single-source data provider_ at the moment, the connection name is read-only as there is only one instance to manage. As we advance on our development roadmap we may enable multi-source connections for Jenkins in the future.
+- Endpoint URL (REST URL; starts with `https://` or `http://`, ends with `/`)
+  - This should be a valid REST API Endpoint eg. `https://ci.jenkins.io/`
+- Username (E-mail)
+  - Your User ID for the Jenkins Instance.
+- Password (Secret Phrase or API Access Token)
+  - Secret password for common credentials.
+  - For help on Username and Password, please see official Jenkins Docs on Using Credentials
+  - Or you can use **API Access Token** for this field, which can be generated at `User` -> `Configure` -> `API Token` section on Jenkins.
+
+Click Save Connection to update connection settings.
+
+## Collect Data From Jenkins
+
+To collect data, select `Advanced Mode` on the `Create Pipeline Run` page and paste a JSON config like the following:
+
+```json
+[
+  [
+    {
+      "plugin": "jenkins",
+      "options": {}
+    }
+  ]
+]
+```
+
+## Relationship between job and build
+
+A build is a snapshot of a job: each run of a job creates a new build.
+<br/><br/><br/>
diff --git a/versioned_docs/version-v0.11.0/Plugins/jira-connection-config-ui.png b/versioned_docs/version-v0.11.0/Plugins/jira-connection-config-ui.png
new file mode 100644
index 0000000..df2e8e3
Binary files /dev/null and b/versioned_docs/version-v0.11.0/Plugins/jira-connection-config-ui.png differ
diff --git a/versioned_docs/version-v0.11.0/Plugins/jira-more-setting-in-config-ui.png b/versioned_docs/version-v0.11.0/Plugins/jira-more-setting-in-config-ui.png
new file mode 100644
index 0000000..dffb0c9
Binary files /dev/null and b/versioned_docs/version-v0.11.0/Plugins/jira-more-setting-in-config-ui.png differ
diff --git a/versioned_docs/version-v0.11.0/Plugins/jira.md b/versioned_docs/version-v0.11.0/Plugins/jira.md
new file mode 100644
index 0000000..8ac28d6
--- /dev/null
+++ b/versioned_docs/version-v0.11.0/Plugins/jira.md
@@ -0,0 +1,253 @@
+---
+title: "Jira"
+description: >
+  Jira Plugin
+---
+
+
+## Summary
+
+This plugin collects Jira data through Jira Cloud REST API. It then computes and visualizes various engineering metrics from the Jira data.
+
+<img width="2035" alt="jira metric display" src="https://user-images.githubusercontent.com/2908155/132926143-7a31d37f-22e1-487d-92a3-cf62e402e5a8.png" />
+
+## Project Metrics This Covers
+
+| Metric Name                         | Description                                                                                       |
+|:------------------------------------|:--------------------------------------------------------------------------------------------------|
+| Requirement Count	                  | Number of issues with type "Requirement"                                                          |
+| Requirement Lead Time	              | Lead time of issues with type "Requirement"                                                       |
+| Requirement Delivery Rate           | Ratio of delivered requirements to all requirements                                               |
+| Requirement Granularity             | Number of story points associated with an issue                                                   |
+| Bug Count	                          | Number of issues with type "Bug"<br/><i>bugs are found during testing</i>                         |
+| Bug Age	                          | Lead time of issues with type "Bug"                                                               |
+| Bugs Count per 1k Lines of Code     | Amount of bugs per 1000 lines of code<br/><i>both new and deleted lines count</i>                 |
+| Incident Count                      | Number of issues with type "Incident"<br/><i>incidents are found when running in production</i>   |
+| Incident Age                        | Lead time of issues with type "Incident"                                                          |
+| Incident Count per 1k Lines of Code | Amount of incidents per 1000 lines of code                                                        |
+
+## Configuration
+
+In order to fully use this plugin, you will need to set various configurations via DevLake's `config-ui` service. Open `config-ui` in a browser (by default the URL is http://localhost:4000), then go to the **Data Integrations / JIRA** page. The JIRA plugin currently supports multiple data connections; here you can **add** a new JIRA connection or **update** the settings if needed.
+
+For each connection, you will need to set up following items first:
+
+![connection at config ui](jira-connection-config-ui.png)
+
+- Connection Name: This allows you to distinguish different connections.
+- Endpoint URL: The JIRA instance API endpoint, for JIRA Cloud Service: `https://<mydomain>.atlassian.net/rest`. DevLake officially supports JIRA Cloud Service on atlassian.net, but may or may not work for JIRA Server Instance.
+- Basic Auth Token: First, generate a **JIRA API TOKEN** for your JIRA account on the JIRA console (see [Generating API token](#generating-api-token)), then, in `config-ui` click the KEY icon on the right side of the input to generate a full `HTTP BASIC AUTH` token for you.
+- Proxy URL: only needed when you want to collect data through a proxy or VPN.
+
+### More custom configuration
+If you want to add more custom configuration, you can click **Settings** to change the following options:
+![More config in config ui](jira-more-setting-in-config-ui.png)
+- Issue Type Mapping: JIRA is highly customizable, and each JIRA instance may have a different set of issue types from others. In order to compute and visualize metrics across different instances, you need to map your issue types to standard ones. See [Issue Type Mapping](#issue-type-mapping) for details.
+- Epic Key: unfortunately, the epic relationship in JIRA is implemented via a `custom field`, which varies from instance to instance. Please see [Find Out Custom Fields](#find-out-custom-fields).
+- Story Point Field: same as Epic Key.
+- Remotelink Commit SHA: a regular expression that matches commit links, used to determine whether an external link is a link to a commit. Taking GitLab as an example, to match all commits similar to https://gitlab.com/merico-dev/ce/example-repository/-/commit/8ab8fb319930dbd8615830276444b8545fd0ad24, you can use the regular expression **/commit/([0-9a-f]{40})$**
+
+
+### Generating API token
+1. Once logged into Jira, visit the url `https://id.atlassian.com/manage-profile/security/api-tokens`
+2. Click the **Create API Token** button, and give it any label name
+![image](https://user-images.githubusercontent.com/27032263/129363611-af5077c9-7a27-474a-a685-4ad52366608b.png)
+
+
+### Issue Type Mapping
+
+DevLake supports 3 standard types; all metrics are computed based on these types:
+
+ - `Bug`: Problems found during the `test` phase, before they can reach the production environment.
+ - `Incident`: Problems that slipped through the `test` phase and were only found after deployment to the production environment.
+ - `Requirement`: Normally, it would be `Story` on your instance if you adopted SCRUM.
+
+You can map any of **YOUR OWN ISSUE TYPES** to a single **STANDARD ISSUE TYPE**. Normally, one would map `Story` to `Requirement`, but you could map both `Story` and `Task` to `Requirement` if that fits your case. Unspecified types are copied over directly for your convenience, so you don't need to map your `Bug` to the standard `Bug`.
+
+Type mapping is critical for some metrics, such as **Requirement Count**, so make sure to map your custom types correctly.
+
+### Find Out Custom Field
+
+Please follow this guide: [How to find the custom field ID in Jira?](https://github.com/apache/incubator-devlake/wiki/How-to-find-the-custom-field-ID-in-Jira)
+
+
+## Collect Data From JIRA
+
+To collect data, select `Advanced Mode` on the `Create Pipeline Run` page and paste a JSON config like the following:
+
+> <font color="#ED6A45">Warning: Data collection only supports single-task execution, and the results of concurrent multi-task execution may not meet expectations.</font>
+
+```
+[
+  [
+    {
+      "plugin": "jira",
+      "options": {
+          "connectionId": 1,
+          "boardId": 8,
+          "since": "2006-01-02T15:04:05Z"
+      }
+    }
+  ]
+]
+```
+
+- `connectionId`: The `ID` field from **JIRA Integration** page.
+- `boardId`: JIRA board id, see "Find Board Id" for details.
+- `since`: optional, download data since a specified date only.
+
+
+### Find Board Id
+
+1. Navigate to the Jira board in the browser
+2. in the URL bar, get the board id from the parameter `?rapidView=`
+
+**Example:**
+
+`https://{your_jira_endpoint}/secure/RapidBoard.jspa?rapidView=51`
+
+![Screenshot](https://user-images.githubusercontent.com/27032263/129363083-df0afa18-e147-4612-baf9-d284a8bb7a59.png)
+
+Your board id is used in all REST requests to Apache DevLake. You do not need to configure this at the data connection level.
+
+
+
+## API
+
+### Data Connections
+
+1. Get all data connections
+
+```GET /plugins/jira/connections
+[
+  {
+    "ID": 14,
+    "CreatedAt": "2021-10-11T11:49:19.029Z",
+    "UpdatedAt": "2021-10-11T11:49:19.029Z",
+    "name": "test-jira-connection",
+    "endpoint": "https://merico.atlassian.net/rest",
+    "basicAuthEncoded": "basicAuth",
+    "epicKeyField": "epicKeyField",
+    "storyPointField": "storyPointField"
+  }
+]
+```
+
+2. Create a new data connection
+
+```POST /plugins/jira/connections
+{
+	"name": "jira data connection name",
+	"endpoint": "jira api endpoint, i.e. https://merico.atlassian.net/rest",
+    "basicAuthEncoded": "generated by `echo -n {jira login email}:{jira token} | base64`",
+	"epicKeyField": "name of customfield of epic key",
+	"storyPointField": "name of customfield of story point",
+	"typeMappings": { // optional, send empty object to delete all typeMappings of the data connection
+		"userType": {
+			"standardType": "devlake standard type"
+		}
+	}
+}
+```
+
+
+3. Update data connection
+
+```PUT /plugins/jira/connections/:connectionId
+{
+	"name": "jira data connection name",
+	"endpoint": "jira api endpoint, i.e. https://merico.atlassian.net/rest",
+    "basicAuthEncoded": "generated by `echo -n {jira login email}:{jira token} | base64`",
+	"epicKeyField": "name of customfield of epic key",
+	"storyPointField": "name of customfield of story point",
+	"typeMappings": { // optional, send empty object to delete all typeMappings of the data connection
+		"userType": {
+			"standardType": "devlake standard type"
+		}
+	}
+}
+```
+
+4. Get data connection detail
+```GET /plugins/jira/connections/:connectionId
+{
+	"name": "jira data connection name",
+	"endpoint": "jira api endpoint, i.e. https://merico.atlassian.net/rest",
+    "basicAuthEncoded": "generated by `echo -n {jira login email}:{jira token} | base64`",
+	"epicKeyField": "name of customfield of epic key",
+	"storyPointField": "name of customfield of story point",
+	"typeMappings": { // optional, send empty object to delete all typeMappings of the data connection
+		"userType": {
+			"standardType": "devlake standard type"
+		}
+	}
+}
+```
+
+5. Delete data connection
+
+```DELETE /plugins/jira/connections/:connectionId
+```
+
+
+### Type mappings
+
+1. Get all type mappings
+```GET /plugins/jira/connections/:connectionId/type-mappings
+[
+  {
+    "jiraConnectionId": 16,
+    "userType": "userType",
+    "standardType": "standardType"
+  }
+]
+```
+
+2. Create a new type mapping
+
+```POST /plugins/jira/connections/:connectionId/type-mappings
+{
+    "userType": "userType",
+    "standardType": "standardType"
+}
+```
+
+3. Update type mapping
+
+```PUT /plugins/jira/connections/:connectionId/type-mapping/:userType
+{
+    "standardType": "standardTypeUpdated"
+}
+```
+
+
+4. Delete type mapping
+
+```DELETE /plugins/jira/connections/:connectionId/type-mapping/:userType
+```
+
+5. API forwarding
+For example:
+Requests to `http://your_devlake_host/plugins/jira/connections/1/proxy/rest/agile/1.0/board/8/sprint`
+would be forwarded to `https://your_jira_host/rest/agile/1.0/board/8/sprint`
+
+```GET /plugins/jira/connections/:connectionId/proxy/rest/*path
+{
+    "maxResults": 1,
+    "startAt": 0,
+    "isLast": false,
+    "values": [
+        {
+            "id": 7,
+            "self": "https://merico.atlassian.net/rest/agile/1.0/sprint/7",
+            "state": "closed",
+            "name": "EE Sprint 7",
+            "startDate": "2020-06-12T00:38:51.882Z",
+            "endDate": "2020-06-26T00:38:00.000Z",
+            "completeDate": "2020-06-22T05:59:58.980Z",
+            "originBoardId": 8,
+            "goal": ""
+        }
+    ]
+}
+```
diff --git a/versioned_docs/version-v0.11.0/Plugins/refdiff.md b/versioned_docs/version-v0.11.0/Plugins/refdiff.md
new file mode 100644
index 0000000..12950f4
--- /dev/null
+++ b/versioned_docs/version-v0.11.0/Plugins/refdiff.md
@@ -0,0 +1,116 @@
+---
+title: "RefDiff"
+description: >
+  RefDiff Plugin
+---
+
+
+## Summary
+
+For development workload analysis, we often need to know how many commits have been created between 2 releases. This plugin calculates which commits differ between two refs (branches/tags) and stores the result back into the database for further analysis.
+
+## Important Note
+
+You need to run the `gitextractor` plugin before the `refdiff` plugin, as `gitextractor` creates the records in the `refs` table that this plugin depends on.
+
+## Configuration
+
+This is an enrichment plugin based on Domain Layer data; no configuration is needed.
+
+## How to use
+
+In order to trigger the enrichment, you need to insert a new task into your pipeline.
+
+1. Make sure `commits` and `refs` have been collected into your database; the `refs` table should contain records like the following:
+```
+id                                            ref_type
+github:GithubRepo:384111310:refs/tags/0.3.5   TAG
+github:GithubRepo:384111310:refs/tags/0.3.6   TAG
+github:GithubRepo:384111310:refs/tags/0.5.0   TAG
+github:GithubRepo:384111310:refs/tags/v0.0.1  TAG
+github:GithubRepo:384111310:refs/tags/v0.2.0  TAG
+github:GithubRepo:384111310:refs/tags/v0.3.0  TAG
+github:GithubRepo:384111310:refs/tags/v0.4.0  TAG
+github:GithubRepo:384111310:refs/tags/v0.6.0  TAG
+github:GithubRepo:384111310:refs/tags/v0.6.1  TAG
+```
+2. If you want to run `calculateIssuesDiff`, please configure `GITHUB_PR_BODY_CLOSE_PATTERN` in `.env`; you can check the example in `.env.example` (a default value is provided; please make sure your pattern is enclosed in single quotes '')
+3. If you want to run `calculatePrCherryPick`, please configure `GITHUB_PR_TITLE_PATTERN` in `.env`; you can check the example in `.env.example` (a default value is provided; please make sure your pattern is enclosed in single quotes '')
+4. Then trigger a pipeline like the following. You can also define subtasks: `calculateCommitsDiff` calculates the commits between two refs, `calculateIssuesDiff` finds the issues fixed between two refs, and `calculatePrCherryPick` detects cherry-picked PRs:
+```
+curl -v -XPOST http://localhost:8080/pipelines --data @- <<'JSON'
+{
+    "name": "test-refdiff",
+    "tasks": [
+        [
+            {
+                "plugin": "refdiff",
+                "options": {
+                    "repoId": "github:GithubRepo:384111310",
+                    "pairs": [
+                       { "newRef": "refs/tags/v0.6.0", "oldRef": "refs/tags/0.5.0" },
+                       { "newRef": "refs/tags/0.5.0", "oldRef": "refs/tags/0.4.0" }
+                    ],
+                    "tasks": [
+                        "calculateCommitsDiff",
+                        "calculateIssuesDiff",
+                        "calculatePrCherryPick"
+                    ]
+                }
+            }
+        ]
+    ]
+}
+JSON
+```
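+
+The `.env` entries mentioned in steps 2 and 3 would look something like the following (the patterns here are illustrative only; copy the actual defaults from `.env.example`):

```
# Illustrative values only; see .env.example for the real defaults.
GITHUB_PR_BODY_CLOSE_PATTERN='(fix|close|resolve)s? #\d+'
GITHUB_PR_TITLE_PATTERN='cherry-pick from #\d+'
```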
+
+## Development
+
+This plugin depends on `libgit2`, you need to install version 1.3.0 in order to run and debug this plugin on your local
+machine.
+
+### Ubuntu
+
+```
+apt install cmake
+git clone https://github.com/libgit2/libgit2.git
+cd libgit2
+git checkout v1.3.0
+mkdir build
+cd build
+cmake ..
+make
+make install
+```
+
+### MacOS
+1. [MacPorts](https://guide.macports.org/#introduction) install
+```
+port install libgit2@1.3.0
+```
+2. Source install
+```
+brew install cmake
+git clone https://github.com/libgit2/libgit2.git
+cd libgit2
+git checkout v1.3.0
+mkdir build
+cd build
+cmake ..
+make
+make install
+```
+
+#### Troubleshooting (MacOS)
+
+> Q: I got an error saying: `pkg-config: exec: "pkg-config": executable file not found in $PATH`
+
+> A:
+> 1. Make sure you have pkg-config installed:
+>
+> `brew install pkg-config`
+>
+> 2. Make sure your pkg config path covers the installation:
+> `export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/local/lib:/usr/local/lib/pkgconfig`
+
+<br/><br/><br/>
diff --git a/versioned_docs/version-v0.11.0/Plugins/tapd.md b/versioned_docs/version-v0.11.0/Plugins/tapd.md
new file mode 100644
index 0000000..b8db89f
--- /dev/null
+++ b/versioned_docs/version-v0.11.0/Plugins/tapd.md
@@ -0,0 +1,16 @@
+---
+title: "TAPD"
+description: >
+  TAPD Plugin
+---
+
+## Summary
+
+This plugin collects TAPD data.
+
+This plugin is in development so you can't modify settings in config-ui.
+
+## Configuration
+
+In order to fully use this plugin, you will need to obtain the `endpoint`, `basic_auth_encoded` and `rate_limit` values and insert them into the `_tool_tapd_connections` table.
+
diff --git a/versioned_docs/version-v0.11.0/QuickStart/KubernetesSetup.md b/versioned_docs/version-v0.11.0/QuickStart/KubernetesSetup.md
new file mode 100644
index 0000000..e4faeba
--- /dev/null
+++ b/versioned_docs/version-v0.11.0/QuickStart/KubernetesSetup.md
@@ -0,0 +1,33 @@
+---
+title: "Kubernetes Setup"
+description: >
+  The steps to install Apache DevLake in Kubernetes
+sidebar_position: 2
+---
+
+
+We provide a sample [k8s-deploy.yaml](https://github.com/apache/incubator-devlake/blob/main/k8s-deploy.yaml) for users interested in deploying Apache DevLake on a k8s cluster.
+
+[k8s-deploy.yaml](https://github.com/apache/incubator-devlake/blob/main/k8s-deploy.yaml) will create a namespace `devlake` on your k8s cluster, and use `nodePort 30004` for `config-ui` and `nodePort 30002` for `grafana` dashboards. If you would like to use a specific version of Apache DevLake, please update the image tags of the `grafana`, `devlake` and `config-ui` services to a version like `v0.10.1`.
+
+## Step-by-step guide
+
+1. Download [k8s-deploy.yaml](https://github.com/apache/incubator-devlake/blob/main/k8s-deploy.yaml) to local machine
+2. Some key points:
+   - `config-ui` deployment:
+     * `GRAFANA_ENDPOINT`: FQDN of grafana service which can be reached from user's browser
+     * `DEVLAKE_ENDPOINT`: FQDN of devlake service which can be reached within the k8s cluster; normally you don't need to change it unless the namespace was changed
+     * `ADMIN_USER`/`ADMIN_PASS`: Not required, but highly recommended
+   - `devlake-config` config map:
+     * `MYSQL_USER`: shared between `mysql` and `grafana` service
+     * `MYSQL_PASSWORD`: shared between `mysql` and `grafana` service
+     * `MYSQL_DATABASE`: shared between `mysql` and `grafana` service
+     * `MYSQL_ROOT_PASSWORD`: set root password for `mysql`  service
+   - `devlake` deployment:
+     * `DB_URL`: update this value if  `MYSQL_USER`, `MYSQL_PASSWORD` or `MYSQL_DATABASE` were changed
+3. The `devlake` deployment stores its configuration in `/app/.env`. In our sample yaml, we use a `hostPath` volume, so please make sure the directory `/var/lib/devlake` exists on your k8s workers, or employ other techniques to persist the `/app/.env` file. Please do NOT mount the entire `/app` directory, because plugins are located in the `/app/bin` folder.
+4. Finally, execute the following command, and Apache DevLake should be up and running:
+    ```sh
+    kubectl apply -f k8s-deploy.yaml
+    ```
+<br/><br/><br/>
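
The `devlake-config` values called out in step 2 correspond to a ConfigMap fragment along these lines (an illustrative sketch; field names and values should be checked against the actual `k8s-deploy.yaml`):

```
# Illustrative fragment; refer to the real k8s-deploy.yaml for full context.
apiVersion: v1
kind: ConfigMap
metadata:
  name: devlake-config
  namespace: devlake
data:
  MYSQL_USER: merico
  MYSQL_PASSWORD: merico
  MYSQL_DATABASE: lake
  MYSQL_ROOT_PASSWORD: root
```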
diff --git a/versioned_docs/version-v0.11.0/QuickStart/LocalSetup.md b/versioned_docs/version-v0.11.0/QuickStart/LocalSetup.md
new file mode 100644
index 0000000..5ae0e0e
--- /dev/null
+++ b/versioned_docs/version-v0.11.0/QuickStart/LocalSetup.md
@@ -0,0 +1,44 @@
+---
+title: "Local Setup"
+description: >
+  The steps to install DevLake locally
+sidebar_position: 1
+---
+
+
+## Prerequisites
+
+- [Docker v19.03.10+](https://docs.docker.com/get-docker)
+- [docker-compose v2.2.3+](https://docs.docker.com/compose/install/)
+
+## Launch DevLake
+
+- Commands written `like this` are to be run in your terminal.
+
+1. Download `docker-compose.yml` and `env.example` from [latest release page](https://github.com/apache/incubator-devlake/releases/latest) into a folder.
+2. Rename `env.example` to `.env`. For Mac/Linux users, please run `mv env.example .env` in the terminal.
+3. Run `docker-compose up -d` to launch DevLake.
+
+## Configure data connections and collect data
+
+1. Visit `config-ui` at `http://localhost:4000` in your browser to configure data connections.
+   - Navigate to desired plugins on the Integrations page
+   - Please reference the following for more details on how to configure each one:<br/>
+      - [Jira](../Plugins/jira.md)
+      - [GitHub](../Plugins/github.md): For users who'd like to collect GitHub data, we recommend reading our [GitHub data collection guide](../UserManuals/GitHubUserGuide.md) which covers the following steps in detail.
+      - [GitLab](../Plugins/gitlab.md)
+      - [Jenkins](../Plugins/jenkins.md)
+   - Submit the form to update the values by clicking on the **Save Connection** button on each form page
+   - `devlake` takes a while to fully boot up. If `config-ui` complains about the API being unreachable, please wait a few seconds and try refreshing the page.
+2. Create pipelines to trigger data collection in `config-ui`
+3. Click the *View Dashboards* button in the top left when done, or visit `localhost:3002` (username: `admin`, password: `admin`).
+   - We use [Grafana](https://grafana.com/) as a visualization tool to build charts for the [data](../DataModels/DataSupport.md) stored in our database.
+   - Using SQL queries, we can add panels to build, save, and edit customized dashboards.
+   - All the details on provisioning and customizing a dashboard can be found in the [Grafana Doc](../UserManuals/GrafanaUserGuide.md).
+4. To synchronize data periodically, set up recurring pipelines with DevLake's [pipeline blueprint](../UserManuals/RecurringPipelines.md).
+
+## Upgrade to a newer version
+
+Support for database schema migration was introduced to DevLake in v0.10.0. From v0.10.0 onwards, users can smoothly upgrade their instance to a newer version. However, versions prior to v0.10.0 do not support upgrading to a newer version with a different database schema; we recommend deploying a new instance if needed.
+
+<br/>
diff --git a/versioned_docs/version-v0.11.0/QuickStart/_category_.json b/versioned_docs/version-v0.11.0/QuickStart/_category_.json
new file mode 100644
index 0000000..133c30f
--- /dev/null
+++ b/versioned_docs/version-v0.11.0/QuickStart/_category_.json
@@ -0,0 +1,4 @@
+{
+  "label": "Quick Start",
+  "position": 2
+}
diff --git a/versioned_docs/version-v0.11.0/UserManuals/AdvancedMode.md b/versioned_docs/version-v0.11.0/UserManuals/AdvancedMode.md
new file mode 100644
index 0000000..4323133
--- /dev/null
+++ b/versioned_docs/version-v0.11.0/UserManuals/AdvancedMode.md
@@ -0,0 +1,89 @@
+---
+title: "Advanced Mode"
+sidebar_position: 2
+description: >
+  Advanced Mode
+---
+
+
+## Why advanced mode?
+
+Advanced mode allows users to create any pipeline by writing JSON. This is useful for users who want to:
+
+1. Collect multiple GitHub/GitLab repos or Jira projects within a single pipeline
+2. Have fine-grained control over what entities to collect or what subtasks to run for each plugin
+3. Orchestrate a complex pipeline that consists of multiple stages of plugins.
+
+Advanced mode gives the most flexibility to users by exposing the JSON API.
+
+## How to use advanced mode to create pipelines?
+
+1. Visit the "Create Pipeline Run" page on `config-ui`
+
+![image](https://user-images.githubusercontent.com/2908155/164569669-698da2f2-47c1-457b-b7da-39dfa7963e09.png)
+
+2. Scroll to the bottom and toggle on the "Advanced Mode" button
+
+![image](https://user-images.githubusercontent.com/2908155/164570039-befb86e2-c400-48fe-8867-da44654194bd.png)
+
+3. The pipeline editor expects a 2D array of plugins. The first dimension represents the different stages of the pipeline and the second dimension describes the plugins in each stage. Stages run in sequential order and plugins within the same stage run in parallel. We provide some templates for users to get started. Please also see the next section for some examples.
+
+![image](https://user-images.githubusercontent.com/2908155/164576122-fc015fea-ca4a-48f2-b2f5-6f1fae1ab73c.png)
+
+## Examples
+
+1. Collect multiple GitLab repos sequentially.
+
+>When there are multiple collection tasks against a single data source, we recommend running these tasks sequentially, since the collection speed is mostly limited by the API rate limit of the data source.
+>Running multiple tasks against the same data source is unlikely to speed up the process and may overwhelm the data source.
+
+
+Below is an example of collecting 2 GitLab repos sequentially. It has 2 stages, each containing a GitLab task.
+
+
+```
+[
+  [
+    {
+      "Plugin": "gitlab",
+      "Options": {
+        "projectId": 15238074
+      }
+    }
+  ],
+  [
+    {
+      "Plugin": "gitlab",
+      "Options": {
+        "projectId": 11624398
+      }
+    }
+  ]
+]
+```
+
+
+2. Collect a GitHub repo and a Jira board in parallel
+
+Below is an example of collecting a GitHub repo and a Jira board in parallel. It has a single stage with a GitHub task and a Jira task. Since users can configure multiple Jira connections, the Jira task requires a `connectionId` to specify which connection to use.
+
+```
+[
+  [
+    {
+      "Plugin": "github",
+      "Options": {
+        "repo": "lake",
+        "owner": "merico-dev"
+      }
+    },
+    {
+      "Plugin": "jira",
+      "Options": {
+        "connectionId": 1,
+        "boardId": 76
+      }
+    }
+  ]
+]
+```
diff --git a/versioned_docs/version-v0.11.0/UserManuals/GitHubUserGuide.md b/versioned_docs/version-v0.11.0/UserManuals/GitHubUserGuide.md
new file mode 100644
index 0000000..fa67456
--- /dev/null
+++ b/versioned_docs/version-v0.11.0/UserManuals/GitHubUserGuide.md
@@ -0,0 +1,118 @@
+---
+title: "GitHub User Guide"
+sidebar_position: 4
+description: >
+  GitHub User Guide
+---
+
+## Summary
+
+GitHub has a rate limit of 5,000 API calls per hour for their REST API.
+As a result, it may take hours to collect commits data from GitHub API for a repo that has 10,000+ commits.
+To accelerate the process, DevLake introduces GitExtractor, a new plugin that collects git data by cloning the git repo instead of by calling GitHub APIs.
+
+Starting from v0.10.0, DevLake will collect GitHub data in 2 separate plugins:
+
+- GitHub plugin (via GitHub API): collect repos, issues, pull requests
+- GitExtractor (via cloning repos):  collect commits, refs
+
+Note that GitLab plugin still collects commits via API by default since GitLab has a much higher API rate limit.
+
+This doc details the process of collecting GitHub data in v0.10.0. We're working on simplifying this process in the next releases.
+
+Before starting, please make sure all services are up.
+
+## GitHub Data Collection Procedure
+
+There are 3 required steps and 1 optional step.
+
+1. Configure GitHub connection
+2. Create a pipeline to run GitHub plugin
+3. Create a pipeline to run GitExtractor plugin
+4. [Optional] Set up a recurring pipeline to keep data fresh
+
+### Step 1 - Configure GitHub connection
+
+1. Visit `config-ui` at `http://localhost:4000` and click the GitHub icon
+
+2. Click the default connection 'Github' in the list
+    ![image](https://user-images.githubusercontent.com/14050754/163591959-11d83216-057b-429f-bb35-a9d845b3de5a.png)
+
+3. Configure connection by providing your GitHub API endpoint URL and your personal access token(s).
+    ![image](https://user-images.githubusercontent.com/14050754/163592015-b3294437-ce39-45d6-adf6-293e620d3942.png)
+
+- Endpoint URL: Leave this unchanged if you're using github.com. Otherwise replace it with your own GitHub instance's REST API endpoint URL. This URL should end with '/'.
+- Auth Token(s): Fill in your personal access tokens(s). For how to generate personal access tokens, please see GitHub's [official documentation](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/creating-a-personal-access-token).
+You can provide multiple tokens to speed up the data collection process; simply concatenate tokens with commas.
+- GitHub Proxy URL: This is optional. Enter a valid proxy server address on your Network, e.g. http://your-proxy-server.com:1080
+
+4. Click 'Test Connection' to verify it works, then click 'Save Connection'.
+
+5. [Optional] Help DevLake understand your GitHub data by customizing data enrichment rules shown below.
+    ![image](https://user-images.githubusercontent.com/14050754/163592506-1873bdd1-53cb-413b-a528-7bda440d07c5.png)
+
+   1. Pull Request Enrichment Options
+
+      1. `Type`: for PRs with a label that matches the given regular expression, the `type` property will be set to the value of the first submatch. For example, with Type set to `type/(.*)$`, a PR labeled `type/bug` gets its `type` set to `bug`, and one labeled `type/doc` gets `doc`.
+      2. `Component`: Same as above, but for `component` property.
+
+   2. Issue Enrichment Options
+
+      1. `Severity`: Same as above, but for `issue.severity`.
+
+      2. `Component`: Same as above.
+
+      3. `Priority`: Same as above.
+
+      4. **Requirement**: issues with a label that matches the given regular expression will have their `type` set to `REQUIREMENT`. Unlike `PR.type`, submatches do nothing here, because for issue management analysis people tend to focus on 3 types (Requirement/Bug/Incident); since the concrete naming varies from repo to repo and time to time, we decided to standardize them to help analysts build general-purpose metrics.
+
+      5. **Bug**: Same as above, with `type` set to `BUG`
+
+      6. **Incident**: Same as above, with `type` set to `INCIDENT`
+
+6. Click 'Save Settings'
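+
+The first-submatch behavior described above can be sketched with a shell one-liner (purely illustrative; DevLake applies the regular expression internally):

```shell
# Extract the first submatch of `type/(.*)$` from a PR label,
# mirroring how the Type enrichment rule derives `type` from a label.
echo 'type/bug' | sed -E 's|^type/(.*)$|\1|'
# prints: bug
```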
+
+### Step 2 - Create a pipeline to collect GitHub data
+
+1. Select 'Pipelines > Create Pipeline Run' from `config-ui`
+
+![image](https://user-images.githubusercontent.com/14050754/163592542-8b9d86ae-4f16-492c-8f90-12f1e90c5772.png)
+
+2. Toggle on GitHub plugin, enter the repo you'd like to collect data from.
+
+![image](https://user-images.githubusercontent.com/14050754/163592606-92141c7e-e820-4644-b2c9-49aa44f10871.png)
+
+3. Click 'Run Pipeline'
+
+You'll be redirected to newly created pipeline:
+
+![image](https://user-images.githubusercontent.com/14050754/163592677-268e6b77-db3f-4eec-8a0e-ced282f5a361.png)
+
+
+Wait until the pipeline finishes (progress 100%):
+
+![image](https://user-images.githubusercontent.com/14050754/163592709-cce0d502-92e9-4c19-8504-6eb521b76169.png)
+
+### Step 3 - Create a pipeline to run GitExtractor plugin
+
+1. Enable the `GitExtractor` plugin, enter your `Git URL`, and select the `Repository ID` from the dropdown menu.
+
+![image](https://user-images.githubusercontent.com/2908155/164125950-37822d7f-6ee3-425d-8523-6f6b6213cb89.png)
+
+2. Click 'Run Pipeline' and wait until it's finished.
+
+3. Click `View Dashboards` on the top left corner of `config-ui`; the default username and password of Grafana are both `admin`.
+
+![image](https://user-images.githubusercontent.com/61080/163666814-e48ac68d-a0cc-4413-bed7-ba123dd291c8.png)
+
+4. See dashboards populated with GitHub data.
+
+### Step 4 - [Optional] Set up a recurring pipeline to keep data fresh
+
+Please see [How to create recurring pipelines](./RecurringPipelines.md) for details.
+
+
+
+
+
+
diff --git a/versioned_docs/version-v0.11.0/UserManuals/GrafanaUserGuide.md b/versioned_docs/version-v0.11.0/UserManuals/GrafanaUserGuide.md
new file mode 100644
index 0000000..e475702
--- /dev/null
+++ b/versioned_docs/version-v0.11.0/UserManuals/GrafanaUserGuide.md
@@ -0,0 +1,120 @@
+---
+title: "Grafana User Guide"
+sidebar_position: 1
+description: >
+  Grafana User Guide
+---
+
+
+# Grafana
+
+<img src="https://user-images.githubusercontent.com/3789273/128533901-3107e9bf-c3e3-4320-ba47-879fe2b0ea4d.png" width="450px" />
+
+When first visiting Grafana, you will be provided with a sample dashboard with some basic charts set up from the database.
+
+## Contents
+
+Section | Link
+:------------ | :-------------
+Logging In | [View Section](#logging-in)
+Viewing All Dashboards | [View Section](#viewing-all-dashboards)
+Customizing a Dashboard | [View Section](#customizing-a-dashboard)
+Dashboard Settings | [View Section](#dashboard-settings)
+Provisioning a Dashboard | [View Section](#provisioning-a-dashboard)
+Troubleshooting DB Connection | [View Section](#troubleshooting-db-connection)
+
+## Logging In<a id="logging-in"></a>
+
+Once the app is up and running, visit `http://localhost:3002` to view the Grafana dashboard.
+
+Default login credentials are:
+
+- Username: `admin`
+- Password: `admin`
+
+## Viewing All Dashboards<a id="viewing-all-dashboards"></a>
+
+To see all dashboards created in Grafana visit `/dashboards`
+
+Or, use the sidebar and click on **Manage**:
+
+![Screen Shot 2021-08-06 at 11 27 08 AM](https://user-images.githubusercontent.com/3789273/128534617-1992c080-9385-49d5-b30f-be5c96d5142a.png)
+
+
+## Customizing a Dashboard<a id="customizing-a-dashboard"></a>
+
+When viewing a dashboard, click the top bar of a panel, and go to **edit**
+
+![Screen Shot 2021-08-06 at 11 35 36 AM](https://user-images.githubusercontent.com/3789273/128535505-a56162e0-72ad-46ac-8a94-70f1c7a910ed.png)
+
+**Edit Dashboard Panel Page:**
+
+![grafana-sections](https://user-images.githubusercontent.com/3789273/128540136-ba36ee2f-a544-4558-8282-84a7cb9df27a.png)
+
+### 1. Preview Area
+- **Top Left** is the variable select area (custom dashboard variables, used for switching projects, or grouping data)
+- **Top Right** we have a toolbar with some buttons related to the display of the data:
+  - View data results in a table
+  - Time range selector
+  - Refresh data button
+- **The Main Area** will display the chart and should update in real time
+
+> Note: Data should refresh automatically, but may require a refresh using the button in some cases
+
+### 2. Query Builder
+Here we form the SQL query to pull data into our chart, from our database
+- Ensure the **Data Source** is the correct database
+
+  ![Screen Shot 2021-08-06 at 10 14 22 AM](https://user-images.githubusercontent.com/3789273/128545278-be4846e0-852d-4bc8-8994-e99b79831d8c.png)
+
+- Select **Format as Table**, and **Edit SQL** buttons to write/edit queries as SQL
+
+  ![Screen Shot 2021-08-06 at 10 17 52 AM](https://user-images.githubusercontent.com/3789273/128545197-a9ff9cb3-f12d-4331-bf6a-39035043667a.png)
+
+- The **Main Area** is where the queries are written, and in the top right is the **Query Inspector** button (to inspect returned data)
+
+  ![Screen Shot 2021-08-06 at 10 18 23 AM](https://user-images.githubusercontent.com/3789273/128545557-ead5312a-e835-4c59-b9ca-dd5c08f2a38b.png)
+
+### 3. Main Panel Toolbar
+In the top right of the window are buttons for:
+- Dashboard settings (regarding entire dashboard)
+- Save/apply changes (to specific panel)
+
+### 4. Grafana Parameter Sidebar
+- Change chart style (bar/line/pie chart etc)
+- Edit legends, chart parameters
+- Modify chart styling
+- Other Grafana specific settings
+
+## Dashboard Settings<a id="dashboard-settings"></a>
+
+When viewing a dashboard click on the settings icon to view dashboard settings. Here are 2 important sections to use:
+
+![Screen Shot 2021-08-06 at 1 51 14 PM](https://user-images.githubusercontent.com/3789273/128555763-4d0370c2-bd4d-4462-ae7e-4b140c4e8c34.png)
+
+- Variables
+  - Create variables to use throughout the dashboard panels; these are also built on SQL queries
+
+  ![Screen Shot 2021-08-06 at 2 02 40 PM](https://user-images.githubusercontent.com/3789273/128553157-a8e33042-faba-4db4-97db-02a29036e27c.png)
+
+- JSON Model
+  - Copy `json` code here and save it to a new file in `/grafana/dashboards/` with a unique name in the `lake` repo. This will allow us to persist dashboards when we load the app
+
+  ![Screen Shot 2021-08-06 at 2 02 52 PM](https://user-images.githubusercontent.com/3789273/128553176-65a5ae43-742f-4abf-9c60-04722033339e.png)
+
+## Provisioning a Dashboard<a id="provisioning-a-dashboard"></a>
+
+To save a dashboard in the `lake` repo and load it:
+
+1. Create a dashboard in browser (visit `/dashboard/new`, or use sidebar)
+2. Save dashboard (in top right of screen)
+3. Go to dashboard settings (in top right of screen)
+4. Click on _JSON Model_ in sidebar
+5. Copy code into a new `.json` file in `/grafana/dashboards`
+
+## Troubleshooting DB Connection<a id="troubleshooting-db-connection"></a>
+
+To ensure we have properly connected our database to the data source in Grafana, check database settings in `./grafana/datasources/datasource.yml`, specifically:
+- `database`
+- `user`
+- `secureJsonData/password`
diff --git a/versioned_docs/version-v0.11.0/UserManuals/RecurringPipelines.md b/versioned_docs/version-v0.11.0/UserManuals/RecurringPipelines.md
new file mode 100644
index 0000000..ce82b1e
--- /dev/null
+++ b/versioned_docs/version-v0.11.0/UserManuals/RecurringPipelines.md
@@ -0,0 +1,30 @@
+---
+title: "Recurring Pipelines"
+sidebar_position: 3
+description: >
+  Recurring Pipelines
+---
+
+## How to create recurring pipelines?
+
+Once you've verified that a pipeline works, most likely you'll want to run it periodically to keep data fresh, and DevLake's pipeline blueprint feature has you covered.
+
+
+1. Click 'Create Pipeline Run' and
+  - Toggle the plugins you'd like to run, here we use GitHub and GitExtractor plugin as an example
+  - Toggle on Automate Pipeline
+    ![image](https://user-images.githubusercontent.com/14050754/163596590-484e4300-b17e-4119-9818-52463c10b889.png)
+
+
+2. Click 'Add Blueprint'. Fill in the form and 'Save Blueprint'.
+
+    - **NOTE**: The schedule syntax is standard unix cron syntax; [Crontab.guru](https://crontab.guru/) is a useful reference
+    - **IMPORTANT**: The scheduler runs in the `UTC` timezone. If you want data collection to happen at 3 AM New York time (UTC-04:00) every day, use **Custom Schedule** and set it to `0 7 * * *`
+
+    ![image](https://user-images.githubusercontent.com/14050754/163596655-db59e154-405f-4739-89f2-7dceab7341fe.png)
+
+3. Click 'Save Blueprint'.
+
+4. Click 'Pipeline Blueprints', you can view and edit the new blueprint in the blueprint list.
+
+    ![image](https://user-images.githubusercontent.com/14050754/163596773-4fb4237e-e3f2-4aef-993f-8a1499ca30e2.png)
\ No newline at end of file
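
The UTC conversion in the blueprint note above can be double-checked with GNU `date` (assumes GNU coreutils and the tzdata database are available):

```shell
# Confirm that 3 AM America/New_York in summer (UTC-04:00) is 7 AM UTC,
# i.e. the cron schedule `0 7 * * *` fires at 3 AM New York time.
date -u -d 'TZ="America/New_York" 2022-07-01 03:00' '+%H:%M'
# prints: 07:00
```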
diff --git a/versioned_docs/version-v0.11.0/UserManuals/TeamConfiguration.md b/versioned_docs/version-v0.11.0/UserManuals/TeamConfiguration.md
new file mode 100644
index 0000000..4646ffa
--- /dev/null
+++ b/versioned_docs/version-v0.11.0/UserManuals/TeamConfiguration.md
@@ -0,0 +1,129 @@
+---
+title: "Team Configuration"
+sidebar_position: 6
+description: >
+  Team Configuration
+---
+## Summary
+This is a brief step-by-step guide to using the team feature.
+
+Notes: 
+1. Please replace `/xxxpath/*.csv` with the absolute path of the csv file you want to upload.
+2. Please replace `127.0.0.1:8080` in the text with the actual IP and port.
+
+## Step 1 - Construct the teams table.
+a. Example API request; you can generate sample data.
+
+    i.  GET request: http://127.0.0.1:8080/plugins/org/teams.csv?fake_data=true (paste it into the browser to download the corresponding csv file)
+
+    ii. The corresponding curl command:
+        curl --location --request GET 'http://127.0.0.1:8080/plugins/org/teams.csv?fake_data=true'
+    
+
+b. The actual api request.
+
+    i.  Create the corresponding teams file: teams.csv 
+    (Notes: 1. The table field names should have initial capital letters. 2. Be careful not to change the file suffix when opening csv files through a tool.)
+
+    ii. The corresponding curl command(Quick copy folder path for macOS, Shortcut option + command + c):
+    curl --location --request PUT 'http://127.0.0.1:8080/plugins/org/teams.csv' --form 'file=@"/xxxpath/teams.csv"'
+
+    iii. After successful execution, the teams table is generated and the data can be seen in the database table teams. 
+    (Notes: to connect to the database, use the host, port, username and password with a SQL tool such as Sequel Ace or DataGrip; you can also connect from the command line: mysql -h `ip` -u `username` -p -P `port`)
+
+![image](/img/Team/teamflow3.png)
+
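
A minimal sketch of preparing and uploading a `teams.csv` from the shell. The column headers and rows below are assumptions for illustration; download the sample via the `fake_data=true` endpoint above for the authoritative format:

```shell
# Write a sample teams.csv. Headers are illustrative; verify them against
# the sample returned by GET /plugins/org/teams.csv?fake_data=true.
cat > /tmp/teams.csv <<'EOF'
Id,Name,Alias,ParentId,SortingIndex
1,Engineering,ENG,,0
2,Backend,BE,1,1
EOF

# Upload it (replace 127.0.0.1:8080 with your actual host and port):
# curl --location --request PUT 'http://127.0.0.1:8080/plugins/org/teams.csv' \
#      --form 'file=@"/tmp/teams.csv"'

head -n 1 /tmp/teams.csv
# prints: Id,Name,Alias,ParentId,SortingIndex
```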
+
+## Step 2 - Construct user tables (roster)
+a. Example API request; you can generate sample data.
+
+    i.  GET request: http://127.0.0.1:8080/plugins/org/users.csv?fake_data=true (paste it into the browser to download the corresponding csv file).
+
+    ii. The corresponding curl command:
+    curl --location --request GET 'http://127.0.0.1:8080/plugins/org/users.csv?fake_data=true'
+
+
+b. The actual api request.
+
+    i.  Create the csv file (roster) (Notes: the table header is in capital letters: Id,Email,Name).
+
+    ii. The corresponding curl command:
+    curl --location --request PUT 'http://127.0.0.1:8080/plugins/org/users.csv' --form 'file=@"/xxxpath/users.csv"'
+
+    iii. After successful execution, the users table is generated and the data can be seen in the database table users.
+
+![image](/img/Team/teamflow1.png)
+    
+    iv. The team_users table is also generated; you can see the data in the team_users table.
+
+![image](/img/Team/teamflow2.png)
+
+## Step 3 - Update users if needed
+If there is a problem with the team_users associations or the data in users, just re-upload the users csv via the PUT API (i.e. b in Step 2 above).
+
+## Step 4 - Collect accounts
+The accounts table is populated when users collect data through DevLake. You can see the accounts table information in the database.
+
+![image](/img/Team/teamflow4.png)
+
+## Step 5 - Automatically match existing accounts and users through api requests
+
+a. API request: the name of the plugin is "org"; connectionId exists to stay consistent with other plugins.
+
+```
+curl --location --request POST '127.0.0.1:8080/pipelines' \
+--header 'Content-Type: application/json' \
+--data-raw '{
+    "name": "test",
+    "plan":[
+        [
+            {
+                "plugin": "org",
+                "subtasks":["connectUserAccountsExact"],
+                "options":{
+                    "connectionId":1
+                }
+            }
+        ]
+    ]
+}'
+```
+
+b. After successful execution, the user_accounts table is generated, and you can see the data in table user_accounts.
+
+![image](/img/Team/teamflow5.png)
+
+## Step 6 - Get the user_accounts relationship
+After generating the user_accounts relationship, you can fetch the associated data through the GET method to confirm whether users and accounts are matched correctly and whether the matched accounts are complete.
+
+a. http://127.0.0.1:8080/plugins/org/user_account_mapping.csv (put into the browser to download the file directly)
+
+b. The corresponding curl command:
+```
+curl --location --request GET 'http://127.0.0.1:8080/plugins/org/user_account_mapping.csv'
+```
+
+![image](/img/Team/teamflow6.png)
+
+c. You can also check with SQL; here is a statement for reference only.
+```
+SELECT a.id as account_id, a.email, a.user_name as account_user_name, u.id as user_id, u.name as real_name
+FROM accounts a 
+        join user_accounts ua on a.id = ua.account_id
+        join users u on ua.user_id = u.id
+```
+
+## Step 7 - Update user_accounts if needed
+If the association between a user and an account is not as expected, you can edit the user_account_mapping.csv file. For example, change the UserId of the row with Id=github:GithubAccount:1:1234 in user_account_mapping.csv to 2, then upload the modified file through the API.
+
+a. The corresponding curl command:
+```
+curl --location --request PUT 'http://127.0.0.1:8080/plugins/org/user_account_mapping.csv' --form 'file=@"/xxxpath/user_account_mapping.csv"'
+```
+
+b. You can see that the data in the user_accounts table has been updated.
+
+![image](/img/Team/teamflow7.png)
+
+
+**This concludes the user workflow for the whole team feature.**
diff --git a/versioned_docs/version-v0.11.0/UserManuals/TemporalSetup.md b/versioned_docs/version-v0.11.0/UserManuals/TemporalSetup.md
new file mode 100644
index 0000000..f893a83
--- /dev/null
+++ b/versioned_docs/version-v0.11.0/UserManuals/TemporalSetup.md
@@ -0,0 +1,35 @@
+---
+title: "Temporal Setup"
+sidebar_position: 5
+description: >
+  The steps to install DevLake in Temporal mode.
+---
+
+
+Normally, DevLake executes pipelines on a local machine (we call it `local mode`), which is sufficient most of the time. However, when you have too many pipelines that need to run in parallel, this can become a problem, as the horsepower and throughput of a single machine are limited.
+
+`temporal mode` was added to support distributed pipeline execution: you can fire up arbitrary workers on multiple machines to carry out pipelines in parallel and overcome the limitations of a single machine.
+
+But be careful: many API services like JIRA/GitHub enforce request rate limits. Collecting data in parallel from the same API service with the same identity will most likely hit those limits.
+
+## How it works
+
+1. DevLake Server and Workers connect to the same Temporal server by setting `TEMPORAL_URL`
+2. DevLake Server sends a `pipeline` to the Temporal server, and one of the Workers picks it up and executes it
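
For example, both sides might share a setting like this (`TEMPORAL_URL` comes from this doc; the address and port below are assumptions for illustration):

```
# .env on the DevLake server and on every worker machine
TEMPORAL_URL=temporal-server:7233
```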
+
+
+**IMPORTANT: This feature is in an early stage of development. Please use it with caution**
+
+
+## Temporal Demo
+
+### Requirements
+
+- [Docker](https://docs.docker.com/get-docker)
+- [docker-compose](https://docs.docker.com/compose/install/)
+- [temporalio](https://temporal.io/)
+
+### How to setup
+
+1. Clone and fire up [temporalio](https://temporal.io/) services
+2. Clone this repo, and fire up DevLake with command `docker-compose -f docker-compose-temporal.yml up -d`
\ No newline at end of file
diff --git a/versioned_docs/version-v0.11.0/UserManuals/_category_.json b/versioned_docs/version-v0.11.0/UserManuals/_category_.json
new file mode 100644
index 0000000..b47bdfd
--- /dev/null
+++ b/versioned_docs/version-v0.11.0/UserManuals/_category_.json
@@ -0,0 +1,4 @@
+{
+  "label": "User Manuals",
+  "position": 3
+}
diff --git a/versioned_sidebars/version-v0.11.0-sidebars.json b/versioned_sidebars/version-v0.11.0-sidebars.json
new file mode 100644
index 0000000..39332bf
--- /dev/null
+++ b/versioned_sidebars/version-v0.11.0-sidebars.json
@@ -0,0 +1,8 @@
+{
+  "docsSidebar": [
+    {
+      "type": "autogenerated",
+      "dirName": "."
+    }
+  ]
+}
diff --git a/versions.json b/versions.json
new file mode 100644
index 0000000..909d780
--- /dev/null
+++ b/versions.json
@@ -0,0 +1,3 @@
+[
+  "v0.11.0"
+]


[incubator-devlake-website] 02/06: fix: fixed some links

Posted by zk...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

zky pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/incubator-devlake-website.git

commit b399667decfd4ad231874b03e0c653a4a2644816
Author: yumengwang03 <yu...@merico.dev>
AuthorDate: Wed Jul 13 23:12:13 2022 +0800

    fix: fixed some links
---
 docs/Plugins/gitextractor.md                           | 4 ++--
 versioned_docs/version-v0.11.0/Plugins/GitExtractor.md | 4 ++--
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/docs/Plugins/gitextractor.md b/docs/Plugins/gitextractor.md
index d154e9e..b40cede 100644
--- a/docs/Plugins/gitextractor.md
+++ b/docs/Plugins/gitextractor.md
@@ -12,7 +12,7 @@ This plugin extracts commits and references from a remote or local git repositor
 1. Use the Git repo extractor to retrieve data about commits and branches from your repository.
 2. Use the GitHub plugin to retrieve data about Github issues and PRs from your repository.
 NOTE: you can run only one issue collection stage as described in the Github Plugin README.
-3. Use the [RefDiff](./RefDiff.md#development) plugin to calculate version diff, which will be stored in `refs_commits_diffs` table.
+3. Use the [RefDiff](RefDiff.md) plugin to calculate version diff, which will be stored in `refs_commits_diffs` table.
 
 ## Sample Request
 
@@ -58,6 +58,6 @@ For more options (e.g., saving to a csv file instead of a db), please read `plug
 ## Development
 
 This plugin depends on `libgit2`, you need to install version 1.3.0 in order to run and debug this plugin on your local
-machine. [Click here](./RefDiff.md#development) for a brief guide.
+machine. [Click here](RefDiff.md#Development) for a brief guide.
 
 <br/><br/><br/>
diff --git a/versioned_docs/version-v0.11.0/Plugins/GitExtractor.md b/versioned_docs/version-v0.11.0/Plugins/GitExtractor.md
index d154e9e..b40cede 100644
--- a/versioned_docs/version-v0.11.0/Plugins/GitExtractor.md
+++ b/versioned_docs/version-v0.11.0/Plugins/GitExtractor.md
@@ -12,7 +12,7 @@ This plugin extracts commits and references from a remote or local git repositor
 1. Use the Git repo extractor to retrieve data about commits and branches from your repository.
 2. Use the GitHub plugin to retrieve data about Github issues and PRs from your repository.
 NOTE: you can run only one issue collection stage as described in the Github Plugin README.
-3. Use the [RefDiff](./RefDiff.md#development) plugin to calculate version diff, which will be stored in `refs_commits_diffs` table.
+3. Use the [RefDiff](RefDiff.md) plugin to calculate version diff, which will be stored in `refs_commits_diffs` table.
 
 ## Sample Request
 
@@ -58,6 +58,6 @@ For more options (e.g., saving to a csv file instead of a db), please read `plug
 ## Development
 
 This plugin depends on `libgit2`, you need to install version 1.3.0 in order to run and debug this plugin on your local
-machine. [Click here](./RefDiff.md#development) for a brief guide.
+machine. [Click here](RefDiff.md#Development) for a brief guide.
 
 <br/><br/><br/>