Posted to commits@devlake.apache.org by zh...@apache.org on 2022/08/26 15:03:55 UTC

[incubator-devlake-website] branch main updated: docs: froze v0.13 docs (#181)

This is an automated email from the ASF dual-hosted git repository.

zhangliang2022 pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/incubator-devlake-website.git


The following commit(s) were added to refs/heads/main by this push:
     new cef16f10 docs: froze v0.13 docs (#181)
cef16f10 is described below

commit cef16f1099a2f5f515e1cd3860b4e09f5be80ddb
Author: Yumeng Wang <yu...@merico.dev>
AuthorDate: Fri Aug 26 23:03:52 2022 +0800

    docs: froze v0.13 docs (#181)
---
 .../version-v0.13/01-Overview/02-WhatIsDevLake.md  |  30 +
 .../version-v0.13/01-Overview/03-Architecture.md   |  29 +
 .../version-v0.13/01-Overview/04-Roadmap.md        |  36 ++
 .../version-v0.13/01-Overview/_category_.json      |   4 +
 .../version-v0.13/02-Quick Start/01-LocalSetup.md  |  81 +++
 .../02-Quick Start/02-KubernetesSetup.md           |   9 +
 .../02-Quick Start/03-DeveloperSetup.md            | 123 +++++
 .../version-v0.13/02-Quick Start/_category_.json   |   4 +
 .../version-v0.13/03-Features.md                   |  20 +
 .../version-v0.13/04-EngineeringMetrics.md         | 197 +++++++
 .../DeveloperManuals/E2E-Test-Guide.md             | 202 +++++++
 .../DeveloperManuals/PluginImplementation.md       | 337 ++++++++++++
 .../DataModels/DevLakeDomainLayerSchema.md         | 612 +++++++++++++++++++++
 .../version-v0.13/DataModels/_category_.json       |   8 +
 .../version-v0.13/DeveloperManuals/DBMigration.md  |  53 ++
 .../version-v0.13/DeveloperManuals/Dal.md          | 173 ++++++
 .../DeveloperManuals/DeveloperSetup.md             | 131 +++++
 .../DeveloperManuals/E2E-Test-Guide.md             | 212 +++++++
 .../DeveloperManuals/Notifications.md              |  32 ++
 .../DeveloperManuals/PluginImplementation.md       | 339 ++++++++++++
 .../version-v0.13/DeveloperManuals/Release-SOP.md  | 111 ++++
 .../DeveloperManuals/TagNamingConventions.md       |  13 +
 .../version-v0.13/DeveloperManuals/_category_.json |   8 +
 .../GettingStarted/DockerComposeSetup.md           |  37 ++
 .../version-v0.13/GettingStarted/HelmSetup.md      | 116 ++++
 .../GettingStarted/KubernetesSetup.md              |  51 ++
 .../version-v0.13/GettingStarted/TemporalSetup.md  |  35 ++
 .../version-v0.13/GettingStarted/_category_.json   |   8 +
 versioned_docs/version-v0.13/Glossary.md           | 103 ++++
 .../LiveDemo/AverageRequirementLeadTime.md         |   9 +
 .../version-v0.13/LiveDemo/CommitCountByAuthor.md  |   9 +
 .../version-v0.13/LiveDemo/DetailedBugInfo.md      |   9 +
 .../version-v0.13/LiveDemo/GitHubBasic.md          |   9 +
 .../GitHubReleaseQualityAndContributionAnalysis.md |   9 +
 versioned_docs/version-v0.13/LiveDemo/Jenkins.md   |   9 +
 .../version-v0.13/LiveDemo/WeeklyBugRetro.md       |   9 +
 .../version-v0.13/LiveDemo/_category_.json         |   8 +
 .../version-v0.13/Metrics/AddedLinesOfCode.md      |  33 ++
 versioned_docs/version-v0.13/Metrics/BugAge.md     |  35 ++
 .../Metrics/BugCountPer1kLinesOfCode.md            |  40 ++
 versioned_docs/version-v0.13/Metrics/BuildCount.md |  32 ++
 .../version-v0.13/Metrics/BuildDuration.md         |  32 ++
 .../version-v0.13/Metrics/BuildSuccessRate.md      |  32 ++
 versioned_docs/version-v0.13/Metrics/CFR.md        |  53 ++
 versioned_docs/version-v0.13/Metrics/CodingTime.md |  32 ++
 .../version-v0.13/Metrics/CommitAuthorCount.md     |  32 ++
 .../version-v0.13/Metrics/CommitCount.md           |  55 ++
 versioned_docs/version-v0.13/Metrics/CycleTime.md  |  40 ++
 .../version-v0.13/Metrics/DeletedLinesOfCode.md    |  32 ++
 versioned_docs/version-v0.13/Metrics/DeployTime.md |  30 +
 .../version-v0.13/Metrics/DeploymentFrequency.md   |  45 ++
 .../version-v0.13/Metrics/IncidentAge.md           |  34 ++
 .../Metrics/IncidentCountPer1kLinesOfCode.md       |  39 ++
 .../version-v0.13/Metrics/LeadTimeForChanges.md    |  56 ++
 versioned_docs/version-v0.13/Metrics/MTTR.md       |  56 ++
 versioned_docs/version-v0.13/Metrics/MergeRate.md  |  40 ++
 versioned_docs/version-v0.13/Metrics/PRCount.md    |  39 ++
 versioned_docs/version-v0.13/Metrics/PRSize.md     |  35 ++
 versioned_docs/version-v0.13/Metrics/PickupTime.md |  34 ++
 .../version-v0.13/Metrics/RequirementCount.md      |  68 +++
 .../Metrics/RequirementDeliveryRate.md             |  36 ++
 .../Metrics/RequirementGranularity.md              |  34 ++
 .../version-v0.13/Metrics/RequirementLeadTime.md   |  36 ++
 .../version-v0.13/Metrics/ReviewDepth.md           |  34 ++
 versioned_docs/version-v0.13/Metrics/ReviewTime.md |  39 ++
 .../version-v0.13/Metrics/TimeToMerge.md           |  36 ++
 .../version-v0.13/Metrics/_category_.json          |   8 +
 .../version-v0.13/Overview/Architecture.md         |  39 ++
 .../version-v0.13/Overview/Introduction.md         |  39 ++
 versioned_docs/version-v0.13/Overview/Roadmap.md   |  33 ++
 .../version-v0.13/Overview/_category_.json         |   8 +
 .../version-v0.13/Plugins/_category_.json          |   8 +
 versioned_docs/version-v0.13/Plugins/dbt.md        |  67 +++
 versioned_docs/version-v0.13/Plugins/feishu.md     |  71 +++
 versioned_docs/version-v0.13/Plugins/gitee.md      | 106 ++++
 .../version-v0.13/Plugins/gitextractor.md          |  64 +++
 .../Plugins/github-connection-in-config-ui.png     | Bin 0 -> 51159 bytes
 versioned_docs/version-v0.13/Plugins/github.md     |  67 +++
 .../Plugins/gitlab-connection-in-config-ui.png     | Bin 0 -> 66616 bytes
 versioned_docs/version-v0.13/Plugins/gitlab.md     |  45 ++
 versioned_docs/version-v0.13/Plugins/jenkins.md    |  47 ++
 .../Plugins/jira-connection-config-ui.png          | Bin 0 -> 76052 bytes
 .../Plugins/jira-more-setting-in-config-ui.png     | Bin 0 -> 300823 bytes
 versioned_docs/version-v0.13/Plugins/jira.md       | 196 +++++++
 versioned_docs/version-v0.13/Plugins/refdiff.md    | 139 +++++
 versioned_docs/version-v0.13/Plugins/tapd.md       |  16 +
 .../version-v0.13/SupportedDataSources.md          |  59 ++
 .../UserManuals/ConfigUI/AdvancedMode.md           |  97 ++++
 .../version-v0.13/UserManuals/ConfigUI/GitHub.md   |  87 +++
 .../version-v0.13/UserManuals/ConfigUI/GitLab.md   |  53 ++
 .../version-v0.13/UserManuals/ConfigUI/Jenkins.md  |  33 ++
 .../version-v0.13/UserManuals/ConfigUI/Jira.md     |  67 +++
 .../version-v0.13/UserManuals/ConfigUI/Tutorial.md |  68 +++
 .../UserManuals/ConfigUI/_category_.json           |   4 +
 .../UserManuals/Dashboards/GrafanaUserGuide.md     | 120 ++++
 .../UserManuals/Dashboards/_category_.json         |   4 +
 .../version-v0.13/UserManuals/TeamConfiguration.md | 188 +++++++
 .../version-v0.13/UserManuals/_category_.json      |   8 +
 versioned_sidebars/version-v0.13-sidebars.json     |   8 +
 versions.json                                      |   1 +
 100 files changed, 6074 insertions(+)

diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v0.13/01-Overview/02-WhatIsDevLake.md b/i18n/zh/docusaurus-plugin-content-docs/version-v0.13/01-Overview/02-WhatIsDevLake.md
new file mode 100755
index 00000000..9cf249c7
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v0.13/01-Overview/02-WhatIsDevLake.md
@@ -0,0 +1,30 @@
+---
+title: "What is DevLake?"
+linkTitle: "What is DevLake?"
+tags: []
+categories: []
+weight: 1
+description: >
+  General introduction of DevLake
+---
+
+
+DevLake brings your DevOps data into one practical, customized, extensible view. Ingest, analyze, and visualize data from an ever-growing list of developer tools, with our open source product.
+
+DevLake is designed for developer teams looking to make better sense of their development process and to bring a more data-driven approach to their own practices. You can ask DevLake many questions regarding your development process. Just connect and query.
+
+<a href="https://app-259373083972538368-3002.ars.teamcode.com/d/0Rjxknc7z/demo-homepage?orgId=1">See demo</a>. Username/password:test/test. The demo is based on the data from this repo, merico-dev/lake.
+
+
+
+<div align="left">
+<img src="https://user-images.githubusercontent.com/14050754/145056261-ceaf7044-f5c5-420f-80ca-54e56eb8e2a7.png" width="100%" alt="User Flow" style={{borderRadius: '15px' }}/>
+<p align="center">User Flow</p>
+</div>
+<br/>
+
+### What can be accomplished with DevLake?
+1. Collect DevOps data across the entire SDLC process and connect data silos.
+2. A standard <a href="https://github.com/merico-dev/lake/wiki/DataModel.Domain-layer-schema">data model</a> and out-of-the-box <a href="https://github.com/merico-dev/lake/wiki/Metric-Cheatsheet">metrics</a> for software engineering.
+3. Flexible <a href="https://github.com/merico-dev/lake/blob/main/ARCHITECTURE.md">framework</a> for data collection and ETL, supporting customized analysis.
+<br/><br/><br/>
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v0.13/01-Overview/03-Architecture.md b/i18n/zh/docusaurus-plugin-content-docs/version-v0.13/01-Overview/03-Architecture.md
new file mode 100755
index 00000000..db3cdedc
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v0.13/01-Overview/03-Architecture.md
@@ -0,0 +1,29 @@
+---
+title: "Architecture"
+linkTitle: "Architecture"
+tags: []
+categories: []
+weight: 2
+description: >
+  Understand the architecture of DevLake.
+---
+
+
+![devlake-architecture](https://user-images.githubusercontent.com/14050754/143292041-a4839bf1-ca46-462d-96da-2381c8aa0fed.png)
+<p align="center">Architecture Diagram</p>
+
+## Stack (from low to high)
+
+1. config
+2. logger
+3. models
+4. plugins
+5. services
+6. api / cli
+
+## Rules
+
+1. A higher layer may call a lower layer, but not the other way around
+2. Whenever a lower layer needs something from a higher layer, an interface should be introduced for decoupling
+3. Components should be initialized in low-to-high order during bootstrapping
+<br/><br/><br/>
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v0.13/01-Overview/04-Roadmap.md b/i18n/zh/docusaurus-plugin-content-docs/version-v0.13/01-Overview/04-Roadmap.md
new file mode 100755
index 00000000..39e229d9
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v0.13/01-Overview/04-Roadmap.md
@@ -0,0 +1,36 @@
+---
+title: "Roadmap"
+linkTitle: "Roadmap"
+tags: []
+categories: []
+weight: 3
+description: >
+  The goals and roadmap for DevLake in 2022.
+---
+
+
+## Goals
+1. Moving to Apache Incubator and making DevLake a graduation-ready project.
+2. Explore and implement 3 typical use scenarios to help certain engineering teams and developers:
+   - Observation of open-source project contribution and quality
+   - DORA metrics for the DevOps team
+   - SDLC workflow monitoring and improvement
+3. Better UX for end-users and contributors.
+
+
+## DevLake 2022 Roadmap
+DevLake is currently under rapid development. This page describes the project’s public roadmap, the result of an ongoing collaboration between the core maintainers and the broader DevLake community.<br/><br/>
+This roadmap is broken down by the goals in the last section.
+
+
+| Category | Features|
+| --- | --- |
+| More data sources across different [DevOps domains](https://github.com/merico-dev/lake/wiki/DevOps-Domain-Definition)| 1. **Issue/Task Management - Jira server** <br/> 2. **Issue/Task Management - Jira data center** <br/> 3. Issue/Task Management - GitLab Issues <br/> 4. Issue/Task Management - Trello <br/> 5. **Issue/Task Management - TAPD** <br/> 6. Issue/Task Management - Teambition <br/> 7. Issue/Task Management - Trello <br/> 8. **Source Code Management - GitLab on-premise** <br/> [...]
+| More comprehensive and flexible [engineering data model](https://github.com/merico-dev/lake/issues/700) | 1. complete and polish standard data models for different [DevOps domains](https://github.com/merico-dev/lake/wiki/DevOps-Domain-Definition) <br/> 2. allow users to modify standard tables <br/> 3. allow users to create new tables <br/> 4. allow users to easily define ETL rules <br/> |
+| Better UX | 1. improve config-UI design for better onboard experience <br/> 2. improve data collection speed for Github and other plugins with strict API rate limit <br/> 3. build a website to present well-organized documentation to DevLake users and contributors <br/> |
+
+
+## How to Influence Roadmap
+A roadmap is only useful when it captures real user needs. We are glad to hear from you if you have specific use cases, feedback, or ideas. You can submit an issue to let us know!
+Also, if you plan to work (or are already working) on a new or existing feature, tell us, so that we can update the roadmap accordingly. We are happy to share knowledge and context to help your feature land successfully.
+<br/><br/><br/>
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v0.13/01-Overview/_category_.json b/i18n/zh/docusaurus-plugin-content-docs/version-v0.13/01-Overview/_category_.json
new file mode 100644
index 00000000..e224ed81
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v0.13/01-Overview/_category_.json
@@ -0,0 +1,4 @@
+{
+  "label": "Overview",
+  "position": 1
+}
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v0.13/02-Quick Start/01-LocalSetup.md b/i18n/zh/docusaurus-plugin-content-docs/version-v0.13/02-Quick Start/01-LocalSetup.md
new file mode 100644
index 00000000..98ed3bb2
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v0.13/02-Quick Start/01-LocalSetup.md	
@@ -0,0 +1,81 @@
+---
+title: "Local Setup"
+linkTitle: "Local Setup"
+tags: []
+categories: []
+weight: 1
+description: >
+  The steps to install DevLake locally.
+---
+
+
+- If you only plan to run the product locally, this is the **ONLY** section you should need.
+- Commands written `like this` are to be run in your terminal.
+
+#### Required Packages to Install<a id="user-setup-requirements"></a>
+
+- [Docker](https://docs.docker.com/get-docker)
+- [docker-compose](https://docs.docker.com/compose/install/)
+
+NOTE: After installing Docker, you may need to run the Docker application and restart your terminal.
+
+#### Commands to run in your terminal<a id="user-setup-commands"></a>
+
+**IMPORTANT: DevLake doesn't support database schema migration yet. Upgrading an existing instance is likely to break it, so we recommend that you deploy a new instance instead.**
+
+1. Download `docker-compose.yml` and `env.example` from [latest release page](https://github.com/merico-dev/lake/releases/latest) into a folder.
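+
+   For example, from a terminal (a sketch; it assumes the release assets keep these file names):
+
+   ```sh
+   curl -LO https://github.com/merico-dev/lake/releases/latest/download/docker-compose.yml
+   curl -LO https://github.com/merico-dev/lake/releases/latest/download/env.example
+   ```
+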
+2. Rename `env.example` to `.env`. For Mac/Linux users, please run `mv env.example .env` in the terminal.
+3. Start Docker on your machine, then run `docker-compose up -d` to start the services.
+4. Visit `localhost:4000` to set up configuration files.
+   >- Navigate to desired plugins on the Integrations page
+   >- Please reference the following for more details on how to configure each one:<br/>
+      - **jira**
+      - **gitlab**
+      - **jenkins**
+      - **github**
+   >- Submit the form to update the values by clicking on the **Save Connection** button on each form page
+   >- `devlake` takes a while to fully boot up. If `config-ui` complains about the API being unreachable, please wait a few seconds and try refreshing the page.
+
+
+5. Visit `localhost:4000/pipelines/create` to RUN a Pipeline and trigger data collection.
+
+
+   Pipeline runs can be initiated from the new "Create Run" interface. Simply enable the **Data Source Providers** you wish to collect from, and specify the data you want to collect, for instance, the **Project ID** for GitLab and the **Repository Name** for GitHub.
+
+   Once a valid pipeline configuration has been created, press **Create Run** to start/run the pipeline.
+   After the pipeline starts, you will be automatically redirected to the **Pipeline Activity** screen to monitor collection activity.
+
+   **Pipelines** can be accessed from the main menu of the config-ui.
+
+   - Manage All Pipelines: `http://localhost:4000/pipelines`
+   - Create Pipeline RUN: `http://localhost:4000/pipelines/create`
+   - Track Pipeline Activity: `http://localhost:4000/pipelines/activity/[RUN_ID]`
+
+   For advanced use cases and complex pipelines, please use the Raw JSON API to manually initiate a run using **cURL** or a graphical API tool such as **Postman**. `POST` the following request to the DevLake API endpoint.
+
+    ```json
+    [
+        [
+            {
+                "plugin": "github",
+                "options": {
+                    "repo": "lake",
+                    "owner": "merico-dev"
+                }
+            }
+        ]
+    ]
+    ```
+
+   Please refer to this wiki [How to trigger data collection](https://github.com/merico-dev/lake/wiki/How-to-use-the-triggers-page).
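+
+   For instance, if the JSON above is saved as `pipeline.json`, a cURL call might look like this (a sketch only; the port and path assume a default docker-compose deployment where the DevLake API listens on 8080, so check your own setup):
+
+    ```sh
+    curl -s -X POST http://localhost:8080/pipelines -H 'Content-Type: application/json' -d @pipeline.json
+    ```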
+
+6. Click the *View Dashboards* button in the top left when done, or visit `localhost:3002` (username: `admin`, password: `admin`).
+
+   We use <a href="https://grafana.com/" target="_blank">Grafana</a> as a visualization tool to build charts for the <a href="https://github.com/merico-dev/lake/wiki/DataModel.Domain-layer-schema">data stored in our database</a>. Using SQL queries, we can add panels to build, save, and edit customized dashboards.
+
+   All the details on provisioning and customizing a dashboard can be found in the **Grafana Doc**.
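+
+   As a small illustration, a Grafana panel could be backed by a query like the following (a sketch; it assumes the domain-layer `commits` table and its `author_name` column described in the domain layer schema linked above):
+
+    ```sql
+    -- commit count per author, e.g. for a bar-chart panel
+    SELECT author_name, COUNT(*) AS commit_count
+    FROM commits
+    GROUP BY author_name
+    ORDER BY commit_count DESC
+    LIMIT 20;
+    ```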
+
+#### Setup cron job
+
+To synchronize data periodically, we provide **lake-cli** for sending data collection requests, along with **a cron job** to trigger the CLI tool on a schedule.
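+
+For illustration, a crontab entry that re-posts a saved pipeline definition every night might look like this (a sketch; the endpoint and payload follow the same assumptions as the cURL example above):
+
+```sh
+# trigger data collection every day at 00:00
+0 0 * * * curl -s -X POST http://localhost:8080/pipelines -H 'Content-Type: application/json' -d @/path/to/pipeline.json
+```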
+<br/><br/><br/>
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v0.13/02-Quick Start/02-KubernetesSetup.md b/i18n/zh/docusaurus-plugin-content-docs/version-v0.13/02-Quick Start/02-KubernetesSetup.md
new file mode 100644
index 00000000..4327e319
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v0.13/02-Quick Start/02-KubernetesSetup.md	
@@ -0,0 +1,9 @@
+---
+title: "Kubernetes Setup"
+linkTitle: "Kubernetes Setup"
+tags: []
+categories: []
+weight: 2
+description: >
+  The steps to install DevLake in Kubernetes.
+---
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v0.13/02-Quick Start/03-DeveloperSetup.md b/i18n/zh/docusaurus-plugin-content-docs/version-v0.13/02-Quick Start/03-DeveloperSetup.md
new file mode 100644
index 00000000..b1696d09
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v0.13/02-Quick Start/03-DeveloperSetup.md	
@@ -0,0 +1,123 @@
+---
+title: "Developer Setup"
+linkTitle: "Developer Setup"
+tags: []
+categories: []
+weight: 3
+description: >
+  The steps to install DevLake in developer mode.
+---
+
+
+
+#### Requirements
+
+- <a href="https://docs.docker.com/get-docker" target="_blank">Docker</a>
+- <a href="https://golang.org/doc/install" target="_blank">Golang v1.17+</a>
+- Make
+  - Mac (Already installed)
+  - Windows: [Download](http://gnuwin32.sourceforge.net/packages/make.htm)
+  - Ubuntu: `sudo apt-get install build-essential`
+
+#### How to setup dev environment
+1. Navigate to where you would like to install this project and clone the repository:
+
+   ```sh
+   git clone https://github.com/merico-dev/lake.git
+   cd lake
+   ```
+
+2. Install dependencies for plugins:
+
+   - **RefDiff**
+
+3. Install Go packages
+
+    ```sh
+	go get
+    ```
+
+4. Copy the sample config file to new local file:
+
+    ```sh
+    cp .env.example .env
+    ```
+
+5. Update the following variables in the file `.env`:
+
+    * `DB_URL`: Replace `mysql:3306` with `127.0.0.1:3306`
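+
+    For example, the resulting line might look like this (a sketch; the credentials and database name come from your `.env.example` and may differ):
+
+    ```sh
+    DB_URL=mysql://merico:merico@127.0.0.1:3306/lake?charset=utf8mb4&parseTime=True
+    ```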
+
+6. Start the MySQL and Grafana containers:
+
+    > Make sure the Docker daemon is running before this step.
+
+    ```sh
+    docker-compose up -d mysql grafana
+    ```
+
+7. Run lake and config UI in dev mode in two separate terminals:
+
+    ```sh
+    # run lake
+    make dev
+    # run config UI
+    make configure-dev
+    ```
+
+8. Visit the config UI at `localhost:4000` to configure data sources.
+   >- Navigate to the desired plugin pages on the Integrations page
+   >- You will need to enter the required information for the plugins you intend to use.
+   >- Please reference the following for more details on how to configure each one:
+   **jira**
+   **gitlab**
+   **jenkins**
+   **github**
+
+   >- Submit the form to update the values by clicking on the **Save Connection** button on each form page
+
+9. Visit `localhost:4000/pipelines/create` to RUN a Pipeline and trigger data collection.
+
+
+   Pipeline runs can be initiated from the new "Create Run" interface. Simply enable the **Data Source Providers** you wish to collect from, and specify the data you want to collect, for instance, the **Project ID** for GitLab and the **Repository Name** for GitHub.
+
+   Once a valid pipeline configuration has been created, press **Create Run** to start/run the pipeline.
+   After the pipeline starts, you will be automatically redirected to the **Pipeline Activity** screen to monitor collection activity.
+
+   **Pipelines** can be accessed from the main menu of the config-ui.
+
+   - Manage All Pipelines: `http://localhost:4000/pipelines`
+   - Create Pipeline RUN: `http://localhost:4000/pipelines/create`
+   - Track Pipeline Activity: `http://localhost:4000/pipelines/activity/[RUN_ID]`
+
+   For advanced use cases and complex pipelines, please use the Raw JSON API to manually initiate a run using **cURL** or a graphical API tool such as **Postman**. `POST` the following request to the DevLake API endpoint.
+
+    ```json
+    [
+        [
+            {
+                "plugin": "github",
+                "options": {
+                    "repo": "lake",
+                    "owner": "merico-dev"
+                }
+            }
+        ]
+    ]
+    ```
+
+   Please refer to this wiki [How to trigger data collection](https://github.com/merico-dev/lake/wiki/How-to-use-the-triggers-page).
+
+
+10. Click the *View Dashboards* button in the top left when done, or visit `localhost:3002` (username: `admin`, password: `admin`).
+
+   We use <a href="https://grafana.com/" target="_blank">Grafana</a> as a visualization tool to build charts for the <a href="https://github.com/merico-dev/lake/wiki/DataModel.Domain-layer-schema">data stored in our database</a>. Using SQL queries, we can add panels to build, save, and edit customized dashboards.
+
+   All the details on provisioning and customizing a dashboard can be found in the **Grafana Doc**.
+
+
+11. (Optional) To run the tests:
+
+    ```sh
+    make test
+    ```
+
+<br/><br/><br/><br/>
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v0.13/02-Quick Start/_category_.json b/i18n/zh/docusaurus-plugin-content-docs/version-v0.13/02-Quick Start/_category_.json
new file mode 100644
index 00000000..9620a053
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v0.13/02-Quick Start/_category_.json	
@@ -0,0 +1,4 @@
+{
+  "label": "Quick  Start",
+  "position": 2
+}
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v0.13/03-Features.md b/i18n/zh/docusaurus-plugin-content-docs/version-v0.13/03-Features.md
new file mode 100644
index 00000000..2b2bb24b
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v0.13/03-Features.md
@@ -0,0 +1,20 @@
+---
+sidebar_position: 03
+title: "Features"
+linkTitle: "Features"
+tags: []
+categories: []
+weight: 30000
+description: >
+  Features of the latest version of DevLake.
+---
+
+
+1. Collect data from [mainstream DevOps tools](https://github.com/merico-dev/lake#project-roadmap), including Jira (Cloud), Jira Server v8+, Git, GitLab, GitHub, Jenkins etc., supported by [plugins](https://github.com/merico-dev/lake/blob/main/ARCHITECTURE.md).
+2. Receive data through Push API.
+3. Standardize DevOps data based on [domain layer schema](https://github.com/merico-dev/lake/wiki/DataModel.Domain-layer-schema). Support 20+ [built-in engineering metrics](https://github.com/merico-dev/lake/wiki/Metric-Cheatsheet) to observe productivity, quality and delivery capability.
+4. Connect "commit" entity with "issue" entity, and generate composite metrics such as `Bugs Count per 1k Lines of Code`.
+5. Identify new commits with the [RefDiff](https://github.com/merico-dev/lake/tree/main/plugins/refdiff#refdiff) plugin, and analyze the productivity and quality of each version.
+6. Flexible dashboards to support data visualization and queries, based on Grafana.
+
+<br/><br/><br/>
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v0.13/04-EngineeringMetrics.md b/i18n/zh/docusaurus-plugin-content-docs/version-v0.13/04-EngineeringMetrics.md
new file mode 100644
index 00000000..3b8a08b2
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v0.13/04-EngineeringMetrics.md
@@ -0,0 +1,197 @@
+---
+sidebar_position: 04
+title: "Engineering Metrics"
+linkTitle: "Engineering Metrics"
+tags: []
+categories: []
+weight: 40000
+description: >
+  The definition, values and data required for the 20+ engineering metrics supported by DevLake.
+---
+
+<table>
+    <tr>
+        <th><b>Category</b></th>
+        <th><b>Metric Name</b></th>
+        <th><b>Definition</b></th>
+        <th><b>Data Required</b></th>
+        <th style={{width:'70%'}}><b>Use Scenarios and Recommended Practices</b></th>
+        <th><b>Value&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</b></th>
+    </tr>
+    <tr>
+        <td rowspan="10">Delivery Velocity</td>
+        <td>Requirement Count</td>
+        <td>Number of issues in type "Requirement"</td>
+        <td>Issue/Task Management entities: <a href="https://github.com/merico-dev/lake/blob/main/plugins/jira/README.md">Jira issues</a>, <a href="https://github.com/merico-dev/lake/blob/main/plugins/github/README.md">GitHub issues</a>, etc</td>
+        <td rowspan="2">
+1. Analyze the number of requirements and delivery rate of different time cycles to find the stability and trend of the development process.
+<br/>2. Analyze and compare the number of requirements delivered and delivery rate of each project/team, and compare the scale of requirements of different projects.
+<br/>3. Based on historical data, establish a baseline of the delivery capacity of a single iteration (optimistic, probable and pessimistic values) to provide a reference for iteration estimation.
+<br/>4. Drill down to analyze the number and percentage of requirements in different phases of SDLC. Analyze rationality and identify the requirements stuck in the backlog.</td>
+        <td rowspan="2">1. Based on historical data, establish a baseline of the delivery capacity of a single iteration to improve the organization and planning of R&D resources.
+<br/>2. Evaluate whether the delivery capacity matches the business phase and demand scale. Identify key bottlenecks and reasonably allocate resources.</td>
+    </tr>
+    <tr>
+        <td>Requirement Delivery Rate</td>
+        <td>Ratio of delivered requirements to all requirements</td>
+        <td>Issue/Task Management entities: <a href="https://github.com/merico-dev/lake/blob/main/plugins/jira/README.md">Jira issues</a>, <a href="https://github.com/merico-dev/lake/blob/main/plugins/github/README.md">GitHub issues</a>, etc</td>
+    </tr>
+    <tr>
+        <td>Requirement Lead Time</td>
+        <td>Lead time of issues with type "Requirement"</td>
+        <td>Issue/Task Management entities: <a href="https://github.com/merico-dev/lake/blob/main/plugins/jira/README.md">Jira issues</a>, <a href="https://github.com/merico-dev/lake/blob/main/plugins/github/README.md">GitHub issues</a>, etc</td>
+        <td>
+1. Analyze the trend of requirement lead time to observe if it has improved over time.
+<br/>2. Analyze and compare the requirement lead time of each project/team to identify key projects with abnormal lead time.
+<br/>3. Drill down to analyze a requirement's staying time in different phases of SDLC. Analyze the bottleneck of delivery velocity and improve the workflow.</td>
+        <td>1. Analyze key projects and critical points, identify good/to-be-improved practices that affect requirement lead time, and reduce the risk of delays
+<br/>2. Focus on the end-to-end velocity of value delivery process; coordinate different parts of R&D to avoid efficiency shafts; make targeted improvements to bottlenecks.</td>
+    </tr>
+    <tr>
+        <td>Requirement Granularity</td>
+        <td>Number of story points associated with an issue</td>
+        <td>Issue/Task Management entities: <a href="https://github.com/merico-dev/lake/blob/main/plugins/jira/README.md">Jira issues</a>, <a href="https://github.com/merico-dev/lake/blob/main/plugins/github/README.md">GitHub issues</a>, etc</td>
+        <td>
+1. Analyze the story points/requirement lead time of requirements to evaluate whether the ticket size, i.e. the requirement complexity, is optimal.
+<br/>2. Compare the estimated requirement granularity with the actual situation and evaluate whether the difference is reasonable by combining more microscopic workload metrics (e.g. lines of code/code equivalents)</td>
+        <td>1. Promote product teams to split requirements carefully, improve requirements quality, help developers understand requirements clearly, deliver efficiently and with high quality, and improve the project management capability of the team.
+<br/>2. Establish a data-supported workload estimation model to help R&D teams calibrate their estimation methods and more accurately assess the granularity of requirements, which is useful to achieve better issue planning in project management.</td>
+    </tr>
+    <tr>
+        <td>Commit Count</td>
+        <td>Number of Commits</td>
+        <td>Source Code Management entities: <a href="https://github.com/merico-dev/lake/blob/main/plugins/gitextractor/README.md">Git</a>/<a href="https://github.com/merico-dev/lake/blob/main/plugins/github/README.md">GitHub</a>/<a href="https://github.com/merico-dev/lake/blob/main/plugins/gitlab/README.md">GitLab</a> commits</td>
+        <td>
+1. Identify the main reasons for the unusual number of commits and the possible impact on the number of commits through comparison
+<br/>2. Evaluate whether the number of commits is reasonable in conjunction with more microscopic workload metrics (e.g. lines of code/code equivalents)</td>
+        <td>1. Identify potential bottlenecks that may affect output
+<br/>2. Encourage R&D practices of small step submissions and develop excellent coding habits</td>
+    </tr>
+    <tr>
+        <td>Added Lines of Code</td>
+        <td>Accumulated number of added lines of code</td>
+        <td>Source Code Management entities: <a href="https://github.com/merico-dev/lake/blob/main/plugins/gitextractor/README.md">Git</a>/<a href="https://github.com/merico-dev/lake/blob/main/plugins/github/README.md">GitHub</a>/<a href="https://github.com/merico-dev/lake/blob/main/plugins/gitlab/README.md">GitLab</a> commits</td>
+        <td rowspan="2">
+1. From the project/team dimension, observe the accumulated change in Added lines to assess the team activity and code growth rate
+<br/>2. From version cycle dimension, observe the active time distribution of code changes, and evaluate the effectiveness of project development model.
+<br/>3. From the member dimension, observe the trend and stability of code output of each member, and identify the key points that affect code output by comparison.</td>
+        <td rowspan="2">1. identify potential bottlenecks that may affect the output
+<br/>2. Encourage the team to implement a development model that matches the business requirements; develop excellent coding habits</td>
+    </tr>
+    <tr>
+        <td>Deleted Lines of Code</td>
+        <td>Accumulated number of deleted lines of code</td>
+        <td>Source Code Management entities: <a href="https://github.com/merico-dev/lake/blob/main/plugins/gitextractor/README.md">Git</a>/<a href="https://github.com/merico-dev/lake/blob/main/plugins/github/README.md">GitHub</a>/<a href="https://github.com/merico-dev/lake/blob/main/plugins/gitlab/README.md">GitLab</a> commits</td>
+    </tr>
+    <tr>
+        <td>Pull Request Review Time</td>
+        <td>Time from Pull/Merge created time until merged</td>
+        <td>Source Code Management entities: <a href="https://github.com/merico-dev/lake/blob/main/plugins/github/README.md">GitHub</a> PRs, <a href="https://github.com/merico-dev/lake/blob/main/plugins/gitlab/README.md">GitLab</a> MRs, etc</td>
+        <td>
+1. Observe the mean and distribution of code review time from the project/team/individual dimension to assess the rationality of the review time</td>
+        <td>1. Take inventory of project/team code review resources to avoid lack of resources and backlog of review sessions, resulting in long waiting time
+<br/>2. Encourage teams to implement an efficient and responsive code review mechanism</td>
+    </tr>
+    <tr>
+        <td>Bug Age</td>
+        <td>Lead time of issues in type "Bug"</td>
+        <td>Issue/Task Management entities: <a href="https://github.com/merico-dev/lake/blob/main/plugins/jira/README.md">Jira issues</a>, <a href="https://github.com/merico-dev/lake/blob/main/plugins/github/README.md">GitHub issues</a>, etc</td>
+        <td rowspan="2">
+1. Observe the trend of bug age and locate the key reasons.<br/>
+2. According to the severity level, type (business, functional classification), affected module, source of bugs, count and observe the length of bug and incident age.</td>
+        <td rowspan="2">1. Help the team to establish an effective hierarchical response mechanism for bugs and incidents. Focus on the resolution of important problems in the backlog.<br/>
+2. Improve team's and individual's bug/incident fixing efficiency. Identify good/to-be-improved practices that affect bug age or incident age</td>
+    </tr>
+    <tr>
+        <td>Incident Age</td>
+        <td>Lead time of issues in type "Incident"</td>
+        <td>Issue/Task Management entities: <a href="https://github.com/merico-dev/lake/blob/main/plugins/jira/README.md">Jira issues</a>, <a href="https://github.com/merico-dev/lake/blob/main/plugins/github/README.md">GitHub issues</a>, etc</td>
+    </tr>
+    <tr>
+        <td rowspan="8">Delivery Quality</td>
+        <td>Pull Request Count</td>
+        <td>Number of Pull/Merge Requests</td>
+        <td>Source Code Management entities: <a href="https://github.com/merico-dev/lake/blob/main/plugins/github/README.md">GitHub</a> PRs, <a href="https://github.com/merico-dev/lake/blob/main/plugins/gitlab/README.md">GitLab</a> MRs, etc</td>
+        <td rowspan="3">
+1. From the developer dimension, we evaluate the code quality of developers by combining the task complexity with the metrics related to the number of review passes and review rounds.<br/>
+2. From the reviewer dimension, we observe the reviewer's review style by taking into account the task complexity, the number of passes and the number of review rounds.<br/>
+3. From the project/team dimension, we combine the project phase and team task complexity to aggregate the metrics related to the number of review passes and review rounds, and identify the modules with abnormal code review process and possible quality risks.</td>
+        <td rowspan="3">1. Code review metrics are process indicators to provide quick feedback on developers' code quality<br/>
+2. Promote the team to establish a unified coding specification and standardize the code review criteria<br/>
+3. Identify modules with low-quality risks in advance, optimize practices, and precipitate into reusable knowledge and tools to avoid technical debt accumulation</td>
+    </tr>
+    <tr>
+        <td>Pull Request Pass Rate</td>
+        <td>Ratio of merged Pull/Merge Requests to all Pull/Merge Requests</td>
+        <td>Source Code Management entities: <a href="https://github.com/merico-dev/lake/blob/main/plugins/github/README.md">GitHub</a> PRs, <a href="https://github.com/merico-dev/lake/blob/main/plugins/gitlab/README.md">GitLab</a> MRs, etc</td>
+    </tr>
+    <tr>
+        <td>Pull Request Review Rounds</td>
+        <td>Number of cycles of commits followed by comments/final merge</td>
+        <td>Source Code Management entities: <a href="https://github.com/merico-dev/lake/blob/main/plugins/github/README.md">GitHub</a> PRs, <a href="https://github.com/merico-dev/lake/blob/main/plugins/gitlab/README.md">GitLab</a> MRs, etc</td>
+    </tr>
+    <tr>
+        <td>Pull Request Review Count</td>
+        <td>Number of Pull/Merge Reviewers</td>
+        <td>Source Code Management entities: <a href="https://github.com/merico-dev/lake/blob/main/plugins/github/README.md">GitHub</a> PRs, <a href="https://github.com/merico-dev/lake/blob/main/plugins/gitlab/README.md">GitLab</a> MRs, etc</td>
+        <td>1. As a secondary indicator, assess the cost of labor invested in the code review process</td>
+        <td>1. Take inventory of project/team code review resources to avoid long waits for review sessions due to insufficient resource input</td>
+    </tr>
+    <tr>
+        <td>Bug Count</td>
+        <td>Number of bugs found during testing</td>
+        <td>Issue/Task Management entities: <a href="https://github.com/merico-dev/lake/blob/main/plugins/jira/README.md">Jira issues</a>, <a href="https://github.com/merico-dev/lake/blob/main/plugins/github/README.md">GitHub issues</a>, etc</td>
+        <td rowspan="4">
+1. From the project or team dimension, observe the statistics on the total number of defects, the distribution of the number of defects in each severity level/type/owner, the cumulative trend of defects, and the change trend of the defect rate in thousands of lines, etc.<br/>
+2. From version cycle dimension, observe the statistics on the cumulative trend of the number of defects/defect rate, which can be used to determine whether the growth rate of defects is slowing down, showing a flat convergence trend, and is an important reference for judging the stability of software version quality<br/>
+3. From the time dimension, analyze the trend of the number of test defects, defect rate to locate the key items/key points<br/>
+4. Evaluate whether the software quality and test plan are reasonable by referring to CMMI standard values</td>
+        <td rowspan="4">1. Defect drill-down analysis to inform the development of design and code review strategies and to improve the internal QA process<br/>
+2. Assist teams to locate projects/modules with higher defect severity and density, and clean up technical debts<br/>
+3. Analyze critical points, identify good/to-be-improved practices that affect defect count or defect rate, to reduce the amount of future defects</td>
+    </tr>
+    <tr>
+        <td>Incident Count</td>
+        <td>Number of Incidents found after shipping</td>
+        <td>Source Code Management entities: <a href="https://github.com/merico-dev/lake/blob/main/plugins/github/README.md">GitHub</a> PRs, <a href="https://github.com/merico-dev/lake/blob/main/plugins/gitlab/README.md">GitLab</a> MRs, etc</td>
+    </tr>
+    <tr>
+        <td>Bugs Count per 1k Lines of Code</td>
+        <td>Amount of bugs per 1,000 lines of code</td>
+        <td>Source Code Management entities: <a href="https://github.com/merico-dev/lake/blob/main/plugins/github/README.md">GitHub</a> PRs, <a href="https://github.com/merico-dev/lake/blob/main/plugins/gitlab/README.md">GitLab</a> MRs, etc</td>
+    </tr>
+    <tr>
+        <td>Incidents Count per 1k Lines of Code</td>
+        <td>Amount of incidents per 1,000 lines of code</td>
+        <td>Source Code Management entities: <a href="https://github.com/merico-dev/lake/blob/main/plugins/github/README.md">GitHub</a> PRs, <a href="https://github.com/merico-dev/lake/blob/main/plugins/gitlab/README.md">GitLab</a> MRs, etc</td>
+    </tr>
+    <tr>
+        <td>Delivery Cost</td>
+        <td>Commit Author Count</td>
+        <td>Number of Contributors who have committed code</td>
+        <td>Source Code Management entities: <a href="https://github.com/merico-dev/lake/blob/main/plugins/gitextractor/README.md">Git</a>/<a href="https://github.com/merico-dev/lake/blob/main/plugins/github/README.md">GitHub</a>/<a href="https://github.com/merico-dev/lake/blob/main/plugins/gitlab/README.md">GitLab</a> commits</td>
+        <td>1. As a secondary indicator, this helps assess the labor cost of participating in coding</td>
+        <td>1. Take inventory of project/team R&D resource inputs, assess input-output ratio, and rationalize resource deployment</td>
+    </tr>
+    <tr>
+        <td rowspan="3">Delivery Capability</td>
+        <td>Build Count</td>
+        <td>The number of builds started</td>
+        <td>CI/CD entities: <a href="https://github.com/merico-dev/lake/blob/main/plugins/jenkins/README.md">Jenkins</a> builds, <a href="https://github.com/merico-dev/lake/blob/main/plugins/gitlab/README.md">GitLab CI</a> pipelines, etc</td>
+        <td rowspan="3">1. From the project dimension, compare the number of builds and success rate by combining the project phase and the complexity of tasks<br/>
+2. From the time dimension, analyze the trend of the number of builds and success rate to see if it has improved over time</td>
+        <td rowspan="3">1. As a process indicator, it reflects the value flow efficiency of upstream production and research links<br/>
+2. Identify excellent/to-be-improved practices that impact the build, and drive the team to precipitate reusable tools and mechanisms to build infrastructure for fast and high-frequency delivery</td>
+    </tr>
+    <tr>
+        <td>Build Duration</td>
+        <td>The duration of successful builds</td>
+        <td>CI/CD entities: <a href="https://github.com/merico-dev/lake/blob/main/plugins/jenkins/README.md">Jenkins</a> builds, <a href="https://github.com/merico-dev/lake/blob/main/plugins/gitlab/README.md">GitLab CI</a> pipelines, etc</td>
+    </tr>
+    <tr>
+        <td>Build Success Rate</td>
+        <td>The percentage of successful builds</td>
+        <td>CI/CD entities: <a href="https://github.com/merico-dev/lake/blob/main/plugins/jenkins/README.md">Jenkins</a> builds, <a href="https://github.com/merico-dev/lake/blob/main/plugins/gitlab/README.md">GitLab CI</a> pipelines, etc</td>
+    </tr>
+</table>
+<br/><br/><br/>
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v0.13/DeveloperManuals/E2E-Test-Guide.md b/i18n/zh/docusaurus-plugin-content-docs/version-v0.13/DeveloperManuals/E2E-Test-Guide.md
new file mode 100644
index 00000000..fc9efd0b
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v0.13/DeveloperManuals/E2E-Test-Guide.md
@@ -0,0 +1,202 @@
+---
+title: "E2E Test Guide"
+description: >
+  The steps to write E2E tests for plugins.
+---
+
+# How to Write E2E Tests for Plugins
+
+## Why Write E2E Tests
+
+E2E testing, as part of automated testing, generally refers to black-box testing at the file and module level, or to unit tests that are allowed to use external services such as a database. The purpose of writing E2E tests is to shield the internal implementation logic and check, purely from the angle of data correctness, whether the same external input still produces the same output. Compared with black-box integration tests, they also avoid flaky failures caused by the network and other environmental factors. For more background on plugins, see: Why write E2E tests (not finished yet).
+In DevLake, E2E tests cover API tests and the verification of the input/output of plugin Extract/Convert subtasks; this article only covers how to write the latter.
+
+## Preparing the Data
+
+We will use a simple plugin as the example: the Feishu meeting duration collector. Its current directory structure looks like this.
+![image](https://user-images.githubusercontent.com/3294100/175061114-53404aac-16ca-45d1-a0ab-3f61d84922ca.png)
+Next, we will write the E2E tests for this plugin.
+
+The first step is to run the plugin's Collect tasks to finish collecting the data, i.e. to make sure the database tables starting with `_raw_feishu_` contain the corresponding data.
+Below are the run log and database result when running with DirectRun (cmd).
+
+```
+$ go run plugins/feishu/main.go --numOfDaysToCollect 2 --connectionId 1 (note: the command may change as versions evolve)
+[2022-06-22 23:03:29]  INFO failed to create dir logs: mkdir logs: file exists
+press `c` to send cancel signal
+[2022-06-22 23:03:29]  INFO  [feishu] start plugin
+[2022-06-22 23:03:33]  INFO  [feishu] scheduler for api https://open.feishu.cn/open-apis/vc/v1 worker: 13, request: 10000, duration: 1h0m0s
+[2022-06-22 23:03:33]  INFO  [feishu] total step: 2
+[2022-06-22 23:03:33]  INFO  [feishu] executing subtask collectMeetingTopUserItem
+[2022-06-22 23:03:33]  INFO  [feishu] [collectMeetingTopUserItem] start api collection
+[2022-06-22 23:03:34]  INFO  [feishu] [collectMeetingTopUserItem] finished records: 1
+[2022-06-22 23:03:34]  INFO  [feishu] [collectMeetingTopUserItem] end api collection error: %!w(<nil>)
+[2022-06-22 23:03:34]  INFO  [feishu] finished step: 1 / 2
+[2022-06-22 23:03:34]  INFO  [feishu] executing subtask extractMeetingTopUserItem
+[2022-06-22 23:03:34]  INFO  [feishu] [extractMeetingTopUserItem] get data from _raw_feishu_meeting_top_user_item where params={"connectionId":1} and got 148
+[2022-06-22 23:03:34]  INFO  [feishu] [extractMeetingTopUserItem] finished records: 1
+[2022-06-22 23:03:34]  INFO  [feishu] finished step: 2 / 2
+```
+
+<img width="993" alt="image" src="https://user-images.githubusercontent.com/3294100/175064505-bc2f98d6-3f2e-4ccf-be68-a1cab1e46401.png"/>
+Good. The data has now been saved into the `_raw_feishu_*` tables, and the `data` column holds the payload returned while the plugin ran. Here we only collected the last 2 days of data. That is not much, but it still covers different cases, i.e. the same person has data on different days.
+
+It is also worth mentioning that the plugin ran two tasks, `collectMeetingTopUserItem` and `extractMeetingTopUserItem`. The former collects the data and is the one we need to run here; the latter extracts the data and is the one we need to test. Whether it runs during data preparation does not matter.
+
+Next we need to export the data to .csv format. There are many ways to do this step, and you can use whatever works for you; below are just a few common options.
+
+### Exporting with the DevLake Code Generator
+
+Simply run `go run generator/main.go create-e2e-raw` and follow the prompts to complete the export. This is the easiest option, but it has some limitations, e.g. the exported fields are fixed. If you need more customization, take a look at the options below.
+
+![usage](https://user-images.githubusercontent.com/3294100/175849225-12af5251-6181-4cd9-ba72-26087b05ee73.gif)
+
+### Exporting with GoLand Database Tools
+
+![image](https://user-images.githubusercontent.com/3294100/175067303-7e5e1c4d-2430-4eb5-ad00-e38d86bbd108.png)
+
+This option is straightforward and works fine with both PostgreSQL and MySQL.
+![image](https://user-images.githubusercontent.com/3294100/175068178-f1c1c290-e043-4672-b43e-54c4b954c685.png)
+A CSV export is considered successful when the Go program can read it without errors, so the following points are worth noting:
+
+1. Values in the CSV file may be wrapped in double quotes so that commas and other special characters inside them do not break the CSV format
+2. Double quotes inside CSV values are escaped, usually with `""` representing one double quote
+3. Check that the data column holds the literal value rather than a base64-encoded one
+
+After exporting, put the .csv file at `plugins/feishu/e2e/raw_tables/_raw_feishu_meeting_top_user_item.csv`.
+
+### MySQL Select Into Outfile
+
+This is MySQL's way of exporting query results to a file. The MySQL instance started by the current docker-compose.yml runs with a security option that disallows `select ... into outfile`, so you first need to turn that security setting off, roughly as shown below:
+![origin_img_v2_c809c901-01bc-4ec9-b52a-ab4df24c376g](https://user-images.githubusercontent.com/3294100/175070770-9b7d5b75-574b-49ed-9bca-e9f611f60795.jpg)
+Once it is off, use `select ... into outfile` to export the CSV file. The result looks roughly like this:
+![origin_img_v2_ccfdb260-668f-42b4-b249-6c2dd45816ag](https://user-images.githubusercontent.com/3294100/175070866-2204ae13-c058-4a16-bc20-93ab7c95f832.jpg)
+Note that the data column is exported in hex form and has to be converted into its literal value by hand.
+
+### VS Code Database
+
+This uses VS Code to export query results to a file, but it is not easy to get right. Below is the export result without changing any configuration.
+![origin_img_v2_c9eaadaa-afbc-4c06-85bc-e78235f7eb3g](https://user-images.githubusercontent.com/3294100/175071987-760c2537-240c-4314-bbd6-1a0cd85ddc0f.jpg)
+You can clearly see that the escaping does not follow the CSV specification and that data was not exported successfully. After adjusting the configuration and manually replacing `\"` with `""`, we get the following result.
+![image](https://user-images.githubusercontent.com/3294100/175072314-954c6794-3ebd-45bb-98e7-60ddbb5a7da9.png)
+In this file the data column is base64-encoded, so it still has to be decoded into literal values manually before it can be used.
+
+### MySQL Workbench
+
+With this tool you have to write the SQL yourself to export the data. You can adapt the following SQL:
+```sql
+SELECT id, params, CAST(`data` as char) as data, url, input,created_at FROM _raw_feishu_meeting_top_user_item;
+```
+![image](https://user-images.githubusercontent.com/3294100/175080866-1631a601-cbe6-40c0-9d3a-d23ca3322a50.png)
+Choose csv as the save format; the exported file can be used directly.
+
+### Postgres COPY with csv header
+
+`Copy (SQL statement) to '/var/lib/postgresql/data/raw.csv' with csv header;` is a common way to export CSV from PostgreSQL, and it can be used here as well.
+```sql
+COPY (
+SELECT id, params, convert_from(data, 'utf-8') as data, url, input,created_at FROM _raw_feishu_meeting_top_user_item
+) to '/var/lib/postgresql/data/raw.csv' with csv header;
+```
+Run the statement above to export the file. If your PostgreSQL runs inside Docker, you also need to copy the file to the host machine with the `docker cp` command before you can use it.
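+
+For example (the container name below is a placeholder; use `docker ps` to find the actual PostgreSQL container name in your setup):
+
+```sh
+docker cp <postgres-container>:/var/lib/postgresql/data/raw.csv \
+  ./plugins/feishu/e2e/raw_tables/_raw_feishu_meeting_top_user_item.csv
+```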
+
+## Writing the E2E Test
+
+First we need to create the test file; here we create `meeting_test.go`.
+![image](https://user-images.githubusercontent.com/3294100/175091380-424974b9-15f3-457b-af5c-03d3b5d17e73.png)
+Then add the test setup code below. Roughly, it creates an instance of the `feishu` plugin and then calls `ImportCsvIntoRawTable` to import the data from the CSV file into the `_raw_feishu_meeting_top_user_item` table.
+```go
+func TestMeetingDataFlow(t *testing.T) {
+	var plugin impl.Feishu
+	dataflowTester := e2ehelper.NewDataFlowTester(t, "feishu", plugin)
+
+	// import raw data table
+	dataflowTester.ImportCsvIntoRawTable("./raw_tables/_raw_feishu_meeting_top_user_item.csv", "_raw_feishu_meeting_top_user_item")
+}
+```
+The signature of the import function is:
+```func (t *DataFlowTester) ImportCsvIntoRawTable(csvRelPath string, rawTableName string)```
+It has a twin that differs only slightly in its parameters:
+```func (t *DataFlowTester) ImportCsvIntoTabler(csvRelPath string, dst schema.Tabler)```
+The former imports tables in the raw layer, while the latter can import arbitrary tables.
+**Note:** before importing data, both functions drop the table and recreate it with `gorm.AutoMigrate` in order to clear out its contents.
+
+Once the data is imported, try running the test. There is no test logic yet, so it is guaranteed to PASS. Next, keep extending `TestMeetingDataFlow` with the logic that invokes the `extract` task.
+
+```go
+func TestMeetingDataFlow(t *testing.T) {
+	var plugin impl.Feishu
+	dataflowTester := e2ehelper.NewDataFlowTester(t, "feishu", plugin)
+
+	taskData := &tasks.FeishuTaskData{
+		Options: &tasks.FeishuOptions{
+			ConnectionId: 1,
+		},
+	}
+
+	// import raw data table
+	dataflowTester.ImportCsvIntoRawTable("./raw_tables/_raw_feishu_meeting_top_user_item.csv", "_raw_feishu_meeting_top_user_item")
+
+	// verify extraction
+	dataflowTester.FlushTabler(&models.FeishuMeetingTopUserItem{})
+	dataflowTester.Subtask(tasks.ExtractMeetingTopUserItemMeta, taskData)
+
+}
+```
+The new code calls `dataflowTester.FlushTabler` to clear the table backing `FeishuMeetingTopUserItem`, and `dataflowTester.Subtask` to simulate running the `ExtractMeetingTopUserItemMeta` subtask.
+
+Now run it and see whether the subtask `ExtractMeetingTopUserItemMeta` completes without errors. The data that `extract` consumes generally comes from the raw tables, so if the plugin subtask is written correctly, it will run properly and you will see the data successfully parsed into the tool-layer tables; in this case, the `_tool_feishu_meeting_top_user_items` table will contain the correct data.
+
+If it does not run correctly, debug the plugin itself first before moving on to the next step.
+
+## Verifying That the Results Are Correct
+
+Let's keep writing the test. At the end of the test function, add the following code:
+```go
+
+func TestMeetingDataFlow(t *testing.T) {
+    ……
+    
+    dataflowTester.VerifyTable(
+      models.FeishuMeetingTopUserItem{},
+      "./snapshot_tables/_tool_feishu_meeting_top_user_items.csv",
+      []string{"connection_id", "start_time", "name"},
+      []string{
+        "meeting_count",
+        "meeting_duration",
+        "user_type",
+        "_raw_data_params",
+        "_raw_data_table",
+        "_raw_data_id",
+        "_raw_data_remark",
+      },
+    )
+}
+```
+This calls `dataflowTester.VerifyTable` to verify the resulting data. The third argument is the table's primary key, and the fourth argument lists all the fields that need to be verified. The data used for verification lives in `./snapshot_tables/_tool_feishu_meeting_top_user_items.csv`, which, of course, does not exist yet.
+
+To make generating that file easy, DevLake uses a testing technique called `Snapshot`: when `VerifyTable` is called and the CSV does not exist, the file is generated automatically from the run results.
+
+But beware! An auto-generated snapshot does not mean you are done. Two more things are needed: 1. check that the generated file is correct; 2. run the test again to make sure the generated snapshot and the results of another run do not differ.
+These two steps are very important and directly determine the quality of the test. We should treat `.csv` snapshot files with the same care as code files.
+
+If something goes wrong at this step, there are usually two possible causes:
+1. The verified fields include values such as created_at timestamps or auto-increment ids that cannot be verified repeatedly; such non-reproducible fields should be excluded.
+2. The results contain fields with mismatched escaping such as `\n` or `\r\n`, usually introduced while parsing the `httpResponse`. This can be solved as follows:
+    1. In the API model, change the type of the content field to `json.RawMessage`
+    2. Convert it to a string only during extraction
+    3. With these changes the `\n` characters are stored exactly as-is, preventing the database or the operating system from interpreting the newlines
+
+
+For example, this is how the `github` plugin handles it:
+![image](https://user-images.githubusercontent.com/3294100/175098219-c04b810a-deaf-4958-9295-d5ad4ec152e6.png)
+![image](https://user-images.githubusercontent.com/3294100/175098273-e4a18f9a-51c8-4637-a80c-3901a3c2934e.png)
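+
+In code, the idea looks roughly like this (a sketch; the struct and field names are illustrative, not the actual github plugin models):
+
+```go
+import "encoding/json"
+
+// API-layer model: keep the body as json.RawMessage so escape sequences such
+// as "\n" survive exactly as received from the HTTP response.
+type apiIssue struct {
+	Title string          `json:"title"`
+	Body  json.RawMessage `json:"body"`
+}
+
+// At extraction time, convert the raw bytes to a string for the tool-layer model.
+func toToolLayerBody(raw json.RawMessage) string {
+	return string(raw)
+}
+```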
+
+OK, at this point the E2E test is complete. This change added only 3 new files to fully test the meeting duration collection task. Pretty simple, right?
+![image](https://user-images.githubusercontent.com/3294100/175098574-ae6c7fb7-7123-4d80-aa85-790b492290ca.png)
+
+## Running the E2E Tests of All Plugins Like CI Does
+
+It's very simple: just run `make e2e-plugins`, because DevLake has already packaged this into a script.
+
+  
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v0.13/DeveloperManuals/PluginImplementation.md b/i18n/zh/docusaurus-plugin-content-docs/version-v0.13/DeveloperManuals/PluginImplementation.md
new file mode 100644
index 00000000..41c44194
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v0.13/DeveloperManuals/PluginImplementation.md
@@ -0,0 +1,337 @@
+---
+title: "如何制作一个DevLake插件?"
+sidebar_position: 2
+description: >
+  如何制作一个DevLake插件?
+---
+
+
+If your favorite DevOps tool is not yet supported by DevLake, don't worry. Implementing a DevLake plugin is not difficult. In this article, we will go through the basics of DevLake plugins and build an example plugin from scratch together.
+
+## What Is a Plugin?
+
+A DevLake plugin is a shared library built with Go's `plugin` package that hooks into DevLake core at run time.
+
+A plugin can extend DevLake's capabilities in three ways:
+
+1. Integrating with new data sources
+2. Transforming/enriching existing data
+3. Exporting DevLake data to other data systems
+
+
+## How Do Plugins Work?
+
+A plugin is mainly a collection of subtasks that DevLake core can execute. For data source plugins, a subtask may collect a single entity from the data source (e.g., issues from Jira). Besides the subtasks, there are hooks that a plugin can implement to customize its initialization, migrations and more. The most important interfaces are listed below.
+
+1. [PluginMeta](https://github.com/apache/incubator-devlake/blob/main/plugins/core/plugin_meta.go) is the minimum interface a plugin should implement, with only two functions:
+   - Description() returns the description of the plugin
+   - RootPkgPath() returns the package path of the plugin
+2. [PluginInit](https://github.com/apache/incubator-devlake/blob/main/plugins/core/plugin_init.go) allows a plugin to implement a custom initialization method;
+3. [PluginTask](https://github.com/apache/incubator-devlake/blob/main/plugins/core/plugin_task.go) allows a plugin to prepare customized data, which is executed before its subtasks;
+4. [PluginApi](https://github.com/apache/incubator-devlake/blob/main/plugins/core/plugin_api.go) allows a plugin to implement its own APIs;
+5. [Migratable](https://github.com/apache/incubator-devlake/blob/main/plugins/core/plugin_db_migration.go) returns the plugin's custom database migration scripts;
+6. [PluginModel](https://github.com/apache/incubator-devlake/blob/main/plugins/core/plugin_model.go) allows other plugins to get the model information of all of this plugin's database tables via GetTablesInfo(). (For domain-layer model information, see [DomainLayerSchema](https://devlake.apache.org/zh/docs/DataModels/DevLakeDomainLayerSchema/).)
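+
+For orientation, a minimal sketch of what implementing `PluginMeta` could look like (the plugin name and strings are hypothetical; the real interface lives in `plugins/core/plugin_meta.go` linked above):
+
+```go
+type MyPlugin string
+
+// Description returns a human-readable description of the plugin.
+func (p MyPlugin) Description() string {
+	return "a demo plugin"
+}
+
+// RootPkgPath returns the root package path of the plugin.
+func (p MyPlugin) RootPkgPath() string {
+	return "github.com/apache/incubator-devlake/plugins/myplugin"
+}
+```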
+
+The diagram below shows the execution flow of a plugin:
+
+```mermaid
+flowchart TD
+    subgraph S4[Step4 Extractor workflow]
+    direction LR
+    D4[DevLake]
+    D4 -- "Step4.1 create an\n ApiExtractor and execute it" --> E["ExtractXXXMeta.\nEntryPoint"];
+    E <-- "Step4.2 read the raw table" --> E2["RawDataSubTaskArgs\n.Table"];
+    E -- "Step4.3 parse RawData" --> ApiExtractor.Extract
+    ApiExtractor.Extract -- "return gorm models" --> E
+    end
+    subgraph S3[Step3 Collector workflow]
+    direction LR
+    D3[DevLake]
+    D3 -- "Step3.1 create an\n ApiCollector and execute it" --> C["CollectXXXMeta.\nEntryPoint"];
+    C <-- "Step3.2 create the raw table" --> C2["RawDataSubTaskArgs\n.RAW_BBB_TABLE"];
+    C <-- "Step3.3 build the request query" --> ApiCollectorArgs.\nQuery/UrlTemplate;
+    C <-. "Step3.4 send HTTP requests\n via ApiClient" --> A1["HTTP APIs"];
+    C <-- "Step3.5 parse and\n return the responses" --> ResponseParser;
+    end
+    subgraph S2[Step2 DevLake custom plugin]
+    direction LR
+    D2[DevLake]
+    D2 <-- "Step2.1 initialize the\n plugin in \`Init\`" --> plugin.Init;
+    D2 <-- "Step2.2 (Optional) call and\n return migration scripts" --> plugin.MigrationScripts;
+    D2 <-- "Step2.3 (Optional)\n initialize and return taskCtx" --> plugin.PrepareTaskData;
+    D2 <-- "Step2.4 return the\n subtasks to execute" --> plugin.SubTaskContext;
+    end
+    subgraph S1[Step1 Running DevLake]
+    direction LR
+    main -- "hand over control\n via \`runner.DirectRun\`" --> D1[DevLake];
+    end
+    S1-->S2-->S3-->S4
+```
+There is a lot of information in this diagram and you are not expected to digest it all at once; just use it as a reference while reading the rest of this article.
+
+## Let's Implement the Simplest Possible Plugin Together
+
+In this section, we will walk through how to create a data collection plugin from scratch. The data to collect is the information of all Committers and Contributors of Apache projects, with the goal of checking whether they have signed the CLA. We will do this by:
+
+* requesting `https://people.apache.org/public/icla-info.json` to get the Committers' information
+* requesting the `mailing lists` to get the Contributors' information
+  We will demonstrate how to request and cache the information of all Committers through the Apache API and extract structured data from it. The collection of Contributors will only be covered at the idea level.
+
+
+### Step 1: Create a New Plugin
+
+**Note:** before starting, please make sure DevLake itself can already start up correctly.
+
+> More information about the plugin layout:
+> Generally, we need these directories: `api`, `models` and `tasks`
+>
+> - `api` implements the APIs needed by `config-ui` and other services
+>   - connection [example](https://github.com/apache/incubator-devlake/blob/main/plugins/gitlab/api/connection.go)
+>   - connection model [example](https://github.com/apache/incubator-devlake/blob/main/plugins/gitlab/models/connection.go)
+> - `models` holds the database models and migration scripts
+>   - entity
+>   - data migrations [template](https://github.com/apache/incubator-devlake/tree/main/generator/template/migrationscripts)
+> - `tasks` contains all the subtasks
+>   - task data [template](https://github.com/apache/incubator-devlake/blob/main/generator/template/plugin/tasks/task_data.go-template)
+>   - api client [template](https://github.com/apache/incubator-devlake/blob/main/generator/template/plugin/tasks/task_data_with_api_client.go-template)
+>
+> Note: if these concepts confuse you, don't worry, we will explain them one by one later.
+
+DevLake provides a dedicated tool, Generator, to create plugins. You can scaffold a new plugin by running `go run generator/main.go create-plugin icla`. During creation you will be asked whether you need the default apiClient (`with_api_client`) and which website you want to collect from (`endpoint`).
+
+* `with_api_client` selects whether you need to send requests to HTTP APIs via an api_client.
+* `endpoint` determines which website the plugin will request; in this case it is `https://people.apache.org/`.
+
+![](https://i.imgur.com/itzlFg7.png)
+
+Now our plugin has three files, with `api_client.go` and `task_data.go` located in the `tasks/` subfolder.
+![1](https://i.imgur.com/zon5waf.png)
+
+Next, let's try running the `main` function in `plugin_main.go` to start the plugin. The output should look like this:
+```
+$go run plugins/icla/plugin_main.go
+[2022-06-02 18:07:30]  INFO failed to create dir logs: mkdir logs: file exists
+press `c` to send cancel signal
+[2022-06-02 18:07:30]  INFO  [icla] start plugin
+invalid ICLA_TOKEN, but ignore this error now
+[2022-06-02 18:07:30]  INFO  [icla] scheduler for api https://people.apache.org/ worker: 25, request: 18000, duration: 1h0m0s
+[2022-06-02 18:07:30]  INFO  [icla] total step: 0
+```
+😋 No errors means success! `plugin_main.go` defines the plugin, and some of its configuration is kept in `task_data.go`. These two files make up the simplest possible plugin, while `api_client.go` will be used later to send requests to HTTP APIs.
+
+### 2. Create a data collection subtask
+Before creating it, we need to understand how subtasks are executed.
+
+1. Apache DevLake calls `plugin_main.PrepareTaskData()` to prepare the environment data needed by the subtasks; in this task we need to create an apiClient.
+2. Apache DevLake then invokes the subtasks defined in `plugin_main.SubTaskMetas()`. Subtasks are independent functions that can be used for jobs such as sending API requests and processing data.
+
+> Every subtask must be defined in a `SubTaskMeta` and must implement the SubTaskEntryPoint function, whose signature is
+> ```go
+> type SubTaskEntryPoint func(c SubTaskContext) error
+> ```
+> For more information, see: https://devlake.apache.org/blog/how-apache-devlake-runs/
+>
+> Note: Don't worry if these concepts confuse you; just skip them and follow the steps. A hedged sketch of a `SubTaskMeta` definition follows below for reference.
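+
+For reference, below is a minimal, hedged sketch of what a `SubTaskMeta` for our collector could look like. The field names follow the `core.SubTaskMeta` struct used by existing plugins; the actual file generated in the next step may differ slightly.
+
+```go
+// A hedged sketch of a SubTaskMeta definition; the Generator produces the real one.
+var CollectCommitterMeta = core.SubTaskMeta{
+	Name:             "CollectCommitter", // subtask name shown in the logs
+	EntryPoint:       CollectCommitter,   // the SubTaskEntryPoint function
+	EnabledByDefault: true,               // run it unless explicitly disabled
+	Description:      "Collect committer data from the ICLA api",
+}
+```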
+
+#### 2.1 Create a Collector to request data
+
+Similarly, run `go run generator/main.go create-collector icla committer` to create the subtask. After the Generator finishes, it automatically creates the new files and activates them in `plugin_main.go/SubTaskMetas`.
+
+![](https://i.imgur.com/tkDuofi.png)
+
+> - The Collector collects data from HTTP (or other data sources) and saves it into the rawLayer.
+> - In the `SubTaskEntryPoint` of an `httpCollector`, `helper.NewApiCollector` is used by default to create a new [ApiCollector](https://github.com/apache/incubator-devlake/blob/main/generator/template/plugin/tasks/api_collector.go-template) object, and its `execute()` is called to collect data in parallel.
+>
+> Note: Don't worry if these concepts confuse you; just skip them.
+
+You may notice that `data.ApiClient` is referenced in `plugin_main.go/PrepareTaskData.ApiClient`. It is the tool Apache DevLake recommends for requesting data from HTTP APIs. It supports useful features such as rate limiting, proxies and retries. Of course, you can use the `http` library instead if you prefer; it is just more tedious.
+
+Back to the topic: the goal now is to collect data from `https://people.apache.org/public/icla-info.json`, so the following steps are needed:
+
+1. We have already filled `https://people.apache.org/` into `tasks/api_client.go/ENDPOINT` earlier; take a look now to confirm it.
+
+![](https://i.imgur.com/q8Zltnl.png)
+
+2. Fill `public/icla-info.json` into `UrlTemplate`, remove the unnecessary iterator, and add `println("receive data:", res)` to `ResponseParser` to check whether the collection succeeds. A hedged sketch of the resulting collector arguments follows the screenshot below.
+
+![](https://i.imgur.com/ToLMclH.png)
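+
+In case the screenshot is hard to read, here is a rough, hedged sketch of what the collector arguments might look like after this step. The surrounding names (`rawDataSubTaskArgs`, `data`) come from the generated collector file, and the exact field set may differ in your version of the template.
+
+```go
+// Hedged sketch of the generated collector after filling in UrlTemplate and ResponseParser.
+collector, err := helper.NewApiCollector(helper.ApiCollectorArgs{
+	RawDataSubTaskArgs: *rawDataSubTaskArgs,     // points at the _raw_icla_committer table
+	ApiClient:          data.ApiClient,          // the shared ApiClient created in PrepareTaskData
+	UrlTemplate:        "public/icla-info.json", // appended to the ENDPOINT configured earlier
+	ResponseParser: func(res *http.Response) ([]json.RawMessage, error) {
+		println("receive data:", res) // temporary debug output
+		return nil, nil
+	},
+})
+if err != nil {
+	return err
+}
+return collector.Execute()
+```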
+
+OK, the Collector is now created. Run `main` again to start the plugin. If all goes well, the output should look like this:
+```bash
+[2022-06-06 12:24:52]  INFO  [icla] start plugin
+invalid ICLA_TOKEN, but ignore this error now
+[2022-06-06 12:24:52]  INFO  [icla] scheduler for api https://people.apache.org/ worker: 25, request: 18000, duration: 1h0m0s
+[2022-06-06 12:24:52]  INFO  [icla] total step: 1
+[2022-06-06 12:24:52]  INFO  [icla] executing subtask CollectCommitter
+[2022-06-06 12:24:52]  INFO  [icla] [CollectCommitter] start api collection
+receive data: 0x140005763f0
+[2022-06-06 12:24:55]  INFO  [icla] [CollectCommitter] finished records: 1
+[2022-06-06 12:24:55]  INFO  [icla] [CollectCommitter] end api collection
+[2022-06-06 12:24:55]  INFO  [icla] finished step: 1 / 1
+```
+
+From the logs above we can see that the received data is already being printed. The last step is to decode the response body in `ResponseParser` and return it to DevLake so that it can be stored in the database.
+```go
+ResponseParser: func(res *http.Response) ([]json.RawMessage, error) {
+    body := &struct {
+        LastUpdated string          `json:"last_updated"`
+        Committers  json.RawMessage `json:"committers"`
+    }{}
+    err := helper.UnmarshalResponse(res, body)
+    if err != nil {
+        return nil, err
+    }
+    println("receive data:", len(body.Committers))
+    return []json.RawMessage{body.Committers}, nil
+},
+```
+Run the `main` function again; the result is as follows. A new record can now be seen in the database table `_raw_icla_committer`.
+```bash
+...
+receive data: 272956 /* <- this number means 272956 Committers were received */
+[2022-06-06 13:46:57]  INFO  [icla] [CollectCommitter] finished records: 1
+[2022-06-06 13:46:57]  INFO  [icla] [CollectCommitter] end api collection
+[2022-06-06 13:46:57]  INFO  [icla] finished step: 1 / 1
+```
+
+![](https://i.imgur.com/aVYNMRr.png)
+
+#### 2.2 Create an Extractor to extract data from the rawLayer
+
+> - The Extractor extracts data from the rawLayer and saves it into the tool-layer db tables.
+> - Apart from some specific processing details, the main flow is similar to the Collector's.
+
+The data collected from the HTTP API is so far only stored in the table `_raw_XXXX`, which is not easy to use directly. So we will go on to extract the Committers' names from it. Apache DevLake currently recommends using [gorm](https://gorm.io/docs/index.html) to store data, so we will create a model with gorm and add it to `plugin_main.go/AutoMigrate()`.
+
+plugins/icla/models/committer.go
+```go
+package models
+
+import (
+	"github.com/apache/incubator-devlake/models/common"
+)
+
+type IclaCommitter struct {
+	UserName     string `gorm:"primaryKey;type:varchar(255)"`
+	Name         string `gorm:"primaryKey;type:varchar(255)"`
+	common.NoPKModel
+}
+
+func (IclaCommitter) TableName() string {
+	return "_tool_icla_committer"
+}
+```
+
+plugins/icla/plugin_main.go
+![](https://i.imgur.com/4f0zJty.png)
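+
+If the screenshot above is unclear: the change to `plugin_main.go` simply registers the new model for auto-migration. A hypothetical sketch, assuming the generated `AutoMigrate()` forwards to gorm's `AutoMigrate`, might look like this; follow the actual generated code in your repo.
+
+```go
+// Hypothetical sketch of registering IclaCommitter in plugin_main.go/AutoMigrate();
+// the generated function in your plugin may look different.
+func AutoMigrate(db *gorm.DB) error {
+	return db.AutoMigrate(
+		&models.IclaCommitter{}, // creates/updates the _tool_icla_committer table
+	)
+}
+```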
+
+After completing the steps above, you can run the plugin again. The table `_tool_icla_committer` we just defined will be created automatically, as in the screenshot below.
+![](https://i.imgur.com/7Z324IX.png)
+
+Next, let's run `go run generator/main.go create-extractor icla committer` and enter what the command line prompts for, to create the new subtask.
+
+![](https://i.imgur.com/UyDP9Um.png)
+
+After it finishes, take a look at the `Extract` function in the newly created `committer_extractor.go`. Clearly the `resData.Data` in the parameter is the raw data; we need to decode it with json and create `IclaCommitter` models to store it.
+```go
+Extract: func(resData *helper.RawData) ([]interface{}, error) {
+    names := &map[string]string{}
+    err := json.Unmarshal(resData.Data, names)
+    if err != nil {
+        return nil, err
+    }
+    extractedModels := make([]interface{}, 0)
+    for userName, name := range *names {
+        extractedModels = append(extractedModels, &models.IclaCommitter{
+            UserName: userName,
+            Name:     name,
+        })
+    }
+    return extractedModels, nil
+},
+```
+
+Run the plugin again; the result is as follows:
+```
+[2022-06-06 15:39:40]  INFO  [icla] start plugin
+invalid ICLA_TOKEN, but ignore this error now
+[2022-06-06 15:39:40]  INFO  [icla] scheduler for api https://people.apache.org/ worker: 25, request: 18000, duration: 1h0m0s
+[2022-06-06 15:39:40]  INFO  [icla] total step: 2
+[2022-06-06 15:39:40]  INFO  [icla] executing subtask CollectCommitter
+[2022-06-06 15:39:40]  INFO  [icla] [CollectCommitter] start api collection
+receive data: 272956
+[2022-06-06 15:39:44]  INFO  [icla] [CollectCommitter] finished records: 1
+[2022-06-06 15:39:44]  INFO  [icla] [CollectCommitter] end api collection
+[2022-06-06 15:39:44]  INFO  [icla] finished step: 1 / 2
+[2022-06-06 15:39:44]  INFO  [icla] executing subtask ExtractCommitter
+[2022-06-06 15:39:46]  INFO  [icla] [ExtractCommitter] finished records: 1
+[2022-06-06 15:39:46]  INFO  [icla] finished step: 2 / 2
+```
+We can see that the two tasks have finished, and checking the database shows that the committer data has been saved into `_tool_icla_committer`~
+![](https://i.imgur.com/6svX0N2.png)
+
+#### 2.3 Subtask - Converter
+
+Note: there are two scenarios here (contributing to open source, or just using it yourself), so the Converter is not mandatory. However, we encourage using it, because the Converter and the DomainLayer are very helpful for building common dashboards. More information about the DomainLayer: https://devlake.apache.org/docs/DataModels/DevLakeDomainLayerSchema/
+
+> - The Converter converts the tool-layer data and saves it into the DomainLayer.
+> - Use `helper.NewDataConverter` to create a DataConverter object, then call its `execute()` to run it. A hedged sketch follows below.
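+
+Since this tutorial does not implement a Converter for icla, here is only a rough, hypothetical sketch of the usual shape of a Converter subtask. The field names follow `helper.DataConverterArgs` as used by existing plugins; the cursor setup and the domain-layer target are placeholders to be replaced with real ones.
+
+```go
+// Hypothetical Converter sketch; replace the placeholder parts with real models and queries.
+converter, err := helper.NewDataConverter(helper.DataConverterArgs{
+	RawDataSubTaskArgs: *rawDataSubTaskArgs,                    // same raw-table args as before
+	InputRowType:       reflect.TypeOf(models.IclaCommitter{}), // tool-layer rows to read
+	Input:              cursor,                                 // a db cursor over _tool_icla_committer
+	Convert: func(inputRow interface{}) ([]interface{}, error) {
+		committer := inputRow.(*models.IclaCommitter)
+		_ = committer
+		// Map the tool-layer model to one or more domain-layer models here
+		// and return them so DevLake can persist them.
+		return []interface{}{ /* domain-layer entities */ }, nil
+	},
+})
+if err != nil {
+	return err
+}
+return converter.Execute()
+```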
+
+#### 2.4 Let's try more types of requests~
+Sometimes an OpenApi is protected by a token or some other mechanism, and can only be accessed after a token is obtained. For example, in this case we can only collect the data about whether ordinary Contributors have signed the ICLA after logging in to `private@apahce.com`. Due to space limitations, we will only briefly introduce how to collect data that requires authorization.
+
+Let's look at the `api_client.go` file: `NewIclaApiClient` loads the `ICLA_TOKEN` configuration from `.env`, which lets us add `ICLA_TOKEN=XXXX` to `.env` and use it in `apiClient.SetHeaders()` to simulate a logged-in state. The code is shown below.
+![](https://i.imgur.com/dPxooAx.png)
+
+Of course, we could also use a `username/password` pair to obtain a token after a simulated login; just adapt this to your actual situation. A hedged sketch of wiring the token into the request headers follows.
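+
+As a rough illustration (assuming the generated client reads `.env` through the project's config helper, and noting that the header name and token format below are assumptions that depend on the target API), the wiring might look like:
+
+```go
+// Hedged sketch: read ICLA_TOKEN from .env and attach it to every request.
+token := config.GetConfig().GetString("ICLA_TOKEN")
+apiClient.SetHeaders(map[string]string{
+	"Authorization": fmt.Sprintf("Bearer %s", token), // header name/format is an assumption
+})
+```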
+
+For more details, please see https://github.com/apache/incubator-devlake
+
+#### 2.5 Implement the GetTablesInfo() method of the PluginModel interface
+
+As shown in the gitlab plugin example below,
+add every model that needs to be accessible to other plugins to the return value.
+
+```golang
+var _ core.PluginModel = (*Gitlab)(nil)
+
+func (plugin Gitlab) GetTablesInfo() []core.Tabler {
+	return []core.Tabler{
+		&models.GitlabConnection{},
+		&models.GitlabAccount{},
+		&models.GitlabCommit{},
+		&models.GitlabIssue{},
+		&models.GitlabIssueLabel{},
+		&models.GitlabJob{},
+		&models.GitlabMergeRequest{},
+		&models.GitlabMrComment{},
+		&models.GitlabMrCommit{},
+		&models.GitlabMrLabel{},
+		&models.GitlabMrNote{},
+		&models.GitlabPipeline{},
+		&models.GitlabProject{},
+		&models.GitlabProjectCommit{},
+		&models.GitlabReviewer{},
+		&models.GitlabTag{},
+	}
+}
+```
+
+The interface can be used as follows:
+
+```
+if pm, ok := plugin.(core.PluginModel); ok {
+    tables := pm.GetTablesInfo()
+    for _, table := range tables {
+        // do something
+    }
+}
+
+```
+
+#### 2.6 Submit the plugin to the open-source community
+Congratulations! Your first plugin has been created! 🎖 We encourage open-source contributions~ Next, you will also need to learn about migrationScripts and domainLayers in order to write standardized, platform-independent code. For more information, visit https://devlake.apache.org/docs/DataModels/DevLakeDomainLayerSchema, or contact us for enthusiastic help.
+
+![come on](https://user-images.githubusercontent.com/3294100/178882323-7bae0331-c458-4f34-a63d-af3975b9dd85.jpg)
+
diff --git a/versioned_docs/version-v0.13/DataModels/DevLakeDomainLayerSchema.md b/versioned_docs/version-v0.13/DataModels/DevLakeDomainLayerSchema.md
new file mode 100644
index 00000000..46fd24dc
--- /dev/null
+++ b/versioned_docs/version-v0.13/DataModels/DevLakeDomainLayerSchema.md
@@ -0,0 +1,612 @@
+---
+title: "Domain Layer Schema"
+description: >
+  DevLake Domain Layer Schema
+sidebar_position: 2
+---
+
+## Summary
+
+This document describes Apache DevLake's domain layer schema.
+
+Referring to DevLake's [architecture](../Overview/Architecture.md), the data in the domain layer is transformed from the data in the tool layer. The tool layer schema is based on the data from specific tools such as Jira, GitHub, Gitlab, Jenkins, etc. The domain layer schema can be regarded as an abstraction of tool-layer schemas.
+
+The domain layer schema itself includes 2 logical layers: a `DWD` layer and a `DWM` layer. The DWD layer stores the detailed data points, while the DWM layer stores slightly aggregated and derived data from the DWD layer, providing more organized details and mid-level metrics.
+
+
+## Use Cases
+1. [All metrics](../Metrics) from pre-built dashboards are based on this data schema.
+2. As a user, you can create your own customized dashboards based on this data schema.
+3. As a contributor, you can refer to this data schema while working on the ETL logic when adding/updating data source plugins.
+
+
+## Data Models
+
+This is the up-to-date domain layer schema for DevLake v0.10.x. Tables (entities) are categorized into 5 domains.
+1. Issue tracking domain entities: Jira issues, GitHub issues, GitLab issues, etc.
+2. Source code management domain entities: Git/GitHub/Gitlab commits and refs(tags and branches), etc.
+3. Code review domain entities: GitHub PRs, Gitlab MRs, etc.
+4. CI/CD domain entities: Jenkins jobs & builds, etc.
+5. Cross-domain entities: entities that map entities from different domains to break data isolation.
+
+
+### Schema Diagram
+![Domain Layer Schema](/img/DomainLayerSchema/schema-diagram.png)
+
+When reading the schema, you'll notice that many tables' primary key is called `id`. Unlike auto-increment id or UUID, `id` is a string composed of several parts to uniquely identify similar entities (e.g. repo) from different platforms (e.g. Github/Gitlab) and allow them to co-exist in a single table.
+
+Tables that end with WIP are still under development.
+
+
+### Naming Conventions
+
+1. The name of a table is in plural form. Eg. boards, issues, etc.
+2. The name of a table that describes the relation between 2 entities is in the form of [BigEntity in singular form]\_[SmallEntity in plural form]. Eg. board_issues, sprint_issues, pull_request_comments, etc.
+3. Values of enum-type fields are in capital letters. Eg. [table.issues.type](https://merico.feishu.cn/docs/doccnvyuG9YpVc6lvmWkmmbZtUc#ZDCw9k) has 3 values: REQUIREMENT, BUG, INCIDENT. Values that are phrases, such as 'IN_PROGRESS' of [table.issues.status](https://merico.feishu.cn/docs/doccnvyuG9YpVc6lvmWkmmbZtUc#ZDCw9k), are separated with underscore '\_'.
+
+<br/>
+
+## Get all domain layer model info.
+
+All domain layer models can be accessed by the following method
+
+```golang
+import "github.com/apache/incubator-devlake/models/domainlayer/domaininfo"
+
+domaininfo := domaininfo.GetDomainTablesInfo()
+for _, table := range domaininfo {
+  // do something 
+}
+```
+
+If you want to learn more about plugin models, please visit [PluginImplementation](https://devlake.apache.org/docs/DeveloperManuals/PluginImplementation).
+
+## DWD Entities - (Data Warehouse Detail)
+
+### Domain 1 - Issue Tracking
+
+#### issues
+
+An `issue` is the abstraction of Jira/Github/GitLab/TAPD/... issues.
+
+| **field**                   | **type** | **length** | **description**                                                                                                                                                                                                                                                                                                                                                                                                                                      [...]
+| :-------------------------- | :------- | :--------- | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- [...]
+| `id`                        | varchar  | 255        | An issue's `id` is composed of < plugin >:< Entity >:< PK0 >[:PK1]..." <ul><li>For Github issues, a Github issue's id is like "github:GithubIssues:< GithubIssueId >". Eg. 'github:GithubIssues:1049355647'</li> <li>For Jira issues, a Github repo's id is like "jira:JiraIssues:< JiraSourceId >:< JiraIssueId >". Eg. 'jira:JiraIssues:1:10063'. < JiraSourceId > is used to identify which jira source the issue came from, since DevLake users  [...]
+| `issue_key`                 | varchar  | 255        | The key of this issue. For example, the key of this Github [issue](https://github.com/merico-dev/lake/issues/1145) is 1145.                                                                                                                                                                                                                                                                                                                          [...]
+| `url`                       | varchar  | 255        | The url of the issue. It's a web address in most cases.                                                                                                                                                                                                                                                                                                                                                                                              [...]
+| `title`                     | varchar  | 255        | The title of an issue                                                                                                                                                                                                                                                                                                                                                                                                                                [...]
+| `description`               | longtext |            | The detailed description/summary of an issue                                                                                                                                                                                                                                                                                                                                                                                                         [...]
+| `type`                      | varchar  | 255        | The standard type of this issue. There're 3 standard types: <ul><li>REQUIREMENT: this issue is a feature</li><li>BUG: this issue is a bug found during test</li><li>INCIDENT: this issue is a bug found after release</li></ul>The 3 standard types are transformed from the original types of an issue. The transformation rule is set in the '.env' file or 'config-ui' before data collection. For issues with an original type that has not mapp [...]
+| `status`                    | varchar  | 255        | The standard statuses of this issue. There're 3 standard statuses: <ul><li> TODO: this issue is in backlog or to-do list</li><li>IN_PROGRESS: this issue is in progress</li><li>DONE: this issue is resolved or closed</li></ul>The 3 standard statuses are transformed from the original statuses of an issue. The transformation rule: <ul><li>For Jira issue status: transformed from the Jira issue's `statusCategory`. Jira issue has 3 default [...]
+| `original_status`           | varchar  | 255        | The original status of an issue.                                                                                                                                                                                                                                                                                                                                                                                                                     [...]
+| `story_point`               | int      |            | The story point of this issue. It's default to an empty string for data sources such as Github issues and Gitlab issues.                                                                                                                                                                                                                                                                                                                             [...]
+| `priority`                  | varchar  | 255        | The priority of the issue                                                                                                                                                                                                                                                                                                                                                                                                                            [...]
+| `component`                 | varchar  | 255        | The component a bug-issue affects. This field only supports Github plugin for now. The value is transformed from Github issue labels by the rules set according to the user's configuration of .env by end users during DevLake installation.                                                                                                                                                                                                        [...]
+| `severity`                  | varchar  | 255        | The severity level of a bug-issue. This field only supports Github plugin for now. The value is transformed from Github issue labels by the rules set according to the user's configuration of .env by end users during DevLake installation.                                                                                                                                                                                                        [...]
+| `parent_issue_id`           | varchar  | 255        | The id of its parent issue                                                                                                                                                                                                                                                                                                                                                                                                                           [...]
+| `epic_key`                  | varchar  | 255        | The key of the epic this issue belongs to. For tools with no epic-type issues such as Github and Gitlab, this field is default to an empty string                                                                                                                                                                                                                                                                                                    [...]
+| `original_estimate_minutes` | int      |            | The original estimation of the time allocated for this issue                                                                                                                                                                                                                                                                                                                                                                                        [...]
+| `time_spent_minutes`         | int      |            | The time spent on this issue                                                                                                                                                                                                                                                                                                                                                                                                                        [...]
+| `time_remaining_minutes`     | int      |            | The remaining time to resolve the issue                                                                                                                                                                                                                                                                                                                                                                                                             [...]
+| `creator_id`                 | varchar  | 255        | The id of issue creator                                                                                                                                                                                                                                                                                                                                                                                                                             [...]
+| `creator_name`              | varchar  | 255        | The name of the creator                                                                                                                                                                                                                                                                                                                                                                                                                              [...]
+| `assignee_id`               | varchar  | 255        | The id of issue assignee.<ul><li>For Github issues: this is the last assignee of an issue if the issue has multiple assignees</li><li>For Jira issues: this is the assignee of the issue at the time of collection</li></ul>                                                                                                                                                                                                                         [...]
+| `assignee_name`             | varchar  | 255        | The name of the assignee                                                                                                                                                                                                                                                                                                                                                                                                                             [...]
+| `created_date`              | datetime | 3          | The time issue created                                                                                                                                                                                                                                                                                                                                                                                                                               [...]
+| `updated_date`              | datetime | 3          | The last time issue gets updated                                                                                                                                                                                                                                                                                                                                                                                                                     [...]
+| `resolution_date`           | datetime | 3          | The time the issue changes to 'DONE'.                                                                                                                                                                                                                                                                                                                                                                                                                [...]
+| `lead_time_minutes`         | int      |            | Describes the cycle time from issue creation to issue resolution.<ul><li>For issues whose type = 'REQUIREMENT' and status = 'DONE', lead_time_minutes = resolution_date - created_date. The unit is minute.</li><li>For issues whose type != 'REQUIREMENT' or status != 'DONE', lead_time_minutes is null</li></ul>                                                                                                                                  [...]
+
+#### issue_labels
+
+This table shows the labels of issues. Multiple entries can exist per issue. This table can be used to filter issues by label name.
+
+| **field**  | **type** | **length** | **description** | **key**      |
+| :--------- | :------- | :--------- | :-------------- | :----------- |
+| `name`     | varchar  | 255        | Label name      |              |
+| `issue_id` | varchar  | 255        | Issue ID        | FK_issues.id |
+
+
+#### issue_comments(WIP)
+
+This table shows the comments of issues. Issues with multiple comments are shown as multiple records. This table can be used to calculate _metric - issue response time_.
+
+| **field**      | **type** | **length** | **description**                                                                                                                                                                               | **key**      |
+| :------------- | :------- | :--------- | :-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :----------- |
+| `id`           | varchar  | 255        | The unique id of a comment                                                                                                                                                                    | PK           |
+| `issue_id`     | varchar  | 255        | Issue ID                                                                                                                                                                                      | FK_issues.id |
+| `account_id`      | varchar  | 255        | The id of the account who made the comment                                                                                                                                                       | FK_accounts.id  |
+| `body`         | longtext |            | The body/detail of the comment                                                                                                                                                                |              |
+| `created_date` | datetime | 3          | The creation date of the comment                                                                                                                                                              |              |
+| `updated_date` | datetime | 3          | The last time comment gets updated                                                                                                                                                            |              |
+
+#### issue_changelogs
+
+This table shows the changelogs of issues. Issues with multiple changelogs are shown as multiple records. This is transformed from Jira or TAPD changelogs.
+
+| **field**             | **type** | **length** | **description**                                                  | **key**        |
+| :-------------------- | :------- | :--------- | :--------------------------------------------------------------- | :------------- |
+| `id`                  | varchar  | 255        | The unique id of an issue changelog                              | PK             |
+| `issue_id`            | varchar  | 255        | Issue ID                                                         | FK_issues.id   |
+| `author_id`           | varchar  | 255        | The id of the user who made the change                           | FK_accounts.id |
+| `author_name`         | varchar  | 255        | The name of the user who made the change                         |                |
+| `field_id`            | varchar  | 255        | The id of the changed field                                      |                |
+| `field_name`          | varchar  | 255        | The name of the changed field                                    |                |
+| `original_from_value` | varchar  | 255        | The original value of the changed field                          |                |
+| `original_to_value`   | varchar  | 255        | The new value of the changed field                               |                |
+| `from_value`          | varchar  | 255        | The transformed/standardized original value of the changed field |                |
+| `to_value`            | varchar  | 255        | The transformed/standardized new value of the changed field      |                |
+| `created_date`        | datetime | 3          | The creation date of the changelog                               |                |
+
+
+#### issue_worklogs
+
+This table shows the work logged under issues. Usually, an issue has multiple worklogs logged by different developers.
+
+| **field**            | **type** | **length** | **description**                                                                              | **key**          |
+| :------------------- | :------- | :--------- | :------------------------------------------------------------------------------------------- | :--------------- |
+| `id`                 | varchar  | 255        | The id of the worklog                                                                                      | PK               |
+| `author_id`          | varchar  | 255        | The id of the author who logged the work                                                     | FK_acccounts.id  |
+| `comment`            | longtext | 255        | The comment made while logging the work.                                                     |                  |
+| `time_spent_minutes` | int      |            | The time logged. The unit of value is normalized to minute. Eg. 1d => 480, 4h30m => 270     |                  |
+| `logged_date`        | datetime | 3          | The time of this logging action                                                              |                  |
+| `started_date`       | datetime | 3          | Start time of the worklog                                                                    |                  |
+| `issue_id`           | varchar  | 255        | Issue ID                                                                                     | FK_issues.id     |
+
+
+#### boards
+
+A `board` is an issue list or a collection of issues. It's the abstraction of a Jira board, a Jira project, a [Github issue list](https://github.com/merico-dev/lake/issues) or a GitLab issue list. This table can be used to filter issues by the boards they belong to.
+
+| **field**      | **type** | **length** | **description**                                                                                                                                                                                                                                                                                                                                                                      | **key** |
+| :------------- | :------- | :--------- | :----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :------ |
+| `id`           | varchar  | 255        | A board's `id` is composed of "< plugin >:< Entity >:< PK0 >[:PK1]..." <ul><li>For a Github repo's issue list, the board id is like "< github >:< GithubRepos >:< GithubRepoId >". Eg. "github:GithubRepo:384111310"</li> <li>For a Jira Board, the id is like "< jira >:< JiraSourceId >< JiraBoards >:< JiraBoardsId >". Eg. "jira:1:JiraBoards:12"</li></ul> | PK      |
+| `name`           | varchar  | 255        | The name of the board. Note: the board name of a Github project 'merico-dev/lake' is 'merico-dev/lake', representing the [default issue list](https://github.com/merico-dev/lake/issues).                                                                                                                                                                                            |         |
+| `description`  | varchar  | 255        | The description of the board.                                                                                                                                                                                                                                                                                                                                                        |         |
+| `url`          | varchar  | 255        | The url of the board. Eg. https://Github.com/merico-dev/lake                                                                                                                                                                                                                                                                                                                         |         |
+| `created_date` | datetime | 3          | Board creation time                                                                                                                                                                                                                                                                                                                             |         |
+
+#### board_issues
+
+This table shows the relation between boards and issues. This table can be used to filter issues by board.
+
+| **field**  | **type** | **length** | **description** | **key**      |
+| :--------- | :------- | :--------- | :-------------- | :----------- |
+| `board_id` | varchar  | 255        | Board id        | FK_boards.id |
+| `issue_id` | varchar  | 255        | Issue id        | FK_issues.id |
+
+#### sprints
+
+A `sprint` is the abstraction of Jira sprints, TAPD iterations and Github milestones. A sprint contains a list of issues.
+
+| **field**           | **type** | **length** | **description**                                                                                                                                                                                                                                                                                                                                                                                                                                              [...]
+| :------------------ | :------- | :--------- | :----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- [...]
+| `id`                | varchar  | 255        | A sprint's `id` is composed of "< plugin >:< Entity >:< PK0 >[:PK1]..."<ul><li>A sprint in a Github repo is a milestone, the sprint id is like "< github >:< GithubRepos >:< GithubRepoId >:< milestoneNumber >".<br/>Eg. The id for this [sprint](https://github.com/merico-dev/lake/milestone/5) is "github:GithubRepo:384111310:5"</li><li>For a Jira Board, the id is like "< jira >:< JiraSourceId >< JiraBoards >:< JiraBoardsId >".<br/>Eg. "jira:1:J [...]
+| `name`              | varchar  | 255        | The name of sprint.<br/>For Github projects, the sprint name is the milestone name. For instance, 'v0.10.0 - Introduce Temporal to DevLake' is the name of this [sprint](https://github.com/merico-dev/lake/milestone/5).                                                                                                                                                                                                                                    [...]
+| `url`               | varchar  | 255        | The url of sprint.                                                                                                                                                                                                                                                                                                                                                                                                                                           [...]
+| `status`            | varchar  | 255        | There're 3 statuses of a sprint:<ul><li>CLOSED: a completed sprint</li><li>ACTIVE: a sprint started but not completed</li><li>FUTURE: a sprint that has not started</li></ul>                                                                                                                                                                                                                                                                                [...]
+| `started_date`      | datetime | 3          | The start time of a sprint                                                                                                                                                                                                                                                                                                                                                                                                                                   [...]
+| `ended_date`        | datetime | 3          | The planned/estimated end time of a sprint. It's usually set when planning a sprint.                                                                                                                                                                                                                                                                                                                                                                         [...]
+| `completed_date`    | datetime | 3          | The actual time to complete a sprint.                                                                                                                                                                                                                                                                                                                                                                                                                        [...]
+| `original_board_id` | datetime | 3          | The id of board where the sprint first created. This field is not null only when this entity is transformed from Jira sprintas.<br/>In Jira, sprint and board entities have 2 types of relation:<ul><li>A sprint is created based on a specific board. In this case, board(1):(n)sprint. The `original_board_id` is used to show the relation.</li><li>A sprint can be mapped to multiple boards, a board can also show multiple sprints. In this case, boar [...]
+
+#### sprint_issues
+
+This table shows the relation between sprints and issues that have been added to sprints. This table can be used to show metrics such as _'ratio of unplanned issues'_, _'completion rate of sprint issues'_, etc
+
+| **field**        | **type** | **length** | **description**                                                                                                                                                                                                                                                                                                                                                                                                                                                 [...]
+| :--------------- | :------- | :--------- | :-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- [...]
+| `sprint_id`      | varchar  | 255        | Sprint id                                                                                                                                                                                                                                                                                                                                                                                                                                                       [...]
+| `issue_id`       | varchar  | 255        | Issue id                                                                                                                                                                                                                                                                                                                                                                                                                                                        [...]
+| `is_removed`     | bool     |            | If the issue is removed from this sprint, then TRUE; else FALSE                                                                                                                                                                                                                                                                                                                                                                                                 [...]
+| `added_date`     | datetime | 3          | The time this issue added to the sprint. If an issue is added to a sprint multiple times, the latest time will be the value.                                                                                                                                                                                                                                                                                                                                    [...]
+| `removed_date`   | datetime | 3          | The time this issue gets removed from the sprint. If an issue is removed multiple times, the latest time will be the value.                                                                                                                                                                                                                                                                                                                                     [...]
+| `added_stage`    | varchar  | 255        | The stage when issue is added to this sprint. There're 3 possible values:<ul><li>BEFORE_SPRINT<br/>Planning before sprint starts.<br/> Condition: sprint_issues.added_date <= sprints.start_date</li><li>DURING_SPRINT Planning during a sprint.<br/>Condition: sprints.start_date < sprint_issues.added_date <= sprints.end_date</li><li>AFTER_SPRINT<br/>Planing after a sprint. This is caused by improper operation - adding issues to a completed sprint.< [...]
+| `resolved_stage` | varchar  | 255        | The stage when an issue is resolved (issue status turns to 'DONE'). There're 3 possible values:<ul><li>BEFORE_SPRINT<br/>Condition: issues.resolution_date <= sprints.start_date</li><li>DURING_SPRINT<br/>Condition: sprints.start_date < issues.resolution_date <= sprints.end_date</li><li>AFTER_SPRINT<br/>Condition: issues.resolution_date ) sprints.end_date</li></ul>                                                                                   [...]
+
+#### board_sprints
+
+| **field**   | **type** | **length** | **description** | **key**       |
+| :---------- | :------- | :--------- | :-------------- | :------------ |
+| `board_id`  | varchar  | 255        | Board id        | FK_boards.id  |
+| `sprint_id` | varchar  | 255        | Sprint id       | FK_sprints.id |
+
+<br/>
+
+### Domain 2 - Source Code Management
+
+#### repos
+
+Information about Github or Gitlab repositories. A repository is always owned by a user.
+
+| **field**      | **type** | **length** | **description**                                                                                                                                                                                | **key**     |
+| :------------- | :------- | :--------- | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :---------- |
+| `id`           | varchar  | 255        | A repo's `id` is composed of "< plugin >:< Entity >:< PK0 >[:PK1]..."<br/>For example, a Github repo's id is like "< github >:< GithubRepos >< GithubRepoId >". Eg. 'github:GithubRepos:384111310' | PK          |
+| `name`         | varchar  | 255        | The name of repo.                                                                                                                                                                              |             |
+| `description`  | varchar  | 255        | The description of repo.                                                                                                                                                                       |             |
+| `url`          | varchar  | 255        | The url of repo. Eg. https://Github.com/merico-dev/lake                                                                                                                                        |             |
+| `owner_id`     | varchar  | 255        | The id of the owner of repo                                                                                                                                                                    | FK_accounts.id |
+| `language`     | varchar  | 255        | The major language of repo. Eg. The language for merico-dev/lake is 'Go'                                                                                                                       |             |
+| `forked_from`  | varchar  | 255        | Empty unless the repo is a fork in which case it contains the `id` of the repo the repo is forked from.                                                                                        |             |
+| `deleted`      | tinyint  | 255        | 0: repo is active 1: repo has been deleted                                                                                                                                                     |             |
+| `created_date` | datetime | 3          | Repo creation date                                                                                                                                                                             |             |
+| `updated_date` | datetime | 3          | Last full update was done for this repo                                                                                                                                                        |             |
+
+#### repo_languages(WIP)
+
+Languages that are used in the repository along with byte counts for all files in those languages. This is in line with how Github calculates language percentages in a repository. Multiple entries can exist per repo.
+
+The table is filled in when the repo is first inserted or when an update round for all repos is made.
+
+| **field**      | **type** | **length** | **description**                                                                                                                                                                                    | **key** |
+| :------------- | :------- | :--------- | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :------ |
+| `id`           | varchar  | 255        | A repo's `id` is composed of "< plugin >:< Entity >:< PK0 >[:PK1]..."<br/>For example, a Github repo's id is like "< github >:< GithubRepos >< GithubRepoId >". Eg. 'github:GithubRepos:384111310' | PK      |
+| `language`     | varchar  | 255        | The language of repo.<br/>These are the [languages](https://api.github.com/repos/merico-dev/lake/languages) for merico-dev/lake                                                                    |         |
+| `bytes`        | int      |            | The byte counts for all files in those languages                                                                                                                                                   |         |
+| `created_date` | datetime | 3          | The field is filled in with the latest timestamp the query for a specific `repo_id` was done.                                                                                                      |         |
+
+#### repo_commits
+
+The commits belong to the history of a repository. More than one repo can share the same commits if one is a fork of the other.
+
+| **field**    | **type** | **length** | **description** | **key**        |
+| :----------- | :------- | :--------- | :-------------- | :------------- |
+| `repo_id`    | varchar  | 255        | Repo id         | FK_repos.id    |
+| `commit_sha` | char     | 40         | Commit sha      | FK_commits.sha |
+
+#### refs
+
+A ref is the abstraction of a branch or tag.
+
+| **field**    | **type** | **length** | **description**                                                                                                                                                                                                                                                                                                                                             | **key**     |
+| :----------- | :------- | :--------- | :---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :---------- |
+| `id`         | varchar  | 255        | A ref's `id` is composed of "< plugin >:< Entity >:< PK0 >[:PK1]..."<br/>For example, a Github ref is composed of "github:GithubRepos:< GithubRepoId >:< RefUrl >". Eg. The id of release v5.3.0 of PingCAP/TiDB project is 'github:GithubRepos:384111310:refs/tags/v5.3.0' A repo's `id` is composed of "< plugin >:< Entity >:< PK0 >[:PK1]..."           | PK          |
+| `ref_name`   | varchar  | 255        | The name of ref. Eg. '[refs/tags/v0.9.3](https://github.com/merico-dev/lake/tree/v0.9.3)'                                                                                                                                                                                                                                                                   |             |
+| `repo_id`    | varchar  | 255        | The id of repo this ref belongs to                                                                                                                                                                                                                                                                                                                          | FK_repos.id |
+| `commit_sha` | char     | 40         | The commit this ref points to at the time of collection                                                                                                                                                                                                                                                                                                     |             |
+| `is_default` | int      |            | <ul><li>0: the ref is the default branch. By the definition of [Github](https://docs.github.com/en/repositories/configuring-branches-and-merges-in-your-repository/managing-branches-in-your-repository/changing-the-default-branch), the default branch is the base branch for pull requests and code commits.</li><li>1: not the default branch</li></ul> |             |
+| `merge_base` | char     | 40         | The merge base commit of the main ref and the current ref                                                                                                                                                                                                                                                                                                   |             |
+| `ref_type`   | varchar  | 64         | There're 2 typical types:<ul><li>BRANCH</li><li>TAG</li></ul>                                                                                                                                                                                                                                                                                               |             |
+
+#### refs_commits_diffs
+
+This table shows the commits added in a new ref compared to an old ref. This table can be used to support tag-based analysis, for instance, '_No. of commits of a tag_', '_No. of merged pull request of a tag_', etc.
+
+The records of this table are computed by [RefDiff](https://github.com/merico-dev/lake/tree/main/plugins/refdiff) plugin. The computation should be manually triggered after using [GitRepoExtractor](https://github.com/merico-dev/lake/tree/main/plugins/gitextractor) to collect commits and refs. The algorithm behind is similar to [this](https://github.com/merico-dev/lake/compare/v0.8.0%E2%80%A6v0.9.0).
+
+| **field**            | **type** | **length** | **description**                                                 | **key**        |
+| :------------------- | :------- | :--------- | :-------------------------------------------------------------- | :------------- |
+| `commit_sha`         | char     | 40         | One of the added commits in the new ref compared to the old ref | FK_commits.sha |
+| `new_ref_id`         | varchar  | 255        | The new ref's id for comparison                                 | FK_refs.id     |
+| `old_ref_id`         | varchar  | 255        | The old ref's id for comparison                                 | FK_refs.id     |
+| `new_ref_commit_sha` | char     | 40         | The commit new ref points to at the time of collection          |                |
+| `old_ref_commit_sha` | char     | 40         | The commit old ref points to at the time of collection          |                |
+| `sorting_index`      | varchar  | 255        | An index for debugging, please skip it                          |                |
+
+#### commits
+
+| **field**         | **type** | **length** | **description**                                                                                                                                                  | **key**        |
+| :---------------- | :------- | :--------- | :--------------------------------------------------------------------------------------------------------------------------------------------------------------- | :------------- |
+| `sha`             | char     | 40         | The sha of the commit                                                                                                                                            | PK             |
+| `message`         | varchar  | 255        | Commit message                                                                                                                                                   |                |
+| `author_name`     | varchar  | 255        | The author's name. The value is set with the command `git config user.name xxxxx`                                                                               |                |
+| `author_email`    | varchar  | 255        | The author's email. The value is set with the command `git config user.email xxxxx`                                                                             |                |
+| `authored_date`   | datetime | 3          | The date when this commit was originally made                                                                                                                    |                |
+| `author_id`       | varchar  | 255        | The id of commit author                                                                                                                                          | FK_accounts.id    |
+| `committer_name`  | varchar  | 255        | The name of committer                                                                                                                                            |                |
+| `committer_email` | varchar  | 255        | The email of committer                                                                                                                                           |                |
+| `committed_date`  | datetime | 3          | The last time the commit was modified.<br/>For example, when the branch containing the commit is rebased onto another branch, the committed_date changes.        |                |
+| `committer_id`    | varchar  | 255        | The id of committer                                                                                                                                              | FK_accounts.id    |
+| `additions`       | int      |            | Added lines of code                                                                                                                                              |                |
+| `deletions`       | int      |            | Deleted lines of code                                                                                                                                            |                |
+| `dev_eq`          | int      |            | A metric that quantifies the amount of code contribution. The data can be retrieved from [AE plugin](https://github.com/apache/incubator-devlake/tree/main/plugins/ae). |                |
+
+#### commit_files
+
+The files changed in commits.
+
+| **field**    | **type** | **length** | **description**                                        | **key**        |
+| :----------- | :------- | :--------- | :----------------------------------------------------- | :------------- |
+| `id`         | varchar  | 255        | The `id` is composed of "< Commit_sha >:< file_path >" | PK             |
+| `commit_sha` | char     | 40         | Commit sha                                             | FK_commits.sha |
+| `file_path`  | varchar  | 255        | Path of a changed file in a commit                     |                |
+| `additions`  | int      |            | The added lines of code in this file by the commit     |                |
+| `deletions`  | int      |            | The deleted lines of code in this file by the commit   |                |
+
+#### components
+
+The components of files extracted from the file paths. This can be used to analyze Git metrics by component.
+
+| **field**    | **type** | **length** | **description**                                        | **key**     |
+| :----------- | :------- | :--------- | :----------------------------------------------------- | :---------- |
+| `repo_id`    | varchar  | 255        | The repo id                                            | FK_repos.id |
+| `name`       | varchar  | 255        | The name of component                                  |             |
+| `path_regex` | varchar  | 255        | The regex to extract components from this repo's paths |             |
+
+#### commit_file_components
+
+The relationship between commit_file and component_name.
+
+| **field**        | **type** | **length** | **description**              | **key**            |
+| :--------------- | :------- | :--------- | :--------------------------- | :----------------- |
+| `commit_file_id` | varchar  | 255        | The id of commit file        | FK_commit_files.id |
+| `component_name` | varchar  | 255        | The component name of a file |                    |
+
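+As a sketch of the component-level analysis mentioned above, the following query (assuming `commit_files` and `commit_file_components` have been populated) sums changed lines of code per component:
+
+```sql
+-- Added and deleted lines of code per component.
+SELECT cfc.component_name,
+       SUM(cf.additions) AS added_lines,
+       SUM(cf.deletions) AS deleted_lines
+FROM commit_files cf
+JOIN commit_file_components cfc ON cfc.commit_file_id = cf.id
+GROUP BY cfc.component_name
+ORDER BY added_lines DESC;
+```
+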
+#### commit_parents
+
+The parent commit(s) for each commit, as specified by Git.
+
+| **field**    | **type** | **length** | **description**   | **key**        |
+| :----------- | :------- | :--------- | :---------------- | :------------- |
+| `commit_sha` | char     | 40         | commit sha        | FK_commits.sha |
+| `parent`     | char     | 40         | Parent commit sha | FK_commits.sha |
+
+<br/>
+
+### Domain 3 - Code Review
+
+#### pull_requests
+
+A pull request is an abstraction of GitHub pull requests and GitLab merge requests.
+
+| **field**          | **type** | **length** | **description**                                                                                                                                                                                                                                                                                                                                                                                | **key**        |
+| :----------------- | :------- | :--------- | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :------------- |
+| `id`               | char     | 40         | A pull request's `id` is composed of "< plugin >:< Entity >:< PK0 >[:PK1]...". E.g. 'github:GithubPullRequests:1347'                                                                                                                                                                                            | PK             |
+| `title`            | varchar  | 255        | The title of pull request                                                                                                                                                                                                                                                                                                                                                                      |                |
+| `description`      | longtext |            | The body/description of pull request                                                                                                                                                                                                                                                                                                                                                           |                |
+| `status`           | varchar  | 255        | The status of pull requests. For a GitHub pull request, the status can be either 'open' or 'closed'.                                                                                                                                                                                                            |                |
+| `parent_pr_id`     | varchar  | 255        | The id of the parent PR                                                                                                                                                                                                                                                                                               |                |
+| `pull_request_key` | varchar  | 255        | The key of the PR. E.g. 1563 is the key of this [PR](https://github.com/merico-dev/lake/pull/1563)                                                                                                                                                                                                              |                |
+| `base_repo_id`     | varchar  | 255        | The repo that will be updated.                                                                                                                                                                                                                                                                                                                                                                 |                |
+| `head_repo_id`     | varchar  | 255        | The repo containing the changes that will be added to the base. If the head repository is NULL, this means that the corresponding project had been deleted when DevLake processed the pull request.                                                                                                             |                |
+| `base_ref`         | varchar  | 255        | The branch name in the base repo that will be updated                                                                                                                                                                                                                                                                                                                                          |                |
+| `head_ref`         | varchar  | 255        | The branch name in the head repo that contains the changes that will be added to the base                                                                                                                                                                                                                                                                                                      |                |
+| `author_name`      | varchar  | 255        | The author's name of the pull request                                                                                                                                                                                                                                                                                                                                                         |                |
+| `author_id`        | varchar  | 255        | The author's id of the pull request                                                                                                                                                                                                                                                                                                                                                           |                |
+| `url`              | varchar  | 255        | the web link of the pull request                                                                                                                                                                                                                                                                                                                                                               |                |
+| `type`             | varchar  | 255        | The work-type of a pull request. For example: feature-development, bug-fix, docs, etc.<br/>The value is transformed from Github pull request labels by configuring `GITHUB_PR_TYPE` in `.env` file during installation.                                                                                                                                                                        |                |
+| `component`        | varchar  | 255        | The component this PR affects.<br/>The value is transformed from Github/Gitlab pull request labels by configuring `GITHUB_PR_COMPONENT` in `.env` file during installation.                                                                                                                                                                                                                    |                |
+| `created_date`     | datetime | 3          | The time PR created.                                                                                                                                                                                                                                                                                                                                                                           |                |
+| `merged_date`      | datetime | 3          | The time PR gets merged. Null when the PR is not merged.                                                                                                                                                                                                                                                                                                                                       |                |
+| `closed_date`      | datetime | 3          | The time PR closed. Null when the PR is not closed.                                                                                                                                                                                                                                                                                                                                            |                |
+| `merge_commit_sha` | char     | 40         | the merge commit of this PR. By the definition of [Github](https://docs.github.com/en/repositories/configuring-branches-and-merges-in-your-repository/managing-branches-in-your-repository/changing-the-default-branch), when you click the default Merge pull request option on a pull request on Github, all commits from the feature branch are added to the base branch in a merge commit. |                |
+| `base_commit_sha` | char     | 40         | The base commit of this PR.      |                |
+| `head_commit_sha` | char     | 40         | The head commit of this PR.      |                |
+
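+For example, a rough sketch of a review-efficiency query on this table (MySQL syntax) computes the average time from PR creation to merge, per base repo:
+
+```sql
+-- Average time (in hours) from PR creation to merge, per base repo.
+SELECT base_repo_id,
+       AVG(TIMESTAMPDIFF(HOUR, created_date, merged_date)) AS avg_hours_to_merge
+FROM pull_requests
+WHERE merged_date IS NOT NULL
+GROUP BY base_repo_id;
+```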
+
+#### pull_request_labels
+
+This table shows the labels of pull requests. Multiple entries can exist per pull request. This table can be used to filter pull requests by label name.
+
+| **field**         | **type** | **length** | **description** | **key**             |
+| :---------------- | :------- | :--------- | :-------------- | :------------------ |
+| `name`            | varchar  | 255        | Label name      |                     |
+| `pull_request_id` | varchar  | 255        | Pull request ID | FK_pull_requests.id |
+
+#### pull_request_commits
+
+A commit associated with a pull request
+
+The list is additive. This means if a rebase with commit squashing takes place after the commits of a pull request have been processed, the old commits will not be deleted.
+
+| **field**         | **type** | **length** | **description** | **key**             |
+| :---------------- | :------- | :--------- | :-------------- | :------------------ |
+| `pull_request_id` | varchar  | 255        | Pull request id | FK_pull_requests.id |
+| `commit_sha`      | char     | 40         | Commit sha      | FK_commits.sha      |
+
+#### pull_request_comments
+
+Normal comments, review bodies, reviews' inline comments of GitHub's pull requests or GitLab's merge requests.
+
+| **field**         | **type** | **length** | **description**                                            | **key**             |
+| :---------------- | :------- | :--------- | :--------------------------------------------------------- | :------------------ |
+| `id`              | varchar  | 255        | Comment id                                                 | PK                  |
+| `pull_request_id` | varchar  | 255        | Pull request id                                            | FK_pull_requests.id |
+| `body`            | longtext |            | The body of the comments                                   |                     |
+| `account_id`      | varchar  | 255        | The account who made the comment                           | FK_accounts.id     |
+| `created_date`    | datetime | 3          | Comment creation time                                      |                     |
+| `position`        | int      |            | Deprecated                                                 |                     |
+| `type`            | varchar  | 255        | - For normal comments: NORMAL<br/> - For review comments, i.e. diff/inline comments: DIFF<br/> - For reviews' body (exists in GitHub but not GitLab): REVIEW                                                |                     |
+| `review_id`       | varchar  | 255        | Review_id of the comment if the type is `REVIEW` or `DIFF` |                     |
+| `status`          | varchar  | 255        | Status of the comment                                      |                     |
+
+
+#### pull_request_events (WIP)
+
+Events of pull requests.
+
+| **field**         | **type** | **length** | **description**                                                                                                                                                                                                                                                                                                                                                                                                                                          | **key**             |
+| :---------------- | :------- | :--------- | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :------------------ |
+| `id`              | varchar  | 255        | Event id                                                                                                                                                                                                                                                                                                                                                                                                                                                 | PK                  |
+| `pull_request_id` | varchar  | 255        | Pull request id                                                                                                                                                                                                                                                                                                                                                                                                                                          | FK_pull_requests.id |
+| `action`          | varchar  | 255        | The action to be taken, some values:<ul><li>`opened`: When the pull request has been opened</li><li>`closed`: When the pull request has been closed</li><li>`merged`: When Github detected that the pull request has been merged. No merges outside Github (i.e. Git based) are reported</li><li>`reopened`: When a pull request is opened after being closed</li><li>`synchronize`: When new commits are added/removed to the head repository</li></ul> |                     |
+| `actor_id`        | varchar  | 255        | The account id of the event performer                                                                                                                                                                                                                                                                                                                                                                                                                    | FK_accounts.id      |
+| `created_date`    | datetime | 3          | Event creation time                                                                                                                                                                                                                                                                                                                                                                                                                                      |                     |
+
+<br/>
+
+### Domain 4 - CI/CD (WIP)
+
+#### jobs
+
+A job is the CI/CD definition or schedule, not a specific execution of it.
+
+| **field** | **type** | **length** | **description** | **key** |
+| :-------- | :------- | :--------- | :-------------- | :------ |
+| `id`      | varchar  | 255        | Job id          | PK      |
+| `name`    | varchar  | 255        | Name of job     |         |
+
+#### builds
+
+A build is an execution of a job.
+
+| **field**      | **type** | **length** | **description**                                                  | **key**    |
+| :------------- | :------- | :--------- | :--------------------------------------------------------------- | :--------- |
+| `id`           | varchar  | 255        | Build id                                                         | PK         |
+| `job_id`       | varchar  | 255        | Id of the job this build belongs to                              | FK_jobs.id |
+| `name`         | varchar  | 255        | Name of build                                                    |            |
+| `duration_sec` | bigint   |            | The duration of build in seconds                                 |            |
+| `started_date` | datetime | 3          | Started time of the build                                        |            |
+| `status`       | varchar  | 255        | The result of build. The values may be 'success', 'failed', etc. |            |
+| `commit_sha`   | char     | 40         | The specific commit being built on. Nullable.                    |            |
+
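+As a sketch, a build success rate per job can be computed from these two tables, assuming `status` holds values such as 'success':
+
+```sql
+-- Build success rate per job.
+SELECT j.name AS job_name,
+       SUM(CASE WHEN b.status = 'success' THEN 1 ELSE 0 END) / COUNT(*) AS success_rate
+FROM builds b
+JOIN jobs j ON j.id = b.job_id
+GROUP BY j.name;
+```
+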
+#### cicd_pipelines
+
+A cicd_pipeline is a series of connected builds, or a standalone build.
+
+| **field**          | **type**        | **length** | **description**                                                 | **key** |
+| :----------------- | :-------------- | :--------- | :-------------------------------------------------------------- | :------ |
+| `id`               | varchar         | 255       | This key is generated based on details from the original plugin | PK      |
+| `created_at`       | datetime        | 3         |                                                                 |         |
+| `updated_at`       | datetime        | 3         |                                                                 |         |
+| `name`             | varchar         | 255       |                                                                 |         |
+| `commit_sha`       | varchar         | 255       |                                                                 |         |
+| `branch`           | varchar         | 255       |                                                                 |         |
+| `repo`             | varchar         | 255       |                                                                 |         |
+| `result`           | varchar         | 100       |                                                                 |         |
+| `status`           | varchar         | 100       |                                                                 |         |
+| `type`             | varchar         | 100       | to indicate this is CI or CD                                    |         |
+| `duration_sec`     | bigint unsigned |            |                                                                 |         |
+| `created_date`     | datetime        | 3         |                                                                 |         |
+| `finished_date`    | datetime        | 3         |                                                                 |         |
+
+#### cicd_pipeline_repos
+
+A mapping between a cicd_pipeline and its repo info.
+
+| **field**    | **type** | **length** | **description**                                                 | **key** |
+| :----------- | :------- | :--------- | :-------------------------------------------------------------- | :------ |
+| `commit_sha` | varchar  | 255       |                                                                 | PK      |
+| `branch`     | varchar  | 255       |                                                                 |         |
+| `repo_url`   | varchar  | 255       |                                                                 |         |
+| `id`         | varchar  | 255       | This key is generated based on details from the original plugin | PK      |
+| `created_at` | datetime | 3         |                                                                 |         |
+| `updated_at` | datetime | 3         |                                                                 |         |
+
+
+#### cicd_tasks
+
+A cicd_task is a single task within a CI/CD pipeline.
+
+| **field**          | **type**        | **length** | **description**                                                 | **key** |
+| :----------------- | :-------------- | :--------- | :-------------------------------------------------------------- | :------ |
+| `id`               | varchar         | 255        | This key is generated based on details from the original plugin | PK      |
+| `created_at`       | datetime        | 3          |                                                                 |         |
+| `updated_at`       | datetime        | 3          |                                                                 |         |
+| `name`             | varchar         | 255        |                                                                 |         |
+| `pipeline_id`      | varchar         | 255        |                                                                 |         |
+| `result`           | varchar         | 100        |                                                                 |         |
+| `status`           | varchar         | 100        |                                                                 |         |
+| `type`             | varchar         | 100        | to indicate this is CI or CD                                    |         |
+| `duration_sec`     | bigint unsigned |            |                                                                 |         |
+| `started_date`     | datetime        | 3          |                                                                 |         |
+| `finished_date`    | datetime        | 3          |                                                                 |         |
+
+
+### Cross-Domain Entities
+
+These entities are used to map entities between different domains. They are the key players to break data isolation.
+
+There are low-level entities such as issue_commits and users, and higher-level cross-domain entities such as board_repos.
+
+#### issue_commits
+
+A low-level mapping between the "issue tracking" and "source code management" domains by mapping `issues` and `commits`. Issue(n): Commit(n).
+
+The original connection between these two entities lies in either issue tracking tools like Jira or source code management tools like GitLab; you have to use such a tool to establish the link.
+
+For example, a common method to connect a Jira issue and a GitLab commit is the GitLab plugin [Jira Integration](https://docs.gitlab.com/ee/integration/jira/). With this plugin, the Jira issue key in the commit message written by the committers will be parsed. Then, the plugin will add the commit URLs under this Jira issue. Hence, DevLake's [Jira plugin](https://github.com/merico-dev/lake/tree/main/plugins/jira) can get the related commits (including repo, commit_id, url) of an issue.
+
+| **field**    | **type** | **length** | **description** | **key**        |
+| :----------- | :------- | :--------- | :-------------- | :------------- |
+| `issue_id`   | varchar  | 255        | Issue id        | FK_issues.id   |
+| `commit_sha` | char     | 40         | Commit sha      | FK_commits.sha |
+
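+For instance, a minimal sketch of a query on this table counts the commits linked to each issue:
+
+```sql
+-- Number of linked commits per issue.
+SELECT issue_id,
+       COUNT(commit_sha) AS commit_count
+FROM issue_commits
+GROUP BY issue_id;
+```
+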
+#### pull_request_issues
+
+This table shows the issues closed by pull requests. It's a medium-level mapping between the "issue tracking" and "source code management" domains by mapping issues and pull requests. Issue(n): Pull Request(n).
+
+The data is extracted from the body of pull requests that conforms to a certain regular expression. The regular expression can be defined in `GITHUB_PR_BODY_CLOSE_PATTERN` in the `.env` file.
+
+| **field**             | **type** | **length** | **description**     | **key**             |
+| :-------------------- | :------- | :--------- | :------------------ | :------------------ |
+| `pull_request_id`     | char     | 40         | Pull request id     | FK_pull_requests.id |
+| `issue_id`            | varchar  | 255        | Issue id            | FK_issues.id        |
+| `pull_request_number` | varchar  | 255        | Pull request key    |                     |
+| `issue_number`        | varchar  | 255        | Issue key           |                     |
+
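+As a sketch, the issues closed by merged pull requests of a single repo can be listed by joining this table with `pull_requests` (the repo id below is a hypothetical placeholder):
+
+```sql
+-- Issues closed by merged pull requests of one repo.
+SELECT pr.pull_request_key,
+       pri.issue_number
+FROM pull_request_issues pri
+JOIN pull_requests pr ON pr.id = pri.pull_request_id
+WHERE pr.base_repo_id = 'github:GithubRepo:1:384111310'
+  AND pr.merged_date IS NOT NULL;
+```
+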
+#### board_repos (Deprecated)
+
+A way to link "issue tracking" and "source code management" domain by mapping `boards` and `repos`. Board(n): Repo(n).
+
+| **field**  | **type** | **length** | **description** | **key**      |
+| :--------- | :------- | :--------- | :-------------- | :----------- |
+| `board_id` | varchar  | 255        | Board id        | FK_boards.id |
+| `repo_id`  | varchar  | 255        | Repo id         | FK_repos.id  |
+
+#### accounts
+
+This table stores user accounts across different tools such as GitHub, Jira, GitLab, etc. It can be joined to get the metadata of all accounts for contributor-related metrics, such as _'No. of issues closed by contributor'_ and _'No. of commits by contributor'_.
+
+| **field**      | **type** | **length** | **description**         | **key** |
+| :------------- | :------- | :--------- | :---------------------- | :------ |
+| `id`           | varchar  | 255        | An account's `id` is the identifier of the account of a specific tool. It is composed of "< Plugin >:< Entity >:< PK0 >[:PK1]..."<br/>For example, a Github account's id is composed of "github:GithubAccounts:< GithubUserId >". Eg. 'github:GithubUsers:14050754' | PK      |
+| `email`        | varchar  | 255        | Email of the account                                              |         |
+| `full_name`    | varchar  | 255        | Full name                                                         |         |
+| `user_name`    | varchar  | 255        | Username, nickname or Github login of an account                  |         |
+| `avatar_url`   | varchar  | 255        |                                                                   |         |
+| `organization` | varchar  | 255        | User's organization(s)                                            |         |
+| `created_date` | datetime | 3          | User creation time                                                |         |
+| `status`       | int      |            | 0: default, the user is active. 1: the user is not active         |         |
+
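+For example, a minimal sketch of '_No. of commits by contributor_' joins `commits` with `accounts`:
+
+```sql
+-- Number of commits by contributor.
+SELECT a.user_name,
+       COUNT(c.sha) AS commit_count
+FROM commits c
+JOIN accounts a ON a.id = c.author_id
+GROUP BY a.user_name
+ORDER BY commit_count DESC;
+```
+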
+#### users
+| **field** | **type** | **length** | **description**               | **key** |
+| --------- | -------- | ---------- | ----------------------------- | ------- |
+| `id`      | varchar  | 255        | id of a person                | PK      |
+| `email`   | varchar  | 255        | the primary email of a person |         |
+| `name`    | varchar  | 255        | name of a person              |         |
+
+#### user_accounts
+| **field**    | **type** | **length** | **description** | **key**          |
+| ------------ | -------- | ---------- | --------------- | ---------------- |
+| `user_id`    | varchar  | 255        | users.id        | Composite PK, FK |
+| `account_id` | varchar  | 255        | accounts.id     | Composite PK, FK |
+
+#### teams
+| **field**       | **type** | **length** | **description**                                    | **key** |
+| --------------- | -------- | ---------- | -------------------------------------------------- | ------- |
+| `id`            | varchar  | 255        | id from the data sources, decided by DevLake users | PK      |
+| `name`          | varchar  | 255        | name of the team. Eg. team A, team B, etc.         |         |
+| `alias`         | varchar  | 255        | alias or abbreviation of a team                    |         |
+| `parent_id`     | varchar  | 255        | teams.id, default to null                          | FK      |
+| `sorting_index` | int      | 255        | The field used to sort teams                       |         |
+
+#### team_users
+| **field** | **type** | **length** | **description**                                 | **key**          |
+| --------- | -------- | ---------- | ----------------------------------------------- | ---------------- |
+| `team_id` | varchar  | 255        | teams.id                                        | Composite PK, FK |
+| `user_id` | varchar  | 255        | users.id                                        | Composite PK, FK |
+
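+As a sketch, tool accounts can be rolled up to users and users to teams, e.g. to count commits per team:
+
+```sql
+-- Number of commits per team, rolling tool accounts up to users and users up to teams.
+SELECT t.name AS team_name,
+       COUNT(c.sha) AS commit_count
+FROM commits c
+JOIN user_accounts ua ON ua.account_id = c.author_id
+JOIN team_users tu ON tu.user_id = ua.user_id
+JOIN teams t ON t.id = tu.team_id
+GROUP BY t.name;
+```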
+
+<br/>
+
+## DWM Entities - (Data Warehouse Middle)
+
+DWM entities are light aggregations and operations on top of DWD entities, storing more organized details or mid-level metrics.
+
+
+#### refs_issues_diffs
+
+This table shows the issues fixed by commits added in a new ref compared to an old one. The data is computed from [table.ref_commits_diff](https://merico.feishu.cn/docs/doccnvyuG9YpVc6lvmWkmmbZtUc#yJOyqa), [table.pull_requests](https://merico.feishu.cn/docs/doccnvyuG9YpVc6lvmWkmmbZtUc#Uc849c), [table.pull_request_commits](https://merico.feishu.cn/docs/doccnvyuG9YpVc6lvmWkmmbZtUc#G9cPfj), and [table.pull_request_issues](https://merico.feishu.cn/docs/doccnvyuG9YpVc6lvmWkmmbZtUc#we6Uac).
+
+This table can support tag-based analysis, for instance, '_No. of bugs closed in a tag_'.
+
+| **field**            | **type** | **length** | **description**                                        | **key**      |
+| :------------------- | :------- | :--------- | :----------------------------------------------------- | :----------- |
+| `new_ref_id`         | varchar  | 255        | The new ref's id for comparison                        | FK_refs.id   |
+| `old_ref_id`         | varchar  | 255        | The old ref's id for comparison                        | FK_refs.id   |
+| `new_ref_commit_sha` | char     | 40         | The commit new ref points to at the time of collection |              |
+| `old_ref_commit_sha` | char     | 40         | The commit old ref points to at the time of collection |              |
+| `issue_number`       | varchar  | 255        | Issue number                                           |              |
+| `issue_id`           | varchar  | 255        | Issue id                                               | FK_issues.id |
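+
+For example, '_No. of bugs closed in a tag_' can be sketched as below; the ref ids are hypothetical placeholders, and filtering on `issues.type = 'BUG'` is an assumption about how bug issues are labeled:
+
+```sql
+-- Number of bug issues fixed in the new tag compared to the previous tag.
+SELECT COUNT(d.issue_id) AS bugs_closed
+FROM refs_issues_diffs d
+JOIN issues i ON i.id = d.issue_id
+WHERE d.new_ref_id = 'github:GithubRepo:1:refs/tags/v0.9.0'
+  AND d.old_ref_id = 'github:GithubRepo:1:refs/tags/v0.8.0'
+  AND i.type = 'BUG';
+```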
diff --git a/versioned_docs/version-v0.13/DataModels/_category_.json b/versioned_docs/version-v0.13/DataModels/_category_.json
new file mode 100644
index 00000000..ae28c626
--- /dev/null
+++ b/versioned_docs/version-v0.13/DataModels/_category_.json
@@ -0,0 +1,8 @@
+{
+  "label": "Data Models",
+  "position": 6,
+  "link":{
+    "type": "generated-index",
+    "slug": "DataModels"
+  }
+}
diff --git a/versioned_docs/version-v0.13/DeveloperManuals/DBMigration.md b/versioned_docs/version-v0.13/DeveloperManuals/DBMigration.md
new file mode 100644
index 00000000..53160498
--- /dev/null
+++ b/versioned_docs/version-v0.13/DeveloperManuals/DBMigration.md
@@ -0,0 +1,53 @@
+---
+title: "DB Migration"
+description: >
+  DB Migration
+sidebar_position: 3
+---
+
+## Summary
+Starting in v0.10.0, DevLake provides a lightweight migration tool for executing migration scripts.
+Both the framework itself and plugins define their migration scripts in their own migration folders.
+The migration scripts are written with gorm in Golang to support different SQL dialects.
+
+
+## Migration Script
+A migration script describes how to perform a database migration.
+Scripts implement the `Script` interface.
+When DevLake starts, scripts register themselves with the framework by invoking the `Register` function.
+
+```go
+type Script interface {
+    // this function will contain the business logic of the migration (e.g. DDL logic)
+	Up(ctx context.Context, db *gorm.DB) error
+    // the version number of the migration. typically in date format (YYYYMMDDHHMMSS), e.g. 20220728000001. Migrations are executed sequentially based on this number.
+	Version() uint64
+	// The name of this migration
+	Name() string
+}
+```
+
+## Migration Model
+
+For each migration we define a "snapshot" datamodel of the model that we wish to perform the migration on.
+The fields on this model shall be identical to the actual model, but unlike the actual one, this one will
+never change in the future. The naming convention of these models is `<ModelName>YYYYMMDD`; they must implement
+the `func TableName() string` method, and they are consumed by the `Script::Up` method.
+
+## Table `migration_history`
+
+This table tracks the execution of migration scripts and schema changes,
+from which DevLake can figure out the current state of the database schemas.
+
+## Execution
+
+Each plugin has a `migrationscripts` subpackage that lists all the migrations to be executed for that plugin. You
+will need to add your migration to that list for the framework to pick it up. Similarly, there is such a package
+for the framework-only migrations defined under the `models` package.
+
+
+## How It Works
+1. Check the `migration_history` table and determine which migration scripts need to be executed.
+2. Sort scripts by Version in ascending order.
+3. Execute scripts.
+4. Save results in the `migration_history` table.
diff --git a/versioned_docs/version-v0.13/DeveloperManuals/Dal.md b/versioned_docs/version-v0.13/DeveloperManuals/Dal.md
new file mode 100644
index 00000000..9b085425
--- /dev/null
+++ b/versioned_docs/version-v0.13/DeveloperManuals/Dal.md
@@ -0,0 +1,173 @@
+---
+title: "Dal"
+sidebar_position: 5
+description: >
+  The Dal (Data Access Layer) is designed to decouple the hard dependency on `gorm` in v0.12
+---
+
+## Summary
+
+The Dal (Data Access Layer) is designed to decouple the hard dependency on `gorm` in v0.12.  The advantages of introducing this isolation are:
+
+ - Unit Test: Mocking an Interface is easier and more reliable than Patching a Pointer.
+ - Clean Code: DB operations are more consistent than using `gorm` directly.
+ - Replaceable: It would be easier to replace `gorm` in the future if needed.
+
+## The Dal Interface
+
+```go
+type Dal interface {
+	AutoMigrate(entity interface{}, clauses ...Clause) error
+	Exec(query string, params ...interface{}) error
+	RawCursor(query string, params ...interface{}) (*sql.Rows, error)
+	Cursor(clauses ...Clause) (*sql.Rows, error)
+	Fetch(cursor *sql.Rows, dst interface{}) error
+	All(dst interface{}, clauses ...Clause) error
+	First(dst interface{}, clauses ...Clause) error
+	Count(clauses ...Clause) (int64, error)
+	Pluck(column string, dest interface{}, clauses ...Clause) error
+	Create(entity interface{}, clauses ...Clause) error
+	Update(entity interface{}, clauses ...Clause) error
+	CreateOrUpdate(entity interface{}, clauses ...Clause) error
+	CreateIfNotExist(entity interface{}, clauses ...Clause) error
+	Delete(entity interface{}, clauses ...Clause) error
+	AllTables() ([]string, error)
+}
+```
+
+
+## How to use
+
+### Query
+```go
+// Get a database cursor
+user := &models.User{}
+cursor, err := db.Cursor(
+  dal.From(user),
+  dal.Where("department = ?", "R&D"),
+  dal.Orderby("id DESC"),
+)
+if err != nil {
+  return err
+}
+for cursor.Next() {
+  err = dal.Fetch(cursor, user)  // fetch one record at a time
+  ...
+}
+
+// Get a database cursor by raw sql query
+cursor, err := db.RawCursor("SELECT * FROM users")
+
+// USE WITH CAUTION: loading a big table at once is slow and dangerous
+// Load all records from database at once. 
+users := make([]models.Users, 0)
+err := db.All(&users, dal.Where("department = ?", "R&D"))
+
+// Load a column as Scalar or Slice
+var email string
+err := db.Pluck("email", &username, dal.Where("id = ?", 1))
+var emails []string
+err := db.Pluck("email", &emails)
+
+// Execute query
+err := db.Exec("UPDATE users SET department = ? WHERE department = ?", "Research & Development", "R&D")
+```
+
+### Insert
+```go
+err := db.Create(&models.User{
+  Email: "hello@example.com", // assumming this the Primarykey
+  Name: "hello",
+  Department: "R&D",
+})
+```
+
+### Update
+```go
+err := db.Create(&models.User{
+  Email: "hello@example.com", // assumming this the Primarykey
+  Name: "hello",
+  Department: "R&D",
+})
+```
+### Insert or Update
+```go
+err := db.CreateOrUpdate(&models.User{
+  Email: "hello@example.com",  // assuming this is the Primarykey
+  Name: "hello",
+  Department: "R&D",
+})
+```
+
+### Insert if record(by PrimaryKey) didn't exist
+```go
+err := db.CreateIfNotExist(&models.User{
+  Email: "hello@example.com",  // assuming this is the Primarykey
+  Name: "hello",
+  Department: "R&D",
+})
+```
+
+### Delete
+```go
+err := db.CreateIfNotExist(&models.User{
+  Email: "hello@example.com",  // assuming this is the Primary key
+})
+```
+
+### DDL and others
+```go
+// Returns all table names
+allTables, err := db.AllTables()
+
+// Automigrate: create/add missing table/columns
+// Note: it won't delete any existing columns, nor does it update the column definition
+err := db.AutoMigrate(&models.User{})
+```
+
+## How to do Unit Test
+First, run the command `make mock` to generate the Mocking Stubs, the generated source files should appear in `mocks` folder. 
+```
+mocks
+├── ApiResourceHandler.go
+├── AsyncResponseHandler.go
+├── BasicRes.go
+├── CloseablePluginTask.go
+├── ConfigGetter.go
+├── Dal.go
+├── DataConvertHandler.go
+├── ExecContext.go
+├── InjectConfigGetter.go
+├── InjectLogger.go
+├── Iterator.go
+├── Logger.go
+├── Migratable.go
+├── PluginApi.go
+├── PluginBlueprintV100.go
+├── PluginInit.go
+├── PluginMeta.go
+├── PluginTask.go
+├── RateLimitedApiClient.go
+├── SubTaskContext.go
+├── SubTaskEntryPoint.go
+├── SubTask.go
+└── TaskContext.go
+```
+With these Mocking stubs, you may start writing your TestCases using the `mocks.Dal`.
+```go
+import "github.com/apache/incubator-devlake/mocks"
+
+func TestCreateUser(t *testing.T) {
+    mockDal := new(mocks.Dal)
+    mockDal.On("Create", mock.Anything, mock.Anything).Return(nil).Once()
+    userService := &services.UserService{
+        Dal: mockDal,
+    }
+    userService.Post(map[string]interface{}{
+        "email": "helle@example.com",
+        "name": "hello",
+        "department": "R&D",
+    })
+    mockDal.AssertExpectations(t)
+}
+```
+
diff --git a/versioned_docs/version-v0.13/DeveloperManuals/DeveloperSetup.md b/versioned_docs/version-v0.13/DeveloperManuals/DeveloperSetup.md
new file mode 100644
index 00000000..ef7ffa2a
--- /dev/null
+++ b/versioned_docs/version-v0.13/DeveloperManuals/DeveloperSetup.md
@@ -0,0 +1,131 @@
+---
+title: "Developer Setup"
+description: >
+  The steps to install DevLake in developer mode.
+sidebar_position: 1
+---
+
+
+## Requirements
+
+- <a href="https://docs.docker.com/get-docker" target="_blank">Docker v19.03.10+</a>
+- <a href="https://golang.org/doc/install" target="_blank">Golang v1.19+</a>
+- Make
+  - Mac (Already installed)
+  - Windows: [Download](http://gnuwin32.sourceforge.net/packages/make.htm)
+  - Ubuntu: `sudo apt-get install build-essential libssl-dev`
+
+## How to setup dev environment
+
+The following guide will walk you through the procedure to run local config-ui and devlake servers against dockerized
+MySQL and Grafana containers.
+
+1. Navigate to where you would like to install this project and clone the repository:
+
+   ```sh
+   git clone https://github.com/apache/incubator-devlake
+   cd incubator-devlake
+   ```
+
+2. Install dependencies for plugins:
+
+   - [RefDiff](../Plugins/refdiff.md#development)
+
+3. Install Go packages
+
+    ```sh
+	go get
+    ```
+
+4. Copy the sample config file to new local file:
+
+    ```sh
+    cp .env.example .env
+    ```
+
+5. Update the following variables in the file `.env`:
+
+    * `DB_URL`: Replace `mysql:3306` with `127.0.0.1:3306`
+
+6. Start the MySQL and Grafana containers:
+
+    > Make sure the Docker daemon is running before this step.
+
+    ```sh
+    docker-compose up -d mysql grafana
+    ```
+
+7. Run lake and config UI in dev mode in two separate terminals:
+
+    ```sh
+    # install mockery
+    go install github.com/vektra/mockery/v2@latest
+    # generate mocking stubs
+    make mock
+    # run lake
+    make dev
+    # run config UI
+    make configure-dev
+    ```
+
+    Q: I got an error saying: `libgit2.so.1.3: cannot open shared object file: No such file or directory`
+
+    A: This library is needed by the git-extractor plugin. Make sure your program can find `libgit2.so.1.3`. `LD_LIBRARY_PATH` can be assigned like this if your `libgit2.so.1.3` is located at `/usr/local/lib`:
+
+    ```sh
+    export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib
+    ```
+   
+    Note that the version has to be pinned to 1.3.0. If you don't have it, you may need to build it manually with CMake from [source](https://github.com/libgit2/libgit2/releases/tag/v1.3.0).
+
+8. Visit config UI at `localhost:4000` to configure data connections.
+    - Please follow the [tutorial](UserManuals/ConfigUI/Tutorial.md)
+    - Submit the form to update the values by clicking on the **Save Connection** button on each form page
+
+9. Visit `localhost:4000/pipelines/create` to RUN a Pipeline and trigger data collection.
+
+
+   Pipeline runs can be initiated from the "Create Run" interface. Simply enable the **Data Connection Providers** you wish to run collection for, and specify the data you want to collect, for instance, **Project ID** for Gitlab and **Repository Name** for GitHub.
+
+   Once a valid pipeline configuration has been created, press **Create Run** to start/run the pipeline.
+   After the pipeline starts, you will be automatically redirected to the **Pipeline Activity** screen to monitor collection activity.
+
+   **Pipelines** is accessible from the main menu of the config-ui for easy access.
+
+   - Manage All Pipelines: `http://localhost:4000/pipelines`
+   - Create Pipeline RUN: `http://localhost:4000/pipelines/create`
+   - Track Pipeline Activity: `http://localhost:4000/pipelines/activity/[RUN_ID]`
+
+   For advanced use cases and complex pipelines, please use the Raw JSON API to manually initiate a run using **cURL** or graphical API tool such as **Postman**. `POST` the following request to the DevLake API Endpoint.
+
+    ```json
+    [
+        [
+            {
+                "plugin": "github",
+                "options": {
+                    "repo": "lake",
+                    "owner": "merico-dev"
+                }
+            }
+        ]
+    ]
+    ```
+
+   Please refer to [Pipeline Advanced Mode](../UserManuals/ConfigUI/AdvancedMode.md) for in-depth explanation.
+
+
+10. Click the *View Dashboards* button in the top left when done, or visit `localhost:3002` (username: `admin`, password: `admin`).
+
+   We use <a href="https://grafana.com/" target="_blank">Grafana</a> as a visualization tool to build charts for the <a href="https://github.com/merico-dev/lake/wiki/DataModel.Domain-layer-schema">data stored in our database</a>. Using SQL queries, we can add panels to build, save, and edit customized dashboards.
+
+   All the details on provisioning and customizing a dashboard can be found in the [Grafana Doc](../UserManuals/Dashboards/GrafanaUserGuide.md).
+
+11. (Optional) To run the tests:
+
+    ```sh
+    make test
+    ```
+
+12. For DB migrations, please refer to [Migration Doc](../DeveloperManuals/DBMigration.md).
+
diff --git a/versioned_docs/version-v0.13/DeveloperManuals/E2E-Test-Guide.md b/versioned_docs/version-v0.13/DeveloperManuals/E2E-Test-Guide.md
new file mode 100644
index 00000000..9e28fef1
--- /dev/null
+++ b/versioned_docs/version-v0.13/DeveloperManuals/E2E-Test-Guide.md
@@ -0,0 +1,212 @@
+---
+title: "E2E Test Guide"
+description: >
+  The steps to write E2E tests for plugins.
+---
+
+# How to write E2E tests for plugins
+
+## Why write E2E tests
+
+E2E testing, as a part of automated testing, generally refers to black-box testing at the file and module level or unit testing that allows the use of some external services such as databases. The purpose of writing E2E tests is to shield some internal implementation logic and see whether the same external input can output the same result in terms of data aspects. In addition, compared to the black-box integration tests, it can avoid some chance problems caused by network and other factors.
+In DevLake, E2E testing consists of interface testing and input/output result validation for the plugin Extract/Convert subtask. This article only describes the process of writing the latter. As the Collectors invoke external
+services we typically do not write E2E tests for them.
+
+## Preparing data
+
+Let's take a simple plugin - Feishu Meeting Hours Collection as an example here. Its directory structure looks like this.
+![image](https://user-images.githubusercontent.com/3294100/175061114-53404aac-16ca-45d1-a0ab-3f61d84922ca.png)
+Next, we will write the E2E tests of the sub-tasks.
+
+The first step in writing the E2E test is to run the Collect task of the corresponding plugin to complete the data collection; that is, to have the corresponding data saved in the table starting with `_raw_feishu_` in the database.
+This data will be presumed to be the "source of truth" for our tests. Here are the logs and database tables using the DirectRun (cmd) run method.
+```
+$ go run plugins/feishu/main.go --numOfDaysToCollect 2 --connectionId 1 (Note: command may change with version upgrade)
+[2022-06-22 23:03:29] INFO failed to create dir logs: mkdir logs: file exists
+press `c` to send cancel signal
+[2022-06-22 23:03:29]  INFO  [feishu] start plugin
+[2022-06-22 23:03:33]  INFO  [feishu] scheduler for api https://open.feishu.cn/open-apis/vc/v1 worker: 13, request: 10000, duration: 1h0m0s
+[2022-06-22 23:03:33]  INFO  [feishu] total step: 2
+[2022-06-22 23:03:33]  INFO  [feishu] executing subtask collectMeetingTopUserItem
+[2022-06-22 23:03:33]  INFO  [feishu] [collectMeetingTopUserItem] start api collection
+[2022-06-22 23:03:34]  INFO  [feishu] [collectMeetingTopUserItem] finished records: 1
+[2022-06-22 23:03:34]  INFO  [feishu] [collectMeetingTopUserItem] end api collection error: %!w(<nil>)
+[2022-06-22 23:03:34]  INFO  [feishu] finished step: 1 / 2
+[2022-06-22 23:03:34]  INFO  [feishu] executing subtask extractMeetingTopUserItem
+[2022-06-22 23:03:34]  INFO  [feishu] [extractMeetingTopUserItem] get data from _raw_feishu_meeting_top_user_item where params={"connectionId":1} and got 148
+[2022-06-22 23:03:34]  INFO  [feishu] [extractMeetingTopUserItem] finished records: 1
+[2022-06-22 23:03:34]  INFO  [feishu] finished step: 2 / 2
+```
+
+<img width="993" alt="image" src="https://user-images.githubusercontent.com/3294100/175064505-bc2f98d6-3f2e-4ccf-be68-a1cab1e46401.png"/>
+Ok, the data has now been saved to the `_raw_feishu_*` table, and the `data` column is the return information from the plugin. Here we only collected data for the last 2 days. There is not much data, but it covers a variety of situations: the same person has data on different days.
+
+It is also worth mentioning that the plugin runs two tasks, `collectMeetingTopUserItem` and `extractMeetingTopUserItem`. The former is the collection task, which is the one needed for this run; the latter is the extraction task. It doesn't matter whether the extraction task runs while preparing the data.
+
+Next, we need to export the data to .csv format. This step can be done in a variety of different ways - you can show your skills. I will only introduce a few common methods here.
+
+### DevLake Code Generator Export
+
+Run `go run generator/main.go create-e2e-raw` directly and follow the guidelines to complete the export. This solution is the simplest, but has some limitations, such as the exported fields being fixed. You can refer to the next solutions if you need more customisation options.
+
+![usage](https://user-images.githubusercontent.com/3294100/175849225-12af5251-6181-4cd9-ba72-26087b05ee73.gif)
+
+### GoLand Database export
+
+![image](https://user-images.githubusercontent.com/3294100/175067303-7e5e1c4d-2430-4eb5-ad00-e38d86bbd108.png)
+
+This solution is very easy to use and will not cause problems using Postgres or MySQL.
+![image](https://user-images.githubusercontent.com/3294100/175068178-f1c1c290-e043-4672-b43e-54c4b954c685.png)
+The success criterion for csv export is that the Go program can read it without errors, so several points are worth noticing.
+
+1. The values in the csv file should be wrapped in double quotes to avoid special symbols, such as commas in the values, breaking the csv format.
+2. Double quotes in csv files are escaped; generally `""` represents a double quote.
+3. Pay attention to whether the column `data` contains the actual value, not a base64- or hex-encoded one.
+
+After exporting, move the .csv file to `plugins/feishu/e2e/raw_tables/_raw_feishu_meeting_top_user_item.csv`.
+
+### MySQL Select Into Outfile
+
+This is MySQL's solution for exporting query results to a file. The MySQL instance currently started in docker-compose.yml comes with the `secure_file_priv` security setting, so it does not allow `select ... into outfile` by default. The first step is to turn off this setting, which is done roughly as follows.
+![origin_img_v2_c809c901-01bc-4ec9-b52a-ab4df24c376g](https://user-images.githubusercontent.com/3294100/175070770-9b7d5b75-574b-49ed-9bca-e9f611f60795.jpg)
+After closing it, use `select ... into outfile` to export the csv file. The export result is roughly as follows.
+![origin_img_v2_ccfdb260-668f-42b4-b249-6c2dd45816ag](https://user-images.githubusercontent.com/3294100/175070866-2204ae13-c058-4a16-bc20-93ab7c95f832.jpg)
+Notice that the `data` field is exported as hex values, which need to be manually converted to literal text.
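+
+For reference, the export statement can look roughly like the sketch below; the output path is a placeholder, `data` is cast to char to avoid the hex issue mentioned above, and the csv header row still has to be added manually:
+
+```sql
+-- A rough sketch of exporting the raw table to csv with MySQL.
+SELECT id, params, CAST(`data` AS char) AS data, url, input, created_at
+FROM _raw_feishu_meeting_top_user_item
+INTO OUTFILE '/var/lib/mysql-files/_raw_feishu_meeting_top_user_item.csv'
+FIELDS TERMINATED BY ',' ENCLOSED BY '"' ESCAPED BY '"'
+LINES TERMINATED BY '\n';
+```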
+
+### Vscode Database
+
+This is Vscode's solution for exporting query results to a file, but it is not easy to use. Here is the export result without any configuration changes
+![origin_img_v2_c9eaadaa-afbc-4c06-85bc-e78235f7eb3g](https://user-images.githubusercontent.com/3294100/175071987-760c2537-240c-4314-bbd6-1a0cd85ddc0f.jpg)
+However, it is obvious that the escape symbol does not conform to the csv specification, and the data is not successfully exported. After adjusting the configuration and manually replacing `\"` with `""`, we get the following result.
+![image](https://user-images.githubusercontent.com/3294100/175072314-954c6794-3ebd-45bb-98e7-60ddbb5a7da9.png)
+The data field of this file is encoded in base64, so it needs to be decoded manually to literal text before using it.
+
+### MySQL Workbench
+
+With this tool you must write the SQL yourself to export the data; you can adapt the following SQL.
+```sql
+SELECT id, params, CAST(`data` as char) as data, url, input,created_at FROM _raw_feishu_meeting_top_user_item;
+```
+![image](https://user-images.githubusercontent.com/3294100/175080866-1631a601-cbe6-40c0-9d3a-d23ca3322a50.png)
+Select csv as the save format and export it for use.
+
+### Postgres Copy with csv header
+
+`Copy(SQL statement) to '/var/lib/postgresql/data/raw.csv' with csv header;` is a common export method for PG to export csv, which can also be used here.
+```sql
+COPY (
+SELECT id, params, convert_from(data, 'utf-8') as data, url, input,created_at FROM _raw_feishu_meeting_top_user_item
+) to '/var/lib/postgresql/data/raw.csv' with csv header;
+```
+Use the above statement to complete the export of the file. If pg runs in docker, just use the command `docker cp` to export the file to the host.
+
+## Writing E2E tests
+
+First, create a test environment. For example, let's create `meeting_test.go`.
+![image](https://user-images.githubusercontent.com/3294100/175091380-424974b9-15f3-457b-af5c-03d3b5d17e73.png)
+Then enter the test preparation code in it as follows. The code creates an instance of the `feishu` plugin and then calls `ImportCsvIntoRawTable` to import the data from the csv file into the `_raw_feishu_meeting_top_user_item` table.
+
+```go
+func TestMeetingDataFlow(t *testing.T) {
+	var plugin impl.Feishu
+	dataflowTester := e2ehelper.NewDataFlowTester(t, "feishu", plugin)
+
+	// import raw data table
+	dataflowTester.ImportCsvIntoRawTable("./raw_tables/_raw_feishu_meeting_top_user_item.csv", "_raw_feishu_meeting_top_user_item")
+}
+```
+The signature of the import function is as follows.
+```func (t *DataFlowTester) ImportCsvIntoRawTable(csvRelPath string, rawTableName string)```
+It has a twin, with only slight differences in parameters.
+```func (t *DataFlowTester) ImportCsvIntoTabler(csvRelPath string, dst schema.Tabler)```
+The former is used to import tables in the raw layer. The latter is used to import arbitrary tables.
+**Note:** These two functions will drop the db table and use `gorm.AutoMigrate` to re-create it, so any existing data in it is cleared.
+Once the data import is in place, run this tester; it should PASS even though there is no test logic yet. Then write the logic that calls the extractor task in `TestMeetingDataFlow`.
+
+```go
+func TestMeetingDataFlow(t *testing.T) {
+	var plugin impl.Feishu
+	dataflowTester := e2ehelper.NewDataFlowTester(t, "feishu", plugin)
+
+	taskData := &tasks.FeishuTaskData{
+		Options: &tasks.FeishuOptions{
+			ConnectionId: 1,
+		},
+	}
+
+	// import raw data table
+	dataflowTester.ImportCsvIntoRawTable("./raw_tables/_raw_feishu_meeting_top_user_item.csv", "_raw_feishu_meeting_top_user_item")
+
+	// verify extraction
+	dataflowTester.FlushTabler(&models.FeishuMeetingTopUserItem{})
+	dataflowTester.Subtask(tasks.ExtractMeetingTopUserItemMeta, taskData)
+
+}
+```
+The added code includes a call to `dataflowTester.FlushTabler` to clear the table `_tool_feishu_meeting_top_user_items` and a call to `dataflowTester.Subtask` to simulate the running of the subtask `ExtractMeetingTopUserItemMeta`.
+
+Now run it and see if the subtask `ExtractMeetingTopUserItemMeta` completes without errors. The data results of the `extract` run generally come from the raw table, so the plugin subtask will run correctly if written without errors. We can observe if the data is successfully parsed in the db table in the tool layer. In this case the `_tool_feishu_meeting_top_user_items` table has the correct data.
+
+If the run fails, troubleshoot the plugin itself before moving on to the next step.
+
+## Verify that the results of the task are correct
+
+Let's continue writing the test and add the following code at the end of the test function
+```go
+func TestMeetingDataFlow(t *testing.T) {
+    ......
+    
+    dataflowTester.VerifyTable(
+      models.FeishuMeetingTopUserItem{},
+      "./snapshot_tables/_tool_feishu_meeting_top_user_items.csv",
+      []string{
+        "meeting_count",
+        "meeting_duration",
+        "user_type",
+        "_raw_data_params",
+        "_raw_data_table",
+        "_raw_data_id",
+        "_raw_data_remark",
+      },
+    )
+}
+```
+Its purpose is to call `dataflowTester.VerifyTable` to complete the validation of the data results. The third parameter is all the fields of the table that need to be verified. 
+The data used for validation lives in `./snapshot_tables/_tool_feishu_meeting_top_user_items.csv`, but of course, this file does not exist yet.
+
+There is a twin, more generalized function, that could be used instead:
+```go
+dataflowTester.VerifyTableWithOptions(
+    models.FeishuMeetingTopUserItem{},
+    e2ehelper.TableOptions{
+        CSVRelPath: "./snapshot_tables/_tool_feishu_meeting_top_user_items.csv",
+    },
+)
+
+```
+The above usage defaults to validating against all fields of the `models.FeishuMeetingTopUserItem` model. Additional fields on `TableOptions` can be specified
+to limit which fields of the model are validated.
+
+To facilitate the generation of the file mentioned above, DevLake has adopted a testing technique called `Snapshot`, which will automatically generate the file based on the run results when the `VerifyTable` or `VerifyTableWithOptions` functions are called without the csv existing.
+
+But note! Please do two things after the snapshot is created: 1. check that the file was generated correctly; 2. re-run the test to make sure the generated results match the results of the re-run.
+These two operations are critical and directly related to the quality of the tests. We should treat the `.csv` snapshot files like code files.
+
+If there is a problem with this step, it is usually due to one of two causes.
+1. The validated fields include values such as `created_at` timestamps or auto-incrementing ids, which change between runs, cannot be validated repeatedly, and should be excluded.
+2. The run results contain fields with `\n`, `\r\n` or other mismatched escape sequences. This usually comes from parsing the `httpResponse`; you can apply the following fix:
+    1. change the field type of the content in the API model to `json.RawMessage`;
+    2. convert it to a string when extracting;
+    3. this way the `\n` sequence is kept intact, and neither the database nor the operating system interprets it as a line break (see the sketch after the github example below).
+
+
+For example, in the `github` plugin, this is how it is handled.
+![image](https://user-images.githubusercontent.com/3294100/175098219-c04b810a-deaf-4958-9295-d5ad4ec152e6.png)
+![image](https://user-images.githubusercontent.com/3294100/175098273-e4a18f9a-51c8-4637-a80c-3901a3c2934e.png)
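+
+For readers who cannot view the screenshots, here is a rough sketch of that approach; the struct and field names are only illustrative, not the github plugin's actual code:
+
+```go
+package models
+
+import "encoding/json"
+
+// Keep the body as raw JSON bytes in the API model, so escape sequences such
+// as "\n" are not interpreted while decoding the HTTP response.
+type ApiIssue struct {
+	Body json.RawMessage `json:"body"`
+}
+
+// When building the tool-layer model, convert the raw bytes to a string; the
+// literal "\n" characters are then stored verbatim in the database.
+func bodyAsString(issue ApiIssue) string {
+	return string(issue.Body)
+}
+```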
+
+Well, at this point, the E2E test writing is done. We have added a total of 3 new files to cover the meeting length collection task. It's pretty easy.
+![image](https://user-images.githubusercontent.com/3294100/175098574-ae6c7fb7-7123-4d80-aa85-790b492290ca.png)
+
+## Run E2E tests for all plugins like CI
+
+It's straightforward. Just run `make e2e-plugins`; DevLake has already packaged it into a make target.
+
diff --git a/versioned_docs/version-v0.13/DeveloperManuals/Notifications.md b/versioned_docs/version-v0.13/DeveloperManuals/Notifications.md
new file mode 100644
index 00000000..23456b4f
--- /dev/null
+++ b/versioned_docs/version-v0.13/DeveloperManuals/Notifications.md
@@ -0,0 +1,32 @@
+---
+title: "Notifications"
+description: >
+  Notifications
+sidebar_position: 4
+---
+
+## Request
+Example request
+```
+POST /lake/notify?nouce=3-FDXxIootApWxEVtz&sign=424c2f6159bd9e9828924a53f9911059433dc14328a031e91f9802f062b495d5
+
+{"TaskID":39,"PluginName":"jenkins","CreatedAt":"2021-09-30T15:28:00.389+08:00","UpdatedAt":"2021-09-30T15:28:00.785+08:00"}
+```
+
+## Configuration
+If you want to use the notification feature, you should add two configuration keys to the `.env` file.
+```shell
+# .env
+# notification request url, e.g.: http://example.com/lake/notify
+NOTIFICATION_ENDPOINT=
+# secret is used to calculate signature
+NOTIFICATION_SECRET=
+```
+
+## Signature
+You should check the signature before accepting the notification request. We use the sha256 algorithm to calculate the checksum.
+```go
+// calculate checksum
+sum := sha256.Sum256([]byte(requestBody + NOTIFICATION_SECRET + nouce))
+return hex.EncodeToString(sum[:])
+```
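+
+For illustration, below is a minimal sketch of a receiving service that validates the request before accepting it. Only the checksum formula comes from the snippet above; the endpoint path, port and error handling are assumptions to adapt to your own service.
+```go
+package main
+
+import (
+	"crypto/sha256"
+	"encoding/hex"
+	"io"
+	"net/http"
+	"os"
+)
+
+// notifyHandler recomputes the checksum and rejects requests whose `sign`
+// query parameter does not match.
+func notifyHandler(w http.ResponseWriter, r *http.Request) {
+	secret := os.Getenv("NOTIFICATION_SECRET")
+	nouce := r.URL.Query().Get("nouce")
+	sign := r.URL.Query().Get("sign")
+
+	body, err := io.ReadAll(r.Body)
+	if err != nil {
+		http.Error(w, "cannot read body", http.StatusBadRequest)
+		return
+	}
+
+	// same formula as above: sha256(requestBody + NOTIFICATION_SECRET + nouce)
+	sum := sha256.Sum256([]byte(string(body) + secret + nouce))
+	if hex.EncodeToString(sum[:]) != sign {
+		http.Error(w, "invalid signature", http.StatusUnauthorized)
+		return
+	}
+	w.WriteHeader(http.StatusOK)
+}
+
+func main() {
+	http.HandleFunc("/lake/notify", notifyHandler)
+	http.ListenAndServe(":8080", nil)
+}
+```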
diff --git a/versioned_docs/version-v0.13/DeveloperManuals/PluginImplementation.md b/versioned_docs/version-v0.13/DeveloperManuals/PluginImplementation.md
new file mode 100644
index 00000000..bcc6599f
--- /dev/null
+++ b/versioned_docs/version-v0.13/DeveloperManuals/PluginImplementation.md
@@ -0,0 +1,339 @@
+---
+title: "Plugin Implementation"
+sidebar_position: 2
+description: >
+  Plugin Implementation
+---
+
+If your favorite DevOps tool is not yet supported by DevLake, don't worry. It's not difficult to implement a DevLake plugin. In this post, we'll go through the basics of DevLake plugins and build an example plugin from scratch together.
+
+## What is a plugin?
+
+A DevLake plugin is a shared library built with Go's `plugin` package that hooks up to DevLake core at run-time.
+
+A plugin may extend DevLake's capability in three ways:
+
+1. Integrating with new data sources
+2. Transforming/enriching existing data
+3. Exporting DevLake data to other data systems
+
+
+## How do plugins work?
+
+A plugin mainly consists of a collection of subtasks that can be executed by DevLake core. For data source plugins, a subtask may be collecting a single entity from the data source (e.g., issues from Jira). Besides the subtasks, there are hooks that a plugin can implement to customize its initialization, migration, and more. See below for a list of the most important interfaces:
+
+1. [PluginMeta](https://github.com/apache/incubator-devlake/blob/main/plugins/core/plugin_meta.go) contains the minimal interface that a plugin should implement, with only two functions 
+   - Description() returns the description of a plugin
+   - RootPkgPath() returns the root package path of a plugin
+2. [PluginInit](https://github.com/apache/incubator-devlake/blob/main/plugins/core/plugin_init.go) allows a plugin to customize its initialization
+3. [PluginTask](https://github.com/apache/incubator-devlake/blob/main/plugins/core/plugin_task.go) enables a plugin to prepare data prior to subtask execution
+4. [PluginApi](https://github.com/apache/incubator-devlake/blob/main/plugins/core/plugin_api.go) lets a plugin expose its own self-defined APIs
+5. [Migratable](https://github.com/apache/incubator-devlake/blob/main/plugins/core/plugin_db_migration.go) is where a plugin manages its database migrations 
+6. [PluginModel](https://github.com/apache/incubator-devlake/blob/main/plugins/core/plugin_model.go) allows other plugins to get the model information of all database tables of the current plugin through the GetTablesInfo() method. If you need to access Domain Layer Models, please visit [DomainLayerSchema](https://devlake.apache.org/docs/DataModels/DevLakeDomainLayerSchema/)
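+
+To make this more concrete, below is a rough sketch of a type satisfying `PluginMeta`, the smallest of these interfaces. The method signatures are inferred from the descriptions above, so double-check them against the `plugins/core` package; the real `plugin_main.go` generated in the guide below also wires the plugin into the runner.
+
+```go
+package main
+
+import "fmt"
+
+// Icla is a minimal plugin type; implementing PluginMeta's two methods is
+// enough for DevLake core to recognize it as a plugin.
+type Icla struct{}
+
+// Description returns the description of the plugin.
+func (plugin Icla) Description() string {
+	return "collect ICLA data from people.apache.org"
+}
+
+// RootPkgPath returns the root package path of the plugin.
+func (plugin Icla) RootPkgPath() string {
+	return "github.com/apache/incubator-devlake/plugins/icla"
+}
+
+func main() {
+	// stand-alone sanity check; the generated plugin_main.go calls the
+	// runner here instead
+	fmt.Println(Icla{}.Description())
+}
+```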
+
+The diagram below shows the control flow of executing a plugin:
+
+```mermaid
+flowchart TD;
+    subgraph S4[Step4 sub-task extractor running process];
+    direction LR;
+    D4[DevLake];
+    D4 -- "Step4.1 create a new\n ApiExtractor\n and execute it" --> E["ExtractXXXMeta.\nEntryPoint"];
+    E <-- "Step4.2 read from\n raw table" --> E2["RawDataSubTaskArgs\n.Table"];
+    E -- "Step4.3 call with RawData" --> ApiExtractor.Extract;
+    ApiExtractor.Extract -- "decode and return gorm models" --> E
+    end
+    subgraph S3[Step3 sub-task collector running process]
+    direction LR
+    D3[DevLake]
+    D3 -- "Step3.1 create a new\n ApiCollector\n and execute it" --> C["CollectXXXMeta.\nEntryPoint"];
+    C <-- "Step3.2 create\n raw table" --> C2["RawDataSubTaskArgs\n.RAW_BBB_TABLE"];
+    C <-- "Step3.3 build query\n before sending requests" --> ApiCollectorArgs.\nQuery/UrlTemplate;
+    C <-. "Step3.4 send requests by ApiClient \n and return HTTP response" .-> A1["HTTP APIs"];
+    C <-- "Step3.5 call and \nreturn decoded data \nfrom HTTP response" --> ResponseParser;
+    end
+    subgraph S2[Step2 DevLake register custom plugin]
+    direction LR
+    D2[DevLake]
+    D2 <-- "Step2.1 function \`Init\` \nneed to do init jobs" --> plugin.Init;
+    D2 <-- "Step2.2 (Optional) call \nand return migration scripts" --> plugin.MigrationScripts;
+    D2 <-- "Step2.3 (Optional) call \nand return taskCtx" --> plugin.PrepareTaskData;
+    D2 <-- "Step2.4 call and \nreturn subTasks for executing" --> plugin.SubTaskContext;
+    end
+    subgraph S1[Step1 Run DevLake]
+    direction LR
+    main -- "Transfer of control \nby \`runner.DirectRun\`" --> D1[DevLake];
+    end
+    S1-->S2-->S3-->S4
+```
+There's a lot of information in the diagram, but we don't expect you to digest it right away; simply use it as a reference when you go through the example below.
+
+## A step-by-step guide towards your first plugin
+
+In this section, we will describe how to create a data collection plugin from scratch. The data to be collected is the information about all Committers and Contributors of the Apache project, in order to check whether they have signed the CLA. We are going to
+
+* request `https://people.apache.org/public/icla-info.json` to get the Committers' information
+* request the `mailing list` to get the Contributors' information
+
+We will focus on demonstrating how to request and cache information about all Committers through the Apache API and extract structured data from it. The collection of Contributors will only be briefly described.
+
+### Step 1: Bootstrap the new plugin
+
+**Note:** Please make sure you have DevLake up and running before proceeding.
+
+> More info about plugin:
+> Generally, we need these folders in plugin folders: `api`, `models` and `tasks`
+> `api` interacts with `config-ui` for test/get/save connection of data source
+>       - connection [example](https://github.com/apache/incubator-devlake/blob/main/plugins/gitlab/api/connection.go)
+>       - connection model [example](https://github.com/apache/incubator-devlake/blob/main/plugins/gitlab/models/connection.go)
+> `models` stores all `data entities` and `data migration scripts`. 
+>       - entity 
+>       - data migrations [template](https://github.com/apache/incubator-devlake/tree/main/generator/template/migrationscripts)
+> `tasks` contains all of our `sub tasks` for a plugin
+>       - task data [template](https://github.com/apache/incubator-devlake/blob/main/generator/template/plugin/tasks/task_data.go-template)
+>       - api client [template](https://github.com/apache/incubator-devlake/blob/main/generator/template/plugin/tasks/task_data_with_api_client.go-template)
+
+Don't worry if you cannot figure out what these concepts mean immediately. We'll explain them one by one later. 
+
+DevLake provides a generator to create a plugin conveniently. Let's scaffold our new plugin by running `go run generator/main.go create-plugin icla`, which would ask for `with_api_client` and `Endpoint`.
+
+* `with_api_client` determines whether we need to request HTTP APIs through an api_client.
+* `Endpoint` is the base URL of the site we will request; in our case, it should be `https://people.apache.org/`.
+
+![](https://i.imgur.com/itzlFg7.png)
+
+Now we have three files in our plugin. `api_client.go` and `task_data.go` are in subfolder `tasks/`.
+![plugin files](https://i.imgur.com/zon5waf.png)
+
+Try running this plugin via the function `main` in `plugin_main.go`. You should see a result like this:
+```
+$go run plugins/icla/plugin_main.go
+[2022-06-02 18:07:30]  INFO failed to create dir logs: mkdir logs: file exists
+press `c` to send cancel signal
+[2022-06-02 18:07:30]  INFO  [icla] start plugin
+invalid ICLA_TOKEN, but ignore this error now
+[2022-06-02 18:07:30]  INFO  [icla] scheduler for api https://people.apache.org/ worker: 25, request: 18000, duration: 1h0m0s
+[2022-06-02 18:07:30]  INFO  [icla] total step: 0
+```
+How exciting. It works! The plugin defined and initialized in `plugin_main.go` uses some options from `task_data.go`. Together they make up the most straightforward plugin in Apache DevLake, and `api_client.go` will be used in the next step to request HTTP APIs.
+
+### Step 2: Create a sub-task for data collection
+Before we start, it is helpful to know how a collection task is executed:
+1. First, Apache DevLake calls `plugin_main.PrepareTaskData()` to prepare the data needed before any sub-tasks run. We need to create an API client here.
+2. Then Apache DevLake calls the sub-tasks returned by `plugin_main.SubTaskMetas()`. A sub-task is an independent unit of work, such as requesting an API or processing data.
+
+> Each sub-task must be defined as a SubTaskMeta, and implement SubTaskEntryPoint of SubTaskMeta. SubTaskEntryPoint is defined as 
+> ```go
+> type SubTaskEntryPoint func(c SubTaskContext) error
+> ```
+> More info at: https://devlake.apache.org/blog/how-apache-devlake-runs/
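+
+As a preview of what such a declaration looks like, here is a rough sketch of a sub-task. The `SubTaskMeta` field names are assumptions based on the core package, and the generator used in Step 2.1 produces this boilerplate for you:
+
+```go
+package tasks
+
+import "github.com/apache/incubator-devlake/plugins/core"
+
+// CollectCommitterMeta describes the sub-task so DevLake core can schedule it.
+var CollectCommitterMeta = core.SubTaskMeta{
+	Name:             "CollectCommitter",
+	EntryPoint:       CollectCommitter,
+	EnabledByDefault: true,
+	Description:      "collect ICLA committer data from people.apache.org",
+}
+
+// CollectCommitter is the SubTaskEntryPoint; the actual collection logic is
+// filled in during Step 2.1.
+func CollectCommitter(taskCtx core.SubTaskContext) error {
+	return nil
+}
+```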
+
+#### Step 2.1: Create a sub-task(Collector) for data collection
+
+Let's run `go run generator/main.go create-collector icla committer` and confirm it. This sub-task is activated by registering in `plugin_main.go/SubTaskMetas` automatically.
+
+![](https://i.imgur.com/tkDuofi.png)
+
+> - Collector will collect data from HTTP or other data sources, and save the data into the raw layer. 
+> - Inside the func `SubTaskEntryPoint` of `Collector`, we use `helper.NewApiCollector` to create an object of [ApiCollector](https://github.com/apache/incubator-devlake/blob/main/generator/template/plugin/tasks/api_collector.go-template), then call `execute()` to do the job. 
+
+Now you may notice that `data.ApiClient` is initialized in `plugin_main.go/PrepareTaskData.ApiClient`. `PrepareTaskData` creates a new `ApiClient`, the tool Apache DevLake recommends for requesting data from HTTP APIs. It supports valuable features such as rate limiting, proxies and retries. Of course, you may use the standard `http` library instead, but it will be more tedious.
+
+Let's move forward to use it.
+
+1. To collect data from `https://people.apache.org/public/icla-info.json`,
+we have filled `https://people.apache.org/` into `tasks/api_client.go/ENDPOINT` in Step 1.
+
+![](https://i.imgur.com/q8Zltnl.png)
+
+2. Fill `public/icla-info.json` into `UrlTemplate`, delete the unnecessary iterator and add `println("receive data:", res)` in `ResponseParser` to see if collection was successful.
+
+![](https://i.imgur.com/ToLMclH.png)
+
+Ok, now the collector sub-task has been added to the plugin, and we can kick it off by running `main` again. If everything goes smoothly, the output should look like this:
+```bash
+[2022-06-06 12:24:52]  INFO  [icla] start plugin
+invalid ICLA_TOKEN, but ignore this error now
+[2022-06-06 12:24:52]  INFO  [icla] scheduler for api https://people.apache.org/ worker: 25, request: 18000, duration: 1h0m0s
+[2022-06-06 12:24:52]  INFO  [icla] total step: 1
+[2022-06-06 12:24:52]  INFO  [icla] executing subtask CollectCommitter
+[2022-06-06 12:24:52]  INFO  [icla] [CollectCommitter] start api collection
+receive data: 0x140005763f0
+[2022-06-06 12:24:55]  INFO  [icla] [CollectCommitter] finished records: 1
+[2022-06-06 12:24:55]  INFO  [icla] [CollectCommitter] end api collection
+[2022-06-06 12:24:55]  INFO  [icla] finished step: 1 / 1
+```
+
+Great! Now we can see data pulled from the server without any problem. The last step is to decode the response body in `ResponseParser` and return it to the framework, so it can be stored in the database.
+```go
+ResponseParser: func(res *http.Response) ([]json.RawMessage, error) {
+    body := &struct {
+        LastUpdated string          `json:"last_updated"`
+        Committers  json.RawMessage `json:"committers"`
+    }{}
+    err := helper.UnmarshalResponse(res, body)
+    if err != nil {
+        return nil, err
+    }
+    println("receive data:", len(body.Committers))
+    return []json.RawMessage{body.Committers}, nil
+},
+
+```
+Ok, run the function `main` once again. The output should look like this, and we should be able to see some records show up in the table `_raw_icla_committer`.
+```bash
+……
+receive data: 272956 /* <- the number means 272956 models received */
+[2022-06-06 13:46:57]  INFO  [icla] [CollectCommitter] finished records: 1
+[2022-06-06 13:46:57]  INFO  [icla] [CollectCommitter] end api collection
+[2022-06-06 13:46:57]  INFO  [icla] finished step: 1 / 1
+```
+
+![](https://i.imgur.com/aVYNMRr.png)
+
+#### Step 2.2: Create a sub-task(Extractor) to extract data from the raw layer
+
+> - Extractor will extract data from raw layer and save it into tool db table.
+> - Except for some pre-processing, the main flow is similar to the collector.
+
+We have already collected data from the HTTP API and saved it into the DB table `_raw_XXXX`. In this step, we will extract the names of committers from the raw data. As you may infer from the name, raw tables are temporary and not meant to be used directly.
+
+Apache DevLake suggests saving data with [gorm](https://gorm.io/docs/index.html), so we will create a gorm model and add it to `plugin_main.go/AutoMigrate()`.
+
+plugins/icla/models/committer.go
+```go
+package models
+
+import (
+	"github.com/apache/incubator-devlake/models/common"
+)
+
+type IclaCommitter struct {
+	UserName     string `gorm:"primaryKey;type:varchar(255)"`
+	Name         string `gorm:"primaryKey;type:varchar(255)"`
+	common.NoPKModel
+}
+
+func (IclaCommitter) TableName() string {
+	return "_tool_icla_committer"
+}
+```
+
+plugins/icla/plugin_main.go
+![](https://i.imgur.com/4f0zJty.png)
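+
+If you cannot view the screenshot, the change amounts to registering the new model with gorm, roughly as sketched below; the rest of the generated `plugin_main.go` stays as the generator produced it:
+
+```go
+import (
+	"github.com/apache/incubator-devlake/plugins/icla/models"
+	"gorm.io/gorm"
+)
+
+// AutoMigrate registers the plugin's models so the table
+// _tool_icla_committer is created automatically when the plugin runs.
+func AutoMigrate(db *gorm.DB) error {
+	return db.AutoMigrate(&models.IclaCommitter{})
+}
+```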
+
+
+Ok, run the plugin, and table `_tool_icla_committer` will be created automatically just like the snapshot below:
+![](https://i.imgur.com/7Z324IX.png)
+
+Next, let's run `go run generator/main.go create-extractor icla committer` and type in what the command prompt asks for to create a new sub-task.
+
+![](https://i.imgur.com/UyDP9Um.png)
+
+Let's look at the function `extract` in the newly created `committer_extractor.go` and the code that needs to be written here. `resData.Data` is the raw data, so we can json-decode each row, create an `IclaCommitter` for each entry, and save them.
+```go
+Extract: func(resData *helper.RawData) ([]interface{}, error) {
+    names := &map[string]string{}
+    err := json.Unmarshal(resData.Data, names)
+    if err != nil {
+        return nil, err
+    }
+    extractedModels := make([]interface{}, 0)
+    for userName, name := range *names {
+        extractedModels = append(extractedModels, &models.IclaCommitter{
+            UserName: userName,
+            Name:     name,
+        })
+    }
+    return extractedModels, nil
+},
+```
+
+Ok, run it then we get:
+```
+[2022-06-06 15:39:40]  INFO  [icla] start plugin
+invalid ICLA_TOKEN, but ignore this error now
+[2022-06-06 15:39:40]  INFO  [icla] scheduler for api https://people.apache.org/ worker: 25, request: 18000, duration: 1h0m0s
+[2022-06-06 15:39:40]  INFO  [icla] total step: 2
+[2022-06-06 15:39:40]  INFO  [icla] executing subtask CollectCommitter
+[2022-06-06 15:39:40]  INFO  [icla] [CollectCommitter] start api collection
+receive data: 272956
+[2022-06-06 15:39:44]  INFO  [icla] [CollectCommitter] finished records: 1
+[2022-06-06 15:39:44]  INFO  [icla] [CollectCommitter] end api collection
+[2022-06-06 15:39:44]  INFO  [icla] finished step: 1 / 2
+[2022-06-06 15:39:44]  INFO  [icla] executing subtask ExtractCommitter
+[2022-06-06 15:39:46]  INFO  [icla] [ExtractCommitter] finished records: 1
+[2022-06-06 15:39:46]  INFO  [icla] finished step: 2 / 2
+```
+Now the committer data has been saved in `_tool_icla_committer`.
+![](https://i.imgur.com/6svX0N2.png)
+
+#### Step 2.3: Convertor
+
+Note: the goal of Converters is to create vendor-agnostic models out of the vendor-dependent ones created by the Extractors.
+They are not strictly necessary, but we encourage writing them because converters and the domain layer will significantly help with building dashboards. More info about the domain layer [here](https://devlake.apache.org/docs/DataModels/DevLakeDomainLayerSchema/).
+
+In short:
+
+> - Convertor will convert data from the tool layer and save it into the domain layer.
+> - We use `helper.NewDataConverter` to create an object of DataConvertor, then call `execute()`. 
+
+#### Step 2.4: Let's try it
+Sometimes an open API is protected by a token or other auth mechanisms, and we need to log in to obtain a token before visiting it. For example, only after logging in to `private@apache.com` could we gather the data about contributors signing the ICLA. Here we briefly introduce how to authorize DevLake to collect data.
+
+Let's look at `api_client.go`. `NewIclaApiClient` loads the `ICLA_TOKEN` config from `.env`, so we can add `ICLA_TOKEN=XXXXXX` to `.env` and use it in `apiClient.SetHeaders()` to mock the login status. Code as below:
+![](https://i.imgur.com/dPxooAx.png)
+
+Of course, we could also obtain a token by mocking a `username/password` login. Try it and adjust according to your actual situation.
+
+Look for more related details at https://github.com/apache/incubator-devlake
+
+#### Step 2.5: Implement the GetTablesInfo() method of the PluginModel interface
+
+As shown in the following gitlab plugin example,
+add all models that need to be accessed by external plugins to the return value.
+
+```go
+var _ core.PluginModel = (*Gitlab)(nil)
+
+func (plugin Gitlab) GetTablesInfo() []core.Tabler {
+	return []core.Tabler{
+		&models.GitlabConnection{},
+		&models.GitlabAccount{},
+		&models.GitlabCommit{},
+		&models.GitlabIssue{},
+		&models.GitlabIssueLabel{},
+		&models.GitlabJob{},
+		&models.GitlabMergeRequest{},
+		&models.GitlabMrComment{},
+		&models.GitlabMrCommit{},
+		&models.GitlabMrLabel{},
+		&models.GitlabMrNote{},
+		&models.GitlabPipeline{},
+		&models.GitlabProject{},
+		&models.GitlabProjectCommit{},
+		&models.GitlabReviewer{},
+		&models.GitlabTag{},
+	}
+}
+```
+
+You can use it as follows:
+
+```go
+if pm, ok := plugin.(core.PluginModel); ok {
+    tables := pm.GetTablesInfo()
+    for _, table := range tables {
+        // do something
+    }
+}
+
+```
+
+#### Final step: Submit the code as open source code
+We encourage ideas and contributions. Let's use migration scripts, domain layers and the other concepts discussed above to write standard, platform-neutral code. More info is available [here](https://devlake.apache.org/docs/DataModels/DevLakeDomainLayerSchema), or contact us and we will be happy to help.
+
+
+## Done!
+
+Congratulations! The first plugin has been created! 🎖 
diff --git a/versioned_docs/version-v0.13/DeveloperManuals/Release-SOP.md b/versioned_docs/version-v0.13/DeveloperManuals/Release-SOP.md
new file mode 100644
index 00000000..9e020d4a
--- /dev/null
+++ b/versioned_docs/version-v0.13/DeveloperManuals/Release-SOP.md
@@ -0,0 +1,111 @@
+# DevLake release guide
+
+**Please make sure your public key is included in https://downloads.apache.org/incubator/devlake/KEYS; if not, please update https://downloads.apache.org/incubator/devlake/KEYS first.**
+## How to update KEYS
+1. Clone the svn repository
+    ```shell
+    svn co https://dist.apache.org/repos/dist/dev/incubator/devlake
+    ```
+2. Append your public key to the KEYS file
+    ```shell
+    cd devlake
+    (gpg --list-sigs <your name> && gpg --armor --export <your name>) >> KEYS
+    ```
+3. Upload
+    ```shell
+    svn add KEYS
+    svn commit -m "update KEYS"
+    svn cp https://dist.apache.org/repos/dist/dev/incubator/devlake/KEYS https://dist.apache.org/repos/dist/release/incubator/devlake/ -m "update KEYS"
+    ```
+We will use `v0.12.0` as an example to demonstrate the release process.
+
+## ASF Release Policy
+https://www.apache.org/legal/release-policy.html
+https://incubator.apache.org/guides/releasemanagement.html
+
+## Tools:
+- `gpg`: create and verify the signature
+- `shasum`: create and verify the checksum
+- `git`: check out and pack the codebase
+- `svn`: upload the code to the Apache code hosting server
+
+## Prepare
+- Check against the Incubator Release Checklist
+- Create folder `releases/lake-v0.12.0` and put the two files `docker-compose.yml` and `env.example` in there.
+- Update the file `.github/ISSUE_TEMPLATE/bug-report.yml` to include the version `v0.12.0`
+
+
+## Pack
+- Checkout to the branch/commit
+    ```shell
+    git clone https://github.com/apache/incubator-devlake.git
+    cd incubator-devlake
+    git checkout 25b718a5cc0c6a782c441965e3cbbce6877747d0
+    ```
+
+- Tag the commit and push to origin
+    ```shell
+    git tag v0.12.0-rc2
+    git push origin v0.12.0-rc2
+    ```
+
+- Pack the code
+    ```shell
+    git archive --format=tar.gz --output="<the-output-dir>/apache-devlake-0.12.0-incubating-src.tar.gz" --prefix="apache-devlake-0.12.0-incubating-src/" v0.12.0-rc2
+    ```
+- Before proceeding to the next step, please make sure your public key was included in the https://downloads.apache.org/incubator/devlake/KEYS
+- Create signature and checksum
+    ```shell
+    cd <the-output-dir>
+    gpg -s --armor --output apache-devlake-0.12.0-incubating-src.tar.gz.asc --detach-sig apache-devlake-0.12.0-incubating-src.tar.gz
+    shasum -a 512  apache-devlake-0.12.0-incubating-src.tar.gz > apache-devlake-0.12.0-incubating-src.tar.gz.sha512
+    ```
+- Verify signature and checksum
+    ```shell
+    gpg --verify apache-devlake-0.12.0-incubating-src.tar.gz.asc apache-devlake-0.12.0-incubating-src.tar.gz
+    shasum -a 512 --check apache-devlake-0.12.0-incubating-src.tar.gz.sha512
+    ```
+## Upload
+- Clone the svn repository
+    ```shell
+    svn co https://dist.apache.org/repos/dist/dev/incubator/devlake
+    ```
+- Copy the files into the svn local directory
+    ```shell
+    cd devlake
+    mkdir -p 0.12.0-incubating-rc2
+    cp <the-output-dir>/apache-devlake-0.12.0-incubating-src.tar.gz* 0.12.0-incubating-rc2/
+    ```
+- Upload the local files
+    ```shell
+    svn add 0.12.0-incubating-rc2
+    svn commit -m "add 0.12.0-incubating-rc2"
+    ```
+## Vote
+1. Devlake community vote:
+   - Start the vote by sending an email to <de...@devlake.apache.org>
+     [[VOTE] Release Apache DevLake (Incubating) v0.12.0-rc2](https://lists.apache.org/thread/yxy3kokhhhxlkxcr4op0pwslts7d8tcy)
+   - Announce the vote result
+     [[RESULT][VOTE] Release Apache DevLake (Incubating) v0.12.0-rc2](https://lists.apache.org/thread/qr3fj42tmryztt919jsy5q8hbpmcztky)
+
+2. Apache incubator community vote:
+   - Start the vote by sending an email to general@incubator.apache.org
+     [[VOTE] Release Apache DevLake (Incubating) v0.12.0-rc2](https://lists.apache.org/thread/0bjroykzcyoj7pnjt7gjh1v3yofm901o)
+   - Announce the vote result
+     [[RESULT][VOTE] Release Apache DevLake (Incubating) v0.12.0-rc2](https://lists.apache.org/thread/y2pqg0c2hhgp0pcqolv19s27db190xsh)
+
+## Release
+### Apache
+- Move the release to the ASF content distribution system
+    ```shell
+    svn mv https://dist.apache.org/repos/dist/dev/incubator/devlake/0.12.0-incubating-rc2 https://dist.apache.org/repos/dist/release/incubator/devlake/0.12.0-incubating -m "transfer packages for 0.12.0-incubating-rc2"
+    ```
+- Wait until the directory 0.12.0-incubating has been created in https://downloads.apache.org/incubator/devlake/
+- Announce release by sending an email to general@incubator.apache.org
+   [[ANNOUNCE] Release Apache Devlake(incubating) 0.12.0-incubating](https://lists.apache.org/thread/7h6og1y6nhh4xr4r6rqbnswjoj3msxjk)
+### GitHub
+- Create tag v0.12.0 and push
+    ```shell
+    git checkout v0.12.0-rc2
+    git tag v0.12.0
+    git push origin v0.12.0
+    ```
+- Create release v0.12.0 https://github.com/apache/incubator-devlake/releases/tag/v0.12.0
diff --git a/versioned_docs/version-v0.13/DeveloperManuals/TagNamingConventions.md b/versioned_docs/version-v0.13/DeveloperManuals/TagNamingConventions.md
new file mode 100644
index 00000000..7195070f
--- /dev/null
+++ b/versioned_docs/version-v0.13/DeveloperManuals/TagNamingConventions.md
@@ -0,0 +1,13 @@
+---
+title: "Tag Naming Conventions"
+description: >
+  Tag Naming Conventions
+sidebar_position: 6
+---
+
+Please follow these rules when creating a new tag for Apache DevLake:
+- alpha: internal testing/preview, e.g. v0.12.0-alpha1
+- beta: community/customer testing/preview, e.g. v0.12.0-beta1
+- rc: ASF release candidate, e.g. v0.12.0-rc1
+
+
diff --git a/versioned_docs/version-v0.13/DeveloperManuals/_category_.json b/versioned_docs/version-v0.13/DeveloperManuals/_category_.json
new file mode 100644
index 00000000..f921ae47
--- /dev/null
+++ b/versioned_docs/version-v0.13/DeveloperManuals/_category_.json
@@ -0,0 +1,8 @@
+{
+  "label": "Developer Manuals",
+  "position": 8,
+  "link":{
+    "type": "generated-index",
+    "slug": "DeveloperManuals"
+  }
+}
diff --git a/versioned_docs/version-v0.13/GettingStarted/DockerComposeSetup.md b/versioned_docs/version-v0.13/GettingStarted/DockerComposeSetup.md
new file mode 100644
index 00000000..c17bb9f5
--- /dev/null
+++ b/versioned_docs/version-v0.13/GettingStarted/DockerComposeSetup.md
@@ -0,0 +1,37 @@
+---
+title: "Install via Docker Compose"
+description: >
+  The steps to install DevLake via Docker Compose
+sidebar_position: 1
+---
+
+
+## Prerequisites
+
+- [Docker v19.03.10+](https://docs.docker.com/get-docker)
+- [docker-compose v2.2.3+](https://docs.docker.com/compose/install/) (If you have Docker Desktop installed then you already have the Compose plugin installed)
+
+## Launch DevLake
+
+- Commands written `like this` are to be run in your terminal.
+
+1. Download `docker-compose.yml` and `env.example` from [latest release page](https://github.com/apache/incubator-devlake/releases/latest) into a folder.
+2. Rename `env.example` to `.env`. For Mac/Linux users, please run `mv env.example .env` in the terminal. This file contains the environment variables that the Devlake server will use. Additional ones can be found in the compose file(s).
+3. Run `docker-compose up -d` to launch DevLake.
+
+## Configure and collect data
+
+1. Visit `config-ui` at `http://localhost:4000` in your browser to configure and collect data.
+   - Please follow the [tutorial](UserManuals/ConfigUI/Tutorial.md)
+   - `devlake` takes a while to fully boot up. If `config-ui` complains about the API being unreachable, please wait a few seconds and try refreshing the page.
+2. Click *View Dashboards* button in the top left when done, or visit `localhost:3002` (username: `admin`, password: `admin`).
+   - We use [Grafana](https://grafana.com/) as a visualization tool to build charts for the [data](../SupportedDataSources.md) stored in our database.
+   - Using SQL queries, we can add panels to build, save, and edit customized dashboards.
+   - All the details on provisioning and customizing a dashboard can be found in the [Grafana Doc](../UserManuals/Dashboards/GrafanaUserGuide.md).
+
+
+## Upgrade to a newer version
+
+Support for database schema migration was introduced to DevLake in v0.10.0. From v0.10.0 onwards, users can upgrade their instance smoothly to a newer version. However, versions prior to v0.10.0 do not support upgrading to a newer version with a different database schema. We recommend deploying a new instance in that case.
+
+<br/>
diff --git a/versioned_docs/version-v0.13/GettingStarted/HelmSetup.md b/versioned_docs/version-v0.13/GettingStarted/HelmSetup.md
new file mode 100644
index 00000000..e8920b4a
--- /dev/null
+++ b/versioned_docs/version-v0.13/GettingStarted/HelmSetup.md
@@ -0,0 +1,116 @@
+---
+title: "Install via Helm"
+description: >
+  The steps to install Apache DevLake via Helm for Kubernetes
+sidebar_position: 3
+---
+
+## Prerequisites
+
+- Helm >= 3.6.0
+- Kubernetes >= 1.19.0
+
+
+## Quick Install
+
+Clone the code, then enter the deployment/helm folder.
+```
+helm install devlake .
+```
+
+Then visit your DevLake instance via the node port (32001 by default).
+
+http://YOUR-NODE-IP:32001
+
+
+## Some example deployments
+
+### Deploy with NodePort
+
+Conditions:
+ - IP Address of Kubernetes node: 192.168.0.6
+ - We want to access devlake on port 30000.
+
+```
+helm install devlake . --set service.uiPort=30000
+```
+
+Once deployed, visit DevLake at http://192.168.0.6:30000
+
+### Deploy with Ingress
+
+Conditions:
+ - I have already configured default ingress for the Kubernetes cluster
+ - I want to use http://devlake.example.com for visiting devlake
+
+```
+helm install devlake . --set "ingress.enabled=true,ingress.hostname=devlake.example.com"
+```
+
+Once deployed, visit DevLake at http://devlake.example.com and Grafana at http://devlake.example.com/grafana
+
+### Deploy with Ingress (Https)
+
+Conditions:
+ - I have already configured an ingress (class: nginx) for the Kubernetes cluster, with https served on port 8443.
+ - I want to use https://devlake-0.example.com:8443 for visiting devlake.
+ - The https certificates are generated by letsencrypt.org, and the certificate and key files are `cert.pem` and `key.pem`.
+
+First, create the secret:
+```
+kubectl create secret tls ssl-certificate --cert cert.pem --key key.pem
+```
+
+Then, deploy the devlake:
+```
+helm install devlake . \
+    --set "ingress.enabled=true,ingress.enableHttps=true,ingress.hostname=devlake-0.example.com" \
+    --set "ingress.className=nginx,ingress.httpsPort=8443" \
+    --set "ingress.tlsSecretName=ssl-certificate"
+```
+
+Once deployed, visit DevLake at https://devlake-0.example.com:8443 and Grafana at https://devlake-0.example.com:8443/grafana
+
+
+## Parameters
+
+Some useful parameters for the chart; you can also check them in values.yaml.
+
+| Parameter | Description | Default |
+|-----------|-------------|---------|
+| replicaCount  | Replica Count for devlake, currently not used  | 1  |
+| mysql.useExternal  | If use external mysql server, currently not used  |  false  |
+| mysql.externalServer  | External mysql server address  | 127.0.0.1  |
+| mysql.externalPort  | External mysql server port  | 3306  |
+| mysql.username  | username for mysql | merico  |
+| mysql.password  | password for mysql | merico  |
+| mysql.database  | database for mysql | lake  |
+| mysql.rootPassword  | root password for mysql | admin  |
+| mysql.storage.class  | storage class for mysql's volume | ""  |
+| mysql.storage.size  | volume size for mysql's data | 5Gi  |
+| mysql.image.repository  | repository for mysql's image | mysql  |
+| mysql.image.tag  | image tag for mysql's image | 8.0.26  |
+| mysql.image.pullPolicy  | pullPolicy for mysql's image | IfNotPresent  |
+| grafana.image.repository  | repository for grafana's image | mericodev/grafana  |
+| grafana.image.tag  | image tag for grafana's image | latest  |
+| grafana.image.pullPolicy  | pullPolicy for grafana's image | Always  |
+| lake.storage.class  | storage class for lake's volume | ""  |
+| lake.storage.size  | volume size for lake's data | 100Mi  |
+| lake.image.repository  | repository for lake's image | mericodev/lake  |
+| lake.image.tag  | image tag for lake's image | latest  |
+| lake.image.pullPolicy  | pullPolicy for lake's image | Always  |
+| lake.loggingDir | the root logging directory of Devlake | /app/logs | 
+| ui.image.repository  | repository for ui's image | mericodev/config-ui  |
+| ui.image.tag  | image tag for ui's image | latest  |
+| ui.image.pullPolicy  | pullPolicy for ui's image | Always  |
+| service.type  | Service type for exposed service | NodePort  |
+| service.uiPort  | Service port for config ui | 32001  |
+| service.ingress.enabled  | If enable ingress  |  false  |
+| service.ingress.enableHttps  | If enable https  |  false  |
+| service.ingress.className  | The class name for ingressClass. If leave empty, the default IngressClass will be used  | ""  |
+| service.ingress.hostname  | The hostname/domainname for ingress  | localhost  |
+| service.ingress.prefix | The prefix for endpoints, currently not supported due to devlake's implementation  | /  |
+| service.ingress.tlsSecretName  | The secret name for tls's certificate, required when https enabled  | ""  |
+| service.ingress.httpPort  | The http port for ingress  | 80  |
+| service.ingress.httpsPort  | The https port for ingress  | 443  |
+| option.localtime  | The hostpath for mount as /etc/localtime | /etc/localtime  |
diff --git a/versioned_docs/version-v0.13/GettingStarted/KubernetesSetup.md b/versioned_docs/version-v0.13/GettingStarted/KubernetesSetup.md
new file mode 100644
index 00000000..f87d5ac1
--- /dev/null
+++ b/versioned_docs/version-v0.13/GettingStarted/KubernetesSetup.md
@@ -0,0 +1,51 @@
+---
+title: "Install via Kubernetes"
+description: >
+  The steps to install Apache DevLake via Kubernetes
+sidebar_position: 2
+---
+
+We provide a sample [k8s-deploy.yaml](https://github.com/apache/incubator-devlake/blob/main/k8s-deploy.yaml) to help deploy DevLake to Kubernetes
+
+[k8s-deploy.yaml](https://github.com/apache/incubator-devlake/blob/main/k8s-deploy.yaml) will create a namespace `devlake` on your k8s cluster, and use `nodePort 30004` for `config-ui`,  `nodePort 30002` for `grafana` dashboards. If you would like to use a specific version of Apache DevLake, please update the image tag of `grafana`, `devlake` and `config-ui` deployments.
+
+## Step-by-step guide
+
+1. Download [k8s-deploy.yaml](https://github.com/apache/incubator-devlake/blob/main/k8s-deploy.yaml)
+2. Customize the settings (`devlake-config` config map):
+   - Settings shared between `grafana` and `mysql`
+     * `MYSQL_ROOT_PASSWORD`: set root password for `mysql`
+     * `MYSQL_USER`: shared between `mysql` and `grafana`
+     * `MYSQL_PASSWORD`: shared between `mysql` and `grafana`
+     * `MYSQL_DATABASE`: shared between `mysql` and `grafana`
+   - Settings used by `grafana`
+     * `MYSQL_URL`: set MySQL URL for `grafana` in `$HOST:$PORT` format
+     * `GF_SERVER_ROOT_URL`: Public URL to the `grafana`
+   - Settings used by `config-ui`:
+     * `GRAFANA_ENDPOINT`: FQDN of grafana which can be reached within k8s cluster, normally you don't need to change it unless namespace was changed
+     * `DEVLAKE_ENDPOINT`: FQDN of devlake which can be reached within k8s cluster, normally you don't need to change it unless namespace was changed
+     * `ADMIN_USER`/`ADMIN_PASS`: Not required, but highly recommended
+   - Settings used by `devlake`:
+     * `DB_URL`: update this value if  `MYSQL_USER`, `MYSQL_PASSWORD` or `MYSQL_DATABASE` were changed
+     * `LOGGING_DIR`: the directory of logs for Devlake - you likely don't need to change it.
+3. The `devlake` deployment stores its configuration in `/app/.env`. In our sample yaml, we use a `hostPath` volume, so please make sure the directory `/var/lib/devlake` exists on your k8s workers, or employ other techniques to persist the `/app/.env` file. Please do NOT mount the entire `/app` directory, because plugins are located in the `/app/bin` folder.
+4. Finally, execute the following command and DevLake should be up and running:
+   ```sh
+   kubectl apply -f k8s-deploy.yaml
+   ```
+
+
+## FAQ
+
+1. Can I use a managed Cloud database service instead of running database in docker?
+
+   Yes, it only takes a few changes in the sample yaml file. Below we'll use MySQL on AWS RDS as an example.
+   1. (Optional) Create a MySQL instance on AWS RDS following this [doc](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_GettingStarted.CreatingConnecting.MySQL.html), skip this step if you'd like to use an existing instance
+   2. Remove the `mysql` deployment and service sections from `k8s-deploy.yaml`
+   3. Update `devlake-config` configmap according to your RDS instance setup:
+     * `MYSQL_ROOT_PASSWORD`: remove this line
+     * `MYSQL_USER`: use your RDS instance's master username
+     * `MYSQL_PASSWORD`: use your RDS instance's password
+     * `MYSQL_DATABASE`: use your RDS instance's DB name, you may need to create a database first with `CREATE DATABASE <DB name>;`
+     * `MYSQL_URL`: set this for `grafana` in `$HOST:$PORT` format, where $HOST and $PORT should be your RDS instance's endpoint and port respectively
+     * `DB_URL`: update the connection string with your RDS instance's info for `devlake`
diff --git a/versioned_docs/version-v0.13/GettingStarted/TemporalSetup.md b/versioned_docs/version-v0.13/GettingStarted/TemporalSetup.md
new file mode 100644
index 00000000..58132999
--- /dev/null
+++ b/versioned_docs/version-v0.13/GettingStarted/TemporalSetup.md
@@ -0,0 +1,35 @@
+---
+title: "Install via Temporal"
+sidebar_position: 6
+description: >
+  The steps to install DevLake in Temporal mode.
+---
+
+
+Normally, DevLake executes pipelines on a local machine (we call it `local mode`), which is sufficient most of the time. However, when you have too many pipelines that need to be executed in parallel, it can be problematic, as the horsepower and throughput of a single machine are limited.
+
+`temporal mode` was added to support distributed pipeline execution, you can fire up arbitrary workers on multiple machines to carry out those pipelines in parallel to overcome the limitations of a single machine.
+
+But be careful: many API services like JIRA/GitHub have request rate limits. Collecting data in parallel against the same API service with the same identity will most likely hit such a limit.
+
+## How it works
+
+1. DevLake Server and Workers connect to the same temporal server by setting up `TEMPORAL_URL`
+2. DevLake Server sends a `pipeline` to the temporal server, and one of the Workers picks it up and executes it
+
+
+**IMPORTANT: This feature is in an early stage of development. Please use with caution**
+
+
+## Temporal Demo
+
+### Requirements
+
+- [Docker](https://docs.docker.com/get-docker)
+- [docker-compose](https://docs.docker.com/compose/install/)
+- [temporalio](https://temporal.io/)
+
+### How to setup
+
+1. Clone and fire up the [temporalio](https://temporal.io/) services
+2. Clone this repo, and fire up DevLake with command `docker-compose -f deployment/temporal/docker-compose-temporal.yml up -d`
\ No newline at end of file
diff --git a/versioned_docs/version-v0.13/GettingStarted/_category_.json b/versioned_docs/version-v0.13/GettingStarted/_category_.json
new file mode 100644
index 00000000..063400ae
--- /dev/null
+++ b/versioned_docs/version-v0.13/GettingStarted/_category_.json
@@ -0,0 +1,8 @@
+{
+  "label": "Getting Started",
+  "position": 2,
+  "link":{
+    "type": "generated-index",
+    "slug": "GettingStarted"
+  }
+}
diff --git a/versioned_docs/version-v0.13/Glossary.md b/versioned_docs/version-v0.13/Glossary.md
new file mode 100644
index 00000000..c3bad3dc
--- /dev/null
+++ b/versioned_docs/version-v0.13/Glossary.md
@@ -0,0 +1,103 @@
+---
+sidebar_position: 7
+title: "Glossary"
+linkTitle: "Glossary"
+description: >
+  DevLake Glossary
+---
+
+*Last updated: May 16 2022*
+
+
+## In Configuration UI (Regular Mode)
+
+The following terms are arranged in the order of their appearance in the actual user workflow.
+
+### Blueprints
+**A blueprint is the plan that covers all the work to get your raw data ready for query and metric computation in the dashboards.** Creating a blueprint consists of four steps:
+1. **Adding [Data Connections](Glossary.md#data-connections)**: For each [data source](Glossary.md#data-sources), one or more data connections can be added to a single blueprint, depending on the data you want to sync to DevLake.
+2. **Setting the [Data Scope](Glossary.md#data-scope)**: For each data connection, you need to configure the scope of data, such as GitHub projects, Jira boards, and their corresponding [data entities](Glossary.md#data-entities).
+3. **Adding [Transformation Rules](Glossary.md#transformation-rules) (optional)**: You can optionally apply transformation for the data scope you have just selected, in order to view more advanced metrics.
+4. **Setting the Sync Frequency**: You can specify the sync frequency for your blueprint to achieve recurring data syncs and transformation. Alternatively, you can set the frequency to manual if you wish to run the tasks in the blueprint manually.
+
+The relationship among Blueprint, Data Connections, Data Scope and Transformation Rules is explained as follows:
+
+![Blueprint ERD](/img/Glossary/blueprint-erd.svg)
+- Each blueprint can have multiple data connections.
+- Each data connection can have multiple sets of data scope.
+- Each set of data scope only consists of one GitHub/GitLab project or Jira board, along with their corresponding data entities.
+- Each set of data scope can only have one set of transformation rules.
+
+### Data Sources
+**A data source is a specific DevOps tool from which you wish to sync your data, such as GitHub, GitLab, Jira and Jenkins.**
+
+DevLake normally uses one [data plugin](Glossary.md#data-plugins) to pull data for a single data source. However, in some cases, DevLake uses multiple data plugins for one data source to improve sync speed, among other advantages. For instance, when you pull data from GitHub or GitLab, aside from the GitHub or GitLab plugin, Git Extractor is also used to pull data from the repositories. In this case, DevLake still refers to GitHub or GitLab as a single data source.
+
+### Data Connections
+**A data connection is a specific instance of a data source that stores information such as `endpoint` and `auth`.** A single data source can have one or more data connections (e.g. two Jira instances). Currently, DevLake supports one data connection for GitHub, GitLab and Jenkins, and multiple connections for Jira.
+
+You can set up a new data connection either during the first step of creating a blueprint, or in the Connections page that can be accessed from the navigation bar. Because one single data connection can be reused in multiple blueprints, you can update the information of a particular data connection in Connections, to ensure all its associated blueprints will run properly. For example, you may want to update your GitHub token in a data connection if it goes expired.
+
+### Data Scope
+**In a blueprint, each data connection can have multiple sets of data scope configurations, including GitHub or GitLab projects, Jira boards and their corresponding [data entities](Glossary.md#data-entities).** The fields for data scope configuration vary according to different data sources.
+
+Each set of data scope refers to one GitHub or GitLab project, or one Jira board and the data entities you would like to sync for them, for the convenience of applying transformation in the next step. For instance, if you wish to sync 5 GitHub projects, you will have 5 sets of data scope for GitHub.
+
+To learn more about the default data scope of all data sources and data plugins, please refer to [Supported Data Sources](./SupportedDataSources.md).
+
+### Data Entities
+**Data entities refer to the data fields from one of the five data domains: Issue Tracking, Source Code Management, Code Review, CI/CD and Cross-Domain.**
+
+For instance, if you wish to pull Source Code Management data from GitHub and Issue Tracking data from Jira, you can check the corresponding data entities during setting the data scope of these two data connections.
+
+To learn more details, please refer to [Domain Layer Schema](./DataModels/DevLakeDomainLayerSchema.md).
+
+### Transformation Rules
+**Transformation rules are a collection of methods that allow you to customize how DevLake normalizes raw data for query and metric computation.** Each set of data scope is strictly accompanied with one set of transformation rules. However, for your convenience, transformation rules can also be duplicated across different sets of data scope.
+
+DevLake uses these normalized values in the transformation to design more advanced dashboards, such as the Weekly Bug Retro dashboard. Although configuring transformation rules is not mandatory, if you leave the rules blank or have not configured correctly, only the basic dashboards (e.g. GitHub Basic Metrics) will be displayed as expected, while the advanced dashboards will not.
+
+### Historical Runs
+**A historical run of a blueprint is an actual execution of the data collection and transformation [tasks](Glossary.md#tasks) defined in the blueprint at its creation.** A list of historical runs of a blueprint is the entire running history of that blueprint, whether executed automatically or manually. Historical runs can be triggered in three ways:
+- By the blueprint automatically according to its schedule in the Regular Mode of the Configuration UI
+- By running the JSON in the Advanced Mode of the Configuration UI
+- By calling the API `/pipelines` endpoint manually
+
+However, the name Historical Runs is only used in the Configuration UI. In DevLake API, they are called [pipelines](Glossary.md#pipelines).
+
+## In Configuration UI (Advanced Mode) and API
+
+The following terms have not appeared in the Regular Mode of the Configuration UI for the sake of simplicity, but they can be very useful if you want to learn about the underlying framework of DevLake or use the Advanced Mode and the DevLake API.
+
+### Data Plugins
+**A data plugin is a specific module that syncs or transforms data.** There are two types of data plugins: Data Collection Plugins and Data Transformation Plugins.
+
+Data Collection Plugins pull data from one or more data sources. DevLake supports 8 data plugins in this category: `ae`, `feishu`, `gitextractor`, `github`, `gitlab`, `jenkins`, `jira` and `tapd`.
+
+Data Transformation Plugins transform the data pulled by other Data Collection Plugins. `refdiff` is currently the only plugin in this category.
+
+Although the names of the data plugins are not displayed in the regular mode of DevLake Configuration UI, they can be used directly in JSON in the Advanced Mode.
+
+For detailed information about the relationship between data sources and data plugins, please refer to [Supported Data Sources](./SupportedDataSources.md).
+
+
+### Pipelines
+**A pipeline is an orchestration of [tasks](Glossary.md#tasks) of data `collection`, `extraction`, `conversion` and `enrichment`, defined in the DevLake API.** A pipeline is composed of one or multiple [stages](Glossary.md#stages) that are executed in a sequential order. Any error occurring during the execution of any stage, task or subtask will cause the immediate fail of the pipeline.
+
+The composition of a pipeline is explained as follows:
+![Blueprint ERD](/img/Glossary/pipeline-erd.svg)
+Notice: **You can manually orchestrate the pipeline in Configuration UI Advanced Mode and the DevLake API; whereas in Configuration UI regular mode, an optimized pipeline orchestration will be automatically generated for you.**
+
+
+### Stages
+**A stage is a collection of tasks performed by data plugins.** Stages are executed in sequential order in a pipeline.
+
+### Tasks
+**A task is a collection of [subtasks](Glossary.md#subtasks) that perform any of the `collection`, `extraction`, `conversion` and `enrichment` jobs of a particular data plugin.** Tasks within a stage are executed in parallel.
+
+### Subtasks
+**A subtask is the minimal work unit in a pipeline that performs in any of the four roles: `Collectors`, `Extractors`, `Converters` and `Enrichers`.** Subtasks are executed in sequential order.
+- `Collectors`: Collect raw data from data sources, normally via their APIs, and store it into the `raw data tables`
+- `Extractors`: Extract data from the `raw data tables` into the `tool layer tables`
+- `Converters`: Convert data from `tool layer tables` into `domain layer tables`
+- `Enrichers`: Enrich data from one domain to other domains. For instance, the Fourier Transformation can examine `issue_changelog` to show time distribution of an issue on every assignee.
diff --git a/versioned_docs/version-v0.13/LiveDemo/AverageRequirementLeadTime.md b/versioned_docs/version-v0.13/LiveDemo/AverageRequirementLeadTime.md
new file mode 100644
index 00000000..0710335c
--- /dev/null
+++ b/versioned_docs/version-v0.13/LiveDemo/AverageRequirementLeadTime.md
@@ -0,0 +1,9 @@
+---
+sidebar_position: 6
+title: "Average Requirement Lead Time by Assignee"
+description: >
+  DevLake Live Demo
+---
+
+# Average Requirement Lead Time by Assignee
+<iframe src="https://grafana-lake.demo.devlake.io/d/q27fk7cnk/demo-average-requirement-lead-time-by-assignee?orgId=1&from=1635945684845&to=1651584084846" width="100%" height="940px"></iframe>
\ No newline at end of file
diff --git a/versioned_docs/version-v0.13/LiveDemo/CommitCountByAuthor.md b/versioned_docs/version-v0.13/LiveDemo/CommitCountByAuthor.md
new file mode 100644
index 00000000..04e029cf
--- /dev/null
+++ b/versioned_docs/version-v0.13/LiveDemo/CommitCountByAuthor.md
@@ -0,0 +1,9 @@
+---
+sidebar_position: 2
+title: "Commit Count by Author"
+description: >
+  DevLake Live Demo
+---
+
+# Commit Count by Author
+<iframe src="https://grafana-lake.demo.devlake.io/d/F0iYknc7z/demo-commit-count-by-author?orgId=1&from=1634911190615&to=1650635990615" width="100%" height="820px"></iframe>
diff --git a/versioned_docs/version-v0.13/LiveDemo/DetailedBugInfo.md b/versioned_docs/version-v0.13/LiveDemo/DetailedBugInfo.md
new file mode 100644
index 00000000..b7776170
--- /dev/null
+++ b/versioned_docs/version-v0.13/LiveDemo/DetailedBugInfo.md
@@ -0,0 +1,9 @@
+---
+sidebar_position: 4
+title: "Detailed Bug Info"
+description: >
+  DevLake Live Demo
+---
+
+# Detailed Bug Info
+<iframe src="https://grafana-lake.demo.devlake.io/d/s48Lzn5nz/demo-detailed-bug-info?orgId=1&from=1635945709579&to=1651584109579" width="100%" height="800px"></iframe>
\ No newline at end of file
diff --git a/versioned_docs/version-v0.13/LiveDemo/GitHubBasic.md b/versioned_docs/version-v0.13/LiveDemo/GitHubBasic.md
new file mode 100644
index 00000000..7ea28cdf
--- /dev/null
+++ b/versioned_docs/version-v0.13/LiveDemo/GitHubBasic.md
@@ -0,0 +1,9 @@
+---
+sidebar_position: 1
+title: "GitHub Basic Metrics"
+description: >
+  DevLake Live Demo
+---
+
+# GitHub Basic Metrics
+<iframe src="https://grafana-lake.demo.devlake.io/d/KXWvOFQnz/github_basic_metrics?orgId=1&from=1635945132339&to=1651583532339" width="100%" height="3080px"></iframe>
\ No newline at end of file
diff --git a/versioned_docs/version-v0.13/LiveDemo/GitHubReleaseQualityAndContributionAnalysis.md b/versioned_docs/version-v0.13/LiveDemo/GitHubReleaseQualityAndContributionAnalysis.md
new file mode 100644
index 00000000..61db78f9
--- /dev/null
+++ b/versioned_docs/version-v0.13/LiveDemo/GitHubReleaseQualityAndContributionAnalysis.md
@@ -0,0 +1,9 @@
+---
+sidebar_position: 5
+title: "GitHub Release Quality and Contribution Analysis"
+description: >
+  DevLake Live Demo
+---
+
+# GitHub Release Quality and Contribution Analysis
+<iframe src="https://grafana-lake.demo.devlake.io/d/2xuOaQUnk1/github_release_quality_and_contribution_analysis?orgId=1&from=1635945847658&to=1651584247658" width="100%" height="2800px"></iframe>
\ No newline at end of file
diff --git a/versioned_docs/version-v0.13/LiveDemo/Jenkins.md b/versioned_docs/version-v0.13/LiveDemo/Jenkins.md
new file mode 100644
index 00000000..506a3c97
--- /dev/null
+++ b/versioned_docs/version-v0.13/LiveDemo/Jenkins.md
@@ -0,0 +1,9 @@
+---
+sidebar_position: 7
+title: "Jenkins"
+description: >
+  DevLake Live Demo
+---
+
+# Jenkins
+<iframe src="https://grafana-lake.demo.devlake.io/d/W8AiDFQnk/jenkins?orgId=1&from=1635945337632&to=1651583737632" width="100%" height="1060px"></iframe>
\ No newline at end of file
diff --git a/versioned_docs/version-v0.13/LiveDemo/WeeklyBugRetro.md b/versioned_docs/version-v0.13/LiveDemo/WeeklyBugRetro.md
new file mode 100644
index 00000000..adbc4e80
--- /dev/null
+++ b/versioned_docs/version-v0.13/LiveDemo/WeeklyBugRetro.md
@@ -0,0 +1,9 @@
+---
+sidebar_position: 3
+title: "Weekly Bug Retro"
+description: >
+  DevLake Live Demo
+---
+
+# Weekly Bug Retro
+<iframe src="https://grafana-lake.demo.devlake.io/d/-5EKA5w7k/weekly-bug-retro?orgId=1&from=1635945873174&to=1651584273174" width="100%" height="2240px"></iframe>
diff --git a/versioned_docs/version-v0.13/LiveDemo/_category_.json b/versioned_docs/version-v0.13/LiveDemo/_category_.json
new file mode 100644
index 00000000..b6dd7fd6
--- /dev/null
+++ b/versioned_docs/version-v0.13/LiveDemo/_category_.json
@@ -0,0 +1,8 @@
+{
+  "label": "Live Demo",
+  "position": 11,
+  "link":{
+    "type": "generated-index",
+    "slug": "LiveDemo"
+  }
+}
diff --git a/versioned_docs/version-v0.13/Metrics/AddedLinesOfCode.md b/versioned_docs/version-v0.13/Metrics/AddedLinesOfCode.md
new file mode 100644
index 00000000..2921ea65
--- /dev/null
+++ b/versioned_docs/version-v0.13/Metrics/AddedLinesOfCode.md
@@ -0,0 +1,33 @@
+---
+title: "Added Lines of Code"
+description: >
+  Added Lines of Code
+sidebar_position: 7
+---
+
+## What is this metric? 
+The accumulated number of added lines of code.
+
+## Why is it important?
+1. Identify potential bottlenecks that may affect the output
+2. Encourage the team to implement a development model that matches the business requirements; develop excellent coding habits
+
+## Which dashboard(s) does it exist in
+N/A
+
+## How is it calculated?
+This metric is calculated by summing the additions of commits in the given data range.
+
+<b>Data Sources Required</b>
+
+This metric relies on commits collected from GitHub, GitLab or BitBucket.
+
+<b>Transformation Rules Required</b>
+
+N/A
+
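+<b>SQL Queries</b>
+
+If you want to see the monthly trend of added lines, a query similar to the one below can be used. This is a minimal sketch that assumes the domain-layer `commits` table exposes `additions` and `authored_date` columns; adjust the table and column names to your own schema.
+```
+  SELECT
+    DATE_ADD(date(authored_date), INTERVAL -DAY(date(authored_date))+1 DAY) as month,
+    sum(additions) as "Added Lines of Code"
+  FROM commits
+  WHERE
+    -- exclude merge commits, following the convention of the Commit Count metric
+    message not like '%Merge%'
+    and $__timeFilter(authored_date)
+  GROUP BY 1
+  ORDER BY 1
+```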
+
+## How to improve?
+1. From the project/team dimension, observe the accumulated change in added lines to assess the team's activity and code growth rate
+2. From the version cycle dimension, observe the active time distribution of code changes, and evaluate the effectiveness of the project development model.
+3. From the member dimension, observe the trend and stability of each member's code output, and identify the key points that affect code output by comparison.
diff --git a/versioned_docs/version-v0.13/Metrics/BugAge.md b/versioned_docs/version-v0.13/Metrics/BugAge.md
new file mode 100644
index 00000000..66cdcbad
--- /dev/null
+++ b/versioned_docs/version-v0.13/Metrics/BugAge.md
@@ -0,0 +1,35 @@
+---
+title: "Bug Age"
+description: >
+  Bug Age
+sidebar_position: 9
+---
+
+## What is this metric? 
+The amount of time it takes to fix a bug.
+
+## Why is it important?
+1. Help the team to establish an effective hierarchical response mechanism for bugs. Focus on the resolution of important problems in the backlog.
+2. Improve the team's and individuals' bug-fixing efficiency. Identify good/to-be-improved practices that affect bug age.
+
+## Which dashboard(s) does it exist in
+- Jira
+- GitHub
+- Weekly Bug Retro
+
+
+## How is it calculated?
+This metric equals `resolution_date` - `created_date` of issues of type "BUG".
+
+<b>Data Sources Required</b>
+
+This metric relies on issues collected from Jira, GitHub, or TAPD.
+
+<b>Transformation Rules Required</b>
+
+This metric relies on the 'type-bug' configuration in Jira, GitHub or TAPD transformation rules to let DevLake know what issues can be regarded as `Bugs`.
+
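+<b>SQL Queries</b>
+
+If you want to see the average bug age in days, a query like the one below can be a starting point. This is a minimal sketch that assumes the domain-layer `issues` and `board_issues` tables are populated and that resolved bugs carry a `resolution_date`; adjust it to your own setup.
+```
+  SELECT
+    avg(TIMESTAMPDIFF(DAY, i.created_date, i.resolution_date)) as "Average Bug Age in Days"
+  FROM issues i
+    join board_issues bi on i.id = bi.issue_id
+  WHERE
+    i.type = 'BUG'
+    -- only count bugs that have been resolved
+    and i.resolution_date is not null
+    and $__timeFilter(i.created_date)
+    -- this is the default board variable in Grafana
+    and bi.board_id in ($board_id)
+```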
+
+## How to improve?
+1. Observe the trend of bug age and locate the key reasons.
+2. Count and observe the length of bug age by severity level, type (business, functional classification), affected module and source of bugs.
\ No newline at end of file
diff --git a/versioned_docs/version-v0.13/Metrics/BugCountPer1kLinesOfCode.md b/versioned_docs/version-v0.13/Metrics/BugCountPer1kLinesOfCode.md
new file mode 100644
index 00000000..0c252e53
--- /dev/null
+++ b/versioned_docs/version-v0.13/Metrics/BugCountPer1kLinesOfCode.md
@@ -0,0 +1,40 @@
+---
+title: "Bug Count per 1k Lines of Code"
+description: >
+  Bug Count per 1k Lines of Code
+sidebar_position: 12
+---
+
+## What is this metric? 
+The number of bugs per 1,000 lines of code.
+
+## Why is it important?
+1. Defect drill-down analysis to inform the development of design and code review strategies and to improve the internal QA process
+2. Assist teams to locate projects/modules with higher defect severity and density, and clean up technical debts
+3. Analyze critical points, identify good/to-be-improved practices that affect defect count or defect rate, to reduce the amount of future defects
+
+## Which dashboard(s) does it exist in
+N/A
+
+
+## How is it calculated?
+The number of bugs divided by total accumulated lines of code (additions + deletions) in the given data range.
+
+<b>Data Sources Required</b>
+
+This metric relies on 
+- issues collected from Jira, GitHub or TAPD.
+- commits collected from GitHub, GitLab or BitBucket.
+
+<b>Transformation Rules Required</b>
+
+This metric relies on
+- "Issue type mapping" in Jira, GitHub or TAPD's transformation rules page to let DevLake know what type(s) of issues can be regarded as bugs.
+- "PR-Issue Mapping" in GitHub, GitLab's transformation rules page to let DevLake know the bugs are fixed by which PR/MRs.
+
+
+## How to improve?
+1. From the project or team dimension, observe the total number of defects, the distribution of defects across severity levels/types/owners, the cumulative trend of defects, and the trend of the defect rate per 1k lines of code.
+2. From the version cycle dimension, observe the cumulative trend of the defect count and defect rate to determine whether the growth of defects is slowing down and converging, which is an important reference for judging the stability of a software version's quality.
+3. From the time dimension, analyze the trend of the defect count and defect rate to locate the key items/key points.
+4. Evaluate whether the software quality and test plan are reasonable by referring to CMMI standard values
diff --git a/versioned_docs/version-v0.13/Metrics/BuildCount.md b/versioned_docs/version-v0.13/Metrics/BuildCount.md
new file mode 100644
index 00000000..50352bbc
--- /dev/null
+++ b/versioned_docs/version-v0.13/Metrics/BuildCount.md
@@ -0,0 +1,32 @@
+---
+title: "Build Count"
+description: >
+  Build Count
+sidebar_position: 15
+---
+
+## What is this metric? 
+The number of successful builds.
+
+## Why is it important?
+1. As a process indicator, it reflects the value flow efficiency of the upstream R&D stages
+2. Identify excellent/to-be-improved practices that impact the build, and drive the team to build up reusable tools and mechanisms as the infrastructure for fast and high-frequency delivery
+
+## Which dashboard(s) does it exist in
+- Jenkins
+
+
+## How is it calculated?
+This metric is calculated by counting the number of successful CI builds/pipelines/runs in the given data range.
+
+<b>Data Sources Required</b>
+
+This metric relies on CI builds/pipelines/runs collected from Jenkins, GitLab or GitHub.
+
+<b>Transformation Rules Required</b>
+
+N/A
+
+## How to improve?
+1. From the project dimension, compare the number of builds and success rate by combining the project phase and the complexity of tasks.
+2. From the time dimension, analyze the trend of the number of builds and success rate to see if it has improved over time.
diff --git a/versioned_docs/version-v0.13/Metrics/BuildDuration.md b/versioned_docs/version-v0.13/Metrics/BuildDuration.md
new file mode 100644
index 00000000..1aa95385
--- /dev/null
+++ b/versioned_docs/version-v0.13/Metrics/BuildDuration.md
@@ -0,0 +1,32 @@
+---
+title: "Build Duration"
+description: >
+  Build Duration
+sidebar_position: 16
+---
+
+## What is this metric? 
+The duration of successful builds.
+
+## Why is it important?
+1. As a process indicator, it reflects the value flow efficiency of the upstream R&D stages
+2. Identify excellent/to-be-improved practices that impact the build, and drive the team to build up reusable tools and mechanisms as the infrastructure for fast and high-frequency delivery
+
+## Which dashboard(s) does it exist in
+- Jenkins
+
+
+## How is it calculated?
+This metric is calculated by getting the duration of successful CI builds/pipelines/runs in the given data range.
+
+<b>Data Sources Required</b>
+
+This metric relies on CI builds/pipelines/runs collected from Jenkins, GitLab or GitHub.
+
+<b>Transformation Rules Required</b>
+
+N/A
+
+## How to improve?
+1. From the project dimension, compare the number of builds and success rate by combining the project phase and the complexity of tasks.
+2. From the time dimension, analyze the trend of the number of builds and success rate to see if it has improved over time.
diff --git a/versioned_docs/version-v0.13/Metrics/BuildSuccessRate.md b/versioned_docs/version-v0.13/Metrics/BuildSuccessRate.md
new file mode 100644
index 00000000..401086d9
--- /dev/null
+++ b/versioned_docs/version-v0.13/Metrics/BuildSuccessRate.md
@@ -0,0 +1,32 @@
+---
+title: "Build Success Rate"
+description: >
+  Build Success Rate
+sidebar_position: 17
+---
+
+## What is this metric? 
+The ratio of successful builds to all builds.
+
+## Why is it important?
+1. As a process indicator, it reflects the value flow efficiency of the upstream R&D stages
+2. Identify excellent/to-be-improved practices that impact the build, and drive the team to build up reusable tools and mechanisms as the infrastructure for fast and high-frequency delivery
+
+## Which dashboard(s) does it exist in
+- Jenkins
+
+
+## How is it calculated?
+The number of successful builds divided by the total number of builds in the given data range.
+
+<b>Data Sources Required</b>
+
+This metric relies on CI builds/pipelines/runs collected from Jenkins, GitLab or GitHub.
+
+<b>Transformation Rules Required</b>
+
+N/A
+
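+<b>SQL Queries</b>
+
+Below is a minimal sketch of the calculation. It assumes CI runs land in a `cicd_pipelines` domain table with `result` and `finished_date` columns; the actual table and column names may differ in your DevLake version, so treat this as a template rather than a ready-made query.
+```
+  SELECT
+    -- successful runs divided by all runs in the selected time range
+    1.0 * sum(case when result = 'SUCCESS' then 1 else 0 end) / count(*) as "Build Success Rate"
+  FROM cicd_pipelines
+  WHERE $__timeFilter(finished_date)
+```
+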
+## How to improve?
+1. From the project dimension, compare the number of builds and success rate by combining the project phase and the complexity of tasks.
+2. From the time dimension, analyze the trend of the number of builds and success rate to see if it has improved over time.
diff --git a/versioned_docs/version-v0.13/Metrics/CFR.md b/versioned_docs/version-v0.13/Metrics/CFR.md
new file mode 100644
index 00000000..d6e34150
--- /dev/null
+++ b/versioned_docs/version-v0.13/Metrics/CFR.md
@@ -0,0 +1,53 @@
+---
+title: "DORA - Change Failure Rate(WIP)"
+description: >
+  DORA - Change Failure Rate
+sidebar_position: 21
+---
+
+## What is this metric? 
+The percentage of changes made to the code that then resulted in incidents, rollbacks, or any other type of production failure.
+
+## Why is it important?
+Unlike Deployment Frequency and Lead Time for Changes, which measure throughput, Change Failure Rate measures the stability and quality of software delivery. A high CFR reflects a poor end-user experience, as production failures are relatively frequent.
+
+## Which dashboard(s) does it exist in
+N/A
+
+
+## How is it calculated?
+The number of failed deployments divided by the total number of deployments. For example, if there are five deployments in a day and one causes a failure, that is a 20% change failure rate.
+
+As you can see, there is not much distinction between performance benchmarks for CFR:
+
+| Groups           | Benchmarks      |
+| -----------------| ----------------|
+| Elite performers | 0%-15%          |
+| High performers  | 16%-30%         |
+| Medium performers| 16%-30%         |
+| Low performers   | 16%-30%         |
+
+<i>Source: 2021 Accelerate State of DevOps, Google</i>
+
+<b>Data Sources Required</b>
+
+This metric relies on:
+- `Deployments` collected in one of the following ways:
+  - Open APIs of Jenkins, GitLab, GitHub, etc.
+  - Webhook for general CI tools.
+  - Releases and PR/MRs from GitHub, GitLab APIs, etc.
+- `Incidents` collected in one of the following ways:
+  - Issue tracking tools such as Jira, TAPD, GitHub, etc.
+  - Bug or Service Monitoring tools such as PagerDuty, Sentry, etc.
+  - CI pipelines that marked the 'failed' deployments.
+
+<b>Transformation Rules Required</b>
+
+This metric relies on:
+- Deployment configuration in Jenkins, GitLab or GitHub transformation rules to let DevLake know what CI builds/jobs can be regarded as `Deployments`.
+- Incident configuration in Jira, GitHub or TAPD transformation rules to let DevLake know what CI builds/jobs can be regarded as `Incidents`.
+
+## How to improve?
+- Add unit tests for all new features
+- "Shift left", start QA early and introduce more automated tests
+- Enforce code review if it's not strictly executed
diff --git a/versioned_docs/version-v0.13/Metrics/CodingTime.md b/versioned_docs/version-v0.13/Metrics/CodingTime.md
new file mode 100644
index 00000000..d7884748
--- /dev/null
+++ b/versioned_docs/version-v0.13/Metrics/CodingTime.md
@@ -0,0 +1,32 @@
+---
+title: "PR Coding Time"
+description: >
+  PR Coding Time
+sidebar_position: 2
+---
+
+## What is this metric? 
+The time it takes from the first commit until a PR is issued. 
+
+## Why is it important?
+It is recommended that you keep every task on a workable and manageable scale for a reasonably short amount of coding time. The average coding time of most engineering teams is around 3-4 days.
+
+## Which dashboard(s) does it exist in?
+- Engineering Throughput and Cycle Time
+- Engineering Throughput and Cycle Time - Team View
+
+
+## How is it calculated?
+<b>Data Sources Required</b>
+
+This metric relies on PR/MRs collected from GitHub or GitLab.
+
+<b>Transformation Rules Required</b>
+
+N/A
+
+<b>SQL Queries</b>
+
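+If you want to see the average coding time of PRs in hours, a query like the following can be a starting point. This is a minimal sketch that assumes the domain-layer `pull_requests`, `pull_request_commits` and `commits` tables are populated; adjust table and column names to your own schema.
+```
+  with _first_commit as (
+    -- the earliest authored commit of each PR
+    SELECT prc.pull_request_id, min(c.authored_date) as first_commit_date
+    FROM pull_request_commits prc
+      join commits c on prc.commit_sha = c.sha
+    GROUP BY prc.pull_request_id
+  )
+
+  SELECT
+    avg(TIMESTAMPDIFF(HOUR, fc.first_commit_date, pr.created_date)) as "Average PR Coding Time in Hours"
+  FROM pull_requests pr
+    join _first_commit fc on pr.id = fc.pull_request_id
+  WHERE $__timeFilter(pr.created_date)
+```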
+
+## How to improve?
+Divide coding tasks into workable and manageable pieces.
diff --git a/versioned_docs/version-v0.13/Metrics/CommitAuthorCount.md b/versioned_docs/version-v0.13/Metrics/CommitAuthorCount.md
new file mode 100644
index 00000000..3be4ad20
--- /dev/null
+++ b/versioned_docs/version-v0.13/Metrics/CommitAuthorCount.md
@@ -0,0 +1,32 @@
+---
+title: "Commit Author Count"
+description: >
+  Commit Author Count
+sidebar_position: 14
+---
+
+## What is this metric? 
+The number of commit authors who have committed code.
+
+## Why is it important?
+Take inventory of project/team R&D resource inputs, assess input-output ratio, and rationalize resource deployment.
+
+
+## Which dashboard(s) does it exist in
+N/A
+
+
+## How is it calculated?
+This metric is calculated by counting the number of commit authors in the given data range.
+
+<b>Data Sources Required</b>
+
+This metric relies on commits collected from GitHub, GitLab or BitBucket.
+
+<b>Transformation Rules Required</b>
+
+N/A
+
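+<b>SQL Queries</b>
+
+Below is a minimal sketch of the calculation, assuming the domain-layer `commits` table carries an `author_name` column; adjust the de-duplication key (e.g. author email) to your own convention.
+```
+  SELECT
+    count(distinct author_name) as "Commit Author Count"
+  FROM commits
+  WHERE
+    -- exclude merge commits, following the convention of the Commit Count metric
+    message not like '%Merge%'
+    and $__timeFilter(authored_date)
+```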
+
+## How to improve?
+As a secondary indicator, this helps assess the labor cost of participating in coding.
diff --git a/versioned_docs/version-v0.13/Metrics/CommitCount.md b/versioned_docs/version-v0.13/Metrics/CommitCount.md
new file mode 100644
index 00000000..ae85af8d
--- /dev/null
+++ b/versioned_docs/version-v0.13/Metrics/CommitCount.md
@@ -0,0 +1,55 @@
+---
+title: "Commit Count"
+description: >
+  Commit Count
+sidebar_position: 6
+---
+
+## What is this metric? 
+The number of commits created.
+
+## Why is it important?
+1. Identify potential bottlenecks that may affect output
+2. Encourage R&D practices of small step submissions and develop excellent coding habits
+
+## Which dashboard(s) does it exist in
+- GitHub Release Quality and Contribution Analysis
+- Demo-Is this month more productive than last?
+- Demo-Commit Count by Author
+
+## How is it calculated?
+This metric is calculated by counting the number of commits in the given data range.
+
+<b>Data Sources Required</b>
+
+This metric relies on commits collected from GitHub, GitLab or BitBucket.
+
+<b>Transformation Rules Required</b>
+
+N/A
+
+<b>SQL Queries</b>
+
+If you want to see the monthly trend, run the following SQL
+```
+  with _commits as(
+    SELECT
+      DATE_ADD(date(authored_date), INTERVAL -DAY(date(authored_date))+1 DAY) as time,
+      count(*) as commit_count
+    FROM commits
+    WHERE
+      message not like '%Merge%'
+      and $__timeFilter(authored_date)
+    group by 1
+  )
+
+  SELECT 
+    date_format(time,'%M %Y') as month,
+    commit_count as "Commit Count"
+  FROM _commits
+  ORDER BY time
+```
+
+## How to improve?
+1. Identify the main reasons for the unusual number of commits and the possible impact on the number of commits through comparison
+2. Evaluate whether the number of commits is reasonable in conjunction with more microscopic workload metrics (e.g. lines of code/code equivalents)
diff --git a/versioned_docs/version-v0.13/Metrics/CycleTime.md b/versioned_docs/version-v0.13/Metrics/CycleTime.md
new file mode 100644
index 00000000..bbc98349
--- /dev/null
+++ b/versioned_docs/version-v0.13/Metrics/CycleTime.md
@@ -0,0 +1,40 @@
+---
+title: "PR Cycle Time"
+description: >
+  PR Cycle Time
+sidebar_position: 2
+---
+
+## What is this metric? 
+PR Cycle Time is the sum of PR Coding Time, Pickup Time, Review Time and Deploy Time. It is the total time from the first commit to when the PR is deployed.
+
+## Why is it important?
+PR Cycle Time indicates the overall speed of the delivery process in terms of PRs.
+
+## Which dashboard(s) does it exist in?
+- Engineering Throughput and Cycle Time
+- Engineering Throughput and Cycle Time - Team View
+
+
+## How is it calculated?
+You can define `deployment` based on your actual practice. For a full list of the definitions of `deployment` that DevLake supports, please refer to [Deployment Frequency](/docs/Metrics/DeploymentFrequency.md).
+
+<b>Data Sources Required</b>
+
+This metric relies on PR/MRs collected from GitHub or GitLab.
+
+<b>Transformation Rules Required</b>
+
+N/A
+
+<b>SQL Queries</b>
+
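+Below is a simplified sketch that measures the time from the first commit to the merge of each PR, i.e. it covers coding, pickup and review time but excludes deploy time, since deployments depend on your own definition of `deployment`. It assumes the domain-layer `pull_requests`, `pull_request_commits` and `commits` tables are populated; adjust it to your own setup.
+```
+  with _first_commit as (
+    -- the earliest authored commit of each PR
+    SELECT prc.pull_request_id, min(c.authored_date) as first_commit_date
+    FROM pull_request_commits prc
+      join commits c on prc.commit_sha = c.sha
+    GROUP BY prc.pull_request_id
+  )
+
+  SELECT
+    avg(TIMESTAMPDIFF(HOUR, fc.first_commit_date, pr.merged_date)) as "PR Cycle Time (excl. deploy) in Hours"
+  FROM pull_requests pr
+    join _first_commit fc on pr.id = fc.pull_request_id
+  WHERE
+    pr.merged_date is not null
+    and $__timeFilter(pr.created_date)
+```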
+
+## How to improve?
+1. Divide coding tasks into workable and manageable pieces;
+2. Use DevLake's dashboards to monitor your delivery progress;
+3. Make it a habit to check for hanging PRs regularly;
+4. Set up alerts in your communication tools (e.g. Slack, Lark) when new PRs are issued;
+5. Use automated tests for the initial work;
+6. Reduce PR size;
+7. Analyze the causes of long reviews.
\ No newline at end of file
diff --git a/versioned_docs/version-v0.13/Metrics/DeletedLinesOfCode.md b/versioned_docs/version-v0.13/Metrics/DeletedLinesOfCode.md
new file mode 100644
index 00000000..218ceae0
--- /dev/null
+++ b/versioned_docs/version-v0.13/Metrics/DeletedLinesOfCode.md
@@ -0,0 +1,32 @@
+---
+title: "Deleted Lines of Code"
+description: >
+  Deleted Lines of Code
+sidebar_position: 8
+---
+
+## What is this metric? 
+The accumulated number of deleted lines of code.
+
+## Why is it important?
+1. Identify potential bottlenecks that may affect the output
+2. Encourage the team to implement a development model that matches the business requirements; develop excellent coding habits
+
+## Which dashboard(s) does it exist in
+N/A
+
+## How is it calculated?
+This metric is calculated by summing the deletions of commits in the given data range.
+
+<b>Data Sources Required</b>
+
+This metric relies on commits collected from GitHub, GitLab or BitBucket.
+
+<b>Transformation Rules Required</b>
+
+N/A
+
+## How to improve?
+1. From the project/team dimension, observe the accumulated change in deleted lines to assess the team's activity and code growth rate
+2. From the version cycle dimension, observe the active time distribution of code changes, and evaluate the effectiveness of the project development model.
+3. From the member dimension, observe the trend and stability of each member's code output, and identify the key points that affect code output by comparison.
diff --git a/versioned_docs/version-v0.13/Metrics/DeployTime.md b/versioned_docs/version-v0.13/Metrics/DeployTime.md
new file mode 100644
index 00000000..d9084808
--- /dev/null
+++ b/versioned_docs/version-v0.13/Metrics/DeployTime.md
@@ -0,0 +1,30 @@
+---
+title: "PR Deploy Time"
+description: >
+  PR Deploy Time
+sidebar_position: 2
+---
+
+## What is this metric? 
+The time it takes from when a PR is merged to when it is deployed.
+
+## Why is it important?
+1. Based on historical data, establish a baseline of the delivery capacity of a single iteration to improve the organization and planning of R&D resources.
+2. Evaluate whether the delivery capacity matches the business phase and demand scale. Identify key bottlenecks and reasonably allocate resources.
+
+## Which dashboard(s) does it exist in?
+
+
+## How is it calculated?
+You can define `deployment` based on your actual practice. For a full list of the definitions of `deployment` that DevLake supports, please refer to [Deployment Frequency](/docs/Metrics/DeploymentFrequency.md).
+
+<b>Data Sources Required</b>
+
+This metric relies on PR/MRs collected from GitHub or GitLab.
+
+<b>Transformation Rules Required</b>
+
+N/A
+
+## How to improve?
+
diff --git a/versioned_docs/version-v0.13/Metrics/DeploymentFrequency.md b/versioned_docs/version-v0.13/Metrics/DeploymentFrequency.md
new file mode 100644
index 00000000..13a49bc3
--- /dev/null
+++ b/versioned_docs/version-v0.13/Metrics/DeploymentFrequency.md
@@ -0,0 +1,45 @@
+---
+title: "DORA - Deployment Frequency(WIP)"
+description: >
+  DORA - Deployment Frequency
+sidebar_position: 18
+---
+
+## What is this metric? 
+How often an organization deploys code to production or releases it to end users.
+
+## Why is it important?
+Deployment frequency reflects the efficiency of a team's deployment. A team that deploys more frequently can deliver the product faster and users' feature requirements can be met faster.
+
+## Which dashboard(s) does it exist in
+N/A
+
+
+## How is it calculated?
+Deployment frequency is calculated based on the number of deployment days, not the number of deployments, e.g., daily, weekly, monthly, yearly.
+
+| Groups           | Benchmarks                           |
+| -----------------| -------------------------------------|
+| Elite performers | Multiple times a day                 |
+| High performers  | Once a week to once a month          |
+| Medium performers| Once a month to once every six months|
+| Low performers   | Less than once every six months      |
+
+<i>Source: 2021 Accelerate State of DevOps, Google</i>
+
+
+<b>Data Sources Required</b>
+
+This metric relies on deployments collected in multiple ways:
+- Open APIs of Jenkins, GitLab, GitHub, etc.
+- Webhook for general CI tools.
+- Releases and PR/MRs from GitHub, GitLab APIs, etc.
+
+<b>Transformation Rules Required</b>
+
+This metric relies on the deployment configuration in Jenkins, GitLab or GitHub transformation rules to let DevLake know what CI builds/jobs can be regarded as deployments.
+
+## How to improve?
+- Trunk-based development: work in small batches and merge work into the shared trunk often.
+- Integrate CI/CD tools for automated deployment
+- Improve automated test coverage
diff --git a/versioned_docs/version-v0.13/Metrics/IncidentAge.md b/versioned_docs/version-v0.13/Metrics/IncidentAge.md
new file mode 100644
index 00000000..4cd5e60c
--- /dev/null
+++ b/versioned_docs/version-v0.13/Metrics/IncidentAge.md
@@ -0,0 +1,34 @@
+---
+title: "Incident Age"
+description: >
+  Incident Age
+sidebar_position: 10
+---
+
+## What is this metric? 
+The amount of time it takes to fix an incident.
+
+## Why is it important?
+1. Help the team to establish an effective hierarchical response mechanism for incidents. Focus on the resolution of important problems in the backlog.
+2. Improve the team's and individuals' incident-fixing efficiency. Identify good/to-be-improved practices that affect incident age.
+
+## Which dashboard(s) does it exist in
+- Jira
+- GitHub
+
+
+## How is it calculated?
+This metric equals `resolution_date` - `created_date` of issues of type "INCIDENT".
+
+<b>Data Sources Required</b>
+
+This metric relies on issues collected from Jira, GitHub, or TAPD.
+
+<b>Transformation Rules Required</b>
+
+This metric relies on the 'type-incident' configuration in Jira, GitHub or TAPD transformation rules to let DevLake know what issues can be regarded as `Incidents`.
+
+
+## How to improve?
+1. Observe the trend of incident age and locate the key reasons.
+2. Count and observe the length of incident age by severity level, type (business, functional classification), affected module and source of incidents.
\ No newline at end of file
diff --git a/versioned_docs/version-v0.13/Metrics/IncidentCountPer1kLinesOfCode.md b/versioned_docs/version-v0.13/Metrics/IncidentCountPer1kLinesOfCode.md
new file mode 100644
index 00000000..9ad92787
--- /dev/null
+++ b/versioned_docs/version-v0.13/Metrics/IncidentCountPer1kLinesOfCode.md
@@ -0,0 +1,39 @@
+---
+title: "Incident Count per 1k Lines of Code"
+description: >
+  Incident Count per 1k Lines of Code
+sidebar_position: 13
+---
+
+## What is this metric? 
+The number of incidents per 1,000 lines of code.
+
+## Why is it important?
+1. Defect drill-down analysis to inform the development of design and code review strategies and to improve the internal QA process
+2. Assist teams to locate projects/modules with higher defect severity and density, and clean up technical debts
+3. Analyze critical points, identify good/to-be-improved practices that affect defect count or defect rate, to reduce the amount of future defects
+
+## Which dashboard(s) does it exist in
+N/A
+
+
+## How is it calculated?
+The number of incidents divided by total accumulated lines of code (additions + deletions) in the given data range.
+
+<b>Data Sources Required</b>
+
+This metric relies on 
+- issues collected from Jira, GitHub or TAPD.
+- commits collected from GitHub, GitLab or BitBucket.
+
+<b>Transformation Rules Required</b>
+
+This metric relies on
+- "Issue type mapping" in Jira, GitHub or TAPD's transformation rules page to let DevLake know what type(s) of issues can be regarded as incidents.
+- "PR-Issue Mapping" in GitHub, GitLab's transformation rules page to let DevLake know the bugs are fixed by which PR/MRs.
+
+## How to improve?
+1. From the project or team dimension, observe the total number of defects, the distribution of defects across severity levels/types/owners, the cumulative trend of defects, and the trend of the defect rate per 1k lines of code.
+2. From the version cycle dimension, observe the cumulative trend of the defect count and defect rate to determine whether the growth of defects is slowing down and converging, which is an important reference for judging the stability of a software version's quality.
+3. From the time dimension, analyze the trend of the defect count and defect rate to locate the key items/key points.
+4. Evaluate whether the software quality and test plan are reasonable by referring to CMMI standard values
diff --git a/versioned_docs/version-v0.13/Metrics/LeadTimeForChanges.md b/versioned_docs/version-v0.13/Metrics/LeadTimeForChanges.md
new file mode 100644
index 00000000..4a5d3957
--- /dev/null
+++ b/versioned_docs/version-v0.13/Metrics/LeadTimeForChanges.md
@@ -0,0 +1,56 @@
+---
+title: "DORA - Lead Time for Changes(WIP)"
+description: >
+  DORA - Lead Time for Changes
+sidebar_position: 19
+---
+
+## What is this metric? 
+The median amount of time for a commit to be deployed into production.
+
+## Why is it important?
+This metric measures the time it takes to commit code to the production environment and reflects the speed of software delivery. A lower lead time for changes means that your team is efficient at coding and deploying your project.
+
+## Which dashboard(s) does it exist in
+N/A
+
+
+## How is it calculated?
+This metric can be calculated in two ways:
+- If a deployment can be linked to PRs, then the lead time for changes of a deployment is the average cycle time of its associated PRs. For instance,
+   - Compared to the previous deployment `deploy-1`, `deploy-2` deployed three new commits `commit-1`, `commit-2` and `commit-3`.
+   - `commit-1` is linked to `pr-1`, `commit-2` is linked to `pr-2` and `pr-3`, `commit-3` is not linked to any PR. Then, `deploy-2` is associated with `pr-1`, `pr-2` and `pr-3`.
+   - `Deploy-2`'s lead time for changes = average cycle time of `pr-1`, `pr-2` and `pr-3`.
+- If a deployment can't be linked to PRs, then the lead time for changes is computed based on its associated commits. For instance,
+   - Compared to the previous deployment `deploy-1`, `deploy-2` deployed three new commits `commit-1`, `commit-2` and `commit-3`.
+   - None of `commit-1`, `commit-2` and `commit-3` is linked to any PR. 
+   - Calculate each commit's lead time for changes, which equals `deploy-2`'s deployed_at minus the commit's authored_date
+   - `Deploy-2`'s Lead time for changes = average lead time for changes of `commit-1`, `commit-2` and `commit-3`.
+
+Below are the benchmarks for different development teams:
+
+| Groups           | Benchmarks                           |
+| -----------------| -------------------------------------|
+| Elite performers | Less than one hour                   |
+| High performers  | Between one day and one week         |
+| Medium performers| Between one month and six months     |
+| Low performers   | More than six months                 |
+
+<i>Source: 2021 Accelerate State of DevOps, Google</i>
+
+<b>Data Sources Required</b>
+
+This metric relies on deployments collected in multiple ways:
+- Open APIs of Jenkins, GitLab, GitHub, etc.
+- Webhook for general CI tools.
+- Releases and PR/MRs from GitHub, GitLab APIs, etc.
+
+<b>Transformation Rules Required</b>
+
+This metric relies on the deployment configuration in Jenkins, GitLab or GitHub transformation rules to let DevLake know what CI builds/jobs can be regarded as deployments.
+
+## How to improve?
+- Break requirements into smaller, more manageable deliverables
+- Optimize the code review process
+- "Shift left", start QA early and introduce more automated tests
+- Integrate CI/CD tools to automate the deployment process
diff --git a/versioned_docs/version-v0.13/Metrics/MTTR.md b/versioned_docs/version-v0.13/Metrics/MTTR.md
new file mode 100644
index 00000000..a7bcb2dc
--- /dev/null
+++ b/versioned_docs/version-v0.13/Metrics/MTTR.md
@@ -0,0 +1,56 @@
+---
+title: "DORA - Mean Time to Restore Service"
+description: >
+  DORA - Mean Time to Restore Service
+sidebar_position: 20
+---
+
+## What is this metric? 
+The time to restore service after a service incident, rollback, or any other type of production failure happens.
+
+## Why is it important?
+This metric is essential to measure the disaster control capability of your team and the robustness of the software.
+
+## Which dashboard(s) does it exist in
+N/A
+
+
+## How is it calculated?
+MTTR = total [incident age](./IncidentAge.md) (in hours) / the number of incidents.
+
+If three incidents happened in the given data range, lasting 1 hour, 2 hours and 3 hours respectively, the MTTR will be (1 + 2 + 3) / 3 = 2 hours.
+
+Below are the benchmarks for different development teams:
+
+| Groups           | Benchmarks                           |
+| -----------------| -------------------------------------|
+| Elite performers | Less than one hour                   |
+| High performers  | Less than one day                    |
+| Medium performers| Between one day and one week         |
+| Low performers   | More than six months                 |
+
+<i>Source: 2021 Accelerate State of DevOps, Google</i>
+
+<b>Data Sources Required</b>
+
+This metric relies on:
+- `Deployments` collected in one of the following ways:
+  - Open APIs of Jenkins, GitLab, GitHub, etc.
+  - Webhook for general CI tools.
+  - Releases and PR/MRs from GitHub, GitLab APIs, etc.
+- `Incidents` collected in one of the following ways:
+  - Issue tracking tools such as Jira, TAPD, GitHub, etc.
+  - Bug or Service Monitoring tools such as PagerDuty, Sentry, etc.
+  - CI pipelines that marked the 'failed' deployments.
+
+<b>Transformation Rules Required</b>
+
+This metric relies on:
+- Deployment configuration in Jenkins, GitLab or GitHub transformation rules to let DevLake know what CI builds/jobs can be regarded as `Deployments`.
+- Incident configuration in Jira, GitHub or TAPD transformation rules to let DevLake know what CI builds/jobs can be regarded as `Incidents`.
+
+## How to improve?
+- Use automated tools to quickly report failure
+- Prioritize recovery when a failure happens
+- Establish a go-to action plan to respond to failures immediately
+- Reduce the deployment time for failure-fixing
diff --git a/versioned_docs/version-v0.13/Metrics/MergeRate.md b/versioned_docs/version-v0.13/Metrics/MergeRate.md
new file mode 100644
index 00000000..c8c27433
--- /dev/null
+++ b/versioned_docs/version-v0.13/Metrics/MergeRate.md
@@ -0,0 +1,40 @@
+---
+title: "PR Merge Rate"
+description: >
+  Pull Request Merge Rate
+sidebar_position: 12
+---
+
+## What is this metric? 
+The ratio of PRs/MRs that get merged.
+
+## Why is it important?
+1. Code review metrics are process indicators to provide quick feedback on developers' code quality
+2. Promote the team to establish a unified coding specification and standardize the code review criteria
+3. Identify modules with low-quality risks in advance, optimize practices, and precipitate into reusable knowledge and tools to avoid technical debt accumulation
+
+## Which dashboard(s) does it exist in
+- Jira
+- GitHub
+- GitLab
+- Weekly Community Retro
+- Engineering Throughput and Cycle Time
+- Engineering Throughput and Cycle Time - Team View 
+
+
+## How is it calculated?
+The number of merged PRs divided by the number of all PRs in the given data range.
+
+<b>Data Sources Required</b>
+
+This metric relies on PRs/MRs collected from GitHub, GitLab or BitBucket.
+
+<b>Transformation Rules Required</b>
+
+N/A
+
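+<b>SQL Queries</b>
+
+Below is a minimal sketch of the calculation, assuming the domain-layer `pull_requests` table is populated and merged PRs carry a `merged_date`; adjust it to your own setup.
+```
+  SELECT
+    -- merged PRs divided by all PRs created in the selected time range
+    1.0 * sum(case when merged_date is not null then 1 else 0 end) / count(*) as "PR Merge Rate"
+  FROM pull_requests
+  WHERE $__timeFilter(created_date)
+```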
+
+## How to improve?
+1. From the developer dimension, we evaluate the code quality of developers by combining the task complexity with the metrics related to the number of review passes and review rounds.
+2. From the reviewer dimension, we observe the reviewer's review style by taking into account the task complexity, the number of passes and the number of review rounds.
+3. From the project/team dimension, we combine the project phase and team task complexity to aggregate the metrics related to the number of review passes and review rounds, and identify the modules with abnormal code review process and possible quality risks.
diff --git a/versioned_docs/version-v0.13/Metrics/PRCount.md b/versioned_docs/version-v0.13/Metrics/PRCount.md
new file mode 100644
index 00000000..4521e786
--- /dev/null
+++ b/versioned_docs/version-v0.13/Metrics/PRCount.md
@@ -0,0 +1,39 @@
+---
+title: "Pull Request Count"
+description: >
+  Pull Request Count
+sidebar_position: 11
+---
+
+## What is this metric? 
+The number of pull requests created.
+
+## Why is it important?
+1. Code review metrics are process indicators to provide quick feedback on developers' code quality
+2. Promote the team to establish a unified coding specification and standardize the code review criteria
+3. Identify modules with low-quality risks in advance, optimize practices, and precipitate into reusable knowledge and tools to avoid technical debt accumulation
+
+## Which dashboard(s) does it exist in
+- Jira
+- GitHub
+- GitLab
+- Weekly Community Retro
+- Engineering Throughput and Cycle Time
+- Engineering Throughput and Cycle Time - Team View 
+
+
+## How is it calculated?
+This metric is calculated by counting the number of PRs in the given data range.
+
+<b>Data Sources Required</b>
+
+This metric relies on PRs/MRs collected from GitHub, GitLab or BitBucket.
+
+<b>Transformation Rules Required</b>
+
+N/A
+
+## How to improve?
+1. From the developer dimension, we evaluate the code quality of developers by combining the task complexity with the metrics related to the number of review passes and review rounds.
+2. From the reviewer dimension, we observe the reviewer's review style by taking into account the task complexity, the number of passes and the number of review rounds.
+3. From the project/team dimension, we combine the project phase and team task complexity to aggregate the metrics related to the number of review passes and review rounds, and identify the modules with abnormal code review process and possible quality risks.
diff --git a/versioned_docs/version-v0.13/Metrics/PRSize.md b/versioned_docs/version-v0.13/Metrics/PRSize.md
new file mode 100644
index 00000000..bf6a87d8
--- /dev/null
+++ b/versioned_docs/version-v0.13/Metrics/PRSize.md
@@ -0,0 +1,35 @@
+---
+title: "PR Size"
+description: >
+  PR Size
+sidebar_position: 2
+---
+
+## What is this metric? 
+The average code changes (in Lines of Code) of PRs in the selected time range.
+
+## Why is it important?
+Small PRs can reduce the risk of introducing new bugs and increase code review quality, as problems may often be hidden in big chunks of code and difficult to identify.
+
+## Which dashboard(s) does it exist in?
+- Engineering Throughput and Cycle Time
+- Engineering Throughput and Cycle Time - Team View
+
+
+## How is it calculated?
+This metric is calculated as the total number of code changes (in LOC) divided by the total number of PRs in the selected time range.
+
+<b>Data Sources Required</b>
+
+This metric relies on PR/MRs collected from GitHub or GitLab.
+
+<b>Transformation Rules Required</b>
+
+N/A
+
+<b>SQL Queries</b>
+
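+Below is a minimal sketch that approximates PR size by aggregating the additions and deletions of each PR's commits. It assumes the domain-layer `pull_requests`, `pull_request_commits` and `commits` tables are populated and that `commits` exposes `additions` and `deletions`; adjust it to your own setup.
+```
+  with _pr_size as (
+    -- total lines of code changed per PR, summed over its commits
+    SELECT prc.pull_request_id, sum(c.additions + c.deletions) as loc
+    FROM pull_request_commits prc
+      join commits c on prc.commit_sha = c.sha
+    GROUP BY prc.pull_request_id
+  )
+
+  SELECT
+    avg(s.loc) as "Average PR Size in LOC"
+  FROM pull_requests pr
+    join _pr_size s on pr.id = s.pull_request_id
+  WHERE $__timeFilter(pr.created_date)
+```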
+
+## How to improve?
+1. Divide coding tasks into workable and manageable pieces;
+2. Encourage developers to submit small PRs and only keep related changes in the same PR.
diff --git a/versioned_docs/version-v0.13/Metrics/PickupTime.md b/versioned_docs/version-v0.13/Metrics/PickupTime.md
new file mode 100644
index 00000000..07242ae7
--- /dev/null
+++ b/versioned_docs/version-v0.13/Metrics/PickupTime.md
@@ -0,0 +1,34 @@
+---
+title: "PR Pickup Time"
+description: >
+  PR Pickup Time
+sidebar_position: 2
+---
+
+## What is this metric? 
+The time it takes from when a PR is issued until the first comment is added to that PR. 
+
+## Why is it important?
+PR Pickup Time shows how engaged your team is in collaborative work by identifying the delay in picking up PRs. 
+
+## Which dashboard(s) does it exist in?
+- Engineering Throughput and Cycle Time
+- Engineering Throughput and Cycle Time - Team View
+
+
+## How is it calculated?
+<b>Data Sources Required</b>
+
+This metric relies on PR/MRs collected from GitHub or GitLab.
+
+<b>Transformation Rules Required</b>
+
+N/A
+
+<b>SQL Queries</b>
+
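+Below is a minimal sketch of the calculation, assuming the domain-layer `pull_requests` and `pull_request_comments` tables are populated; adjust table and column names to your own schema.
+```
+  with _first_comment as (
+    -- the earliest comment of each PR
+    SELECT pull_request_id, min(created_date) as first_comment_date
+    FROM pull_request_comments
+    GROUP BY pull_request_id
+  )
+
+  SELECT
+    avg(TIMESTAMPDIFF(HOUR, pr.created_date, fc.first_comment_date)) as "Average PR Pickup Time in Hours"
+  FROM pull_requests pr
+    join _first_comment fc on pr.id = fc.pull_request_id
+  WHERE $__timeFilter(pr.created_date)
+```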
+
+## How to improve?
+1. Use DevLake's dashboard to monitor your delivery progress;
+2. Have a habit to check for hanging PRs regularly;
+3. Set up alerts for your communication tools (e.g. Slack, Lark) when new PRs are issued.
diff --git a/versioned_docs/version-v0.13/Metrics/RequirementCount.md b/versioned_docs/version-v0.13/Metrics/RequirementCount.md
new file mode 100644
index 00000000..e9a6bd32
--- /dev/null
+++ b/versioned_docs/version-v0.13/Metrics/RequirementCount.md
@@ -0,0 +1,68 @@
+---
+title: "Requirement Count"
+description: >
+  Requirement Count
+sidebar_position: 2
+---
+
+## What is this metric? 
+The number of delivered requirements or features.
+
+## Why is it important?
+1. Based on historical data, establish a baseline of the delivery capacity of a single iteration to improve the organization and planning of R&D resources.
+2. Evaluate whether the delivery capacity matches the business phase and demand scale. Identify key bottlenecks and reasonably allocate resources.
+
+## Which dashboard(s) does it exist in
+- Jira
+- GitHub
+
+
+## How is it calculated?
+This metric is calculated by counting the number of delivered issues of type "REQUIREMENT" in the given data range.
+
+<b>Data Sources Required</b>
+
+This metric relies on the issues collected from Jira, GitHub, or TAPD.
+
+<b>Transformation Rules Required</b>
+
+This metric relies on the 'type-requirement' configuration in Jira, GitHub or TAPD transformation rules to let DevLake know what issues can be regarded as `Requirements`.
+
+<b>SQL Queries</b>
+
+If you want to see a single count, run the following SQL in Grafana
+```
+  select 
+    count(*) as "Requirement Count"
+  from issues i
+    join board_issues bi on i.id = bi.issue_id
+  where 
+    i.type = 'REQUIREMENT'
+    and i.status = 'DONE'
+    -- this is the default variable in Grafana
+    and $__timeFilter(i.created_date)
+    and bi.board_id in ($board_id)
+```
+
+If you want to see the monthly trend, run the following SQL
+```
+  SELECT
+    DATE_ADD(date(i.created_date), INTERVAL -DAYOFMONTH(date(i.created_date))+1 DAY) as time,
+    count(distinct case when status != 'DONE' then i.id else null end) as "Number of Open Issues",
+    count(distinct case when status = 'DONE' then i.id else null end) as "Number of Delivered Issues"
+  FROM issues i
+    join board_issues bi on i.id = bi.issue_id
+    join boards b on bi.board_id = b.id
+  WHERE 
+    i.type = 'REQUIREMENT'
+    and $__timeFilter(i.created_date)
+    and bi.board_id in ($board_id)
+  GROUP by 1
+```
+
+## How to improve?
+1. Analyze the number of requirements and delivery rate of different time cycles to find the stability and trend of the development process.
+2. Analyze and compare the number of requirements delivered and delivery rate of each project/team, and compare the scale of requirements of different projects.
+3. Based on historical data, establish a baseline of the delivery capacity of a single iteration (optimistic, probable and pessimistic values) to provide a reference for iteration estimation.
+4. Drill down to analyze the number and percentage of requirements in different phases of SDLC. Analyze rationality and identify the requirements stuck in the backlog. 
diff --git a/versioned_docs/version-v0.13/Metrics/RequirementDeliveryRate.md b/versioned_docs/version-v0.13/Metrics/RequirementDeliveryRate.md
new file mode 100644
index 00000000..eb0a0313
--- /dev/null
+++ b/versioned_docs/version-v0.13/Metrics/RequirementDeliveryRate.md
@@ -0,0 +1,36 @@
+---
+title: "Requirement Delivery Rate"
+description: >
+  Requirement Delivery Rate
+sidebar_position: 3
+---
+
+## What is this metric? 
+The ratio of delivered requirements to all requirements.
+
+## Why is it important?
+1. Based on historical data, establish a baseline of the delivery capacity of a single iteration to improve the organization and planning of R&D resources.
+2. Evaluate whether the delivery capacity matches the business phase and demand scale. Identify key bottlenecks and reasonably allocate resources.
+
+## Which dashboard(s) does it exist in
+- Jira
+- GitHub
+
+
+## How is it calculated?
+The number of delivered requirements divided by the total number of requirements in the given data range.
+
+<b>Data Sources Required</b>
+
+This metric relies on the issues collected from Jira, GitHub, or TAPD.
+
+<b>Transformation Rules Required</b>
+
+This metric relies on the 'type-requirement' configuration in Jira, GitHub or TAPD transformation rules to let DevLake know what issues can be regarded as `Requirements`.
+
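+<b>SQL Queries</b>
+
+Below is a minimal sketch of the calculation, following the same tables and the 'DONE' status convention used in the [Requirement Count](./RequirementCount.md) queries; adjust it to your own setup.
+```
+  SELECT
+    -- delivered requirements divided by all requirements in the selected time range
+    1.0 * count(distinct case when i.status = 'DONE' then i.id else null end) / count(distinct i.id) as "Requirement Delivery Rate"
+  FROM issues i
+    join board_issues bi on i.id = bi.issue_id
+  WHERE
+    i.type = 'REQUIREMENT'
+    and $__timeFilter(i.created_date)
+    -- this is the default board variable in Grafana
+    and bi.board_id in ($board_id)
+```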
+
+## How to improve?
+1. Analyze the number of requirements and delivery rate of different time cycles to find the stability and trend of the development process.
+2. Analyze and compare the number of requirements delivered and delivery rate of each project/team, and compare the scale of requirements of different projects.
+3. Based on historical data, establish a baseline of the delivery capacity of a single iteration (optimistic, probable and pessimistic values) to provide a reference for iteration estimation.
+4. Drill down to analyze the number and percentage of requirements in different phases of SDLC. Analyze rationality and identify the requirements stuck in the backlog. 
diff --git a/versioned_docs/version-v0.13/Metrics/RequirementGranularity.md b/versioned_docs/version-v0.13/Metrics/RequirementGranularity.md
new file mode 100644
index 00000000..03bb9176
--- /dev/null
+++ b/versioned_docs/version-v0.13/Metrics/RequirementGranularity.md
@@ -0,0 +1,34 @@
+---
+title: "Requirement Granularity"
+description: >
+  Requirement Granularity
+sidebar_position: 5
+---
+
+## What is this metric? 
+The average number of story points per requirement.
+
+## Why is it important?
+1. Promote product teams to split requirements carefully, improve requirements quality, help developers understand requirements clearly, deliver efficiently and with high quality, and improve the project management capability of the team.
+2. Establish a data-supported workload estimation model to help R&D teams calibrate their estimation methods and more accurately assess the granularity of requirements, which is useful to achieve better issue planning in project management.
+
+## Which dashboard(s) does it exist in
+- Jira
+- GitHub
+
+
+## How is it calculated?
+The average story points of issues of type "REQUIREMENT" in the given data range.
+
+<b>Data Sources Required</b>
+
+This metric relies on issues collected from Jira, GitHub, or TAPD.
+
+<b>Transformation Rules Required</b>
+
+This metric relies on the 'type-requirement' configuration in Jira, GitHub or TAPD transformation rules to let DevLake know what issues can be regarded as `Requirements`.
+
+
+## How to improve?
+1. Analyze the story points/requirement lead time of requirements to evaluate whether the ticket size, i.e. requirement complexity, is optimal.
+2. Compare the estimated requirement granularity with the actual situation and evaluate whether the difference is reasonable by combining more microscopic workload metrics (e.g. lines of code/code equivalents)
diff --git a/versioned_docs/version-v0.13/Metrics/RequirementLeadTime.md b/versioned_docs/version-v0.13/Metrics/RequirementLeadTime.md
new file mode 100644
index 00000000..74061d63
--- /dev/null
+++ b/versioned_docs/version-v0.13/Metrics/RequirementLeadTime.md
@@ -0,0 +1,36 @@
+---
+title: "Requirement Lead Time"
+description: >
+  Requirement Lead Time
+sidebar_position: 4
+---
+
+## What is this metric? 
+The amount of time it takes to deliver a requirement.
+
+## Why is it important?
+1. Analyze key projects and critical points, identify good/to-be-improved practices that affect requirement lead time, and reduce the risk of delays
+2. Focus on the end-to-end velocity of the value delivery process; coordinate different parts of R&D to avoid efficiency silos; make targeted improvements to bottlenecks.
+
+## Which dashboard(s) does it exist in
+- Jira
+- GitHub
+- Community Experience
+
+
+## How is it calculated?
+This metric equals `resolution_date` - `created_date` of issues of type "REQUIREMENT".
+
+<b>Data Sources Required</b>
+
+This metric relies on issues collected from Jira, GitHub, or TAPD.
+
+<b>Transformation Rules Required</b>
+
+This metric relies on the 'type-requirement' configuration in Jira, GitHub or TAPD transformation rules to let DevLake know what issues can be regarded as `Requirements`.
+
+
+## How to improve?
+1. Analyze the trend of requirement lead time to observe if it has improved over time.
+2. Analyze and compare the requirement lead time of each project/team to identify key projects with abnormal lead time.
+3. Drill down to analyze a requirement's staying time in different phases of SDLC. Analyze the bottleneck of delivery velocity and improve the workflow.
\ No newline at end of file
diff --git a/versioned_docs/version-v0.13/Metrics/ReviewDepth.md b/versioned_docs/version-v0.13/Metrics/ReviewDepth.md
new file mode 100644
index 00000000..59bcfbe8
--- /dev/null
+++ b/versioned_docs/version-v0.13/Metrics/ReviewDepth.md
@@ -0,0 +1,34 @@
+---
+title: "PR Review Depth"
+description: >
+  PR Review Depth
+sidebar_position: 2
+---
+
+## What is this metric? 
+The average number of comments of PRs in the selected time range.
+
+## Why is it important?
+PR Review Depth (in comments per PR) is related to the quality of code review, indicating how thoroughly your team reviews PRs.
+
+## Which dashboard(s) does it exist in?
+- Engineering Throughput and Cycle Time
+- Engineering Throughput and Cycle Time - Team View
+
+## How is it calculated?
+This metric is calculated as the total number of PR comments divided by the total number of PRs in the selected time range.
+
+<b>Data Sources Required</b>
+
+This metric relies on PR/MRs collected from GitHub or GitLab.
+
+<b>Transformation Rules Required</b>
+
+N/A
+
+<b>SQL Queries</b>
+
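+Below is a minimal sketch of the calculation, assuming the domain-layer `pull_requests` and `pull_request_comments` tables are populated; adjust table and column names to your own schema.
+```
+  SELECT
+    -- total comments divided by the number of PRs created in the selected time range
+    1.0 * count(prc.id) / count(distinct pr.id) as "Comments per PR"
+  FROM pull_requests pr
+    left join pull_request_comments prc on pr.id = prc.pull_request_id
+  WHERE $__timeFilter(pr.created_date)
+```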
+
+## How to improve?
+1. Encourage multiple reviewers to review a PR;
+2. Review Depth indicates how thoroughly your PRs are reviewed in general, but deeper is not always better. In some cases, spending an excessive amount of resources on reviewing PRs is not recommended either.
\ No newline at end of file
diff --git a/versioned_docs/version-v0.13/Metrics/ReviewTime.md b/versioned_docs/version-v0.13/Metrics/ReviewTime.md
new file mode 100644
index 00000000..8cfe080b
--- /dev/null
+++ b/versioned_docs/version-v0.13/Metrics/ReviewTime.md
@@ -0,0 +1,39 @@
+---
+title: "PR Review Time"
+description: >
+  PR Review Time
+sidebar_position: 2
+---
+
+## What is this metric? 
+The time it takes to complete a code review of a PR before it gets merged. 
+
+## Why is it important?
+Code review should be conducted almost in real-time and usually takes less than two days. An abnormally long PR Review Time may indicate one or more of the following problems:
+1. The PR size is too large, making it difficult to review.
+2. The team is too busy to review code.
+
+## Which dashboard(s) does it exist in?
+- Engineering Throughput and Cycle Time
+- Engineering Throughput and Cycle Time - Team View
+
+
+## How is it calculated?
+This metric is the time from when the first comment is added to a PR to when the PR is merged.
+
+<b>Data Sources Required</b>
+
+This metric relies on PR/MRs collected from GitHub or GitLab.
+
+<b>Transformation Rules Required</b>
+
+N/A
+
+<b>SQL Queries</b>
+
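+Below is a minimal sketch of the calculation, measuring the time from the first comment of a PR to its merge. It assumes the domain-layer `pull_requests` and `pull_request_comments` tables are populated; adjust it to your own setup.
+```
+  with _first_comment as (
+    -- the earliest comment of each PR
+    SELECT pull_request_id, min(created_date) as first_comment_date
+    FROM pull_request_comments
+    GROUP BY pull_request_id
+  )
+
+  SELECT
+    avg(TIMESTAMPDIFF(HOUR, fc.first_comment_date, pr.merged_date)) as "Average PR Review Time in Hours"
+  FROM pull_requests pr
+    join _first_comment fc on pr.id = fc.pull_request_id
+  WHERE
+    pr.merged_date is not null
+    and $__timeFilter(pr.created_date)
+```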
+
+## How to improve?
+1. Use DevLake's dashboards to monitor your delivery progress;
+2. Use automated tests for the initial work;
+3. Reduce PR size;
+4. Analyze the causes for long reviews.
\ No newline at end of file
diff --git a/versioned_docs/version-v0.13/Metrics/TimeToMerge.md b/versioned_docs/version-v0.13/Metrics/TimeToMerge.md
new file mode 100644
index 00000000..04a39225
--- /dev/null
+++ b/versioned_docs/version-v0.13/Metrics/TimeToMerge.md
@@ -0,0 +1,36 @@
+---
+title: "PR Time To Merge"
+description: >
+  PR Time To Merge
+sidebar_position: 2
+---
+
+## What is this metric? 
+The time it takes from when a PR is issued to when it is merged. Essentially, PR Time to Merge = PR Pickup Time + PR Review Time.
+
+## Why is it important?
+The delay in reviewing and waiting to review PRs has a large impact on delivery speed, while a reasonably short PR Time to Merge can indicate frictionless teamwork. Improving this metric is key to reducing PR cycle time.
+
+## Which dashboard(s) does it exist in?
+- GitHub Basic Metrics
+- Bi-weekly Community Retro
+
+
+## How is it calculated?
+<b>Data Sources Required</b>
+
+This metric relies on PR/MRs collected from GitHub or GitLab.
+
+<b>Transformation Rules Required</b>
+
+N/A
+
+<b>SQL Queries</b>
+
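+Below is a minimal sketch of the calculation, assuming the domain-layer `pull_requests` table is populated and merged PRs carry a `merged_date`; adjust it to your own setup.
+```
+  SELECT
+    avg(TIMESTAMPDIFF(HOUR, created_date, merged_date)) as "Average PR Time to Merge in Hours"
+  FROM pull_requests
+  WHERE
+    merged_date is not null
+    and $__timeFilter(created_date)
+```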
+
+## How to improve?
+1. Use DevLake's dashboards to monitor your delivery progress;
+2. Have a habit to check for hanging PRs regularly;
+3. Set up alerts for your communication tools (e.g. Slack, Lark) when new PRs are issued;
+4. Reduce PR size;
+5. Analyze the causes for long reviews.
diff --git a/versioned_docs/version-v0.13/Metrics/_category_.json b/versioned_docs/version-v0.13/Metrics/_category_.json
new file mode 100644
index 00000000..e944147d
--- /dev/null
+++ b/versioned_docs/version-v0.13/Metrics/_category_.json
@@ -0,0 +1,8 @@
+{
+  "label": "Metrics",
+  "position": 5,
+  "link":{
+    "type": "generated-index",
+    "slug": "Metrics"
+  }
+}
diff --git a/versioned_docs/version-v0.13/Overview/Architecture.md b/versioned_docs/version-v0.13/Overview/Architecture.md
new file mode 100755
index 00000000..d4c6a9c5
--- /dev/null
+++ b/versioned_docs/version-v0.13/Overview/Architecture.md
@@ -0,0 +1,39 @@
+---
+title: "Architecture"
+description: >
+  Understand the architecture of Apache DevLake
+sidebar_position: 2
+---
+
+## Architecture Overview
+
+<p align="center"><img src="/img/Architecture/arch-component.svg" /></p>
+<p align="center">DevLake Components</p>
+
+A DevLake installation typically consists of the following components:
+
+- Config UI: A handy user interface to create, trigger, and debug Blueprints. A Blueprint specifies the where (data connection), what (data scope), how (transformation rule), and when (sync frequency) of a data pipeline.
+- API Server: The main programmatic interface of DevLake.
+- Runner: The runner does all the heavy-lifting for executing tasks. In the default DevLake installation, it runs within the API Server, but DevLake provides a Temporal-based runner (beta) for production environments.
+- Database: The database stores both DevLake's metadata and user data collected by data pipelines. DevLake supports MySQL and PostgreSQL as of v0.11.
+- Plugins: Plugins enable DevLake to collect and analyze dev data from any DevOps tools with an accessible API. DevLake community is actively adding plugins for popular DevOps tools, but if your preferred tool is not covered yet, feel free to open a GitHub issue to let us know or check out our doc on how to build a new plugin by yourself.
+- Dashboards: Dashboards deliver data and insights to DevLake users. A dashboard is simply a collection of SQL queries along with corresponding visualization configurations. DevLake's official dashboard tool is Grafana and pre-built dashboards are shipped in Grafana's JSON format. Users are welcome to swap for their own choice of dashboard/BI tool if desired.
+
+## Dataflow
+
+<p align="center"><img src="/img/Architecture/arch-dataflow.svg" /></p>
+<p align="center">DevLake Dataflow</p>
+
+A typical plugin's dataflow is illustrated below:
+
+1. The Raw layer stores the API responses from data sources (DevOps tools) in JSON. This saves developers' time if the raw data is to be transformed differently later on. Please note that communicating with data sources' APIs is usually the most time-consuming step.
+2. The Tool layer extracts raw data from JSONs into a relational schema that's easier to consume by analytical tasks. Each DevOps tool would have a schema that's tailored to their data structure, hence the name, the Tool layer.
+3. The Domain layer attempts to build a layer of abstraction on top of the Tool layer so that analytics logics can be re-used across different tools. For example, GitHub's Pull Request (PR) and GitLab's Merge Request (MR) are similar entities. They each have their own table name and schema in the Tool layer, but they're consolidated into a single entity in the Domain layer, so that developers only need to implement metrics like Cycle Time and Code Review Rounds once against the domain layer.
+
+## Principles
+
+1. Extensible: DevLake's plugin system allows users to integrate with any DevOps tool. DevLake also provides a dbt plugin that enables users to define their own data transformation and analysis workflows.
+2. Portable: DevLake has a modular design and provides multiple options for each module. Users of different setups can freely choose the right configuration for themselves.
+3. Robust: DevLake provides an SDK to help plugins efficiently and reliably collect data from data sources while respecting their API rate limits and constraints.
+
+<br/>
diff --git a/versioned_docs/version-v0.13/Overview/Introduction.md b/versioned_docs/version-v0.13/Overview/Introduction.md
new file mode 100755
index 00000000..4b692ff2
--- /dev/null
+++ b/versioned_docs/version-v0.13/Overview/Introduction.md
@@ -0,0 +1,39 @@
+---
+title: "Introduction"
+description: General introduction of Apache DevLake
+sidebar_position: 1
+---
+
+## What is Apache DevLake?
+Apache DevLake is an open-source dev data platform that ingests, analyzes, and visualizes the fragmented data from DevOps tools to distill insights for engineering productivity.
+
+Apache DevLake is designed for developer teams looking to make better sense of their development process and to bring a more data-driven approach to their own practices. You can ask Apache DevLake many questions regarding your development process. Just connect and query.
+
+## What can be accomplished with DevLake?
+1. Collect DevOps data across the entire Software Development Life Cycle (SDLC) and connect the siloed data with a standard [data model](../DataModels/DevLakeDomainLayerSchema.md).
+2. Visualize out-of-the-box [engineering metrics](../Metrics) in a series of use-case driven dashboards.
+3. Easily extend DevLake to support your data sources, metrics, and dashboards with a flexible [framework](Architecture.md) for data collection and ETL (Extract, Transform, Load).
+
+## How do I use DevLake?
+### 1. Set up DevLake
+You can easily set up Apache DevLake by following our step-by-step instructions for [Docker Compose setup](../GettingStarted/DockerComposeSetup.md) or [Kubernetes setup](../GettingStarted/KubernetesSetup.md).
+
+### 2. Create a Blueprint
+The DevLake Configuration UI will guide you through the process (a Blueprint) to define the data connections, data scope, transformation and sync frequency of the data you wish to collect.
+
+![img](/img/Introduction/userflow1.svg)
+
+### 3. Track the Blueprint's progress
+You can track the progress of the Blueprint you have just set up.
+
+![img](/img/Introduction/userflow2.svg)
+
+### 4. View the pre-built dashboards
+Once the first run of the Blueprint is completed, you can view the corresponding dashboards.
+
+![img](/img/Introduction/userflow3.png)
+
+### 5. Customize the dashboards with SQL
+If the pre-built dashboards are limited for your use cases, you can always customize or create your own metrics or dashboards with SQL.
+
+![img](/img/Introduction/userflow4.png)
diff --git a/versioned_docs/version-v0.13/Overview/Roadmap.md b/versioned_docs/version-v0.13/Overview/Roadmap.md
new file mode 100644
index 00000000..6695584e
--- /dev/null
+++ b/versioned_docs/version-v0.13/Overview/Roadmap.md
@@ -0,0 +1,33 @@
+---
+title: "Roadmap"
+description: >
+  The goals and roadmap for DevLake in 2022
+sidebar_position: 3
+---
+
+
+## Goals
+DevLake has joined the Apache Incubator and is aiming to become a top-level project. To achieve this goal, the Apache DevLake (Incubating) community will continue working to help development teams analyze and improve their engineering productivity. In the 2022 Roadmap, we have summarized three major goals, followed by a feature breakdown, to invite the broader community to join us and grow together.
+
+1. As a dev data analysis application, discover and implement 3 (or even more!) usage scenarios:
+   - A collection of metrics to track the contribution, quality and growth of open-source projects
+   - DORA metrics for DevOps engineers
+   - To be decided ([let us know](https://join.slack.com/t/devlake-io/shared_invite/zt-17b6vuvps-x98pqseoUagM7EAmKC82xQ) if you have any suggestions!)
+2. As dev data infrastructure, provide robust data collection modules, customizable data models, and data extensibility.
+3. Design better user experience for end-users and contributors.
+
+## Feature Breakdown
+Apache DevLake is currently under rapid development. You are more than welcome to use the following table to explore the features you are interested in and make contributions. We deeply appreciate the collective effort of our community to make this project possible!
+
+| Category | Features|
+| --- | --- |
+| More data sources across different [DevOps domains](../DataModels/DevLakeDomainLayerSchema.md) (Goal No.1 & 2)| Features in **bold** are of higher priority <br/><br/> Issue/Task Management: <ul><li>**Jira server** [#886 (closed)](https://github.com/apache/incubator-devlake/issues/886)</li><li>**Jira data center** [#1687 (closed)](https://github.com/apache/incubator-devlake/issues/1687)</li><li>GitLab Issues [#715 (closed)](https://github.com/apache/incubator-devlake/issues/715)</li><li [...]
+| Improved data collection, [data models](../DataModels/DevLakeDomainLayerSchema.md) and data extensibility (Goal No.2)| Data Collection: <br/> <ul><li>Complete the logging system</li><li>Implement a good error handling mechanism during data collection</li></ul> Data Models:<ul><li>Introduce DBT to allow users to create and modify the domain layer schema. [#1479 (closed)](https://github.com/apache/incubator-devlake/issues/1479)</li><li>Design the data models for 5 new domains, please ref [...]
+| Better user experience (Goal No.3) | For new users: <ul><li> Iterate on a clearer step-by-step guide to improve the pre-configuration experience.</li><li>Provide a new Config UI to reduce frictions for data configuration [#1700 (in-progress)](https://github.com/apache/incubator-devlake/issues/1700)</li><li> Showcase dashboard live demos to let users explore and learn about the dashboards. [#1784 (open)](https://github.com/apache/incubator-devlake/issues/1784)</li></ul>For returning use [...]
+
+
+## How to Influence the Roadmap
+A roadmap is only useful when it captures real user needs. We are glad to hear from you if you have specific use cases, feedback, or ideas. You can submit an issue to let us know!
+Also, if you plan to work (or are already working) on a new or existing feature, tell us, so that we can update the roadmap accordingly. We are happy to share knowledge and context to help your feature land successfully.
+<br/><br/><br/>
+
diff --git a/versioned_docs/version-v0.13/Overview/_category_.json b/versioned_docs/version-v0.13/Overview/_category_.json
new file mode 100644
index 00000000..3e819ddc
--- /dev/null
+++ b/versioned_docs/version-v0.13/Overview/_category_.json
@@ -0,0 +1,8 @@
+{
+  "label": "Overview",
+  "position": 1,
+  "link":{
+    "type": "generated-index",
+    "slug": "Overview"
+  }
+}
diff --git a/versioned_docs/version-v0.13/Plugins/_category_.json b/versioned_docs/version-v0.13/Plugins/_category_.json
new file mode 100644
index 00000000..bbea8d59
--- /dev/null
+++ b/versioned_docs/version-v0.13/Plugins/_category_.json
@@ -0,0 +1,8 @@
+{
+  "label": "Plugins",
+  "position": 9,
+  "link":{
+    "type": "generated-index",
+    "slug": "Plugins"
+  }
+}
diff --git a/versioned_docs/version-v0.13/Plugins/dbt.md b/versioned_docs/version-v0.13/Plugins/dbt.md
new file mode 100644
index 00000000..059bf12c
--- /dev/null
+++ b/versioned_docs/version-v0.13/Plugins/dbt.md
@@ -0,0 +1,67 @@
+---
+title: "DBT"
+description: >
+  DBT Plugin
+---
+
+
+## Summary
+
+dbt (data build tool) enables analytics engineers to transform data in their warehouses by simply writing select statements. dbt handles turning these select statements into tables and views.
+dbt does the T in ELT (Extract, Load, Transform) processes – it doesn’t extract or load data, but it’s extremely good at transforming data that’s already loaded into your warehouse.
+
+## User setup<a id="user-setup"></a>
+- If you plan to use this plugin, you need to install the following dependencies first.
+
+#### Required Packages to Install<a id="user-setup-requirements"></a>
+- [python3.7+](https://www.python.org/downloads/)
+- [dbt-mysql](https://pypi.org/project/dbt-mysql/#configuring-your-profile)
+
+#### Commands to run in your terminal and in the dbt project<a id="user-setup-commands"></a>
+1. `pip install dbt-mysql`
+2. `dbt init demoapp` (`demoapp` is the project name)
+3. Create your SQL transformations and data models (a minimal sketch follows)
+
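+For example, a minimal version of these steps might look like the following (just a sketch; `demoapp` and `my_first_dbt_model` are placeholder names, and the model body is a trivial placeholder select statement):
+
+```
+# 1. install the MySQL adapter for dbt
+pip install dbt-mysql
+
+# 2. scaffold a new dbt project named "demoapp"
+dbt init demoapp
+
+# 3. add a simple model (a select statement) under the models directory
+cat > demoapp/models/my_first_dbt_model.sql <<'SQL'
+select 1 as id
+SQL
+```
+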
+## Convert Data By DBT
+
+Use the Raw JSON API to manually initiate a run using **cURL** or a graphical API tool such as **Postman**. `POST` the following request to the DevLake API Endpoint.
+
+```json
+[
+  [
+    {
+      "plugin": "dbt",
+      "options": {
+          "projectPath": "/Users/abeizn/demoapp",
+          "projectName": "demoapp",
+          "projectTarget": "dev",
+          "selectedModels": ["my_first_dbt_model","my_second_dbt_model"],
+          "projectVars": {
+            "demokey1": "demovalue1",
+            "demokey2": "demovalue2"
+        }
+      }
+    }
+  ]
+]
+```
+
+- `projectPath`: the absolute path of the dbt project. (required)
+- `projectName`: the name of the dbt project. (required)
+- `projectTarget`: this is the default target your dbt project will use. (optional)
+- `selectedModels`: a model is a select statement. Models are defined in .sql files, typically in your models directory. (required)
+`selectedModels` accepts one or more arguments. Each argument can be one of:
+1. a package name, which runs all models in your project, e.g. `example`
+2. a model name, which runs a specific model, e.g. `my_first_dbt_model`
+3. a fully-qualified path to a directory of models.
+
+- `projectVars`: variables to parametrize dbt models. (optional)
+For example:
+`select * from events where event_type = '{{ var("event_type") }}'`
+To execute this SQL query in your model, you need to set a value for `event_type`.
+
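+As a concrete example, the tasks array above can be wrapped into a `/pipelines` request and sent with cURL, following the same pattern as the other plugins (a sketch; adjust the host, pipeline name, and project path to your own setup):
+
+```
+curl 'http://localhost:8080/pipelines' \
+--header 'Content-Type: application/json' \
+--data-raw '
+{
+    "name": "dbt demoapp",
+    "tasks": [[{
+        "plugin": "dbt",
+        "options": {
+            "projectPath": "/Users/abeizn/demoapp",
+            "projectName": "demoapp",
+            "selectedModels": ["my_first_dbt_model"]
+        }
+    }]]
+}
+'
+```
+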
+### Resources:
+- Learn more about dbt [in the docs](https://docs.getdbt.com/docs/introduction)
+- Check out [Discourse](https://discourse.getdbt.com/) for commonly asked questions and answers
+
+<br/><br/><br/>
diff --git a/versioned_docs/version-v0.13/Plugins/feishu.md b/versioned_docs/version-v0.13/Plugins/feishu.md
new file mode 100644
index 00000000..6cd596f6
--- /dev/null
+++ b/versioned_docs/version-v0.13/Plugins/feishu.md
@@ -0,0 +1,71 @@
+---
+title: "Feishu"
+description: >
+  Feishu Plugin
+---
+
+## Summary
+
+This plugin collects Feishu meeting data through [Feishu Openapi](https://open.feishu.cn/document/home/user-identity-introduction/introduction).
+
+## Configuration
+
+In order to fully use this plugin, you will need to get an `app_id` and `app_secret` from a Feishu administrator (for help on App info, please see the [official Feishu Docs](https://open.feishu.cn/document/ukTMukTMukTM/ukDNz4SO0MjL5QzM/auth-v3/auth/tenant_access_token_internal)).
+
+A connection should be created before you can collect any data. Currently, this plugin supports creating a connection by requesting the `connections` API:
+
+```
+curl 'http://localhost:8080/plugins/feishu/connections' \
+--header 'Content-Type: application/json' \
+--data-raw '
+{
+    "name": "feishu",
+    "endpoint": "https://open.feishu.cn/open-apis/vc/v1/",
+    "proxy": "http://localhost:1080",
+    "rateLimitPerHour": 20000,
+    "appId": "<YOUR_APP_ID>",
+    "appSecret": "<YOUR_APP_SECRET>"
+}
+'
+```
+
+## Collect data from Feishu
+
+To collect data, select `Advanced Mode` on the `Create Pipeline Run` page and paste a JSON config like the following:
+
+
+```json
+[
+  [
+    {
+      "plugin": "feishu",
+      "options": {
+        "connectionId": 1,
+        "numOfDaysToCollect" : 80
+      }
+    }
+  ]
+]
+```
+
+> `numOfDaysToCollect`: The number of days of data you want to collect
+
+> `rateLimitPerSecond`: The number of requests to send per second (maximum is 8)
+
+You can also trigger data collection by making a POST request to `/pipelines`.
+```
+curl 'http://localhost:8080/pipelines' \
+--header 'Content-Type: application/json' \
+--data-raw '
+{
+    "name": "feishu 20211126",
+    "tasks": [[{
+      "plugin": "feishu",
+      "options": {
+        "connectionId": 1,
+        "numOfDaysToCollect" : 80
+      }
+    }]]
+}
+'
+```
diff --git a/versioned_docs/version-v0.13/Plugins/gitee.md b/versioned_docs/version-v0.13/Plugins/gitee.md
new file mode 100644
index 00000000..ffed3f53
--- /dev/null
+++ b/versioned_docs/version-v0.13/Plugins/gitee.md
@@ -0,0 +1,106 @@
+---
+title: "Gitee(WIP)"
+description: >
+  Gitee Plugin
+---
+
+## Summary
+
+This plugin collects `Gitee` data through [Gitee Openapi](https://gitee.com/api/v5/swagger).
+
+## Configuration
+
+In order to fully use this plugin, you will need to get a `token` from the Gitee website.
+
+A connection should be created before you can collect any data. Currently, this plugin supports creating a connection by requesting the `connections` API:
+
+```
+curl 'http://localhost:8080/plugins/gitee/connections' \
+--header 'Content-Type: application/json' \
+--data-raw '
+{
+    "name": "gitee",
+    "endpoint": "https://gitee.com/api/v5/",
+    "proxy": "http://localhost:1080",
+    "rateLimitPerHour": 20000,
+    "token": "<YOUR_TOKEN>"
+}
+'
+```
+
+
+
+## Collect data from Gitee
+
+In order to collect data, you have to compose a JSON config like the following one and send it by selecting `Advanced Mode` on the `Create Pipeline Run` page:
+
+1. Configure-UI Mode
+```json
+[
+  [
+    {
+      "plugin": "gitee",
+      "options": {
+        "connectionId": 1,
+        "repo": "lake",
+        "owner": "merico-dev"
+      }
+    }
+  ]
+]
+```
+If you only want to perform certain subtasks, specify them with the `subtasks` field:
+```json
+[
+  [
+    {
+      "plugin": "gitee",
+      "subtasks": ["collectXXX", "extractXXX", "convertXXX"],
+      "options": {
+        "connectionId": 1,
+        "repo": "lake",
+        "owner": "merico-dev"
+      }
+    }
+  ]
+]
+```
+
+2. Curl Mode:
+   You can also trigger data collection by making a POST request to `/pipelines`.
+```
+curl 'http://localhost:8080/pipelines' \
+--header 'Content-Type: application/json' \
+--data-raw '
+{
+    "name": "gitee 20211126",
+    "tasks": [[{
+        "plugin": "gitee",
+        "options": {
+            "connectionId": 1,
+            "repo": "lake",
+            "owner": "merico-dev"
+        }
+    }]]
+}
+'
+```
+If you only want to perform certain subtasks, specify them with the `subtasks` field:
+```
+curl 'http://localhost:8080/pipelines' \
+--header 'Content-Type: application/json' \
+--data-raw '
+{
+    "name": "gitee 20211126",
+    "tasks": [[{
+        "plugin": "gitee",
+        "subtasks": ["collectXXX", "extractXXX", "convertXXX"],
+        "options": {
+            "connectionId": 1,
+            "repo": "lake",
+            "owner": "merico-dev"
+        }
+    }]]
+}
+'
+```
diff --git a/versioned_docs/version-v0.13/Plugins/gitextractor.md b/versioned_docs/version-v0.13/Plugins/gitextractor.md
new file mode 100644
index 00000000..c524a616
--- /dev/null
+++ b/versioned_docs/version-v0.13/Plugins/gitextractor.md
@@ -0,0 +1,64 @@
+---
+title: "GitExtractor"
+description: >
+  GitExtractor Plugin
+---
+
+## Summary
+This plugin extracts commits and references from a remote or local git repository. It then saves the data into the database or CSV files.
+
+## Steps to make this plugin work
+
+1. Use the Git repo extractor to retrieve data about commits and branches from your repository.
+2. Use the GitHub plugin to retrieve data about GitHub issues and PRs from your repository.
+NOTE: you can run only one issue collection stage as described in the GitHub plugin README.
+3. Use the [RefDiff](./refdiff.md) plugin to calculate version diffs, which will be stored in the `refs_commits_diffs` table.
+
+## Sample Request
+
+```
+curl --location --request POST 'localhost:8080/pipelines' \
+--header 'Content-Type: application/json' \
+--data-raw '
+{
+    "name": "git repo extractor",
+    "tasks": [
+        [
+            {
+                "Plugin": "gitextractor",
+                "Options": {
+                    "url": "https://github.com/merico-dev/lake.git",
+                    "repoId": "github:GithubRepo:384111310"
+                }
+            }
+        ]
+    ]
+}
+'
+```
+- `url`: the location of the git repository. It should start with `http`/`https` for a remote git repository and with `/` for a local one.
+- `repoId`: the `id` column of the `repos` table.
+   Note: for GitHub, you can find the repo id by running `$("meta[name=octolytics-dimension-repository_id]").getAttribute('content')` in the browser console.
+- `proxy`: optional, HTTP proxy, e.g. `http://your-proxy-server.com:1080`.
+- `user`: optional, for cloning a private repository over HTTP/HTTPS (see the example below)
+- `password`: optional, for cloning a private repository over HTTP/HTTPS
+- `privateKey`: optional, for SSH cloning, a base64-encoded `PEM` file
+- `passphrase`: optional, passphrase for the private key
+
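+For example, a hypothetical request for cloning a private repository over HTTPS might look like this (a sketch; the repository URL, repoId, and credentials are placeholders to be replaced with your own values):
+
+```
+curl --location --request POST 'localhost:8080/pipelines' \
+--header 'Content-Type: application/json' \
+--data-raw '
+{
+    "name": "private git repo extractor",
+    "tasks": [
+        [
+            {
+                "Plugin": "gitextractor",
+                "Options": {
+                    "url": "https://example.com/your-org/your-private-repo.git",
+                    "repoId": "github:GithubRepo:123456789",
+                    "user": "<YOUR_USERNAME>",
+                    "password": "<YOUR_PASSWORD_OR_TOKEN>"
+                }
+            }
+        ]
+    ]
+}
+'
+```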
+
+## Standalone Mode
+
+You can also run this plugin in standalone mode, without any DevLake service running, using the following command:
+
+```
+go run plugins/gitextractor/main.go -url https://github.com/merico-dev/lake.git -id github:GithubRepo:384111310 -db "merico:merico@tcp(127.0.0.1:3306)/lake?charset=utf8mb4&parseTime=True"
+```
+
+For more options (e.g., saving to a CSV file instead of a database), please read `plugins/gitextractor/main.go`.
+
+## Development
+
+This plugin depends on `libgit2`; you need to install version 1.3.0 in order to run and debug this plugin on your local
+machine. [Click here](./refdiff.md#Development) for a brief guide.
+
+<br/><br/><br/>
diff --git a/versioned_docs/version-v0.13/Plugins/github-connection-in-config-ui.png b/versioned_docs/version-v0.13/Plugins/github-connection-in-config-ui.png
new file mode 100644
index 00000000..5359fb15
Binary files /dev/null and b/versioned_docs/version-v0.13/Plugins/github-connection-in-config-ui.png differ
diff --git a/versioned_docs/version-v0.13/Plugins/github.md b/versioned_docs/version-v0.13/Plugins/github.md
new file mode 100644
index 00000000..6f76b2dc
--- /dev/null
+++ b/versioned_docs/version-v0.13/Plugins/github.md
@@ -0,0 +1,67 @@
+---
+title: "GitHub"
+description: >
+  GitHub Plugin
+---
+
+
+
+## Summary
+
+This plugin gathers data from `GitHub` to display information to the user in `Grafana`. We can help tech leaders answer such questions as:
+
+- Is this month more productive than last?
+- How fast do we respond to customer requirements?
+- Was our quality improved or not?
+
+## Metrics
+
+Here are some example metrics using `GitHub` data:
+- Avg Requirement Lead Time By Assignee
+- Bug Count per 1k Lines of Code
+- Commit Count over Time
+
+## Screenshot
+
+![image](/img/Plugins/github-demo.png)
+
+
+## Configuration
+- Configuring GitHub via [config-ui](/UserManuals/ConfigUI/GitHub.md).
+
+## Sample Request
+To collect data, select `Advanced Mode` on the `Create Pipeline Run` page and paste a JSON config like the following:
+
+```json
+[
+  [
+    {
+      "plugin": "github",
+      "options": {
+        "connectionId": 1,
+        "repo": "lake",
+        "owner": "merico-dev"
+      }
+    }
+  ]
+]
+```
+
+You can also trigger data collection by making a POST request to `/pipelines`.
+```
+curl 'http://localhost:8080/pipelines' \
+--header 'Content-Type: application/json' \
+--data-raw '
+{
+    "name": "github 20211126",
+    "tasks": [[{
+        "plugin": "github",
+        "options": {
+            "connectionId": 1,
+            "repo": "lake",
+            "owner": "merico-dev"
+        }
+    }]]
+}
+'
+```
diff --git a/versioned_docs/version-v0.13/Plugins/gitlab-connection-in-config-ui.png b/versioned_docs/version-v0.13/Plugins/gitlab-connection-in-config-ui.png
new file mode 100644
index 00000000..7aacee8d
Binary files /dev/null and b/versioned_docs/version-v0.13/Plugins/gitlab-connection-in-config-ui.png differ
diff --git a/versioned_docs/version-v0.13/Plugins/gitlab.md b/versioned_docs/version-v0.13/Plugins/gitlab.md
new file mode 100644
index 00000000..be5e1842
--- /dev/null
+++ b/versioned_docs/version-v0.13/Plugins/gitlab.md
@@ -0,0 +1,45 @@
+---
+title: "GitLab"
+description: >
+  GitLab Plugin
+---
+
+
+## Metrics
+
+| Metric Name                 | Description                                                  |
+|:----------------------------|:-------------------------------------------------------------|
+| Pull Request Count          | Number of Pull/Merge Requests                                |
+| Pull Request Pass Rate      | Ratio of Pull/Merge Review requests to merged                |
+| Pull Request Reviewer Count | Number of Pull/Merge Reviewers                               |
+| Pull Request Review Time    | Time from Pull/Merge created time until merged               |
+| Commit Author Count         | Number of Contributors                                       |
+| Commit Count                | Number of Commits                                            |
+| Added Lines                 | Accumulated Number of New Lines                              |
+| Deleted Lines               | Accumulated Number of Removed Lines                          |
+| Pull Request Review Rounds  | Number of cycles of commits followed by comments/final merge |
+
+## Configuration
+Configuring GitLab via [config-ui](/UserManuals/ConfigUI/GitLab.md).
+
+## Gathering Data with GitLab
+
+To collect data, you can make a POST request to `/pipelines`
+
+```
+curl --location --request POST 'localhost:8080/pipelines' \
+--header 'Content-Type: application/json' \
+--data-raw '
+{
+    "name": "gitlab 20211126",
+    "tasks": [[{
+        "plugin": "gitlab",
+        "options": {
+            "projectId": <Your gitlab project id>
+        }
+    }]]
+}
+'
+```
+
+<br/><br/><br/>
diff --git a/versioned_docs/version-v0.13/Plugins/jenkins.md b/versioned_docs/version-v0.13/Plugins/jenkins.md
new file mode 100644
index 00000000..9bb0177d
--- /dev/null
+++ b/versioned_docs/version-v0.13/Plugins/jenkins.md
@@ -0,0 +1,47 @@
+---
+title: "Jenkins"
+description: >
+  Jenkins Plugin
+---
+
+## Summary
+
+This plugin collects Jenkins data through [Remote Access API](https://www.jenkins.io/doc/book/using/remote-access-api/). It then computes and visualizes various DevOps metrics from the Jenkins data.
+
+![image](https://user-images.githubusercontent.com/61080/141943122-dcb08c35-cb68-4967-9a7c-87b63c2d6988.png)
+
+## Metrics
+
+| Metric Name        | Description                         |
+|:-------------------|:------------------------------------|
+| Build Count        | The number of builds created        |
+| Build Success Rate | The percentage of successful builds |
+
+## Configuration
+
+In order to fully use this plugin, you will need to set various configurations via DevLake's `config-ui`.
+
+### By `config-ui`
+
+The connection section of the configuration screen requires a few key fields (endpoint URL, username, and password) to connect to the Jenkins API.
+
+## Collect Data From Jenkins
+
+To collect data, select `Advanced Mode` on the `Create Pipeline Run` page and paste a JSON config like the following:
+
+```json
+[
+  [
+    {
+      "plugin": "jenkins",
+      "options": {
+        "connectionId": 1
+      }
+    }
+  ]
+]
+```
+
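+You can also trigger data collection by making a POST request to `/pipelines`, mirroring the pattern used by the other plugins (a sketch; it assumes a Jenkins connection with ID 1 has already been configured, and the pipeline name is a placeholder):
+
+```
+curl 'http://localhost:8080/pipelines' \
+--header 'Content-Type: application/json' \
+--data-raw '
+{
+    "name": "jenkins 20211126",
+    "tasks": [[{
+        "plugin": "jenkins",
+        "options": {
+            "connectionId": 1
+        }
+    }]]
+}
+'
+```
+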
+## Relationship between job and build
+
+A build is a snapshot of a job: each run of a job creates a build.
diff --git a/versioned_docs/version-v0.13/Plugins/jira-connection-config-ui.png b/versioned_docs/version-v0.13/Plugins/jira-connection-config-ui.png
new file mode 100644
index 00000000..df2e8e39
Binary files /dev/null and b/versioned_docs/version-v0.13/Plugins/jira-connection-config-ui.png differ
diff --git a/versioned_docs/version-v0.13/Plugins/jira-more-setting-in-config-ui.png b/versioned_docs/version-v0.13/Plugins/jira-more-setting-in-config-ui.png
new file mode 100644
index 00000000..dffb0c99
Binary files /dev/null and b/versioned_docs/version-v0.13/Plugins/jira-more-setting-in-config-ui.png differ
diff --git a/versioned_docs/version-v0.13/Plugins/jira.md b/versioned_docs/version-v0.13/Plugins/jira.md
new file mode 100644
index 00000000..7ac79ad0
--- /dev/null
+++ b/versioned_docs/version-v0.13/Plugins/jira.md
@@ -0,0 +1,196 @@
+---
+title: "Jira"
+description: >
+  Jira Plugin
+---
+
+
+## Summary
+
+This plugin collects Jira data through Jira Cloud REST API. It then computes and visualizes various engineering metrics from the Jira data.
+
+<img width="2035" alt="jira metric display" src="https://user-images.githubusercontent.com/2908155/132926143-7a31d37f-22e1-487d-92a3-cf62e402e5a8.png" />
+
+## Project Metrics This Covers
+
+| Metric Name                         | Description                                                                                       |
+|:------------------------------------|:--------------------------------------------------------------------------------------------------|
+| Requirement Count	                  | Number of issues with type "Requirement"                                                          |
+| Requirement Lead Time	              | Lead time of issues with type "Requirement"                                                       |
+| Requirement Delivery Rate           | Ratio of delivered requirements to all requirements                                               |
+| Requirement Granularity             | Number of story points associated with an issue                                                   |
+| Bug Count	                          | Number of issues with type "Bug"<br/><i>bugs are found during testing</i>                         |
+| Bug Age	                          | Lead time of issues with type "Bug"<br/><i>both new and deleted lines count</i>                   |
+| Bugs Count per 1k Lines of Code     | Amount of bugs per 1000 lines of code                                                             |
+| Incident Count                      | Number of issues with type "Incident"<br/><i>incidents are found when running in production</i>   |
+| Incident Age                        | Lead time of issues with type "Incident"                                                          |
+| Incident Count per 1k Lines of Code | Amount of incidents per 1000 lines of code                                                        |
+
+## Configuration
+Configuring Jira via [config-ui](/UserManuals/ConfigUI/Jira.md).
+
+## Collect Data From JIRA
+
+To collect data, select `Advanced Mode` on the `Create Pipeline Run` page and paste a JSON config like the following:
+
+> <font color="#ED6A45">Warning: Data collection only supports single-task execution, and the results of concurrent multi-task execution may not meet expectations.</font>
+
+```
+[
+  [
+    {
+      "plugin": "jira",
+      "options": {
+          "connectionId": 1,
+          "boardId": 8,
+          "since": "2006-01-02T15:04:05Z"
+      }
+    }
+  ]
+]
+```
+
+- `connectionId`: The `ID` field from **JIRA Integration** page.
+- `boardId`: JIRA board id, see "Find Board Id" for details.
+- `since`: optional, download data since a specified date only.
+
+
+## API
+
+### Data Connections
+
+1. Get all data connections
+
+```GET /plugins/jira/connections
+[
+  {
+    "ID": 14,
+    "CreatedAt": "2021-10-11T11:49:19.029Z",
+    "UpdatedAt": "2021-10-11T11:49:19.029Z",
+    "name": "test-jira-connection",
+    "endpoint": "https://merico.atlassian.net/rest",
+    "basicAuthEncoded": "basicAuth",
+    "epicKeyField": "epicKeyField",
+      "storyPointField": "storyPointField"
+  }
+]
+```
+
+2. Create a new data connection (see the curl example at the end of this section)
+
+```POST /plugins/jira/connections
+{
+	"name": "jira data connection name",
+	"endpoint": "jira api endpoint, i.e. https://merico.atlassian.net/rest",
+    "basicAuthEncoded": "generated by `echo -n {jira login email}:{jira token} | base64`",
+	"epicKeyField": "name of customfield of epic key",
+	"storyPointField": "name of customfield of story point",
+	"typeMappings": { // optional, send empty object to delete all typeMappings of the data connection
+		"userType": {
+			"standardType": "devlake standard type"
+		}
+	}
+}
+```
+
+
+3. Update data connection
+
+```PUT /plugins/jira/connections/:connectionId
+{
+	"name": "jira data connection name",
+	"endpoint": "jira api endpoint, i.e. https://merico.atlassian.net/rest",
+    "basicAuthEncoded": "generated by `echo -n {jira login email}:{jira token} | base64`",
+	"epicKeyField": "name of customfield of epic key",
+	"storyPointField": "name of customfield of story point",
+	"typeMappings": { // optional, send empty object to delete all typeMappings of the data connection
+		"userType": {
+			"standardType": "devlake standard type",
+		}
+	}
+}
+```
+
+4. Get data connection detail
+```GET /plugins/jira/connections/:connectionId
+{
+	"name": "jira data connection name",
+	"endpoint": "jira api endpoint, i.e. https://merico.atlassian.net/rest",
+    "basicAuthEncoded": "generated by `echo -n {jira login email}:{jira token} | base64`",
+	"epicKeyField": "name of customfield of epic key",
+	"storyPointField": "name of customfield of story point",
+	"typeMappings": { // optional, send empty object to delete all typeMappings of the data connection
+		"userType": {
+			"standardType": "devlake standard type",
+		}
+	}
+}
+```
+
+5. Delete data connection
+
+```DELETE /plugins/jira/connections/:connectionId
+```
+
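+For a concrete end-to-end example, you can create a data connection with cURL like the following (a sketch; the email, token, connection name, and custom field names are placeholders to be replaced with your own values):
+
+```
+# generate the basicAuthEncoded value from your Jira login email and token
+echo -n '<YOUR_JIRA_EMAIL>:<YOUR_JIRA_TOKEN>' | base64
+
+# create the connection, pasting the encoded value into basicAuthEncoded
+curl 'http://localhost:8080/plugins/jira/connections' \
+--header 'Content-Type: application/json' \
+--data-raw '
+{
+    "name": "my-jira-connection",
+    "endpoint": "https://merico.atlassian.net/rest",
+    "basicAuthEncoded": "<OUTPUT_OF_THE_BASE64_COMMAND_ABOVE>",
+    "epicKeyField": "<EPIC_KEY_CUSTOM_FIELD>",
+    "storyPointField": "<STORY_POINT_CUSTOM_FIELD>"
+}
+'
+```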
+
+### Type mappings
+
+1. Get all type mappings
+```GET /plugins/jira/connections/:connectionId/type-mappings
+[
+  {
+    "jiraConnectionId": 16,
+    "userType": "userType",
+    "standardType": "standardType"
+  }
+]
+```
+
+2. Create a new type mapping
+
+```POST /plugins/jira/connections/:connectionId/type-mappings
+{
+    "userType": "userType",
+    "standardType": "standardType"
+}
+```
+
+3. Update type mapping
+
+```PUT /plugins/jira/connections/:connectionId/type-mapping/:userType
+{
+    "standardType": "standardTypeUpdated"
+}
+```
+
+
+4. Delete type mapping
+
+```DELETE /plugins/jira/connections/:connectionId/type-mapping/:userType
+```
+
+5. API forwarding
+For example:
+Requests to `http://your_devlake_host/plugins/jira/connections/1/proxy/rest/agile/1.0/board/8/sprint`
+would be forwarded to `https://your_jira_host/rest/agile/1.0/board/8/sprint`
+
+```GET /plugins/jira/connections/:connectionId/proxy/rest/*path
+{
+    "maxResults": 1,
+    "startAt": 0,
+    "isLast": false,
+    "values": [
+        {
+            "id": 7,
+            "self": "https://merico.atlassian.net/rest/agile/1.0/sprint/7",
+            "state": "closed",
+            "name": "EE Sprint 7",
+            "startDate": "2020-06-12T00:38:51.882Z",
+            "endDate": "2020-06-26T00:38:00.000Z",
+            "completeDate": "2020-06-22T05:59:58.980Z",
+            "originBoardId": 8,
+            "goal": ""
+        }
+    ]
+}
+```
diff --git a/versioned_docs/version-v0.13/Plugins/refdiff.md b/versioned_docs/version-v0.13/Plugins/refdiff.md
new file mode 100644
index 00000000..65e5f640
--- /dev/null
+++ b/versioned_docs/version-v0.13/Plugins/refdiff.md
@@ -0,0 +1,139 @@
+---
+title: "RefDiff"
+description: >
+  RefDiff Plugin
+---
+
+
+## Summary
+
+For development workload analysis, we often need to know how many commits have been created between 2 releases. This plugin calculates which commits differ between 2 refs (branches/tags), and the result will be stored back into the database for further analysis.
+
+## Important Note
+
+You need to run gitextractor before the refdiff plugin. The gitextractor plugin should create records in the `refs` table in your DB before this plugin can be run.
+
+## Configuration
+
+This is an enrichment plugin based on Domain Layer data; no configuration is needed.
+
+## How to use
+
+In order to trigger the enrichment, you need to insert a new task into your pipeline.
+
+1. Make sure `commits` and `refs` are collected into your database; the `refs` table should contain records like the following:
+```
+id                                            ref_type
+github:GithubRepo:384111310:refs/tags/0.3.5   TAG
+github:GithubRepo:384111310:refs/tags/0.3.6   TAG
+github:GithubRepo:384111310:refs/tags/0.5.0   TAG
+github:GithubRepo:384111310:refs/tags/v0.0.1  TAG
+github:GithubRepo:384111310:refs/tags/v0.2.0  TAG
+github:GithubRepo:384111310:refs/tags/v0.3.0  TAG
+github:GithubRepo:384111310:refs/tags/v0.4.0  TAG
+github:GithubRepo:384111310:refs/tags/v0.6.0  TAG
+github:GithubRepo:384111310:refs/tags/v0.6.1  TAG
+```
+2. If you want to run calculateIssuesDiff, please configure GITHUB_PR_BODY_CLOSE_PATTERN in .env. You can check the example in .env.example (we have a default value; please make sure your pattern is enclosed in single quotes '').
+3. If you want to run calculatePrCherryPick, please configure GITHUB_PR_TITLE_PATTERN in .env. You can check the example in .env.example (we have a default value; please make sure your pattern is enclosed in single quotes '').
+4. Then trigger a pipeline like the following. You can also define subtasks: calculateRefDiff will calculate the commits between two refs, and creatRefBugStats will create a table showing the bug list between two refs:
+```
+curl -v -XPOST http://localhost:8080/pipelines --data @- <<'JSON'
+{
+    "name": "test-refdiff",
+    "tasks": [
+        [
+            {
+                "plugin": "refdiff",
+                "options": {
+                    "repoId": "github:GithubRepo:384111310",
+                    "pairs": [
+                       { "newRef": "refs/tags/v0.6.0", "oldRef": "refs/tags/0.5.0" },
+                       { "newRef": "refs/tags/0.5.0", "oldRef": "refs/tags/0.4.0" }
+                    ],
+                    "tasks": [
+                        "calculateCommitsDiff",
+                        "calculateIssuesDiff",
+                        "calculatePrCherryPick"
+                    ]
+                }
+            }
+        ]
+    ]
+}
+JSON
+```
+Or, if you prefer to calculate the latest releases:
+```
+curl -v -XPOST http://localhost:8080/pipelines --data @- <<'JSON'
+{
+    "name": "test-refdiff",
+    "tasks": [
+        [
+            {
+                "plugin": "refdiff",
+                "options": {
+                    "repoId": "github:GithubRepo:384111310",
+                    "tagsPattern": "v\\d+\\.\\d+.\\d+",
+                    "tagsLimit": 10,
+                    "tagsOrder": "reverse semver",
+                    "tasks": [
+                        "calculateCommitsDiff",
+                        "calculateIssuesDiff",
+                        "calculatePrCherryPick"
+                    ]
+                }
+            }
+        ]
+    ]
+}
+JSON
+```
+
+## Development
+
+This plugin depends on `libgit2`, you need to install version 1.3.0 in order to run and debug this plugin on your local
+machine.
+
+### Ubuntu
+
+```
+apt install cmake
+git clone https://github.com/libgit2/libgit2.git
+cd libgit2
+git checkout v1.3.0
+mkdir build
+cd build
+cmake ..
+make
+make install
+```
+
+### MacOS
+
+NOTE: Do **NOT** install libgit2 via `MacPorts` or `homebrew`; install it from source instead.
+```
+brew install cmake
+git clone https://github.com/libgit2/libgit2.git
+cd libgit2
+git checkout v1.3.0
+mkdir build
+cd build
+cmake ..
+make
+make install
+```
+
+#### Troubleshooting (MacOS)
+
+> Q: I got an error saying: `pkg-config: exec: "pkg-config": executable file not found in $PATH`
+
+> A:
+> 1. Make sure you have pkg-config installed:
+>
+> `brew install pkg-config`
+>
+> 2. Make sure your pkg config path covers the installation:
+> `export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/local/lib:/usr/local/lib/pkgconfig`
+
+<br/><br/><br/>
diff --git a/versioned_docs/version-v0.13/Plugins/tapd.md b/versioned_docs/version-v0.13/Plugins/tapd.md
new file mode 100644
index 00000000..b8db89fc
--- /dev/null
+++ b/versioned_docs/version-v0.13/Plugins/tapd.md
@@ -0,0 +1,16 @@
+---
+title: "TAPD"
+description: >
+  TAPD Plugin
+---
+
+## Summary
+
+This plugin collects TAPD data.
+
+This plugin is still in development, so you cannot modify its settings in config-ui yet.
+
+## Configuration
+
+In order to fully use this plugin, you will need to get the endpoint/basic_auth_encoded/rate_limit values and insert them into the `_tool_tapd_connections` table.
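+
+For illustration only, a hypothetical insert could look like the following. This is a sketch that assumes the column names mirror the field names listed above plus a connection name; the endpoint, credentials, and rate limit are placeholders, and you should verify the actual schema of `_tool_tapd_connections` in your database before running it:
+
+```
+# column names and values below are assumptions for illustration; check your table schema first
+mysql -h 127.0.0.1 -u merico -p lake -e "
+INSERT INTO _tool_tapd_connections (name, endpoint, basic_auth_encoded, rate_limit)
+VALUES ('tapd', '<TAPD_API_ENDPOINT>', '<YOUR_BASIC_AUTH_ENCODED>', 5000);
+"
+```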
+
diff --git a/versioned_docs/version-v0.13/SupportedDataSources.md b/versioned_docs/version-v0.13/SupportedDataSources.md
new file mode 100644
index 00000000..12bdc1a3
--- /dev/null
+++ b/versioned_docs/version-v0.13/SupportedDataSources.md
@@ -0,0 +1,59 @@
+---
+title: "Supported Data Sources"
+description: >
+  Data sources that DevLake supports
+sidebar_position: 4
+---
+
+
+## Data Sources and Data Plugins
+DevLake supports the following data sources. The data from each data source is collected with one or more plugins. There are 9 data plugins in total: `ae`, `feishu`, `gitextractor`, `github`, `gitlab`, `jenkins`, `jira`, `refdiff` and `tapd`.
+
+
+| Data Source | Versions                             | Plugins |
+|-------------|--------------------------------------|-------- |
+| AE          |                                      | `ae`    |
+| Feishu      | Cloud                                |`feishu` |
+| GitHub      | Cloud                                |`github`, `gitextractor`, `refdiff` |
+| Gitlab      | Cloud, Community Edition 13.x+       |`gitlab`, `gitextractor`, `refdiff` |
+| Jenkins     | 2.263.x+                             |`jenkins` |
+| Jira        | Cloud, Server 8.x+, Data Center 8.x+ |`jira` |
+| TAPD        | Cloud                                | `tapd` |
+
+
+
+## Data Collection Scope By Each Plugin
+This table shows the entities collected by each plugin. Domain layer entities in this table are consistent with the entities [here](./DataModels/DevLakeDomainLayerSchema.md).
+
+| Domain Layer Entities | ae             | gitextractor | github         | gitlab  | jenkins | jira    | refdiff | tapd    |
+| --------------------- | -------------- | ------------ | -------------- | ------- | ------- | ------- | ------- | ------- |
+| commits               | update commits | default      | not-by-default | default |         |         |         |         |
+| commit_parents        |                | default      |                |         |         |         |         |         |
+| commit_files          |                | default      |                |         |         |         |         |         |
+| pull_requests         |                |              | default        | default |         |         |         |         |
+| pull_request_commits  |                |              | default        | default |         |         |         |         |
+| pull_request_comments |                |              | default        | default |         |         |         |         |
+| pull_request_labels   |                |              | default        |         |         |         |         |         |
+| refs                  |                | default      |                |         |         |         |         |         |
+| refs_commits_diffs    |                |              |                |         |         |         | default |         |
+| refs_issues_diffs     |                |              |                |         |         |         | default |         |
+| ref_pr_cherry_picks   |                |              |                |         |         |         | default |         |
+| repos                 |                |              | default        | default |         |         |         |         |
+| repo_commits          |                | default      | default        |         |         |         |         |         |
+| board_repos           |                |              |                |         |         |         |         |         |
+| issue_commits         |                |              |                |         |         |         |         |         |
+| issue_repo_commits    |                |              |                |         |         |         |         |         |
+| pull_request_issues   |                |              |                |         |         |         |         |         |
+| refs_issues_diffs     |                |              |                |         |         |         |         |         |
+| boards                |                |              | default        |         |         | default |         | default |
+| board_issues          |                |              | default        |         |         | default |         | default |
+| issue_changelogs      |                |              |                |         |         | default |         | default |
+| issues                |                |              | default        |         |         | default |         | default |
+| issue_comments        |                |              |                |         |         | default |         | default |
+| issue_labels          |                |              | default        |         |         |         |         |         |
+| sprints               |                |              |                |         |         | default |         | default |
+| issue_worklogs        |                |              |                |         |         | default |         | default |
+| users                 |                |              | default        |         |         | default |         | default |
+| builds                |                |              |                |         | default |         |         |         |
+| jobs                  |                |              |                |         | default |         |         |         |
+
diff --git a/versioned_docs/version-v0.13/UserManuals/ConfigUI/AdvancedMode.md b/versioned_docs/version-v0.13/UserManuals/ConfigUI/AdvancedMode.md
new file mode 100644
index 00000000..c0ad7d45
--- /dev/null
+++ b/versioned_docs/version-v0.13/UserManuals/ConfigUI/AdvancedMode.md
@@ -0,0 +1,97 @@
+---
+title: "Using Advanced Mode"
+sidebar_position: 6
+description: >
+  Using the advanced mode of Config-UI
+---
+
+
+## Why advanced mode?
+
+Advanced mode allows users to create any pipeline by writing JSON. This is useful for users who want to:
+
+1. Collect multiple GitHub/GitLab repos or Jira projects within a single pipeline
+2. Have fine-grained control over what entities to collect or what subtasks to run for each plugin
+3. Orchestrate a complex pipeline that consists of multiple stages of plugins.
+
+Advanced mode gives utmost flexibility to users by exposing the JSON API.
+
+## How to use advanced mode to create pipelines?
+
+1. Click on "+ New Blueprint" on the Blueprint page.
+
+![image](/img/AdvancedMode/AdvancedMode1.png)
+
+2. In step 1, click on the "Advanced Mode" link.
+
+![image](/img/AdvancedMode/AdvancedMode2.png)
+
+3. The pipeline editor expects a 2D array of plugins. The first dimension represents different stages of the pipeline and the second dimension describes the plugins in each stage. Stages run in sequential order and plugins within the same stage run in parallel. We provide some templates for users to get started. Please also see the next section for some examples.
+
+![image](/img/AdvancedMode/AdvancedMode3.png)
+
+4. You can choose how often you would like to sync your data in this step by selecting a sync frequency option or entering a cron code to specify your preferred schedule. After setting up the Blueprint, you will be directed to the Blueprint's activity detail page, where you can track the progress of the current run and wait for it to finish before the dashboards become available. You can also view all historical runs of previously created Blueprints from the list on the Blueprint page.
+
+## Examples
+
+1. Collect multiple GitLab repos sequentially.
+
+>When there are multiple collection tasks against a single data source, we recommend running these tasks sequentially since the collection speed is mostly limited by the API rate limit of the data source.
+>Running multiple tasks against the same data source is unlikely to speed up the process and may overwhelm the data source.
+
+
+Below is an example for collecting 2 GitLab repos sequentially. It has 2 stages, each of which contains a GitLab task.
+
+
+```
+[
+  [
+    {
+      "Plugin": "gitlab",
+      "Options": {
+        "projectId": 15238074
+      }
+    }
+  ],
+  [
+    {
+      "Plugin": "gitlab",
+      "Options": {
+        "projectId": 11624398
+      }
+    }
+  ]
+]
+```
+
+
+2. Collect a GitHub repo and a Jira board in parallel
+
+Below is an example for collecting a GitHub repo and a Jira board in parallel. It has a single stage with a GitHub task and a Jira task. Since users can configure multiple Jira connections, you are required to pass in a `connectionId` for the Jira task to specify which connection to use.
+
+```
+[
+  [
+    {
+      "Plugin": "github",
+      "Options": {
+        "repo": "lake",
+        "owner": "merico-dev"
+      }
+    },
+    {
+      "Plugin": "jira",
+      "Options": {
+        "connectionId": 1,
+        "boardId": 76
+      }
+    }
+  ]
+]
+```
+## Editing a Blueprint (Advanced Mode)
+This section is for editing a Blueprint in the Advanced Mode. To edit in the Normal mode, please refer to [this guide](Tutorial.md#editing-a-blueprint-normal-mode).
+
+To edit a Blueprint created in the Advanced mode, you can simply go to the Settings page of that Blueprint and click on Edit JSON to edit its configuration.
+
+![img](/img/ConfigUI/BlueprintEditing/blueprint-edit2.png)
\ No newline at end of file
diff --git a/versioned_docs/version-v0.13/UserManuals/ConfigUI/GitHub.md b/versioned_docs/version-v0.13/UserManuals/ConfigUI/GitHub.md
new file mode 100644
index 00000000..aaae0da2
--- /dev/null
+++ b/versioned_docs/version-v0.13/UserManuals/ConfigUI/GitHub.md
@@ -0,0 +1,87 @@
+---
+title: "Configuring GitHub"
+sidebar_position: 2
+description: Config UI instruction for GitHub
+---
+
+Visit config-ui: `http://localhost:4000`.
+### Step 1 - Add Data Connections
+![github-add-data-connections](/img/ConfigUI/github-add-data-connections.png)
+
+#### Connection Name
+Name your connection.
+
+#### Endpoint URL
+This should be a valid REST API endpoint, e.g. `https://api.github.com/`. The URL should end with `/`.
+
+#### Auth Token(s)
+GitHub personal access tokens are required to add a connection.
+- Learn about [how to create a GitHub personal access token](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/creating-a-personal-access-token)
+- The data collection speed is relatively slow for GitHub since they have a **rate limit of [5,000 requests](https://docs.github.com/en/rest/overview/resources-in-the-rest-api#rate-limiting) per hour** (15,000 requests/hour if you pay for GitHub enterprise). You can accelerate the process by configuring _multiple_ personal access tokens. Please note that multiple tokens should be created by different GitHub accounts. Tokens belonging to the same GitHub account share the rate limit.
+
+#### Proxy URL (Optional)
+If you are behind a corporate firewall or VPN you may need to utilize a proxy server. Enter a valid proxy server address on your network, e.g. `http://your-proxy-server.com:1080`
+
+#### Test and Save Connection
+Click `Test Connection`; if the connection is successful, click `Save Connection` to add the connection.
+
+
+### Step 2 - Setting Data Scope
+![github-set-data-scope](/img/ConfigUI/github-set-data-scope.png)
+
+#### Projects
+Enter the GitHub repos to collect. If you want to collect more than 1 repo, please separate the repos with commas. For example, "apache/incubator-devlake,apache/incubator-devlake-website".
+
+#### Data Entities
+Usually, you don't have to modify this part. However, if you don't want to collect certain GitHub entities, you can unselect some entities to accelerate the collection speed.
+- Issue Tracking: GitHub issues, issue comments, issue labels, etc.
+- Source Code Management: GitHub repos, refs, commits, etc.
+- Code Review: GitHub PRs, PR comments and reviews, etc.
+- Cross Domain: GitHub accounts, etc.
+
+### Step 3 - Adding Transformation Rules (Optional)
+![github-add-transformation-rules-list](/img/ConfigUI/github-add-transformation-rules-list.png)
+![github-add-transformation-rules](/img/ConfigUI/github-add-transformation-rules.png)
+ 
+Without adding transformation rules, you can still view the "[GitHub Basic Metrics](/LiveDemo/GitHubBasic.md)" dashboard. However, if you want to view "[Weekly Bug Retro](/LiveDemo/WeeklyBugRetro.md)", "Weekly Community Retro" or other pre-built dashboards, the following transformation rules, especially "Type/Bug", should be added.<br/>
+
+Each GitHub repo has at most ONE set of transformation rules.
+
+#### Issue Tracking
+
+- Severity: Parse the value of `severity` from issue labels.
+   - when your issue labels for severity level are like 'severity/p0', 'severity/p1', 'severity/p2', then input 'severity/(.*)$'
+   - when your issue labels for severity level are like 'p0', 'p1', 'p2', then input '(p0|p1|p2)$'
+
+- Component: Same as "Severity".
+
+- Priority: Same as "Severity".
+
+- Type/Requirement: The `type` of issues with labels that match the given regular expression will be set to "REQUIREMENT". Unlike "PR.type", the submatch does nothing, because for issue management analysis users tend to focus on 3 types (Requirement/Bug/Incident). However, the concrete naming varies from repo to repo and over time, so we decided to standardize them to help analysts calculate metrics.
+
+- Type/Bug: Same as "Type/Requirement", with `type` setting to "BUG".
+
+- Type/Incident: Same as "Type/Requirement", with `type` setting to "INCIDENT".
+
+#### Code Review
+
+- Type: The `type` of pull requests will be parsed from PR labels by the given regular expression. For example:
+   - when your labels for PR types are like 'type/feature-development', 'type/bug-fixing' and 'type/docs', please input 'type/(.*)$'
+   - when your labels for PR types are like 'feature-development', 'bug-fixing' and 'docs', please input '(feature-development|bug-fixing|docs)$'
+
+- Component: The `component` of pull requests will be parsed from PR labels by the given regular expression.
+
+#### Additional Settings (Optional)
+
+- Tags Limit: It'll compare the last N pairs of tags to get the "commit diff" and "issue diff" between tags. N defaults to 10.
+   - commit diff: new commits for a tag relative to the previous one
+   - issue diff: issues solved by the new commits for a tag relative to the previous one
+
+- Tags Pattern: Only tags that meet given regular expression will be counted.
+
+- Tags Order: Only "reverse semver" order is supported for now.
+
+Please click `Save` to save the transformation rules for the repo. In the data scope list, click `Next Step` to continue configuring.
+
+### Step 4 - Setting Sync Frequency
+You can choose how often you would like to sync your data in this step by selecting a sync frequency option or entering a cron code to specify your preferred schedule.
diff --git a/versioned_docs/version-v0.13/UserManuals/ConfigUI/GitLab.md b/versioned_docs/version-v0.13/UserManuals/ConfigUI/GitLab.md
new file mode 100644
index 00000000..74c9e41f
--- /dev/null
+++ b/versioned_docs/version-v0.13/UserManuals/ConfigUI/GitLab.md
@@ -0,0 +1,53 @@
+---
+title: "Configuring GitLab"
+sidebar_position: 3
+description: Config UI instruction for GitLab
+---
+
+Visit config-ui: `http://localhost:4000`.
+### Step 1 - Add Data Connections
+![gitlab-add-data-connections](/img/ConfigUI/gitlab-add-data-connections.png)
+
+#### Connection Name
+Name your connection.
+
+#### Endpoint URL
+This should be a valid REST API endpoint. 
+   - If you are using gitlab.com, the endpoint will be `https://gitlab.com/api/v4/`
+   - If you are self-hosting GitLab, the endpoint will look like `https://gitlab.example.com/api/v4/`
+The endpoint URL should end with `/`.
+
+#### Auth Token(s)
+GitLab personal access tokens are required to add a connection. Learn about [how to create a GitLab personal access token](https://docs.gitlab.com/ee/user/profile/personal_access_tokens.html).
+
+
+#### Proxy URL (Optional)
+If you are behind a corporate firewall or VPN you may need to utilize a proxy server. Enter a valid proxy server address on your network, e.g. `http://your-proxy-server.com:1080`
+
+#### Test and Save Connection
+Click `Test Connection`; if the connection is successful, click `Save Connection` to add the connection.
+
+
+### Step 2 - Setting Data Scope
+
+#### Projects
+Enter the GitLab repos to collect. To find a GitLab project ID:
+- Visit the repository page on GitLab
+- Find the project ID below the title
+
+![Get GitLab projects](https://user-images.githubusercontent.com/3789273/128568416-a47b2763-51d8-4a6a-8a8b-396512bffb03.png)
+
+If you want to collect more than 1 repo, please separate the repos with commas. For example, "apache/incubator-devlake,apache/incubator-devlake-website".
+
+#### Data Entities
+Usually, you don't have to modify this part. However, if you don't want to collect certain GitLab entities, you can unselect some entities to accelerate the collection speed.
+- Issue Tracking: GitLab issues, issue comments, issue labels, etc.
+- Source Code Management: GitLab repos, refs, commits, etc.
+- Code Review: GitLab MRs, MR comments and reviews, etc.
+- Cross Domain: GitLab accounts, etc.
+
+### Step 3 - Adding Transformation Rules (Optional)
+There are no transformation rules for GitLab repos.
+
+### Step 4 - Setting Sync Frequency
+You can choose how often you would like to sync your data in this step by selecting a sync frequency option or entering a cron code to specify your preferred schedule.
diff --git a/versioned_docs/version-v0.13/UserManuals/ConfigUI/Jenkins.md b/versioned_docs/version-v0.13/UserManuals/ConfigUI/Jenkins.md
new file mode 100644
index 00000000..07d1ed29
--- /dev/null
+++ b/versioned_docs/version-v0.13/UserManuals/ConfigUI/Jenkins.md
@@ -0,0 +1,33 @@
+---
+title: "Configuring Jenkins"
+sidebar_position: 5
+description: Config UI instruction for Jenkins
+---
+
+Visit config-ui: `http://localhost:4000`.
+### Step 1 - Add Data Connections
+![jenkins-add-data-connections](/img/ConfigUI/jenkins-add-data-connections.png)
+
+#### Connection Name
+Name your connection.
+
+#### Endpoint URL
+This should be a valid REST API endpoint, e.g. `https://ci.jenkins.io/`. The endpoint URL should end with `/`.
+
+#### Username (E-mail)
+Your User ID for the Jenkins Instance.
+
+#### Password
+For help on Username and Password, please see Jenkins docs on [using credentials](https://www.jenkins.io/doc/book/using/using-credentials/). You can also use "API Access Token" for this field, which can be generated at `User` -> `Configure` -> `API Token` section on Jenkins.
+
+#### Test and Save Connection
+Click `Test Connection`; if the connection is successful, click `Save Connection` to add the connection.
+
+### Step 2 - Setting Data Scope
+There is no data scope setting for Jenkins.
+
+### Step 3 - Adding Transformation Rules (Optional)
+There are no transformation rules for Jenkins.
+
+### Step 4 - Setting Sync Frequency
+You can choose how often you would like to sync your data in this step by selecting a sync frequency option or entering a cron code to specify your preferred schedule.
diff --git a/versioned_docs/version-v0.13/UserManuals/ConfigUI/Jira.md b/versioned_docs/version-v0.13/UserManuals/ConfigUI/Jira.md
new file mode 100644
index 00000000..952ecdde
--- /dev/null
+++ b/versioned_docs/version-v0.13/UserManuals/ConfigUI/Jira.md
@@ -0,0 +1,67 @@
+---
+title: "Configuring Jira"
+sidebar_position: 4
+description: Config UI instruction for Jira
+---
+
+Visit config-ui: `http://localhost:4000`.
+### Step 1 - Add Data Connections
+![jira-add-data-connections](/img/ConfigUI/jira-add-data-connections.png)
+
+#### Connection Name
+Name your connection.
+
+#### Endpoint URL
+This should be a valid REST API endpoint.
+   - If you are using Jira Cloud, the endpoint will be `https://<mydomain>.atlassian.net/rest/`
+   - If you are self-hosting Jira v8+, the endpoint will look like `https://jira.<mydomain>.com/rest/`
+The endpoint URL should end with `/`.
+
+#### Username / Email
+Input the username or email of your Jira account.
+
+
+#### Password
+- If you are using Jira Cloud, please input the [Jira personal access token](https://confluence.atlassian.com/enterprise/using-personal-access-tokens-1026032365.html).
+- If you are using Jira Server v8+, please input the password of your Jira account.
+
+#### Proxy URL (Optional)
+If you are behind a corporate firewall or VPN you may need to utilize a proxy server. Enter a valid proxy server address on your network, e.g. `http://your-proxy-server.com:1080`
+
+#### Test and Save Connection
+Click `Test Connection`; if the connection is successful, click `Save Connection` to add the connection.
+
+
+### Step 2 - Setting Data Scope
+![jira-set-data-scope](/img/ConfigUI/jira-set-data-scope.png)
+
+#### Projects
+Choose the Jira boards to collect.
+
+#### Data Entities
+Usually, you don't have to modify this part. However, if you don't want to collect certain Jira entities, you can unselect some entities to accelerate data collection.
+- Issue Tracking: Jira issues, issue comments, issue labels, etc.
+- Cross Domain: Jira accounts, etc.
+
+### Step 3 - Adding Transformation Rules (Optional)
+![jira-add-transformation-rules-list](/img/ConfigUI/jira-add-transformation-rules-list.png)
+ 
+Without adding transformation rules, you cannot view all charts in the "Jira" or "Engineering Throughput and Cycle Time" dashboards.<br/>
+
+Each Jira board has at most ONE set of transformation rules.
+
+![jira-add-transformation-rules](/img/ConfigUI/jira-add-transformation-rules.png)
+
+#### Issue Tracking
+
+- Requirement: choose the issue types to be transformed to "REQUIREMENT".
+- Bug: choose the issue types to be transformed to "BUG".
+- Incident: choose the issue types to be transformed to "INCIDENT".
+- Epic Key: choose the custom field that represents Epic key. In most cases, it is "Epic Link".
+- Story Point: choose the custom field that represents story points. In most cases, it is "Story Points".
+
+#### Additional Settings
+- Remotelink Commit SHA: parse the commits from an issue's remote links by the given regular expression so that the relationship between `issues` and `commits` can be created. You can directly use the regular expression `/commit/([0-9a-f]{40})$`, as shown in the example below.
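+
+For instance, a hypothetical remote link like the one below would be matched by that expression, and the 40-character SHA captured by the group is what creates the issue-commit relationship:
+
+```
+https://gitlab.example.com/your-group/your-repo/-/commit/0123456789abcdef0123456789abcdef01234567
+```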
+
+### Step 4 - Setting Sync Frequency
+You can choose how often you would like to sync your data in this step by selecting a sync frequency option or entering a cron expression to specify your preferred schedule.
diff --git a/versioned_docs/version-v0.13/UserManuals/ConfigUI/Tutorial.md b/versioned_docs/version-v0.13/UserManuals/ConfigUI/Tutorial.md
new file mode 100644
index 00000000..5c61e930
--- /dev/null
+++ b/versioned_docs/version-v0.13/UserManuals/ConfigUI/Tutorial.md
@@ -0,0 +1,68 @@
+---
+title: "Tutorial"
+sidebar_position: 1
+description: Config UI instruction
+---
+
+## Overview
+The Apache DevLake Config UI allows you to configure the data you wish to collect through a graphical user interface. Visit config-ui at `http://localhost:4000`.
+
+## Creating a Blueprint
+
+### Introduction
+A Blueprint is the plan that covers all the work to get your raw data ready for query and metric computation in the dashboards. We have designed the Blueprint so that you can complete data collection within a single workflow. Creating a Blueprint consists of four steps:
+
+1. Adding Data Connections: Add new or select from existing data connections for the data you wish to collect
+2. Setting Data Scope: Select the scope of data (e.g. GitHub projects or Jira boards) for your data connections
+3. Adding Transformation (Optional): Add transformation rules for the data scope you have selected in order to view corresponding metrics
+4. Setting Sync Frequency: Set up a schedule for how often you wish your data to be synced
+
+### Step 1 - Adding Data Connections
+There are two ways to add data connections to your Blueprint: adding them during the creation of a Blueprint and adding them separately on the Data Integrations page. There is no difference between these two ways.
+
+When adding data connections from the Blueprint, you can either create a new data connection or select from existing ones.
+
+![img](/img/ConfigUI/BlueprintCreation/step1.png)
+
+### Step 2 - Setting Data Scope
+After adding data connections, click on "Next Step" and you will be prompted to select the data scope of each data connection. For instance, for a GitHub connection, you will need to enter the projects you wish to sync, and for Jira, you will need to select the boards.
+
+![img](/img/ConfigUI/BlueprintCreation/step2.png)
+
+### Step 3 - Adding Transformation (Optional)
+This step is only required for viewing certain metrics in the pre-built dashboards that require data transformation. Without adding transformation rules, you can still view the basic metrics. 
+
+Currently, DevLake only supports transformation for GitHub and Jira connections.
+
+![img](/img/ConfigUI/BlueprintCreation/step3.png)
+
+### Step 4 - Setting Sync Frequency
+You can choose how often you would like to sync your data in this step by selecting a sync frequency option or entering a cron expression to specify your preferred schedule.
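+
+If you opt for a cron expression, a standard 5-field expression (minute, hour, day of month, month, day of week) should work; the values below are only illustrative:
+
+```
+0 0 * * *    # every day at 00:00
+0 3 * * 1    # every Monday at 03:00
+```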
+
+After setting up the Blueprint, you will be taken to the Blueprint's activity detail page, where you can track the progress of the current run and wait for it to finish before the dashboards become available. You can also view all historical runs of previously created Blueprints from the list on the Blueprint page.
+
+![img](/img/ConfigUI/BlueprintCreation/step4.png)
+
+## Editing a Blueprint (Normal Mode)
+On the Blueprint list page, clicking on any Blueprint will lead you to its detail page. If you switch to the Settings tab on the detail page, you can see the settings of your Blueprint and edit parts of it separately.
+
+In the current version, the Blueprint editing feature **allows** editing:
+- The Blueprint's name
+- The sync frequency
+- The data scope of a connection
+- The data entities of the data scope
+- The transformation rules of any data scope
+
+and does **NOT allow**:
+- Adding or deleting connections to an existing blueprint (will be available in the future)
+- Editing any connections
+
+Please note: 
+1. The connections of some data sources, such as Jenkins, do not have an editing button, because their configurations do not contain data scope, data entities and/or transformation rules.
+2. If you have created the Blueprint in the Normal mode, you will only be able to edit it in the Normal Mode; if you have created it in the Advanced Mode, please refer to [this guide](AdvancedMode.md#editing-a-blueprint-advanced-mode) for editing.
+
+The Settings page for editing Blueprints:
+![img](/img/ConfigUI/BlueprintEditing/blueprint-edit1.png)
+
+## Creating and Managing Data Connections
+The Data Connections page allows you to view, create and manage all your data connections in one place.
diff --git a/versioned_docs/version-v0.13/UserManuals/ConfigUI/_category_.json b/versioned_docs/version-v0.13/UserManuals/ConfigUI/_category_.json
new file mode 100644
index 00000000..62f99d48
--- /dev/null
+++ b/versioned_docs/version-v0.13/UserManuals/ConfigUI/_category_.json
@@ -0,0 +1,4 @@
+{
+  "label": "Config UI",
+  "position": 4
+}
diff --git a/versioned_docs/version-v0.13/UserManuals/Dashboards/GrafanaUserGuide.md b/versioned_docs/version-v0.13/UserManuals/Dashboards/GrafanaUserGuide.md
new file mode 100644
index 00000000..41a8e37f
--- /dev/null
+++ b/versioned_docs/version-v0.13/UserManuals/Dashboards/GrafanaUserGuide.md
@@ -0,0 +1,120 @@
+---
+title: "Grafana User Guide"
+sidebar_position: 2
+description: >
+  Grafana User Guide
+---
+
+
+# Grafana
+
+<img src="https://user-images.githubusercontent.com/3789273/128533901-3107e9bf-c3e3-4320-ba47-879fe2b0ea4d.png" width="450px" />
+
+When first visiting Grafana, you will be provided with a sample dashboard with some basic charts set up from the database.
+
+## Contents
+
+Section | Link
+:------------ | :-------------
+Logging In | [View Section](#logging-in)
+Viewing All Dashboards | [View Section](#viewing-all-dashboards)
+Customizing a Dashboard | [View Section](#customizing-a-dashboard)
+Dashboard Settings | [View Section](#dashboard-settings)
+Provisioning a Dashboard | [View Section](#provisioning-a-dashboard)
+Troubleshooting DB Connection | [View Section](#troubleshooting-db-connection)
+
+## Logging In<a id="logging-in"></a>
+
+Once the app is up and running, visit `http://localhost:3002` to view the Grafana dashboard.
+
+Default login credentials are:
+
+- Username: `admin`
+- Password: `admin`
+
+## Viewing All Dashboards<a id="viewing-all-dashboards"></a>
+
+To see all dashboards created in Grafana, visit `/dashboards`.
+
+Or, use the sidebar and click on **Manage**:
+
+![Screen Shot 2021-08-06 at 11 27 08 AM](https://user-images.githubusercontent.com/3789273/128534617-1992c080-9385-49d5-b30f-be5c96d5142a.png)
+
+
+## Customizing a Dashboard<a id="customizing-a-dashboard"></a>
+
+When viewing a dashboard, click the top bar of a panel, and go to **edit**
+
+![Screen Shot 2021-08-06 at 11 35 36 AM](https://user-images.githubusercontent.com/3789273/128535505-a56162e0-72ad-46ac-8a94-70f1c7a910ed.png)
+
+**Edit Dashboard Panel Page:**
+
+![grafana-sections](https://user-images.githubusercontent.com/3789273/128540136-ba36ee2f-a544-4558-8282-84a7cb9df27a.png)
+
+### 1. Preview Area
+- **Top Left** is the variable select area (custom dashboard variables, used for switching projects, or grouping data)
+- **Top Right** is a toolbar with some buttons related to the display of the data:
+  - View data results in a table
+  - Time range selector
+  - Refresh data button
+- **The Main Area** will display the chart and should update in real time
+
+> Note: Data should refresh automatically, but may require a refresh using the button in some cases
+
+### 2. Query Builder
+Here we form the SQL query that pulls data from our database into the chart.
+- Ensure the **Data Source** is the correct database
+
+  ![Screen Shot 2021-08-06 at 10 14 22 AM](https://user-images.githubusercontent.com/3789273/128545278-be4846e0-852d-4bc8-8994-e99b79831d8c.png)
+
+- Use the **Format as Table** and **Edit SQL** buttons to write/edit queries as SQL
+
+  ![Screen Shot 2021-08-06 at 10 17 52 AM](https://user-images.githubusercontent.com/3789273/128545197-a9ff9cb3-f12d-4331-bf6a-39035043667a.png)
+
+- The **Main Area** is where the queries are written, and in the top right is the **Query Inspector** button (to inspect returned data)
+
+  ![Screen Shot 2021-08-06 at 10 18 23 AM](https://user-images.githubusercontent.com/3789273/128545557-ead5312a-e835-4c59-b9ca-dd5c08f2a38b.png)
+
+### 3. Main Panel Toolbar
+In the top right of the window are buttons for:
+- Dashboard settings (regarding entire dashboard)
+- Save/apply changes (to specific panel)
+
+### 4. Grafana Parameter Sidebar
+- Change chart style (bar/line/pie chart etc)
+- Edit legends, chart parameters
+- Modify chart styling
+- Other Grafana specific settings
+
+## Dashboard Settings<a id="dashboard-settings"></a>
+
+When viewing a dashboard, click on the settings icon to view dashboard settings. Here are two important sections to use:
+
+![Screen Shot 2021-08-06 at 1 51 14 PM](https://user-images.githubusercontent.com/3789273/128555763-4d0370c2-bd4d-4462-ae7e-4b140c4e8c34.png)
+
+- Variables
+  - Create variables to use throughout the dashboard panels; these are also built on SQL queries
+
+  ![Screen Shot 2021-08-06 at 2 02 40 PM](https://user-images.githubusercontent.com/3789273/128553157-a8e33042-faba-4db4-97db-02a29036e27c.png)
+
+- JSON Model
+  - Copy the JSON code here and save it to a new file with a unique name under `/grafana/dashboards/` in the `lake` repo. This will allow us to persist dashboards when we load the app.
+
+  ![Screen Shot 2021-08-06 at 2 02 52 PM](https://user-images.githubusercontent.com/3789273/128553176-65a5ae43-742f-4abf-9c60-04722033339e.png)
+
+## Provisioning a Dashboard<a id="provisioning-a-dashboard"></a>
+
+To save a dashboard in the `lake` repo and load it:
+
+1. Create a dashboard in browser (visit `/dashboard/new`, or use sidebar)
+2. Save dashboard (in top right of screen)
+3. Go to dashboard settings (in top right of screen)
+4. Click on _JSON Model_ in sidebar
+5. Copy the code into a new `.json` file in `/grafana/dashboards` (see the sketch below)
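+
+As a hypothetical example, the end result is simply a new, uniquely named file in the repo:
+
+```
+# paste the copied JSON Model into, e.g.:
+grafana/dashboards/my-team-dashboard.json
+```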
+
+## Troubleshooting DB Connection<a id="troubleshooting-db-connection"></a>
+
+To ensure we have properly connected our database to the data source in Grafana, check database settings in `./grafana/datasources/datasource.yml`, specifically:
+- `database`
+- `user`
+- `secureJsonData/password`
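+
+For reference, these keys live in a standard Grafana datasource provisioning file. A minimal sketch is shown below (the values are illustrative; check the actual file shipped with your deployment):
+
+```
+apiVersion: 1
+datasources:
+  - name: mysql
+    type: mysql
+    url: mysql:3306
+    database: lake
+    user: merico
+    secureJsonData:
+      password: merico
+```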
diff --git a/versioned_docs/version-v0.13/UserManuals/Dashboards/_category_.json b/versioned_docs/version-v0.13/UserManuals/Dashboards/_category_.json
new file mode 100644
index 00000000..0db83c6e
--- /dev/null
+++ b/versioned_docs/version-v0.13/UserManuals/Dashboards/_category_.json
@@ -0,0 +1,4 @@
+{
+  "label": "Dashboards",
+  "position": 5
+}
diff --git a/versioned_docs/version-v0.13/UserManuals/TeamConfiguration.md b/versioned_docs/version-v0.13/UserManuals/TeamConfiguration.md
new file mode 100644
index 00000000..c8ade3ea
--- /dev/null
+++ b/versioned_docs/version-v0.13/UserManuals/TeamConfiguration.md
@@ -0,0 +1,188 @@
+---
+title: "Team Configuration"
+sidebar_position: 7
+description: >
+  Team Configuration
+---
+## What is 'Team Configuration' and how does it work?
+
+To organize and display metrics by `team`, Apache DevLake needs to know about the team configuration in an organization, specifically:
+
+1. What are the teams?
+2. Who are the users (unified identities)?
+3. Which users belong to a team?
+4. Which accounts (identities in specific tools) belong to the same user?
+
+Each of the questions above corresponds to a table in DevLake's schema, illustrated below:
+
+![image](/img/Team/teamflow0.png)
+
+1. `teams` table stores all the teams in the organization.
+2. `users` table stores the organization's roster. An entry in the `users` table corresponds to a person in the org.
+3. `team_users` table stores which users belong to a team.
+4. `user_accounts` table stores which accounts belong to a user. An `account` refers to an identity in a DevOps tool and is automatically created when importing data from that tool. For example, a `user` may have a GitHub `account` as well as a Jira `account`.
+
+Apache DevLake uses a simple heuristic algorithm based on emails and names to automatically map accounts to users and populate the `user_accounts` table.
+When Apache DevLake cannot confidently map an `account` to a `user` due to insufficient information, it allows DevLake users to manually configure the mapping to ensure accuracy and integrity.
+
+## A step-by-step guide
+
+In the following sections, we'll walk through how to configure teams and create the five aforementioned tables (`teams`, `users`, `team_users`, `accounts`, and `user_accounts`).
+The overall workflow is:
+
+1. Create the `teams` table
+2. Create the `users` and `team_users` tables
+3. Populate the `accounts` table via data collection
+4. Run a heuristic algorithm to populate the `user_accounts` table
+5. Manually update `user_accounts` when the algorithm can't catch everything
+
+Note:
+
+1. Please replace `/path/to/*.csv` with the absolute path of the CSV file you'd like to upload.
+2. Please replace `127.0.0.1:4000` with your actual Apache DevLake ConfigUI service IP and port number.
+
+## Step 1 - Create the `teams` table
+
+You can create the `teams` table by sending a PUT request to `/plugins/org/teams.csv` with a `teams.csv` file. To jumpstart the process, you can download a template `teams.csv` from `/plugins/org/teams.csv?fake_data=true`. Below are the detailed instructions:
+
+a. Download the template `teams.csv` file
+
+    i.  GET http://127.0.0.1:4000/api/plugins/org/teams.csv?fake_data=true (pasting the URL into your browser will download the template)
+
+    ii. If you prefer using curl:
+        curl --location --request GET 'http://127.0.0.1:4000/api/plugins/org/teams.csv?fake_data=true'
+    
+
+b. Fill out `teams.csv` file and upload it to DevLake
+
+    i. Fill out `teams.csv` with your org data. Please don't modify the column headers or the file suffix.
+
+    ii. Upload `teams.csv` to DevLake with the following curl command: 
+    curl --location --request PUT 'http://127.0.0.1:4000/api/plugins/org/teams.csv' --form 'file=@"/path/to/teams.csv"'
+
+    iii. The PUT request will populate the `teams` table with data from the `teams.csv` file.
+    You can connect to the database and verify the data in the `teams` table.
+    See Appendix A for how to connect to the database.
+
+![image](/img/Team/teamflow3.png)
+
+
+## Step 2 - Create the `users` and `team_users` tables
+
+You can create the `users` and `team_users` tables by sending a single PUT request to `/plugins/org/users.csv` with a `users.csv` file. To jumpstart the process, you can download a template `users.csv` from `/plugins/org/users.csv?fake_data=true`. Below are the detailed instructions:
+
+a. Download the template `users.csv` file
+
+    i.  GET http://127.0.0.1:4000/api/plugins/org/users.csv?fake_data=true (pasting the URL into your browser will download the template)
+
+    ii. If you prefer using curl:
+    curl --location --request GET 'http://127.0.0.1:4000/api/plugins/org/users.csv?fake_data=true'
+
+
+b. Fill out `users.csv` and upload to DevLake
+
+    i.  Fill out `users.csv` with your org data. Please don't modify the column headers or the file suffix
+
+    ii. Upload `users.csv` to DevLake with the following curl command:
+    curl --location --request PUT 'http://127.0.0.1:4000/api/plugins/org/users.csv' --form 'file=@"/path/to/users.csv"'
+
+    iii. The PUT request will populate the `users` and `team_users` tables with data from the `users.csv` file.
+    You can connect to the database and verify these two tables.
+
+![image](/img/Team/teamflow1.png)
+    
+![image](/img/Team/teamflow2.png)
+
+c. If you ever want to update the `team_users` or `users` table, simply upload the updated `users.csv` to DevLake again following step b.
+
+## Step 3 - Populate the `accounts` table via data collection
+
+The `accounts` table is automatically populated when you collect data from data sources like GitHub and Jira through DevLake.
+
+For example, the GitHub plugin would create one entry in the `accounts` table for each GitHub user involved in your repository.
+For demo purposes, we'll insert some mock data into the `accounts` table using SQL:
+
+```
+INSERT INTO `accounts` (`id`, `created_at`, `updated_at`, `_raw_data_params`, `_raw_data_table`, `_raw_data_id`, `_raw_data_remark`, `email`, `full_name`, `user_name`, `avatar_url`, `organization`, `created_date`, `status`)
+VALUES
+        ('github:GithubAccount:1:1234', '2022-07-12 10:54:09.632', '2022-07-12 10:54:09.632', '{\"ConnectionId\":1,\"Owner\":\"apache\",\"Repo\":\"incubator-devlake\"}', '_raw_github_api_pull_request_reviews', 28, '', 'TyroneKCummings@teleworm.us', '', 'Tyrone K. Cummings', 'https://avatars.githubusercontent.com/u/101256042?u=a6e460fbaffce7514cbd65ac739a985f5158dabc&v=4', '', NULL, 0),
+        ('jira:JiraAccount:1:629cdf', '2022-07-12 10:54:09.632', '2022-07-12 10:54:09.632', '{\"ConnectionId\":1,\"BoardId\":\"76\"}', '_raw_jira_api_users', 5, '', 'DorothyRUpdegraff@dayrep.com', '', 'Dorothy R. Updegraff', 'https://avatars.jiraxxxx158dabc&v=4', '', NULL, 0);
+
+```
+
+![image](/img/Team/teamflow4.png)
+
+## Step 4 - Run a heuristic algorithm to populate `user_accounts` table
+
+Now that we have data in both the `users` and `accounts` tables, we can tell DevLake to infer the mappings between `users` and `accounts` with a simple heuristic algorithm based on names and emails.
+
+a. Send an API request to DevLake to run the mapping algorithm
+
+```
+curl --location --request POST '127.0.0.1:4000/api/pipelines' \
+--header 'Content-Type: application/json' \
+--data-raw '{
+    "name": "test",
+    "plan":[
+        [
+            {
+                "plugin": "org",
+                "subtasks":["connectUserAccountsExact"],
+                "options":{
+                    "connectionId":1
+                }
+            }
+        ]
+    ]
+}'
+```
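+
+The response to this request includes the id of the created pipeline. If you'd like to poll its status via the API before checking the database, something like the following should work (a sketch, assuming the standard pipelines endpoint; substitute the real pipeline id):
+
+```
+curl --location --request GET '127.0.0.1:4000/api/pipelines/<pipeline-id>'
+```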
+
+b. After successful execution, you can verify the data in the `user_accounts` table in the database.
+
+![image](/img/Team/teamflow5.png)
+
+## Step 5 - Manually update `user_accounts` when the algorithm can't catch everything
+
+It is recommended to examine the generated `user_accounts` table after running the algorithm.
+In this section, we'll demonstrate how to manually update `user_accounts` when the mapping is inaccurate or incomplete.
+To make manual verification easier, DevLake provides an API for users to download `user_accounts` as a CSV file.
+Alternatively, you can verify and modify `user_accounts` entirely via SQL; see Appendix B for more info.
+
+a. GET http://127.0.0.1:4000/api/plugins/org/user_account_mapping.csv (pasting the URL into your browser will download the file). If you prefer using curl:
+```
+curl --location --request GET 'http://127.0.0.1:4000/api/plugins/org/user_account_mapping.csv'
+```
+
+![image](/img/Team/teamflow6.png)
+
+b. If you find the mapping inaccurate or incomplete, you can modify the `user_account_mapping.csv` file and then upload it to DevLake.
+For example, here we change the `UserId` of row 'Id=github:GithubAccount:1:1234' in the `user_account_mapping.csv` file to 2.
+Then we upload the updated `user_account_mapping.csv` file with the following curl command:
+
+```
+curl --location --request PUT 'http://127.0.0.1:4000/api/plugins/org/user_account_mapping.csv' --form 'file=@"/path/to/user_account_mapping.csv"'
+```
+
+c. You can verify the data in the `user_accounts` table has been updated.
+
+![image](/img/Team/teamflow7.png)
+
+## Appendix A: how to connect to the database
+
+Here we use MySQL as an example. You can install database management tools like Sequel Ace, DataGrip, MySQL Workbench, etc.
+
+
+Or through the command line:
+
+```
+mysql -h <ip> -u <username> -p -P <port>
+```
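+
+For example, with the default docker-compose settings (an assumption; check your `.env` if you have customized the database), the command would look something like:
+
+```
+mysql -h 127.0.0.1 -u merico -p -P 3306
+# enter the password when prompted (merico by default), then:
+# USE lake;
+```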
+
+## Appendix B: how to examine `user_accounts` via SQL
+
+```
+SELECT a.id as account_id, a.email, a.user_name as account_user_name, u.id as user_id, u.name as real_name
+FROM accounts a
+        join user_accounts ua on a.id = ua.account_id
+        join users u on ua.user_id = u.id
+```
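+
+To spot accounts that the heuristic algorithm failed to map to any user (and that may need the manual fix from Step 5), a complementary query over the same tables is:
+
+```
+SELECT a.id as account_id, a.email, a.user_name as account_user_name
+FROM accounts a
+        left join user_accounts ua on a.id = ua.account_id
+WHERE ua.user_id IS NULL
+```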
diff --git a/versioned_docs/version-v0.13/UserManuals/_category_.json b/versioned_docs/version-v0.13/UserManuals/_category_.json
new file mode 100644
index 00000000..23ce768a
--- /dev/null
+++ b/versioned_docs/version-v0.13/UserManuals/_category_.json
@@ -0,0 +1,8 @@
+{
+  "label": "User Manuals",
+  "position": 3,
+  "link":{
+    "type": "generated-index",
+    "slug": "UserManuals"
+  }
+}
diff --git a/versioned_sidebars/version-v0.13-sidebars.json b/versioned_sidebars/version-v0.13-sidebars.json
new file mode 100644
index 00000000..39332bfe
--- /dev/null
+++ b/versioned_sidebars/version-v0.13-sidebars.json
@@ -0,0 +1,8 @@
+{
+  "docsSidebar": [
+    {
+      "type": "autogenerated",
+      "dirName": "."
+    }
+  ]
+}
diff --git a/versions.json b/versions.json
index dac11c4b..71f44521 100644
--- a/versions.json
+++ b/versions.json
@@ -1,4 +1,5 @@
 [
+  "v0.13",
   "v0.12",
   "v0.11"
 ]