Posted to commits@devlake.apache.org by wa...@apache.org on 2022/06/21 09:03:47 UTC

[incubator-devlake] branch main updated: docs: update plugin readmes

This is an automated email from the ASF dual-hosted git repository.

warren pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/incubator-devlake.git


The following commit(s) were added to refs/heads/main by this push:
     new 6c60a4c9 docs: update plugin readmes
6c60a4c9 is described below

commit 6c60a4c9519111d09941ebfe02a658c47145a7c9
Author: Startrekzky <ka...@merico.dev>
AuthorDate: Tue Jun 21 16:50:53 2022 +0800

    docs: update plugin readmes
---
 plugins/README.md              |  67 ---------
 plugins/dbt/README-zh-CN.md    |  64 ---------
 plugins/dbt/README.md          |  69 +--------
 plugins/feishu/README.md       | 103 +-------------
 plugins/gitee/README.md        | 107 +-------------
 plugins/gitextractor/README.md | 111 +--------------
 plugins/github/README.md       | 148 +-------------------
 plugins/gitlab/README.md       | 141 +------------------
 plugins/jenkins/README.md      | 106 +-------------
 plugins/jira/README.md         | 311 +----------------------------------------
 plugins/refdiff/README.md      | 208 +--------------------------
 11 files changed, 9 insertions(+), 1426 deletions(-)

diff --git a/plugins/README.md b/plugins/README.md
deleted file mode 100644
index adfc533c..00000000
--- a/plugins/README.md
+++ /dev/null
@@ -1,67 +0,0 @@
-# So you want to Build a New Plugin...
-
-...the good news is, it's easy!
-
-## Preparation work
-1. Create a directory named `yourplugin` under directory `plugins`
-2. Under `yourplugin`, you need three more packages: `api`, `models` and `tasks`
-    1. `api` interacts with `config-ui` for testing/getting/saving the connection of a data source. Please check [How to create a connection to be used by config-ui for a data source]() for details.
-    2. `models` stores all `data entities` and `data migration scripts`. Please check [How to create models and data migrations]() for details.
-    3. `tasks` contains all of our `sub tasks` for a plugin
-3. Create a `yourplugin.go` file in `yourplugin`:
-```golang
-type YourPlugin struct{}
-
-var _ core.PluginMeta = (*YourPlugin)(nil)
-var _ core.PluginInit = (*YourPlugin)(nil)
-var _ core.PluginTask = (*YourPlugin)(nil)
-var _ core.PluginApi = (*YourPlugin)(nil)
-var _ core.Migratable = (*YourPlugin)(nil)
-
-func (plugin YourPlugin) Init(config *viper.Viper, logger core.Logger, db *gorm.DB) error {
-	return nil
-}
-
-func (plugin YourPlugin) Description() string {
-	return "To collect and enrich data from YourPlugin"
-}
-// Register all subtasks
-func (plugin YourPlugin) SubTaskMetas() []core.SubTaskMeta {
-	return []core.SubTaskMeta{
-		tasks.CollectXXXX,
-		tasks.ExtractXXXX,
-		tasks.ConvertXXXX,
-	}
-}
-// Prepare your apiClient which will be used to request remote api, 
-// `apiClient` is defined in `client.go` under `tasks`
-// `YourPluginTaskData` is defined in `task_data.go` under `tasks`
-func (plugin YourPlugin) PrepareTaskData(taskCtx core.TaskContext, options map[string]interface{}) (interface{}, error) {
-	var op tasks.YourPluginOptions
-	err := mapstructure.Decode(options, &op)
-	if err != nil {
-		return nil, err
-	}
-	// Create the apiClient defined in `client.go` under `tasks`
-	// (the constructor name here is illustrative)
-	apiClient, err := tasks.NewYourPluginApiClient(taskCtx)
-	if err != nil {
-		return nil, err
-	}
-	// Field names are illustrative; see `task_data.go` under `tasks`
-	return &tasks.YourPluginTaskData{
-		Options:   &op,
-		ApiClient: apiClient,
-	}, nil
-}
-
-// Export a variable named PluginEntry for the Framework to search and load
-var PluginEntry YourPlugin //nolint
-```
-
-## Summary
-
-To build a new plugin you will need a few things. You should choose an API that you'd like to see data from. Think about the metrics you would like to see first, and then look for data that can support those metrics.
-
-## Create your sub tasks
-
-1. [Create a collector to collect data from the remote API server and save it into the raw layer]()
-2. [Create an extractor to extract data from the raw layer and save it into the tool layer]()
-3. [Create a convertor to convert data from the tool layer and save it into the domain layer]()
-
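-As a concrete reference, here is a minimal, hypothetical sketch of how a sub task could be declared and wired into `SubTaskMetas()` above; the exact fields of `core.SubTaskMeta` and the context API should be checked against the `core` package:
-
-```golang
-// tasks/xxxx_collector.go (all names are illustrative)
-var CollectXXXX = core.SubTaskMeta{
-	Name:             "collectXXXX",
-	EntryPoint:       CollectXXXXData,
-	EnabledByDefault: true,
-	Description:      "Collect XXXX data from the remote API into the raw layer",
-}
-
-func CollectXXXXData(taskCtx core.SubTaskContext) error {
-	// The task data returned by PrepareTaskData is available here;
-	// a real collector would use its apiClient to fetch pages and
-	// store them into the raw layer.
-	data := taskCtx.GetData().(*YourPluginTaskData)
-	_ = data // placeholder
-	return nil
-}
-```
-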
-## You're Done!
-
-Congratulations! You have created your first plugin! 🎖 
diff --git a/plugins/dbt/README-zh-CN.md b/plugins/dbt/README-zh-CN.md
deleted file mode 100644
index 54112fec..00000000
--- a/plugins/dbt/README-zh-CN.md
+++ /dev/null
@@ -1,64 +0,0 @@
-# Dbt
-
-<div align="center">
-
-| [English](README.md) | [中文](README-zh-CN.md) |
-| --- | --- |
-
-</div>
-
-<br>
-
-## Summary
-dbt (data build tool) enables analytics engineers to transform data in their warehouses by simply writing select statements. dbt handles turning these select statements into tables and views. dbt does the T in ELT (Extract, Load, Transform): it does not extract or load data, but it is extremely good at transforming data that has already been loaded into your warehouse.
-
-## User setup<a id="user-setup"></a>
-- If you plan to use this plugin, you need to set up the following environment first.
-
-#### Required Packages to Install<a id="user-setup-requirements"></a>
-- [python3.7+](https://www.python.org/downloads/)
-- [dbt-mysql](https://pypi.org/project/dbt-mysql/#configuring-your-profile)
-
-#### Commands to run or create in your terminal and the dbt project<a id="user-setup-commands"></a>
-1. pip install dbt-mysql
-2. dbt init demoapp (demoapp is the project name)
-3. create your SQL transformations and data models
-
-## Convert Data By Dbt
-Please use the raw JSON API to manually initiate a run using **cURL** or a graphical API tool such as **Postman**, and send the following request to the DevLake API endpoint.
-
-```json
-[
-  [
-    {
-      "plugin": "dbt",
-      "options": {
-          "projectPath": "/Users/abeizn/demoapp",
-          "projectName": "demoapp",
-          "projectTarget": "dev",
-          "selectedModels": ["my_first_dbt_model","my_second_dbt_model"],
-          "projectVars": {
-            "demokey1": "demovalue1",
-            "demokey2": "demovalue2"
-        }
-      }
-    }
-  ]
-]
-```
-
-- `projectPath`: the absolute path of the dbt project. (required)
-- `projectName`: the name of the dbt project. (required)
-- `projectTarget`: the default target your dbt project will use. (optional)
-- `selectedModels`: a model is a select statement. Models are defined in .sql files, typically in the models directory. (required)
-selectedModels accepts one or more arguments. Each argument can be one of:
-1. a package name: runs all models in your project, e.g. example
-2. a model name: runs a specific model, e.g. my_first_dbt_model
-3. a fully-qualified path to a directory of models.
-
-- `projectVars`: dbt provides a variables mechanism to provide data to models for compilation. (optional)
-For example, for select * from events where event_type = '{{ var("event_type") }}' in your model, you need to set `"projectVars": {"event_type": "real_value"}`.
-
-### Resources:
-- Learn more about dbt [in the docs](https://docs.getdbt.com/docs/introduction)
-- Check out [Discourse](https://discourse.getdbt.com/) for commonly asked questions and answers
\ No newline at end of file
diff --git a/plugins/dbt/README.md b/plugins/dbt/README.md
index 587a7f25..9149e252 100644
--- a/plugins/dbt/README.md
+++ b/plugins/dbt/README.md
@@ -1,68 +1 @@
-# Dbt
-
-<div align="center">
-
-| [English](README.md) | [中文](README-zh-CN.md) |
-| --- | --- |
-
-</div>
-
-<br>
-
-## Summary
-
-dbt (data build tool) enables analytics engineers to transform data in their warehouses by simply writing select statements. dbt handles turning these select statements into tables and views.
-dbt does the T in ELT (Extract, Load, Transform) processes – it doesn’t extract or load data, but it’s extremely good at transforming data that’s already loaded into your warehouse.
-
-## User setup<a id="user-setup"></a>
-- If you plan to use this plugin, you need to set up the following environment first.
-
-#### Required Packages to Install<a id="user-setup-requirements"></a>
-- [python3.7+](https://www.python.org/downloads/)
-- [dbt-mysql](https://pypi.org/project/dbt-mysql/#configuring-your-profile)
-
-#### Commands to run or create in your terminal and the dbt project<a id="user-setup-commands"></a>
-1. pip install dbt-mysql
-2. dbt init demoapp (demoapp is the project name)
-3. create your SQL transformations and data models
-
-## Convert Data By Dbt
-
-Please use the raw JSON API to manually initiate a run using **cURL** or a graphical API tool such as **Postman**. `POST` the following request to the DevLake API endpoint.
-
-```json
-[
-  [
-    {
-      "plugin": "dbt",
-      "options": {
-          "projectPath": "/Users/abeizn/demoapp",
-          "projectName": "demoapp",
-          "projectTarget": "dev",
-          "selectedModels": ["my_first_dbt_model","my_second_dbt_model"],
-          "projectVars": {
-            "demokey1": "demovalue1",
-            "demokey2": "demovalue2"
-        }
-      }
-    }
-  ]
-]
-```
-
-- `projectPath`: the absolute path of the dbt project. (required)
-- `projectName`: the name of the dbt project. (required)
-- `projectTarget`: this is the default target your dbt project will use. (optional)
-- `selectedModels`: a model is a select statement. Models are defined in .sql files, typically in your models directory. (required)
-selectedModels accepts one or more arguments. Each argument can be one of:
-1. a package name: runs all models in your project, e.g. example
-2. a model name: runs a specific model, e.g. my_first_dbt_model
-3. a fully-qualified path to a directory of models.
-
-- `projectVars`: dbt provides a variables mechanism to provide data to models for compilation. (optional)
-For example, for select * from events where event_type = '{{ var("event_type") }}' in your model, you need to set `"projectVars": {"event_type": "real_value"}`.
-
-### Resources:
-- Learn more about dbt [in the docs](https://docs.getdbt.com/docs/introduction)
-- Check out [Discourse](https://discourse.getdbt.com/) for commonly asked questions and answers
-
+Please see details in the [Apache DevLake website](https://devlake.apache.org/docs/Plugins/dbt)
\ No newline at end of file
diff --git a/plugins/feishu/README.md b/plugins/feishu/README.md
index d5349c60..179984d0 100644
--- a/plugins/feishu/README.md
+++ b/plugins/feishu/README.md
@@ -1,102 +1 @@
-# Feishu
-
-<div align="center">
-
-| [English](README.md) | [中文](README-zh-CN.md) |
-| --- | --- |
-
-</div>
-
-<br>
-
-## Summary
-
-This plugin collects Feishu data through [Feishu Openapi](https://open.feishu.cn/document/home/user-identity-introduction/introduction).
-
-## Configuration
-
-In order to fully use this plugin, you will need to get the app_id and app_secret from a Feishu administrator (for help on app info, please see the [official Feishu Docs](https://open.feishu.cn/document/ukTMukTMukTM/ukDNz4SO0MjL5QzM/auth-v3/auth/tenant_access_token_internal)),
-then set these two configurations via DevLake's `.env`.
-
-### By `.env`
-
-The connection aspect of the configuration screen requires the following key fields to connect to the Feishu API. As Feishu is a single-source data provider at the moment, the connection name is read-only as there is only one instance to manage. As we continue our development roadmap we may enable multi-source connections for Feishu in the future.
-
-FEISHU_APPID=app_id
-
-FEISHU_APPSCRECT=app_secret
-
-
-## Collect Data From Feishu
-
-In order to collect data, compose a JSON payload like the following one and send it by selecting `Advanced Mode` on the `Create Pipeline Run` page:
-- numOfDaysToCollect: the number of days of data you want to collect
-- rateLimitPerSecond: the number of requests to send per second (maximum is 8)
-1. Configure-UI Mode
-```json
-[
-  [
-    {
-      "plugin": "feishu",
-      "options": {
-        "numOfDaysToCollect" : 80,
-        "rateLimitPerSecond" : 5
-      }
-    }
-  ]
-]
-```
-
-And if you want to perform only certain subtasks:
-```
-[
-  [
-    {
-      "plugin": "feishu",
-      "subtasks": ["collectXXX", "extractXXX", "convertXXX"],
-      "options": {
-        "numOfDaysToCollect" : 80,
-        "rateLimitPerSecond" : 5
-      }
-    }
-  ]
-]
-```
-
-2. Curl Mode:
-You can also trigger data collection by making a POST request to `/pipelines`.
-```
-curl --location --request POST 'localhost:8080/pipelines' \
---header 'Content-Type: application/json' \
---data-raw '
-{
-    "name": "feishu 20211126",
-    "tasks": [[{
-      "plugin": "feishu",
-      "options": {
-        "numOfDaysToCollect" : 80,
-        "rateLimitPerSecond" : 5
-      }
-    }]]
-}
-'
-```
-
-And if you want to perform only certain subtasks:
-```
-curl --location --request POST 'localhost:8080/pipelines' \
---header 'Content-Type: application/json' \
---data-raw '
-{
-    "name": "feishu 20211126",
-    "tasks": [[{
-      "plugin": "feishu",
-      "subtasks": ["collectXXX", "extractXXX", "convertXXX"],
-      "options": {
-        "numOfDaysToCollect" : 80,
-        "rateLimitPerSecond" : 5
-      }
-    }]]
-}
-'
-```
\ No newline at end of file
+Please see details in the [Apache DevLake website](https://devlake.apache.org/docs/Plugins/feishu)
\ No newline at end of file
diff --git a/plugins/gitee/README.md b/plugins/gitee/README.md
index cfb2b925..5ab84662 100644
--- a/plugins/gitee/README.md
+++ b/plugins/gitee/README.md
@@ -9,109 +9,4 @@
 
 <br>
 
-## Summary
-
-This plugin collects Gitee data through the [Gitee OpenAPI](https://gitee.com/api/v5/swagger).
-
-## Configuration
-
-### Provider (Datasource) Connection
-The connection aspect of the configuration screen requires the following key fields to connect to the **Gitee API**. As gitee is a _single-source data provider_ at the moment, the connection name is read-only as there is only one instance to manage. As we continue our development roadmap we may enable _multi-source_ connections for gitee in the future.
-
-- **Connection Name** [`READONLY`]
-    - ⚠️ Defaults to "**Gitee**" and may not be changed.
-- **Endpoint URL** (REST URL, starts with `https://` or `http://`)
-    - This should be a valid REST API Endpoint eg. `https://gitee.com/api/v5/`
-    - ⚠️ URL should end with `/`
-- **Auth Token(s)** (Personal Access Token)
-    - For help on **creating a personal access token**, please refer to the official Gitee documentation.
-    - Provide at least one token for authentication. This field accepts a comma-separated list of values for multiple tokens. Data collection will take longer for Gitee since it has a **rate limit of 2k requests per hour**. You can accelerate the process by configuring _multiple_ personal access tokens.
-
-If you need a higher API rate limit, you can set multiple tokens in the config file and all of them will be used.
-
-For an overview of the **gitee REST API**, please see official [gitee Docs on REST](https://gitee.com/api/v5/swagger)
-
-Click **Save Connection** to update connection settings.
-
-
-### Provider (Datasource) Settings
-Manage additional settings and options for the gitee Datasource Provider. Currently there is only one **optional** setting, *Proxy URL*. If you are behind a corporate firewall or VPN you may need to utilize a proxy server.
-
-**gitee Proxy URL [ `Optional`]**
-Enter a valid proxy server address on your Network, e.g. `http://your-proxy-server.com:1080`
-
-Click **Save Settings** to update additional settings.
-
-### Regular Expression Configuration
-Define the regex pattern in `.env`:
-- GITEE_PR_BODY_CLOSE_PATTERN: defines the pattern used to associate issues mentioned in the PR body; please check the example in `.env.example`
-
-## Sample Request
-In order to collect data, compose a JSON payload like the following one and send it by selecting `Advanced Mode` on the `Create Pipeline Run` page:
-1. Configure-UI Mode
-```json
-[
-  [
-    {
-      "plugin": "gitee",
-      "options": {
-        "repo": "lake",
-        "owner": "merico-dev"
-      }
-    }
-  ]
-]
-```
-And if you want to perform only certain subtasks:
-```json
-[
-  [
-    {
-      "plugin": "gitee",
-      "subtasks": ["collectXXX", "extractXXX", "convertXXX"],
-      "options": {
-        "repo": "lake",
-        "owner": "merico-dev"
-      }
-    }
-  ]
-]
-```
-
-2. Curl Mode:
-   You can also trigger data collection by making a POST request to `/pipelines`.
-```
-curl --location --request POST 'localhost:8080/pipelines' \
---header 'Content-Type: application/json' \
---data-raw '
-{
-    "name": "gitee 20211126",
-    "tasks": [[{
-        "plugin": "gitee",
-        "options": {
-            "repo": "lake",
-            "owner": "merico-dev"
-        }
-    }]]
-}
-'
-```
-And if you want to perform only certain subtasks:
-```
-curl --location --request POST 'localhost:8080/pipelines' \
---header 'Content-Type: application/json' \
---data-raw '
-{
-    "name": "gitee 20211126",
-    "tasks": [[{
-        "plugin": "gitee",
-        "subtasks": ["collectXXX", "extractXXX", "convertXXX"],
-        "options": {
-            "repo": "lake",
-            "owner": "merico-dev"
-        }
-    }]]
-}
-'
-```
+Please see details in the [Apache DevLake website](https://devlake.apache.org/docs/Plugins/gitee)
\ No newline at end of file
diff --git a/plugins/gitextractor/README.md b/plugins/gitextractor/README.md
index 9c18c416..059d99ff 100644
--- a/plugins/gitextractor/README.md
+++ b/plugins/gitextractor/README.md
@@ -1,110 +1 @@
-# Git Repo Extractor
-
-## Summary
-This plugin extracts commits and references from a remote or local git repository, then saves the data into the database or CSV files.
-
-## Steps to make this plugin work
-
-1. Use the Git repo extractor to retrieve commit-and-branch-related data from your repo
-2. Use the GitHub plugin to retrieve GitHub-issue-and-pr-related data from your repo. NOTE: you can run only the issue collection stage, as described in the GitHub plugin README.
-3. Use the [RefDiff](../refdiff) plugin to calculate version diffs, which will be stored in the `refs_commits_diffs` table.
-
-## Sample Request
-1. Configure-UI Mode
-In order to collect data, compose a JSON payload like the following one and send it by selecting `Advanced Mode` on the `Create Pipeline Run` page:
-```
-[
-  [
-    {
-      "Plugin": "gitextractor",
-      "Options": {
-        "url": "https://github.com/apache/incubator-devlake.git",
-        "repoId": "github:GithubRepo:384111310"
-      }
-    }
-  ]
-]
-```
-And if you want to perform only certain subtasks:
-```
-[
-  [
-    {
-      "plugin": "gitextractor",
-      "subtasks": ["collectXXX", "extractXXX", "convertXXX"],
-      "options": {
-        "url": "https://github.com/apache/incubator-devlake.git",
-        "repoId": "github:GithubRepo:384111310"
-      }
-    }
-  ]
-]
-```
-
-2. Curl Mode:
-You can also trigger data collection by making a POST request to `/pipelines`.
-```
-curl --location --request POST 'localhost:8080/pipelines' \
---header 'Content-Type: application/json' \
---data-raw '
-{
-    "name": "git repo extractor",
-    "tasks": [
-        [
-            {
-                "plugin": "gitextractor",
-                "options": {
-                    "url": "https://github.com/apache/incubator-devlake.git",
-                    "repoId": "github:GithubRepo:384111310"
-                }
-            }
-        ]
-    ]
-}
-'
-```
-And if you want to perform only certain subtasks:
-```
-curl --location --request POST 'localhost:8080/pipelines' \
---header 'Content-Type: application/json' \
---data-raw '
-{
-    "name": "git repo extractor",
-    "tasks": [
-        [
-            {
-                "plugin": "gitextractor",
-                "subtasks": ["collectXXX", "extractXXX", "convertXXX"],
-                "options": {
-                    "url": "https://github.com/apache/incubator-devlake.git",
-                    "repoId": "github:GithubRepo:384111310"
-                }
-            }
-        ]
-    ]
-}
-'
-```
-- `url`: the location of the git repository. It should start with `http`/`https` for remote git repository or `/` for a local one.
-- `repoId`: the value of column `id` in the `repos` table.
-- `proxy`: optional, http proxy, e.g. `http://your-proxy-server.com:1080`.
-- `user`: optional, for cloning private repository using HTTP/HTTPS
-- `password`: optional, for cloning private repository using HTTP/HTTPS
-- `privateKey`: optional, for SSH cloning, the base64-encoded `PEM` file (see the sketch after this list)
-- `passphrase`: optional, passphrase for the private key
-
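-For the `privateKey` option, the value is the base64 encoding of the entire `PEM` file, equivalent to `base64 < id_rsa`. A quick Go sketch (the file path is a placeholder):
-
-```golang
-package main
-
-import (
-	"encoding/base64"
-	"fmt"
-	"os"
-)
-
-func main() {
-	// Read the private key file and print its base64 encoding
-	pem, err := os.ReadFile("id_rsa")
-	if err != nil {
-		panic(err)
-	}
-	fmt.Println(base64.StdEncoding.EncodeToString(pem))
-}
-```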
-
-## Standalone Mode
-
-You can also run this plugin in standalone mode, without any DevLake service running, using the following command:
-
-```
-go run plugins/gitextractor/main.go -url https://github.com/apache/incubator-devlake.git -id github:GithubRepo:384111310 -db "merico:merico@tcp(127.0.0.1:3306)/lake?charset=utf8mb4&parseTime=True"
-```
-
-For more options (e.g., saving to a csv file instead of a db), please read `plugins/gitextractor/main.go`.
-
-## Development
-
-This plugin depends on `libgit2`; you need to install version 1.3.0 in order to run and debug this plugin on your
-local machine. [Click here](../refdiff#Development) for a brief guide.
+Please see details in the [Apache DevLake website](https://devlake.apache.org/docs/Plugins/gitextractor)
\ No newline at end of file
diff --git a/plugins/github/README.md b/plugins/github/README.md
index 847233ce..27306b27 100644
--- a/plugins/github/README.md
+++ b/plugins/github/README.md
@@ -1,147 +1 @@
-# Github Pond
-
-<div align="center">
-
-| [English](README.md) | [中文](README-zh-CN.md) |
-| --- | --- |
-
-</div>
-
-<br>
-
-## Summary
-
-This plugin gathers data from `GitHub` to display information to the user in `Grafana`. We can help tech leaders answer such questions as:
-
-- Is this month more productive than last?
-- How fast do we respond to customer requirements?
-- Was our quality improved or not?
-
-## Metrics
-
-Here are some examples of what we can use `GitHub` data to show:
-- Avg Requirement Lead Time By Assignee
-- Bug Count per 1k Lines of Code
-- Commit Count over Time
-
-## Screenshot
-
-![image](https://user-images.githubusercontent.com/27032263/141855099-f218f220-1707-45fa-aced-6742ab4c4286.png)
-
-
-## Configuration
-
-### Provider (Datasource) Connection
-The connection aspect of the configuration screen requires the following key fields to connect to the **GitHub API**. As GitHub is a _single-source data provider_ at the moment, the connection name is read-only as there is only one instance to manage. As we continue our development roadmap we may enable _multi-source_ connections for GitHub in the future.
-
-- **Connection Name** [`READONLY`]
-  - ⚠️ Defaults to "**Github**" and may not be changed.
-- **Endpoint URL** (REST URL, starts with `https://` or `http://`)
-  - This should be a valid REST API Endpoint eg. `https://api.github.com/`
-  - ⚠️ URL should end with `/`
-- **Auth Token(s)** (Personal Access Token)
-  - For help on **Creating a personal access token**, please see official [GitHub Docs on Personal Tokens](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/creating-a-personal-access-token)
-  - Provide at least one token for authentication. This field accepts a comma-separated list of values for multiple tokens. Data collection can take a while for GitHub because of its API rate limit (see the quote below); you can accelerate the process by configuring _multiple_ personal access tokens.
-    
-"For API requests using `Basic Authentication` or `OAuth`, you can make up to 5,000 requests per hour."
-
-- https://docs.github.com/en/rest/overview/resources-in-the-rest-api
-
-If you need a higher API rate limit, you can set multiple tokens in the config file and all of them will be used.
-
-NOTE: You can get 15,000 requests/hour per token if you pay for `GitHub` Enterprise.
-    
-For an overview of the **GitHub REST API**, please see official [GitHub Docs on REST](https://docs.github.com/en/rest)
-    
-Click **Save Connection** to update connection settings.
-    
-
-### Provider (Datasource) Settings
-Manage additional settings and options for the GitHub Datasource Provider. Currently there is only one **optional** setting, *Proxy URL*. If you are behind a corporate firewall or VPN you may need to utilize a proxy server.
-
-**GitHub Proxy URL [ `Optional`]**
-Enter a valid proxy server address on your Network, e.g. `http://your-proxy-server.com:1080`
-
-Click **Save Settings** to update additional settings.
-
-### Regular Expression Configuration
-Define regex patterns in the request options (see the sketch after this list):
-- prType: the pattern used to extract the PR type from PR labels
-- prComponent: the pattern used to extract the PR component from PR labels
-- prBodyClosePattern: the pattern used to extract the issues closed by the PR body
-- issueSeverity: the pattern used to extract the severity from issue labels
-- issuePriority: the pattern used to extract the priority from issue labels
-- issueComponent: the pattern used to extract the component from issue labels
-- issueTypeBug: the pattern that maps issue labels to the standard type "Bug"
-- issueTypeIncident: the pattern that maps issue labels to the standard type "Incident"
-- issueTypeRequirement: the pattern that maps issue labels to the standard type "Requirement"
-
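-These options are plain regular expressions applied to PR/issue fields. As a minimal illustration in Go (the pattern below is an assumption for demonstration, not necessarily the shipped default):
-
-```golang
-package main
-
-import (
-	"fmt"
-	"regexp"
-)
-
-func main() {
-	// A hypothetical close pattern in the spirit of prBodyClosePattern
-	closePattern := regexp.MustCompile(`(?i)\b(?:close[sd]?|fix(?:e[sd])?|resolve[sd]?)\s+#(\d+)`)
-	body := "This PR fixes #123 and closes #456."
-	for _, m := range closePattern.FindAllStringSubmatch(body, -1) {
-		fmt.Println("associated issue:", m[1]) // 123, then 456
-	}
-}
-```
-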
-## Sample Request
-In order to collect data, compose a JSON payload like the following one and send it by selecting `Advanced Mode` on the `Create Pipeline Run` page:
-1. Configure-UI Mode
-```json
-[
-  [
-    {
-      "plugin": "github",
-      "options": {
-        "repo": "lake",
-        "owner": "merico-dev"
-        // add more config such as prType if necessary.
-      }
-    }
-  ]
-]
-```
-And if you want to perform only certain subtasks:
-```json
-[
-  [
-    {
-      "plugin": "github",
-      "subtasks": ["collectXXX", "extractXXX", "convertXXX"],
-      "options": {
-        "repo": "lake",
-        "owner": "merico-dev"
-      }
-    }
-  ]
-]
-```
-
-2. Curl Mode:
-You can also trigger data collection by making a POST request to `/pipelines`.
-```
-curl --location --request POST 'localhost:8080/pipelines' \
---header 'Content-Type: application/json' \
---data-raw '
-{
-    "name": "github 20211126",
-    "tasks": [[{
-        "plugin": "github",
-        "options": {
-            "repo": "lake",
-            "owner": "merico-dev"
-        }
-    }]]
-}
-'
-```
-And if you want to perform only certain subtasks:
-```
-curl --location --request POST 'localhost:8080/pipelines' \
---header 'Content-Type: application/json' \
---data-raw '
-{
-    "name": "github 20211126",
-    "tasks": [[{
-        "plugin": "github",
-        "subtasks": ["collectXXX", "extractXXX", "convertXXX"],
-        "options": {
-            "repo": "lake",
-            "owner": "merico-dev"
-        }
-    }]]
-}
-'
-```
+Please see details in the [Apache DevLake website](https://devlake.apache.org/docs/Plugins/github)
\ No newline at end of file
diff --git a/plugins/gitlab/README.md b/plugins/gitlab/README.md
index 01053513..0bc01714 100644
--- a/plugins/gitlab/README.md
+++ b/plugins/gitlab/README.md
@@ -1,140 +1 @@
-# Gitlab Pond
-
-<div align="center">
-
-| [English](README.md) | [中文](README-zh-CN.md) |
-| --- | --- |
-
-</div>
-
-<br>
-
-## Metrics
-
-Metric Name | Description
-:------------ | :-------------
-Pull Request Count | Number of Pull/Merge Requests
-Pull Request Pass Rate | Ratio of merged pull/merge requests to all pull/merge requests
-Pull Request Reviewer Count | Number of Pull/Merge Reviewers
-Pull Request Review Time | Time from Pull/Merge created time until merged
-Commit Author Count | Number of Contributors
-Commit Count | Number of Commits
-Added Lines | Accumulated Number of New Lines
-Deleted Lines | Accumulated Number of Removed Lines
-Pull Request Review Rounds | Number of cycles of commits followed by comments/final merge
-
-## Configuration
-
-### Provider (Datasource) Connection
-The connection aspect of the configuration screen requires the following key fields to connect to the **GitLab API**. As GitLab is a _single-source data provider_ at the moment, the connection name is read-only as there is only one instance to manage. As we continue our development roadmap we may enable _multi-source_ connections for GitLab in the future.
-
-- **Connection Name** [`READONLY`]
-  - ⚠️ Defaults to "**Gitlab**" and may not be changed.
-- **Endpoint URL** (REST URL, starts with `https://` or `http://`)
-  - This should be a valid REST API Endpoint eg. `https://gitlab.example.com/api/v4/`
-  - ⚠️ URL should end with `/`
-- **Personal Access Token** (HTTP Basic Auth)
-  - Login to your GitLab account and create a **Personal Access Token** to authenticate with the API using HTTP Basic Authentication. The token must be 20 characters long. Save the personal access token somewhere safe; after you leave the page, you no longer have access to the token.
-
-    1. In the top-right corner, select your **avatar**.
-    2. Select **Edit profile**.
-    3. On the left sidebar, select **Access Tokens**.
-    4. Enter a **name** and optional **expiry date** for the token.
-    5. Select the desired **scopes**.
-    6. Select **Create personal access token**.
-
-For help on **Creating a personal access token**, please see official [GitLab Docs on Personal Tokens](https://docs.gitlab.com/ee/user/profile/personal_access_tokens.html)
-    
-For an overview of the **GitLab REST API**, please see official [GitLab Docs on REST](https://docs.gitlab.com/ee/development/documentation/restful_api_styleguide.html#restful-api)
-    
-Click **Save Connection** to update connection settings.
-    
-### Provider (Datasource) Settings
-There are no additional settings for the GitLab Datasource Provider at this time.
-NOTE: the `GitLab Project ID` mappings feature has been deprecated.
-
-## Gathering Data with Gitlab
-In order to collect data, compose a JSON payload like the following one and send it by selecting `Advanced Mode` on the `Create Pipeline Run` page:
-1. Configure-UI Mode
-```json
-[
-  [
-    {
-      "plugin": "gitlab",
-      "options": {
-        "projectId": <Your gitlab project id>
-      }
-    }
-  ]
-]
-```
-And if you want to perform only certain subtasks:
-```json
-[
-  [
-    {
-      "plugin": "gitlab",
-      "subtasks": ["collectXXX", "extractXXX", "convertXXX"],
-      "options": {
-        "projectId": <Your gitlab project id>
-      }
-    }
-  ]
-]
-```
-
-2. Curl Mode:
-You can also trigger data collection by making a POST request to `/pipelines`.
-```
-curl --location --request POST 'localhost:8080/pipelines' \
---header 'Content-Type: application/json' \
---data-raw '
-{
-    "name": "gitlab 20211126",
-    "tasks": [[{
-        "plugin": "gitlab",
-        "options": {
-            "projectId": <Your gitlab project id>
-        }
-    }]]
-}
-'
-```
-And if you want to perform only certain subtasks:
-```
-curl --location --request POST 'localhost:8080/pipelines' \
---header 'Content-Type: application/json' \
---data-raw '
-{
-    "name": "gitlab 20211126",
-    "tasks": [[{
-        "plugin": "gitlab",
-        "subtasks": ["collectXXX", "extractXXX", "convertXXX"],
-        "options": {
-            "projectId": <Your gitlab project id>
-        }
-    }]]
-}
-'
-```
-
-
-## Finding Project Id
-
-To get the project id for a specific `Gitlab` repository:
-- Visit the repository page on gitlab
-- Find the project id just below the title
-
-  ![Screen Shot 2021-08-06 at 4 32 53 PM](https://user-images.githubusercontent.com/3789273/128568416-a47b2763-51d8-4a6a-8a8b-396512bffb03.png)
-
-> Use this project id in your requests to collect data from this project
-
-## ⚠️ (WIP) Create a Gitlab API Token <a id="gitlab-api-token"></a>
-
-1. When logged into `Gitlab` visit `https://gitlab.com/-/profile/personal_access_tokens`
-2. Give the token any name, no expiration date and all scopes (excluding write access)
-
-    ![Screen Shot 2021-08-06 at 4 44 01 PM](https://user-images.githubusercontent.com/3789273/128569148-96f50d4e-5b3b-4110-af69-a68f8d64350a.png)
-
-3. Click the **Create Personal Access Token** button
-4. Save the API token into the `.env` file via `config-ui`, or edit the file directly.
+Please see details in the [Apache DevLake website](https://devlake.apache.org/docs/Plugins/gitlab)
\ No newline at end of file
diff --git a/plugins/jenkins/README.md b/plugins/jenkins/README.md
index 95e89962..82675fc1 100644
--- a/plugins/jenkins/README.md
+++ b/plugins/jenkins/README.md
@@ -1,105 +1 @@
-# Jenkins
-
-## Summary
-
-This plugin collects Jenkins data through the [Remote Access API](https://www.jenkins.io/doc/book/using/remote-access-api/). It then computes and visualizes various DevOps metrics from the Jenkins data.
-
-![image](https://user-images.githubusercontent.com/61080/141943122-dcb08c35-cb68-4967-9a7c-87b63c2d6988.png)
-
-## Metrics
-
-| Metric Name        | Description                         |
-| :----------------- | :---------------------------------- |
-| Build Count        | The number of builds created        |
-| Build Success Rate | The percentage of successful builds |
-
-## Configuration
-
-In order to fully use this plugin, you will need to set various configurations via DevLake's `config-ui`.
-
-### By `config-ui`
-
-The connection aspect of the configuration screen requires the following key fields to connect to the Jenkins API. As Jenkins is a single-source data provider at the moment, the connection name is read-only as there is only one instance to manage. As we continue our development roadmap we may enable multi-source connections for Jenkins in the future.
-
-- Connection Name [READONLY]
-  - ⚠️ Defaults to "Jenkins" and may not be changed.
-- Endpoint URL (REST URL, starts with `https://` or `http://`, ends with `/`)
-  - This should be a valid REST API Endpoint eg. `https://ci.jenkins.io/`
-- Username (E-mail)
-  - Your User ID for the Jenkins Instance.
-- Password (Secret Phrase or API Access Token)
-  - Secret password for common credentials.
-  - For help on Username and Password, please see official Jenkins Docs on Using Credentials
-  - Or you can use **API Access Token** for this field, which can be generated at `User` -> `Configure` -> `API Token` section on Jenkins.
-
-Click Save Connection to update connection settings.
-
-## Collect Data From Jenkins
-In order to collect data, compose a JSON payload like the following one and send it by selecting `Advanced Mode` on the `Create Pipeline Run` page:
-1. Configure-UI Mode
-```json
-[
-  [
-    {
-      "plugin": "jenkins",
-      "options": {
-         "connectionId": 1
-      }
-    }
-  ]
-]
-```
-And if you want to perform only certain subtasks:
-```json
-[
-  [
-    {
-      "plugin": "jenkins",
-      "subtasks": ["collectXXX", "extractXXX", "convertXXX"],
-      "options": {
-        "connectionId": 1
-      }
-    }
-  ]
-]
-```
-
-2. Curl Mode:
-You can also trigger data collection by making a POST request to `/pipelines`.
-```
-curl --location --request POST 'localhost:8080/pipelines' \
---header 'Content-Type: application/json' \
---data-raw '
-{
-    "name": "jenkins 20211126",
-    "tasks": [[{
-        "plugin": "jenkins",
-        "options": {
-          "connectionId": 1
-        }
-    }]]
-}
-'
-```
-And if you want to perform only certain subtasks:
-```
-curl --location --request POST 'localhost:8080/pipelines' \
---header 'Content-Type: application/json' \
---data-raw '
-{
-    "name": "jenkins 20211126",
-    "tasks": [[{
-        "plugin": "jenkins",
-        "subtasks": ["collectXXX", "extractXXX", "convertXXX"],
-        "options": {
-          "connectionId": 1
-        }
-    }]]
-}
-'
-```
-
-
-## Relationship between job and build
-
-A build is a snapshot of a job: each time a job runs, a build is created.
\ No newline at end of file
+Please see details in the [Apache DevLake website](https://devlake.apache.org/docs/Plugins/jenkins)
\ No newline at end of file
diff --git a/plugins/jira/README.md b/plugins/jira/README.md
index 6b133d63..86210504 100644
--- a/plugins/jira/README.md
+++ b/plugins/jira/README.md
@@ -1,310 +1 @@
-# Jira
-
-<div align="center">
-
-| [English](README.md) | [中文](README-zh-CN.md) |
-| --- | --- |
-
-</div>
-
-<br>
-
-## Summary
-
-This plugin collects Jira data through the Jira Cloud REST API. It then computes and visualizes various engineering metrics from the Jira data.
-
-<img width="2035" alt="Screen Shot 2021-09-10 at 4 01 55 PM" src="https://user-images.githubusercontent.com/2908155/132926143-7a31d37f-22e1-487d-92a3-cf62e402e5a8.png">
-
-## Project Metrics This Covers
-
-Metric Name | Description
-:------------ | :-------------
-Requirement Count	| Number of issues with type "Requirement"
-Requirement Lead Time	| Lead time of issues with type "Requirement"
-Requirement Delivery Rate |	Ratio of delivered requirements to all requirements
-Requirement Granularity | Number of story points associated with an issue
-Bug Count	| Number of issues with type "Bug"<br><i>bugs are found during testing</i>
-Bug Age	| Lead time of issues with type "Bug"
-Bugs Count per 1k Lines of Code |	Amount of bugs per 1000 lines of code<br><i>both new and deleted lines count</i>
-Incident Count | Number of issues with type "Incident"<br><i>incidents are found when running in production</i>
-Incident Age | Lead time of issues with type "Incident"
-Incident Count per 1k Lines of Code | Amount of incidents per 1000 lines of code
-
-## Configuration
-
-In order to fully use this plugin, you will need to set various configurations via DevLake's `config-ui` service. Open `config-ui` in the browser (by default the URL is http://localhost:4000), then go to the **Data Integrations / JIRA** page. The JIRA plugin currently supports multiple data connections; here you can **add** a new connection or **update** the settings of an existing one if needed.
-
-For each connection, you will need to set up the following items:
-
-- Connection Name: This allows you to distinguish between different connections.
-- Endpoint URL: The JIRA instance API endpoint; for the JIRA Cloud Service it would be `https://<mydomain>.atlassian.net/rest`. DevLake officially supports the JIRA Cloud Service on atlassian.net; it may or may not work for a JIRA Server instance.
-- Basic Auth Token: First, generate a **JIRA API TOKEN** for your JIRA account on the JIRA console (see [Generating API token](#generating-api-token)); then, in `config-ui`, click the KEY icon on the right side of the input to generate a full `HTTP BASIC AUTH` token for you (see the sketch after this list).
-- Issue Type Mapping: JIRA is highly customizable, and each JIRA instance may have a different set of issue types than others. In order to compute and visualize metrics across instances, you need to map your issue types to the standard ones. See [Issue Type Mapping](#issue-type-mapping) for details.
-- Epic Key: Unfortunately, the epic relationship in JIRA is implemented via a `custom field`, which varies from instance to instance. Please see [Find Out Custom Fields](#find-out-custom-fields).
-- Story Point Field: Same as Epic Key.
-- Remotelink Commit SHA: A regular expression that matches commit links, used to determine whether an external link is a link to a commit. Taking GitLab as an example, to match all commits similar to https://gitlab.com/merico-dev/ce/example-repository/-/commit/8ab8fb319930dbd8615830276444b8545fd0ad24, you can directly use the regular expression **/commit/([0-9a-f]{40})$**
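-
-The basic auth token above is simply the base64 encoding of `<jira login email>:<jira token>`, the same value produced by the `echo -n <email>:<token> | base64` command shown in the API section below. A small Go sketch of the equivalent (credentials are placeholders):
-
-```golang
-package main
-
-import (
-	"encoding/base64"
-	"fmt"
-)
-
-func main() {
-	email, token := "user@example.com", "<your jira api token>"
-	// Equivalent to: echo -n <email>:<token> | base64
-	fmt.Println(base64.StdEncoding.EncodeToString([]byte(email + ":" + token)))
-}
-```
-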
-### Generating API token
-1. Once logged into Jira, visit the url `https://id.atlassian.com/manage-profile/security/api-tokens`
-2. Click the **Create API Token** button, and give it any label name
-![image](https://user-images.githubusercontent.com/27032263/129363611-af5077c9-7a27-474a-a685-4ad52366608b.png)
-
-
-### Issue Type Mapping
-
-DevLake supports 3 standard types; all metrics are computed based on these types:
-
- - `Bug`: Problems found during the `test` phase, before they reach the production environment.
- - `Incident`: Problems that made it through the `test` phase and were found in the production environment.
- - `Requirement`: Normally this would be `Story` on your instance if you have adopted SCRUM.
-
-You may map any of **YOUR OWN ISSUE TYPES** to a single **STANDARD ISSUE TYPE**. Normally one would map `Story` to `Requirement`, but you could map both `Story` and `Task` to `Requirement` if that suits your case. Unspecified types are copied through as the standard type directly for your convenience, so you don't need to map your `Bug` to the standard `Bug`.
-
-Type mapping is critical for some metrics, like **Requirement Count**, so make sure to map your custom types correctly.
-
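-A minimal sketch of this mapping rule (a hypothetical helper, not actual plugin code):
-
-```golang
-// mapIssueType applies a user-defined type mapping; unspecified types
-// are copied through as-is, per the rule described above.
-func mapIssueType(userType string, mappings map[string]string) string {
-	if standardType, ok := mappings[userType]; ok {
-		return standardType
-	}
-	return userType
-}
-```
-
-With `{"Story": "Requirement", "Task": "Requirement"}`, both `Story` and `Task` become `Requirement`, while an unmapped `Bug` passes through unchanged.
-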
-### Find Out Custom Fields
-
-Please follow this guide: [How to find the custom field ID in Jira? · merico-dev/lake Wiki](https://github.com/apache/incubator-devlake/wiki/How-to-find-the-custom-field-ID-in-Jira)
-
-## Collect Data From JIRA
-
-In order to collect data, compose a JSON payload like the following one and send it by selecting `Advanced Mode` on the `Create Pipeline Run` page:
-<font color="red">Warning: Data collection only supports single-task execution; the results of concurrent multi-task execution may not meet expectations.</font>
-
-1. Configure-UI Mode:
-```json
-[
-  [
-    {
-      "plugin": "jira",
-      "options": {
-          "connectionId": 1,
-          "boardId": 8,
-          "since": "2006-01-02T15:04:05Z"
-      }
-    }
-  ]
-]
-```
-And if you want to perform only certain subtasks:
-```json
-[
-  [
-    {
-      "plugin": "jira",
-      "subtasks": ["collectXXX", "extractXXX", "convertXXX"],
-      "options": {
-          "connectionId": 1,
-          "boardId": 8,
-          "since": "2006-01-02T15:04:05Z"
-      }
-    }
-  ]
-]
-```
-
-2. Curl Mode:
-```
-curl --location --request POST 'localhost:8080/pipelines' \
---header 'Content-Type: application/json' \
---data-raw '
-{
-    "name": "jenkins 20211126",
-    "tasks": [
-  [
-    {
-      "plugin": "jira",
-      "options": {
-          "connectionId": 1,
-          "boardId": 8,
-          "since": "2006-01-02T15:04:05Z"
-      }
-    }
-  ]
-]
-}
-'
-```
-And if you want to perform only certain subtasks:
-```
-curl --location --request POST 'localhost:8080/pipelines' \
---header 'Content-Type: application/json' \
---data-raw '
-{
-    "name": "jenkins 20211126",
-    "tasks": [
-  [
-    {
-      "plugin": "jira",
-      "subtasks": ["collectXXX", "extractXXX", "convertXXX"],
-      "options": {
-          "connectionId": 1,
-          "boardId": 8,
-          "since": "2006-01-02T15:04:05Z"
-      }
-    }
-  ]
-]
-}
-'
-```
-
-
-- `connectionId`: The `ID` field from **JIRA Integration** page.
-- `boardId`: JIRA board id, see [Find Board Id](#find-board-id) for detail.
-- `since`: optional, download data since the specified date/time only.
-
-### Find Board Id
-
-1. Navigate to the Jira board in the browser
-2. In the URL bar, get the board id from the `?rapidView=` parameter
-
-**Example:**
-
-`https://<your_jira_endpoint>/secure/RapidBoard.jspa?rapidView=51`
-
-![Screen Shot 2021-08-13 at 10 07 19 AM](https://user-images.githubusercontent.com/27032263/129363083-df0afa18-e147-4612-baf9-d284a8bb7a59.png)
-
-Your board id is used in all REST requests to DevLake. You do not need to configure this at the data connection level.
-
-
-## API
-
-### Data Connections Management
-
-#### Data Connections
-
-- Get all data connections
-```
-GET /plugins/jira/connections
-
-
-[
-  {
-    "ID": 14,
-    "CreatedAt": "2021-10-11T11:49:19.029Z",
-    "UpdatedAt": "2021-10-11T11:49:19.029Z",
-    "name": "test-jira-connection",
-    "endpoint": "https://merico.atlassian.net/rest",
-    "basicAuthEncoded": "basicAuth",
-    "epicKeyField": "epicKeyField",
-    "storyPointField": "storyPointField",
-  }
-]
-```
-- Create a new data connection
-```
-POST /plugins/jira/connections
-{
-	"name": "jira data connection name",
-	"endpoint": "jira api endpoint, i.e. https://merico.atlassian.net/rest",
-	"basicAuthEncoded": "generated by `echo -n <jira login email>:<jira token> | base64`",
-	"epicKeyField": "name of customfield of epic key",
-	"storyPointField": "name of customfield of story point",
-	"typeMappings": { // optional, send empty object to delete all typeMappings of the data connection
-		"userType": {
-			"standardType": "devlake standard type"
-		}
-	}
-}
-```
-- Update data connection
-```
-PUT /plugins/jira/connections/:connectionId
-{
-	"name": "jira data connection name",
-	"endpoint": "jira api endpoint, i.e. https://merico.atlassian.net/rest",
-	"basicAuthEncoded": "generated by `echo -n <jira login email>:<jira token> | base64`",
-	"epicKeyField": "name of customfield of epic key",
-	"storyPointField": "name of customfield of story point",
-	"typeMappings": { // optional, send empty object to delete all typeMappings of the data connection
-		"userType": {
-			"standardType": "devlake standard type",
-		}
-	}
-}
-```
-- Get data connection detail
-```
-GET /plugins/jira/connections/:connectionId
-
-
-{
-	"name": "jira data connection name",
-	"endpoint": "jira api endpoint, i.e. https://merico.atlassian.net/rest",
-	"basicAuthEncoded": "generated by `echo -n <jira login email>:<jira token> | base64`",
-	"epicKeyField": "name of customfield of epic key",
-	"storyPointField": "name of customfield of story point",
-	"typeMappings": { // optional, send empty object to delete all typeMappings of the data connection
-		"userType": {
-			"standardType": "devlake standard type",
-		}
-	}
-}
-```
-- Delete data connection
-```
-DELETE /plugins/jira/connections/:connectionId
-```
-
-#### Type mappings
-
-- Get all type mappings
-```
-GET /plugins/jira/connections/:connectionId/type-mappings
-
-
-[
-  {
-    "jiraConnectionId": 16,
-    "userType": "userType",
-    "standardType": "standardType"
-  }
-]
-```
-- Create a new type mapping
-```
-POST /plugins/jira/connections/:connectionId/type-mappings
-{
-    "userType": "userType",
-    "standardType": "standardType"
-}
-```
-- Update type mapping
-```
-PUT /plugins/jira/connections/:connectionId/type-mapping/:userType
-{
-    "standardType": "standardTypeUpdated"
-}
-```
-- Delete type mapping
-```
-DELETE /plugins/jira/connections/:connectionId/type-mapping/:userType
-```
-- API forwarding
-```
-GET /plugins/jira/connections/:connectionId/proxy/rest/*path
-
-For example:
-Requests to http://your_devlake_host/plugins/jira/connections/1/proxy/rest/agile/1.0/board/8/sprint
-would forward to
-https://your_jira_host/rest/agile/1.0/board/8/sprint
-
-{
-    "maxResults": 1,
-    "startAt": 0,
-    "isLast": false,
-    "values": [
-        {
-            "id": 7,
-            "self": "https://merico.atlassian.net/rest/agile/1.0/sprint/7",
-            "state": "closed",
-            "name": "EE Sprint 7",
-            "startDate": "2020-06-12T00:38:51.882Z",
-            "endDate": "2020-06-26T00:38:00.000Z",
-            "completeDate": "2020-06-22T05:59:58.980Z",
-            "originBoardId": 8,
-            "goal": ""
-        }
-    ]
-}
-```
+Please see details in the [Apache DevLake website](https://devlake.apache.org/docs/Plugins/jira)
\ No newline at end of file
diff --git a/plugins/refdiff/README.md b/plugins/refdiff/README.md
index 1db6a4d2..a2b7f0bf 100644
--- a/plugins/refdiff/README.md
+++ b/plugins/refdiff/README.md
@@ -1,207 +1 @@
-# RefDiff
-## Summary
-
-For development workload analysis, we often need to know how many commits have been created between two releases. This plugin offers the ability to calculate the commit difference between two refs (branches/tags); the result will be stored back into the database for further analysis.
-
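-Conceptually, the result is the commit set `oldRef..newRef` in git terms: commits reachable from the new ref but not from the old one. A simplified sketch of that set calculation (assuming an in-memory parent map, not the plugin's actual implementation):
-
-```golang
-// commitsDiff returns the commits reachable from newRef but not from oldRef,
-// walking a parent adjacency map (sha -> parent shas).
-func commitsDiff(parents map[string][]string, newRef, oldRef string) []string {
-	reachable := func(start string) map[string]bool {
-		seen := map[string]bool{}
-		stack := []string{start}
-		for len(stack) > 0 {
-			sha := stack[len(stack)-1]
-			stack = stack[:len(stack)-1]
-			if seen[sha] {
-				continue
-			}
-			seen[sha] = true
-			stack = append(stack, parents[sha]...)
-		}
-		return seen
-	}
-	old := reachable(oldRef)
-	var diff []string
-	for sha := range reachable(newRef) {
-		if !old[sha] {
-			diff = append(diff, sha)
-		}
-	}
-	return diff
-}
-```
-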
-## Important Note
-
-You need to run gitextractor before the refdiff plugin. The gitextractor plugin should create records in the `refs` table in your DB before this plugin can be run.
-
-## Configuration
-
-This is an enrichment plugin based on domain layer data; no configuration is needed.
-
-## How to use
-
-In order to trigger the enrichment, you need to insert a new task into your pipeline.
-
-1. Make sure `commits` and `refs` are collected into your database; the `refs` table should contain records like the following:
-```
-id                                            ref_type
-github:GithubRepo:384111310:refs/tags/0.3.5   TAG
-github:GithubRepo:384111310:refs/tags/0.3.6   TAG
-github:GithubRepo:384111310:refs/tags/0.5.0   TAG
-github:GithubRepo:384111310:refs/tags/v0.0.1  TAG
-github:GithubRepo:384111310:refs/tags/v0.2.0  TAG
-github:GithubRepo:384111310:refs/tags/v0.3.0  TAG
-github:GithubRepo:384111310:refs/tags/v0.4.0  TAG
-github:GithubRepo:384111310:refs/tags/v0.6.0  TAG
-github:GithubRepo:384111310:refs/tags/v0.6.1  TAG
-```
-2. If you want to run calculateIssuesDiff, please configure GITHUB_PR_BODY_CLOSE_PATTERN in .env; you can check the example in .env.example (we have a default value; please make sure your pattern is enclosed in single quotes '')
-3. If you want to run calculatePrCherryPick, please configure GITHUB_PR_TITLE_PATTERN in .env; you can check the example in .env.example (we have a default value; please make sure your pattern is enclosed in single quotes '')
-4. Then trigger a pipeline like the following. You can also define sub tasks: calculateRefDiff calculates the commits between two refs, and creatRefBugStats creates a table showing the list of bugs between two refs.
-   
-In order to collect data, compose a JSON payload like the following one and send it by selecting `Advanced Mode` on the `Create Pipeline Run` page:
-1. Configure-UI Mode:
-```json
-[
-  [
-    {
-      "plugin": "refdiff",
-      "options": {
-        "repoId": "github:GithubRepo:384111310",
-        "pairs": [
-          {
-            "newRef": "refs/tags/v0.6.0",
-            "oldRef": "refs/tags/0.5.0"
-          },
-          {
-            "newRef": "refs/tags/0.5.0",
-            "oldRef": "refs/tags/0.4.0"
-          }
-        ]
-      }
-    }
-  ]
-]
-```
-And if you want to perform only certain subtasks:
-```json
-[
-  [
-    {
-      "plugin": "refdiff",
-      "subtasks": [
-        "calculateCommitsDiff",
-        "calculateIssuesDiff",
-        "calculatePrCherryPick"
-      ],
-      "options": {
-        "repoId": "github:GithubRepo:384111310",
-        "pairs": [
-          {
-            "newRef": "refs/tags/v0.6.0",
-            "oldRef": "refs/tags/0.5.0"
-          },
-          {
-            "newRef": "refs/tags/0.5.0",
-            "oldRef": "refs/tags/0.4.0"
-          }
-        ]
-      }
-    }
-  ]
-]
-```
-Or you can use tagsPattern to match the tags you want.
-You can also use tagOrder (supports `alphabetically`, `reverse alphabetically`, `semver`, `reverse semver`) to set the ordering rule, together with tagLimit to limit the number of matches.
-This is supported by calculateCommitsDiff and calculateIssuesDiff:
-```json
-[
-  [
-    {
-      "plugin": "refdiff",
-      "subtasks": [
-        "calculateCommitsDiff",
-        "calculateIssuesDiff",
-      ],
-      "options": {
-        "repoId": "github:GithubRepo:384111310",
-        "tagsPattern":".*\\.11\\..*",
-        "tagLimit":3,
-        "tagOrder":"reverse semver",
-      }
-    }
-  ]
-]
-```
-
-2. Curl Mode:
-```
-curl --location --request POST 'localhost:8080/pipelines' \
---header 'Content-Type: application/json' \
---data-raw '
-{
-    "name": "test-refdiff",
-    "tasks": [
-        [
-            {
-                "plugin": "refdiff",
-                "options": {
-                    "repoId": "github:GithubRepo:384111310",
-                    "pairs": [
-                       { "newRef": "refs/tags/v0.6.0", "oldRef": "refs/tags/0.5.0" },
-                       { "newRef": "refs/tags/0.5.0", "oldRef": "refs/tags/0.4.0" }
-                    ]
-                }
-            }
-        ]
-    ]
-}'
-```
-And if you want to perform only certain subtasks:
-```
-curl --location --request POST 'localhost:8080/pipelines' \
---header 'Content-Type: application/json' \
---data-raw '
-{
-    "name": "test-refdiff",
-    "tasks": [
-        [
-            {
-                "plugin": "refdiff",
-                "subtasks": [
-                    "calculateCommitsDiff",
-                    "calculateIssuesDiff",
-                    "calculatePrCherryPick"
-                ],
-                "options": {
-                    "repoId": "github:GithubRepo:384111310",
-                    "pairs": [
-                       { "newRef": "refs/tags/v0.6.0", "oldRef": "refs/tags/0.5.0" },
-                       { "newRef": "refs/tags/0.5.0", "oldRef": "refs/tags/0.4.0" }
-                    ]
-                }
-            }
-        ]
-    ]
-}'
-```
-
-## Development
-
-This plugin depends on `libgit2`; you need to install version 1.3.0 in order to run and debug this plugin on your
-local machine.
-
-### Ubuntu
-
-```
-apt install cmake
-git clone https://github.com/libgit2/libgit2.git
-cd libgit2
-git checkout v1.3.0
-mkdir build
-cd build
-cmake ..
-make
-make install
-```
-
-### macOS
-
-```
-brew install cmake
-git clone https://github.com/libgit2/libgit2.git
-cd libgit2
-git checkout v1.3.0
-mkdir build
-cd build
-cmake ..
-make
-make install
-```
-
-Troubleshooting (macOS)
-
-Q: I got an error saying: `pkg-config: exec: "pkg-config": executable file not found in $PATH`
-
-A:
-
-1. Make sure you have pkg-config installed:
-
-  `brew install pkg-config`
-
-2. Make sure your pkg config path covers the installation: 
-
-  `export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/local/lib:/usr/local/lib/pkgconfig`
+Please see details in the [Apache DevLake website](https://devlake.apache.org/docs/Plugins/refdiff)
\ No newline at end of file