Posted to commits@devlake.apache.org by zk...@apache.org on 2022/08/03 07:28:29 UTC

[incubator-devlake-website] 06/09: updated in personal pronoun

This is an automated email from the ASF dual-hosted git repository.

zky pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/incubator-devlake-website.git

commit 3c15eb46ee1b042bedbc8434aed3547647815cf6
Author: linyh <ya...@meri.co>
AuthorDate: Wed Jun 29 20:58:24 2022 +0800

    updated in personal pronoun
---
 docs/09-DeveloperDoc/e2e-test-writing-guide.md     | 41 ++++++++++++----------
 .../09-DeveloperDoc/e2e-test-writing-guide.md      |  7 ++--
 2 files changed, 26 insertions(+), 22 deletions(-)

diff --git a/docs/09-DeveloperDoc/e2e-test-writing-guide.md b/docs/09-DeveloperDoc/e2e-test-writing-guide.md
index 2f45648a..5e8ea162 100644
--- a/docs/09-DeveloperDoc/e2e-test-writing-guide.md
+++ b/docs/09-DeveloperDoc/e2e-test-writing-guide.md
@@ -7,7 +7,7 @@ In DevLake, E2E testing consists of interface testing and input/output result va
 
 ## Preparing data
 
-Let's take a simple plugin - Flybook Meeting Hours Collection as an example here. Its directory structure looks like this.
+Let's take a simple plugin, Feishu Meeting Hours Collection, as an example here. Its directory structure looks like this.
 ![image](https://user-images.githubusercontent.com/3294100/175061114-53404aac-16ca-45d1-a0ab-3f61d84922ca.png)
 Next, we will write the E2E tests of the sub-tasks.
 
@@ -40,7 +40,7 @@ Next, we need to export the data to .csv format. This step is a variety of optio
 
 ### DevLake Code Generator Export
 
-Run `go run generator/main.go create-e2e-raw` directly and follow the guidelines to complete the export. This solution is the simplest, but has some limitations, such as the exported fields are fixed. If you need more customization options, you can refer to next solutions.
+Run `go run generator/main.go create-e2e-raw` directly and follow the guidelines to complete the export. This solution is the simplest, but has some limitations, such as the exported fields being fixed. You can refer to the next solutions if you need more customization options.
 
 ![usage](https://user-images.githubusercontent.com/3294100/175849225-12af5251-6181-4cd9-ba72-26087b05ee73.gif)
 
@@ -48,7 +48,7 @@ Run `go run generator/main.go create-e2e-raw` directly and follow the guidelines
 
 ![image](https://user-images.githubusercontent.com/3294100/175067303-7e5e1c4d-2430-4eb5-ad00-e38d86bbd108.png)
 
-This solution is very easy to use and will not cause any problems using Postgres or MySQL.
+This solution is very easy to use and causes no problems with either Postgres or MySQL.
 ![image](https://user-images.githubusercontent.com/3294100/175068178-f1c1c290-e043-4672-b43e-54c4b954c685.png)
 The success criterion for csv export is that the Go program can read it without errors, so several points are worth noticing.
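One quick way to verify that criterion in isolation is to read the exported file with Go's standard `encoding/csv` package. This sketch (not a DevLake helper; the function name is illustrative) will surface unbalanced quotes or inconsistent field counts:

```go
package main

import (
	"encoding/csv"
	"fmt"
	"strings"
)

// checkCsv parses csv content and reports how many records it holds.
// csv.Reader enforces that every record has the same number of fields
// as the first one, and fails on unbalanced quotes.
func checkCsv(content string) (rows int, err error) {
	r := csv.NewReader(strings.NewReader(content))
	records, err := r.ReadAll()
	if err != nil {
		return 0, err
	}
	return len(records), nil
}

func main() {
	// A header row plus one data row with a quoted JSON field.
	sample := "id,data\n1,\"{\"\"ok\"\":true}\"\n"
	n, err := checkCsv(sample)
	fmt.Println(n, err)
}
```

If this standalone read fails, the e2e import helpers will fail on the same file for the same reason.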
 
@@ -60,11 +60,11 @@ After exporting, move the .csv file to `plugins/feishu/e2e/raw_tables/_raw_feish
 
 ### MySQL Select Into Outfile
 
-This is MySQL's solution for exporting query results to a file. The MySQL currently started in docker-compose.yml comes with the --security parameter, so it does not allow `select ... into outfile`, you first need to turn off the security parameter, which is done roughly as follows.
+This is MySQL's solution for exporting query results to a file. The MySQL instance started by docker-compose.yml runs with the --security parameter, which disallows `select ... into outfile`, so the first step is to turn that parameter off, roughly as follows.
 ![origin_img_v2_c809c901-01bc-4ec9-b52a-ab4df24c376g](https://user-images.githubusercontent.com/3294100/175070770-9b7d5b75-574b-49ed-9bca-e9f611f60795.jpg)
 After turning it off, use `select ... into outfile` to export the csv file. The export result looks roughly as follows.
 ![origin_img_v2_ccfdb260-668f-42b4-b249-6c2dd45816ag](https://user-images.githubusercontent.com/3294100/175070866-2204ae13-c058-4a16-bc20-93ab7c95f832.jpg)
-You can notice that the data field has extra hexsha fields, which need to be manually converted to literal quantities.
+Notice that the data field contains extra hexsha values, which need to be manually converted back to their literal text form.
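As a hypothetical illustration of that conversion (the helper name is made up, not part of DevLake), Go's `encoding/hex` can turn such a value back into literal text:

```go
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

// decodeHexField converts a MySQL-exported hex value such as
// "0x7B22..." back to its literal text form.
func decodeHexField(s string) (string, error) {
	s = strings.TrimPrefix(s, "0x")
	b, err := hex.DecodeString(s)
	if err != nil {
		return "", err
	}
	return string(b), nil
}

func main() {
	// Hex encoding of the JSON text {"ok":true}.
	decoded, err := decodeHexField("0x7B226F6B223A747275657D")
	if err != nil {
		panic(err)
	}
	fmt.Println(decoded)
}
```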
 
 ### Vscode Database
 
@@ -72,7 +72,7 @@ This is Vscode's solution for exporting query results to a file, but it is not e
 ![origin_img_v2_c9eaadaa-afbc-4c06-85bc-e78235f7eb3g](https://user-images.githubusercontent.com/3294100/175071987-760c2537-240c-4314-bbd6-1a0cd85ddc0f.jpg)
 However, it is obvious that the escape symbol does not conform to the csv specification, and the data is not successfully exported. After adjusting the configuration and manually replacing `\"` with `""`, we get the following result.
 ![image](https://user-images.githubusercontent.com/3294100/175072314-954c6794-3ebd-45bb-98e7-60ddbb5a7da9.png)
-The data field of this file is encoded in base64, so it needs to be decoded manually to a literal amount. After successful decode, you can use it.
+The data field of this file is base64-encoded, so it needs to be decoded manually to its literal form before use.
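If you prefer to script that decoding, a minimal sketch with Go's `encoding/base64` (the helper name is illustrative, not a DevLake API) looks like this:

```go
package main

import (
	"encoding/base64"
	"fmt"
)

// decodeDataField restores a base64-encoded csv `data` field
// to its literal text so the e2e test can read it.
func decodeDataField(s string) (string, error) {
	b, err := base64.StdEncoding.DecodeString(s)
	if err != nil {
		return "", err
	}
	return string(b), nil
}

func main() {
	// Base64 encoding of the JSON text {"ok":true}.
	decoded, err := decodeDataField("eyJvayI6dHJ1ZX0=")
	if err != nil {
		panic(err)
	}
	fmt.Println(decoded)
}
```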
 
 ### MySQL Workbench
 
@@ -91,13 +91,14 @@ COPY (
 SELECT id, params, convert_from(data, 'utf-8') as data, url, input,created_at FROM _raw_feishu_meeting_top_user_item
 ) to '/var/lib/postgresql/data/raw.csv' with csv header;
 ```
-Use the above statement to complete the export of the file. If your pg is running in docker, then you also need to use the `docker cp` command to export the file to the host.
+Use the above statement to complete the export. If Postgres is running in Docker, also use the `docker cp` command to copy the exported file to the host.
 
 ## Writing E2E tests
 
-First you need to create a test environment, for example here `meeting_test.go` is created
+First, create a test environment. For example, let's create `meeting_test.go`.
 ![image](https://user-images.githubusercontent.com/3294100/175091380-424974b9-15f3-457b-af5c-03d3b5d17e73.png)
-Then enter the test preparation code in it as follows. The code is to create an instance of the `feishu` plugin, and then call `ImportCsvIntoRawTable` to import the data from the csv file into the `_raw_feishu_meeting_top_user_item` table.
+Then enter the test preparation code as follows. It creates an instance of the `feishu` plugin and then calls `ImportCsvIntoRawTable` to import the data from the csv file into the `_raw_feishu_meeting_top_user_item` table.
+
 ```go
 func TestMeetingDataFlow(t *testing.T) {
 	var plugin impl.Feishu
@@ -112,8 +113,9 @@ The signature of the import function is as follows.
 It has a twin, with only slight differences in parameters.
 ```func (t *DataFlowTester) ImportCsvIntoTabler(csvRelPath string, dst schema.Tabler)```
 The former is used to import tables in the raw layer. The latter is used to import arbitrary tables.
-**Note:** Also these two functions will delete the db table and use `gorm.AutoMigrate` to re-create a new table to clear data in it.
-After importing the data is complete, you can try to run it. It must be PASS without any test logic at this moment. Then proceed to write the logic for calling the call to the extractor task in `TestMeetingDataFlow`.
+**Note:** Both functions drop the db table and use `gorm.AutoMigrate` to re-create it, clearing any existing data.
+After the data import is complete, run the tester; it must PASS even though it contains no test logic yet. Then write the logic that calls the extractor task in `TestMeetingDataFlow`.
+
 ```go
 func TestMeetingDataFlow(t *testing.T) {
 	var plugin impl.Feishu
@@ -136,9 +138,9 @@ func TestMeetingDataFlow(t *testing.T) {
 ```
 The added code includes a call to `dataflowTester.FlushTabler` to clear the table `_tool_feishu_meeting_top_user_items` and a call to `dataflowTester.Subtask` to simulate the running of the subtask `ExtractMeetingTopUserItemMeta`.
 
-Now run it and see if the subtask `ExtractMeetingTopUserItemMeta` completes without errors. The data results of the `extract` run generally come from the raw table, so the plugin subtask will run correctly if it is written without errors. You can observe if the data is successfully parsed in the db table in the tool layer. In this case the `_tool_feishu_meeting_top_user_items` table has the correct data.
+Now run it and see if the subtask `ExtractMeetingTopUserItemMeta` completes without errors. The results of the `extract` run generally come from the raw table, so the plugin subtask will run correctly if it is written without errors. We can then check whether the data was parsed successfully into the tool-layer db table; in this case, the `_tool_feishu_meeting_top_user_items` table should hold the correct data.
 
-If the run is incorrect, you need to troubleshoot the problem with the plugin itself before moving on to the next step.
+If the run fails, troubleshoot the plugin itself before moving on to the next step.
 
 ## Verify that the results of the task are correct
 
@@ -167,15 +169,16 @@ Its purpose is to call `dataflowTester.VerifyTable` to complete the validation o
 
 To facilitate generating the file mentioned above, DevLake has adopted a testing technique called `Snapshot`: when `VerifyTable` is called and the csv file does not yet exist, the file is generated automatically from the run results.
 
-But note! You need to do two things: 1. check if the file is generated correctly 2. re-run it to make sure there are no errors between the generated results and the re-run results.
-These two operations are very important and directly related to the quality of test writing. We should treat the snapshot file in `.csv' format like a code file.
+But note! Please do two things after the snapshot is created: 1. check that the file was generated correctly; 2. re-run the test to make sure the generated results and the re-run results match.
+These two operations are critical and directly related to the quality of the tests. We should treat a snapshot file in `.csv` format like a code file.
 
 If there is a problem with this step, there are usually two kinds of problems.
 1. The validated fields contain values such as `created_at` timestamps or auto-incrementing ids, which cannot be validated repeatedly and should be excluded.
-2. there is `\n` or `\r\n` or other escape mismatch fields in the run results. Generally, when parsing the `httpResponse` error, you can refer to the following program to solve it.
-  1. modify the field type of the content in the api model to `json.
-  2. convert it to string when parsing
-  3. so that the `\n` symbol can be kept intact, avoiding the parsing of line breaks by the database or the operating system
+2. there are fields with `\n` or `\r\n` or other mismatched escapes in the run results, generally caused by errors when parsing the `httpResponse`. You can follow these steps to solve it:
+    1. change the field type of the content in the api model to `json.RawMessage`
+    2. convert it to string when parsing
+    3. this keeps the `\n` symbol intact, avoiding interpretation of line breaks by the database or the operating system
+
 
 For example, in the `github` plugin, this is how it is handled.
 ![image](https://user-images.githubusercontent.com/3294100/175098219-c04b810a-deaf-4958-9295-d5ad4ec152e6.png)
diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/09-DeveloperDoc/e2e-test-writing-guide.md b/i18n/zh/docusaurus-plugin-content-docs/current/09-DeveloperDoc/e2e-test-writing-guide.md
index b664a6ca..4b2d124c 100644
--- a/i18n/zh/docusaurus-plugin-content-docs/current/09-DeveloperDoc/e2e-test-writing-guide.md
+++ b/i18n/zh/docusaurus-plugin-content-docs/current/09-DeveloperDoc/e2e-test-writing-guide.md
@@ -177,9 +177,10 @@ func TestMeetingDataFlow(t *testing.T) {
 如果这一步出现了问题,一般会是2种问题,
 1. 验证的字段中含有类似create_at运行时间或者自增id的字段,这些无法重复验证的字段应该排除。
 2. 运行的结果中存在`\n`或`\r\n`等转义不匹配的字段,一般是解析`httpResponse`时出现的错误,可以参考如下方案解决:
-  1. 修改api模型中,内容的字段类型为`json.RawMessage`
-  2. 在解析时再将其转化为string
-  3. 如此操作,即可原封不动的保存`\n`符号,避免数据库或操作系统对换行符的解析
+    1. 修改api模型中,内容的字段类型为`json.RawMessage`
+    2. 在解析时再将其转化为string
+    3. 如此操作,即可原封不动的保存`\n`符号,避免数据库或操作系统对换行符的解析
+
 
 比如在`github`插件中,是这么处理的:
 ![image](https://user-images.githubusercontent.com/3294100/175098219-c04b810a-deaf-4958-9295-d5ad4ec152e6.png)