Posted to dev@hugegraph.apache.org by GitBox <gi...@apache.org> on 2022/04/26 03:21:01 UTC

[GitHub] [incubator-hugegraph-doc] imbajin commented on a diff in pull request #113: Translate hugegraph-loader.md into english

imbajin commented on code in PR #113:
URL: https://github.com/apache/incubator-hugegraph-doc/pull/113#discussion_r858207518


##########
content/en/docs/quickstart/hugegraph-loader.md:
##########
@@ -4,252 +4,251 @@ linkTitle: "Load data with HugeGraph-Loader"
 weight: 2
 ---
 
-### 1 HugeGraph-Loader概述
+### 1 HugeGraph-Loader overview

Review Comment:
   `Overview` maybe better?



##########
content/en/docs/quickstart/hugegraph-loader.md:
##########
@@ -4,252 +4,251 @@ linkTitle: "Load data with HugeGraph-Loader"
 weight: 2
 ---
 
-### 1 HugeGraph-Loader概述
+### 1 HugeGraph-Loader overview
 
-HugeGraph-Loader 是 HugeGragh 的数据导入组件,能够将多种数据源的数据转化为图的顶点和边并批量导入到图数据库中。
+HugeGraph-Loader is the data import component of HugeGraph, which can convert data from various data sources into graph vertices and edges and import them into the graph database in batches.
 
-目前支持的数据源包括:
+Currently supported data sources include:
+- Local disk file or directory; supports files in TEXT, CSV, and JSON formats, as well as compressed files
+- HDFS file or directory; supports compressed files
+- Mainstream relational databases, such as MySQL, PostgreSQL, Oracle, and SQL Server
 
-- 本地磁盘文件或目录,支持 TEXT、CSV 和 JSON 格式的文件,支持压缩文件
-- HDFS 文件或目录,支持压缩文件
-- 主流关系型数据库,如 MySQL、PostgreSQL、Oracle、SQL Server
+Local disk files and HDFS files support resumable loading.
 
-本地磁盘文件和 HDFS 文件支持断点续传。
+These will be explained in detail later.
 
-后面会具体说明。
+> Note: HugeGraph-Loader depends on the HugeGraph Server service; please refer to [HugeGraph-Server Quick Start](/docs/quickstart/hugegraph-server) to download and start the Server
 
-> 注意:使用 HugeGraph-Loader 需要依赖 HugeGraph Server 服务,下载和启动 Server 请参考 [HugeGraph-Server Quick Start](/docs/quickstart/hugegraph-server)
+### 2 Get HugeGraph-Loader
 
-### 2 获取 HugeGraph-Loader
+There are two ways to get HugeGraph-Loader:
 
-有两种方式可以获取 HugeGraph-Loader:
+- Download the compiled tarball
+- Clone source code to compile and install
 
-- 下载已编译的压缩包
-- 克隆源码编译安装
+#### 2.1 Download the compiled archive
 
-#### 2.1 下载已编译的压缩包
-
-下载最新版本的 HugeGraph-Loader release 包:
+Download the latest version of the HugeGraph-Loader release package:
 
 ```bash
 wget https://github.com/hugegraph/hugegraph-loader/releases/download/v${version}/hugegraph-loader-${version}.tar.gz
 tar zxvf hugegraph-loader-${version}.tar.gz
 ```
 
-#### 2.2 克隆源码编译安装
+#### 2.2 Clone source code to compile and install
 
-克隆最新版本的 HugeGraph-Loader 源码包:
+Clone the latest version of the HugeGraph-Loader source package:
 
 ```bash
 $ git clone https://github.com/hugegraph/hugegraph-loader.git
 ```
 
-由于Oracle ojdbc license的限制,需要手动安装ojdbc到本地maven仓库。
-访问[Oracle jdbc 下载](https://www.oracle.com/database/technologies/appdev/jdbc-downloads.html) 页面。选择Oracle Database 12c Release 2 (12.2.0.1) drivers,如下图所示。
+Due to the limitation of the Oracle ojdbc license, you need to manually install ojdbc to the local maven repository.
+Visit the [Oracle jdbc downloads](https://www.oracle.com/database/technologies/appdev/jdbc-downloads.html) page. Select Oracle Database 12c Release 2 (12.2.0.1) drivers, as shown in the following figure.
 
 <center>
   <img src="/docs/images/oracle-download.png" alt="image">
 </center>
 
 
-打开链接后,选择“ojdbc8.jar”, 如下图所示。
+After opening the link, select "ojdbc8.jar" as shown below.
 
 <center>
   <img src="/docs/images/ojdbc8.png" alt="image">
 </center>
 
 
- 把ojdbc8安装到本地maven仓库,进入``ojdbc8.jar``所在目录,执行以下命令。
+ Install ojdbc8 into the local Maven repository: enter the directory where ``ojdbc8.jar`` is located and execute the following command.
 ```
 mvn install:install-file -Dfile=./ojdbc8.jar -DgroupId=com.oracle -DartifactId=ojdbc8 -Dversion=12.2.0.1 -Dpackaging=jar
 ```
 
-编译生成 tar 包:
+Compile and generate the tar package:
 
 ```bash
 cd hugegraph-loader
 mvn clean package -DskipTests
 ```
 
-### 3 使用流程
+### 3 Use the process
 
-使用 HugeGraph-Loader 的基本流程分为以下几步:
+The basic process of using HugeGraph-Loader is divided into the following steps:
 
-- 编写图模型
-- 准备数据文件
-- 编写输入源映射文件
-- 执行命令导入
+- Write graph models
+- Prepare data files
+- Write input source map files
+- Execute command import

Review Comment:
   we should align the `-` to keep original tree order



##########
content/en/docs/quickstart/hugegraph-loader.md:
##########
@@ -4,252 +4,251 @@ linkTitle: "Load data with HugeGraph-Loader"
 
-#### 3.1 编写图模型
+#### 3.1 Writing a graph model
 
-这一步是建模的过程,用户需要对自己已有的数据和想要创建的图模型有一个清晰的构想,然后编写 schema 建立图模型。
+This step is the modeling process. Users need to have a clear idea of their existing data and the graph model they want to create, and then write the schema to build the graph model.
 
-比如想创建一个拥有两类顶点及两类边的图,顶点是"人"和"软件",边是"人认识人"和"人创造软件",并且这些顶点和边都带有一些属性,比如顶点"人"有:"姓名"、"年龄"等属性,
-"软件"有:"名字"、"售卖价格"等属性;边"认识"有: "日期"属性等。
+For example, suppose you want to create a graph with two types of vertices and two types of edges: the vertices are "person" and "software", the edges are "person knows person" and "person created software", and these vertices and edges carry some properties. For example, the vertex "person" has properties such as "name" and "age",
+"software" has properties such as "name" and "sale price", and the edge "knows" has a "date" property, and so on.
 
 <center>
   <img src="/docs/images/demo-graph-model.png" alt="image">
-  <p>示例图模型</p>
+  <p>Example graph model</p>
 </center>
 
 
-在设计好了图模型之后,我们可以用`groovy`编写出`schema`的定义,并保存至文件中,这里命名为`schema.groovy`。
+After designing the graph model, we can use `groovy` to write the definition of `schema` and save it to a file, here named `schema.groovy`.
 
 ```groovy
-// 创建一些属性
+// create some properties
 schema.propertyKey("name").asText().ifNotExist().create();
 schema.propertyKey("age").asInt().ifNotExist().create();
 schema.propertyKey("city").asText().ifNotExist().create();
 schema.propertyKey("date").asText().ifNotExist().create();
 schema.propertyKey("price").asDouble().ifNotExist().create();
 
-// 创建 person 顶点类型,其拥有三个属性:name, age, city,主键是 name
+// Create the person vertex type, which has three attributes: name, age, city, and the primary key is name
 schema.vertexLabel("person").properties("name", "age", "city").primaryKeys("name").ifNotExist().create();
-// 创建 software 顶点类型,其拥有两个属性:name, price,主键是 name
+// Create a software vertex type, which has two properties: name, price, the primary key is name
 schema.vertexLabel("software").properties("name", "price").primaryKeys("name").ifNotExist().create();
 
-// 创建 knows 边类型,这类边是从 person 指向 person 的
+// Create the knows edge type, which goes from person to person
 schema.edgeLabel("knows").sourceLabel("person").targetLabel("person").ifNotExist().create();
-// 创建 created 边类型,这类边是从 person 指向 software 的
+// Create the created edge type, which points from person to software
 schema.edgeLabel("created").sourceLabel("person").targetLabel("software").ifNotExist().create();
 ```
 
-> 关于 schema 的详细说明请参考 [hugegraph-client](/docs/clients/hugegraph-client) 中对应部分。
+> Please refer to the corresponding section in [hugegraph-client](/docs/clients/hugegraph-client) for the detailed description of the schema.
 
-#### 3.2 准备数据
+#### 3.2 Prepare data
 
-目前 HugeGraph-Loader 支持的数据源包括:
+The data sources currently supported by HugeGraph-Loader include:
 
-- 本地磁盘文件或目录
-- HDFS 文件或目录
-- 部分关系型数据库
+- local disk file or directory
+- HDFS file or directory
+- Partial relational database

Review Comment:
   the `-` order



##########
content/en/docs/quickstart/hugegraph-loader.md:
##########
@@ -4,252 +4,251 @@ linkTitle: "Load data with HugeGraph-Loader"
 
-在设计好了图模型之后,我们可以用`groovy`编写出`schema`的定义,并保存至文件中,这里命名为`schema.groovy`。
+After designing the graph model, we can use `groovy` to write the definition of `schema` and save it to a file, here named `schema.groovy`.
 
 ```groovy
-// 创建一些属性
+// create some properties

Review Comment:
   keep `Create` with the below comment or keep `create` together~



##########
content/en/docs/quickstart/hugegraph-loader.md:
##########
@@ -4,252 +4,251 @@ linkTitle: "Load data with HugeGraph-Loader"
 
-### 3 使用流程
+### 3 Use the process

Review Comment:
   How to use



##########
content/en/docs/quickstart/hugegraph-loader.md:
##########
@@ -4,252 +4,251 @@ linkTitle: "Load data with HugeGraph-Loader"
 
-#### 3.1 编写图模型
+#### 3.1 Writing a graph model

Review Comment:
   Construct graph schema



##########
content/en/docs/quickstart/hugegraph-loader.md:
##########
@@ -4,252 +4,251 @@ linkTitle: "Load data with HugeGraph-Loader"
 
-由于Oracle ojdbc license的限制,需要手动安装ojdbc到本地maven仓库。
-访问[Oracle jdbc 下载](https://www.oracle.com/database/technologies/appdev/jdbc-downloads.html) 页面。选择Oracle Database 12c Release 2 (12.2.0.1) drivers,如下图所示。
+Due to the limitation of the Oracle ojdbc license, you need to manually install ojdbc to the local maven repository.

Review Comment:
   Due to the license limitation of the `Oracle OJDBC`



##########
content/en/docs/quickstart/hugegraph-loader.md:
##########
@@ -922,13 +921,13 @@ schema.indexLabel("knowsByWeight").onE("knows").by("weight").range().ifNotExist(
 }
 ```
 
-#### 4.4 执行命令导入
+#### 4.4 Execute command import

Review Comment:
   Command to import (or other better translation)



##########
content/en/docs/quickstart/hugegraph-loader.md:
##########
@@ -535,134 +534,134 @@ Office,388
 }
 ```
 
-映射文件 1.0 版本是以顶点和边为中心,设置输入源;而 2.0 版本是以输入源为中心,设置顶点和边映射。有些输入源(比如一个文件)既能生成顶点,也能生成边,如果用 1.0 版的格式写,就需要在 vertex 和 egde 映射块中各写一次 input 块,这两次的 input 块是完全一样的;而 2.0 版本只需要写一次 input。所以 2.0 版相比于 1.0 版,能省掉一些 input 的重复书写。
+The 1.0 version of the mapping file is centered on vertices and edges and sets the input sources, while the 2.0 version is centered on the input sources and sets the vertex and edge mappings. Some input sources (such as a file) can generate both vertices and edges; if written in the 1.0 format, an input block has to be written in both the vertex and the edge mapping blocks, and the two input blocks are exactly the same, whereas the 2.0 version only needs the input to be written once. Therefore, compared with version 1.0, version 2.0 saves some repetitive writing of input.
 
-在 hugegraph-loader-{version} 的 bin 目录下,有一个脚本工具 `mapping-convert.sh` 能直接将 1.0 版本的映射文件转换为 2.0 版本的,使用方式如下:
+In the bin directory of hugegraph-loader-{version}, there is a script tool `mapping-convert.sh` that can directly convert the mapping file of version 1.0 to version 2.0. The usage is as follows:
 
 ```bash
 bin/mapping-convert.sh struct.json
 ```
 
-会在 struct.json 的同级目录下生成一个 struct-v2.json。
+A struct-v2.json will be generated in the same directory as struct.json.
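
To make the difference concrete, here is a minimal, hypothetical sketch of the two layouts (a `javascript` fence is used so the snippets can carry comments; the key names and paths are illustrative, not copied from a real mapping file):

```javascript
// Hypothetical sketch only -- key names and paths are illustrative.
// 1.0 style: each vertex/edge mapping carries its own (duplicated) input block.
{
  "vertices": [
    {"label": "person", "input": {"type": "file", "path": "vertex_person.csv"}}
  ],
  "edges": [
    {"label": "knows", "input": {"type": "file", "path": "vertex_person.csv"}}
  ]
}
// 2.0 style: the input block is written once, with the vertex/edge mappings under it.
{
  "structs": [
    {
      "input": {"type": "file", "path": "vertex_person.csv"},
      "vertices": [{"label": "person"}],
      "edges": [{"label": "knows"}]
    }
  ]
}
```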
 
-##### 3.3.2 输入源
+##### 3.3.2 Input Source
 
-输入源目前分为三类:FILE、HDFS、JDBC,由`type`节点区分,我们称为本地文件输入源、HDFS 输入源和 JDBC 输入源,下面分别介绍。
+Input sources are currently divided into three categories: FILE, HDFS, and JDBC, which are distinguished by the `type` node. We call them local file input sources, HDFS input sources, and JDBC input sources, which are described below.
 
-###### 3.3.2.1 本地文件输入源
+###### 3.3.2.1 Local file input source
 
-- id: 输入源的 id,该字段用于支持一些内部功能,非必填(未填时会自动生成),强烈建议写上,对于调试大有裨益;
-- skip: 是否跳过该输入源,由于 JSON 文件无法添加注释,如果某次导入时不想导入某个输入源,但又不想删除该输入源的配置,则可以设置为 true 将其跳过,默认为 false,非必填;
-- input: 输入源映射块,复合结构
-    - type: 输入源类型,必须填 file 或 FILE; 
-    - path: 本地文件或目录的路径,绝对路径或相对于映射文件的相对路径,建议使用绝对路径,必填;
-    - file_filter: 从`path`中筛选复合条件的文件,复合结构,目前只支持配置扩展名,用子节点`extensions`表示,默认为"*",表示保留所有文件;
-    - format: 本地文件的格式,可选值为 CSV、TEXT 及 JSON,必须大写,必填;               
-    - header: 文件各列的列名,如不指定则会以数据文件第一行作为 header;当文件本身有标题且又指定了 header,文件的第一行会被当作普通的数据行;JSON 文件不需要指定 header,选填;    
-    - delimiter: 文件行的列分隔符,默认以逗号`","`作为分隔符,`JSON`文件不需要指定,选填;     
-    - charset: 文件的编码字符集,默认`UTF-8`,选填;    
-    - date_format: 自定义的日期格式,默认值为 yyyy-MM-dd HH:mm:ss,选填;如果日期是以时间戳的形式呈现的,此项须写为`timestamp`(固定写法); 
-    - time_zone: 设置日期数据是处于哪个时区的,默认值为`GMT+8`,选填;
-    - skipped_line: 想跳过的行,复合结构,目前只能配置要跳过的行的正则表达式,用子节点`regex`描述,默认不跳过任何行,选填;
-    - compression: 文件的压缩格式,可选值为 NONE、GZIP、BZ2、XZ、LZMA、SNAPPY_RAW、SNAPPY_FRAMED、Z、DEFLATE、LZ4_BLOCK、LZ4_FRAMED、ORC 和 PARQUET,默认为 NONE,表示非压缩文件,选填;
-    - list_format: 当文件(非 JSON )的某列是集合结构时(对应图中的 PropertyKey 的 Cardinality 为 Set 或 List),可以用此项设置该列的起始符、分隔符、结束符,复合结构: 
-        - start_symbol: 集合结构列的起始符 (默认值是 `[`, JSON 格式目前不支持指定)
-        - elem_delimiter: 集合结构列的分隔符 (默认值是 `|`, JSON 格式目前只支持原生`,`分隔)
-        - end_symbol: 集合结构列的结束符 (默认值是 `]`, JSON 格式目前不支持指定)
+- id: the id of the input source. This field is used to support some internal functions; it is not required (it will be generated automatically if not filled in), but it is strongly recommended to set it, as it is very helpful for debugging;
+- skip: whether to skip the input source. Since JSON files cannot contain comments, if you do not want to import an input source during a certain import but do not want to delete its configuration either, you can set this to true to skip it; the default is false, not required;
+- input: input source mapping block, composite structure
+    - type: input source type, must be file or FILE, required;
+    - path: the path of the local file or directory, either an absolute path or a path relative to the mapping file; an absolute path is recommended, required;
+    - file_filter: filters the files in `path` that meet the conditions, composite structure; currently only extensions can be configured, represented by the child node `extensions`; the default is "*", which means keeping all files;
+    - format: the format of the local file, the optional values are CSV, TEXT, and JSON, which must be uppercase, required;
+    - header: the column names of the columns of the file; if not specified, the first line of the data file is used as the header. When the file itself has a header line and a header is also specified, the first line of the file is treated as an ordinary data line. JSON files do not need to specify a header, optional;
+    - delimiter: the column delimiter of the file lines; the default is the comma `","`, and `JSON` files do not need to specify it, optional;
+    - charset: the character set encoding of the file, the default is `UTF-8`, optional;
+    - date_format: custom date format, the default value is yyyy-MM-dd HH:mm:ss, optional; if the date is presented as a timestamp, this item must be written as `timestamp` (fixed value);
+    - time_zone: sets which time zone the date data is in, the default value is `GMT+8`, optional;
+    - skipped_line: the lines to skip, composite structure; currently only a regular expression for the lines to skip can be configured, described by the child node `regex`; by default no line is skipped, optional;
+    - compression: the compression format of the file, the optional values are NONE, GZIP, BZ2, XZ, LZMA, SNAPPY_RAW, SNAPPY_FRAMED, Z, DEFLATE, LZ4_BLOCK, LZ4_FRAMED, ORC, and PARQUET; the default is NONE, which means a non-compressed file, optional;
+    - list_format: when a column of a (non-JSON) file is a collection structure (the Cardinality of the corresponding PropertyKey in the graph is Set or List), this item can be used to set the start symbol, element delimiter, and end symbol of the column, composite structure:
+        - start_symbol: the start symbol of the collection-structure column (the default value is `[`; the JSON format currently does not support specifying it)
+        - elem_delimiter: the element delimiter of the collection-structure column (the default value is `|`; the JSON format currently only supports the native `,` delimiter)
+        - end_symbol: the end symbol of the collection-structure column (the default value is `]`; the JSON format currently does not support specifying it)
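
A minimal sketch combining several of the nodes above (the paths and values are illustrative only; a `javascript` fence is used so the snippet can carry comments):

```javascript
// Illustrative values only, following the node list above
{
  "id": "input-file-1",
  "skip": false,
  "input": {
    "type": "FILE",
    "path": "/home/user/data/vertex_person.csv",
    "format": "CSV",
    "header": ["name", "age", "city"],
    "delimiter": ",",
    "charset": "UTF-8",
    "skipped_line": {"regex": "^#.*"},
    "compression": "NONE"
  }
}
```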
 
-###### 3.3.2.2 HDFS 输入源
+###### 3.3.2.2 HDFS input source
 
-上述`本地文件输入源`的节点及含义这里基本都适用,下面仅列出 HDFS 输入源不一样的和特有的节点。
+The nodes and their meanings described for the `local file input source` above are basically all applicable here; only the nodes that are different or unique for the HDFS input source are listed below.
 
-- type: 输入源类型,必须填 hdfs 或 HDFS,必填; 
-- path: HDFS 文件或目录的路径,必须是 HDFS 的绝对路径,必填; 
-- core_site_path: HDFS 集群的 core-site.xml 文件路径,重点要指明 namenode 的地址(fs.default.name),以及文件系统的实现(fs.hdfs.impl);
+- type: input source type, must be hdfs or HDFS, required;
+- path: the path of the HDFS file or directory, must be an absolute HDFS path, required;
+- core_site_path: the path of the core-site.xml file of the HDFS cluster; the key point is to specify the address of the namenode (fs.default.name) and the implementation of the file system (fs.hdfs.impl);
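
For example (a hypothetical sketch; the paths are illustrative):

```javascript
// Illustrative values only; the shared file options such as format also apply here
{
  "id": "input-hdfs-1",
  "input": {
    "type": "HDFS",
    "path": "/data/vertex_person.csv",
    "format": "CSV",
    "core_site_path": "/etc/hadoop/conf/core-site.xml"
  }
}
```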
 
-###### 3.3.2.3 JDBC 输入源
+###### 3.3.2.3 JDBC input source
 
-前面说到过支持多种关系型数据库,但由于它们的映射结构非常相似,故统称为 JDBC 输入源,然后用`vendor`节点区分不同的数据库。
+As mentioned earlier, multiple relational databases are supported, but since their mapping structures are very similar, they are collectively referred to as JDBC input sources, and the `vendor` node is used to distinguish the different databases.
 
-- type: 输入源类型,必须填 jdbc 或 JDBC,必填;
-- vendor: 数据库类型,可选项为 [MySQL、PostgreSQL、Oracle、SQLServer],不区分大小写,必填;
-- driver: jdbc 使用的 driver 类型,必填;
-- url: jdbc 要连接的数据库的 url,必填;
-- database: 要连接的数据库名,必填;
-- schema: 要连接的 schema 名,不同的数据库要求不一样,下面详细说明;
-- table: 要连接的表名,必填;
-- username: 连接数据库的用户名,必填;
-- password: 连接数据库的密码,必填;
-- batch_size: 按页获取表数据时的一页的大小,默认为 500,选填;
+- type: input source type, must be jdbc or JDBC, required;
+- vendor: database type, the options are [MySQL, PostgreSQL, Oracle, SQLServer], case-insensitive, required;
+- driver: the driver class used by jdbc, required;
+- url: the url of the database to connect to via jdbc, required;
+- database: the name of the database to connect to, required;
+- schema: the name of the schema to connect to; different databases have different requirements, explained in detail below;
+- table: the name of the table to connect to, required;
+- username: the username for connecting to the database, required;
+- password: the password for connecting to the database, required;
+- batch_size: the page size when fetching table data page by page, the default is 500, optional;
 
 **MYSQL**
 
-| 节点 | 固定值或常见值 |
-| --- | --- | 
+| Node | Fixed value or common value |
+| --- | --- |
 | vendor | MYSQL |
 | driver | com.mysql.cj.jdbc.Driver |
 | url | jdbc:mysql://127.0.0.1:3306 |
 
-schema: 可空,若填写必须与database的值一样
+schema: nullable, if filled in, it must be the same as the value of database
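
Putting the JDBC nodes together with the MySQL values from the table above, a hypothetical input block might look like this (the database, table, and credentials are placeholders):

```javascript
// Placeholder database/table/credentials; vendor, driver, and url follow the table above
{
  "id": "input-jdbc-1",
  "input": {
    "type": "JDBC",
    "vendor": "MYSQL",
    "driver": "com.mysql.cj.jdbc.Driver",
    "url": "jdbc:mysql://127.0.0.1:3306",
    "database": "testdb",
    "table": "person",
    "username": "root",
    "password": "******",
    "batch_size": 500
  }
}
```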
 
 **POSTGRESQL**
 
-| 节点 | 固定值或常见值 |
-| --- | --- | 
+| Node | Fixed value or common value |
+| --- | --- |
 | vendor | POSTGRESQL |
 | driver | org.postgresql.Driver |
 | url | jdbc:postgresql://127.0.0.1:5432 |
 
-schema: 可空,默认值为“public”
+schema: nullable, default is "public"
 
 **ORACLE**
 
-| 节点 | 固定值或常见值 |
-| --- | --- | 
+| Node | Fixed value or common value |
+| --- | --- |
 | vendor | ORACLE |
 | driver | oracle.jdbc.driver.OracleDriver |
 | url | jdbc:oracle:thin:@127.0.0.1:1521 |
 
-schema: 可空,默认值与用户名相同
+schema: nullable, the default value is the same as the username
 
 **SQLSERVER**
 
-| 节点 | 固定值或常见值 |
-| --- | --- | 
+| Node | Fixed value or common value |
+| --- | --- |
 | vendor | SQLSERVER |
 | driver | com.microsoft.sqlserver.jdbc.SQLServerDriver |
 | url | jdbc:sqlserver://127.0.0.1:1433 |
 
-schema: 必填
+schema: required
 
-##### 3.3.1 顶点和边映射
+##### 3.3.1 Vertex and Edge Mapping
 
-顶点和边映射的节点(JSON 文件中的一个 key)有很多相同的部分,下面先介绍相同部分,再分别介绍`顶点映射`和`边映射`的特有节点。
+The vertex and edge mapping nodes (a key in the JSON file) share many common parts. The common parts are introduced first, and then the nodes unique to the `vertex mapping` and the `edge mapping` are introduced respectively.
 
-**相同部分的节点**
+**Common nodes**
 
-- label: 待导入的顶点/边数据所属的`label`,必填;                                                                                   
-- field_mapping: 将输入源列的列名映射为顶点/边的属性名,选填;
-- value_mapping: 将输入源的数据值映射为顶点/边的属性值,选填;
-- selected: 选择某些列插入,其他未选中的不插入,不能与`ignored`同时存在,选填;                                                                           
-- ignored: 忽略某些列,使其不参与插入,不能与`selected`同时存在,选填;
-- null_values: 可以指定一些字符串代表空值,比如"NULL",如果该列对应的顶点/边属性又是一个可空属性,那在构造顶点/边时不会设置该属性的值,选填;                                                                                
-- update_strategies: 如果数据需要按特定方式批量**更新**时可以对每个属性指定具体的更新策略 (具体见下),选填;
-- unfold: 是否将列展开,展开的每一列都会与其他列一起组成一行,相当于是展开成了多行;比如文件的某一列(id 列)的值是`[1,2,3]`,其他列的值是`18,Beijing`,当设置了 unfold 之后,这一行就会变成 3 行,分别是:`1,18,Beijing`,`2,18,Beijing`和`3,18,Beijing`。需要注意的是此项只会展开被选作为 id 的列。默认 false,选填;
+- label: the `label` to which the vertex/edge data to be imported belongs, required;
+- field_mapping: maps the column names of the input source columns to the property names of the vertex/edge, optional;
+- value_mapping: maps the data values of the input source to the property values of the vertex/edge, optional;
+- selected: selects certain columns to insert; the unselected ones are not inserted; cannot exist together with `ignored`, optional;
+- ignored: ignores certain columns so that they do not participate in insertion; cannot exist together with `selected`, optional;
+- null_values: some strings can be specified to represent null values, such as "NULL"; if the vertex/edge property corresponding to such a column is a nullable property, the value of that property will not be set when constructing the vertex/edge, optional;
+- update_strategies: if the data needs to be **updated** in batches in a specific way, a concrete update strategy can be specified for each property (see below for details), optional;
+- unfold: whether to unfold the column; each unfolded value forms a new row together with the other columns, which is equivalent to unfolding into multiple rows. For example, if the value of a certain column (the id column) of the file is `[1,2,3]` and the values of the other columns are `18,Beijing`, then when unfold is set this row becomes 3 rows, namely `1,18,Beijing`, `2,18,Beijing`, and `3,18,Beijing`. Note that this only unfolds the column selected as the id. Default false, optional;
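
As a hypothetical illustration of several of these nodes on a vertex mapping (the column and property names are made up):

```javascript
// Made-up column/property names, illustrating the nodes described above
{
  "label": "person",
  "field_mapping": {"person_name": "name", "person_age": "age"},
  "value_mapping": {"person_city": {"bj": "Beijing"}},
  "ignored": ["internal_id"],
  "null_values": ["NULL", ""],
  "unfold": false
}
```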
 
-**更新策略**支持8种 :  (需要全大写)
+**Update strategies** support 8 types (must be written in all uppercase):
 
-1. 数值累加 : `SUM`
-2. 两个数字/日期取更大的: `BIGGER`
-3. 两个数字/日期取更小: `SMALLER`
-4. **Set**属性取并集: `UNION`
-5. **Set**属性取交集: `INTERSECTION`
-6. **List**属性追加元素: `APPEND`
-7. **List/Set**属性删除元素: `ELIMINATE`
-8. 覆盖已有属性: `OVERRIDE`
+1. Accumulate the values: `SUM`
+2. Take the greater of two numbers/dates: `BIGGER`
+3. Take the smaller of two numbers/dates: `SMALLER`
+4. Take the union of **Set** properties: `UNION`
+5. Take the intersection of **Set** properties: `INTERSECTION`
+6. Append elements to a **List** property: `APPEND`
+7. Remove elements from a **List/Set** property: `ELIMINATE`
+8. Override an existing property: `OVERRIDE`
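
For instance, a hypothetical `update_strategies` node applying two of the strategies above could be written as follows (the property names are made up; the strategy values must be all uppercase):

```javascript
// Made-up property names; SUM accumulates a number, UNION merges a Set property
{
  "label": "person",
  "update_strategies": {
    "age": "SUM",
    "hobbies": "UNION"
  }
}
```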
 
-**注意:** 如果新导入的属性值为空, 会采用已有的旧数据而不会采用空值, 效果可以参考如下示例
+**Note:** If the newly imported property value is empty, the existing old data will be used rather than the empty value. See the following example for the effect
 
 ```json

Review Comment:
   use ```javascript  to support comment
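
   For example, a `javascript` fence keeps the JSON readable while still allowing `//` comments (the values below are illustrative):

   ```javascript
   {
     "name": "Tom",
     "age": null  // empty value: the existing old data will be kept
   }
   ```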



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
