Posted to commits@linkis.apache.org by ca...@apache.org on 2023/03/02 07:20:25 UTC

[linkis] branch dev-1.3.2 updated: update README.md (#4259)

This is an automated email from the ASF dual-hosted git repository.

casion pushed a commit to branch dev-1.3.2
in repository https://gitbox.apache.org/repos/asf/linkis.git


The following commit(s) were added to refs/heads/dev-1.3.2 by this push:
     new bfdd9defa update  README.md (#4259)
bfdd9defa is described below

commit bfdd9defa46cb1c351de8f1d09926c37ef48366e
Author: binbincheng <10...@users.noreply.github.com>
AuthorDate: Thu Mar 2 15:20:16 2023 +0800

    update  README.md (#4259)
    
    * update  README.md
    
    * update  README.md
    
    * Update the content in the README document
---
 README.md    | 47 ++++++++++++++++++++++------------------------
 README_CN.md | 61 ++++++++++++++++++++++++++++++------------------------------
 2 files changed, 52 insertions(+), 56 deletions(-)

diff --git a/README.md b/README.md
index 9c864cca5..549270e51 100644
--- a/README.md
+++ b/README.md
@@ -66,41 +66,39 @@ Since the first release of Linkis in 2019, it has accumulated more than **700**
 
 # Features
 
-- **Support for diverse underlying computation storage engines**  
-  - Currently supported computation/storage engines: Spark、Hive、Flink、Python、Pipeline、Sqoop、openLooKeng、Presto、ElasticSearch、JDBC, Shell, etc
-  - Computation/storage engines to be supported: Trino (planned 1.3.1), SeaTunnel (planned 1.3.1), etc
-  - Supported scripting languages: SparkSQL、HiveQL、Python、Shell、Pyspark、R、Scala and JDBC, etc
+- **Support for diverse underlying computation storage engines**: Spark, Hive, Python, Shell, Flink, JDBC, Pipeline, Sqoop, openLooKeng, Presto, ElasticSearch, Trino, SeaTunnel, etc.;
 
-- **Powerful task/request governance capabilities** With services such as Orchestrator, Label Manager and customized Spring Cloud Gateway, Linkis is able to provide multi-level labels based, cross-cluster/cross-IDC fine-grained routing, load balance, multi-tenancy, traffic control, resource control, and orchestration strategies like dual-active, active-standby, etc
+- **Support for diverse languages**: SparkSQL, HiveQL, Python, Shell, Pyspark, Scala, JSON and Java;
 
-- **Support full stack computation/storage engine** As a computation middleware, it will receive, execute and manage tasks and requests for various computation storage engines, including batch tasks, interactive query tasks, real-time streaming tasks and storage tasks
+- **Powerful computing governance capability**: provides task routing, load balancing, multi-tenancy, traffic control, resource control and other capabilities based on multi-level labels;
 
-- **Resource management capabilities**  ResourceManager is not only capable of managing resources for Yarn and Linkis EngineManger, but also able to provide label-based multi-level resource allocation and recycling, allowing itself to have powerful resource management capabilities across multiple Yarn clusters and multiple computation resource types
+- **Support for full-stack computation/storage engines**: receives, executes and manages tasks and requests for various compute and storage engines, including offline batch tasks, interactive query tasks, real-time streaming tasks and data lake tasks;
 
-- **Unified Context Service** Generate Context ID for each task/request,  associate and manage user and system resource files (JAR, ZIP, Properties, etc.), result set, parameter variable, function, etc., across user, system, and computing engine. Set in one place, automatic reference everywhere
+- **Unified context service**: associates and manages user and system resource files (JAR, ZIP, Properties, etc.), result sets, parameter variables, functions, UDFs, etc. across users, systems and computing engines; set once, automatically referenced everywhere;
 
-- **Unified materials** System and user-level unified material management, which can be shared and transferred across users and systems
+- **Unified materials**: provides system- and user-level material management; materials can be shared and transferred across users and systems;
 
-# Supported Engine Types
+- **Unified data source management**: supports creating, deleting, querying and updating data source information for Hive, ElasticSearch, MySQL, Kafka, MongoDB and other data sources, with version control, connection testing, and metadata queries against the corresponding data sources;
 
-| **Engine Name** | **Suppor Component Version<br/>(Default Dependent Version)** | **Linkis Version Requirements** | **Included in Release Package<br/> By Default** | **Description** |
+- **Error code capability**: provides error codes and solutions for common task errors, making it easy for users to locate problems on their own;
+
+# Engine Type
+
+| **Engine name** | **Supported component version<br/>(default dependency version)** | **Linkis Version Requirements** | **Included in Release Package By Default** | **Description** |
 |:---- |:---- |:---- |:---- |:---- |
 |Spark|Apache 2.0.0~2.4.7, <br/>CDH >= 5.4.0, <br/>(default Apache Spark 2.4.3)|\>=1.0.3|Yes|Spark EngineConn, supports SQL , Scala, Pyspark and R code|
-|Hive|Apache >= 1.0.0, <br/>CDH >= 5.4.0, <br/>(default Apache Hive 2.3.3)|\>=1.0.3|Yes |Hive EngineConn, supports HiveQL code|
-|Python|Python >= 2.6, <br/>(default Python2*)|\>=1.0.3|Yes |Python EngineConn, supports python code|
+|Hive|Apache >= 1.0.0, <br/>CDH >= 5.4.0, <br/>(default Apache Hive 2.3.3)|\>=1.0.3|Yes|Hive EngineConn, supports HiveQL code|
+|Python|Python >= 2.6, <br/>(default Python2*)|\>=1.0.3|Yes|Python EngineConn, supports python code|
 |Shell|Bash >= 2.0|\>=1.0.3|Yes|Shell EngineConn, supports Bash shell code|
-|JDBC|MySQL >= 5.0, Hive >=1.2.1, <br/>(default Hive-jdbc 2.3.4)|\>=1.0.3|No|JDBC EngineConn, already supports MySQL and HiveQL, can be extended quickly Support other engines with JDBC Driver package, such as Oracle|
-|Flink |Flink >= 1.12.2, <br/>(default Apache Flink 1.12.2)|\>=1.0.3|No |Flink EngineConn, supports FlinkSQL code, also supports starting a new Yarn in the form of Flink Jar Application |
-|Pipeline|-|\>=1.0.3|No|Pipeline EngineConn, supports file import and export|
+|JDBC|MySQL >= 5.0, Hive >=1.2.1, <br/>(default Hive-jdbc 2.3.4)|\>=1.0.3|No|JDBC EngineConn, already supports MySQL and HiveQL, can be quickly extended to support other engines with a JDBC Driver package, such as Oracle|
+|Flink |Flink >= 1.12.2, <br/>(default Apache Flink 1.12.2)|\>=1.0.2|No |Flink EngineConn, supports FlinkSQL code, also supports starting a new Yarn in the form of Flink Jar Application|
+|Pipeline|-|\>=1.0.2|No|Pipeline EngineConn, supports file import and export|
 |openLooKeng|openLooKeng >= 1.5.0, <br/>(default openLookEng 1.5.0)|\>=1.1.1|No|openLooKeng EngineConn, supports querying data virtualization engine with Sql openLooKeng|
 |Sqoop| Sqoop >= 1.4.6, <br/>(default Apache Sqoop 1.4.6)|\>=1.1.2|No|Sqoop EngineConn, support data migration tool Sqoop engine|
-|Presto|Presto >= 0.180, <br/>(default Presto 0.234)|\>=1.2.0|-|Presto EngineConn, supports Presto SQL code|
-|ElasticSearch|ElasticSearch >=6.0, <br/>(default ElasticSearch 7.6.2)|\>=1.2.0|-|ElasticSearch EngineConn, supports SQL and DSL code|
-|Impala|Impala >= 3.2.0, CDH >=6.3.0|ongoing|-|Impala EngineConn, supports Impala SQL code|
-|MLSQL| MLSQL >=1.1.0|ongoing|-|MLSQL EngineConn, supports MLSQL code.|
-|Hadoop|Apache >=2.6.0, <br/>CDH >=5.4.0|ongoing|-|Hadoop EngineConn, supports Hadoop MR/YARN application|
-|TiSpark|1.1|ongoing|-|TiSpark EngineConn, supports querying TiDB with SparkSQL|
-
+|Presto|Presto >= 0.180|\>=1.2.0|No|Presto EngineConn, supports Presto SQL code|
+|ElasticSearch|ElasticSearch >=6.0|\>=1.2.0|No|ElasticSearch EngineConn, supports SQL and DSL code|
+|Trino|Trino >= 371|\>=1.3.1|No|Trino EngineConn, supports Trino SQL code|
+|SeaTunnel|SeaTunnel >= 2.1.2|\>=1.3.1|No|SeaTunnel EngineConn, supports SeaTunnel SQL code|
 
 # Download
 
@@ -193,8 +191,7 @@ For code and documentation contributions, please follow the [contribution guide]
 - By mail [dev@linkis.apache.org](mailto:dev@linkis.apache.org)
 - You can scan the QR code below to join our WeChat group to get more immediate response
 
-![wechatgroup](https://linkis.apache.org/Images/wedatasphere_contact_01.png)
-
+<img src="https://linkis.apache.org/Images/wedatasphere_contact_01.png" width="256"/>
 
 # Who is Using Linkis
 
diff --git a/README_CN.md b/README_CN.md
index 34de36909..242aebeb3 100644
--- a/README_CN.md
+++ b/README_CN.md
@@ -66,37 +66,35 @@ Linkis 自 2019 年开源发布以来,已累计积累了 700 多家试验企
 
 ## 核心特点
 
-- **丰富的底层计算存储引擎支持**  
-  - **目前支持的计算存储引擎** Spark、Hive、Flink、Python、Pipeline、Sqoop、openLooKeng、Presto、ElasticSearch、JDBC 和 Shell 等  
-  - **正在支持中的计算存储引擎** Trino(计划 1.3.1)、SeaTunnel(计划 1.3.1) 等  
-  - **支持的脚本语言** SparkSQL、HiveQL、Python、Shell、Pyspark、R、Scala 和 JDBC 等
-- **强大的计算治理能力** 基于 Orchestrator、Label Manager 和定制的 Spring Cloud Gateway 等服务,Linkis 能够提供基于多级标签的跨集群/跨 IDC 细粒度路由、负载均衡、多租户、流量控制、资源控制和编排策略 (如双活、主备等) 支持能力  
-- **全栈计算存储引擎架构支持** 能够接收、执行和管理针对各种计算存储引擎的任务和请求,包括离线批量任务、交互式查询任务、实时流式任务和存储型任务
-- **资源管理能力** ResourceManager 不仅具备对 Yarn 和 Linkis EngineManager 的资源管理能力,还将提供基于标签的多级资源分配和回收能力,让 ResourceManager 具备跨集群、跨计算资源类型的强大资源管理能力
-- **统一上下文服务** 为每个计算任务生成 context id,跨用户、系统、计算引擎的关联管理用户和系统资源文件(JAR、ZIP、Properties 等),结果集,参数变量,函数等,一处设置,处处自动引用
-- **统一物料** 系统和用户级物料管理,可分享和流转,跨用户、系统共享物料
-- **统一数据源管理** 提供了 hive、es、mysql、kafka 类型数据源的增删查改、版本控制、连接测试等功能
-- **数据源对应的元数据查询** 提供了 hive、es、mysql、kafka 元数据的数据库、表、分区查询
-
-# 支持的引擎类型
-
-| **引擎名** | **支持底层组件版本 <br/>(默认依赖版本)** | **Linkis 版本要求** | **是否默认包含在发布包中** | **说明** |
+- **丰富的底层计算存储引擎支持**:Spark、Hive、Python、Shell、Flink、JDBC、Pipeline、Sqoop、openLooKeng、Presto、ElasticSearch、Trino、SeaTunnel 等;
+- **丰富的语言支持**:SparkSQL、HiveQL、Python、Shell、Pyspark、Scala、JSON 和 Java 等;
+- **强大的计算治理能力**:能够提供基于多级标签的任务路由、负载均衡、多租户、流量控制、资源控制等能力;
+- **全栈计算存储引擎架构支持**:能够接收、执行和管理针对各种计算存储引擎的任务和请求,包括离线批量任务、交互式查询任务、实时流式任务和数据湖任务;
+- **统一上下文服务**:支持跨用户、系统、计算引擎关联管理用户和系统的资源文件(JAR、ZIP、Properties 等)、结果集、参数变量、函数、UDF 等,一处设置,处处自动引用;
+- **统一物料**:提供了系统和用户级物料管理,可分享和流转,跨用户、跨系统共享物料;
+- **统一数据源管理**:提供了 Hive、ElasticSearch、MySQL、Kafka、MongoDB 等类型数据源信息的增删查改、版本控制、连接测试和对应数据源的元数据信息查询能力;
+- **错误码能力**:提供了任务常见错误的错误码和解决方案,方便用户自助定位问题;
+
+# 引擎类型
+
+| **引擎名** | **支持底层组件版本<br/>(默认依赖版本)** | **Linkis 1.X 版本要求** | **是否默认包含在发布包中** | **说明** |
 |:---- |:---- |:---- |:---- |:---- |
-|Spark|Apache 2.0.0~2.4.7, <br/>CDH >= 5.4.0, <br/>(默认 Apache Spark 2.4.3)|\>=1.0.3|是|Spark EngineConn, 支持 SQL, Scala, Pyspark 和 R 代码|
-|Hive|Apache >= 1.0.0, <br/>CDH >= 5.4.0, <br/>(默认 Apache Hive 2.3.3)|\>=1.0.3|是|Hive EngineConn, 支持 HiveQL 代码|
-|Python|Python >= 2.6, <br/>(默认 Python2*)|\>=1.0.3|是|Python EngineConn, 支持 python 代码|
-|Shell|Bash >= 2.0|\>=1.0.3|是|Shell EngineConn, 支持 Bash shell 代码|
-|JDBC|MySQL >= 5.0, Hive >=1.2.1, <br/>(默认 Hive-jdbc 2.3.4)|\>=1.0.3|否|JDBC EngineConn, 已支持 MySQL 和 HiveQL,可快速扩展支持其他有 JDBC Driver 包的引擎, 如 Oracle|
-|Flink |Flink >= 1.12.2, <br/>(默认 Apache Flink 1.12.2)|\>=1.0.3|否|Flink EngineConn, 支持 FlinkSQL 代码,也支持以 Flink Jar 形式启动一个新的 Yarn 应用程序|
-|Pipeline|-|\>=1.0.3|否|Pipeline EngineConn, 支持文件的导入和导出|
-|openLooKeng|openLooKeng >= 1.5.0, <br/>(默认 openLookEng 1.5.0)|\>=1.1.1|否|openLooKeng EngineConn, 支持用 Sql 查询数据虚拟化引擎 openLooKeng|
-|Sqoop| Sqoop >= 1.4.6, <br/>(默认 Apache Sqoop 1.4.6)|\>=1.1.2|否|Sqoop EngineConn, 支持 数据迁移工具 Sqoop 引擎|
-|Presto|Presto >= 0.180, <br/>(默认 Presto 0.234)|\>=1.2.0|否|Presto EngineConn, 支持 Presto SQL 代码|
-|ElasticSearch|ElasticSearch >=6.0, <br/>((默认 ElasticSearch 7.6.2)|\>=1.2.0|否|ElasticSearch EngineConn, 支持 SQL 和 DSL 代码|
-|Impala|Impala >= 3.2.0, CDH >=6.3.0|ongoing|-|Impala EngineConn,支持 Impala SQL 代码|
-|MLSQL| MLSQL >=1.1.0|ongoing|-|MLSQL EngineConn, 支持 MLSQL 代码.|
-|Hadoop|Apache >=2.6.0, <br/>CDH >=5.4.0|ongoing|-|Hadoop EngineConn, 支持 Hadoop MR/YARN application|
-|TiSpark|1.1|ongoing|-|TiSpark EngineConn, 支持用 SparkSQL 查询 TiDB|
+|Spark|Apache 2.0.0~2.4.7, <br/>CDH >= 5.4.0, <br/>(默认Apache Spark 2.4.3)|\>=1.0.3|是|Spark EngineConn, 支持SQL, Scala, Pyspark 和R 代码。|
+|Hive|Apache >= 1.0.0, <br/>CDH >= 5.4.0, <br/>(默认Apache Hive 2.3.3)|\>=1.0.3|是|Hive EngineConn, 支持HiveQL 代码。|
+|Python|Python >= 2.6, <br/>(默认Python2*)|\>=1.0.3|是|Python EngineConn, 支持python 代码。|
+|Shell|Bash >= 2.0|\>=1.0.3|是|Shell EngineConn, 支持Bash shell 代码。|
+|JDBC|MySQL >= 5.0, Hive >=1.2.1, <br/>(默认Hive-jdbc 2.3.4)|\>=1.0.3|否|JDBC EngineConn, 已支持MySQL 和HiveQL,可快速扩展支持其他有JDBC Driver 包的引擎, 如Oracle。|
+|Flink |Flink >= 1.12.2, <br/>(默认Apache Flink 1.12.2)|\>=1.0.2|否|Flink EngineConn, 支持FlinkSQL 代码,也支持以Flink Jar 形式启动一个新的Yarn 应用程序。|
+|Pipeline|-|\>=1.0.2|否|Pipeline EngineConn, 支持文件的导入和导出。|
+|openLooKeng|openLooKeng >= 1.5.0, <br/>(默认openLookEng 1.5.0)|\>=1.1.1|否|openLooKeng EngineConn, 支持用Sql查询数据虚拟化引擎openLooKeng。|
+|Sqoop| Sqoop >= 1.4.6, <br/>(默认Apache Sqoop 1.4.6)|\>=1.1.2|否|Sqoop EngineConn, 支持 数据迁移工具 Sqoop 引擎。|
+|Presto|Presto >= 0.180|\>=1.2.0|否|Presto EngineConn, 支持Presto SQL 代码。|
+|ElasticSearch|ElasticSearch >=6.0|\>=1.2.0|否|ElasticSearch EngineConn, 支持SQL 和DSL 代码。|
+|Trino|Trino >= 371|\>=1.3.1|否|Trino EngineConn, 支持 Trino SQL 代码。|
+|SeaTunnel|SeaTunnel >= 2.1.2|\>=1.3.1|否|SeaTunnel EngineConn, 支持 SeaTunnel SQL 代码。|
+
+
+
 
 # 下载
 
@@ -187,7 +185,8 @@ Linkis 基于微服务架构开发,其服务可以分为 3 类:计算治理服
 - 通过邮件方式 [dev@linkis.apache.org](mailto:dev@linkis.apache.org)
 - 可以扫描下面的二维码,加入我们的微信群,以获得更快速的响应
 
-![wechatgroup](https://linkis.apache.org/Images/wedatasphere_contact_01.png)
+<img src="https://linkis.apache.org/Images/wedatasphere_contact_01.png" width="256"/>
+
 
 # 谁在使用 Linkis
 


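For context on the "multi-level labels" mentioned in the updated feature list: below is a minimal sketch of the JSON body a Linkis 1.x client sends to the entrance service (POST /api/rest_j/v1/entrance/submit, per the public Linkis REST docs) to route a task to a specific engine. The gateway URL, engine version and userCreator values are illustrative assumptions, not part of this commit.

```python
import json

def build_submit_payload(code: str, run_type: str,
                         engine_type: str, user_creator: str) -> dict:
    """Assemble a Linkis task submission body with multi-level labels.

    engineType selects the engine and version (e.g. "spark-2.4.3",
    matching the default Spark version in the engine table above);
    userCreator identifies the tenant user and the client application.
    """
    return {
        "executionContent": {"code": code, "runType": run_type},
        "params": {"variable": {}, "configuration": {}},
        "labels": {"engineType": engine_type, "userCreator": user_creator},
    }

payload = build_submit_payload(
    code="SELECT 1",
    run_type="sql",
    engine_type="spark-2.4.3",   # assumed: default Spark per the table
    user_creator="hadoop-IDE",   # assumed tenant/app pair
)
print(json.dumps(payload, indent=2))
# To actually submit, POST this JSON (with a login cookie or token) to
# http://<gateway-host>:9001/api/rest_j/v1/entrance/submit
```

The same body shape works for other engines in the table by swapping the engineType label, e.g. "hive-2.3.3" with runType "hql".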