Posted to commits@linkis.apache.org by ca...@apache.org on 2022/07/25 06:06:37 UTC

[incubator-linkis-website] branch dev updated: Python engine adding version switching steps (#421)

This is an automated email from the ASF dual-hosted git repository.

casion pushed a commit to branch dev
in repository https://gitbox.apache.org/repos/asf/incubator-linkis-website.git


The following commit(s) were added to refs/heads/dev by this push:
     new 9bce791e74 Python engine adding version switching steps (#421)
9bce791e74 is described below

commit 9bce791e7439b4e19eabb73cb04952747b3b4439
Author: 成彬彬 <10...@users.noreply.github.com>
AuthorDate: Mon Jul 25 14:06:34 2022 +0800

    Python engine adding version switching steps (#421)
    
    * Python engine adding version switching steps
    
    * update spark.md
---
 docs/engine_usage/pipeline.md                          |   2 +-
 docs/engine_usage/python.md                            |  11 +++++++++++
 docs/engine_usage/spark.md                             |  11 ++++++++---
 docs/user_guide/linkiscli_manual.md                    |  14 +++++++-------
 .../current/engine_usage/python.md                     |  16 ++++++++++++++--
 .../current/engine_usage/spark.md                      |   7 +++++--
 .../current/user_guide/linkiscli_manual.md             |  14 +++++++-------
 .../version-1.0.2/engine_usage/python.md               |   2 +-
 .../version-1.0.2/engine_usage/spark.md                |   5 ++++-
 .../version-1.0.3/engine_usage/python.md               |   2 +-
 .../version-1.0.3/engine_usage/spark.md                |   5 ++++-
 .../version-1.1.0/engine_usage/python.md               |   2 +-
 .../version-1.1.0/engine_usage/spark.md                |   7 ++++++-
 .../version-1.1.1/engine_usage/python.md               |   2 +-
 .../version-1.1.1/engine_usage/spark.md                |   9 +++++++--
 .../version-1.1.2/engine_usage/python.md               |   2 +-
 .../version-1.1.2/engine_usage/spark.md                |   9 +++++++--
 static/Images/EngineUsage/python-configure.png         | Bin 0 -> 105825 bytes
 18 files changed, 86 insertions(+), 34 deletions(-)

diff --git a/docs/engine_usage/pipeline.md b/docs/engine_usage/pipeline.md
index f35e192ed8..3c4dea3a7d 100644
--- a/docs/engine_usage/pipeline.md
+++ b/docs/engine_usage/pipeline.md
@@ -1,5 +1,5 @@
 ---
-title: pipeline engine
+title: Pipeline Engine
 sidebar_position: 10
 ---
 
diff --git a/docs/engine_usage/python.md b/docs/engine_usage/python.md
index 97ee168c13..ebd05857e6 100644
--- a/docs/engine_usage/python.md
+++ b/docs/engine_usage/python.md
@@ -23,6 +23,17 @@ Table 1-1 Environmental configuration list
 Python supports both python2 and
 python3. You can switch the Python version simply by changing the configuration, without recompiling the python EngineConn version.
 
+```
+#1: CLI to submit tasks for version switching, and set the version Python at the end of the command Version=python3 (python3: the name of the file generated when creating a soft connection, which can be customized)
+sh ./ bin/linkis-cli -engineType python-python2 -codeType python -code "print(\"hello\")"  -submitUser hadoop -proxyUser hadoop  -confMap  python. version=python3
+
+#2: CLI to submit the task for version switching, and add the command setting to the version path python Version=/usr/bin/python (/usr/bin/python: the path of the generated file when creating the soft connection)
+sh ./ bin/linkis-cli -engineType python-python2 -codeType python -code "print(\"hello\")"  -submitUser hadoop -proxyUser hadoop  -confMap  python. version=/usr/bin/python
+
+```
+Page configuration:
+![](/Images/EngineUsage/python-configure.png)
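For reference, a minimal sketch of creating the soft link that the comments above refer to; the interpreter path /usr/local/bin/python3.8 is an assumption and should be replaced with whatever is actually installed on your server:

```shell
# Create a soft link whose name (python3) can be passed as python.version=python3,
# and whose full path (/usr/bin/python3) can be passed as a path-style value.
# /usr/local/bin/python3.8 is an assumed install location; adjust to your environment.
ln -s /usr/local/bin/python3.8 /usr/bin/python3
```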
+
 ### 2.2 python engineConn deployment and loading
 
 Here the default loading method can be used as-is, without additional configuration.
diff --git a/docs/engine_usage/spark.md b/docs/engine_usage/spark.md
index 106ebce8f7..7dc53b328e 100644
--- a/docs/engine_usage/spark.md
+++ b/docs/engine_usage/spark.md
@@ -68,10 +68,15 @@ If you use Hive, you only need to make the following changes:
 ### 3.2 How to use Linkis-cli
 
 Since Linkis 1.0, tasks can be submitted via the cli. You only need to specify the corresponding EngineConn and CodeType tag types. Spark usage is as follows:
+
 ```shell
-## codeType py-->pyspark  sql-->sparkSQL scala-->Spark scala
-sh ./bin/linkis-cli -engineType spark-2.4.3 -codeType sql -code "show tables"  -submitUser hadoop -proxyUser hadoop
-```
+## codeType mapping: py-->pyspark  sql-->sparkSQL scala-->Spark scala
+sh ./bin/linkis-cli -engineType spark-2.4.3 -codeType sql -code "show tables" -submitUser hadoop -proxyUser hadoop
+
+# You can specify the yarn queue in the submission parameters via -confMap wds.linkis.yarnqueue=dws
+sh ./bin/linkis-cli -engineType spark-2.4.3 -codeType sql -confMap wds.linkis.yarnqueue=dws -code "show tables" -submitUser hadoop -proxyUser hadoop
+````
+
 For specific usage, refer to the [Linkis CLI Manual](user_guide/linkiscli_manual.md).
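Following the codeType mapping noted above (py-->pyspark), a pyspark submission would look like the sketch below; the -code payload is an illustrative assumption:

```shell
# Hypothetical pyspark submission; py maps to pyspark per the mapping comment above.
sh ./bin/linkis-cli -engineType spark-2.4.3 -codeType py -code "print(1 + 1)" -submitUser hadoop -proxyUser hadoop
```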
 
 
diff --git a/docs/user_guide/linkiscli_manual.md b/docs/user_guide/linkiscli_manual.md
index 067b99c48d..f4b1b69f2c 100644
--- a/docs/user_guide/linkiscli_manual.md
+++ b/docs/user_guide/linkiscli_manual.md
@@ -26,7 +26,7 @@ The first step is to check whether the default configuration file `linkis-cli.pr
 The second step is to go to the linkis installation directory and enter the command:
 
 ```bash
-    ./bin/linkis-cli -engineType spark-2.4.3 -codeType sql -code "select count(*) from testdb.test;"  -submitUser hadoop -proxyUser hadoop 
+   sh ./bin/linkis-cli -engineType spark-2.4.3 -codeType sql -code "select count(*) from testdb.test;"  -submitUser hadoop -proxyUser hadoop 
 ```
 
 In the third step, you will see console output showing that the task has been submitted to linkis and has started executing.
@@ -37,7 +37,7 @@ Linkis-cli currently only supports synchronous submission, that is, after submit
 ## How to use
 
 ```bash
-   ./bin/linkis-cli [parameter] [cli parameter]
+   sh ./bin/linkis-cli [parameter] [cli parameter]
 ```
 
 ## Supported parameter list
@@ -78,7 +78,7 @@ Linkis-cli currently only supports synchronous submission, that is, after submit
 Cli parameters can be passed in manually; values specified this way will override conflicting configuration items in the default configuration file
 
 ```bash
-    ./bin/linkis-cli -engineType spark-2.4.3 -codeType sql -code "select count(*) from testdb.test;" -submitUser hadoop -proxyUser hadoop --gwUrl http://127.0.0.1:9001 --authStg token --authKey [tokenKey] --authVal [tokenValue]
+    sh ./bin/linkis-cli -engineType spark-2.4.3 -codeType sql -code "select count(*) from testdb.test;" -submitUser hadoop -proxyUser hadoop --gwUrl http://127.0.0.1:9001 --authStg token --authKey [tokenKey] --authVal [tokenValue]
 ```
 
 #### Two, add engine initial parameters
@@ -90,7 +90,7 @@ The initial parameters of the engine can be added through the `-confMap` paramet
 For example, the following sets startup parameters such as the yarn queue for engine startup and the number of spark executors:
 
 ```bash
-   ./bin/linkis-cli -engineType spark-2.4.3 -codeType sql -confMap wds.linkis.yarnqueue=q02,spark.executor.instances=3 -code "select count(*) from testdb.test;"  -submitUser hadoop -proxyUser hadoop  
+  sh ./bin/linkis-cli -engineType spark-2.4.3 -codeType sql -confMap wds.linkis.yarnqueue=q02,spark.executor.instances=3 -code "select count(*) from testdb.test;"  -submitUser hadoop -proxyUser hadoop  
 ```
 
 Of course, these parameters can also be read from a configuration file; we will cover that later
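Note that this page passes several settings as one comma-separated `-confMap` value, while the Chinese version of this manual (updated in this same commit) repeats the flag instead. A sketch of the repeated-flag form, worth verifying against your linkis-cli build:

```shell
# Same settings as above, passed as repeated -confMap flags
# rather than a single comma-separated list.
sh ./bin/linkis-cli -engineType spark-2.4.3 -codeType sql -confMap wds.linkis.yarnqueue=q02 -confMap spark.executor.instances=3 -code "select count(*) from testdb.test;" -submitUser hadoop -proxyUser hadoop
```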
@@ -100,7 +100,7 @@ Of course, these parameters can also be read in a configuration file, we will ta
 Labels can be added through the `-labelMap` parameter. Like `-confMap`, the `-labelMap` parameter is also of type Map:
 
 ```bash
-   /bin/linkis-cli -engineType spark-2.4.3 -codeType sql -labelMap labelKey=labelVal -code "select count(*) from testdb.test;"  -submitUser hadoop -proxyUser hadoop  
+  sh ./bin/linkis-cli -engineType spark-2.4.3 -codeType sql -labelMap labelKey=labelVal -code "select count(*) from testdb.test;"  -submitUser hadoop -proxyUser hadoop  
 ```
 
 #### Fourth, variable replacement
@@ -108,7 +108,7 @@ Labels can be added through the `-labelMap` parameter. Like the `-confMap`, the
 Linkis-cli variable substitution is implemented through the `${}` symbol together with `-varMap`
 
 ```bash
-   ./bin/linkis-cli -engineType spark-2.4.3 -codeType sql -code "select count(*) from \${key};" -varMap key=testdb.test  -submitUser hadoop -proxyUser hadoop  
+   sh ./bin/linkis-cli -engineType spark-2.4.3 -codeType sql -code "select count(*) from \${key};" -varMap key=testdb.test  -submitUser hadoop -proxyUser hadoop  
 ```
 
 During execution, the sql statement will be replaced with:
@@ -124,7 +124,7 @@ Note that the escape character in `'\$'` is to prevent the parameter from being
 1. linkis-cli supports loading user-defined configuration files; the configuration file path is specified by the `--userConf` parameter, and the file must be in `.properties` format
         
 ```bash
-   ./bin/linkis-cli -engineType spark-2.4.3 -codeType sql -code "select count(*) from testdb.test;"  -submitUser hadoop -proxyUser hadoop  --userConf [configuration file path]
+  sh ./bin/linkis-cli -engineType spark-2.4.3 -codeType sql -code "select count(*) from testdb.test;"  -submitUser hadoop -proxyUser hadoop  --userConf [configuration file path]
 ``` 
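To make `--userConf` concrete, here is a sketch of writing and using such a `.properties` file; the keys mirror the `-confMap` examples above, and whether every such key is honored from a user config file is an assumption to verify against your deployment:

```shell
# Hypothetical user configuration file; keys taken from the -confMap examples above.
cat > /tmp/linkis-cli-user.properties <<'EOF'
wds.linkis.yarnqueue=q02
spark.executor.instances=3
EOF

# Submit with the file (the path is illustrative):
sh ./bin/linkis-cli -engineType spark-2.4.3 -codeType sql -code "select count(*) from testdb.test;" -submitUser hadoop -proxyUser hadoop --userConf /tmp/linkis-cli-user.properties
```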
         
         
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/engine_usage/python.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/engine_usage/python.md
index e189f2a9aa..48ac0e1a2e 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/engine_usage/python.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/engine_usage/python.md
@@ -5,7 +5,7 @@ sidebar_position: 5
 
 This article mainly introduces the configuration, deployment and usage of the Python engine in Linkis1.X.
 
-## 1. Environment configuration before using the Spark engine
+## 1. Environment configuration before using the Python engine
 
 If you want to use the python engine on your server, you need to make sure the user's PATH contains the python executable directory and execution permission.
 
@@ -20,7 +20,19 @@ sidebar_position: 5
 ### 2.1 Selecting and compiling the Python version
 
 Python supports both python2 and
-python3; switching the Python version only requires a simple configuration change, without recompiling the python engine version.
+python3; switching the Python version only requires a simple configuration change, without recompiling the python engine version. The specific configuration is as follows.
+
+
+```
+#1: Submit the task via the cli with a version switch: set python.version=python3 at the end of the command (python3: the name of the file generated when creating the soft link, which can be customized)
+sh ./bin/linkis-cli -engineType python-python2 -codeType python -code "print(\"hello\")"  -submitUser hadoop -proxyUser hadoop  -confMap  python.version=python3
+
+#2: Submit the task via the cli with a version switch: pass the version path python.version=/usr/bin/python in the command (/usr/bin/python: the path of the file generated when creating the soft link)
+sh ./bin/linkis-cli -engineType python-python2 -codeType python -code "print(\"hello\")"  -submitUser hadoop -proxyUser hadoop  -confMap  python.version=/usr/bin/python
+
+```
+Page configuration:
+![](/Images/EngineUsage/python-configure.png)
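As a quick way to confirm the switch took effect, a sketch that prints the interpreter version through the engine; the -code payload is an illustrative assumption:

```shell
# Hypothetical check: print the interpreter version after switching to python3.
sh ./bin/linkis-cli -engineType python-python2 -codeType python -code "import sys; print(sys.version)" -submitUser hadoop -proxyUser hadoop -confMap python.version=python3
```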
 
 ### 2.2 python engineConn deployment and loading
 
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/engine_usage/spark.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/engine_usage/spark.md
index 6cad8cfc73..57b94b72d9 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/engine_usage/spark.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/engine_usage/spark.md
@@ -68,8 +68,11 @@ Linkis provides Java and Scala SDKs for submitting tasks to the Linkis server. For details
 
 Since Linkis 1.0, tasks can be submitted via the cli; you only need to specify the corresponding EngineConn and CodeType tag types. Spark usage is as follows:
 ```shell
-#You can also add the queue value in the StartUpMap of the submission parameter: 
-startupMap.put("wds.linkis.rm.yarnqueue", "dws")
+# codeType mapping: py-->pyspark  sql-->sparkSQL scala-->Spark scala
+sh ./bin/linkis-cli -engineType spark-2.4.3 -codeType sql -code "show tables"  -submitUser hadoop -proxyUser hadoop
+
+# You can specify the yarn queue in the submission parameters via -confMap wds.linkis.yarnqueue=dws
+sh ./bin/linkis-cli -engineType spark-2.4.3 -codeType sql  -confMap wds.linkis.yarnqueue=dws -code "show tables"  -submitUser hadoop -proxyUser hadoop
 ```
 For specific usage, refer to: [Linkis CLI Manual](user_guide/linkiscli_manual.md).
 
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/user_guide/linkiscli_manual.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/user_guide/linkiscli_manual.md
index 9167fd5420..720eff7fec 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/user_guide/linkiscli_manual.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/user_guide/linkiscli_manual.md
@@ -26,7 +26,7 @@ Linkis-Cli is a shell command-line program for submitting tasks to Linkis.
 In the second step, go to the linkis installation directory and enter the command:
 
 ```bash
-    ./bin/linkis-cli -engineType spark-2.4.3 -codeType sql -code "select count(*) from testdb.test;"  -submitUser hadoop -proxyUser hadoop 
+    sh ./bin/linkis-cli -engineType spark-2.4.3 -codeType sql -code "select count(*) from testdb.test;"  -submitUser hadoop -proxyUser hadoop 
 ```
 
 In the third step, you will see console output showing that the task has been submitted to linkis and has started executing.
@@ -37,7 +37,7 @@ linkis-cli currently only supports synchronous submission, that is, after submit
 ## 3. How to use
 
 ```bash
-   ./bin/linkis-cli   [client parameters][engine parameters] [startup and runtime parameters]
+   sh ./bin/linkis-cli   [client parameters][engine parameters] [startup and runtime parameters]
 ```
            
 ## 4. Supported parameter list
@@ -92,7 +92,7 @@ linkis-cli currently only supports synchronous submission, that is, after submit
 Client parameters can be passed in manually; values specified this way will override conflicting configuration items in the default configuration file `linkis-cli.properties`
 They can also be set via a configuration file
 ```bash
-    ./bin/linkis-cli --gatewayUrl http://127.0.0.1:9001  --authStg token --authKey [tokenKey] --authVal [tokenValue]  -engineType spark-2.4.3 -codeType sql -code "select count(*) from testdb.test;"  -submitUser hadoop -proxyUser hadoop  
+   sh ./bin/linkis-cli --gatewayUrl http://127.0.0.1:9001  --authStg token --authKey [tokenKey] --authVal [tokenValue]  -engineType spark-2.4.3 -codeType sql -code "select count(*) from testdb.test;"  -submitUser hadoop -proxyUser hadoop  
 ```
 
 ### 5.2 Adding engine startup parameters
@@ -107,7 +107,7 @@ linkis-cli currently only supports synchronous submission, that is, after submit
 For example, the following sets startup parameters such as the yarn queue for engine startup and the number of spark executors:
 
 ```bash
-   ./bin/linkis-cli -engineType spark-2.4.3 -codeType sql -confMap wds.linkis.yarnqueue=q02 -confMap spark.executor.instances=3 -code "select count(*) from testdb.test;"  -submitUser hadoop -proxyUser hadoop  
+   sh ./bin/linkis-cli -engineType spark-2.4.3 -codeType sql -confMap wds.linkis.yarnqueue=q02 -confMap spark.executor.instances=3 -code "select count(*) from testdb.test;"  -submitUser hadoop -proxyUser hadoop  
 ```
         
 Of course, these parameters can also be read from a configuration file; see [5.5 Using the user's configuration file]
@@ -135,7 +135,7 @@ linkis-cli currently only supports synchronous submission, that is, after submit
 Labels can be added through the `-labelMap` parameter; like `-confMap`, the `-labelMap` parameter is also of type Map:
 
 ```bash
-   /bin/linkis-cli -engineType spark-2.4.3 -codeType sql -labelMap labelKey=labelVal -code "select count(*) from testdb.test;"  -submitUser hadoop -proxyUser hadoop  
+   sh ./bin/linkis-cli -engineType spark-2.4.3 -codeType sql -labelMap labelKey=labelVal -code "select count(*) from testdb.test;"  -submitUser hadoop -proxyUser hadoop  
 ```
 
 ### 5.4 Variable substitution
@@ -143,7 +143,7 @@ linkis-cli currently only supports synchronous submission, that is, after submit
 Linkis-cli variable substitution is implemented through the `${}` symbol together with `-varMap`
 
 ```bash
-   ./bin/linkis-cli -engineType spark-2.4.3 -codeType sql -code "select count(*) from \${key};" -varMap key=testdb.test  -submitUser hadoop -proxyUser hadoop  
+  sh ./bin/linkis-cli -engineType spark-2.4.3 -codeType sql -code "select count(*) from \${key};" -varMap key=testdb.test  -submitUser hadoop -proxyUser hadoop  
 ```
 
 During execution, the sql statement will be replaced with:
@@ -159,7 +159,7 @@ Linkis-cli variable substitution is implemented through the `${}` symbol together with `-varMap`
 1. linkis-cli supports loading user-defined configuration files; the configuration file path is specified by the `--userConf` parameter, the file must be in `.properties` format, and the `conf/linkis-cli/linkis-cli.properties` configuration file is used by default
 
 ```bash
-   ./bin/linkis-cli -engineType spark-2.4.3 -codeType sql -code "select count(*) from testdb.test;"  -submitUser hadoop -proxyUser hadoop  --userConf [configuration file path]
+   sh ./bin/linkis-cli -engineType spark-2.4.3 -codeType sql -code "select count(*) from testdb.test;"  -submitUser hadoop -proxyUser hadoop  --userConf [configuration file path]
 ``` 
         
 2. Which parameters can be configured?
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.0.2/engine_usage/python.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.0.2/engine_usage/python.md
index 942b92aad9..b72b7ce9d0 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.0.2/engine_usage/python.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.0.2/engine_usage/python.md
@@ -5,7 +5,7 @@ sidebar_position: 5
 
 This article mainly introduces the configuration, deployment and usage of the Python engine in Linkis1.0.
 
-## 1. Environment configuration before using the Spark engine
+## 1. Environment configuration before using the Python engine
 
 If you want to use the python engine on your server, you need to make sure the user's PATH contains the python executable directory and execution permission.
 
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.0.2/engine_usage/spark.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.0.2/engine_usage/spark.md
index 15d85d4299..e9d183551c 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.0.2/engine_usage/spark.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.0.2/engine_usage/spark.md
@@ -68,8 +68,11 @@ Linkis provides Java and Scala SDKs for submitting tasks to the Linkis server. For details
 
 Since Linkis 1.0, tasks can be submitted via the cli; you only need to specify the corresponding EngineConn and CodeType tag types. Spark usage is as follows:
 ```shell
-You can also add the queue value in the StartUpMap of the submission parameter: `startupMap.put("wds.linkis.rm.yarnqueue", "dws")`
+## codeType mapping: py-->pyspark  sql-->sparkSQL scala-->Spark scala
+sh ./bin/linkis-cli -engineType spark-2.4.3 -codeType sql -code "show tables"  -submitUser hadoop -proxyUser hadoop
 
+# You can specify the yarn queue in the submission parameters via -confMap wds.linkis.yarnqueue=dws
+sh ./bin/linkis-cli -engineType spark-2.4.3 -codeType sql  -confMap wds.linkis.yarnqueue=dws -code "show tables"  -submitUser hadoop -proxyUser hadoop
 ```
 For specific usage, refer to: [Linkis CLI Manual](user_guide/linkiscli_manual.md).
 
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.0.3/engine_usage/python.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.0.3/engine_usage/python.md
index 942b92aad9..b72b7ce9d0 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.0.3/engine_usage/python.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.0.3/engine_usage/python.md
@@ -5,7 +5,7 @@ sidebar_position: 5
 
 This article mainly introduces the configuration, deployment and usage of the Python engine in Linkis1.0.
 
-## 1. Environment configuration before using the Spark engine
+## 1. Environment configuration before using the Python engine
 
 If you want to use the python engine on your server, you need to make sure the user's PATH contains the python executable directory and execution permission.
 
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.0.3/engine_usage/spark.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.0.3/engine_usage/spark.md
index 15d85d4299..e9d183551c 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.0.3/engine_usage/spark.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.0.3/engine_usage/spark.md
@@ -68,8 +68,11 @@ Linkis provides Java and Scala SDKs for submitting tasks to the Linkis server. For details
 
 Since Linkis 1.0, tasks can be submitted via the cli; you only need to specify the corresponding EngineConn and CodeType tag types. Spark usage is as follows:
 ```shell
-You can also add the queue value in the StartUpMap of the submission parameter: `startupMap.put("wds.linkis.rm.yarnqueue", "dws")`
+## codeType mapping: py-->pyspark  sql-->sparkSQL scala-->Spark scala
+sh ./bin/linkis-cli -engineType spark-2.4.3 -codeType sql -code "show tables"  -submitUser hadoop -proxyUser hadoop
 
+# You can specify the yarn queue in the submission parameters via -confMap wds.linkis.yarnqueue=dws
+sh ./bin/linkis-cli -engineType spark-2.4.3 -codeType sql  -confMap wds.linkis.yarnqueue=dws -code "show tables"  -submitUser hadoop -proxyUser hadoop
 ```
 For specific usage, refer to: [Linkis CLI Manual](user_guide/linkiscli_manual.md).
 
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.1.0/engine_usage/python.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.1.0/engine_usage/python.md
index 942b92aad9..b72b7ce9d0 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.1.0/engine_usage/python.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.1.0/engine_usage/python.md
@@ -5,7 +5,7 @@ sidebar_position: 5
 
 This article mainly introduces the configuration, deployment and usage of the Python engine in Linkis1.0.
 
-## 1. Environment configuration before using the Spark engine
+## 1. Environment configuration before using the Python engine
 
 If you want to use the python engine on your server, you need to make sure the user's PATH contains the python executable directory and execution permission.
 
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.1.0/engine_usage/spark.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.1.0/engine_usage/spark.md
index 15d85d4299..18d4b3eed6 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.1.0/engine_usage/spark.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.1.0/engine_usage/spark.md
@@ -67,10 +67,15 @@ Linkis provides Java and Scala SDKs for submitting tasks to the Linkis server. For details
 ### 3.2 Submitting tasks via Linkis-cli
 
 Since Linkis 1.0, tasks can be submitted via the cli; you only need to specify the corresponding EngineConn and CodeType tag types. Spark usage is as follows:
+
 ```shell
-You can also add the queue value in the StartUpMap of the submission parameter: `startupMap.put("wds.linkis.rm.yarnqueue", "dws")`
+## codeType mapping: py-->pyspark  sql-->sparkSQL scala-->Spark scala
+sh ./bin/linkis-cli -engineType spark-2.4.3 -codeType sql -code "show tables"  -submitUser hadoop -proxyUser hadoop
 
+# You can specify the yarn queue in the submission parameters via -confMap wds.linkis.yarnqueue=dws
+sh ./bin/linkis-cli -engineType spark-2.4.3 -codeType sql  -confMap wds.linkis.yarnqueue=dws -code "show tables"  -submitUser hadoop -proxyUser hadoop
 ```
+
 For specific usage, refer to: [Linkis CLI Manual](user_guide/linkiscli_manual.md).
 
 ### 3.3 Using Scriptis
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.1.1/engine_usage/python.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.1.1/engine_usage/python.md
index e189f2a9aa..eaba419773 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.1.1/engine_usage/python.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.1.1/engine_usage/python.md
@@ -5,7 +5,7 @@ sidebar_position: 5
 
 This article mainly introduces the configuration, deployment and usage of the Python engine in Linkis1.X.
 
-## 1. Environment configuration before using the Spark engine
+## 1. Environment configuration before using the Python engine
 
 If you want to use the python engine on your server, you need to make sure the user's PATH contains the python executable directory and execution permission.
 
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.1.1/engine_usage/spark.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.1.1/engine_usage/spark.md
index 6cad8cfc73..780a4cbe8e 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.1.1/engine_usage/spark.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.1.1/engine_usage/spark.md
@@ -67,10 +67,15 @@ Linkis provides Java and Scala SDKs for submitting tasks to the Linkis server. For details
 ### 3.2 Submitting tasks via Linkis-cli
 
 Since Linkis 1.0, tasks can be submitted via the cli; you only need to specify the corresponding EngineConn and CodeType tag types. Spark usage is as follows:
+
 ```shell
-#You can also add the queue value in the StartUpMap of the submission parameter: 
-startupMap.put("wds.linkis.rm.yarnqueue", "dws")
+## codeType mapping: py-->pyspark  sql-->sparkSQL scala-->Spark scala
+sh ./bin/linkis-cli -engineType spark-2.4.3 -codeType sql -code "show tables"  -submitUser hadoop -proxyUser hadoop
+
+# You can specify the yarn queue in the submission parameters via -confMap wds.linkis.yarnqueue=dws
+sh ./bin/linkis-cli -engineType spark-2.4.3 -codeType sql  -confMap wds.linkis.yarnqueue=dws -code "show tables"  -submitUser hadoop -proxyUser hadoop
 ```
+
 For specific usage, refer to: [Linkis CLI Manual](user_guide/linkiscli_manual.md).
 
 ### 3.3 Using Scriptis
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.1.2/engine_usage/python.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.1.2/engine_usage/python.md
index e189f2a9aa..eaba419773 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.1.2/engine_usage/python.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.1.2/engine_usage/python.md
@@ -5,7 +5,7 @@ sidebar_position: 5
 
 This article mainly introduces the configuration, deployment and usage of the Python engine in Linkis1.X.
 
-## 1. Environment configuration before using the Spark engine
+## 1. Environment configuration before using the Python engine
 
 If you want to use the python engine on your server, you need to make sure the user's PATH contains the python executable directory and execution permission.
 
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.1.2/engine_usage/spark.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.1.2/engine_usage/spark.md
index 6cad8cfc73..780a4cbe8e 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.1.2/engine_usage/spark.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.1.2/engine_usage/spark.md
@@ -67,10 +67,15 @@ Linkis provides Java and Scala SDKs for submitting tasks to the Linkis server. For details
 ### 3.2 Submitting tasks via Linkis-cli
 
 Since Linkis 1.0, tasks can be submitted via the cli; you only need to specify the corresponding EngineConn and CodeType tag types. Spark usage is as follows:
+
 ```shell
-#You can also add the queue value in the StartUpMap of the submission parameter: 
-startupMap.put("wds.linkis.rm.yarnqueue", "dws")
+## codeType mapping: py-->pyspark  sql-->sparkSQL scala-->Spark scala
+sh ./bin/linkis-cli -engineType spark-2.4.3 -codeType sql -code "show tables"  -submitUser hadoop -proxyUser hadoop
+
+# You can specify the yarn queue in the submission parameters via -confMap wds.linkis.yarnqueue=dws
+sh ./bin/linkis-cli -engineType spark-2.4.3 -codeType sql  -confMap wds.linkis.yarnqueue=dws -code "show tables"  -submitUser hadoop -proxyUser hadoop
 ```
+
 For specific usage, refer to: [Linkis CLI Manual](user_guide/linkiscli_manual.md).
 
 ### 3.3 Using Scriptis
diff --git a/static/Images/EngineUsage/python-configure.png b/static/Images/EngineUsage/python-configure.png
new file mode 100644
index 0000000000..5a92d168c3
Binary files /dev/null and b/static/Images/EngineUsage/python-configure.png differ


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@linkis.apache.org
For additional commands, e-mail: commits-help@linkis.apache.org