Posted to commits@streampark.apache.org by be...@apache.org on 2022/11/08 06:28:29 UTC

[incubator-streampark-website] branch dev updated: Change name (#154)

This is an automated email from the ASF dual-hosted git repository.

benjobs pushed a commit to branch dev
in repository https://gitbox.apache.org/repos/asf/incubator-streampark-website.git


The following commit(s) were added to refs/heads/dev by this push:
     new 3f556e12 Change name (#154)
3f556e12 is described below

commit 3f556e1260ef93a9e083887b1ab8a428c52e59a9
Author: ChunFu Wu <31...@qq.com>
AuthorDate: Tue Nov 8 14:28:25 2022 +0800

    Change name (#154)
    
    * Change name
---
 ...5\217\221\345\210\251\345\231\250StreamPark.md" |   2 +-
 docs/connector/1-kafka.md                          |   4 +-
 docs/connector/4-doris.md                          |   2 +-
 docs/connector/5-es.md                             |   2 +-
 docs/development/conf.md                           |   6 +--
 docs/development/model.md                          |  18 +++----
 docs/flink-k8s/1-deployment.md                     |  24 +++++-----
 docs/intro.md                                      |   2 +-
 docs/user-guide/1-deployment.md                    |  40 ++++++++--------
 docs/user-guide/2-quickstart.md                    |   6 +--
 docs/user-guide/3-development.md                   |  22 ++++-----
 docs/user-guide/4-dockerDeployment.md              |   1 -
 docs/user-guide/5-LDAP.md                          |  10 ++--
 .../current/connector/1-kafka.md                   |   6 +--
 .../current/connector/4-doris.md                   |   2 +-
 .../current/connector/5-es.md                      |   4 +-
 .../current/development/conf.md                    |   4 +-
 .../current/development/model.md                   |  18 +++----
 .../current/flink-k8s/1-deployment.md              |  26 +++++------
 .../current/intro.md                               |  14 +++---
 .../current/user-guide/1-deployment.md             |  44 ++++++++---------
 .../current/user-guide/2-quickstart.md             |   6 +--
 .../current/user-guide/3-development.md            |  20 ++++----
 .../current/user-guide/4-dockerDeployment.md       |   2 +-
 .../current/user-guide/5-LDAP.md                   |  10 ++--
 i18n/zh-CN/docusaurus-theme-classic/footer.json    |   8 ++--
 src/pages/home/hero.jsx                            |   4 +-
 src/pages/home/index.less                          |  52 ++++++++++-----------
 src/pages/user/languages.json                      |   4 +-
 ...{streamx-archite.png => streampark-archite.png} | Bin
 .../{streamx_apis.jpeg => streampark_apis.jpeg}    | Bin
 ...{streamx_archite.png => streampark_archite.png} | Bin
 .../{streamx_build.png => streampark_build.png}    | Bin
 .../{streamx_conf.jpg => streampark_conf.jpg}      | Bin
 ...{streamx_coreapi.png => streampark_coreapi.png} | Bin
 ...eamx_flinkhome.png => streampark_flinkhome.png} | Bin
 ...{streamx_ideaopt.jpg => streampark_ideaopt.jpg} | Bin
 ...eamx_kafkaapi.jpeg => streampark_kafkaapi.jpeg} | Bin
 .../{streamx_login.jpeg => streampark_login.jpeg}  | Bin
 ...e_cycle.png => streampark_scala_life_cycle.png} | Bin
 ...treamx_settings.png => streampark_settings.png} | Bin
 .../{streamx_start.png => streampark_start.png}    | Bin
 ...mx_websetting.png => streampark_websetting.png} | Bin
 43 files changed, 181 insertions(+), 182 deletions(-)

diff --git "a/blog/Flink\345\274\200\345\217\221\345\210\251\345\231\250StreamX.md" "b/blog/Flink\345\274\200\345\217\221\345\210\251\345\231\250StreamPark.md"
similarity index 99%
rename from "blog/Flink\345\274\200\345\217\221\345\210\251\345\231\250StreamX.md"
rename to "blog/Flink\345\274\200\345\217\221\345\210\251\345\231\250StreamPark.md"
index ae02932e..13ef5b40 100644
--- "a/blog/Flink\345\274\200\345\217\221\345\210\251\345\231\250StreamX.md"
+++ "b/blog/Flink\345\274\200\345\217\221\345\210\251\345\231\250StreamPark.md"
@@ -243,7 +243,7 @@ Native-session模式需要事先使用Flink命令创建一个运行在K8s中的F
 
 附:
 
-Streamx Github: https://github.com/streamxhub/streamx <br/>
+StreamPark Github: https://github.com/apache/incubator-streampark <br/>
 Doris Github: https://github.com/apache/incubator-doris
 
 ![](/blog/author.png)
diff --git a/docs/connector/1-kafka.md b/docs/connector/1-kafka.md
index 516b8baa..04c80936 100644
--- a/docs/connector/1-kafka.md
+++ b/docs/connector/1-kafka.md
@@ -173,7 +173,7 @@ Let's take a look at more usage and configuration methods
 
 ### Consume multiple Kafka instances
 
-`StreamPark` has taken into account the configuration of kafka of multiple different instances at the beginning of development . How to unify the configuration, and standardize the format? The solution in streamx is this, if we want to consume two different instances of kafka at the same time, the configuration file is defined as follows,
+`StreamPark` has taken the configuration of multiple different Kafka instances into account from the beginning of development. How do we unify the configuration and standardize the format? The solution in streampark is as follows: if we want to consume two different Kafka instances at the same time, the configuration file is defined like this.
 As you can see, the Kafka instance names sit directly under `kafka.source`; here we uniformly call such a name an **alias**. An **alias** must be unique, to distinguish between different instances.
 If there is only one Kafka instance, then you do not need to configure an `alias`.
 When writing the consuming code, just make sure to specify the corresponding **alias**; the configuration and code are as follows
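
The hunk above refers to a configuration block that the diff does not show. As a non-authoritative sketch of the alias layout it describes (instance names sit directly under `kafka.source`, each alias is unique, and that instance's parameters nest beneath it), the file might look like this; the alias names, brokers, and consumer properties are illustrative, not taken from the repository:

```yaml
kafka.source:
  kafka1:                        # alias, must be unique per instance (name is hypothetical)
    bootstrap.servers: host-a:9092
    topic: topic-a
    group.id: group-a
  kafka2:                        # alias for the second Kafka instance (also hypothetical)
    bootstrap.servers: host-b:9092
    topic: topic-b
    group.id: group-b
```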
@@ -518,7 +518,7 @@ class JavaUser implements Serializable {
 The returned object is wrapped in a `KafkaRecord`, which has the current `offset`, `partition`, `timestamp` and many other useful information for developers to use, where `value` is the target object returned, as shown below:
 
 
-![](/doc/image/streamx_kafkaapi.jpeg)
+![](/doc/image/streampark_kafkaapi.jpeg)
 
 ### Specific strategy
 
diff --git a/docs/connector/4-doris.md b/docs/connector/4-doris.md
index ec8f1bcf..cdedc7fc 100644
--- a/docs/connector/4-doris.md
+++ b/docs/connector/4-doris.md
@@ -17,7 +17,7 @@ StreamPark encapsulates DorisSink for writing data to Doris in real-time, based
 ### Write with StreamPark
 
 Use `StreamPark` to write data to `Doris`.  DorisSink only supports JSON format (single-layer) writing currently,
-such as: {"id":1,"name":"streamx"} The example of the running program is java, as follows:
+such as `{"id":1,"name":"streampark"}`. The example program, written in Java, is as follows:
 
 #### configuration list
 
diff --git a/docs/connector/5-es.md b/docs/connector/5-es.md
index f343512a..7f1874f9 100755
--- a/docs/connector/5-es.md
+++ b/docs/connector/5-es.md
@@ -19,7 +19,7 @@ operations for Elasticsearch6 and above.
 
 :::tip hint
 
-Because there are conflicts between different versions of Flink Connector Elasticsearch, Streamx temporarily only
+Because there are conflicts between different versions of Flink Connector Elasticsearch, StreamPark temporarily only
 supports write operations of Elasticsearch6 and above. If you want to use Elasticsearch5, you need to exclude the
 flink-connector-elasticsearch6 dependency and introduce the flink-connector-elasticsearch5 dependency, creating an
 org.apache.flink.streaming.connectors.elasticsearch5.ElasticsearchSink instance to write data.
diff --git a/docs/development/conf.md b/docs/development/conf.md
index b45b0a7b..3c8b1a30 100755
--- a/docs/development/conf.md
+++ b/docs/development/conf.md
@@ -100,7 +100,7 @@ A simpler method should be used, such as simplifying some environment initializa
 
 **Absolutely**
 
-`Streamx` proposes the concept of unified program configuration, which is generated by configuring a series of parameters from development to deployment in the `application.yml`according to a specific format a general configuration template, so that the initialization of the environment can be completed by transferring the configuration of the project to the program when the program is started. This is the concept of `configuration file`.
+`StreamPark` proposes the concept of unified program configuration: a series of parameters, from development to deployment, are defined in `application.yml` according to a specific format as a general configuration template, so that the initialization of the environment can be completed by passing the project's configuration to the program at startup. This is the concept of the `configuration file`.
 
 `StreamPark` provides a higher level of abstraction for the `Flink SQL`, developers only need to define SQL to `sql.yaml`, when the program is started, the `sql.yaml` is transferred to the main program, and the SQL will be automatically loaded and executed. This is the concept of `sql file`.
 
@@ -127,7 +127,7 @@ flink:
       jobmanager:
     property: #@see: https://ci.apache.org/projects/flink/flink-docs-release-1.12/deployment/config.html
       $internal.application.main: org.apache.streampark.flink.quickstart.QuickStartApp
-      pipeline.name:  Streamx QuickStart App
+      pipeline.name:  StreamPark QuickStart App
       yarn.application.queue:
       taskmanager.numberOfTaskSlots: 1
       parallelism.default: 2
@@ -399,7 +399,7 @@ sql: |
 
 :::danger attention
 
-In the above content, | after SQL is required. In addition, | will retain the format of the whole section. Streamx can directly define multiple SQLs at once. Each SQLs must be separated by semicolons, and each section of SQLs must follow the format and specification specified by Flink SQL.
+In the above content, the `|` after `sql:` is required; it preserves the formatting of the whole block. StreamPark can define multiple SQL statements at once; the statements must be separated by semicolons, and each statement must follow the format and specification required by Flink SQL.
 :::
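
A minimal `sql.yaml` matching the rule in this admonition might look like the sketch below; the table definitions and connectors are illustrative. The `|` preserves the formatting of the block, and the statements are separated by semicolons:

```yaml
sql: |
  CREATE TABLE source_table (
    id INT,
    name STRING
  ) WITH (
    'connector' = 'datagen'
  );
  CREATE TABLE sink_table (
    id INT,
    name STRING
  ) WITH (
    'connector' = 'print'
  );
  INSERT INTO sink_table SELECT id, name FROM source_table;
```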
 
 ## Summary
diff --git a/docs/development/model.md b/docs/development/model.md
index b6ffc687..4315ba96 100644
--- a/docs/development/model.md
+++ b/docs/development/model.md
@@ -22,10 +22,10 @@ Let's start from these aspects
 
 ## Architecture
 
-[]("/doc/image/streamx_archite.png")
+[]("/doc/image/streampark_archite.png")
 
 ## Programming paradigm
-`streamx-core` is positioned as a programming time framework, rapid development scaffolding, specifically created to simplify Flink development. Developers will use this module during the development phase. Let's take a look at what the programming paradigm of `DataStream` and `Flink Sql` with StreamPark looks like, and what the specifications and requirements are.
+`streampark-core` is positioned as a programming-time framework and rapid-development scaffold, created specifically to simplify Flink development. Developers use this module during the development phase. Let's take a look at what the programming paradigm of `DataStream` and `Flink Sql` with StreamPark looks like, and what the specifications and requirements are.
 
 
 ### DataStream
@@ -189,7 +189,7 @@ The above lines of scala and Java code are the essential skeleton code for devel
 **RunTime Context** - **StreamingContext** , **TableContext** , **StreamTableContext** are three very important objects in StreamPark, next we look at the definition and role of these three **Context**.
 
 <center>
-<img src="/doc/image/streamx_coreapi.png" width="60%"/>
+<img src="/doc/image/streampark_coreapi.png" width="60%"/>
 </center>
 
 ### StreamingContext
@@ -440,7 +440,7 @@ StreamTableContext context = new StreamTableContext(JavaConfig);
 
 You can use the `StreamExecutionEnvironment` `API` directly in the `StreamTableContext`, **methods prefixed with $** are the `StreamExecutionEnvironment` API.
 
-![](/doc/image/streamx_apis.jpeg)
+![](/doc/image/streampark_apis.jpeg)
 
 :::
 
@@ -485,7 +485,7 @@ The life cycle is as follows.
 * **start**         Stages of program initiation
 * **destroy**       Stages of destruction
 
-![Life Cycle](/doc/image/streamx_scala_life_cycle.png)
+![Life Cycle](/doc/image/streampark_scala_life_cycle.png)
 
 ### Life Cycle - init
 In the **init** phase, the framework automatically parses the incoming configuration file and initializes the `StreamExecutionEnvironment` according to the various parameters defined inside. This step is automatically executed by the framework and does not require developer involvement.
@@ -526,7 +526,7 @@ The **destroy** stage is an optional stage that requires developer participation
 
 ## Catalog Structure
 
-The recommended project directory structure is as follows, please refer to the directory structure and configuration in [Streamx-flink-quickstart](https://github.com/apache/incubator-streampark-quickstart)
+The recommended project directory structure is as follows, please refer to the directory structure and configuration in [StreamPark-flink-quickstart](https://github.com/apache/incubator-streampark-quickstart)
 
 ``` tree
 .
@@ -602,11 +602,11 @@ assembly.xml is the configuration file needed for the assembly packaging plugin,
 
 ## Packaged Deployment
 
-The recommended packaging mode in [streamx-flink-quickstart](https://github.com/streamxhub/streampark/streampark-flink/streamx-flink-quickstart) is recommended. It runs `maven package` directly to generate a standard StreamPark recommended project package, after unpacking the directory structure is as follows.
+The packaging mode in [streampark-flink-quickstart](https://github.com/apache/streampark/streampark-flink/streampark-flink-quickstart) is recommended: run `maven package` directly to generate a standard StreamPark-recommended project package; after unpacking, the directory structure is as follows.
 
 ``` text
 .
-Streamx-flink-quickstart-1.0.0
+StreamPark-flink-quickstart-1.0.0
 ├── bin
 │   ├── startup.sh                             //Launch Script
 │   ├── setclasspath.sh                        //Java environment variable-related scripts (used internally, not of concern to users)
@@ -616,7 +616,7 @@ Streamx-flink-quickstart-1.0.0
 │   ├── application.yaml                       //Project's configuration file
 │   ├── sql.yaml                               // flink sql file
 ├── lib
-│   └── Streamx-flink-quickstart-1.0.0.jar     //The project's jar package
+│   └── StreamPark-flink-quickstart-1.0.0.jar     //The project's jar package
 └── temp
 ```
 
diff --git a/docs/flink-k8s/1-deployment.md b/docs/flink-k8s/1-deployment.md
index 58f7bc5b..4100b04a 100644
--- a/docs/flink-k8s/1-deployment.md
+++ b/docs/flink-k8s/1-deployment.md
@@ -56,7 +56,7 @@ On Setting page of StreamPark, user can configure the connection information for
 ![docker register setting](/doc/image/docker_register_setting.png)
 
 
-Building a Namespace named `streamx`(other name should be set at Setting page of StreamPark) at remote Docker.The namespace is push/pull space of StreamPark Flink image and Docker Register User should own `pull`/`push`  permission of this namespace.
+Create a namespace named `streampark` (any other name should be set on the Setting page of StreamPark) in the remote Docker registry. This namespace is the push/pull space for StreamPark Flink images, and the Docker registry user must own `pull`/`push` permission on it.
 
 
 ```shell
@@ -64,10 +64,10 @@ Building a Namespace named `streamx`(other name should be set at Setting page of
 docker login --username=<your_username> <your_register_addr>
 # verify push permission
 docker pull busybox
-docker tag busybox <your_register_addr>/streamx/busybox
-docker push <your_register_addr>/streamx/busybox
+docker tag busybox <your_register_addr>/streampark/busybox
+docker push <your_register_addr>/streampark/busybox
 # verify pull permission
-docker pull <your_register_addr>/streamx/busybox
+docker pull <your_register_addr>/streampark/busybox
 ```
 <br></br>
 ## Job submit
@@ -116,12 +116,12 @@ The additional configuration of Flink-Native-Kubernetes Session Job will be deci
 
 The StreamPark parameters related to Flink-K8s in `application.yml` are as below. In most conditions there is no need to change them.
 
-| Configuration item                                                 | Description                                                                                                          | Default value |
-|--------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------|---------------|
-| streamx.docker.register.image-namespace                            | namespace of Remote docker service repository, flink-job image will be pushed here                                   | streamx       |
-| streamx.flink-k8s.tracking.polling-task-timeout-sec.job-status     | timeout in seconds of flink state tracking task                                                                      | 120           |
-| streamx.flink-k8s.tracking.polling-task-timeout-sec.cluster-metric | timeout in seconds of flink metrics tracking task                                                                    | 120           |
-| streamx.flink-k8s.tracking.polling-interval-sec.job-status         | interval in seconds of flink state tracking task.To maintain accuracy, please set below 5s, the best setting is 2-3s | 5             |
-| streamx.flink-k8s.tracking.polling-interval-sec.cluster-metric     | interval in seconds of flink metrics tracking task                                                                   | 10            |
-| streamx.flink-k8s.tracking.silent-state-keep-sec                   | fault tolerance time in seconds of  silent  metrics                                                                  | 60            |
+| Configuration item                                                    | Description                                                                                                          | Default value |
+|-----------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------|---------------|
+| streampark.docker.register.image-namespace                            | namespace of Remote docker service repository, flink-job image will be pushed here                                   | streampark    |
+| streampark.flink-k8s.tracking.polling-task-timeout-sec.job-status     | timeout in seconds of flink state tracking task                                                                      | 120           |
+| streampark.flink-k8s.tracking.polling-task-timeout-sec.cluster-metric | timeout in seconds of flink metrics tracking task                                                                    | 120           |
+| streampark.flink-k8s.tracking.polling-interval-sec.job-status         | interval in seconds of flink state tracking task. To maintain accuracy, please set it below 5s; the best setting is 2-3s | 5             |
+| streampark.flink-k8s.tracking.polling-interval-sec.cluster-metric     | interval in seconds of flink metrics tracking task                                                                   | 10            |
+| streampark.flink-k8s.tracking.silent-state-keep-sec                   | fault tolerance time in seconds of  silent  metrics                                                                  | 60            |
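
For orientation, the dotted keys in this table suggest a nesting in `application.yml` roughly like the sketch below; the exact layout is an assumption inferred from the key names, not confirmed by this commit:

```yaml
streampark:
  docker:
    register:
      image-namespace: streampark   # namespace the flink-job image is pushed to
  flink-k8s:
    tracking:
      polling-task-timeout-sec:
        job-status: 120             # timeout of the state tracking task
        cluster-metric: 120         # timeout of the metrics tracking task
      polling-interval-sec:
        job-status: 5               # keep below 5s for accuracy; 2-3s is best
        cluster-metric: 10
      silent-state-keep-sec: 60     # fault tolerance window for silent metrics
```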
 
diff --git a/docs/intro.md b/docs/intro.md
index 64f734c3..21a59b1f 100644
--- a/docs/intro.md
+++ b/docs/intro.md
@@ -34,7 +34,7 @@ On the other hand, It can be challenge for enterprises to use Flink & Spark if t
 
 The overall architecture of StreamPark is shown in the following figure. StreamPark consists of three parts, they are StreamPark-core, StreamPark-pump, and StreamPark-console.
 
-![StreamPark Archite](/doc/image/streamx_archite.png)
+![StreamPark Archite](/doc/image/streampark_archite.png)
 
 ### 1️⃣ StreamPark-core
 
diff --git a/docs/user-guide/1-deployment.md b/docs/user-guide/1-deployment.md
index a4e3e98b..f91dbf46 100755
--- a/docs/user-guide/1-deployment.md
+++ b/docs/user-guide/1-deployment.md
@@ -6,9 +6,9 @@ sidebar_position: 1
 
 import { ClientEnvs } from '../components/TableData.jsx';
 
-The overall component stack structure of StreamPark is as follows. It consists of two major parts: streamx-core and streampark-console. streampark-console is a very important module, positioned as a **integrated real-time data platform**, ** streaming data warehouse Platform**, **Low Code**, **Flink & Spark task hosting platform**, can better manage Flink tasks, integrate project compilation, publishing, parameter configuration, startup, savepoint, flame graph ( flame graph ), Flink SQL, [...]
+The overall component stack structure of StreamPark is as follows. It consists of two major parts: streampark-core and streampark-console. streampark-console is a very important module, positioned as an **integrated real-time data platform**, **streaming data warehouse platform**, **Low Code**, **Flink & Spark task hosting platform**; it can better manage Flink tasks, integrating project compilation, publishing, parameter configuration, startup, savepoint, flame graph, Flink S [...]
 
-![Streamx Archite](/doc/image/streamx_archite.png)
+![StreamPark Archite](/doc/image/streampark_archite.png)
 
 streampark-console provides an out-of-the-box installation package. Before installation, there are some requirements for the environment. The specific requirements are as follows:
 
@@ -131,7 +131,7 @@ Scala 2.12 is compiled, and the relevant scala version specification information
 
 ### Deploy backend
 
-After the installation is complete, you will see the final project file, located in `streamx/streampark-console/streampark-console-service/target/streampark-console-service-${version}-bin.tar.gz`, the installation directory after unpacking as follows
+After the installation is complete, you will see the final project file, located at `streampark/streampark-console/streampark-console-service/target/streampark-console-service-${version}-bin.tar.gz`; the installation directory after unpacking is as follows
 
 ```textmate
 .
@@ -151,8 +151,8 @@ streampark-console-service-1.2.1
 ├── lib
 │    └── *.jar                                //Project jar package
 ├── plugins
-│    ├── streamx-jvm-profiler-1.0.0.jar       //jvm-profiler, flame graph related functions (internal use, users do not need to pay attention)
-│    └── streamx-flink-sqlclient-1.0.0.jar    //Flink SQl submit related functions (for internal use, users do not need to pay attention)
+│    ├── streampark-jvm-profiler-1.0.0.jar       //jvm-profiler, flame graph related functions (internal use, users do not need to pay attention)
+│    └── streampark-flink-sqlclient-1.0.0.jar    //Flink SQl submit related functions (for internal use, users do not need to pay attention)
 ├── script
 │     ├── final                               // Complete ddl build table sql
 │     ├── upgrade                             // The sql of the upgrade part of each version (only the sql changes from the previous version to this version are recorded)
@@ -174,8 +174,8 @@ In the installation process of versions before 1.2.1, there is no need to manual
 ##### Modify the configuration
 The installation and unpacking have been completed, and the next step is to prepare the data-related work
 
-###### Create a new database `streamx`
-Make sure to create a new database `streamx` in mysql that the deployment machine can connect to
+###### Create a new database `streampark`
+Make sure to create a new database `streampark` in mysql that the deployment machine can connect to
 
 ###### Modify connection information
 Go to `conf`, modify `conf/application.yml`, find the datasource item, find the mysql configuration, and modify it to the corresponding information, as follows
@@ -200,20 +200,20 @@ datasource:
         username: $user
         password: $password
         driver-class-name: com.mysql.cj.jdbc.Driver
-        url: jdbc: mysql://$host:$port/streamx?useUnicode=true&characterEncoding=UTF-8&useJDBCCompliantTimezoneShift=true&useLegacyDatetimeCode=false&serverTimezone=GMT%2B8
+        url: jdbc:mysql://$host:$port/streampark?useUnicode=true&characterEncoding=UTF-8&useJDBCCompliantTimezoneShift=true&useLegacyDatetimeCode=false&serverTimezone=GMT%2B8
 ```
 
 ###### Modify workspace
-Go to `conf`, modify `conf/application.yml`, find the item streamx, find the workspace configuration, and change it to a directory that the user has permission to.
+Go to `conf`, modify `conf/application.yml`, find the item streampark, find the workspace configuration, and change it to a directory that the user has permission to.
 
 ```yaml
-streamx:
+streampark:
   # HADOOP_USER_NAME If it is on yarn mode ( yarn-prejob | yarn-application | yarn-session), you need to configure hadoop-user-name
   hadoop-user-name: hdfs
   # Local workspace, used to store project source code, build directory, etc.
   workspace:
-    local: /opt/streamx_workspace # A local workspace directory (very important), users can change the directory by themselves, it is recommended to put it in other places separately to store the project source code, the built directory, etc.
-    remote: hdfs:///streamx   # support hdfs:///streamx/ 、 /streamx 、hdfs://host:ip/streamx/
+    local: /opt/streampark_workspace # A local workspace directory (very important), users can change the directory by themselves, it is recommended to put it in other places separately to store the project source code, the built directory, etc.
+    remote: hdfs:///streampark   # supports hdfs:///streampark/, /streampark, hdfs://host:ip/streampark/
 ```
 
 ##### Start the backend
@@ -224,7 +224,7 @@ Enter `bin` and directly execute startup.sh to start the project. The default po
 cd streampark-console-service-1.0.0/bin
 bash startup.sh
 ```
-Relevant logs will be output to **streampark-console-service-1.0.0/logs/streamx.out**
+Relevant logs will be output to **streampark-console-service-1.0.0/logs/streampark.out**
 
 :::info hint
 
@@ -245,13 +245,13 @@ npm install -g pm2
 ##### Release
 
 ###### 1. Copy the dist to the deployment server
-Copy the entire directory of streampark-console-webapp/dist to the deployment directory of the server, such as: `/home/www/streamx`, the copied directory level is /home/www/streamx/dist
+Copy the entire directory of streampark-console-webapp/dist to the deployment directory of the server, such as: `/home/www/streampark`, the copied directory level is /home/www/streampark/dist
 
-###### 2. Copy the streamx.js file to the project deployment directory
-Copy streamx/streampark-console/streampark-console-webapp/streamx.js to `/home/www/streamx`
+###### 2. Copy the streampark.js file to the project deployment directory
+Copy streampark/streampark-console/streampark-console-webapp/streampark.js to `/home/www/streampark`
 
 ###### 3. Modify the service port
-Users can specify the port address of the front-end service by themselves, modify the /home/www/streamx/streamx.js file, and find `serverPort` to modify, the default is as follows:
+Users can specify the front-end service port themselves: modify the /home/www/streampark/streampark.js file and change `serverPort`; the default is as follows:
 
 ```
   const serverPort = 1000
@@ -260,7 +260,7 @@ Users can specify the port address of the front-end service by themselves, modif
 4. Start the service
 
 ```shell
-   pm2 start streamx.js
+   pm2 start streampark.js
 ```
 
 For more information about pm2, please refer to [Official Website](https://pm2.keymetrics.io/)
@@ -269,7 +269,7 @@ For more information about pm2, please refer to [Official Website](https://pm2.k
 
 After the above steps, the deployment is complete and you can log in to the system directly
 
-![StreamPark Login](/doc/image/streamx_login.jpeg)
+![StreamPark Login](/doc/image/streampark_login.jpeg)
 
 :::tip hint
 Default password: <strong> admin / streampark </strong>
@@ -279,7 +279,7 @@ Default password: <strong> admin / streampark </strong>
 
 After entering the system, the first thing to do is to modify the system configuration. Under the menu/StreamPark/Setting, the operation interface is as follows:
 
-![StreamPark Settings](/doc/image/streamx_settings.png)
+![StreamPark Settings](/doc/image/streampark_settings.png)
 
 The main configuration items are divided into the following categories
 
diff --git a/docs/user-guide/2-quickstart.md b/docs/user-guide/2-quickstart.md
index 969e10f6..023a792a 100644
--- a/docs/user-guide/2-quickstart.md
+++ b/docs/user-guide/2-quickstart.md
@@ -6,9 +6,9 @@ sidebar_position: 2
 
 ## How to use
 
-The installation of the one-stop platform `streampark-console` has been introduced in detail in the previous chapter. In this chapter, let's see how to quickly deploy and run a job with `streampark-console`. The official structure and specification) and projects developed with `streamx` are well supported. Let's use `streamx-quickstart` to quickly start the journey of `streampark-console`
+The installation of the one-stop platform `streampark-console` has been introduced in detail in the previous chapter. In this chapter, let's see how to quickly deploy and run a job with `streampark-console`. Standard Flink programs (following the official structure and specification) and projects developed with `streampark` are both well supported. Let's use `streampark-quickstart` to quickly start the journey of `streampark-console`
 
-`streamx-quickstart` is a sample program for developing Flink by StreamPark. For details, please refer to:
+`streampark-quickstart` is a sample program for developing Flink applications with StreamPark. For details, please refer to:
 
 - Github: [https://github.com/apache/incubator-streampark-quickstart.git](https://github.com/apache/streampark-quickstart.git)
 - Gitee: [https://gitee.com/mirrors_apache/incubator-streampark-quickstart.git](https://gitee.com/mirrors_apache/incubator-streampark-quickstart.git)
@@ -113,7 +113,7 @@ GROUP BY DATE_FORMAT(ts, 'yyyy-MM-dd HH:00');
 The task startup flow chart is as follows
 
 <center>
-<img src="/doc/image/streamx_start.png"/><br></br>
+<img src="/doc/image/streampark_start.png"/><br></br>
 <strong>streampark-console submit task process</strong>
 </center>
 
diff --git a/docs/user-guide/3-development.md b/docs/user-guide/3-development.md
index 0f1cf335..02058fc8 100755
--- a/docs/user-guide/3-development.md
+++ b/docs/user-guide/3-development.md
@@ -36,7 +36,7 @@ To ensure that the local machine can connect to the cluster, you need to set the
 
 If you are developing locally, you can use minikube or kubesphere to quickly install a Kubernetes environment; of course, it is more recommended to use existing k8s cluster facilities. In addition, Tencent Cloud TKE and Alibaba Cloud ACK with pay-as-you-go billing are also good choices for quick development.
 
-For additional configuration requirements, please refer to: [**streamx flink-k8s integration support**](../flink-k8s/1-deployment.md)
+For additional configuration requirements, please refer to: [**streampark flink-k8s integration support**](../flink-k8s/1-deployment.md)
 
 ## Install Flink (optional, Standalone Runtime)
 
@@ -90,7 +90,7 @@ mvn clean install -Dscala.version=2.12.8 -Dscala.binary.version=2.12 -DskipTests
 
 #### Backend decompression
 
-After the compilation, the installation package location is `streamx/streampark-console/streampark-console-service/target/streampark-console-service-${version}-bin.tar.gz`, The directory structure after decompressing is as follows:
+After compilation, the installation package is located at `streampark/streampark-console/streampark-console-service/target/streampark-console-service-${version}-bin.tar.gz`; the directory structure after decompressing is as follows:
 
 ```textmate
 .
@@ -110,18 +110,18 @@ streampark-console-service-${version}
 ├── lib
 │    └── *.jar
 ├── plugins
-│    ├── streamx-jvm-profiler-1.0.0.jar
-│    └── streamx-flink-sqlclient-1.0.0.jar
+│    ├── streampark-jvm-profiler-1.0.0.jar
+│    └── streampark-flink-sqlclient-1.0.0.jar
 ├── logs
 ├── temp
 ```
-Copy the unpacked directory to other directories to prevent it from being cleaned up the next time `mvn clean` is executed. For example, if it is placed in `/opt/streamx/`, the full path of the file is `/opt/streampark/streampark-console-service-${version}`, This path will be used later and there is no space in the path.
+Copy the unpacked directory elsewhere to prevent it from being cleaned up the next time `mvn clean` is executed. For example, if it is placed in `/opt/streampark/`, the full path of the file is `/opt/streampark/streampark-console-service-${version}`; this path will be used later and must not contain spaces.
 
 #### Backend configuration
 
-Git clone streamx source code, then open it with IntelliJ idea, and modify JDBC connection information of `datasource` in the `resources/application.yml`, Please refer to [modify configuration](http://www.streamxhub.com/zh/doc/console/deploy/#%E4%BF%AE%E6%94%B9%E9%85%8D%E7%BD%AE) in the installation and deployment chapter.
+Git clone the streampark source code, open it with IntelliJ IDEA, and modify the JDBC connection information of `datasource` in `resources/application.yml`; please refer to [modify configuration](http://www.streamxhub.com/zh/doc/console/deploy/#%E4%BF%AE%E6%94%B9%E9%85%8D%E7%BD%AE) in the installation and deployment chapter.
 
-<img src="/doc/image/streamx_conf.jpg" />
+<img src="/doc/image/streampark_conf.jpg" />
 
 If the target cluster you want to connect to has Kerberos authentication enabled, you need to find the relevant information under `resources/kerberos.xml` and configure it. Kerberos is off by default. To enable it, set `enable` to true, as follows:
 
@@ -141,7 +141,7 @@ java:
 
 #### Backend startup
 
-`Streamx console` is a web application developed based on springboot, `org.apache.streampark.console Streamxconsole` is the main class. Before startup, you need to set `VM options` and `environment variables`
+`StreamPark console` is a web application developed based on Spring Boot; `org.apache.streampark.console.StreamParkConsole` is the main class. Before startup, you need to set `VM options` and `environment variables`
 
 ##### VM options
 
@@ -161,7 +161,7 @@ If the JDK version used by the development machine is above JDK1.8, the followin
 
 If you use a non locally installed Hadoop cluster (test Hadoop), you need to configure `HADOOP_USER_NAME` and `HADOOP_CONF_DIR` in `Environment variables`. The value of `HADOOP_USER_NAME` is the Hadoop user name with read and write permission to HDFS. `HADOOP_CONF_DIR` is the storage location of the configuration file on the development machine. If Hadoop is installed locally, the variable does not need to be configured.
 
-<img src="/doc/image/streamx_ideaopt.jpg" />
+<img src="/doc/image/streampark_ideaopt.jpg" />
 
 If everything is ready, you can start the `StreamParkConsole` main class. If it is started successfully, you will see the printing information of successful startup.
 
@@ -171,9 +171,9 @@ The frontend is developed based on nodejs and Vue, so the node environment needs
 
 #### Frontend configuration
 
-Since it is a frontend and backend separated project, the frontend needs to know the access address of the backend (streamx console) in order to work together. Therefore, the Vue needs to be changed_ APP_ BASE_ The value of API variable is located in:
+Since it is a project with separated frontend and backend, the frontend needs to know the access address of the backend (streampark console) in order to work together. Therefore, the value of the `VUE_APP_BASE_API` variable needs to be changed; it is located in:
 
-![web config](/doc/image/streamx_websetting.png)
+![web config](/doc/image/streampark_websetting.png)
 
 Default configuration:
 
diff --git a/docs/user-guide/4-dockerDeployment.md b/docs/user-guide/4-dockerDeployment.md
index 617217d6..f7c595c9 100644
--- a/docs/user-guide/4-dockerDeployment.md
+++ b/docs/user-guide/4-dockerDeployment.md
@@ -93,7 +93,6 @@ vim docker-compose
       dockerfile: deploy/docker/console/Dockerfile
 #    image: ${HUB}:${TAG}
 ```
-![img.png](img.png)
 
 ```
 docker-compose up -d
diff --git a/docs/user-guide/5-LDAP.md b/docs/user-guide/5-LDAP.md
index 9adfda0e..f22979dd 100644
--- a/docs/user-guide/5-LDAP.md
+++ b/docs/user-guide/5-LDAP.md
@@ -22,11 +22,11 @@ LDAP unified authentication service is used to solve the above problems.
 
 ### 1.Official website to download the binary installation package
 
-https://github.com/streamxhub/streampark/releases
+https://github.com/apache/incubator-streampark/releases
 
 ### 2.Add LDAP configuration
 ```
-cd streamxpark
+cd streampark
 cd conf
 vim application
 ```
@@ -35,11 +35,11 @@ vim application
 ldap:
   ## This value is the domain name required for company LDAP user login
   urls: ldap://99.99.99.99:389
-  username: cn=Manager,dc=streamx,dc=com
-  password: streamx
+  username: cn=Manager,dc=streampark,dc=com
+  password: streampark
   ## DN distinguished name
   embedded:
-    base-dn: dc=streamx,dc=com
+    base-dn: dc=streampark,dc=com
   user:
     ## Key values for search filtering
     identity:
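
The hunk ends before showing the value of `identity`. Assembled from the changed and context lines above, the resulting `ldap` section might look like this sketch; the `identity` value is hypothetical and must be set to your own LDAP user attribute:

```yaml
ldap:
  ## Domain name (or address) of the company LDAP server used for login
  urls: ldap://99.99.99.99:389
  username: cn=Manager,dc=streampark,dc=com
  password: streampark
  ## DN distinguished name
  embedded:
    base-dn: dc=streampark,dc=com
  user:
    ## Key used for search filtering
    identity: uid   # hypothetical value; not shown in the hunk above
```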
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/connector/1-kafka.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/connector/1-kafka.md
index f9d20927..27a79dc9 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/connector/1-kafka.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/connector/1-kafka.md
@@ -17,7 +17,7 @@ import TabItem from '@theme/TabItem';
 ```xml
     <!--必须要导入的依赖-->
     <dependency>
-        <groupId>com.streamxhub.streamx</groupId>
+        <groupId>org.apache.streampark</groupId>
         <artifactId>streampark-flink-core</artifactId>
         <version>${project.version}</version>
     </dependency>
@@ -175,7 +175,7 @@ def getDataStream[T: TypeInformation](topic: java.io.Serializable = null,
 
 ### 消费多个Kafka实例
 
-在框架开发之初就考虑到了多个不同实例的kafka的配置情况.如何来统一配置,并且规范格式呢?在streamx中是这么解决的,假如我们要同时消费两个不同实例的kafka,配置文件定义如下,
+在框架开发之初就考虑到了多个不同实例的kafka的配置情况.如何来统一配置,并且规范格式呢?在streampark中是这么解决的,假如我们要同时消费两个不同实例的kafka,配置文件定义如下,
 可以看到在 `kafka.source` 下直接放kafka的实例名称(名字可以任意),在这里我们统一称为 **alias** , **alias** 必须是唯一的,来区别不同的实例,然后别的参数还是按照之前的规范,
 统统放到当前这个实例的 `namespace` 下即可.如果只有一个kafka实例,则可以不用配置 `alias`
 在写代码消费时注意指定对应的 **alias** 即可,配置和代码如下
@@ -529,7 +529,7 @@ class JavaUser implements Serializable {
 
 返回的对象被包装在`KafkaRecord`中,`kafkaRecord`中有当前的`offset`,`partition`,`timestamp`等诸多有用的信息供开发者使用,其中`value`即返回的目标对象,如下图:
 
-![](/doc/image/streamx_kafkaapi.jpeg)
+![](/doc/image/streampark_kafkaapi.jpeg)
 
 ### 指定strategy
 
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/connector/4-doris.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/connector/4-doris.md
index a600cb08..ee968a7c 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/connector/4-doris.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/connector/4-doris.md
@@ -13,7 +13,7 @@ StreamPark 基于Doris的[stream load](https://doris.apache.org/administrator-gu
 
 ### StreamPark 方式写入
 
-用`StreamPark`写入 `doris`的数据, 目前 DorisSink 只支持 JSON 格式(单层)写入,如:{"id":1,"name":"streamx"}
+用`StreamPark`写入 `doris`的数据, 目前 DorisSink 只支持 JSON 格式(单层)写入,如:{"id":1,"name":"streampark"}
 运行程序样例为java,如下:
 
 #### 配置信息
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/connector/5-es.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/connector/5-es.md
index b5c0a35d..358f12c8 100755
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/connector/5-es.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/connector/5-es.md
@@ -12,10 +12,10 @@ import TabItem from '@theme/TabItem';
 [Flink 官方](https://nightlies.apache.org/flink/flink-docs-release-1.14/zh/docs/connectors/)提供了[Elasticsearch](https://nightlies.apache.org/flink/flink-docs-release-1.14/zh/docs/connectors/datastream/elasticsearch/)的连接器,用于向 elasticsearch 中写入数据,可提供 **至少一次** 的处理语义
 
 ElasticsearchSink 使用 TransportClient(6.x 之前)或者 RestHighLevelClient(6.x 开始)和 Elasticsearch 集群进行通信,
-`Streamx`对 flink-connector-elasticsearch6 进一步封装,屏蔽开发细节,简化Elasticsearch6及以上的写入操作。
+`StreamPark`对 flink-connector-elasticsearch6 进一步封装,屏蔽开发细节,简化Elasticsearch6及以上的写入操作。
 
 :::tip 提示
-因为Flink Connector Elasticsearch 不同版本之间存在冲突`Streamx`暂时仅支持Elasticsearch6及以上的写入操作,如需写入Elasticsearch5需要使用者排除
+因为Flink Connector Elasticsearch 不同版本之间存在冲突`StreamPark`暂时仅支持Elasticsearch6及以上的写入操作,如需写入Elasticsearch5需要使用者排除
 flink-connector-elasticsearch6 依赖,引入 flink-connector-elasticsearch5依赖 创建
 org.apache.flink.streaming.connectors.elasticsearch5.ElasticsearchSink 实例写入数据。
 :::
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/development/conf.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/development/conf.md
index 18dae56f..25ae88de 100755
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/development/conf.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/development/conf.md
@@ -129,8 +129,8 @@ flink:
       shutdownOnAttachedExit:
       jobmanager:
     property: #@see: https://ci.apache.org/projects/flink/flink-docs-release-1.12/deployment/config.html
-      $internal.application.main: com.streamxhub.streamx.flink.quickstart.QuickStartApp
-      yarn.application.name: Streamx QuickStart App
+      $internal.application.main: org.apache.streampark.flink.quickstart.QuickStartApp
+      yarn.application.name: StreamPark QuickStart App
       yarn.application.queue:
       taskmanager.numberOfTaskSlots: 1
       parallelism.default: 2
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/development/model.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/development/model.md
index 5dd5e188..7aa9902d 100755
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/development/model.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/development/model.md
@@ -21,11 +21,11 @@ import TabItem from '@theme/TabItem';
 
 ## 架构
 
-[]("/doc/image/streamx_archite.png")
+[]("/doc/image/streampark_archite.png")
 
 ## 编程模型
 
-`streamx-core` 定位是编程时框架,快速开发脚手架,专门为简化 Flink 开发而生,开发者在开发阶段会使用到该模块,下面我们来看看 `DataStream` 和 `Flink Sql` 用 StreamPark 来开发编程模型是什么样的,有什么规范和要求
+`streampark-core` 定位是编程时框架,快速开发脚手架,专门为简化 Flink 开发而生,开发者在开发阶段会使用到该模块,下面我们来看看 `DataStream` 和 `Flink Sql` 用 StreamPark 来开发编程模型是什么样的,有什么规范和要求
 
 ### DataStream
 
@@ -191,7 +191,7 @@ public class JavaStreamTableApp {
 **RunTime Context** — **StreamingContext** , **TableContext** , **StreamTableContext** 是 StreamPark 中几个非常重要三个对象,接下来我们具体看看这三个 **Context** 的定义和作用
 
 <center>
-<img src="/doc/image/streamx_coreapi.png" width="60%"/>
+<img src="/doc/image/streampark_coreapi.png" width="60%"/>
 </center>
 
 ### StreamingContext
@@ -442,7 +442,7 @@ StreamTableContext context = new StreamTableContext(JavaConfig);
 
 在 `StreamTableContext` 中可以直接使用 `StreamExecutionEnvironment` 的 `API`, **以$打头的方法** 都是 `StreamExecutionEnvironment` 的 API
 
-![](/doc/image/streamx_apis.jpeg)
+![](/doc/image/streampark_apis.jpeg)
 
 :::
 
@@ -487,7 +487,7 @@ StreamTableContext context = new StreamTableContext(JavaConfig);
 * **start**         程序启动阶段
 * **destroy**       销毁阶段
 
-![Life Cycle](/doc/image/streamx_scala_life_cycle.png)
+![Life Cycle](/doc/image/streampark_scala_life_cycle.png)
 
 ### 生命周期之 — init
 **init** 阶段,框架会自动解析传入的配置文件,按照里面的定义的各种参数初始化`StreamExecutionEnvironment`,这一步是框架自动执行,不需要开发者参与
@@ -528,7 +528,7 @@ StreamTableContext context = new StreamTableContext(JavaConfig);
 :::
 
 ## 目录结构
-推荐的项目目录结构如下,具体可以参考[Streamx-flink-quickstart](https://github.com/streamxhub/streamx-quickstart) 里的目录结构和配置
+推荐的项目目录结构如下,具体可以参考[Streampark-flink-quickstart](https://github.com/apache/streampark-quickstart) 里的目录结构和配置
 
 ``` tree
 .
@@ -604,11 +604,11 @@ assembly.xml 是assembly打包插件需要用到的配置文件,定义如下:
 
 ## 打包部署
 
-推荐 [streamx-flink-quickstart](https://github.com/streamxhub/streampark/streampark-flink/streamx-flink-quickstart) 里的打包模式,直接运行`maven package`即可生成一个标准的StreamPark推荐的项目包,解包后目录结构如下
+推荐 [streampark-flink-quickstart](https://github.com/apache/streampark/streampark-flink/streampark-flink-quickstart) 里的打包模式,直接运行`maven package`即可生成一个标准的StreamPark推荐的项目包,解包后目录结构如下
 
 ``` text
 .
-Streamx-flink-quickstart-1.0.0
+Streampark-flink-quickstart-1.0.0
 ├── bin
 │   ├── startup.sh                             //启动脚本
 │   ├── setclasspath.sh                        //Java环境变量相关的脚本(内部使用的,用户无需关注)
@@ -618,7 +618,7 @@ Streamx-flink-quickstart-1.0.0
 │   ├── application.yaml                       //项目的配置文件
 │   ├── sql.yaml                               // flink sql文件
 ├── lib
-│   └── Streamx-flink-quickstart-1.0.0.jar     //项目的jar包
+│   └── Streampark-flink-quickstart-1.0.0.jar     //项目的jar包
 └── temp
 ```
 
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/flink-k8s/1-deployment.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/flink-k8s/1-deployment.md
index 61d08b42..8bb8a7ac 100755
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/flink-k8s/1-deployment.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/flink-k8s/1-deployment.md
@@ -9,7 +9,7 @@ StreamPark Flink Kubernetes 基于 [Flink Native Kubernetes](https://ci.apache.o
 * Native-Kubernetes Application
 * Native-Kubernetes Session
 
-单个 StreamPark 实例当前只支持单个 Kubernetes 集群,如果您有多 Kubernetes 支持的诉求,欢迎提交相关的 [Fearure Request Issue](https://github.com/streamxhub/streamx/issues) : )
+单个 StreamPark 实例当前只支持单个 Kubernetes 集群,如果您有多 Kubernetes 支持的诉求,欢迎提交相关的 [Feature Request Issue](https://github.com/apache/incubator-streampark/issues) : )
 
 <br></br>
 
@@ -54,7 +54,7 @@ kubectl create clusterrolebinding flink-role-binding-default --clusterrole=edit
 
 ![docker register setting](/doc/image/docker_register_setting.png)
 
-在远程 Docker 容器服务创建一个名为 `streamx` 的 Namespace(该Namespace可自定义命名,命名不为streamx请在setting页面修改确认) ,为 StreamPark 自动构建的 Flink image 推送空间,请确保使用的 Docker Register User 具有该  Namespace 的 `pull`/`push` 权限。
+在远程 Docker 容器服务创建一个名为 `streampark` 的 Namespace(该Namespace可自定义命名,命名不为 streampark 请在setting页面修改确认) ,为 StreamPark 自动构建的 Flink image 推送空间,请确保使用的 Docker Register User 具有该  Namespace 的 `pull`/`push` 权限。
 
 可以在 StreamPark 所在节点通过 docker command 简单测试权限:
 
@@ -63,10 +63,10 @@ kubectl create clusterrolebinding flink-role-binding-default --clusterrole=edit
 docker login --username=<your_username> <your_register_addr>
 # verify push permission
 docker pull busybox
-docker tag busybox <your_register_addr>/streamx/busybox
-docker push <your_register_addr>/streamx/busybox
+docker tag busybox <your_register_addr>/streampark/busybox
+docker push <your_register_addr>/streampark/busybox
 # verify pull permission
-docker pull <your_register_addr>/streamx/busybox
+docker pull <your_register_addr>/streampark/busybox
 ```
 
 <br></br>
@@ -114,12 +114,12 @@ Flink-Native-Kubernetes Session 任务 K8s 额外的配置(pod-template 等)
 
 StreamPark 在 `applicaton.yml`  Flink-K8s 相关参数如下,默认情况下不需要额外调整默认值。
 
-| 配置项                                                       | 描述                                                         | 默认值  |
-| ------------------------------------------------------------ | ------------------------------------------------------------ | ------- |
-| streamx.docker.register.image-namespace                      | 远程 docker 容器服务仓库命名空间,构建的 flink-job 镜像会推送到该命名空间。 | steramx |
-| streamx.flink-k8s.tracking.polling-task-timeout-sec.job-status | 每组 flink 状态追踪任务的运行超时秒数                        | 120     |
-| streamx.flink-k8s.tracking.polling-task-timeout-sec.cluster-metric | 每组 flink 指标追踪任务的运行超时秒数                        | 120     |
-| streamx.flink-k8s.tracking.polling-interval-sec.job-status   | flink 状态追踪任务运行间隔秒数,为了维持准确性,请设置在 5s 以下,最佳设置在 2-3s | 5       |
-| streamx.flink-k8s.tracking.polling-interval-sec.cluster-metric | flink 指标追踪任务运行间隔秒数                               | 10      |
-| streamx.flink-k8s.tracking.silent-state-keep-sec             | silent 追踪容错时间秒数                                      | 60      |
+| 配置项                                                                    | 描述                                                        | 默认值  |
+|:-----------------------------------------------------------------------|-----------------------------------------------------------| ------- |
+| streampark.docker.register.image-namespace                             | 远程 docker 容器服务仓库命名空间,构建的 flink-job 镜像会推送到该命名空间。           | streampark |
+| streampark.flink-k8s.tracking.polling-task-timeout-sec.job-status      | 每组 flink 状态追踪任务的运行超时秒数                                    | 120     |
+| streampark.flink-k8s.tracking.polling-task-timeout-sec.cluster-metric  | 每组 flink 指标追踪任务的运行超时秒数                                    | 120     |
+| streampark.flink-k8s.tracking.polling-interval-sec.job-status          | flink 状态追踪任务运行间隔秒数,为了维持准确性,请设置在 5s 以下,最佳设置在 2-3s          | 5       |
+| streampark.flink-k8s.tracking.polling-interval-sec.cluster-metric      | flink 指标追踪任务运行间隔秒数                                        | 10      |
+| streampark.flink-k8s.tracking.silent-state-keep-sec                    | silent 追踪容错时间秒数                                           | 60      |
 
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/intro.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/intro.md
index 97fe141c..ef2de91f 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/intro.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/intro.md
@@ -41,17 +41,17 @@ make stream processing easier!!!
 
 ## 🏳‍🌈 组成部分
 
-`StreamPark`有三部分组成,分别是`streamx-core`,`streamx-pump` 和 `streampark-console`
+`StreamPark`有三部分组成,分别是`streampark-core`,`streampark-pump` 和 `streampark-console`
 
-![Streamx Archite](/doc/image/streamx_archite.png)
+![StreamPark Archite](/doc/image/streampark_archite.png)
 
-### 1️⃣ streamx-core
+### 1️⃣ streampark-core
 
-`streamx-core` 定位是一个开发时框架,关注编码开发,规范了配置文件,按照约定优于配置的方式进行开发,提供了一个开发时 `RunTime Content`和一系列开箱即用的`Connector`,扩展了`DataStream`相关的方法,融合了`DataStream`和`Flink sql` api,简化繁琐的操作,聚焦业务本身,提高开发效率和开发体验
+`streampark-core` 定位是一个开发时框架,关注编码开发,规范了配置文件,按照约定优于配置的方式进行开发,提供了一个开发时 `RunTime Content`和一系列开箱即用的`Connector`,扩展了`DataStream`相关的方法,融合了`DataStream`和`Flink sql` api,简化繁琐的操作,聚焦业务本身,提高开发效率和开发体验
 
-### 2️⃣ streamx-pump
+### 2️⃣ streampark-pump
 
-`pump` 是抽水机,水泵的意思,`streamx-pump`的定位是一个数据抽取的组件,类似于`flinkx`,基于`streamx-core`中提供的各种`connector`开发,目的是打造一个方便快捷,开箱即用的大数据实时数据抽取和迁移组件,并且集成到`streampark-console`中,解决实时数据源获取问题,目前在规划中
+`pump` 是抽水机,水泵的意思,`streampark-pump`的定位是一个数据抽取的组件,类似于`flinkx`,基于`streampark-core`中提供的各种`connector`开发,目的是打造一个方便快捷,开箱即用的大数据实时数据抽取和迁移组件,并且集成到`streampark-console`中,解决实时数据源获取问题,目前在规划中
 
 ### 3️⃣ streampark-console
 
@@ -88,6 +88,6 @@ make stream processing easier!!!
 
 ### FlinkX
 
-[FlinkX](http://github.com/DTStack/flinkx) 是基于flink的分布式数据同步工具,实现了多种异构数据源之间高效的数据迁移,定位比较明确,专门用来做数据抽取和迁移,可以作为一个服务组件来使用,`StreamPark`关注开发阶段和任务后期的管理,定位有所不同,`streamx-pump`模块也在规划中,
+[FlinkX](http://github.com/DTStack/flinkx) 是基于flink的分布式数据同步工具,实现了多种异构数据源之间高效的数据迁移,定位比较明确,专门用来做数据抽取和迁移,可以作为一个服务组件来使用,`StreamPark`关注开发阶段和任务后期的管理,定位有所不同,`streampark-pump`模块也在规划中,
 致力于解决数据源抽取和迁移,最终会集成到`streampark-console`中
 
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/user-guide/1-deployment.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/user-guide/1-deployment.md
index b327aee0..7e49ee48 100755
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/user-guide/1-deployment.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/user-guide/1-deployment.md
@@ -6,9 +6,9 @@ sidebar_position: 1
 
 import { ClientEnvs } from '../components/TableData.jsx';
 
-StreamPark 总体组件栈架构如下, 由 streamx-core 和 streampark-console 两个大的部分组成 , streampark-console 是一个非常重要的模块, 定位是一个**综合实时数据平台**,**流式数仓平台**, **低代码 ( Low Code )**, **Flink & Spark 任务托管平台**,可以较好的管理 Flink 任务,集成了项目编译、发布、参数配置、启动、savepoint,火焰图 ( flame graph ),Flink SQL,监控等诸多功能于一体,大大简化了 Flink 任务的日常操作和维护,融合了诸多最佳实践。其最终目标是打造成一个实时数仓,流批一体的一站式大数据解决方案
+StreamPark 总体组件栈架构如下, 由 streampark-core 和 streampark-console 两个大的部分组成 , streampark-console 是一个非常重要的模块, 定位是一个**综合实时数据平台**,**流式数仓平台**, **低代码 ( Low Code )**, **Flink & Spark 任务托管平台**,可以较好的管理 Flink 任务,集成了项目编译、发布、参数配置、启动、savepoint,火焰图 ( flame graph ),Flink SQL,监控等诸多功能于一体,大大简化了 Flink 任务的日常操作和维护,融合了诸多最佳实践。其最终目标是打造成一个实时数仓,流批一体的一站式大数据解决方案
 
-![Streamx Archite](/doc/image/streamx_archite.png)
+![StreamPark Archite](/doc/image/streampark_archite.png)
 
 streampark-console 提供了开箱即用的安装包,安装之前对环境有些要求,具体要求如下:
 
@@ -44,7 +44,7 @@ export HADOOP_YARN_HOME=$HADOOP_HOME/../hadoop-yarn
 
 ## 编译 & 部署
 
-你可以直接下载编译好的[**发行包**](https://github.com/streamxhub/streamx/releases)(推荐),也可以选择手动编译安装,手动编译安装步骤如下:
+你可以直接下载编译好的[**发行包**](https://github.com/apache/incubator-streampark/releases)(推荐),也可以选择手动编译安装,手动编译安装步骤如下:
 
 
 ### 环境要求
@@ -101,7 +101,7 @@ mvn clean install -Dscala.version=2.11.12 -Dscala.binary.version=2.11 -DskipTest
 在前后端独立编译部署的项目里,前端项目需要知道后端服务的base api,才能前后端协同工作. 因此在编译之前我们需要指定下后端服务的base api,修改 streampark-console-webapp/.env.production 里的`VUE_APP_BASE_API`即可
 
 ```bash
-vi streamx/streampark-console/streampark-console-webapp/.env.production
+vi streampark/streampark-console/streampark-console-webapp/.env.production
 ```
 
 - 2.2 编译
@@ -132,7 +132,7 @@ Scala 2.12 编译, 相关 scala 版本指定信息如下:
 
 ### 部署后端
 
-安装完成之后就看到最终的工程文件,位于 `streamx/streampark-console/streampark-console-service/target/streampark-console-service-${version}-bin.tar.gz`,解包后安装目录如下
+安装完成之后就看到最终的工程文件,位于 `streampark/streampark-console/streampark-console-service/target/streampark-console-service-${version}-bin.tar.gz`,解包后安装目录如下
 
 ```textmate
 .
@@ -152,8 +152,8 @@ streampark-console-service-1.2.1
 ├── lib
 │    └── *.jar                                //项目的 jar 包
 ├── plugins
-│    ├── streamx-jvm-profiler-1.0.0.jar       //jvm-profiler,火焰图相关功能 ( 内部使用,用户无需关注 )
-│    └── streamx-flink-sqlclient-1.0.0.jar    //Flink SQl 提交相关功能 ( 内部使用,用户无需关注 )
+│    ├── streampark-jvm-profiler-1.0.0.jar       //jvm-profiler,火焰图相关功能 ( 内部使用,用户无需关注 )
+│    └── streampark-flink-sqlclient-1.0.0.jar    //Flink SQl 提交相关功能 ( 内部使用,用户无需关注 )
 ├── script
 │     ├── final                               // 完整的ddl建表sql
 │     ├── upgrade                             // 每个版本升级部分的sql(只记录从上个版本到本次版本的sql变化)
@@ -175,8 +175,8 @@ streampark-console-service-1.2.1
 ##### 修改配置
 安装解包已完成,接下来准备数据相关的工作
 
-###### 新建数据库 `streamx`
-确保在部署机可以连接的 mysql 里新建数据库 `streamx`
+###### 新建数据库 `streampark`
+确保在部署机可以连接的 mysql 里新建数据库 `streampark`
 
 ###### 修改连接信息
 进入到 `conf` 下,修改 `conf/application.yml`,找到 datasource 这一项,找到 mysql 的配置,修改成对应的信息即可,如下
@@ -201,20 +201,20 @@ datasource:
         username: $user
         password: $password
         driver-class-name: com.mysql.cj.jdbc.Driver
-        url: jdbc: mysql://$host:$port/streamx?useUnicode=true&characterEncoding=UTF-8&useJDBCCompliantTimezoneShift=true&useLegacyDatetimeCode=false&serverTimezone=GMT%2B8
+        url: jdbc:mysql://$host:$port/streampark?useUnicode=true&characterEncoding=UTF-8&useJDBCCompliantTimezoneShift=true&useLegacyDatetimeCode=false&serverTimezone=GMT%2B8
 ```
 
 ###### 修改workspace
-进入到 `conf` 下,修改 `conf/application.yml`,找到 streamx 这一项,找到 workspace 的配置,修改成一个用户有权限的目录
+进入到 `conf` 下,修改 `conf/application.yml`,找到 streampark 这一项,找到 workspace 的配置,修改成一个用户有权限的目录
 
 ```yaml
-streamx:
+streampark:
   # HADOOP_USER_NAME 如果是on yarn模式( yarn-prejob | yarn-application | yarn-session)则需要配置 hadoop-user-name
   hadoop-user-name: hdfs
   # 本地的工作空间,用于存放项目源码,构建的目录等.
   workspace:
-    local: /opt/streamx_workspace # 本地的一个工作空间目录(很重要),用户可自行更改目录,建议单独放到其他地方,用于存放项目源码,构建的目录等.
-    remote: hdfs:///streamx   # support hdfs:///streamx/ 、 /streamx 、hdfs://host:ip/streamx/
+    local: /opt/streampark_workspace # 本地的一个工作空间目录(很重要),用户可自行更改目录,建议单独放到其他地方,用于存放项目源码,构建的目录等.
+    remote: hdfs:///streampark   # support hdfs:///streampark/ 、 /streampark 、hdfs://host:ip/streampark/
 ```
 
 ##### 启动后端
@@ -225,7 +225,7 @@ streamx:
 cd streampark-console-service-1.0.0/bin
 bash startup.sh
 ```
-相关的日志会输出到**streampark-console-service-1.0.0/logs/streamx.out** 里
+相关的日志会输出到**streampark-console-service-1.0.0/logs/streampark.out** 里
 
 :::info 提示
 
@@ -246,13 +246,13 @@ npm install -g pm2
 ##### 发布
 
 ###### 1. 将dist copy到部署服务器
-将streamx-console-webapp/dist 整个目录 copy至服务器的部署目录,如: `/home/www/streamx`,拷贝后的目录层级是/home/www/streamx/dist
+将streampark-console-webapp/dist 整个目录 copy至服务器的部署目录,如: `/home/www/streampark`,拷贝后的目录层级是/home/www/streampark/dist
 
-###### 2. 将streamx.js文件copy到项目部署目录
-将streamx/streampark-console/streampark-console-webapp/streamx.js copy 至 `/home/www/streamx`
+###### 2. 将streampark.js文件copy到项目部署目录
+将streampark/streampark-console/streampark-console-webapp/streampark.js copy 至 `/home/www/streampark`
 
 ###### 3. 修改服务端口
-用户可以自行指定前端服务的端口地址, 修改 /home/www/streamx/streamx.js文件, 找到 `serverPort` 修改即可,默认如下:
+用户可以自行指定前端服务的端口地址, 修改 /home/www/streampark/streampark.js文件, 找到 `serverPort` 修改即可,默认如下:
 
 ```
   const serverPort = 1000
@@ -261,7 +261,7 @@ npm install -g pm2
 4. Start the service
 
 ```shell
-   pm2 start streamx.js
+   pm2 start streampark.js
 ```
 
 For more pm2 usage, see the [official site](https://pm2.keymetrics.io/)
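+
+A few everyday pm2 commands may also help when operating the frontend (a sketch; `streampark` assumes pm2's default app name, which is taken from the script filename):
+
+```shell
+pm2 list                  # show the status of managed processes
+pm2 logs streampark       # follow the frontend logs
+pm2 restart streampark    # restart after changing streampark.js
+pm2 save                  # persist the current process list across reboots
+```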
@@ -270,7 +270,7 @@ npm install -g pm2
 
 With the steps above, the deployment is complete, and you can log in to the system directly
 
-![StreamPark Login](/doc/image/streamx_login.jpeg)
+![StreamPark Login](/doc/image/streampark_login.jpeg)
 
 :::tip Tip
 Default credentials: <strong> admin / streampark </strong>
@@ -280,7 +280,7 @@ npm install -g pm2
 
 After entering the system, the first thing to do is change the system settings, under the menu /StreamPark/Setting; the screen looks like this:
 
-![StreamPark Settings](/doc/image/streamx_settings.png)
+![StreamPark Settings](/doc/image/streampark_settings.png)
 
 The main configuration items fall into the following categories
 
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/user-guide/2-quickstart.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/user-guide/2-quickstart.md
index c05ff669..8c81a208 100755
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/user-guide/2-quickstart.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/user-guide/2-quickstart.md
@@ -6,9 +6,9 @@ sidebar_position: 2
 
 ## How to use
 
-The previous chapter covered installing the one-stop platform `streampark-console` in detail. This chapter shows how to use `streampark-console` to quickly deploy and run a job. `streampark-console` supports both standard Flink programs (following the structure and conventions required by Flink) and projects developed with `streamx`. Below we use `streamx-quickstart` to start the `streampark-console` journey
+The previous chapter covered installing the one-stop platform `streampark-console` in detail. This chapter shows how to use `streampark-console` to quickly deploy and run a job. `streampark-console` supports both standard Flink programs (following the structure and conventions required by Flink) and projects developed with `streampark`. Below we use `streampark-quickstart` to start the `streampark-console` journey
 
-`streamx-quickstart` is the getting-started sample program for developing Flink with StreamPark; for details see:
+`streampark-quickstart` is the getting-started sample program for developing Flink with StreamPark; for details see:
 
 - Github: [https://github.com/streamxhub/streamx-quickstart.git](https://github.com/streamxhub/streamx-quickstart.git)
 - Gitee: [https://gitee.com/streamxhub/streamx-quickstart.git](https://gitee.com/streamxhub/streamx-quickstart.git)
@@ -111,7 +111,7 @@ GROUP BY DATE_FORMAT(ts, 'yyyy-MM-dd HH:00');
 The job startup flow is shown below
 
 <center>
-<img src="/doc/image/streamx_start.png"/><br></br>
+<img src="/doc/image/streampark_start.png"/><br></br>
 <strong>streampark-console job submission flow</strong>
 </center>
 
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/user-guide/3-development.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/user-guide/3-development.md
index 1d2d88a7..8bb7c8b1 100755
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/user-guide/3-development.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/user-guide/3-development.md
@@ -4,7 +4,7 @@ title: 'Development Environment'
 sidebar_position: 3
 ---
 
-> [StreamPark](https://github.com/streamxhub/streamx) is released under the Apache-2.0 license and will remain an actively maintained project. You are welcome to submit a [PR](https://github.com/streamxhub/streamx/pulls) or [ISSUE](https://github.com/streamxhub/streamx/issues/new/choose); if you like it, please give it a [Star](https://github.com/streamxhub/streamx/stargazers): your support is our greatest motivation. The project has received attention and recognition from many people since it was open-sourced, and our thanks go to all of them. It is already in use at companies in finance, data analytics, connected vehicles, smart advertising, and real estate, including engineers from top-tier companies.
+> [StreamPark](https://github.com/apache/incubator-streampark) is released under the Apache-2.0 license and will remain an actively maintained project. You are welcome to submit a [PR](https://github.com/apache/incubator-streampark/pulls) or [ISSUE](https://github.com/apache/incubator-streampark/issues/new/choose); if you like it, please give it a [Star](https://github.com/apache/incubator-streampark/stargazers): your support is our greatest motivation. The project has received attention and recognition from many people since it was open-sourced, and our thanks go to all of them. It is already in use at companies in finance, data analytics, connected vehicles, smart advertising, and real estate, including engineers from top-tier companies.
 The StreamPark community is an open, collaborative community that values talent. We welcome more developers to join and contribute, not only code but also documentation, experience reports, Q&A, and more.
 
 More and more developers are no longer satisfied with simply installing and using StreamPark; they want to study it further, or do secondary development and extend it based on the source code, which requires a deeper understanding of StreamPark. This chapter explains how to set up a local development environment for the `streampark-console` unified stream-batch platform; for convenience, `streampark-console` in this article always refers to the `streampark-console platform`.
@@ -90,7 +90,7 @@ mvn clean install -DskipTests -Denv=prod
 
 #### Unpack
 
-After the build completes, you get the final artifact at `streamx/streampark-console/streampark-console-service/target/streampark-console-service-${version}-bin.tar.gz`; after unpacking, the directory looks like this:
+After the build completes, you get the final artifact at `streampark/streampark-console/streampark-console-service/target/streampark-console-service-${version}-bin.tar.gz`; after unpacking, the directory looks like this:
 
 ```textmate
 .
@@ -110,18 +110,18 @@ streampark-console-service-${version}
 ├── lib
 │    └── *.jar
 ├── plugins
-│    ├── streamx-jvm-profiler-1.0.0.jar
-│    └── streamx-flink-sqlclient-1.0.0.jar
+│    ├── streampark-jvm-profiler-1.0.0.jar
+│    └── streampark-flink-sqlclient-1.0.0.jar
 ├── logs
 ├── temp
 ```
-Copy the whole unpacked project directory to any location outside target to complete this step; this mainly prevents it from being wiped by the next mvn clean. If you put it under `/opt/streamx/`, the full path becomes `/opt/streampark/streampark-console-service-${version}`. Remember this path, it is needed later, and make sure it contains no spaces
+Copy the whole unpacked project directory to any location outside target to complete this step; this mainly prevents it from being wiped by the next mvn clean. If you put it under `/opt/streampark/`, the full path becomes `/opt/streampark/streampark-console-service-${version}`. Remember this path, it is needed later, and make sure it contains no spaces
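+
+As a concrete sketch of this step (the `${version}` placeholder and the `/opt/streampark/` target are the example values from the text):
+
+```shell
+# unpack the artifact, then copy it out of target/ so `mvn clean` cannot remove it
+cd streampark-console/streampark-console-service/target
+tar -xzvf streampark-console-service-${version}-bin.tar.gz
+mkdir -p /opt/streampark
+cp -r streampark-console-service-${version} /opt/streampark/
+```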
 
 #### Configure
 
 Import the StreamPark source you just cloned from git into an IDE (`IntelliJ IDEA` is recommended), go to `resources`, edit application.yml, find `datasource`, and update the jdbc connection info; see the [Modify the configuration](https://streampark.apache.org/zh-CN/docs/development/config) section of the deployment chapter for details
 
-<img src="/doc/image/streamx_conf.jpg" />
+<img src="/doc/image/streampark_conf.jpg" />
 
 If the target cluster you connect to has kerberos authentication enabled, you need to configure the kerberos settings: find `kerberos.xml` under `resources` and fill in the relevant values. kerberos is disabled by default; to enable it, set `enable` to true, as follows:
 
@@ -141,7 +141,7 @@ java:
 
 #### Start
 
-`streampark-console` is a web application built on springBoot, with `com.streamxhub.streamx.console.StreamParkConsole` as the main class. Before launching the main class, you need to set `VM options` and `Environment variables`
+`streampark-console` is a web application built on springBoot, with `org.apache.streampark.console.StreamParkConsole` as the main class. Before launching the main class, you need to set `VM options` and `Environment variables`
 
 ##### VM options
 
@@ -161,20 +161,20 @@ java:
 If you use a hadoop cluster that is not installed locally (a test hadoop cluster), configure `HADOOP_USER_NAME` and `HADOOP_CONF_DIR` in `Environment variables`:
 `HADOOP_USER_NAME` is hdfs or another hadoop user with read/write permission, and `HADOOP_CONF_DIR` is where the configuration files copied from the test cluster (in the hadoop installation step above) live on the development machine. If hadoop is installed locally, this setting is not needed.
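+
+If you start the main class from a terminal rather than the IDE, the equivalent settings look roughly like this (paths are illustrative; in IDEA they go into the Run Configuration's Environment variables field instead):
+
+```shell
+# hadoop user with read/write access on the test cluster
+export HADOOP_USER_NAME=hdfs
+# directory on the dev machine holding the conf files copied from the cluster
+export HADOOP_CONF_DIR=/opt/hadoop-conf
+```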
 
-<img src="/doc/image/streamx_ideaopt.jpg" />
+<img src="/doc/image/streampark_ideaopt.jpg" />
 
 If everything is in place, you can start the project by running the `StreamParkConsole` main class directly; the backend is then up, and you will see the startup messages printed
 
 ### Frontend
 
-The streamx web frontend is built with nodejs + vue, so a node environment needs to be installed on the machine. The full process is as follows:
+The streampark web frontend is built with nodejs + vue, so a node environment needs to be installed on the machine. The full process is as follows:
 
 #### Modify the configuration
 
 Since the frontend and backend are separate projects, the frontend needs the backend's (streampark-console) address in order to work with it, so the Base API must be changed; it is located at:
 `streampark-console/streampark-console-webapp/.env.development`
 
-![Web configuration](/doc/image/streamx_websetting.png)
+![Web configuration](/doc/image/streampark_websetting.png)
 
 The default configuration is as follows:
 
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/user-guide/4-dockerDeployment.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/user-guide/4-dockerDeployment.md
index 0ec85a5f..77516f8b 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/user-guide/4-dockerDeployment.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/user-guide/4-dockerDeployment.md
@@ -101,7 +101,7 @@ vim docker-compose.yaml
 cd ../..
 ./build.sh
 ```
-![](/doc/image/streamx_build.png)
+![](/doc/image/streampark_build.png)
 
 ```shell
 cd deploy/docker
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/user-guide/5-LDAP.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/user-guide/5-LDAP.md
index 1f970e8b..1918f728 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/user-guide/5-LDAP.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/user-guide/5-LDAP.md
@@ -22,11 +22,11 @@ The LDAP unified authentication service solves the problems above.
 
 ### 1. Download the binary package from the official site
 
-https://github.com/streamxhub/streampark/releases
+https://github.com/apache/incubator-streampark/releases
 
 ### 2. Add the LDAP configuration
 ```
-cd streamxpark
+cd streampark
 cd conf
 vim application.yml
 ```
@@ -36,12 +36,12 @@ ldap:
   ## the LDAP server address used for your company's LDAP user login
   urls: ldap://99.99.99.99:389
   ## username
-  username: cn=Manager,dc=streamx,dc=com
+  username: cn=Manager,dc=streampark,dc=com
   ## password
-  password: streamx
+  password: streampark
   ## DN (distinguished name)
   embedded:
-    base-dn: dc=streamx,dc=com
+    base-dn: dc=streampark,dc=com
   user:
     ## the key used for the search filter
     identity:
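+
+Before wiring these values into StreamPark, it can help to verify them with the OpenLDAP client tools (a sketch using the sample address and credentials above):
+
+```shell
+# bind with the manager DN and list entries under the base DN
+ldapsearch -x -H ldap://99.99.99.99:389 \
+  -D "cn=Manager,dc=streampark,dc=com" -w streampark \
+  -b "dc=streampark,dc=com"
+```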
diff --git a/i18n/zh-CN/docusaurus-theme-classic/footer.json b/i18n/zh-CN/docusaurus-theme-classic/footer.json
index b85fd8fc..fc69842b 100644
--- a/i18n/zh-CN/docusaurus-theme-classic/footer.json
+++ b/i18n/zh-CN/docusaurus-theme-classic/footer.json
@@ -9,7 +9,7 @@
   },
   "link.item.label.Releases": {
     "message": "版本",
-    "description": "The label of footer link with label=Releases linking to https://github.com/streamxhub/streamx/releases"
+    "description": "The label of footer link with label=Releases linking to https://github.com/apache/incubator-streampark/releases"
   },
   "link.title.Community": {
     "message": "社区",
@@ -17,16 +17,16 @@
   },
   "link.item.label.GitHub": {
     "message": "GitHub",
-    "description": "The label of footer link with label=GitHub linking to https://github.com/streamxhub/streamx"
+    "description": "The label of footer link with label=GitHub linking to https://github.com/apache/incubator-streampark"
   },
   "link.item.label.Issue Tracker": {
     "message": "Issue Tracker",
-    "description": "The label of footer link with label=Issue Tracker linking to https://github.com/streamxhub/streamx/issues"
+    "description": "The label of footer link with label=Issue Tracker linking to https://github.com/apache/incubator-streampark/issues"
   },
 
   "link.item.label.Pull Requests": {
     "message": "Pull Requests",
-    "description": "The label of footer link with label=Pull Requests linking to https://github.com/streamxhub/streamx/pulls"
+    "description": "The label of footer link with label=Pull Requests linking to https://github.com/apache/incubator-streampark/pulls"
   }
 
 }
\ No newline at end of file
diff --git a/src/pages/home/hero.jsx b/src/pages/home/hero.jsx
index 86d7808f..f545bc49 100644
--- a/src/pages/home/hero.jsx
+++ b/src/pages/home/hero.jsx
@@ -86,11 +86,11 @@ export default function () {
                 </div>
                 <p className="lead text-light">{dataSource.slogan.description}</p>
               </div>
-              <a className="btn streamx-btn btn mt-30 ztop" href="https://github.com/apache/incubator-streampark"
+              <a className="btn streampark-btn btn mt-30 ztop" href="https://github.com/apache/incubator-streampark"
                 target="_blank">
                 <i className="lni-github-original"></i>&nbsp;GitHub
               </a>
-              <a className="btn streamx-btn btn-green mt-30 ml-3 ztop" href="/docs/user-guide/quick-start"
+              <a className="btn streampark-btn btn-green mt-30 ml-3 ztop" href="/docs/user-guide/quick-start"
                 style={{ marginLeft: '10px' }}>
                 <i className="lni-play"></i>&nbsp;Get started
               </a>
diff --git a/src/pages/home/index.less b/src/pages/home/index.less
index cc169853..dac238e2 100644
--- a/src/pages/home/index.less
+++ b/src/pages/home/index.less
@@ -100,7 +100,7 @@ h6 {
   margin-right: 50px;
 }
 
-.streamx_video {
+.streampark_video {
   position: fixed;
   left:0;
   top:0;
@@ -1376,9 +1376,9 @@ hr {
   color: #0d6efd;
 }
 
-.streamx-load {
-  -webkit-animation: streamx-load 1500ms linear infinite;
-  animation: streamx-load 1500ms linear infinite;
+.streampark-load {
+  -webkit-animation: streampark-load 1500ms linear infinite;
+  animation: streampark-load 1500ms linear infinite;
   background-color: transparent;
   border-color: #ffffff;
   border-top-color: transparent;
@@ -1395,7 +1395,7 @@ hr {
   z-index: 9;
 }
 
-@-webkit-keyframes streamx-load {
+@-webkit-keyframes streampark-load {
   0% {
     -webkit-transform: rotate(0deg);
     transform: rotate(0deg);
@@ -1406,7 +1406,7 @@ hr {
   }
 }
 
-@keyframes streamx-load {
+@keyframes streampark-load {
   0% {
     -webkit-transform: rotate(0deg);
     transform: rotate(0deg);
@@ -1470,7 +1470,7 @@ hr {
   }
 }
 
-.streamx-btn {
+.streampark-btn {
   color: #ffffff;
   position: relative;
   z-index: 1;
@@ -1484,7 +1484,7 @@ hr {
   text-transform: uppercase;
 }
 
-.streamx-btn.btn {
+.streampark-btn.btn {
   background-color: #2872ff; //
   color: #fff;
   border: 0;
@@ -1493,7 +1493,7 @@ hr {
   position: relative;
 }
 
-.streamx-btn::before {
+.streampark-btn::before {
   content: "";
   position: absolute;
   z-index: -1;
@@ -1510,62 +1510,62 @@ hr {
   border: none;
 }
 
-.streamx-btn.btn-purple:hover:before,
-.streamx-btn.btn-green:hover:before,
-.streamx-btn.btn:hover:before {
+.streampark-btn.btn-purple:hover:before,
+.streampark-btn.btn-green:hover:before,
+.streampark-btn.btn:hover:before {
   -webkit-transform: scaleX(1);
 }
 
-.streamx-btn.btn::before {
+.streampark-btn.btn::before {
   background: #0d6efd;
 }
 
-.streamx-btn.btn-green {
+.streampark-btn.btn-green {
   background-color: #24A35A;
 }
 
-.streamx-btn.btn-purple {
+.streampark-btn.btn-purple {
   background: linear-gradient(-45deg, #5e2ced, #a485fd);
 }
 
-.streamx-btn.btn:hover,
-.streamx-btn.btn:focus {
+.streampark-btn.btn:hover,
+.streampark-btn.btn:focus {
   background-color: #588af2;
   border: 0;
   color: #fff;
 }
 
-.streamx-btn.btn-green::before {
+.streampark-btn.btn-green::before {
   background: green;
 }
 
-.streamx-btn.btn-green:hover,
-.streamx-btn.btn-green:focus {
+.streampark-btn.btn-green:hover,
+.streampark-btn.btn-green:focus {
   background-color: green;
   border: 0;
   color: #fff;
 }
 
-.streamx-btn.btn-purple::before {
+.streampark-btn.btn-purple::before {
   background: #5e2ced;
 }
 
-.streamx-btn.btn-purple:hover,
-.streamx-btn.btn-purple:focus {
+.streampark-btn.btn-purple:hover,
+.streampark-btn.btn-purple:focus {
   background: linear-gradient(-45deg, #5e2ced, #a485fd);
   border: none;
   color: #fff;
 }
 
-.streamx-btn.btn-4 {
+.streampark-btn.btn-4 {
   background-color: #2ecc71;
   color: #ffffff;
   -webkit-box-shadow: 0 2px 15px 3px rgba(7, 10, 87, 0.1);
   box-shadow: 0 2px 15px 3px rgba(7, 10, 87, 0.1);
 }
 
-.streamx-btn.btn-4:hover,
-.streamx-btn.btn-4:focus {
+.streampark-btn.btn-4:hover,
+.streampark-btn.btn-4:focus {
   background-color: #ffffff;
   color: #3f43fd;
 }
diff --git a/src/pages/user/languages.json b/src/pages/user/languages.json
index 224dc880..28230cb8 100644
--- a/src/pages/user/languages.json
+++ b/src/pages/user/languages.json
@@ -2,13 +2,13 @@
   "zh-CN": {
       "common": {
           "ourUsers": "我们的用户",
-          "tip":"诸多公司和组织将 StreamPark 用于研究、生产和商业产品中<br/> 如果您也在使用 ? <a href=\"https://github.com/streamxhub/streamx/issues/163\" target=\"_blank\" rel=\"noopener\"><u>可以在这里添加</u></a>"
+          "tip":"诸多公司和组织将 StreamPark 用于研究、生产和商业产品中<br/> 如果您也在使用 ? <a href=\"https://github.com/apache/incubator-streampark/issues/163\" target=\"_blank\" rel=\"noopener\"><u>可以在这里添加</u></a>"
       }
   },
   "en": {
       "common": {
           "ourUsers": "Our Users",
-          "tip":"Various companies and organizations use StreamPark for research, production and commercial products.<br/> Are you using this project ? <a href=\"https://github.com/streamxhub/streamx/issues/163\" target=\"_blank\" rel=\"noopener\"><u>you can add your company</u></a>"
+          "tip":"Various companies and organizations use StreamPark for research, production and commercial products.<br/> Are you using this project ? <a href=\"https://github.com/apache/incubator-streampark/issues/163\" target=\"_blank\" rel=\"noopener\"><u>you can add your company</u></a>"
       }
   }
 }
diff --git a/static/doc/image/streamx-archite.png b/static/doc/image/streampark-archite.png
similarity index 100%
rename from static/doc/image/streamx-archite.png
rename to static/doc/image/streampark-archite.png
diff --git a/static/doc/image/streamx_apis.jpeg b/static/doc/image/streampark_apis.jpeg
similarity index 100%
rename from static/doc/image/streamx_apis.jpeg
rename to static/doc/image/streampark_apis.jpeg
diff --git a/static/doc/image/streamx_archite.png b/static/doc/image/streampark_archite.png
similarity index 100%
rename from static/doc/image/streamx_archite.png
rename to static/doc/image/streampark_archite.png
diff --git a/static/doc/image/streamx_build.png b/static/doc/image/streampark_build.png
similarity index 100%
rename from static/doc/image/streamx_build.png
rename to static/doc/image/streampark_build.png
diff --git a/static/doc/image/streamx_conf.jpg b/static/doc/image/streampark_conf.jpg
similarity index 100%
rename from static/doc/image/streamx_conf.jpg
rename to static/doc/image/streampark_conf.jpg
diff --git a/static/doc/image/streamx_coreapi.png b/static/doc/image/streampark_coreapi.png
similarity index 100%
rename from static/doc/image/streamx_coreapi.png
rename to static/doc/image/streampark_coreapi.png
diff --git a/static/doc/image/streamx_flinkhome.png b/static/doc/image/streampark_flinkhome.png
similarity index 100%
rename from static/doc/image/streamx_flinkhome.png
rename to static/doc/image/streampark_flinkhome.png
diff --git a/static/doc/image/streamx_ideaopt.jpg b/static/doc/image/streampark_ideaopt.jpg
similarity index 100%
rename from static/doc/image/streamx_ideaopt.jpg
rename to static/doc/image/streampark_ideaopt.jpg
diff --git a/static/doc/image/streamx_kafkaapi.jpeg b/static/doc/image/streampark_kafkaapi.jpeg
similarity index 100%
rename from static/doc/image/streamx_kafkaapi.jpeg
rename to static/doc/image/streampark_kafkaapi.jpeg
diff --git a/static/doc/image/streamx_login.jpeg b/static/doc/image/streampark_login.jpeg
similarity index 100%
rename from static/doc/image/streamx_login.jpeg
rename to static/doc/image/streampark_login.jpeg
diff --git a/static/doc/image/streamx_scala_life_cycle.png b/static/doc/image/streampark_scala_life_cycle.png
similarity index 100%
rename from static/doc/image/streamx_scala_life_cycle.png
rename to static/doc/image/streampark_scala_life_cycle.png
diff --git a/static/doc/image/streamx_settings.png b/static/doc/image/streampark_settings.png
similarity index 100%
rename from static/doc/image/streamx_settings.png
rename to static/doc/image/streampark_settings.png
diff --git a/static/doc/image/streamx_start.png b/static/doc/image/streampark_start.png
similarity index 100%
rename from static/doc/image/streamx_start.png
rename to static/doc/image/streampark_start.png
diff --git a/static/doc/image/streamx_websetting.png b/static/doc/image/streampark_websetting.png
similarity index 100%
rename from static/doc/image/streamx_websetting.png
rename to static/doc/image/streampark_websetting.png