Posted to commits@inlong.apache.org by zi...@apache.org on 2023/01/13 06:08:36 UTC

[inlong-website] branch master updated: [INLONG-661][Sort] Add more description for metric (#672)

This is an automated email from the ASF dual-hosted git repository.

zirui pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/inlong-website.git


The following commit(s) were added to refs/heads/master by this push:
     new 70da31316d [INLONG-661][Sort] Add more description for metric (#672)
70da31316d is described below

commit 70da31316d31cda77a6e3e9a13a1a1bcbb67b01a
Author: Xin Gong <ge...@gmail.com>
AuthorDate: Fri Jan 13 14:08:31 2023 +0800

    [INLONG-661][Sort] Add more description for metric (#672)
---
 docs/data_node/extract_node/kafka.md               |  2 +-
 docs/data_node/extract_node/mongodb-cdc.md         |  2 +-
 docs/data_node/extract_node/mysql-cdc.md           |  2 +-
 docs/data_node/extract_node/oracle-cdc.md          |  2 +-
 docs/data_node/extract_node/postgresql-cdc.md      |  2 +-
 docs/data_node/extract_node/pulsar.md              |  2 +-
 docs/data_node/extract_node/sqlserver-cdc.md       |  2 +-
 docs/data_node/load_node/clickhouse.md             |  2 +-
 docs/data_node/load_node/doris.md                  |  1 +
 docs/data_node/load_node/elasticsearch.md          |  2 +-
 docs/data_node/load_node/greenplum.md              |  2 +-
 docs/data_node/load_node/hbase.md                  |  2 +-
 docs/data_node/load_node/hdfs.md                   |  2 +-
 docs/data_node/load_node/hive.md                   |  2 +-
 docs/data_node/load_node/iceberg.md                |  2 +-
 docs/data_node/load_node/kafka.md                  |  2 +-
 docs/data_node/load_node/mysql.md                  |  2 +-
 docs/data_node/load_node/oracle.md                 |  2 +-
 docs/data_node/load_node/postgresql.md             |  2 +-
 docs/data_node/load_node/sqlserver.md              |  2 +-
 docs/data_node/load_node/starrocks.md              |  1 +
 docs/data_node/load_node/tdsql-postgresql.md       |  2 +-
 docs/modules/sort/metrics.md                       | 53 ++++++++++++++++++++--
 .../current/data_node/extract_node/kafka.md        |  2 +-
 .../current/data_node/extract_node/mongodb-cdc.md  |  2 +-
 .../current/data_node/extract_node/mysql-cdc.md    |  4 +-
 .../current/data_node/extract_node/oracle-cdc.md   |  4 +-
 .../data_node/extract_node/postgresql-cdc.md       |  2 +-
 .../current/data_node/extract_node/pulsar.md       |  2 +-
 .../data_node/extract_node/sqlserver-cdc.md        |  4 +-
 .../current/data_node/load_node/clickhouse.md      |  6 +--
 .../current/data_node/load_node/doris.md           |  1 +
 .../current/data_node/load_node/elasticsearch.md   |  4 +-
 .../current/data_node/load_node/greenplum.md       |  2 +-
 .../current/data_node/load_node/hbase.md           |  2 +-
 .../current/data_node/load_node/hdfs.md            |  4 +-
 .../current/data_node/load_node/hive.md            |  4 +-
 .../current/data_node/load_node/iceberg.md         |  2 +-
 .../current/data_node/load_node/kafka.md           |  2 +-
 .../current/data_node/load_node/mysql.md           |  2 +-
 .../current/data_node/load_node/oracle.md          |  2 +-
 .../current/data_node/load_node/postgresql.md      |  2 +-
 .../current/data_node/load_node/sqlserver.md       |  2 +-
 .../current/data_node/load_node/starrocks.md       |  1 +
 .../data_node/load_node/tdsql-postgresql.md        |  2 +-
 .../current/modules/sort/metrics.md                | 51 +++++++++++++++++++--
 .../version-1.4.0/data_node/extract_node/kafka.md  |  2 +-
 .../data_node/extract_node/mongodb-cdc.md          |  2 +-
 .../data_node/extract_node/mysql-cdc.md            |  4 +-
 .../data_node/extract_node/oracle-cdc.md           |  4 +-
 .../data_node/extract_node/postgresql-cdc.md       |  2 +-
 .../version-1.4.0/data_node/extract_node/pulsar.md |  2 +-
 .../data_node/extract_node/sqlserver-cdc.md        |  4 +-
 .../data_node/load_node/clickhouse.md              |  3 +-
 .../version-1.4.0/data_node/load_node/doris.md     |  2 +-
 .../data_node/load_node/elasticsearch.md           |  4 +-
 .../version-1.4.0/data_node/load_node/greenplum.md |  2 +-
 .../version-1.4.0/data_node/load_node/hbase.md     |  2 +-
 .../version-1.4.0/data_node/load_node/hdfs.md      |  4 +-
 .../version-1.4.0/data_node/load_node/hive.md      |  4 +-
 .../version-1.4.0/data_node/load_node/iceberg.md   |  2 +-
 .../version-1.4.0/data_node/load_node/kafka.md     |  3 +-
 .../version-1.4.0/data_node/load_node/mysql.md     |  2 +-
 .../version-1.4.0/data_node/load_node/oracle.md    |  2 +-
 .../data_node/load_node/postgresql.md              |  2 +-
 .../version-1.4.0/data_node/load_node/sqlserver.md |  2 +-
 .../data_node/load_node/tdsql-postgresql.md        |  2 +-
 .../version-1.4.0/modules/sort/metrics.md          | 10 ++--
 .../version-1.4.0/data_node/extract_node/kafka.md  |  2 +-
 .../data_node/extract_node/mongodb-cdc.md          |  2 +-
 .../data_node/extract_node/mysql-cdc.md            |  2 +-
 .../data_node/extract_node/oracle-cdc.md           |  2 +-
 .../data_node/extract_node/postgresql-cdc.md       |  2 +-
 .../version-1.4.0/data_node/extract_node/pulsar.md |  2 +-
 .../data_node/extract_node/sqlserver-cdc.md        |  2 +-
 .../data_node/load_node/clickhouse.md              |  2 +-
 .../data_node/load_node/elasticsearch.md           |  2 +-
 .../version-1.4.0/data_node/load_node/greenplum.md |  2 +-
 .../version-1.4.0/data_node/load_node/hbase.md     |  2 +-
 .../version-1.4.0/data_node/load_node/hdfs.md      |  2 +-
 .../version-1.4.0/data_node/load_node/hive.md      |  2 +-
 .../version-1.4.0/data_node/load_node/iceberg.md   |  2 +-
 .../version-1.4.0/data_node/load_node/kafka.md     |  2 +-
 .../version-1.4.0/data_node/load_node/mysql.md     |  2 +-
 .../version-1.4.0/data_node/load_node/oracle.md    |  2 +-
 .../data_node/load_node/postgresql.md              |  2 +-
 .../version-1.4.0/data_node/load_node/sqlserver.md |  2 +-
 .../data_node/load_node/tdsql-postgresql.md        |  2 +-
 .../version-1.4.0/modules/sort/metrics.md          | 10 ++--
 89 files changed, 209 insertions(+), 111 deletions(-)

diff --git a/docs/data_node/extract_node/kafka.md b/docs/data_node/extract_node/kafka.md
index 5c88e49649..6f1f360d92 100644
--- a/docs/data_node/extract_node/kafka.md
+++ b/docs/data_node/extract_node/kafka.md
@@ -110,7 +110,7 @@ TODO: It will be supported in the future.
 | scan.startup.specific-offsets | optional | (none) | String | Specify offsets for each partition in case of 'specific-offsets' startup mode, e.g. 'partition:0,offset:42;partition:1,offset:300'. |
 | scan.startup.timestamp-millis | optional | (none) | Long | Start from the specified epoch timestamp (milliseconds) used in case of 'timestamp' startup mode. |
 | scan.topic-partition-discovery.interval | optional | (none) | Duration | Interval for consumer to discover dynamically created Kafka topics and partitions periodically. |
-| inlong.metric.labels | optional | (none) | String | Inlong metric label, format of value is groupId=xxgroup&streamId=xxstream&nodeId=xxnode. |
+| inlong.metric.labels | optional | (none) | String | Inlong metric label, format of value is groupId=`{groupId}`&streamId=`{streamId}`&nodeId=`{nodeId}`. |
 | sink.ignore.changelog | optional | false | Boolean |  Importing all changelog mode data ingest into Kafka . |
 
 ## Available Metadata
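
For reference, a minimal Flink SQL sketch of setting the renamed option on a Kafka Extract Node; the connector options follow the table above, while the topic, servers, and label values are hypothetical:

    CREATE TABLE kafka_extract_node (
      id INT,
      name STRING
    ) WITH (
      'connector' = 'kafka-inlong',
      'topic' = 'user',
      'properties.bootstrap.servers' = 'localhost:9092',
      'properties.group.id' = 'my_group',
      'scan.startup.mode' = 'earliest-offset',
      'format' = 'csv',
      -- value format: groupId={groupId}&streamId={streamId}&nodeId={nodeId}
      'inlong.metric.labels' = 'groupId=my_group&streamId=my_stream&nodeId=my_node'
    );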
diff --git a/docs/data_node/extract_node/mongodb-cdc.md b/docs/data_node/extract_node/mongodb-cdc.md
index 0f6f27f828..601ef3f9d4 100644
--- a/docs/data_node/extract_node/mongodb-cdc.md
+++ b/docs/data_node/extract_node/mongodb-cdc.md
@@ -134,7 +134,7 @@ TODO: It will be supported in the future.
 | poll.max.batch.size       | optional     | 1000             | Integer  | Maximum number of change stream documents to include in a single batch when polling for new data. |
 | poll.await.time.ms        | optional     | 1500             | Integer  | The amount of time to wait before checking for new results on the change stream. |
 | heartbeat.interval.ms     | optional     | 0                | Integer  | The length of time in milliseconds between sending heartbeat messages. Use 0 to disable. |
-| inlong.metric.labels | optional | (none) | String | Inlong metric label, format of value is groupId=xxgroup&streamId=xxstream&nodeId=xxnode. |
+| inlong.metric.labels | optional | (none) | String | Inlong metric label, format of value is groupId=`{groupId}`&streamId=`{streamId}`&nodeId=`{nodeId}`. |
 ## Available Metadata
 
 The following format metadata can be exposed as read-only (VIRTUAL) columns in a table definition.
diff --git a/docs/data_node/extract_node/mysql-cdc.md b/docs/data_node/extract_node/mysql-cdc.md
index f1f42e9ceb..f08a9bf3c6 100644
--- a/docs/data_node/extract_node/mysql-cdc.md
+++ b/docs/data_node/extract_node/mysql-cdc.md
@@ -320,7 +320,7 @@ TODO: It will be supported in the future.
       <td>optional</td>
       <td style={{wordWrap: 'break-word'}}>(none)</td>
       <td>String</td>
-      <td>Inlong metric label, format of value is groupId=xxgroup&streamId=xxstream&nodeId=xxnode.</td> 
+      <td>Inlong metric label, format of value is groupId=[groupId]&streamId=[streamId]&nodeId=[nodeId].</td> 
     </tr>
     </tbody>
 </table>
diff --git a/docs/data_node/extract_node/oracle-cdc.md b/docs/data_node/extract_node/oracle-cdc.md
index e330965912..0299f8551d 100644
--- a/docs/data_node/extract_node/oracle-cdc.md
+++ b/docs/data_node/extract_node/oracle-cdc.md
@@ -326,7 +326,7 @@ TODO: It will be supported in the future.
       <td>optional</td>
       <td style={{wordWrap: 'break-word'}}>(none)</td>
       <td>String</td>
-      <td>Inlong metric label, format of value is groupId=xxgroup&streamId=xxstream&nodeId=xxnode.</td> 
+      <td>Inlong metric label, format of value is groupId=[groupId]&streamId=[streamId]&nodeId=[nodeId].</td> 
     </tr>
     <tr>
        <td>source.multiple.enable</td>
diff --git a/docs/data_node/extract_node/postgresql-cdc.md b/docs/data_node/extract_node/postgresql-cdc.md
index e41285f9c7..026653f32e 100644
--- a/docs/data_node/extract_node/postgresql-cdc.md
+++ b/docs/data_node/extract_node/postgresql-cdc.md
@@ -136,7 +136,7 @@ TODO: It will be supported in the future.
 | decoding.plugin.name | optional | decoderbufs | String | The name of the Postgres logical decoding plug-in installed on the server. Supported values are decoderbufs, wal2json, wal2json_rds, wal2json_streaming, wal2json_rds_streaming and pgoutput. |
 | slot.name | optional | flink | String | The name of the PostgreSQL logical decoding slot that was created for streaming changes from a particular plug-in for a particular database/schema. The server uses this slot to stream events to the connector that you are configuring. Slot names must conform to PostgreSQL replication slot naming rules, which state: "Each replication slot has a name, which can contain lower-case letters, numbers, and the underscore character." |
 | debezium.* | optional | (none) | String | Pass-through Debezium's properties to Debezium Embedded Engine which is used to capture data changes from Postgres server. For example: 'debezium.snapshot.mode' = 'never'. See more about the [Debezium's Postgres Connector properties](https://debezium.io/documentation/reference/1.5/connectors/postgresql.html#postgresql-connector-properties). |
-| inlong.metric.labels | optional | (none) | String | Inlong metric label, format of value is groupId=xxgroup&streamId=xxstream&nodeId=xxnode. |
+| inlong.metric.labels | optional | (none) | String | Inlong metric label, format of value is groupId=`{groupId}`&streamId=`{streamId}`&nodeId=`{nodeId}`. |
 
 :::caution
 - `slot.name` is recommended to set for different tables to avoid the potential PSQLException: ERROR: replication slot "flink" is active for PID 974 error.  
diff --git a/docs/data_node/extract_node/pulsar.md b/docs/data_node/extract_node/pulsar.md
index 5d28627b82..1d40c68734 100644
--- a/docs/data_node/extract_node/pulsar.md
+++ b/docs/data_node/extract_node/pulsar.md
@@ -107,7 +107,7 @@ TODO
 | key.fields-prefix             | optional | (none)        | String | Define a custom prefix for all fields in the key format to avoid name conflicts with fields in the value format. By default, the prefix is empty. If a custom prefix is defined, the Table schema and `key.fields` are used. |
 | format or value.format        | required | (none)        | String | Set the name with a prefix. When constructing data types in the key format, the prefix is removed and non-prefixed names are used within the key format. Pulsar message value serialization format, support JSON, Avro, etc. For more information, see the Flink format. |
 | value.fields-include          | optional | ALL           | Enum   | The Pulsar message value contains the field policy, optionally ALL, and EXCEPT_KEY. |
-| inlong.metric.labels | optional | (none) | String | Inlong metric label, format of value is groupId=xxgroup&streamId=xxstream&nodeId=xxnode. |
+| inlong.metric.labels | optional | (none) | String | Inlong metric label, format of value is groupId=`{groupId}`&streamId=`{streamId}`&nodeId=`{nodeId}`. |
 
 ## Available Metadata
 
diff --git a/docs/data_node/extract_node/sqlserver-cdc.md b/docs/data_node/extract_node/sqlserver-cdc.md
index 21377471e0..c95bccfaeb 100644
--- a/docs/data_node/extract_node/sqlserver-cdc.md
+++ b/docs/data_node/extract_node/sqlserver-cdc.md
@@ -190,7 +190,7 @@ TODO
       <td>optional</td>
       <td style={{wordWrap: 'break-word'}}>(none)</td>
       <td>String</td>
-      <td>Inlong metric label, format of value is groupId=xxgroup&streamId=xxstream&nodeId=xxnode.</td> 
+      <td>Inlong metric label, format of value is groupId=[groupId]&streamId=[streamId]&nodeId=[nodeId].</td> 
     </tr>
     </tbody>
 </table>
diff --git a/docs/data_node/load_node/clickhouse.md b/docs/data_node/load_node/clickhouse.md
index a78b0282d4..0e56848ebc 100644
--- a/docs/data_node/load_node/clickhouse.md
+++ b/docs/data_node/load_node/clickhouse.md
@@ -100,7 +100,7 @@ TODO: It will be supported in the future.
 | sink.max-retries | optional | 3 | Integer | The max retry times if writing records to database failed. |
 | sink.parallelism | optional | (none) | Integer | Defines the parallelism of the JDBC sink operator. By default, the parallelism is determined by the framework using the same parallelism of the upstream chained operator. |
 | sink.ignore.changelog | optional | false | Boolean |  Ignore all `RowKind`, ingest them as `INSERT`. |
-| inlong.metric.labels | optional | (none) | String | Inlong metric label, format of value is groupId=xxgroup&streamId=xxstream&nodeId=xxnode. |
+| inlong.metric.labels | optional | (none) | String | Inlong metric label, format of value is groupId=`{groupId}`&streamId=`{streamId}`&nodeId=`{nodeId}`. |
 
 ## Data Type Mapping
 
diff --git a/docs/data_node/load_node/doris.md b/docs/data_node/load_node/doris.md
index 8dd6a15770..f6f9b505c9 100644
--- a/docs/data_node/load_node/doris.md
+++ b/docs/data_node/load_node/doris.md
@@ -303,6 +303,7 @@ TODO: It will be supported in the future.
 | sink.multiple.database-pattern    | optional   | (none)            | string   | Extract database name from the raw binary data, this is only used in the multiple sink writing scenario.                 | 
 | sink.multiple.table-pattern       | optional   | (none)            | string   | Extract table name from the raw binary data, this is only used in the multiple sink writing scenario.                           |
 | sink.multiple.ignore-single-table-errors | optional | true         | boolean  | Whether ignore the single table erros when multiple sink writing scenario. When it is `true`,sink continue when one table occur exception, only stop the exception table sink. When it is `false`, stop the whole sink when one table occur exception.     |
+| inlong.metric.labels | optional | (none) | String | Inlong metric label, format of value is groupId=`{groupId}`&streamId=`{streamId}`&nodeId=`{nodeId}`. |
 
 ## Data Type Mapping
 
diff --git a/docs/data_node/load_node/elasticsearch.md b/docs/data_node/load_node/elasticsearch.md
index e49c326b5e..5a59ed248d 100644
--- a/docs/data_node/load_node/elasticsearch.md
+++ b/docs/data_node/load_node/elasticsearch.md
@@ -254,7 +254,7 @@ TODO: It will be supported in the future.
       <td>optional</td>
       <td style={{wordWrap: 'break-word'}}>(none)</td>
       <td>String</td>
-      <td>Inlong metric label, format of value is groupId=xxgroup&streamId=xxstream&nodeId=xxnode.</td> 
+      <td>Inlong metric label, format of value is groupId=[groupId]&streamId=[streamId]&nodeId=[nodeId].</td> 
     </tr>
     </tbody>
 </table>
diff --git a/docs/data_node/load_node/greenplum.md b/docs/data_node/load_node/greenplum.md
index 4803da8ed8..259de3e51b 100644
--- a/docs/data_node/load_node/greenplum.md
+++ b/docs/data_node/load_node/greenplum.md
@@ -98,7 +98,7 @@ TODO: It will be supported in the future.
 | sink.max-retries | optional | 3 | Integer | The max retry times if writing records to database failed. |
 | sink.parallelism | optional | (none) | Integer | Defines the parallelism of the JDBC sink operator. By default, the parallelism is determined by the framework using the same parallelism of the upstream chained operator. |
 | sink.ignore.changelog | optional | false | Boolean |  Ignore all `RowKind`, ingest them as `INSERT`. |
-| inlong.metric.labels | optional | (none) | String | Inlong metric label, format of value is groupId=xxgroup&streamId=xxstream&nodeId=xxnode. |
+| inlong.metric.labels | optional | (none) | String | Inlong metric label, format of value is groupId=`{groupId}`&streamId=`{streamId}`&nodeId=`{nodeId}`. |
 
 ## Data Type Mapping
 
diff --git a/docs/data_node/load_node/hbase.md b/docs/data_node/load_node/hbase.md
index 4ab9eb4210..19ab916174 100644
--- a/docs/data_node/load_node/hbase.md
+++ b/docs/data_node/load_node/hbase.md
@@ -96,7 +96,7 @@ TODO: It will be supported in the future.
 | lookup.cache.ttl | optional | (none) | Duration | The max time to live for each rows in lookup cache, over this time, the oldest rows will be expired. Note, "cache.max-rows" and "cache.ttl" options must all be specified if any of them is specified.Lookup cache is disabled by default. |
 | lookup.max-retries | optional | 3 | Integer | The max retry times if lookup database failed. |
 | properties.* | optional | (none) | String | This can set and pass arbitrary HBase configurations. Suffix names must match the configuration key defined in [HBase Configuration documentation](https://hbase.apache.org/2.3/book.html#hbase_default_configurations). Flink will remove the "properties." key prefix and pass the transformed key and values to the underlying HBaseClient. For example, you can add a kerberos authentication parameter 'properties.hbase.security.authentication' = 'kerberos'. |
-| inlong.metric.labels | optional | (none) | String | Inlong metric label, format of value is groupId=xxgroup&streamId=xxstream&nodeId=xxnode. |
+| inlong.metric.labels | optional | (none) | String | Inlong metric label, format of value is groupId=`{groupId}`&streamId=`{streamId}`&nodeId=`{nodeId}`. |
 
 ## Data Type Mapping
 
diff --git a/docs/data_node/load_node/hdfs.md b/docs/data_node/load_node/hdfs.md
index 5ac161be77..11da0bc95e 100644
--- a/docs/data_node/load_node/hdfs.md
+++ b/docs/data_node/load_node/hdfs.md
@@ -111,7 +111,7 @@ The file sink supports file compactions, which allows applications to have small
       <td>optional</td>
       <td style={{wordWrap: 'break-word'}}>(none)</td>
       <td>String</td>
-      <td>Inlong metric label, format of value is groupId=xxgroup&streamId=xxstream&nodeId=xxnode.</td> 
+      <td>Inlong metric label, format of value is groupId=[groupId]&streamId=[streamId]&nodeId=[nodeId].</td> 
     </tr>
     </tbody>
 </table>
diff --git a/docs/data_node/load_node/hive.md b/docs/data_node/load_node/hive.md
index d9723c3482..76063f0808 100644
--- a/docs/data_node/load_node/hive.md
+++ b/docs/data_node/load_node/hive.md
@@ -134,7 +134,7 @@ TODO: It will be supported in the future.
       <td>optional</td>
       <td style={{wordWrap: 'break-word'}}>(none)</td>
       <td>String</td>
-      <td>Inlong metric label, format of value is groupId=xxgroup&streamId=xxstream&nodeId=xxnode.</td> 
+      <td>Inlong metric label, format of value is groupId=[groupId]&streamId=[streamId]&nodeId=[nodeId].</td> 
     </tr>
     </tbody>
 </table>
diff --git a/docs/data_node/load_node/iceberg.md b/docs/data_node/load_node/iceberg.md
index a1ade23add..3eb0a67e43 100644
--- a/docs/data_node/load_node/iceberg.md
+++ b/docs/data_node/load_node/iceberg.md
@@ -259,7 +259,7 @@ Iceberg support schema evolution from source table to target table in multiple s
 | clients          | optional for hive catalog                   | 2       | Integer | The Hive metastore client pool size, default value is 2.     |
 | warehouse        | optional for hadoop catalog or hive catalog | (none)  | String  | For Hive catalog,is the Hive warehouse location, users should specify this path if neither set the `hive-conf-dir` to specify a location containing a `hive-site.xml` configuration file nor add a correct `hive-site.xml` to classpath. For hadoop catalog,The HDFS directory to store metadata files and data files. |
 | hive-conf-dir    | optional for hive catalog                   | (none)  | String  | Path to a directory containing a `hive-site.xml` configuration file which will be used to provide custom Hive configuration values. The value of `hive.metastore.warehouse.dir` from `<hive-conf-dir>/hive-site.xml` (or hive configure file from classpath) will be overwrote with the `warehouse` value if setting both `hive-conf-dir` and `warehouse` when creating iceberg catalog. |
-| inlong.metric.labels | optional | (none) | String | Inlong metric label, format of value is groupId=xxgroup&streamId=xxstream&nodeId=xxnode. |
+| inlong.metric.labels | optional | (none) | String | Inlong metric label, format of value is groupId=`{groupId}`&streamId=`{streamId}`&nodeId=`{nodeId}`. |
 | sink.multiple.enable | optional                         | false  | Boolean | Whether to enable multiple sink            |
 | sink.multiple.schema-update.policy | optional           | TRY_IT_BEST | Enum | The policy to handle the inconsistency between the schema in the data and the schema of the target table <br/>TRY_IT_BEST: try best, deal with as much as possible, ignore it if can't handled.<br/>  IGNORE_WITH_LOG:ignore it and log it,ignore this table later.<br/> THROW_WITH_STOP:throw exception and stop the job, until user deal with schema conflict and job restore.
 | sink.multiple.pk-auto-generated | optional              | false  | Boolean  | Whether auto generate primary key, regard all field combined as primary key in multiple sink scenes. |
diff --git a/docs/data_node/load_node/kafka.md b/docs/data_node/load_node/kafka.md
index bb42c88307..28f361cc1b 100644
--- a/docs/data_node/load_node/kafka.md
+++ b/docs/data_node/load_node/kafka.md
@@ -97,7 +97,7 @@ TODO: It will be supported in the future.
 | sink.multiple.partition-pattern | optional | (none) | String |  Dynamic partition extraction pattern, like '${VARIABLE_NAME}' which is only used in kafka multiple sink scenarios and is valid when 'format' is 'raw'. |
 | sink.semantic | optional | at-least-once | String | Defines the delivery semantic for the Kafka sink. Valid enumerationns are 'at-least-once', 'exactly-once' and 'none'. See [Consistency guarantees](https://nightlies.apache.org/flink/flink-docs-release-1.13/docs/connectors/table/kafka/#consistency-guarantees) for more details. |
 | sink.parallelism | optional | (none) | Integer | Defines the parallelism of the Kafka sink operator. By default, the parallelism is determined by the framework using the same parallelism of the upstream chained operator. |
-| inlong.metric.labels | optional | (none) | String | Inlong metric label, format of value is groupId=xxgroup&streamId=xxstream&nodeId=xxnode. |
+| inlong.metric.labels | optional | (none) | String | Inlong metric label, format of value is groupId=`{groupId}`&streamId=`{streamId}`&nodeId=`{nodeId}`. |
 
 ## Available Metadata
 
diff --git a/docs/data_node/load_node/mysql.md b/docs/data_node/load_node/mysql.md
index 5639fc1396..1f9a940370 100644
--- a/docs/data_node/load_node/mysql.md
+++ b/docs/data_node/load_node/mysql.md
@@ -97,7 +97,7 @@ TODO: It will be supported in the future.
 | sink.buffer-flush.interval | optional | 1s | Duration | The flush interval mills, over this time, asynchronous threads will flush data. Can be set to '0' to disable it. Note, 'sink.buffer-flush.max-rows' can be set to '0' with the flush interval set allowing for complete async processing of buffered actions. | |
 | sink.max-retries | optional | 3 | Integer | The max retry times if writing records to database failed. |
 | sink.parallelism | optional | (none) | Integer | Defines the parallelism of the JDBC sink operator. By default, the parallelism is determined by the framework using the same parallelism of the upstream chained operator. |
-| inlong.metric.labels | optional | (none) | String | Inlong metric label, format of value is groupId=xxgroup&streamId=xxstream&nodeId=xxnode. |
+| inlong.metric.labels | optional | (none) | String | Inlong metric label, format of value is groupId=`{groupId}`&streamId=`{streamId}`&nodeId=`{nodeId}`. |
 
 ## Data Type Mapping
 
diff --git a/docs/data_node/load_node/oracle.md b/docs/data_node/load_node/oracle.md
index f7c335ef6a..7f54c53444 100644
--- a/docs/data_node/load_node/oracle.md
+++ b/docs/data_node/load_node/oracle.md
@@ -98,7 +98,7 @@ TODO: It will be supported in the future.
 | sink.max-retries | optional | 3 | Integer | The max retry times if writing records to database failed. |
 | sink.parallelism | optional | (none) | Integer | Defines the parallelism of the JDBC sink operator. By default, the parallelism is determined by the framework using the same parallelism of the upstream chained operator. |
 | sink.ignore.changelog | optional | false | Boolean |  Ignore all `RowKind`, ingest them as `INSERT`. |
-| inlong.metric.labels | optional | (none) | String | Inlong metric label, format of value is groupId=xxgroup&streamId=xxstream&nodeId=xxnode. |
+| inlong.metric.labels | optional | (none) | String | Inlong metric label, format of value is groupId=`{groupId}`&streamId=`{streamId}`&nodeId=`{nodeId}`. |
 
 ## Data Type Mapping
 
diff --git a/docs/data_node/load_node/postgresql.md b/docs/data_node/load_node/postgresql.md
index 69cf18a926..648f7e6949 100644
--- a/docs/data_node/load_node/postgresql.md
+++ b/docs/data_node/load_node/postgresql.md
@@ -97,7 +97,7 @@ TODO: It will be supported in the future.
 | sink.max-retries | optional | 3 | Integer | The max retry times if writing records to database failed. |
 | sink.parallelism | optional | (none) | Integer | Defines the parallelism of the JDBC sink operator. By default, the parallelism is determined by the framework using the same parallelism of the upstream chained operator. |
 | sink.ignore.changelog | optional | false | Boolean |  Ignore all `RowKind`, ingest them as `INSERT`. |
-| inlong.metric.labels | optional | (none) | String | Inlong metric label, format of value is groupId=xxgroup&streamId=xxstream&nodeId=xxnode. |
+| inlong.metric.labels | optional | (none) | String | Inlong metric label, format of value is groupId=`{groupId}`&streamId=`{streamId}`&nodeId=`{nodeId}`. |
 
 ## Data Type Mapping
 
diff --git a/docs/data_node/load_node/sqlserver.md b/docs/data_node/load_node/sqlserver.md
index be6bc84eba..fe928c6caa 100644
--- a/docs/data_node/load_node/sqlserver.md
+++ b/docs/data_node/load_node/sqlserver.md
@@ -96,7 +96,7 @@ TODO: It will be supported in the future.
 | sink.max-retries | optional | 3 | Integer | The max retry times if writing records to database failed. |
 | sink.parallelism | optional | (none) | Integer | Defines the parallelism of the JDBC sink operator. By default, the parallelism is determined by the framework using the same parallelism of the upstream chained operator. |
 | sink.ignore.changelog | optional | false | Boolean |  Ignore all `RowKind`, ingest them as `INSERT`. |
-| inlong.metric.labels | optional | (none) | String | Inlong metric label, format of value is groupId=xxgroup&streamId=xxstream&nodeId=xxnode. |
+| inlong.metric.labels | optional | (none) | String | Inlong metric label, format of value is groupId=`{groupId}`&streamId=`{streamId}`&nodeId=`{nodeId}`. |
 
 ## Data Type Mapping
 
diff --git a/docs/data_node/load_node/starrocks.md b/docs/data_node/load_node/starrocks.md
index 193d8bf5e5..b72d86d677 100644
--- a/docs/data_node/load_node/starrocks.md
+++ b/docs/data_node/load_node/starrocks.md
@@ -295,6 +295,7 @@ TODO: It will be supported in the future.
 | sink.multiple.format              | optional   | (none)            | string   | The format of multiple sink, it represents the real format of the raw binary data. can be `canal-json` or `debezium-json` at present. See [kafka -- Dynamic Topic Extraction](https://github.com/apache/inlong-website/blob/master/docs/data_node/load_node/kafka.md) for more details.  |
 | sink.multiple.database-pattern    | optional   | (none)            | string   | Extract database name from the raw binary data, this is only used in the multiple sink writing scenario.                 | 
 | sink.multiple.table-pattern       | optional   | (none)            | string   | Extract table name from the raw binary data, this is only used in the multiple sink writing scenario. |
+| inlong.metric.labels | optional | (none) | String | Inlong metric label, format of value is groupId=`{groupId}`&streamId=`{streamId}`&nodeId=`{nodeId}`. |
 
 ## Data Type Mapping
 
diff --git a/docs/data_node/load_node/tdsql-postgresql.md b/docs/data_node/load_node/tdsql-postgresql.md
index d35fa0e347..9bd3003516 100644
--- a/docs/data_node/load_node/tdsql-postgresql.md
+++ b/docs/data_node/load_node/tdsql-postgresql.md
@@ -96,7 +96,7 @@ TODO: It will be supported in the future.
 | sink.max-retries | optional | 3 | Integer | The max retry times if writing records to database failed. |
 | sink.parallelism | optional | (none) | Integer | Defines the parallelism of the JDBC sink operator. By default, the parallelism is determined by the framework using the same parallelism of the upstream chained operator. |
 | sink.ignore.changelog | optional | false | Boolean |  Ignore all `RowKind`, ingest them as `INSERT`. |
-| inlong.metric.labels | optional | (none) | String | Inlong metric label, format of value is groupId=xxgroup&streamId=xxstream&nodeId=xxnode. |
+| inlong.metric.labels | optional | (none) | String | Inlong metric label, format of value is groupId=`{groupId}`&streamId=`{streamId}`&nodeId=`{nodeId}`. |
 
 ## Data Type Mapping
 
diff --git a/docs/modules/sort/metrics.md b/docs/modules/sort/metrics.md
index bf56b4ff3d..6807fda978 100644
--- a/docs/modules/sort/metrics.md
+++ b/docs/modules/sort/metrics.md
@@ -5,12 +5,14 @@ sidebar_position: 4
 
 ## Overview
 
-We add metric computing for node. Sort will compute metric when user just need add with option `inlong.metric.labels` that includes `groupId=xxgroup&streamId=xxstream&nodeId=xxnode`.
+Sort computes metrics for each node. To enable them, the user only needs to add the option `inlong.metric.labels`, whose value has the format groupId=`{groupId}`&streamId=`{streamId}`&nodeId=`{nodeId}`.
 Sort will export metric by flink metric group, So user can use [metric reporter](https://nightlies.apache.org/flink/flink-docs-release-1.13/zh/docs/deployment/metric_reporters/) to get metric data.
 
 ## Metric
 
-### supporting extract node
+### Supporting extract node
+
+#### Node level metric
 
 | metric name | extract node | description |
 |-------------|--------------|-------------|
@@ -19,14 +21,59 @@ Sort will export metric by flink metric group, So user can use [metric reporter]
 | groupId_streamId_nodeId_numRecordsInPerSecond | kafka,mongodb-cdc,mysql-cdc,oracle-cdc,postgresql-cdc,pulsar,sqlserver-cdc | input records per second |
 | groupId_streamId_nodeId_numBytesInPerSecond | kafka,mongodb-cdc,mysql-cdc,oracle-cdc,postgresql-cdc,pulsar,sqlserver-cdc | input bytes number per second |
 
+#### Table level metric
+These metrics are reported for whole-database synchronization.
+
+| Metric name | Extract node | Description |
+|-------------|--------------|-------------|
+| groupId_streamId_nodeId_database_table_numRecordsIn | mysql-cdc | input records number |
+| groupId_streamId_nodeId_database_schema_table_numRecordsIn | oracle-cdc,postgresql-cdc | input records number |
+| groupId_streamId_nodeId_database_collection_numRecordsIn | mongodb-cdc | input records number |
+| groupId_streamId_nodeId_database_table_numBytesIn | mysql-cdc | input bytes number |
+| groupId_streamId_nodeId_database_schema_table_numBytesIn | oracle-cdc,postgresql-cdc | input bytes number |
+| groupId_streamId_nodeId_database_collection_numBytesIn | mongodb-cdc | input bytes number |
+| groupId_streamId_nodeId_database_table_numRecordsInPerSecond | mysql-cdc | input records number per second |
+| groupId_streamId_nodeId_database_schema_table_numRecordsInPerSecond | oracle-cdc,postgresql-cdc | input records number per second |
+| groupId_streamId_nodeId_database_collection_numRecordsInPerSecond | mongodb-cdc | input records number per second |
+| groupId_streamId_nodeId_database_table_numBytesInPerSecond | mysql-cdc | input bytes number per second |
+| groupId_streamId_nodeId_database_schema_table_numBytesInPerSecond | oracle-cdc,postgresql-cdc | input bytes number per second |
+| groupId_streamId_nodeId_database_collection_numBytesInPerSecond | mongodb-cdc | input bytes number per second |
+
 ### supporting load node
 
-| metric name | load node | description |
+#### Node level metric
+
+| Metric name | Load node | Description |
 |-------------|-----------|-------------|
 | groupId_streamId_nodeId_numRecordsOut | clickhouse,elasticsearch,greenplum,hbase,<br/>hdfs,hive,iceberg,kafka,mysql,<br/>oracle,postgresql,sqlserver,tdsql-postgresql | out records number |
 | groupId_streamId_nodeId_numBytesOut |  clickhouse,elasticsearch,greenplum,hbase,<br/>hdfs,hive,iceberg,kafka,mysql,<br/>oracle,postgresql,sqlserver,tdsql-postgresql | output byte number |
 | groupId_streamId_nodeId_numRecordsOutPerSecond |  clickhouse,elasticsearch,greenplum,hbase,<br/>hdfs,hive,iceberg,kafka,mysql,<br/>oracle,postgresql,sqlserver,tdsql-postgresql | output records per second |
 | groupId_streamId_nodeId_numBytesOutPerSecond |  clickhouse,elasticsearch,greenplum,hbase,<br/>hdfs,hive,iceberg,kafka,mysql,<br/>oracle,postgresql,sqlserver,tdsql-postgresql | output bytes  per second |
+| groupId_streamId_nodeId_dirtyRecordsOut |  clickhouse,elasticsearch,greenplum,hbase,<br/>hdfs,hive,iceberg,kafka,mysql,<br/>oracle,postgresql,sqlserver,tdsql-postgresql | dirty records number |
+| groupId_streamId_nodeId_dirtyBytesOut |  clickhouse,elasticsearch,greenplum,hbase,<br/>hdfs,hive,iceberg,kafka,mysql,<br/>oracle,postgresql,sqlserver,tdsql-postgresql | dirty bytes number |
+
+#### Table level metric
+
+| Metric name | Load node | Description |
+|-------------|-----------|-------------|
+| groupId_streamId_nodeId_database_table_numRecordsOut | doris,iceberg,starRocks | out records number |
+| groupId_streamId_nodeId_database_schema_table_numRecordsOut | postgresql | out records number |
+| groupId_streamId_nodeId_topic_numRecordsOut | kafka | out records number |
+| groupId_streamId_nodeId_database_table_numBytesOut | doris,iceberg,starRocks | out byte number |
+| groupId_streamId_nodeId_database_schema_table_numBytesOut | postgresql | out byte number |
+| groupId_streamId_nodeId_topic_numBytesOut | kafka | out byte number |
+| groupId_streamId_nodeId_database_table_numRecordsOutPerSecond | doris,iceberg,starRocks | out records number per second |
+| groupId_streamId_nodeId_database_schema_table_numRecordsOutPerSecond | postgresql | out records number per second |
+| groupId_streamId_nodeId_topic_numRecordsOutPerSecond | kafka | out records number per second |
+| groupId_streamId_nodeId_database_table_numBytesOutPerSecond | doris,iceberg,starRocks | out bytes number per second |
+| groupId_streamId_nodeId_database_schema_table_numBytesOutPerSecond | postgresql | out bytes number per second |
+| groupId_streamId_nodeId_topic_numBytesOutPerSecond | kafka | out bytes number per second |
+| groupId_streamId_nodeId_database_table_dirtyRecordsOut | doris,iceberg,starRocks | dirty records number |
+| groupId_streamId_nodeId_database_schema_table_dirtyRecordsOut | postgresql | dirty records number |
+| groupId_streamId_nodeId_topic_dirtyRecordsOut | kafka | dirty records number |
+| groupId_streamId_nodeId_database_table_dirtyBytesOut | doris,iceberg,starRocks | dirty bytes number |
+| groupId_streamId_nodeId_database_schema_table_dirtyBytesOut | postgresql | dirty bytes number |
+| groupId_streamId_nodeId_topic_dirtyBytesOut | kafka | dirty bytes number |
 
 ## Usage
 
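Since Sort exposes these metrics through the Flink metric group, any standard Flink metric reporter can collect them. A minimal flink-conf.yaml sketch, assuming the Prometheus reporter jar from the Flink distribution has been made available to the cluster (the port is arbitrary):

    metrics.reporter.prom.class: org.apache.flink.metrics.prometheus.PrometheusReporter
    metrics.reporter.prom.port: 9249

With 'inlong.metric.labels' = 'groupId=my_group&streamId=my_stream&nodeId=my_node' (hypothetical values), a node-level counter is then reported under a name like my_group_my_stream_my_node_numRecordsIn, and a mysql-cdc table-level counter under my_group_my_stream_my_node_mydb_mytable_numRecordsIn, following the naming scheme in the tables above.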
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/data_node/extract_node/kafka.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/data_node/extract_node/kafka.md
index af259afc3f..ff61be058e 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/data_node/extract_node/kafka.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/data_node/extract_node/kafka.md
@@ -108,7 +108,7 @@ TODO: 将在未来支持此功能。
 | scan.startup.specific-offsets | 可选 | (none) | String | 在使用 'specific-offsets' 启动模式时为每个 partition 指定 offset,例如 'partition:0,offset:42;partition:1,offset:300'。 |
 | scan.startup.timestamp-millis | 可选 | (none) | Long | 在使用 'timestamp' 启动模式时指定启动的时间戳(单位毫秒)。 |
 | scan.topic-partition-discovery.interval | 可选 | (none) | Duration | Consumer 定期探测动态创建的 Kafka topic 和 partition 的时间间隔。 |
-| inlong.metric | 可选 | (none) | String | inlong metric 的标签值,该值的构成为groupId&streamId&nodeId。|
+| inlong.metric.labels | 可选 | (none) | String | inlong metric 的标签值,该值的构成为groupId=`{groupId}`&streamId=`{streamId}`&nodeId=`{nodeId}`。|
 | sink.ignore.changelog | 可选 | false | 布尔型 | 支持所有类型的 changelog 流 ingest 到 Kafka。 |
 
 ## 可用的元数据字段
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/data_node/extract_node/mongodb-cdc.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/data_node/extract_node/mongodb-cdc.md
index 8a0a36f352..ad7f76b0a9 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/data_node/extract_node/mongodb-cdc.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/data_node/extract_node/mongodb-cdc.md
@@ -134,7 +134,7 @@ TODO: 未来会支持
 | poll.max.batch.size       | 可选         | 1000       | Integer  | 轮询新数据时,单个批次中包含的最大更改流文档数。             |
 | poll.await.time.ms        | 可选         | 1500       | Integer  | 在更改流上检查新结果之前等待的时间量。                       |
 | heartbeat.interval.ms     | 可选         | 0          | Integer  | 发送心跳消息之间的时间长度(以毫秒为单位)。使用 0 禁用。    |
-| inlong.metric             | 可选         | (none)     | String   | inlong metric 的标签值,该值的构成为groupId&streamId&nodeId。|
+| inlong.metric.labels             | 可选         | (none)     | String   | inlong metric 的标签值,该值的构成为groupId=`{groupId}`&streamId=`{streamId}`&nodeId=`{nodeId}`。|
 
 ## 可用元数据
 
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/data_node/extract_node/mysql-cdc.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/data_node/extract_node/mysql-cdc.md
index 40932bf45f..c068dd4e48 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/data_node/extract_node/mysql-cdc.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/data_node/extract_node/mysql-cdc.md
@@ -312,11 +312,11 @@ TODO: 将在未来支持此功能。
           详细了解 <a href="https://debezium.io/documentation/reference/1.5/connectors/mysql.html#mysql-connector-properties">Debezium 的 MySQL 连接器属性。</a></td> 
     </tr>
     <tr>
-      <td>inlong.metric</td>
+      <td>inlong.metric.labels</td>
       <td>可选</td>
       <td style={{wordWrap: 'break-word'}}>(none)</td>
       <td>String</td>
-      <td>inlong metric 的标签值,该值的构成为groupId&streamId&nodeId。</td> 
+      <td>inlong metric 的标签值,该值的构成为groupId=[groupId]&streamId=[streamId]&nodeId=[nodeId]。</td> 
     </tr>
     </tbody>
 </table>
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/data_node/extract_node/oracle-cdc.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/data_node/extract_node/oracle-cdc.md
index fdf261a6f4..d1740608a9 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/data_node/extract_node/oracle-cdc.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/data_node/extract_node/oracle-cdc.md
@@ -323,11 +323,11 @@ Oracle CDC 消费者的可选启动模式,有效枚举为"initial"
           详细了解 <a href="https://debezium.io/documentation/reference/1.5/connectors/oracle.html#oracle-connector-properties">Debezium 的 Oracle 连接器属性</a></td> 
      </tr>
      <tr>
-       <td>inlong.metric</td>
+       <td>inlong.metric.labels</td>
        <td>可选</td>
        <td style={{wordWrap: 'break-word'}}>(none)</td>
        <td>String</td>
-       <td>inlong metric 的标签值,该值的构成为groupId&streamId&nodeId。</td> 
+       <td>inlong metric 的标签值,该值的构成为groupId=[groupId]&streamId=[streamId]&nodeId=[nodeId]。</td> 
      </tr>
      <tr>
        <td>source.multiple.enable</td>
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/data_node/extract_node/postgresql-cdc.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/data_node/extract_node/postgresql-cdc.md
index ff15c847fd..3230322429 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/data_node/extract_node/postgresql-cdc.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/data_node/extract_node/postgresql-cdc.md
@@ -136,7 +136,7 @@ TODO: 将在未来支持此功能。
 | decoding.plugin.name | 可选 | decoderbufs | String | 服务器上安装的 Postgres 逻辑解码插件的名称。 支持的值是 decoderbufs、wal2json、wal2json_rds、wal2json_streaming、wal2json_rds_streaming 和 pgoutput。 |
 | slot.name | 可选 | flink | String | PostgreSQL 逻辑解码槽的名称,它是为从特定数据库/模式的特定插件流式传输更改而创建的。 服务器使用此插槽将事件流式传输到您正在配置的连接器。 插槽名称必须符合 PostgreSQL 复制插槽命名规则,其中规定:“每个复制插槽都有一个名称,可以包含小写字母、数字和下划线字符。” |
 | debezium.* | 可选 | (none) | String | 将 Debezium 的属性传递给用于从 Postgres 服务器捕获数据更改的 Debezium Embedded Engine。 例如:“debezium.snapshot.mode”=“never”。 查看更多关于 [Debezium 的 Postgres 连接器属性](https://debezium.io/documentation/reference/1.5/connectors/postgresql.html#postgresql-connector-properties)。 |
-| inlong.metric | 可选 | (none) | String | inlong metric 的标签值,该值的构成为groupId&streamId&nodeId。|
+| inlong.metric.labels | 可选 | (none) | String | inlong metric 的标签值,该值的构成为groupId=`{groupId}`&streamId=`{streamId}`&nodeId=`{nodeId}`。|
 
 :::caution
 - `slot.name` 建议为不同的表设置以避免潜在的 PSQLException: ERROR: replication slot "flink" is active for PID 974 error。  
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/data_node/extract_node/pulsar.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/data_node/extract_node/pulsar.md
index 7e8ffb86b9..5261b7f9d4 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/data_node/extract_node/pulsar.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/data_node/extract_node/pulsar.md
@@ -106,7 +106,7 @@ TODO
 | key.fields-prefix             | 可选     | (none)        | String | 为 key 格式的所有字段定义自定义前缀,以避免与 value 格式的字段名称冲突。默认情况下,前缀为空。如果定义了自定义前缀,`key.fields`则使用表架构和。 |
 | format or value.format        | 必需     | (none)        | String | 使用前缀设置名称。当以键格式构造数据类型时,前缀被移除,并且在键格式中使用非前缀名称。Pulsar 消息值序列化格式,支持 JSON、Avro 等。更多信息请参见 Flink 格式。 |
 | value.fields-include          | 可选     | ALL           | Enum   | Pulsar 消息值包含字段策略、可选的 ALL 和 EXCEPT_KEY。        |
-| inlong.metric | 可选 | (none) | String | inlong metric 的标签值,该值的构成为groupId&streamId&nodeId。|
+| inlong.metric.labels | 可选 | (none) | String | inlong metric 的标签值,该值的构成为groupId=`{groupId}`&streamId=`{streamId}`&nodeId=`{nodeId}`。|
 
 ## 可用元数据
 
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/data_node/extract_node/sqlserver-cdc.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/data_node/extract_node/sqlserver-cdc.md
index d33648a996..8b45c17bf3 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/data_node/extract_node/sqlserver-cdc.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/data_node/extract_node/sqlserver-cdc.md
@@ -186,11 +186,11 @@ TODO
       <td>SQLServer 数据库连接配置时区。 例如: "Asia/Shanghai"。</td>
     </tr>
     <tr>
-      <td>inlong.metric</td>
+      <td>inlong.metric.labels</td>
       <td>可选</td>
       <td style={{wordWrap: 'break-word'}}>(none)</td>
       <td>String</td>
-      <td>inlong metric 的标签值,该值的构成为groupId&streamId&nodeId。</td> 
+      <td>inlong metric 的标签值,该值的构成为groupId=[groupId]&streamId=[streamId]&nodeId=[nodeId]。</td> 
      </tr>
     </tbody>
 </table>
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/data_node/load_node/clickhouse.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/data_node/load_node/clickhouse.md
index 67cdf8d11e..7383ddd5be 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/data_node/load_node/clickhouse.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/data_node/load_node/clickhouse.md
@@ -62,7 +62,7 @@ CREATE TABLE `clickhouse_load_table`(
   'url' = 'jdbc:clickhouse://localhost:8123/demo',
   'username' = 'inlong',
   'password' = 'inlong',
-  'table-name' = 'user'
+  'table-name' = 'demo.user'
 )
 
 -- 写数据到 ClickHouse
@@ -88,7 +88,7 @@ TODO: 将在未来支持此功能。
 | connector | 必选 | (none) | String | 指定使用什么类型的连接器,这里应该是 'jdbc-inlong'。 |
 | url | 必选 | (none) | String | JDBC 数据库 url。 |
 | dialect-impl | 必选 | (none) |  String | `org.apache.inlong.sort.jdbc.dialect.ClickHouseDialect` |
-| table-name | 必选 | (none) | String | 连接到 JDBC 表的名称。 |
+| table-name | 必选 | (none) | String | 连接到 JDBC 表的名称。例子:database.tableName |
 | driver | 可选 | (none) | String | 用于连接到此 URL 的 JDBC 驱动类名,如果不设置,将自动从 URL 中推导。 |
 | username | 可选 | (none) | String | JDBC 用户名。如果指定了 'username' 和 'password' 中的任一参数,则两者必须都被指定。 |
 | password | 可选 | (none) | String | JDBC 密码。 |
@@ -98,7 +98,7 @@ TODO: 将在未来支持此功能。
 | sink.max-retries | 可选 | 3 | Integer | 写入记录到数据库失败后的最大重试次数。 |
 | sink.parallelism | 可选 | (none) | Integer | 用于定义 JDBC sink 算子的并行度。默认情况下,并行度是由框架决定:使用与上游链式算子相同的并行度。 |
 | sink.ignore.changelog | 可选 | false | Boolean |  忽略所有 RowKind 类型的变更日志,将它们当作 INSERT 的数据来采集。 |
-| inlong.metric | 可选 | (none) | String | inlong metric 的标签值,该值的构成为groupId&streamId&nodeId。|
+| inlong.metric.labels | 可选 | (none) | String | inlong metric 的标签值,该值的构成为groupId=`{groupId}`&streamId=`{streamId}`&nodeId=`{nodeId}`。|
 
 ## 数据类型映射
 
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/data_node/load_node/doris.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/data_node/load_node/doris.md
index ceadea0e21..d7d3da3367 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/data_node/load_node/doris.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/data_node/load_node/doris.md
@@ -302,6 +302,7 @@ TODO: 将在未来支持此功能。
 | sink.multiple.database-pattern    | 可选   | (none)            | string   | 多表写入时,从源端二进制数据中按照 `sink.multiple.database-pattern` 指定名称提取写入的数据库名称。 `sink.multiple.enable` 为true时有效。                 | 
 | sink.multiple.table-pattern       | 可选   | (none)            | string   | 多表写入时,从源端二进制数据中按照 `sink.multiple.table-pattern` 指定名称提取写入的表名。 `sink.multiple.enable` 为true时有效。                         |
 | sink.multiple.ignore-single-table-errors | 可选 | true         | boolean  | 多表写入时,是否忽略某个表写入失败。为 `true` 时,如果某个表写入异常,则不写入该表数据,其他表的数据正常写入。为 `false` 时,如果某个表写入异常,则所有表均停止写入。     |
+| inlong.metric.labels | 可选 | (none) | String | inlong metric 的标签值,该值的构成为groupId=`{groupId}`&streamId=`{streamId}`&nodeId=`{nodeId}`。|
 
 ## 数据类型映射
 
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/data_node/load_node/elasticsearch.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/data_node/load_node/elasticsearch.md
index d10a6ceb85..838fe01600 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/data_node/load_node/elasticsearch.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/data_node/load_node/elasticsearch.md
@@ -246,11 +246,11 @@ TODO: 将在未来支持这个特性。
       </td>
     </tr>
     <tr>
-      <td>inlong.metric</td>
+      <td>inlong.metric.labels</td>
       <td>可选</td>
       <td style={{wordWrap: 'break-word'}}>(none)</td>
       <td>String</td>
-      <td>inlong metric 的标签值,该值的构成为groupId&streamId&nodeId。</td> 
+      <td>inlong metric 的标签值,该值的构成为groupId=[groupId]&streamId=[streamId]&nodeId=[nodeId]。</td> 
      </tr>
     </tbody>
 </table>
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/data_node/load_node/greenplum.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/data_node/load_node/greenplum.md
index 5b886d7540..be22caa540 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/data_node/load_node/greenplum.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/data_node/load_node/greenplum.md
@@ -96,7 +96,7 @@ TODO: 将在未来支持此功能。
 | sink.max-retries | 可选 | 3 | Integer | 写入记录到数据库失败后的最大重试次数。 |
 | sink.parallelism | 可选 | (none) | Integer | 用于定义 JDBC sink 算子的并行度。默认情况下,并行度是由框架决定:使用与上游链式算子相同的并行度。 |
 | sink.ignore.changelog | 可选 | false | Boolean |  忽略所有 RowKind 类型的变更日志,将它们当作 INSERT 的数据来采集。 |
-| inlong.metric | 可选 | (none) | String | inlong metric 的标签值,该值的构成为groupId&streamId&nodeId。|
+| inlong.metric.labels | 可选 | (none) | String | inlong metric 的标签值,该值的构成为groupId=`{groupId}`&streamId=`{streamId}`&nodeId=`{nodeId}`。|
 
 ## 数据类型映射
 
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/data_node/load_node/hbase.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/data_node/load_node/hbase.md
index 644f2fdcfe..dbb7b68898 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/data_node/load_node/hbase.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/data_node/load_node/hbase.md
@@ -94,7 +94,7 @@ TODO: 将在未来支持此功能。
 | lookup.cache.ttl | 可选 | (none) | Duration | 查找缓存中每一行的最大生存时间,在这段时间内,最老的行将过期。注意:"lookup.cache.max-rows" 和 "lookup.cache.ttl" 必须同时被设置。默认情况下,查找缓存是禁用的。 |
 | lookup.max-retries | 可选 | 3 | Integer | 查找数据库失败时的最大重试次数。 |
 | properties.* | 可选 | (none) | String | 可以设置任意 HBase 的配置项。后缀名必须匹配在 [HBase 配置文档](https://hbase.apache.org/2.3/book.html#hbase_default_configurations) 中定义的配置键。Flink 将移除 "properties." 配置键前缀并将变换后的配置键和值传入底层的 HBase 客户端。 例如您可以设置 'properties.hbase.security.authentication' = 'kerberos' 等kerberos认证参数。 |
-| inlong.metric | 可选 | (none) | String | inlong metric 的标签值,该值的构成为groupId&streamId&nodeId。|
+| inlong.metric.labels | 可选 | (none) | String | inlong metric 的标签值,该值的构成为groupId=`{groupId}`&streamId=`{streamId}`&nodeId=`{nodeId}`。|
 
 ## 数据类型映射
 
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/data_node/load_node/hdfs.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/data_node/load_node/hdfs.md
index 7e48fd2896..f8d212c59d 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/data_node/load_node/hdfs.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/data_node/load_node/hdfs.md
@@ -109,11 +109,11 @@ CREATE TABLE hdfs_load_node (
       <td>合并目标文件大小,默认值为滚动文件大小。</td>
     </tr>
     <tr>
-      <td>inlong.metric</td>
+      <td>inlong.metric.labels</td>
       <td>可选</td>
       <td style={{wordWrap: 'break-word'}}>(none)</td>
       <td>String</td>
-      <td>inlong metric 的标签值,该值的构成为groupId&streamId&nodeId。</td> 
+      <td>inlong metric 的标签值,该值的构成为groupId=[groupId]&streamId=[streamId]&nodeId=[nodeId]。</td> 
     </tr>
     </tbody>
 </table>
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/data_node/load_node/hive.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/data_node/load_node/hive.md
index 5f11e74986..b84e32212e 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/data_node/load_node/hive.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/data_node/load_node/hive.md
@@ -128,11 +128,11 @@ TODO: 未来版本支持
       支持同时指定多个提交策略:'metastore,success-file'。</td>
     </tr>
     <tr>
-      <td>inlong.metric</td>
+      <td>inlong.metric.labels</td>
       <td>可选</td>
       <td style={{wordWrap: 'break-word'}}>(none)</td>
       <td>String</td>
-      <td>inlong metric 的标签值,该值的构成为groupId&streamId&nodeId。</td> 
+      <td>inlong metric 的标签值,该值的构成为groupId=[groupId]&streamId=[streamId]&nodeId=[nodeId]。</td> 
      </tr>
     </tbody>
 </table>
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/data_node/load_node/iceberg.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/data_node/load_node/iceberg.md
index 5243e9534e..bd5f00d933 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/data_node/load_node/iceberg.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/data_node/load_node/iceberg.md
@@ -255,7 +255,7 @@ Iceberg在多表写入时支持同步源表结构变更到目标表(DDL同步
 | clients          | hive catalog可选                 | 2      | Integer | Hive Metastore 客户端池大小,默认值为 2                      |
 | warehouse        | hive catalog或hadoop catalog可选 | (none) | String  | 对于 Hive 目录,是 Hive 仓库位置,如果既不设置`hive-conf-dir`指定包含`hive-site.xml`配置文件的位置也不添加正确`hive-site.xml`的类路径,用户应指定此路径。对于hadoop目录,HDFS目录存放元数据文件和数据文件 |
 | hive-conf-dir    | hive catalog可选                 | (none) | String  | `hive-site.xml`包含将用于提供自定义 Hive 配置值的配置文件的目录的路径。如果同时设置和创建Iceberg目录时,`hive.metastore.warehouse.dir`from `<hive-conf-dir>/hive-site.xml`(或来自类路径的 hive 配置文件)的值将被该值覆盖。`warehouse``hive-conf-dir``warehouse` |
-| inlong.metric | 可选 | (none) | String | inlong metric 的标签值,该值的构成为groupId&streamId&nodeId。|
+| inlong.metric.labels | 可选 | (none) | String | inlong metric 的标签值,该值的构成为groupId=`{groupId}`&streamId=`{streamId}`&nodeId=`{nodeId}`。|
 | sink.multiple.enable | 可选                         | false  | Boolean | 是否开启多路写入            |
 | sink.multiple.schema-update.policy | 可选           | TRY_IT_BEST | Enum | 遇到数据中schema和目标表不一致时的处理策略<br/>TRY_IT_BEST:尽力而为,尽可能处理,处理不了的则忽略<br/>IGNORE_WITH_LOG:忽略并且记录日志,后续该表数据不再处理<br/> THROW_WITH_STOP:抛异常并且停止任务,直到用户手动处理schema不一致的情况
 | sink.multiple.pk-auto-generated | 可选              | false  | Boolean  | 是否自动生成主键,对于多路写入自动建表时当源表无主键时是否将所有字段当作主键  |
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/data_node/load_node/kafka.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/data_node/load_node/kafka.md
index 05568fc501..ba389cb15f 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/data_node/load_node/kafka.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/data_node/load_node/kafka.md
@@ -94,7 +94,7 @@ TODO: 将在未来支持此功能。
 | sink.multiple.partition-pattern | 可选 | (none) | String |  动态 Partition 提取模式, 形如 '${VARIABLE_NAME}'仅用于 Kafka 多 Sink 场景且当 'format' 为 'raw'、'sink.partitioner' 为 'raw-hash' 时有效。 |
 | sink.semantic | 可选 | at-least-once | String | 定义 Kafka sink 的语义。有效值为 'at-least-once','exactly-once' 和 'none'。请参阅 [一致性保证](https://nightlies.apache.org/flink/flink-docs-release-1.13/zh/docs/connectors/table/kafka/#%E4%B8%80%E8%87%B4%E6%80%A7%E4%BF%9D%E8%AF%81) 以获取更多细节。 |
 | sink.parallelism | 可选 | (none) | Integer | 定义 Kafka sink 算子的并行度。默认情况下,并行度由框架定义为与上游串联的算子相同。 |
-| inlong.metric | 可选 | (none) | String | inlong metric 的标签值,该值的构成为groupId&streamId&nodeId。|
+| inlong.metric.labels | 可选 | (none) | String | inlong metric 的标签值,该值的构成为groupId=`{groupId}`&streamId=`{streamId}`&nodeId=`{nodeId}`。|
 
 
 ## 可用的元数据字段
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/data_node/load_node/mysql.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/data_node/load_node/mysql.md
index b64e43bcbb..626a0f544e 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/data_node/load_node/mysql.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/data_node/load_node/mysql.md
@@ -95,7 +95,7 @@ TODO: 将在未来支持此功能。
 | sink.buffer-flush.interval | 可选 | 1s | Duration | flush 间隔时间,超过该时间后异步线程将 flush 数据。可以设置为 '0' 来禁用它。注意, 为了完全异步地处理缓存的 flush 事件,可以将 'sink.buffer-flush.max-rows' 设置为 '0' 并配置适当的 flush 时间间隔。 |
 | sink.max-retries | 可选 | 3 | Integer | 写入记录到数据库失败后的最大重试次数。 |
 | sink.parallelism | 可选 | (none) | Integer | 用于定义 JDBC sink 算子的并行度。默认情况下,并行度是由框架决定:使用与上游链式算子相同的并行度。 |
-| inlong.metric | 可选 | (none) | String | inlong metric 的标签值,该值的构成为groupId&streamId&nodeId。|
+| inlong.metric.labels | 可选 | (none) | String | inlong metric 的标签值,该值的构成为groupId=`{groupId}`&streamId=`{streamId}`&nodeId=`{nodeId}`。|
 
 ## 数据类型映射
 
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/data_node/load_node/oracle.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/data_node/load_node/oracle.md
index c15eece9f5..6caa085a96 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/data_node/load_node/oracle.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/data_node/load_node/oracle.md
@@ -95,7 +95,7 @@ TODO: 将在未来支持此功能。
 | sink.max-retries | 可选 | 3 | Integer | 写入记录到数据库失败后的最大重试次数。 |
 | sink.parallelism | 可选 | (none) | Integer | 用于定义 JDBC sink 算子的并行度。默认情况下,并行度是由框架决定:使用与上游链式算子相同的并行度。 |
 | sink.ignore.changelog | 可选 | false | Boolean |  忽略所有 RowKind 类型的变更日志,将它们当作 INSERT 的数据来采集。 |
-| inlong.metric | 可选 | (none) | String | inlong metric 的标签值,该值的构成为groupId&streamId&nodeId。|
+| inlong.metric.labels | 可选 | (none) | String | inlong metric 的标签值,该值的构成为groupId=`{groupId}`&streamId=`{streamId}`&nodeId=`{nodeId}`。|
 
 ## 数据类型映射
 
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/data_node/load_node/postgresql.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/data_node/load_node/postgresql.md
index 182d1c3e11..5ccd41e016 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/data_node/load_node/postgresql.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/data_node/load_node/postgresql.md
@@ -96,7 +96,7 @@ TODO: 将在未来支持此功能。
 | sink.max-retries | 可选 | 3 | Integer | 写入记录到数据库失败后的最大重试次数。 |
 | sink.parallelism | 可选 | (none) | Integer | 用于定义 JDBC sink 算子的并行度。默认情况下,并行度是由框架决定:使用与上游链式算子相同的并行度。 |
 | sink.ignore.changelog | 可选 | false | Boolean |  忽略所有 RowKind 类型的变更日志,将它们当作 INSERT 的数据来采集。 |
-| inlong.metric | 可选 | (none) | String | inlong metric 的标签值,该值的构成为groupId&streamId&nodeId。|
+| inlong.metric.labels | 可选 | (none) | String | inlong metric 的标签值,该值的构成为groupId=`{groupId}`&streamId=`{streamId}`&nodeId=`{nodeId}`。|
 
 ## 数据类型映射
 
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/data_node/load_node/sqlserver.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/data_node/load_node/sqlserver.md
index 954909f618..b785673fc5 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/data_node/load_node/sqlserver.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/data_node/load_node/sqlserver.md
@@ -94,7 +94,7 @@ TODO: 将在未来支持此功能。
 | sink.max-retries | 可选 | 3 | Integer | 写入记录到数据库失败后的最大重试次数。 |
 | sink.parallelism | 可选 | (none) | Integer | 用于定义 JDBC sink 算子的并行度。默认情况下,并行度是由框架决定:使用与上游链式算子相同的并行度。 |
 | sink.ignore.changelog | 可选 | false | Boolean |  忽略所有 RowKind 类型的变更日志,将它们当作 INSERT 的数据来采集。 |
-| inlong.metric | 可选 | (none) | String | inlong metric 的标签值,该值的构成为groupId&streamId&nodeId。|
+| inlong.metric.labels | 可选 | (none) | String | inlong metric 的标签值,该值的构成为groupId=`{groupId}`&streamId=`{streamId}`&nodeId=`{nodeId}`。|
 
 ## 数据映射
 
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/data_node/load_node/starrocks.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/data_node/load_node/starrocks.md
index 84de4b7573..6995dc23d2 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/data_node/load_node/starrocks.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/data_node/load_node/starrocks.md
@@ -292,6 +292,7 @@ TODO: 将在未来支持此功能。
 | sink.multiple.format              | 可选     | 无               | string   | 多表(整库)写入的数据格式,它表示connector之间流转的原始二进制数据的实际格式,目前支持`canal-json` 和 `debezium-json`。可以查看[kafka -- Dynamic Topic Extraction](https://github.com/apache/inlong-website/blob/master/docs/data_node/load_node/kafka.md)获取更多信息。  |
 | sink.multiple.database-pattern    | 可选     | 无               | string   | 从原始二进制数据中提取数据库名,仅在多表(整库)同步场景中使用。 | 
 | sink.multiple.table-pattern       | 可选     | 无               | string   | 从原始二进制数据中提取表名,仅在多表(整库)同步场景中使用。 |
+| inlong.metric.labels | 可选 | (none) | String | inlong metric 的标签值,该值的构成为groupId=`{groupId}`&streamId=`{streamId}`&nodeId=`{nodeId}`。|
 
 ## 数据类型映射
 
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/data_node/load_node/tdsql-postgresql.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/data_node/load_node/tdsql-postgresql.md
index 9c5e320f4a..650e7907dd 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/data_node/load_node/tdsql-postgresql.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/data_node/load_node/tdsql-postgresql.md
@@ -94,7 +94,7 @@ TODO: 将在未来支持此功能。
 | sink.max-retries | 可选 | 3 | Integer | 写入记录到数据库失败后的最大重试次数。 |
 | sink.parallelism | 可选 | (none) | Integer | 用于定义 JDBC sink 算子的并行度。默认情况下,并行度是由框架决定:使用与上游链式算子相同的并行度。 |
 | sink.ignore.changelog | 可选 | false | Boolean |  忽略所有 RowKind 类型的变更日志,将它们当作 INSERT 的数据来采集。 |
-| inlong.metric | 可选 | (none) | String | inlong metric 的标签值,该值的构成为groupId&streamId&nodeId。|
+| inlong.metric.labels | 可选 | (none) | String | inlong metric 的标签值,该值的构成为groupId=`{groupId}`&streamId=`{streamId}`&nodeId=`{nodeId}`。|
 
 ## 数据类型映射
 
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/modules/sort/metrics.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/modules/sort/metrics.md
index 7f05d6e7cf..82a3579335 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/modules/sort/metrics.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/modules/sort/metrics.md
@@ -5,28 +5,73 @@ sidebar_position: 4
 
 ## 概览
 
-我们为节点增加了指标计算。 用户添加 with 选项 `inlong.metric` 后 Sort 会计算指标,`inlong.metric` 选项的值由三部分构成:`groupId&streamId&nodeId`。
+我们为节点增加了指标计算。 用户添加 with 选项 `inlong.metric.labels` 后 Sort 会计算指标,`inlong.metric.labels` 选项的值由三部分构成:groupId=`{groupId}`&streamId=`{streamId}`&nodeId=`{nodeId}`。
 用户可以使用 [metric reporter](https://nightlies.apache.org/flink/flink-docs-release-1.13/zh/docs/deployment/metric_reporters/) 去上报数据。
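+
+例如,下面是一段示意性的 Flink SQL,演示如何在节点的 with 选项中配置 `inlong.metric.labels`(连接器名称、库表与连接参数均为假设值,实际以各节点文档为准):
+
+```sql
+CREATE TABLE mysql_extract_node (
+  id INT,
+  name STRING
+) WITH (
+  'connector' = 'mysql-cdc-inlong',   -- 连接器名称为示例值,以 MySQL Extract 节点文档为准
+  'hostname' = 'localhost',
+  'port' = '3306',
+  'username' = 'inlong',
+  'password' = 'inlong',
+  'database-name' = 'test',
+  'table-name' = 'user',
+  -- 标签值由 groupId、streamId、nodeId 三部分构成(此处均为示例值)
+  'inlong.metric.labels' = 'groupId=g1&streamId=s1&nodeId=n1'
+);
+```
+
+按上述标签,上报的指标名形如 `g1_s1_n1_numRecordsIn`。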
 
 ## 指标
 
 ### 支持的 extract 节点
 
-| 指标名 | extract 节点 | 描述 |
+| 指标名 | Extract 节点 | 描述 |
 |-------------|--------------|-------------|
 | groupId_streamId_nodeId_numRecordsIn | kafka,mongodb-cdc,mysql-cdc,oracle-cdc,postgresql-cdc,pulsar,sqlserver-cdc | 输入记录数 |
 | groupId_streamId_nodeId_numBytesIn | kafka,mongodb-cdc,mysql-cdc,oracle-cdc,postgresql-cdc,pulsar,sqlserver-cdc | 输入字节数 |
 | groupId_streamId_nodeId_numRecordsInPerSecond | kafka,mongodb-cdc,mysql-cdc,oracle-cdc,postgresql-cdc,pulsar,sqlserver-cdc | 每秒输入记录数 |
 | groupId_streamId_nodeId_numBytesInPerSecond | kafka,mongodb-cdc,mysql-cdc,oracle-cdc,postgresql-cdc,pulsar,sqlserver-cdc | 每秒输入字节数 |
 
+#### 支持表级别指标
+适用于整库同步场景。
+
+| 指标名 | Extract 节点 | 描述 |
+|-------------|--------------|-------------|
+| groupId_streamId_nodeId_database_table_numRecordsIn | mysql-cdc | 输入记录数 |
+| groupId_streamId_nodeId_database_schema_table_numRecordsIn | oracle-cdc,postgresql-cdc | 输入记录数 |
+| groupId_streamId_nodeId_database_collection_numRecordsIn | mongodb-cdc | 输入记录数 |
+| groupId_streamId_nodeId_database_table_numBytesIn | mysql-cdc | 输入字节数 |
+| groupId_streamId_nodeId_database_schema_table_numBytesIn | oracle-cdc,postgresql-cdc | 输入字节数 |
+| groupId_streamId_nodeId_database_collection_numBytesIn | mongodb-cdc | 输入字节数 |
+| groupId_streamId_nodeId_database_table_numRecordsInPerSecond | mysql-cdc | 每秒输入记录数 |
+| groupId_streamId_nodeId_database_schema_table_numRecordsInPerSecond | oracle-cdc,postgresql-cdc | 每秒输入记录数 |
+| groupId_streamId_nodeId_database_collection_numRecordsInPerSecond | mongodb-cdc | 每秒输入记录数 |
+| groupId_streamId_nodeId_database_table_numBytesInPerSecond | mysql-cdc | 每秒输入字节数 |
+| groupId_streamId_nodeId_database_schema_table_numBytesInPerSecond | oracle-cdc,postgresql-cdc | 每秒输入字节数 |
+| groupId_streamId_nodeId_database_collection_numBytesInPerSecond | mongodb-cdc | 每秒输入字节数 |
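+
+例如,假设 `inlong.metric.labels` 为 groupId=`g1`&streamId=`s1`&nodeId=`n1`,mysql-cdc 同步 `db1.users` 表时,对应的表级指标名为 `g1_s1_n1_db1_users_numRecordsIn`(均为示例值)。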
+
 ### 支持的 load 节点
 
-| 指标名 | load 节点 | 描述 |
+| 指标名 | Load 节点 | 描述 |
 |-------------|-----------|-------------|
 | groupId_streamId_nodeId_numRecordsOut | clickhouse,elasticsearch,greenplum,hbase,<br/>hdfs,hive,iceberg,kafka,<br/>mysql,oracle,postgresql,sqlserver,tdsql-postgresql | 输出记录数 |
 | groupId_streamId_nodeId_numBytesOut |  clickhouse,elasticsearch,greenplum,hbase,<br/>hdfs,hive,iceberg,kafka,<br/>mysql,oracle,postgresql,sqlserver,tdsql-postgresql | 输出字节数 |
 | groupId_streamId_nodeId_numRecordsOutPerSecond |  clickhouse,elasticsearch,greenplum,<br/>hbase,hdfs,hive,iceberg,<br/>kafka,mysql,oracle,postgresql,sqlserver,tdsql-postgresql | 每秒输出记录数 |
 | groupId_streamId_nodeId_numBytesOutPerSecond |  clickhouse,elasticsearch,greenplum,<br/>hbase,hdfs,hive,iceberg,kafka,<br/>mysql,oracle,postgresql,sqlserver,tdsql-postgresql | 每秒输出字节数 |
+| groupId_streamId_nodeId_dirtyRecordsOut |  clickhouse,elasticsearch,greenplum,hbase,<br/>hdfs,hive,iceberg,kafka,mysql,<br/>oracle,postgresql,sqlserver,tdsql-postgresql | 输出脏数据记录数 |
+| groupId_streamId_nodeId_dirtyBytesOut |  clickhouse,elasticsearch,greenplum,hbase,<br/>hdfs,hive,iceberg,kafka,mysql,<br/>oracle,postgresql,sqlserver,tdsql-postgresql | 输出脏数据字节数 |
+
+
+#### 支持表级别指标
+适用于整库同步场景。
+
+| 指标名 | Load 节点 | 描述 |
+|-------------|-----------|-------------|
+| groupId_streamId_nodeId_database_table_numRecordsOut | doris,iceberg,starrocks | 输出记录数 |
+| groupId_streamId_nodeId_database_schema_table_numRecordsOut | postgresql | 输出记录数 |
+| groupId_streamId_nodeId_topic_numRecordsOut | kafka | 输出记录数 |
+| groupId_streamId_nodeId_database_table_numBytesOut | doris,iceberg,starrocks | 输出字节数 |
+| groupId_streamId_nodeId_database_schema_table_numBytesOut | postgresql | 输出字节数 |
+| groupId_streamId_nodeId_topic_numBytesOut | kafka | 输出字节数 |
+| groupId_streamId_nodeId_database_table_numRecordsOutPerSecond | doris,iceberg,starrocks | 每秒输出记录数 |
+| groupId_streamId_nodeId_database_schema_table_numRecordsOutPerSecond | postgresql | 每秒输出记录数 |
+| groupId_streamId_nodeId_topic_numRecordsOutPerSecond | kafka | 每秒输出记录数 |
+| groupId_streamId_nodeId_database_table_numBytesOutPerSecond | doris,iceberg,starrocks | 每秒输出字节数 |
+| groupId_streamId_nodeId_database_schema_table_numBytesOutPerSecond | postgresql | 每秒输出字节数 |
+| groupId_streamId_nodeId_topic_numBytesOutPerSecond | kafka | 每秒输出字节数 |
+| groupId_streamId_nodeId_database_table_dirtyRecordsOut | doris,iceberg,starrocks | 输出脏数据记录数 |
+| groupId_streamId_nodeId_database_schema_table_dirtyRecordsOut | postgresql | 输出脏数据记录数 |
+| groupId_streamId_nodeId_topic_dirtyRecordsOut | kafka | 输出脏数据记录数 |
+| groupId_streamId_nodeId_database_table_dirtyBytesOut | doris,iceberg,starrocks | 输出脏数据字节数 |
+| groupId_streamId_nodeId_database_schema_table_dirtyBytesOut | postgresql | 输出脏数据字节数 |
+| groupId_streamId_nodeId_topic_dirtyBytesOut | kafka | 输出脏数据字节数 |
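+
+同样地,假设 `inlong.metric.labels` 为 groupId=`g1`&streamId=`s1`&nodeId=`n2`,kafka 节点写入 topic `t1` 时,对应的指标名为 `g1_s1_n2_t1_numRecordsOut`(均为示例值)。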
 
 ## 用法
 
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/data_node/extract_node/kafka.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/data_node/extract_node/kafka.md
index af259afc3f..ff61be058e 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/data_node/extract_node/kafka.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/data_node/extract_node/kafka.md
@@ -108,7 +108,7 @@ TODO: 将在未来支持此功能。
 | scan.startup.specific-offsets | 可选 | (none) | String | 在使用 'specific-offsets' 启动模式时为每个 partition 指定 offset,例如 'partition:0,offset:42;partition:1,offset:300'。 |
 | scan.startup.timestamp-millis | 可选 | (none) | Long | 在使用 'timestamp' 启动模式时指定启动的时间戳(单位毫秒)。 |
 | scan.topic-partition-discovery.interval | 可选 | (none) | Duration | Consumer 定期探测动态创建的 Kafka topic 和 partition 的时间间隔。 |
-| inlong.metric | 可选 | (none) | String | inlong metric 的标签值,该值的构成为groupId&streamId&nodeId。|
+| inlong.metric.labels | 可选 | (none) | String | inlong metric 的标签值,该值的构成为groupId=`{groupId}`&streamId=`{streamId}`&nodeId=`{nodeId}`。|
 | sink.ignore.changelog | 可选 | false | 布尔型 | 支持所有类型的 changelog 流 ingest 到 Kafka。 |
 
 ## 可用的元数据字段
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/data_node/extract_node/mongodb-cdc.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/data_node/extract_node/mongodb-cdc.md
index 8a0a36f352..ad7f76b0a9 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/data_node/extract_node/mongodb-cdc.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/data_node/extract_node/mongodb-cdc.md
@@ -134,7 +134,7 @@ TODO: 未来会支持
 | poll.max.batch.size       | 可选         | 1000       | Integer  | 轮询新数据时,单个批次中包含的最大更改流文档数。             |
 | poll.await.time.ms        | 可选         | 1500       | Integer  | 在更改流上检查新结果之前等待的时间量。                       |
 | heartbeat.interval.ms     | 可选         | 0          | Integer  | 发送心跳消息之间的时间长度(以毫秒为单位)。使用 0 禁用。    |
-| inlong.metric             | 可选         | (none)     | String   | inlong metric 的标签值,该值的构成为groupId&streamId&nodeId。|
+| inlong.metric.labels             | 可选         | (none)     | String   | inlong metric 的标签值,该值的构成为groupId=`{groupId}`&streamId=`{streamId}`&nodeId=`{nodeId}`。|
 
 ## 可用元数据
 
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/data_node/extract_node/mysql-cdc.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/data_node/extract_node/mysql-cdc.md
index 40932bf45f..c068dd4e48 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/data_node/extract_node/mysql-cdc.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/data_node/extract_node/mysql-cdc.md
@@ -312,11 +312,11 @@ TODO: 将在未来支持此功能。
           详细了解 <a href="https://debezium.io/documentation/reference/1.5/connectors/mysql.html#mysql-connector-properties">Debezium 的 MySQL 连接器属性。</a></td> 
     </tr>
     <tr>
-      <td>inlong.metric</td>
+      <td>inlong.metric.labels</td>
       <td>可选</td>
       <td style={{wordWrap: 'break-word'}}>(none)</td>
       <td>String</td>
-      <td>inlong metric 的标签值,该值的构成为groupId&streamId&nodeId。</td> 
+      <td>inlong metric 的标签值,该值的构成为groupId=[groupId]&streamId=[streamId]&nodeId=[nodeId]。</td> 
     </tr>
     </tbody>
 </table>
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/data_node/extract_node/oracle-cdc.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/data_node/extract_node/oracle-cdc.md
index fdf261a6f4..d1740608a9 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/data_node/extract_node/oracle-cdc.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/data_node/extract_node/oracle-cdc.md
@@ -323,11 +323,11 @@ Oracle CDC 消费者的可选启动模式,有效枚举为"initial"
           详细了解 <a href="https://debezium.io/documentation/reference/1.5/connectors/oracle.html#oracle-connector-properties">Debezium 的 Oracle 连接器属性</a></td> 
      </tr>
      <tr>
-       <td>inlong.metric</td>
+       <td>inlong.metric.labels</td>
        <td>可选</td>
        <td style={{wordWrap: 'break-word'}}>(none)</td>
        <td>String</td>
-       <td>inlong metric 的标签值,该值的构成为groupId&streamId&nodeId。</td> 
+       <td>inlong metric 的标签值,该值的构成为groupId=[groupId]&streamId=[streamId]&nodeId=[nodeId]。</td> 
      </tr>
      <tr>
        <td>source.multiple.enable</td>
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/data_node/extract_node/postgresql-cdc.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/data_node/extract_node/postgresql-cdc.md
index 7ad1f01bcb..72ab5c4b29 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/data_node/extract_node/postgresql-cdc.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/data_node/extract_node/postgresql-cdc.md
@@ -114,7 +114,7 @@ TODO: 将在未来支持此功能。
 | decoding.plugin.name | 可选 | decoderbufs | String | 服务器上安装的 Postgres 逻辑解码插件的名称。 支持的值是 decoderbufs、wal2json、wal2json_rds、wal2json_streaming、wal2json_rds_streaming 和 pgoutput。 |
 | slot.name | 可选 | flink | String | PostgreSQL 逻辑解码槽的名称,它是为从特定数据库/模式的特定插件流式传输更改而创建的。 服务器使用此插槽将事件流式传输到您正在配置的连接器。 插槽名称必须符合 PostgreSQL 复制插槽命名规则,其中规定:“每个复制插槽都有一个名称,可以包含小写字母、数字和下划线字符。” |
 | debezium.* | 可选 | (none) | String | 将 Debezium 的属性传递给用于从 Postgres 服务器捕获数据更改的 Debezium Embedded Engine。 例如:“debezium.snapshot.mode”=“never”。 查看更多关于 [Debezium 的 Postgres 连接器属性](https://debezium.io/documentation/reference/1.5/connectors/postgresql.html#postgresql-connector-properties)。 |
-| inlong.metric | 可选 | (none) | String | inlong metric 的标签值,该值的构成为groupId&streamId&nodeId。|
+| inlong.metric.labels | 可选 | (none) | String | inlong metric 的标签值,该值的构成为groupId=`{groupId}`&streamId=`{streamId}`&nodeId=`{nodeId}`。|
 
 **Note**: `slot.name` 建议为不同的表设置以避免潜在的 PSQLException: ERROR: replication slot "flink" is active for PID 974 error。  
 **Note**: PSQLException: ERROR: all replication slots are in use Hint: Free one or increase max_replication_slots. 我们可以通过以下语句删除槽。  
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/data_node/extract_node/pulsar.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/data_node/extract_node/pulsar.md
index 7e8ffb86b9..5261b7f9d4 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/data_node/extract_node/pulsar.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/data_node/extract_node/pulsar.md
@@ -106,7 +106,7 @@ TODO
 | key.fields-prefix             | 可选     | (none)        | String | 为 key 格式的所有字段定义自定义前缀,以避免与 value 格式的字段名称冲突。默认情况下,前缀为空。如果定义了自定义前缀,则使用表结构和 `key.fields`。 |
 | format or value.format        | 必需     | (none)        | String | 使用前缀设置名称。当以键格式构造数据类型时,前缀被移除,并且在键格式中使用非前缀名称。Pulsar 消息值序列化格式,支持 JSON、Avro 等。更多信息请参见 Flink 格式。 |
 | value.fields-include          | 可选     | ALL           | Enum   | Pulsar 消息值包含字段策略、可选的 ALL 和 EXCEPT_KEY。        |
-| inlong.metric | 可选 | (none) | String | inlong metric 的标签值,该值的构成为groupId&streamId&nodeId。|
+| inlong.metric.labels | 可选 | (none) | String | inlong metric 的标签值,该值的构成为groupId=`{groupId}`&streamId=`{streamId}`&nodeId=`{nodeId}`。|
 
 ## 可用元数据
 
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/data_node/extract_node/sqlserver-cdc.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/data_node/extract_node/sqlserver-cdc.md
index d33648a996..8b45c17bf3 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/data_node/extract_node/sqlserver-cdc.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/data_node/extract_node/sqlserver-cdc.md
@@ -186,11 +186,11 @@ TODO
       <td>SQLServer 数据库连接配置时区。 例如: "Asia/Shanghai"。</td>
     </tr>
     <tr>
-      <td>inlong.metric</td>
+      <td>inlong.metric.labels</td>
       <td>可选</td>
       <td style={{wordWrap: 'break-word'}}>(none)</td>
       <td>String</td>
-      <td>inlong metric 的标签值,该值的构成为groupId&streamId&nodeId。</td> 
+      <td>inlong metric 的标签值,该值的构成为groupId=[groupId]&streamId=[streamId]&nodeId=[nodeId]。</td> 
      </tr>
     </tbody>
 </table>
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/data_node/load_node/clickhouse.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/data_node/load_node/clickhouse.md
index 68090b4d64..28128ffd27 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/data_node/load_node/clickhouse.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/data_node/load_node/clickhouse.md
@@ -98,8 +98,7 @@ TODO: 将在未来支持此功能。
 | sink.max-retries | 可选 | 3 | Integer | 写入记录到数据库失败后的最大重试次数。 |
 | sink.parallelism | 可选 | (none) | Integer | 用于定义 JDBC sink 算子的并行度。默认情况下,并行度是由框架决定:使用与上游链式算子相同的并行度。 |
 | sink.ignore.changelog | 可选 | false | Boolean |  忽略所有 RowKind 类型的变更日志,将它们当作 INSERT 的数据来采集。 |
-| inlong.metric | 可选 | (none) | String | inlong metric 的标签值,该值的构成为groupId&streamId&nodeId。|
-
+| inlong.metric.labels | 可选 | (none) | String | inlong metric 的标签值,该值的构成为groupId=`{groupId}`&streamId=`{streamId}`&nodeId=`{nodeId}`。|
 ## 数据类型映射
 
 | ClickHouse type | Flink SQL type |
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/data_node/load_node/doris.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/data_node/load_node/doris.md
index ceadea0e21..db9ab17243 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/data_node/load_node/doris.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/data_node/load_node/doris.md
@@ -302,7 +302,7 @@ TODO: 将在未来支持此功能。
 | sink.multiple.database-pattern    | 可选   | (none)            | string   | 多表写入时,从源端二进制数据中按照 `sink.multiple.database-pattern` 指定名称提取写入的数据库名称。 `sink.multiple.enable` 为true时有效。                 | 
 | sink.multiple.table-pattern       | 可选   | (none)            | string   | 多表写入时,从源端二进制数据中按照 `sink.multiple.table-pattern` 指定名称提取写入的表名。 `sink.multiple.enable` 为true时有效。                         |
 | sink.multiple.ignore-single-table-errors | 可选 | true         | boolean  | 多表写入时,是否忽略某个表写入失败。为 `true` 时,如果某个表写入异常,则不写入该表数据,其他表的数据正常写入。为 `false` 时,如果某个表写入异常,则所有表均停止写入。     |
-
+| inlong.metric.labels | 可选 | (none) | String | inlong metric 的标签值,该值的构成为groupId=`{groupId}`&streamId=`{streamId}`&nodeId=`{nodeId}`。|
 ## 数据类型映射
 
 | Doris Type  | Flink Type           |
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/data_node/load_node/elasticsearch.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/data_node/load_node/elasticsearch.md
index c967f1d609..cc1712ca17 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/data_node/load_node/elasticsearch.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/data_node/load_node/elasticsearch.md
@@ -246,11 +246,11 @@ TODO: 将在未来支持这个特性。
       </td>
     </tr>
     <tr>
-      <td>inlong.metric</td>
+      <td>inlong.metric.labels</td>
       <td>可选</td>
       <td style={{wordWrap: 'break-word'}}>(none)</td>
       <td>String</td>
-      <td>inlong metric 的标签值,该值的构成为groupId&streamId&nodeId。</td> 
+      <td>inlong metric 的标签值,该值的构成为groupId=[groupId]&streamId=[streamId]&nodeId=[nodeId]。</td> 
      </tr>
     </tbody>
 </table>
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/data_node/load_node/greenplum.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/data_node/load_node/greenplum.md
index 5b886d7540..be22caa540 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/data_node/load_node/greenplum.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/data_node/load_node/greenplum.md
@@ -96,7 +96,7 @@ TODO: 将在未来支持此功能。
 | sink.max-retries | 可选 | 3 | Integer | 写入记录到数据库失败后的最大重试次数。 |
 | sink.parallelism | 可选 | (none) | Integer | 用于定义 JDBC sink 算子的并行度。默认情况下,并行度是由框架决定:使用与上游链式算子相同的并行度。 |
 | sink.ignore.changelog | 可选 | false | Boolean |  忽略所有 RowKind 类型的变更日志,将它们当作 INSERT 的数据来采集。 |
-| inlong.metric | 可选 | (none) | String | inlong metric 的标签值,该值的构成为groupId&streamId&nodeId。|
+| inlong.metric.labels | 可选 | (none) | String | inlong metric 的标签值,该值的构成为groupId=`{groupId}`&streamId=`{streamId}`&nodeId=`{nodeId}`。|
 
 ## 数据类型映射
 
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/data_node/load_node/hbase.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/data_node/load_node/hbase.md
index 644f2fdcfe..dbb7b68898 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/data_node/load_node/hbase.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/data_node/load_node/hbase.md
@@ -94,7 +94,7 @@ TODO: 将在未来支持此功能。
 | lookup.cache.ttl | 可选 | (none) | Duration | 查找缓存中每一行的最大生存时间,在这段时间内,最老的行将过期。注意:"lookup.cache.max-rows" 和 "lookup.cache.ttl" 必须同时被设置。默认情况下,查找缓存是禁用的。 |
 | lookup.max-retries | 可选 | 3 | Integer | 查找数据库失败时的最大重试次数。 |
 | properties.* | 可选 | (none) | String | 可以设置任意 HBase 的配置项。后缀名必须匹配在 [HBase 配置文档](https://hbase.apache.org/2.3/book.html#hbase_default_configurations) 中定义的配置键。Flink 将移除 "properties." 配置键前缀并将变换后的配置键和值传入底层的 HBase 客户端。 例如您可以设置 'properties.hbase.security.authentication' = 'kerberos' 等kerberos认证参数。 |
-| inlong.metric | 可选 | (none) | String | inlong metric 的标签值,该值的构成为groupId&streamId&nodeId。|
+| inlong.metric.labels | 可选 | (none) | String | inlong metric 的标签值,该值的构成为groupId=`{groupId}`&streamId=`{streamId}`&nodeId=`{nodeId}`。|
 
 ## 数据类型映射
 
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/data_node/load_node/hdfs.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/data_node/load_node/hdfs.md
index 7e48fd2896..f8d212c59d 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/data_node/load_node/hdfs.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/data_node/load_node/hdfs.md
@@ -109,11 +109,11 @@ CREATE TABLE hdfs_load_node (
       <td>合并目标文件大小,默认值为滚动文件大小。</td>
     </tr>
     <tr>
-      <td>inlong.metric</td>
+      <td>inlong.metric.labels</td>
       <td>可选</td>
       <td style={{wordWrap: 'break-word'}}>(none)</td>
       <td>String</td>
-      <td>inlong metric 的标签值,该值的构成为groupId&streamId&nodeId。</td> 
+      <td>inlong metric 的标签值,该值的构成为groupId=[groupId]&streamId=[streamId]&nodeId=[nodeId]。</td> 
     </tr>
     </tbody>
 </table>
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/data_node/load_node/hive.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/data_node/load_node/hive.md
index 5f11e74986..b84e32212e 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/data_node/load_node/hive.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/data_node/load_node/hive.md
@@ -128,11 +128,11 @@ TODO: 未来版本支持
       支持同时指定多个提交策略:'metastore,success-file'。</td>
     </tr>
     <tr>
-      <td>inlong.metric</td>
+      <td>inlong.metric.labels</td>
       <td>可选</td>
       <td style={{wordWrap: 'break-word'}}>(none)</td>
       <td>String</td>
-      <td>inlong metric 的标签值,该值的构成为groupId&streamId&nodeId。</td> 
+      <td>inlong metric 的标签值,该值的构成为groupId=[groupId]&streamId=[streamId]&nodeId=[nodeId]。</td> 
      </tr>
     </tbody>
 </table>
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/data_node/load_node/iceberg.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/data_node/load_node/iceberg.md
index 5243e9534e..bd5f00d933 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/data_node/load_node/iceberg.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/data_node/load_node/iceberg.md
@@ -255,7 +255,7 @@ Iceberg在多表写入时支持同步源表结构变更到目标表(DDL同步
 | clients          | hive catalog可选                 | 2      | Integer | Hive Metastore 客户端池大小,默认值为 2                      |
 | warehouse        | hive catalog或hadoop catalog可选 | (none) | String  | 对于 Hive catalog,该值为 Hive 仓库位置;如果既没有通过 `hive-conf-dir` 指定包含 `hive-site.xml` 配置文件的目录,也没有在类路径中添加正确的 `hive-site.xml`,用户应指定此路径。对于 Hadoop catalog,该值为存放元数据文件和数据文件的 HDFS 目录。 |
 | hive-conf-dir    | hive catalog可选                 | (none) | String  | 包含 `hive-site.xml` 配置文件的目录路径,用于提供自定义 Hive 配置值。创建 Iceberg catalog 时,如果同时设置了 `hive-conf-dir` 和 `warehouse`,那么 `<hive-conf-dir>/hive-site.xml`(或类路径中的 hive 配置文件)中 `hive.metastore.warehouse.dir` 的值将被 `warehouse` 的值覆盖。 |
-| inlong.metric | 可选 | (none) | String | inlong metric 的标签值,该值的构成为groupId&streamId&nodeId。|
+| inlong.metric.labels | 可选 | (none) | String | inlong metric 的标签值,该值的构成为groupId=`{groupId}`&streamId=`{streamId}`&nodeId=`{nodeId}`。|
 | sink.multiple.enable | 可选                         | false  | Boolean | 是否开启多路写入            |
 | sink.multiple.schema-update.policy | 可选           | TRY_IT_BEST | Enum | 遇到数据中 schema 和目标表不一致时的处理策略<br/>TRY_IT_BEST:尽力而为,尽可能处理,处理不了的则忽略<br/>IGNORE_WITH_LOG:忽略并且记录日志,后续该表数据不再处理<br/>THROW_WITH_STOP:抛异常并且停止任务,直到用户手动处理 schema 不一致的情况 |
 | sink.multiple.pk-auto-generated | 可选              | false  | Boolean  | 是否自动生成主键,对于多路写入自动建表时当源表无主键时是否将所有字段当作主键  |
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/data_node/load_node/kafka.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/data_node/load_node/kafka.md
index 05568fc501..fa5002fddb 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/data_node/load_node/kafka.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/data_node/load_node/kafka.md
@@ -94,8 +94,7 @@ TODO: 将在未来支持此功能。
 | sink.multiple.partition-pattern | 可选 | (none) | String |  动态 Partition 提取模式, 形如 '${VARIABLE_NAME}'仅用于 Kafka 多 Sink 场景且当 'format' 为 'raw'、'sink.partitioner' 为 'raw-hash' 时有效。 |
 | sink.semantic | 可选 | at-least-once | String | 定义 Kafka sink 的语义。有效值为 'at-least-once','exactly-once' 和 'none'。请参阅 [一致性保证](https://nightlies.apache.org/flink/flink-docs-release-1.13/zh/docs/connectors/table/kafka/#%E4%B8%80%E8%87%B4%E6%80%A7%E4%BF%9D%E8%AF%81) 以获取更多细节。 |
 | sink.parallelism | 可选 | (none) | Integer | 定义 Kafka sink 算子的并行度。默认情况下,并行度由框架定义为与上游串联的算子相同。 |
-| inlong.metric | 可选 | (none) | String | inlong metric 的标签值,该值的构成为groupId&streamId&nodeId。|
-
+| inlong.metric.labels | 可选 | (none) | String | inlong metric 的标签值,该值的构成为groupId=`{groupId}`&streamId=`{streamId}`&nodeId=`{nodeId}`。|
 
 ## 可用的元数据字段
 
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/data_node/load_node/mysql.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/data_node/load_node/mysql.md
index b64e43bcbb..626a0f544e 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/data_node/load_node/mysql.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/data_node/load_node/mysql.md
@@ -95,7 +95,7 @@ TODO: 将在未来支持此功能。
 | sink.buffer-flush.interval | 可选 | 1s | Duration | flush 间隔时间,超过该时间后异步线程将 flush 数据。可以设置为 '0' 来禁用它。注意, 为了完全异步地处理缓存的 flush 事件,可以将 'sink.buffer-flush.max-rows' 设置为 '0' 并配置适当的 flush 时间间隔。 |
 | sink.max-retries | 可选 | 3 | Integer | 写入记录到数据库失败后的最大重试次数。 |
 | sink.parallelism | 可选 | (none) | Integer | 用于定义 JDBC sink 算子的并行度。默认情况下,并行度是由框架决定:使用与上游链式算子相同的并行度。 |
-| inlong.metric | 可选 | (none) | String | inlong metric 的标签值,该值的构成为groupId&streamId&nodeId。|
+| inlong.metric.labels | 可选 | (none) | String | inlong metric 的标签值,该值的构成为groupId=`{groupId}`&streamId=`{streamId}`&nodeId=`{nodeId}`。|
 
 ## 数据类型映射
 
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/data_node/load_node/oracle.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/data_node/load_node/oracle.md
index c15eece9f5..6caa085a96 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/data_node/load_node/oracle.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/data_node/load_node/oracle.md
@@ -95,7 +95,7 @@ TODO: 将在未来支持此功能。
 | sink.max-retries | 可选 | 3 | Integer | 写入记录到数据库失败后的最大重试次数。 |
 | sink.parallelism | 可选 | (none) | Integer | 用于定义 JDBC sink 算子的并行度。默认情况下,并行度是由框架决定:使用与上游链式算子相同的并行度。 |
 | sink.ignore.changelog | 可选 | false | Boolean |  忽略所有 RowKind 类型的变更日志,将它们当作 INSERT 的数据来采集。 |
-| inlong.metric | 可选 | (none) | String | inlong metric 的标签值,该值的构成为groupId&streamId&nodeId。|
+| inlong.metric.labels | 可选 | (none) | String | inlong metric 的标签值,该值的构成为groupId=`{groupId}`&streamId=`{streamId}`&nodeId=`{nodeId}`。|
 
 ## 数据类型映射
 
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/data_node/load_node/postgresql.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/data_node/load_node/postgresql.md
index 182d1c3e11..5ccd41e016 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/data_node/load_node/postgresql.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/data_node/load_node/postgresql.md
@@ -96,7 +96,7 @@ TODO: 将在未来支持此功能。
 | sink.max-retries | 可选 | 3 | Integer | 写入记录到数据库失败后的最大重试次数。 |
 | sink.parallelism | 可选 | (none) | Integer | 用于定义 JDBC sink 算子的并行度。默认情况下,并行度是由框架决定:使用与上游链式算子相同的并行度。 |
 | sink.ignore.changelog | 可选 | false | Boolean |  忽略所有 RowKind 类型的变更日志,将它们当作 INSERT 的数据来采集。 |
-| inlong.metric | 可选 | (none) | String | inlong metric 的标签值,该值的构成为groupId&streamId&nodeId。|
+| inlong.metric.labels | 可选 | (none) | String | inlong metric 的标签值,该值的构成为groupId=`{groupId}`&streamId=`{streamId}`&nodeId=`{nodeId}`。|
 
 ## 数据类型映射
 
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/data_node/load_node/sqlserver.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/data_node/load_node/sqlserver.md
index 954909f618..b785673fc5 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/data_node/load_node/sqlserver.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/data_node/load_node/sqlserver.md
@@ -94,7 +94,7 @@ TODO: 将在未来支持此功能。
 | sink.max-retries | 可选 | 3 | Integer | 写入记录到数据库失败后的最大重试次数。 |
 | sink.parallelism | 可选 | (none) | Integer | 用于定义 JDBC sink 算子的并行度。默认情况下,并行度是由框架决定:使用与上游链式算子相同的并行度。 |
 | sink.ignore.changelog | 可选 | false | Boolean |  忽略所有 RowKind 类型的变更日志,将它们当作 INSERT 的数据来采集。 |
-| inlong.metric | 可选 | (none) | String | inlong metric 的标签值,该值的构成为groupId&streamId&nodeId。|
+| inlong.metric.labels | 可选 | (none) | String | inlong metric 的标签值,该值的构成为groupId=`{groupId}`&streamId=`{streamId}`&nodeId=`{nodeId}`。|
 
 ## 数据映射
 
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/data_node/load_node/tdsql-postgresql.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/data_node/load_node/tdsql-postgresql.md
index 9c5e320f4a..650e7907dd 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/data_node/load_node/tdsql-postgresql.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/data_node/load_node/tdsql-postgresql.md
@@ -94,7 +94,7 @@ TODO: 将在未来支持此功能。
 | sink.max-retries | 可选 | 3 | Integer | 写入记录到数据库失败后的最大重试次数。 |
 | sink.parallelism | 可选 | (none) | Integer | 用于定义 JDBC sink 算子的并行度。默认情况下,并行度是由框架决定:使用与上游链式算子相同的并行度。 |
 | sink.ignore.changelog | 可选 | false | Boolean |  忽略所有 RowKind 类型的变更日志,将它们当作 INSERT 的数据来采集。 |
-| inlong.metric | 可选 | (none) | String | inlong metric 的标签值,该值的构成为groupId&streamId&nodeId。|
+| inlong.metric.labels | 可选 | (none) | String | inlong metric 的标签值,该值的构成为groupId=`{groupId}`&streamId=`{streamId}`&nodeId=`{nodeId}`。|
 
 ## 数据类型映射
 
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/modules/sort/metrics.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/modules/sort/metrics.md
index 7f05d6e7cf..9d6c6490b2 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/modules/sort/metrics.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/modules/sort/metrics.md
@@ -5,14 +5,16 @@ sidebar_position: 4
 
 ## 概览
 
-我们为节点增加了指标计算。 用户添加 with 选项 `inlong.metric` 后 Sort 会计算指标,`inlong.metric` 选项的值由三部分构成:`groupId&streamId&nodeId`。
+我们为节点增加了指标计算。用户添加 with 选项 `inlong.metric.labels` 后 Sort 会计算指标,`inlong.metric.labels` 选项的值由三部分构成:groupId=`{groupId}`&streamId=`{streamId}`&nodeId=`{nodeId}`。
 用户可以使用 [metric reporter](https://nightlies.apache.org/flink/flink-docs-release-1.13/zh/docs/deployment/metric_reporters/) 去上报数据。
 
 ## 指标
 
 ### 支持的 extract 节点
 
-| 指标名 | extract 节点 | 描述 |
+#### 支持节点级别指标
+
+| 指标名 | Extract 节点 | 描述 |
 |-------------|--------------|-------------|
 | groupId_streamId_nodeId_numRecordsIn | kafka,mongodb-cdc,mysql-cdc,oracle-cdc,postgresql-cdc,pulsar,sqlserver-cdc | 输入记录数 |
 | groupId_streamId_nodeId_numBytesIn | kafka,mongodb-cdc,mysql-cdc,oracle-cdc,postgresql-cdc,pulsar,sqlserver-cdc | 输入字节数 |
@@ -21,7 +23,9 @@ sidebar_position: 4
 
 ### 支持的 load 节点
 
-| 指标名 | load 节点 | 描述 |
+#### 支持节点级别指标
+
+| 指标名 | Load 节点 | 描述 |
 |-------------|-----------|-------------|
 | groupId_streamId_nodeId_numRecordsOut | clickhouse,elasticsearch,greenplum,hbase,<br/>hdfs,hive,iceberg,kafka,<br/>mysql,oracle,postgresql,sqlserver,tdsql-postgresql | 输出记录数 |
 | groupId_streamId_nodeId_numBytesOut |  clickhouse,elasticsearch,greenplum,hbase,<br/>hdfs,hive,iceberg,kafka,<br/>mysql,oracle,postgresql,sqlserver,tdsql-postgresql | 输出字节数 |
diff --git a/versioned_docs/version-1.4.0/data_node/extract_node/kafka.md b/versioned_docs/version-1.4.0/data_node/extract_node/kafka.md
index 5c88e49649..6f1f360d92 100644
--- a/versioned_docs/version-1.4.0/data_node/extract_node/kafka.md
+++ b/versioned_docs/version-1.4.0/data_node/extract_node/kafka.md
@@ -110,7 +110,7 @@ TODO: It will be supported in the future.
 | scan.startup.specific-offsets | optional | (none) | String | Specify offsets for each partition in case of 'specific-offsets' startup mode, e.g. 'partition:0,offset:42;partition:1,offset:300'. |
 | scan.startup.timestamp-millis | optional | (none) | Long | Start from the specified epoch timestamp (milliseconds) used in case of 'timestamp' startup mode. |
 | scan.topic-partition-discovery.interval | optional | (none) | Duration | Interval for consumer to discover dynamically created Kafka topics and partitions periodically. |
-| inlong.metric.labels | optional | (none) | String | Inlong metric label, format of value is groupId=xxgroup&streamId=xxstream&nodeId=xxnode. |
+| inlong.metric.labels | optional | (none) | String | Inlong metric label, format of value is groupId=`{groupId}`&streamId=`{streamId}`&nodeId=`{nodeId}`. |
 | sink.ignore.changelog | optional | false | Boolean | Support ingesting all types of changelog streams into Kafka. |
 
 ## Available Metadata
diff --git a/versioned_docs/version-1.4.0/data_node/extract_node/mongodb-cdc.md b/versioned_docs/version-1.4.0/data_node/extract_node/mongodb-cdc.md
index 0f6f27f828..601ef3f9d4 100644
--- a/versioned_docs/version-1.4.0/data_node/extract_node/mongodb-cdc.md
+++ b/versioned_docs/version-1.4.0/data_node/extract_node/mongodb-cdc.md
@@ -134,7 +134,7 @@ TODO: It will be supported in the future.
 | poll.max.batch.size       | optional     | 1000             | Integer  | Maximum number of change stream documents to include in a single batch when polling for new data. |
 | poll.await.time.ms        | optional     | 1500             | Integer  | The amount of time to wait before checking for new results on the change stream. |
 | heartbeat.interval.ms     | optional     | 0                | Integer  | The length of time in milliseconds between sending heartbeat messages. Use 0 to disable. |
-| inlong.metric.labels | optional | (none) | String | Inlong metric label, format of value is groupId=xxgroup&streamId=xxstream&nodeId=xxnode. |
+| inlong.metric.labels | optional | (none) | String | Inlong metric label, format of value is groupId=`{groupId}`&streamId=`{streamId}`&nodeId=`{nodeId}`. |
 ## Available Metadata
 
 The following format metadata can be exposed as read-only (VIRTUAL) columns in a table definition.
diff --git a/versioned_docs/version-1.4.0/data_node/extract_node/mysql-cdc.md b/versioned_docs/version-1.4.0/data_node/extract_node/mysql-cdc.md
index f1f42e9ceb..f08a9bf3c6 100644
--- a/versioned_docs/version-1.4.0/data_node/extract_node/mysql-cdc.md
+++ b/versioned_docs/version-1.4.0/data_node/extract_node/mysql-cdc.md
@@ -320,7 +320,7 @@ TODO: It will be supported in the future.
       <td>optional</td>
       <td style={{wordWrap: 'break-word'}}>(none)</td>
       <td>String</td>
-      <td>Inlong metric label, format of value is groupId=xxgroup&streamId=xxstream&nodeId=xxnode.</td> 
+      <td>Inlong metric label, format of value is groupId=[groupId]&streamId=[streamId]&nodeId=[nodeId].</td> 
     </tr>
     </tbody>
 </table>
diff --git a/versioned_docs/version-1.4.0/data_node/extract_node/oracle-cdc.md b/versioned_docs/version-1.4.0/data_node/extract_node/oracle-cdc.md
index e330965912..0299f8551d 100644
--- a/versioned_docs/version-1.4.0/data_node/extract_node/oracle-cdc.md
+++ b/versioned_docs/version-1.4.0/data_node/extract_node/oracle-cdc.md
@@ -326,7 +326,7 @@ TODO: It will be supported in the future.
       <td>optional</td>
       <td style={{wordWrap: 'break-word'}}>(none)</td>
       <td>String</td>
-      <td>Inlong metric label, format of value is groupId=xxgroup&streamId=xxstream&nodeId=xxnode.</td> 
+      <td>Inlong metric label, format of value is groupId=[groupId]&streamId=[streamId]&nodeId=[nodeId].</td> 
     </tr>
     <tr>
        <td>source.multiple.enable</td>
diff --git a/versioned_docs/version-1.4.0/data_node/extract_node/postgresql-cdc.md b/versioned_docs/version-1.4.0/data_node/extract_node/postgresql-cdc.md
index 9252e03906..f4129a6e15 100644
--- a/versioned_docs/version-1.4.0/data_node/extract_node/postgresql-cdc.md
+++ b/versioned_docs/version-1.4.0/data_node/extract_node/postgresql-cdc.md
@@ -114,7 +114,7 @@ TODO: It will be supported in the future.
 | decoding.plugin.name | optional | decoderbufs | String | The name of the Postgres logical decoding plug-in installed on the server. Supported values are decoderbufs, wal2json, wal2json_rds, wal2json_streaming, wal2json_rds_streaming and pgoutput. |
 | slot.name | optional | flink | String | The name of the PostgreSQL logical decoding slot that was created for streaming changes from a particular plug-in for a particular database/schema. The server uses this slot to stream events to the connector that you are configuring. Slot names must conform to PostgreSQL replication slot naming rules, which state: "Each replication slot has a name, which can contain lower-case letters, numbers, and the underscore character." |
 | debezium.* | optional | (none) | String | Pass-through Debezium's properties to Debezium Embedded Engine which is used to capture data changes from Postgres server. For example: 'debezium.snapshot.mode' = 'never'. See more about the [Debezium's Postgres Connector properties](https://debezium.io/documentation/reference/1.5/connectors/postgresql.html#postgresql-connector-properties). |
-| inlong.metric.labels | optional | (none) | String | Inlong metric label, format of value is groupId=xxgroup&streamId=xxstream&nodeId=xxnode. |
+| inlong.metric.labels | optional | (none) | String | Inlong metric label, format of value is groupId=`{groupId}`&streamId=`{streamId}`&nodeId=`{nodeId}`. |
 
 **Note**: `slot.name` is recommended to set for different tables to avoid the potential PSQLException: ERROR: replication slot "flink" is active for PID 974 error.  
 **Note**: PSQLException: ERROR: all replication slots are in use Hint: Free one or increase max_replication_slots. We can delete slot by the following statement.
diff --git a/versioned_docs/version-1.4.0/data_node/extract_node/pulsar.md b/versioned_docs/version-1.4.0/data_node/extract_node/pulsar.md
index 5d28627b82..1d40c68734 100644
--- a/versioned_docs/version-1.4.0/data_node/extract_node/pulsar.md
+++ b/versioned_docs/version-1.4.0/data_node/extract_node/pulsar.md
@@ -107,7 +107,7 @@ TODO
 | key.fields-prefix             | optional | (none)        | String | Define a custom prefix for all fields in the key format to avoid name conflicts with fields in the value format. By default, the prefix is empty. If a custom prefix is defined, the Table schema and `key.fields` are used. |
 | format or value.format        | required | (none)        | String | Set the name with a prefix. When constructing data types in the key format, the prefix is removed and non-prefixed names are used within the key format. Pulsar message value serialization format, support JSON, Avro, etc. For more information, see the Flink format. |
 | value.fields-include          | optional | ALL           | Enum   | The Pulsar message value contains the field policy, optionally ALL, and EXCEPT_KEY. |
-| inlong.metric.labels | optional | (none) | String | Inlong metric label, format of value is groupId=xxgroup&streamId=xxstream&nodeId=xxnode. |
+| inlong.metric.labels | optional | (none) | String | Inlong metric label, format of value is groupId=`{groupId}`&streamId=`{streamId}`&nodeId=`{nodeId}`. |
 
 ## Available Metadata
 
diff --git a/versioned_docs/version-1.4.0/data_node/extract_node/sqlserver-cdc.md b/versioned_docs/version-1.4.0/data_node/extract_node/sqlserver-cdc.md
index 21377471e0..c95bccfaeb 100644
--- a/versioned_docs/version-1.4.0/data_node/extract_node/sqlserver-cdc.md
+++ b/versioned_docs/version-1.4.0/data_node/extract_node/sqlserver-cdc.md
@@ -190,7 +190,7 @@ TODO
       <td>optional</td>
       <td style={{wordWrap: 'break-word'}}>(none)</td>
       <td>String</td>
-      <td>Inlong metric label, format of value is groupId=xxgroup&streamId=xxstream&nodeId=xxnode.</td> 
+      <td>Inlong metric label, format of value is groupId=[groupId]&streamId=[streamId]&nodeId=[nodeId].</td> 
     </tr>
     </tbody>
 </table>
diff --git a/versioned_docs/version-1.4.0/data_node/load_node/clickhouse.md b/versioned_docs/version-1.4.0/data_node/load_node/clickhouse.md
index dd63a0c75f..0ac5468098 100644
--- a/versioned_docs/version-1.4.0/data_node/load_node/clickhouse.md
+++ b/versioned_docs/version-1.4.0/data_node/load_node/clickhouse.md
@@ -100,7 +100,7 @@ TODO: It will be supported in the future.
 | sink.max-retries | optional | 3 | Integer | The max retry times if writing records to database failed. |
 | sink.parallelism | optional | (none) | Integer | Defines the parallelism of the JDBC sink operator. By default, the parallelism is determined by the framework using the same parallelism of the upstream chained operator. |
 | sink.ignore.changelog | optional | false | Boolean |  Ignore all `RowKind`, ingest them as `INSERT`. |
-| inlong.metric.labels | optional | (none) | String | Inlong metric label, format of value is groupId=xxgroup&streamId=xxstream&nodeId=xxnode. |
+| inlong.metric.labels | optional | (none) | String | Inlong metric label, format of value is groupId=`{groupId}`&streamId=`{streamId}`&nodeId=`{nodeId}`. |
 
 ## Data Type Mapping
 
diff --git a/versioned_docs/version-1.4.0/data_node/load_node/elasticsearch.md b/versioned_docs/version-1.4.0/data_node/load_node/elasticsearch.md
index 2178a2060a..93491d69e7 100644
--- a/versioned_docs/version-1.4.0/data_node/load_node/elasticsearch.md
+++ b/versioned_docs/version-1.4.0/data_node/load_node/elasticsearch.md
@@ -254,7 +254,7 @@ TODO: It will be supported in the future.
       <td>optional</td>
       <td style={{wordWrap: 'break-word'}}>(none)</td>
       <td>String</td>
-      <td>Inlong metric label, format of value is groupId=xxgroup&streamId=xxstream&nodeId=xxnode.</td> 
+      <td>Inlong metric label, format of value is groupId=[groupId]&streamId=[streamId]&nodeId=[nodeId].</td> 
     </tr>
     </tbody>
 </table>
diff --git a/versioned_docs/version-1.4.0/data_node/load_node/greenplum.md b/versioned_docs/version-1.4.0/data_node/load_node/greenplum.md
index 4803da8ed8..259de3e51b 100644
--- a/versioned_docs/version-1.4.0/data_node/load_node/greenplum.md
+++ b/versioned_docs/version-1.4.0/data_node/load_node/greenplum.md
@@ -98,7 +98,7 @@ TODO: It will be supported in the future.
 | sink.max-retries | optional | 3 | Integer | The max retry times if writing records to database failed. |
 | sink.parallelism | optional | (none) | Integer | Defines the parallelism of the JDBC sink operator. By default, the parallelism is determined by the framework using the same parallelism of the upstream chained operator. |
 | sink.ignore.changelog | optional | false | Boolean |  Ignore all `RowKind`, ingest them as `INSERT`. |
-| inlong.metric.labels | optional | (none) | String | Inlong metric label, format of value is groupId=xxgroup&streamId=xxstream&nodeId=xxnode. |
+| inlong.metric.labels | optional | (none) | String | Inlong metric label, format of value is groupId=`{groupId}`&streamId=`{streamId}`&nodeId=`{nodeId}`. |
 
 ## Data Type Mapping
 
diff --git a/versioned_docs/version-1.4.0/data_node/load_node/hbase.md b/versioned_docs/version-1.4.0/data_node/load_node/hbase.md
index 4ab9eb4210..19ab916174 100644
--- a/versioned_docs/version-1.4.0/data_node/load_node/hbase.md
+++ b/versioned_docs/version-1.4.0/data_node/load_node/hbase.md
@@ -96,7 +96,7 @@ TODO: It will be supported in the future.
 | lookup.cache.ttl | optional | (none) | Duration | The max time to live for each row in lookup cache, over this time, the oldest rows will be expired. Note, "cache.max-rows" and "cache.ttl" options must all be specified if any of them is specified. Lookup cache is disabled by default. |
 | lookup.max-retries | optional | 3 | Integer | The max retry times if lookup database failed. |
 | properties.* | optional | (none) | String | This can set and pass arbitrary HBase configurations. Suffix names must match the configuration key defined in [HBase Configuration documentation](https://hbase.apache.org/2.3/book.html#hbase_default_configurations). Flink will remove the "properties." key prefix and pass the transformed key and values to the underlying HBaseClient. For example, you can add a kerberos authentication parameter 'properties.hbase.security.authentication' = 'kerb [...]
-| inlong.metric.labels | optional | (none) | String | Inlong metric label, format of value is groupId=xxgroup&streamId=xxstream&nodeId=xxnode. |
+| inlong.metric.labels | optional | (none) | String | Inlong metric label, format of value is groupId=`{groupId}`&streamId=`{streamId}`&nodeId=`{nodeId}`. |
 
 ## Data Type Mapping
 
diff --git a/versioned_docs/version-1.4.0/data_node/load_node/hdfs.md b/versioned_docs/version-1.4.0/data_node/load_node/hdfs.md
index 5ac161be77..11da0bc95e 100644
--- a/versioned_docs/version-1.4.0/data_node/load_node/hdfs.md
+++ b/versioned_docs/version-1.4.0/data_node/load_node/hdfs.md
@@ -111,7 +111,7 @@ The file sink supports file compactions, which allows applications to have small
       <td>optional</td>
       <td style={{wordWrap: 'break-word'}}>(none)</td>
       <td>String</td>
-      <td>Inlong metric label, format of value is groupId=xxgroup&streamId=xxstream&nodeId=xxnode.</td> 
+      <td>Inlong metric label, format of value is groupId=[groupId]&streamId=[streamId]&nodeId=[nodeId].</td> 
     </tr>
     </tbody>
 </table>
diff --git a/versioned_docs/version-1.4.0/data_node/load_node/hive.md b/versioned_docs/version-1.4.0/data_node/load_node/hive.md
index d9723c3482..76063f0808 100644
--- a/versioned_docs/version-1.4.0/data_node/load_node/hive.md
+++ b/versioned_docs/version-1.4.0/data_node/load_node/hive.md
@@ -134,7 +134,7 @@ TODO: It will be supported in the future.
       <td>optional</td>
       <td style={{wordWrap: 'break-word'}}>(none)</td>
       <td>String</td>
-      <td>Inlong metric label, format of value is groupId=xxgroup&streamId=xxstream&nodeId=xxnode.</td> 
+      <td>Inlong metric label, format of value is groupId=[groupId]&streamId=[streamId]&nodeId=[nodeId].</td> 
     </tr>
     </tbody>
 </table>
diff --git a/versioned_docs/version-1.4.0/data_node/load_node/iceberg.md b/versioned_docs/version-1.4.0/data_node/load_node/iceberg.md
index a1ade23add..3eb0a67e43 100644
--- a/versioned_docs/version-1.4.0/data_node/load_node/iceberg.md
+++ b/versioned_docs/version-1.4.0/data_node/load_node/iceberg.md
@@ -259,7 +259,7 @@ Iceberg support schema evolution from source table to target table in multiple s
 | clients          | optional for hive catalog                   | 2       | Integer | The Hive metastore client pool size, default value is 2.     |
 | warehouse        | optional for hadoop catalog or hive catalog | (none)  | String  | For Hive catalog, this is the Hive warehouse location; users should specify this path if they neither set `hive-conf-dir` to a location containing a `hive-site.xml` configuration file nor add a correct `hive-site.xml` to the classpath. For Hadoop catalog, the HDFS directory to store metadata files and data files. |
 | hive-conf-dir    | optional for hive catalog                   | (none)  | String  | Path to a directory containing a `hive-site.xml` configuration file which will be used to provide custom Hive configuration values. The value of `hive.metastore.warehouse.dir` from `<hive-conf-dir>/hive-site.xml` (or the hive configuration file from the classpath) will be overwritten with the `warehouse` value if both `hive-conf-dir` and `warehouse` are set when creating the iceberg catalog. |
-| inlong.metric.labels | optional | (none) | String | Inlong metric label, format of value is groupId=xxgroup&streamId=xxstream&nodeId=xxnode. |
+| inlong.metric.labels | optional | (none) | String | Inlong metric label, format of value is groupId=`{groupId}`&streamId=`{streamId}`&nodeId=`{nodeId}`. |
 | sink.multiple.enable | optional                         | false  | Boolean | Whether to enable multiple sink            |
 | sink.multiple.schema-update.policy | optional           | TRY_IT_BEST | Enum | The policy for handling inconsistencies between the schema in the data and the schema of the target table.<br/>TRY_IT_BEST: try best, handle as much as possible, and ignore what cannot be handled.<br/>IGNORE_WITH_LOG: ignore it and log it; data of this table will not be handled afterwards.<br/>THROW_WITH_STOP: throw an exception and stop the job until the user resolves the schema inconsistency and restores the job. |
 | sink.multiple.pk-auto-generated | optional              | false  | Boolean  | Whether to auto-generate the primary key: when auto-creating tables in multiple sink scenarios and the source table has no primary key, all fields are combined and regarded as the primary key. |
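+
+As an illustrative sketch only (the connector name, catalog options and label values below are assumptions, not canonical), a multiple-sink Iceberg table combining these options might be declared as:
+
+```sql
+CREATE TABLE iceberg_load_node (
+  raw_data BYTES
+) WITH (
+  'connector' = 'iceberg-inlong',    -- connector name is an assumption; see this document's examples
+  'catalog-type' = 'hive',
+  'catalog-name' = 'hive_catalog',
+  'uri' = 'thrift://localhost:9083',
+  'warehouse' = 'hdfs://localhost:8020/iceberg/warehouse',
+  'sink.multiple.enable' = 'true',
+  'sink.multiple.schema-update.policy' = 'TRY_IT_BEST',
+  'sink.multiple.pk-auto-generated' = 'true',
+  'inlong.metric.labels' = 'groupId=g1&streamId=s1&nodeId=n1'
+);
+```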
diff --git a/versioned_docs/version-1.4.0/data_node/load_node/kafka.md b/versioned_docs/version-1.4.0/data_node/load_node/kafka.md
index bb42c88307..28f361cc1b 100644
--- a/versioned_docs/version-1.4.0/data_node/load_node/kafka.md
+++ b/versioned_docs/version-1.4.0/data_node/load_node/kafka.md
@@ -97,7 +97,7 @@ TODO: It will be supported in the future.
 | sink.multiple.partition-pattern | optional | (none) | String |  Dynamic partition extraction pattern, like '${VARIABLE_NAME}' which is only used in kafka multiple sink scenarios and is valid when 'format' is 'raw'. |
 | sink.semantic | optional | at-least-once | String | Defines the delivery semantic for the Kafka sink. Valid enumerations are 'at-least-once', 'exactly-once' and 'none'. See [Consistency guarantees](https://nightlies.apache.org/flink/flink-docs-release-1.13/docs/connectors/table/kafka/#consistency-guarantees) for more details. |
 | sink.parallelism | optional | (none) | Integer | Defines the parallelism of the Kafka sink operator. By default, the parallelism is determined by the framework using the same parallelism of the upstream chained operator. |
-| inlong.metric.labels | optional | (none) | String | Inlong metric label, format of value is groupId=xxgroup&streamId=xxstream&nodeId=xxnode. |
+| inlong.metric.labels | optional | (none) | String | Inlong metric label, format of value is groupId=`{groupId}`&streamId=`{streamId}`&nodeId=`{nodeId}`. |
 
 ## Available Metadata
 
diff --git a/versioned_docs/version-1.4.0/data_node/load_node/mysql.md b/versioned_docs/version-1.4.0/data_node/load_node/mysql.md
index 5639fc1396..1f9a940370 100644
--- a/versioned_docs/version-1.4.0/data_node/load_node/mysql.md
+++ b/versioned_docs/version-1.4.0/data_node/load_node/mysql.md
@@ -97,7 +97,7 @@ TODO: It will be supported in the future.
 | sink.buffer-flush.interval | optional | 1s | Duration | The flush interval mills, over this time, asynchronous threads will flush data. Can be set to '0' to disable it. Note, 'sink.buffer-flush.max-rows' can be set to '0' with the flush interval set allowing for complete async processing of buffered actions. |
 | sink.max-retries | optional | 3 | Integer | The max retry times if writing records to database failed. |
 | sink.parallelism | optional | (none) | Integer | Defines the parallelism of the JDBC sink operator. By default, the parallelism is determined by the framework using the same parallelism of the upstream chained operator. |
-| inlong.metric.labels | optional | (none) | String | Inlong metric label, format of value is groupId=xxgroup&streamId=xxstream&nodeId=xxnode. |
+| inlong.metric.labels | optional | (none) | String | Inlong metric label, format of value is groupId=`{groupId}`&streamId=`{streamId}`&nodeId=`{nodeId}`. |
 
 ## Data Type Mapping
 
diff --git a/versioned_docs/version-1.4.0/data_node/load_node/oracle.md b/versioned_docs/version-1.4.0/data_node/load_node/oracle.md
index f7c335ef6a..7f54c53444 100644
--- a/versioned_docs/version-1.4.0/data_node/load_node/oracle.md
+++ b/versioned_docs/version-1.4.0/data_node/load_node/oracle.md
@@ -98,7 +98,7 @@ TODO: It will be supported in the future.
 | sink.max-retries | optional | 3 | Integer | The max retry times if writing records to database failed. |
 | sink.parallelism | optional | (none) | Integer | Defines the parallelism of the JDBC sink operator. By default, the parallelism is determined by the framework using the same parallelism of the upstream chained operator. |
 | sink.ignore.changelog | optional | false | Boolean |  Ignore all `RowKind`, ingest them as `INSERT`. |
-| inlong.metric.labels | optional | (none) | String | Inlong metric label, format of value is groupId=xxgroup&streamId=xxstream&nodeId=xxnode. |
+| inlong.metric.labels | optional | (none) | String | Inlong metric label, format of value is groupId=`{groupId}`&streamId=`{streamId}`&nodeId=`{nodeId}`. |
 
 ## Data Type Mapping
 
diff --git a/versioned_docs/version-1.4.0/data_node/load_node/postgresql.md b/versioned_docs/version-1.4.0/data_node/load_node/postgresql.md
index 69cf18a926..648f7e6949 100644
--- a/versioned_docs/version-1.4.0/data_node/load_node/postgresql.md
+++ b/versioned_docs/version-1.4.0/data_node/load_node/postgresql.md
@@ -97,7 +97,7 @@ TODO: It will be supported in the future.
 | sink.max-retries | optional | 3 | Integer | The max retry times if writing records to database failed. |
 | sink.parallelism | optional | (none) | Integer | Defines the parallelism of the JDBC sink operator. By default, the parallelism is determined by the framework using the same parallelism of the upstream chained operator. |
 | sink.ignore.changelog | optional | false | Boolean |  Ignore all `RowKind`, ingest them as `INSERT`. |
-| inlong.metric.labels | optional | (none) | String | Inlong metric label, format of value is groupId=xxgroup&streamId=xxstream&nodeId=xxnode. |
+| inlong.metric.labels | optional | (none) | String | Inlong metric label, format of value is groupId=`{groupId}`&streamId=`{streamId}`&nodeId=`{nodeId}`. |
 
 ## Data Type Mapping
 
diff --git a/versioned_docs/version-1.4.0/data_node/load_node/sqlserver.md b/versioned_docs/version-1.4.0/data_node/load_node/sqlserver.md
index be6bc84eba..fe928c6caa 100644
--- a/versioned_docs/version-1.4.0/data_node/load_node/sqlserver.md
+++ b/versioned_docs/version-1.4.0/data_node/load_node/sqlserver.md
@@ -96,7 +96,7 @@ TODO: It will be supported in the future.
 | sink.max-retries | optional | 3 | Integer | The maximum number of retries if writing records to the database fails. |
 | sink.parallelism | optional | (none) | Integer | Defines the parallelism of the JDBC sink operator. By default, the parallelism is determined by the framework, using the same parallelism as the upstream chained operator. |
 | sink.ignore.changelog | optional | false | Boolean | Ignore the `RowKind` of all records and ingest them as `INSERT`. |
-| inlong.metric.labels | optional | (none) | String | Inlong metric label, format of value is groupId=xxgroup&streamId=xxstream&nodeId=xxnode. |
+| inlong.metric.labels | optional | (none) | String | Inlong metric label; the value format is groupId=`{groupId}`&streamId=`{streamId}`&nodeId=`{nodeId}`. |
 
 ## Data Type Mapping
 
diff --git a/versioned_docs/version-1.4.0/data_node/load_node/tdsql-postgresql.md b/versioned_docs/version-1.4.0/data_node/load_node/tdsql-postgresql.md
index d35fa0e347..9bd3003516 100644
--- a/versioned_docs/version-1.4.0/data_node/load_node/tdsql-postgresql.md
+++ b/versioned_docs/version-1.4.0/data_node/load_node/tdsql-postgresql.md
@@ -96,7 +96,7 @@ TODO: It will be supported in the future.
 | sink.max-retries | optional | 3 | Integer | The maximum number of retries if writing records to the database fails. |
 | sink.parallelism | optional | (none) | Integer | Defines the parallelism of the JDBC sink operator. By default, the parallelism is determined by the framework, using the same parallelism as the upstream chained operator. |
 | sink.ignore.changelog | optional | false | Boolean | Ignore the `RowKind` of all records and ingest them as `INSERT`. |
-| inlong.metric.labels | optional | (none) | String | Inlong metric label, format of value is groupId=xxgroup&streamId=xxstream&nodeId=xxnode. |
+| inlong.metric.labels | optional | (none) | String | Inlong metric label; the value format is groupId=`{groupId}`&streamId=`{streamId}`&nodeId=`{nodeId}`. |
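
All of the JDBC-style load nodes above accept the same `inlong.metric.labels` option. A minimal sketch of how it might be set in a `WITH` clause (all identifiers and connection values below are hypothetical):

```sql
CREATE TABLE tdsql_pg_load_node (
  `id` INT,
  `name` STRING
) WITH (
  -- placeholder connection values; adapt to your deployment
  'connector' = 'jdbc-inlong',
  'url' = 'jdbc:postgresql://localhost:5432/demo',
  'table-name' = 'public.demo_table',
  'username' = 'demo',
  'password' = 'demo',
  -- tags every metric reported by this node with the given
  -- group/stream/node identifiers
  'inlong.metric.labels' = 'groupId=demo_group&streamId=demo_stream&nodeId=demo_node'
);
```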
 
 ## Data Type Mapping
 
diff --git a/versioned_docs/version-1.4.0/modules/sort/metrics.md b/versioned_docs/version-1.4.0/modules/sort/metrics.md
index bf56b4ff3d..03b4523af5 100644
--- a/versioned_docs/version-1.4.0/modules/sort/metrics.md
+++ b/versioned_docs/version-1.4.0/modules/sort/metrics.md
@@ -5,23 +5,23 @@ sidebar_position: 4
 
 ## Overview
 
-We add metric computing for node. Sort will compute metric when user just need add with option `inlong.metric.labels` that includes `groupId=xxgroup&streamId=xxstream&nodeId=xxnode`.
+We add metric computing for each node. Sort computes metrics as long as the user adds the option `inlong.metric.labels`, whose value has the format groupId=`{groupId}`&streamId=`{streamId}`&nodeId=`{nodeId}`.
 Sort exports metrics through the Flink metric group, so users can use a [metric reporter](https://nightlies.apache.org/flink/flink-docs-release-1.13/zh/docs/deployment/metric_reporters/) to collect metric data.
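
As one possible setup, a minimal `flink-conf.yaml` snippet that exposes these metrics through Flink's bundled Prometheus reporter could look like this (the port range is an arbitrary choice, and the `flink-metrics-prometheus` jar must be on Flink's classpath):

```yaml
# Register the Prometheus reporter (Flink 1.13 class-based config)
metrics.reporter.prom.class: org.apache.flink.metrics.prometheus.PrometheusReporter
# Port (or port range) the reporter's HTTP endpoint listens on
metrics.reporter.prom.port: 9250-9260
```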
 
 ## Metric
 
-### supporting extract node
+### Supporting extract node
 
-| metric name | extract node | description |
+| Metric name | Extract node | Description |
 |-------------|--------------|-------------|
 | groupId_streamId_nodeId_numRecordsIn | kafka,mongodb-cdc,mysql-cdc,oracle-cdc,postgresql-cdc,pulsar,sqlserver-cdc | number of input records |
 | groupId_streamId_nodeId_numBytesIn | kafka,mongodb-cdc,mysql-cdc,oracle-cdc,postgresql-cdc,pulsar,sqlserver-cdc | number of input bytes |
 | groupId_streamId_nodeId_numRecordsInPerSecond | kafka,mongodb-cdc,mysql-cdc,oracle-cdc,postgresql-cdc,pulsar,sqlserver-cdc | input records per second |
 | groupId_streamId_nodeId_numBytesInPerSecond | kafka,mongodb-cdc,mysql-cdc,oracle-cdc,postgresql-cdc,pulsar,sqlserver-cdc | input bytes per second |
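
For example, with `inlong.metric.labels` set to `groupId=my_group&streamId=my_stream&nodeId=my_node`, the first counter above would be reported as `my_group_my_stream_my_node_numRecordsIn`.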
 
-### supporting load node
+### Supporting load node
 
-| metric name | load node | description |
+| Metric name | Load node | Description |
 |-------------|-----------|-------------|
 | groupId_streamId_nodeId_numRecordsOut | clickhouse,elasticsearch,greenplum,hbase,<br/>hdfs,hive,iceberg,kafka,mysql,<br/>oracle,postgresql,sqlserver,tdsql-postgresql | number of output records |
 | groupId_streamId_nodeId_numBytesOut | clickhouse,elasticsearch,greenplum,hbase,<br/>hdfs,hive,iceberg,kafka,mysql,<br/>oracle,postgresql,sqlserver,tdsql-postgresql | number of output bytes |