Posted to commits@inlong.apache.org by GitBox <gi...@apache.org> on 2023/01/10 09:33:35 UTC

[GitHub] [inlong-website] gong opened a new pull request, #672: [INLONG-661][Sort] Add more description for metric

gong opened a new pull request, #672:
URL: https://github.com/apache/inlong-website/pull/672

   ### Prepare a Pull Request
   
   - [INLONG-661][Sort] Add more description for metric
   
   - Fixes #661 
   
   ### Motivation
   * Add more description for metric
   
   ### Modifications
   
   * Add descriptions for table-level metrics and dirty-data metrics
   * Fix missing documentation for metrics
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@inlong.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


[GitHub] [inlong-website] gong commented on a diff in pull request #672: [INLONG-661][Sort] Add more description for metric

Posted by GitBox <gi...@apache.org>.
gong commented on code in PR #672:
URL: https://github.com/apache/inlong-website/pull/672#discussion_r1066626259


##########
i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/data_node/load_node/clickhouse.md:
##########
@@ -98,7 +98,7 @@ TODO: 将在未来支持此功能。
 | sink.max-retries | 可选 | 3 | Integer | 写入记录到数据库失败后的最大重试次数。 |
 | sink.parallelism | 可选 | (none) | Integer | 用于定义 JDBC sink 算子的并行度。默认情况下,并行度是由框架决定:使用与上游链式算子相同的并行度。 |
 | sink.ignore.changelog | 可选 | false | Boolean |  忽略所有 RowKind 类型的变更日志,将它们当作 INSERT 的数据来采集。 |
-| inlong.metric | 可选 | (none) | String | inlong metric 的标签值,该值的构成为groupId&streamId&nodeId。|
+| inlong.metric.labels | 可选 | (none) | String | inlong metric 的标签值,该值的构成为groupId=xxgroup&streamId=xxstream&nodeId=xxnode。|

Review Comment:
   done
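For context, the corrected `inlong.metric.labels` value in the diff above is an `&`-separated list of `key=value` pairs. A minimal sketch of parsing such a label string, in Python for illustration (InLong Sort itself is Java; `parse_metric_labels` is a hypothetical helper, and the format is assumed from the diff):

```python
def parse_metric_labels(value: str) -> dict:
    """Parse an inlong.metric.labels value such as
    'groupId=test_group&streamId=test_stream&nodeId=test_node'
    into a {label: value} dict. Hypothetical helper for illustration."""
    labels = {}
    for pair in value.split("&"):
        key, _, val = pair.partition("=")
        labels[key] = val
    return labels

labels = parse_metric_labels("groupId=test_group&streamId=test_stream&nodeId=test_node")
# → {'groupId': 'test_group', 'streamId': 'test_stream', 'nodeId': 'test_node'}
```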





[GitHub] [inlong-website] EMsnap merged pull request #672: [INLONG-661][Sort] Add more description for metric

Posted by GitBox <gi...@apache.org>.
EMsnap merged PR #672:
URL: https://github.com/apache/inlong-website/pull/672




[GitHub] [inlong-website] yunqingmoswu commented on a diff in pull request #672: [INLONG-661][Sort] Add more description for metric

Posted by GitBox <gi...@apache.org>.
yunqingmoswu commented on code in PR #672:
URL: https://github.com/apache/inlong-website/pull/672#discussion_r1066646201


##########
docs/modules/sort/metrics.md:
##########
@@ -12,21 +12,68 @@ Sort will export metric by flink metric group, So user can use [metric reporter]
 
 ### supporting extract node

Review Comment:
   Capitalize the first letter.



##########
docs/modules/sort/metrics.md:
##########
@@ -12,21 +12,68 @@ Sort will export metric by flink metric group, So user can use [metric reporter]
 
 ### supporting extract node
 
+#### node level metric
+
 | metric name | extract node | description |

Review Comment:
   Capitalize the first letter.



##########
docs/modules/sort/metrics.md:
##########
@@ -12,21 +12,68 @@ Sort will export metric by flink metric group, So user can use [metric reporter]
 
 ### supporting extract node
 
+#### node level metric
+
 | metric name | extract node | description |
 |-------------|--------------|-------------|
 | groupId_streamId_nodeId_numRecordsIn | kafka,mongodb-cdc,mysql-cdc,oracle-cdc,postgresql-cdc,pulsar,sqlserver-cdc | input records number |
 | groupId_streamId_nodeId_numBytesIn | kafka,mongodb-cdc,mysql-cdc,oracle-cdc,postgresql-cdc,pulsar,sqlserver-cdc | input bytes number |
 | groupId_streamId_nodeId_numRecordsInPerSecond | kafka,mongodb-cdc,mysql-cdc,oracle-cdc,postgresql-cdc,pulsar,sqlserver-cdc | input records per second |
 | groupId_streamId_nodeId_numBytesInPerSecond | kafka,mongodb-cdc,mysql-cdc,oracle-cdc,postgresql-cdc,pulsar,sqlserver-cdc | input bytes number per second |
 
+#### table level metric
+ It is used for all database sync.
+
+| metric name | extract node | description |
+|-------------|--------------|-------------|
+| groupId_streamId_nodeId_database_table_numRecordsIn | mysql-cdc | input records number |
+| groupId_streamId_nodeId_database_schema_table_numRecordsIn | oracle-cdc,postgresql-cdc | input records number |
+| groupId_streamId_nodeId_database_collection_numRecordsIn | mongodb-cdc | input records number |
+| groupId_streamId_nodeId_database_table_numBytesIn | mysql-cdc | input records number |
+| groupId_streamId_nodeId_database_schema_table_numBytesIn | oracle-cdc,postgresql-cdc | input records number |
+| groupId_streamId_nodeId_database_collection_numBytesIn | mongodb-cdc | input bytes number |
+| groupId_streamId_nodeId_database_table_numRecordsInPerSecond | mysql-cdc | input records number per second |
+| groupId_streamId_nodeId_database_schema_table_numRecordsInPerSecond | oracle-cdc,postgresql-cdc | input records number per second |
+| groupId_streamId_nodeId_database_collection_numRecordsInPerSecond | mongodb-cdc | input records number per second |
+| groupId_streamId_nodeId_database_table_numBytesInPerSecond | mysql-cdc | input bytes number per second |
+| groupId_streamId_nodeId_database_schema_table_numBytesInPerSecond | oracle-cdc,postgresql-cdc | input bytes number per second |
+| groupId_streamId_nodeId_database_collection_numBytesInPerSecond | mongodb-cdc | input bytes number per second |
+
 ### supporting load node
 
+#### node level metric

Review Comment:
   Capitalize the first letter.



##########
docs/modules/sort/metrics.md:
##########
@@ -12,21 +12,68 @@ Sort will export metric by flink metric group, So user can use [metric reporter]
 
 ### supporting extract node
 
+#### node level metric
+
 | metric name | extract node | description |
 |-------------|--------------|-------------|
 | groupId_streamId_nodeId_numRecordsIn | kafka,mongodb-cdc,mysql-cdc,oracle-cdc,postgresql-cdc,pulsar,sqlserver-cdc | input records number |
 | groupId_streamId_nodeId_numBytesIn | kafka,mongodb-cdc,mysql-cdc,oracle-cdc,postgresql-cdc,pulsar,sqlserver-cdc | input bytes number |
 | groupId_streamId_nodeId_numRecordsInPerSecond | kafka,mongodb-cdc,mysql-cdc,oracle-cdc,postgresql-cdc,pulsar,sqlserver-cdc | input records per second |
 | groupId_streamId_nodeId_numBytesInPerSecond | kafka,mongodb-cdc,mysql-cdc,oracle-cdc,postgresql-cdc,pulsar,sqlserver-cdc | input bytes number per second |
 
+#### table level metric
+ It is used for all database sync.
+
+| metric name | extract node | description |
+|-------------|--------------|-------------|
+| groupId_streamId_nodeId_database_table_numRecordsIn | mysql-cdc | input records number |
+| groupId_streamId_nodeId_database_schema_table_numRecordsIn | oracle-cdc,postgresql-cdc | input records number |
+| groupId_streamId_nodeId_database_collection_numRecordsIn | mongodb-cdc | input records number |
+| groupId_streamId_nodeId_database_table_numBytesIn | mysql-cdc | input records number |
+| groupId_streamId_nodeId_database_schema_table_numBytesIn | oracle-cdc,postgresql-cdc | input records number |
+| groupId_streamId_nodeId_database_collection_numBytesIn | mongodb-cdc | input bytes number |
+| groupId_streamId_nodeId_database_table_numRecordsInPerSecond | mysql-cdc | input records number per second |
+| groupId_streamId_nodeId_database_schema_table_numRecordsInPerSecond | oracle-cdc,postgresql-cdc | input records number per second |
+| groupId_streamId_nodeId_database_collection_numRecordsInPerSecond | mongodb-cdc | input records number per second |
+| groupId_streamId_nodeId_database_table_numBytesInPerSecond | mysql-cdc | input bytes number per second |
+| groupId_streamId_nodeId_database_schema_table_numBytesInPerSecond | oracle-cdc,postgresql-cdc | input bytes number per second |
+| groupId_streamId_nodeId_database_collection_numBytesInPerSecond | mongodb-cdc | input bytes number per second |
+
 ### supporting load node

Review Comment:
   Capitalize the first letter.
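The table-level metric names in the diff above follow a fixed pattern: the node-level prefix `groupId_streamId_nodeId` plus connector-specific identifiers (`database_table` for mysql-cdc, `database_schema_table` for oracle-cdc and postgresql-cdc, `database_collection` for mongodb-cdc), then the metric suffix. A sketch of assembling such names (illustrative Python, not the actual Sort implementation; `table_metric_name` is hypothetical):

```python
# Connector-specific identifier components, as listed in the metric tables.
TABLE_PARTS = {
    "mysql-cdc": ("database", "table"),
    "oracle-cdc": ("database", "schema", "table"),
    "postgresql-cdc": ("database", "schema", "table"),
    "mongodb-cdc": ("database", "collection"),
}

def table_metric_name(group_id, stream_id, node_id, connector, metric, **ids):
    """Build a name like 'groupId_streamId_nodeId_database_table_numRecordsIn'
    from its parts. Hypothetical helper for illustration."""
    parts = [group_id, stream_id, node_id]
    parts += [ids[p] for p in TABLE_PARTS[connector]]
    parts.append(metric)
    return "_".join(parts)

name = table_metric_name("g1", "s1", "n1", "mysql-cdc", "numRecordsIn",
                         database="db", table="users")
# → 'g1_s1_n1_db_users_numRecordsIn'
```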



##########
docs/modules/sort/metrics.md:
##########
@@ -12,21 +12,68 @@ Sort will export metric by flink metric group, So user can use [metric reporter]
 
 ### supporting extract node
 
+#### node level metric

Review Comment:
   Capitalize the first letter.



##########
i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/modules/sort/metrics.md:
##########
@@ -12,21 +12,68 @@ sidebar_position: 4
 
 ### 支持的 extract 节点
 
+#### 支持节点级别指标
+
 | 指标名 | extract 节点 | 描述 |
 |-------------|--------------|-------------|
 | groupId_streamId_nodeId_numRecordsIn | kafka,mongodb-cdc,mysql-cdc,oracle-cdc,postgresql-cdc,pulsar,sqlserver-cdc | 输入记录数 |
 | groupId_streamId_nodeId_numBytesIn | kafka,mongodb-cdc,mysql-cdc,oracle-cdc,postgresql-cdc,pulsar,sqlserver-cdc | 输入字节数 |
 | groupId_streamId_nodeId_numRecordsInPerSecond | kafka,mongodb-cdc,mysql-cdc,oracle-cdc,postgresql-cdc,pulsar,sqlserver-cdc | 每秒输入记录数 |
 | groupId_streamId_nodeId_numBytesInPerSecond | kafka,mongodb-cdc,mysql-cdc,oracle-cdc,postgresql-cdc,pulsar,sqlserver-cdc | 每秒输入字节数 |
 
+#### 支持表级别指标
+它是用于整库同步场景
+
+| metric name | extract node | description |

Review Comment:
   Capitalize the first letter; also, translate this?



##########
docs/modules/sort/metrics.md:
##########
@@ -12,21 +12,68 @@ Sort will export metric by flink metric group, So user can use [metric reporter]
 
 ### supporting extract node
 
+#### node level metric
+
 | metric name | extract node | description |
 |-------------|--------------|-------------|
 | groupId_streamId_nodeId_numRecordsIn | kafka,mongodb-cdc,mysql-cdc,oracle-cdc,postgresql-cdc,pulsar,sqlserver-cdc | input records number |
 | groupId_streamId_nodeId_numBytesIn | kafka,mongodb-cdc,mysql-cdc,oracle-cdc,postgresql-cdc,pulsar,sqlserver-cdc | input bytes number |
 | groupId_streamId_nodeId_numRecordsInPerSecond | kafka,mongodb-cdc,mysql-cdc,oracle-cdc,postgresql-cdc,pulsar,sqlserver-cdc | input records per second |
 | groupId_streamId_nodeId_numBytesInPerSecond | kafka,mongodb-cdc,mysql-cdc,oracle-cdc,postgresql-cdc,pulsar,sqlserver-cdc | input bytes number per second |
 
+#### table level metric
+ It is used for all database sync.
+
+| metric name | extract node | description |
+|-------------|--------------|-------------|
+| groupId_streamId_nodeId_database_table_numRecordsIn | mysql-cdc | input records number |
+| groupId_streamId_nodeId_database_schema_table_numRecordsIn | oracle-cdc,postgresql-cdc | input records number |
+| groupId_streamId_nodeId_database_collection_numRecordsIn | mongodb-cdc | input records number |
+| groupId_streamId_nodeId_database_table_numBytesIn | mysql-cdc | input records number |
+| groupId_streamId_nodeId_database_schema_table_numBytesIn | oracle-cdc,postgresql-cdc | input records number |
+| groupId_streamId_nodeId_database_collection_numBytesIn | mongodb-cdc | input bytes number |
+| groupId_streamId_nodeId_database_table_numRecordsInPerSecond | mysql-cdc | input records number per second |
+| groupId_streamId_nodeId_database_schema_table_numRecordsInPerSecond | oracle-cdc,postgresql-cdc | input records number per second |
+| groupId_streamId_nodeId_database_collection_numRecordsInPerSecond | mongodb-cdc | input records number per second |
+| groupId_streamId_nodeId_database_table_numBytesInPerSecond | mysql-cdc | input bytes number per second |
+| groupId_streamId_nodeId_database_schema_table_numBytesInPerSecond | oracle-cdc,postgresql-cdc | input bytes number per second |
+| groupId_streamId_nodeId_database_collection_numBytesInPerSecond | mongodb-cdc | input bytes number per second |
+
 ### supporting load node
 
+#### node level metric
+
 | metric name | load node | description |
 |-------------|-----------|-------------|
 | groupId_streamId_nodeId_numRecordsOut | clickhouse,elasticsearch,greenplum,hbase,<br/>hdfs,hive,iceberg,kafka,mysql,<br/>oracle,postgresql,sqlserver,tdsql-postgresql | out records number |
 | groupId_streamId_nodeId_numBytesOut |  clickhouse,elasticsearch,greenplum,hbase,<br/>hdfs,hive,iceberg,kafka,mysql,<br/>oracle,postgresql,sqlserver,tdsql-postgresql | output byte number |
 | groupId_streamId_nodeId_numRecordsOutPerSecond |  clickhouse,elasticsearch,greenplum,hbase,<br/>hdfs,hive,iceberg,kafka,mysql,<br/>oracle,postgresql,sqlserver,tdsql-postgresql | output records per second |
 | groupId_streamId_nodeId_numBytesOutPerSecond |  clickhouse,elasticsearch,greenplum,hbase,<br/>hdfs,hive,iceberg,kafka,mysql,<br/>oracle,postgresql,sqlserver,tdsql-postgresql | output bytes  per second |
+| groupId_streamId_nodeId_dirtyRecordsOut |  clickhouse,elasticsearch,greenplum,hbase,<br/>hdfs,hive,iceberg,kafka,mysql,<br/>oracle,postgresql,sqlserver,tdsql-postgresql | output records |
+| groupId_streamId_nodeId_dirtyBytesOut |  clickhouse,elasticsearch,greenplum,hbase,<br/>hdfs,hive,iceberg,kafka,mysql,<br/>oracle,postgresql,sqlserver,tdsql-postgresql | output bytes |
+
+#### table level metric

Review Comment:
   Capitalize the first letter.



##########
i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/modules/sort/metrics.md:
##########
@@ -12,21 +12,68 @@ sidebar_position: 4
 
 ### 支持的 extract 节点
 
+#### 支持节点级别指标
+
 | 指标名 | extract 节点 | 描述 |
 |-------------|--------------|-------------|
 | groupId_streamId_nodeId_numRecordsIn | kafka,mongodb-cdc,mysql-cdc,oracle-cdc,postgresql-cdc,pulsar,sqlserver-cdc | 输入记录数 |
 | groupId_streamId_nodeId_numBytesIn | kafka,mongodb-cdc,mysql-cdc,oracle-cdc,postgresql-cdc,pulsar,sqlserver-cdc | 输入字节数 |
 | groupId_streamId_nodeId_numRecordsInPerSecond | kafka,mongodb-cdc,mysql-cdc,oracle-cdc,postgresql-cdc,pulsar,sqlserver-cdc | 每秒输入记录数 |
 | groupId_streamId_nodeId_numBytesInPerSecond | kafka,mongodb-cdc,mysql-cdc,oracle-cdc,postgresql-cdc,pulsar,sqlserver-cdc | 每秒输入字节数 |
 
+#### 支持表级别指标
+它是用于整库同步场景
+
+| metric name | extract node | description |
+|-------------|--------------|-------------|
+| groupId_streamId_nodeId_database_table_numRecordsIn | mysql-cdc | input records number |
+| groupId_streamId_nodeId_database_schema_table_numRecordsIn | oracle-cdc,postgresql-cdc | input records number |
+| groupId_streamId_nodeId_database_collection_numRecordsIn | mongodb-cdc | input records number |
+| groupId_streamId_nodeId_database_table_numBytesIn | mysql-cdc | input records number |
+| groupId_streamId_nodeId_database_schema_table_numBytesIn | oracle-cdc,postgresql-cdc | input records number |
+| groupId_streamId_nodeId_database_collection_numBytesIn | mongodb-cdc | input bytes number |
+| groupId_streamId_nodeId_database_table_numRecordsInPerSecond | mysql-cdc | input records number per second |
+| groupId_streamId_nodeId_database_schema_table_numRecordsInPerSecond | oracle-cdc,postgresql-cdc | input records number per second |
+| groupId_streamId_nodeId_database_collection_numRecordsInPerSecond | mongodb-cdc | input records number per second |
+| groupId_streamId_nodeId_database_table_numBytesInPerSecond | mysql-cdc | input bytes number per second |
+| groupId_streamId_nodeId_database_schema_table_numBytesInPerSecond | oracle-cdc,postgresql-cdc | input bytes number per second |
+| groupId_streamId_nodeId_database_collection_numBytesInPerSecond | mongodb-cdc | input bytes number per second |
+
 ### 支持的 load 节点
 
+#### 支持节点级别指标
+
 | 指标名 | load 节点 | 描述 |

Review Comment:
   Capitalize the first letter.



##########
i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/modules/sort/metrics.md:
##########
@@ -12,21 +12,68 @@ sidebar_position: 4
 
 ### 支持的 extract 节点
 
+#### 支持节点级别指标
+
 | 指标名 | extract 节点 | 描述 |
 |-------------|--------------|-------------|
 | groupId_streamId_nodeId_numRecordsIn | kafka,mongodb-cdc,mysql-cdc,oracle-cdc,postgresql-cdc,pulsar,sqlserver-cdc | 输入记录数 |
 | groupId_streamId_nodeId_numBytesIn | kafka,mongodb-cdc,mysql-cdc,oracle-cdc,postgresql-cdc,pulsar,sqlserver-cdc | 输入字节数 |
 | groupId_streamId_nodeId_numRecordsInPerSecond | kafka,mongodb-cdc,mysql-cdc,oracle-cdc,postgresql-cdc,pulsar,sqlserver-cdc | 每秒输入记录数 |
 | groupId_streamId_nodeId_numBytesInPerSecond | kafka,mongodb-cdc,mysql-cdc,oracle-cdc,postgresql-cdc,pulsar,sqlserver-cdc | 每秒输入字节数 |
 
+#### 支持表级别指标
+它是用于整库同步场景
+
+| metric name | extract node | description |
+|-------------|--------------|-------------|
+| groupId_streamId_nodeId_database_table_numRecordsIn | mysql-cdc | input records number |
+| groupId_streamId_nodeId_database_schema_table_numRecordsIn | oracle-cdc,postgresql-cdc | input records number |
+| groupId_streamId_nodeId_database_collection_numRecordsIn | mongodb-cdc | input records number |
+| groupId_streamId_nodeId_database_table_numBytesIn | mysql-cdc | input records number |
+| groupId_streamId_nodeId_database_schema_table_numBytesIn | oracle-cdc,postgresql-cdc | input records number |
+| groupId_streamId_nodeId_database_collection_numBytesIn | mongodb-cdc | input bytes number |
+| groupId_streamId_nodeId_database_table_numRecordsInPerSecond | mysql-cdc | input records number per second |
+| groupId_streamId_nodeId_database_schema_table_numRecordsInPerSecond | oracle-cdc,postgresql-cdc | input records number per second |
+| groupId_streamId_nodeId_database_collection_numRecordsInPerSecond | mongodb-cdc | input records number per second |
+| groupId_streamId_nodeId_database_table_numBytesInPerSecond | mysql-cdc | input bytes number per second |
+| groupId_streamId_nodeId_database_schema_table_numBytesInPerSecond | oracle-cdc,postgresql-cdc | input bytes number per second |
+| groupId_streamId_nodeId_database_collection_numBytesInPerSecond | mongodb-cdc | input bytes number per second |
+
 ### 支持的 load 节点
 
+#### 支持节点级别指标
+
 | 指标名 | load 节点 | 描述 |
 |-------------|-----------|-------------|
 | groupId_streamId_nodeId_numRecordsOut | clickhouse,elasticsearch,greenplum,hbase,<br/>hdfs,hive,iceberg,kafka,<br/>mysql,oracle,postgresql,sqlserver,tdsql-postgresql | 输出记录数 |
 | groupId_streamId_nodeId_numBytesOut |  clickhouse,elasticsearch,greenplum,hbase,<br/>hdfs,hive,iceberg,kafka,<br/>mysql,oracle,postgresql,sqlserver,tdsql-postgresql | 输出字节数 |
 | groupId_streamId_nodeId_numRecordsOutPerSecond |  clickhouse,elasticsearch,greenplum,<br/>hbase,hdfs,hive,iceberg,<br/>kafka,mysql,oracle,postgresql,sqlserver,tdsql-postgresql | 每秒输出记录数 |
 | groupId_streamId_nodeId_numBytesOutPerSecond |  clickhouse,elasticsearch,greenplum,<br/>hbase,hdfs,hive,iceberg,kafka,<br/>mysql,oracle,postgresql,sqlserver,tdsql-postgresql | 每秒输出字节数 |
+| groupId_streamId_nodeId_dirtyRecordsOut |  clickhouse,elasticsearch,greenplum,hbase,<br/>hdfs,hive,iceberg,kafka,mysql,<br/>oracle,postgresql,sqlserver,tdsql-postgresql | output records |
+| groupId_streamId_nodeId_dirtyBytesOut |  clickhouse,elasticsearch,greenplum,hbase,<br/>hdfs,hive,iceberg,kafka,mysql,<br/>oracle,postgresql,sqlserver,tdsql-postgresql | output bytes |
+
+#### 支持表级别指标
+
+| metric name | load node | description |

Review Comment:
   Capitalize the first letter.





[GitHub] [inlong-website] dockerzhang commented on a diff in pull request #672: [INLONG-661][Sort] Add more description for metric

Posted by GitBox <gi...@apache.org>.
dockerzhang commented on code in PR #672:
URL: https://github.com/apache/inlong-website/pull/672#discussion_r1066618350


##########
i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/data_node/load_node/clickhouse.md:
##########
@@ -98,7 +98,7 @@ TODO: 将在未来支持此功能。
 | sink.max-retries | 可选 | 3 | Integer | 写入记录到数据库失败后的最大重试次数。 |
 | sink.parallelism | 可选 | (none) | Integer | 用于定义 JDBC sink 算子的并行度。默认情况下,并行度是由框架决定:使用与上游链式算子相同的并行度。 |
 | sink.ignore.changelog | 可选 | false | Boolean |  忽略所有 RowKind 类型的变更日志,将它们当作 INSERT 的数据来采集。 |
-| inlong.metric | 可选 | (none) | String | inlong metric 的标签值,该值的构成为groupId&streamId&nodeId。|
+| inlong.metric.labels | 可选 | (none) | String | inlong metric 的标签值,该值的构成为groupId=xxgroup&streamId=xxstream&nodeId=xxnode。|

Review Comment:
   It's better to avoid using `xx`. here we can use `${group_Id}`, `${stream_Id}` and `${node_Id}`.
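With this suggestion, the documented example would read `groupId=${group_Id}&streamId=${stream_Id}&nodeId=${node_Id}`. A sketch of filling such placeholders into a concrete option value, using Python's standard `string.Template` purely for illustration (the placeholder names are taken from the comment above):

```python
from string import Template

# Documented placeholder form suggested in the review comment.
LABELS_TEMPLATE = Template("groupId=${group_Id}&streamId=${stream_Id}&nodeId=${node_Id}")

# Substitute concrete identifiers (example values, not real InLong IDs).
value = LABELS_TEMPLATE.substitute(
    group_Id="test_group", stream_Id="test_stream", node_Id="test_node")
# → 'groupId=test_group&streamId=test_stream&nodeId=test_node'
```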


