Posted to issues@flink.apache.org by "michaelxiang (Jira)" <ji...@apache.org> on 2022/06/23 02:47:00 UTC

[jira] [Updated] (FLINK-28205) memory leak in the timing refresh of the jdbc-connector

     [ https://issues.apache.org/jira/browse/FLINK-28205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

michaelxiang updated FLINK-28205:
---------------------------------
    Description: 
Bug position: org.apache.flink.connector.jdbc.internal.JdbcBatchingOutputFormat.scheduler

When writing with the jdbc-connector, a RuntimeException thrown while the scheduled thread processes the flush is caught, so the Flink task does not fail and exit until new data arrives. During this time, the scheduled thread keeps wrapping the previous flushException in a newly created RuntimeException. None of these flushException references can be released, so they cannot be reclaimed by the GC, resulting in a memory leak.
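The leak mechanism described above can be sketched as follows. This is a minimal, self-contained simulation of the pattern (not the actual Flink connector code): each scheduled tick wraps the previous flushException as the cause of a new RuntimeException, so the entire chain of exceptions stays strongly reachable and grows without bound until the task finally fails.

```java
// Minimal sketch of the reported leak pattern (hypothetical names,
// not the real JdbcBatchingOutputFormat code): on every failing tick
// the old exception becomes the cause of a new one, so no exception
// in the chain is ever eligible for garbage collection.
public class FlushExceptionLeak {

    // Simulates 'ticks' consecutive failing flush attempts by the
    // scheduled thread: each failure wraps the previous exception
    // instead of replacing it.
    static RuntimeException simulateTicks(int ticks) {
        RuntimeException flushException = null;
        for (int i = 0; i < ticks; i++) {
            flushException =
                new RuntimeException("Writing records to JDBC failed.", flushException);
        }
        return flushException;
    }

    // Walks the cause chain to count how many exception objects are
    // kept alive by a single reference to the newest one.
    static int chainDepth(Throwable t) {
        int depth = 0;
        for (Throwable c = t; c != null; c = c.getCause()) {
            depth++;
        }
        return depth;
    }

    public static void main(String[] args) {
        // After 1000 failing ticks, one reference pins 1000 exceptions
        // (plus their stack traces) in memory.
        RuntimeException leaked = simulateTicks(1000);
        System.out.println("chain depth: " + chainDepth(leaked));
    }
}
```

A fix along these lines would store only the first (or most recent) failure, e.g. keep the existing flushException unchanged on subsequent failures rather than wrapping it again, so the chain never grows.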

  was:
Class path: org.apache.flink.connector.jdbc.internal.JdbcBatchingOutputFormat

Bug location: the open method, the scheduled thread's Runnable instance

When writing with flink-connector-jdbc, the RuntimeException raised when the scheduled flush thread hits an error is caught, so the task does not fail and exit before new data arrives. The scheduled thread therefore keeps wrapping the previously created flushException in a new RuntimeException. The flushException references can never be released for GC collection, which leads to a memory leak.

        Summary: memory leak in the timing refresh of the jdbc-connector  (was: memory leak bug in the jdbc connector's scheduled flush)

> memory leak in the timing refresh of the jdbc-connector
> -------------------------------------------------------
>
>                 Key: FLINK-28205
>                 URL: https://issues.apache.org/jira/browse/FLINK-28205
>             Project: Flink
>          Issue Type: Bug
>          Components: Connectors / JDBC
>    Affects Versions: 1.15.0, 1.13.6, 1.14.5
>            Reporter: michaelxiang
>            Priority: Major
>
> Bug position: org.apache.flink.connector.jdbc.internal.JdbcBatchingOutputFormat.scheduler
> When writing with the jdbc-connector, a RuntimeException thrown while the scheduled thread processes the flush is caught, so the Flink task does not fail and exit until new data arrives. During this time, the scheduled thread keeps wrapping the previous flushException in a newly created RuntimeException. None of these flushException references can be released, so they cannot be reclaimed by the GC, resulting in a memory leak.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)