Posted to issues@flink.apache.org by "Qingsheng Ren (Jira)" <ji...@apache.org> on 2022/07/18 08:45:00 UTC

[jira] [Resolved] (FLINK-28250) exactly-once sink kafka cause out of memory

     [ https://issues.apache.org/jira/browse/FLINK-28250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Qingsheng Ren resolved FLINK-28250.
-----------------------------------
    Fix Version/s: 1.15.2
     Release Note: 
master: 74f90d722f7be5db5298b84626935a585391f0df
release-1.15: adbf09fb941c8f793df6d322ed95df87bc4254f3
       Resolution: Fixed

> exactly-once sink kafka cause out of memory
> -------------------------------------------
>
>                 Key: FLINK-28250
>                 URL: https://issues.apache.org/jira/browse/FLINK-28250
>             Project: Flink
>          Issue Type: Bug
>          Components: Connectors / Kafka
>    Affects Versions: 1.15.0
>         Environment: *flink version:* flink-1.15.0
> *tm:* parallelism 8, 1 slot, 2 GB
> CentOS 7
>            Reporter: jinshuangxian
>            Assignee: Charles Tan
>            Priority: Critical
>              Labels: pull-request-available
>             Fix For: 1.15.2
>
>         Attachments: image-2022-06-25-22-07-35-686.png, image-2022-06-25-22-07-54-649.png, image-2022-06-25-22-08-04-891.png, image-2022-06-25-22-08-15-024.png
>
>
> *my sql code:*
> CREATE TABLE sourceTable (
>   data BYTES
> ) WITH (
>   'connector' = 'kafka',
>   'topic' = 'topic1',
>   'properties.bootstrap.servers' = 'host1',
>   'properties.group.id' = 'gorup1',
>   'scan.startup.mode' = 'latest-offset',
>   'format' = 'raw'
> );
>  
> CREATE TABLE sinkTable (
>   data BYTES
> ) WITH (
>   'connector' = 'kafka',
>   'topic' = 'topic2',
>   'properties.bootstrap.servers' = 'host2',
>   'properties.transaction.timeout.ms' = '30000',
>   'sink.semantic' = 'exactly-once',
>   'sink.transactional-id-prefix' = 'xx-kafka-sink-a',
>   'format' = 'raw'
> );
> INSERT INTO sinkTable
> SELECT data
> FROM sourceTable;
>  
> *problem:*
> After the job had been running online for about half an hour, frequent full GCs started to appear.
>  
> *Troubleshoot:*
> I used 'jmap -dump:live,format=b,file=/tmp/dump2.hprof <pid>' to dump the heap of the problematic TM and found 115 FlinkKafkaInternalProducer instances in it, which is not normal.
> !image-2022-06-25-22-07-54-649.png!!image-2022-06-25-22-07-35-686.png!
> Reading the KafkaCommitter code shows that the producer is only recycled back to the pool in the exception branches; after a successful commit it is never recycled.
> !image-2022-06-25-22-08-04-891.png!
> I added a few lines of code to recycle the producer after a successful commit as well (a sketch of the change follows the screenshot below). After testing online, the job runs normally and the OOM problem is gone.
> !image-2022-06-25-22-08-15-024.png!
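> For illustration, here is a minimal sketch of the kind of change, written against the structure of the 1.15 KafkaCommitter.
> It is not the exact committed patch (see the commit hashes above), and the names used here (CommitRequest, KafkaCommittable#getProducer, Recyclable, getRecoveryProducer, signalFailedWithUnknownReason) are assumptions based on that code base:
> // Inside KafkaCommitter#commit(Collection<CommitRequest<KafkaCommittable>> requests)
> for (CommitRequest<KafkaCommittable> request : requests) {
>     KafkaCommittable committable = request.getCommittable();
>     // Pooled producer handed over by the writer, if any.
>     Optional<Recyclable<? extends FlinkKafkaInternalProducer<?, ?>>> recyclable =
>             committable.getProducer();
>     FlinkKafkaInternalProducer<?, ?> producer =
>             recyclable.map(r -> (FlinkKafkaInternalProducer<?, ?>) r.getObject())
>                     .orElseGet(() -> getRecoveryProducer(committable));
>     try {
>         producer.commitTransaction();
>         // Added: return the pooled producer after a successful commit as well,
>         // not only in the exception branches. Without this, one
>         // FlinkKafkaInternalProducer is leaked per committed transaction.
>         recyclable.ifPresent(Recyclable::close);
>     } catch (Exception e) {
>         // Existing error handling (unchanged) already recycles the producer here.
>         recyclable.ifPresent(Recyclable::close);
>         request.signalFailedWithUnknownReason(e);
>     }
> }
> The only functional change is the recyclable.ifPresent(Recyclable::close) call in the success path; the rest mirrors the existing commit loop.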



--
This message was sent by Atlassian Jira
(v8.20.10#820010)