Posted to issues@flink.apache.org by "YufeiLiu (Jira)" <ji...@apache.org> on 2019/09/19 08:59:00 UTC

[jira] [Updated] (FLINK-14124) potential memory leak in netty server

     [ https://issues.apache.org/jira/browse/FLINK-14124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

YufeiLiu updated FLINK-14124:
-----------------------------
    Description: 
I have a job running on Flink 1.4.2; the last stage of the job uses the Phoenix JDBC driver to write records into Apache Phoenix.
_mqStream
            .keyBy(0)
            .window(TumblingProcessingTimeWindows.of(Time.of(300, TimeUnit.SECONDS)))
            .process(new MyProcessWindowFunction())
            .addSink(new PhoenixSinkFunction());_
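
For reference, a minimal self-contained sketch of what such a sink might look like (the class name comes from the snippet above, but the JDBC URL, table and column names are hypothetical; this is not the actual job code):

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.sink.RichSinkFunction;

/** Hypothetical sketch of a Phoenix JDBC sink; the real sink code is not attached here. */
public class PhoenixSinkFunction extends RichSinkFunction<Tuple2<String, Long>> {

    private transient Connection connection;
    private transient PreparedStatement statement;

    @Override
    public void open(Configuration parameters) throws Exception {
        // Phoenix connects through the cluster's ZooKeeper quorum (placeholder URL).
        connection = DriverManager.getConnection("jdbc:phoenix:zk-host:2181");
        statement = connection.prepareStatement("UPSERT INTO RESULTS (K, V) VALUES (?, ?)");
    }

    @Override
    public void invoke(Tuple2<String, Long> value) throws Exception {
        statement.setString(1, value.f0);
        statement.setLong(2, value.f1);
        statement.executeUpdate();
        connection.commit();
    }

    @Override
    public void close() throws Exception {
        if (statement != null) statement.close();
        if (connection != null) connection.close();
    }
}
{code}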

But the off-heap memory of the TaskManager running the sink subtask keeps increasing; more precisely, the growth appears to be caused by DirectByteBuffer allocations.
I analyzed a heap dump and found hundreds of DirectByteBuffer objects, each of them referencing over 3 MB of memory, and all of them are linked to the Flink Netty Server thread.
 !image-2019-09-19-15-53-32-294.png! 
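
To watch the direct-buffer growth without taking a full heap dump each time, the JVM's standard BufferPoolMXBean can be polled (plain JDK API, not Flink-specific; it only counts buffers allocated through ByteBuffer.allocateDirect, so it may not capture every allocation Netty makes, but it shows the trend):

{code:java}
import java.lang.management.BufferPoolMXBean;
import java.lang.management.ManagementFactory;

/** Prints the JVM's buffer pool usage; the "direct" pool is where DirectByteBuffer growth shows up. */
public class DirectMemoryProbe {
    public static void main(String[] args) {
        for (BufferPoolMXBean pool : ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class)) {
            System.out.printf("%s pool: count=%d, used=%d bytes, capacity=%d bytes%n",
                    pool.getName(), pool.getCount(), pool.getMemoryUsed(), pool.getTotalCapacity());
        }
    }
}
{code}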

It only happens in the sink task; the other nodes work fine. At first I thought it was a problem in Phoenix, but the heap dump shows the memory is consumed by Netty. I don't know much about Flink's network stack, so I would appreciate it if someone could point out the likely cause or how to dig further.


> potential memory leak in netty server
> -------------------------------------
>
>                 Key: FLINK-14124
>                 URL: https://issues.apache.org/jira/browse/FLINK-14124
>             Project: Flink
>          Issue Type: Improvement
>          Components: Runtime / Network
>    Affects Versions: 1.6.3
>            Reporter: YufeiLiu
>            Priority: Critical
>         Attachments: image-2019-09-19-15-53-32-294.png
>
>



--
This message was sent by Atlassian Jira
(v8.3.4#803005)