Posted to issues@flink.apache.org by "Till Rohrmann (JIRA)" <ji...@apache.org> on 2017/02/05 21:00:42 UTC
[jira] [Closed] (FLINK-5652) Memory leak in AsyncDataStream
[ https://issues.apache.org/jira/browse/FLINK-5652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Till Rohrmann closed FLINK-5652.
--------------------------------
Resolution: Fixed
1.3.0: 215776b81a52cd380e8ccabd65da612f77da25e6
1.2.1: 36c7de1aef7b349b9d66c9d92398f50ebec9d186
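A plausible reading of the leak (not confirmed by this email): each queue entry registers a timeout with a timer service, and if that timer registration is not cancelled when the async result arrives, the timer retains a reference to the StreamRecordQueueEntry until the timeout expires, so entries accumulate. A minimal, hypothetical sketch of the cancel-on-completion pattern, using only the Java stdlib (this is illustrative, not the actual Flink fix code):

```scala
import java.util.concurrent.{Executors, TimeUnit}

object TimeoutCancellation {
  def main(args: Array[String]): Unit = {
    val timer = Executors.newSingleThreadScheduledExecutor()

    // The scheduled timeout task holds a reference to the entry; without
    // cancellation it is retained by the timer until the timeout fires.
    val timeoutFuture = timer.schedule(new Runnable {
      override def run(): Unit = println("entry timed out")
    }, 1, TimeUnit.MINUTES)

    // Hypothetical completion signal: once the async result is in, cancel
    // the timeout so the timer drops its reference to the entry.
    val completedInTime = true
    if (completedInTime) {
      timeoutFuture.cancel(true)
    }

    println(s"timeout cancelled: ${timeoutFuture.isCancelled}")
    timer.shutdown()
  }
}
```

Without the `cancel` call, every completed entry would stay reachable from the timer's queue for the full timeout (one minute here), which matches the growth pattern described below.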
> Memory leak in AsyncDataStream
> ------------------------------
>
> Key: FLINK-5652
> URL: https://issues.apache.org/jira/browse/FLINK-5652
> Project: Flink
> Issue Type: Bug
> Components: DataStream API
> Affects Versions: 1.3.0
> Reporter: Dmitry Golubets
> Assignee: Till Rohrmann
> Fix For: 1.3.0, 1.2.1
>
>
> When the async operation timeout is > 0, the number of StreamRecordQueueEntry instances keeps growing.
> It can be easily reproduced with the following code:
> {code}
> val src: DataStream[Int] = env.fromCollection((1 to Int.MaxValue).iterator)
>
> val asyncFunction = new AsyncFunction[Int, Int] with Serializable {
>   override def asyncInvoke(input: Int, collector: AsyncCollector[Int]): Unit = {
>     collector.collect(List(input))
>   }
> }
>
> AsyncDataStream.unorderedWait(src, asyncFunction, 1, TimeUnit.MINUTES, 1).print()
> {code}
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)