Posted to issues@flink.apache.org by "Till Rohrmann (JIRA)" <ji...@apache.org> on 2017/02/03 11:12:51 UTC
[jira] [Commented] (FLINK-5652) Memory leak in AsyncDataStream
[ https://issues.apache.org/jira/browse/FLINK-5652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15851351#comment-15851351 ]
Till Rohrmann commented on FLINK-5652:
--------------------------------------
Actually, I think we can simply register a completion callback on the {{StreamRecordQueueEntry}} which cancels the {{ScheduledFuture}} of the {{TriggerTask}}. Given that the {{ProcessingTimeService}} removes the trigger tasks on cancellation, this should fix the problem.
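The idea above can be sketched in isolation. This is a minimal, self-contained illustration of the pattern, not Flink's actual internals: the {{QueueEntry}} class and its {{onComplete}} method below are hypothetical stand-ins for {{StreamRecordQueueEntry}}, and the scheduled runnable stands in for the {{TriggerTask}}. The point is only that cancelling the {{ScheduledFuture}} on completion lets the timer service drop the task instead of holding it (and everything it references) for the full timeout.

```scala
import java.util.concurrent.{Executors, ScheduledFuture, TimeUnit}
import scala.concurrent.{ExecutionContext, Promise}

// Hypothetical simplified queue entry; names are illustrative, not Flink's API.
class QueueEntry[T](implicit ec: ExecutionContext) {
  private val promise = Promise[T]()

  def complete(value: T): Unit = promise.trySuccess(value)

  // Register a callback that runs when the entry completes.
  def onComplete(callback: () => Unit): Unit =
    promise.future.onComplete(_ => callback())
}

object TimeoutCancellationSketch {
  def main(args: Array[String]): Unit = {
    implicit val ec: ExecutionContext = ExecutionContext.global
    val timerService = Executors.newSingleThreadScheduledExecutor()

    val entry = new QueueEntry[Int]

    // Schedule the timeout task (stands in for the TriggerTask).
    val timeoutFuture: ScheduledFuture[_] = timerService.schedule(new Runnable {
      def run(): Unit = entry.complete(-1) // timeout path
    }, 60, TimeUnit.SECONDS)

    // The proposed fix: once the entry completes normally, cancel the
    // scheduled timeout so the timer service can discard the task instead
    // of pinning the entry in memory until the timeout fires.
    entry.onComplete(() => timeoutFuture.cancel(false))

    entry.complete(42)  // normal completion
    Thread.sleep(200)   // give the async callback a moment to run
    println(s"timeout task cancelled: ${timeoutFuture.isCancelled}")
    timerService.shutdown()
  }
}
```

Without the {{onComplete}} cancellation, every completed entry would leave its timeout task queued in the scheduler until the full timeout elapses, which is exactly the growth in retained {{StreamRecordQueueEntry}} instances described below.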
> Memory leak in AsyncDataStream
> ------------------------------
>
> Key: FLINK-5652
> URL: https://issues.apache.org/jira/browse/FLINK-5652
> Project: Flink
> Issue Type: Bug
> Components: DataStream API
> Affects Versions: 1.3.0
> Reporter: Dmitry Golubets
>
> When async operation timeout is > 0, the number of StreamRecordQueueEntry instances keeps growing.
> It can be easily reproduced with the following code:
> {code}
> val src: DataStream[Int] = env.fromCollection((1 to Int.MaxValue).iterator)
>
> val asyncFunction = new AsyncFunction[Int, Int] with Serializable {
>   override def asyncInvoke(input: Int, collector: AsyncCollector[Int]): Unit = {
>     collector.collect(List(input))
>   }
> }
>
> AsyncDataStream.unorderedWait(src, asyncFunction, 1, TimeUnit.MINUTES, 1).print()
> {code}
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)