Posted to issues@flink.apache.org by GitBox <gi...@apache.org> on 2018/11/13 18:31:25 UTC

[GitHub] stevenzwu commented on issue #7020: [FLINK-10774] [Kafka] connection leak when partition discovery is disabled an…

URL: https://github.com/apache/flink/pull/7020#issuecomment-438384937
 
 
   @tillrohrmann 
   
   > shouldn't we close the partitionDiscoverer in the open method in case of a failure. Moreover, we could also close it there in the case if automatic partition discovery is disabled. 
   
   Right now, the if-else check for partition discovery is done in the `run` method, to decide whether we need to close the `partitionDiscoverer` before `runFetchLoop`. I didn't want to change that, unless we also want to move the starting of `discoveryLoopThread` into the `open` method. Is that what you have in mind?
   
   I was thinking of the `cancel` method as analogous to a catch/finally block in Java. It is also the place where we close the `partitionDiscoverer` in the enabled case. I thought it might make sense to ensure the cleanup happens in the `cancel` method for both the disabled and enabled cases.
   
   > in line FlinkKafkaConsumerBase.java:721 fails with an exception? 
   
   Line 721 is for the partition-discovery-enabled case; the `partitionDiscoverer` is closed in the `cancel` method at line 748.
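   
   To make the intent concrete, here is a minimal sketch of the cleanup pattern being discussed. This is not Flink's actual code; the `PartitionDiscoverer` stand-in, field names, and method bodies are illustrative only. The idea is that `close()` is idempotent, so `cancel` can act like a finally block and close the discoverer unconditionally, whether or not `run` already closed it in the discovery-disabled case:
   
   ```java
   // Illustrative sketch only: mimics the cleanup pattern discussed above.
   // "PartitionDiscoverer" here is a hypothetical stand-in, not Flink's class.
   public class ConsumerSketch {
       static class PartitionDiscoverer implements AutoCloseable {
           private boolean closed = false;
   
           boolean isClosed() { return closed; }
   
           @Override
           public void close() {
               // Idempotent: safe to call from both run() and cancel().
               closed = true;
           }
       }
   
       private final PartitionDiscoverer partitionDiscoverer = new PartitionDiscoverer();
       private final boolean discoveryEnabled;
       private volatile boolean running = true;
   
       ConsumerSketch(boolean discoveryEnabled) { this.discoveryEnabled = discoveryEnabled; }
   
       void run() {
           if (!discoveryEnabled) {
               // Discovery disabled: the discoverer was only needed to fetch the
               // initial partitions, so release its connection before the fetch loop.
               partitionDiscoverer.close();
           }
           // ... runFetchLoop() would go here ...
       }
   
       void cancel() {
           running = false;
           // Acts like a finally block: closing again is harmless if run()
           // already closed the discoverer in the disabled case.
           partitionDiscoverer.close();
       }
   
       boolean discovererClosed() { return partitionDiscoverer.isClosed(); }
   
       public static void main(String[] args) {
           ConsumerSketch disabled = new ConsumerSketch(false);
           disabled.run();
           disabled.cancel();
           System.out.println("disabled case closed: " + disabled.discovererClosed());
   
           ConsumerSketch enabled = new ConsumerSketch(true);
           enabled.run();
           enabled.cancel();
           System.out.println("enabled case closed: " + enabled.discovererClosed());
       }
   }
   ```
   
   With this shape, the leak cannot recur through either path: the disabled case closes early in `run`, and `cancel` guarantees closure in every case.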
