Posted to jira@kafka.apache.org by GitBox <gi...@apache.org> on 2022/09/21 13:08:52 UTC

[GitHub] [kafka] dajac commented on a diff in pull request #12590: KAFKA-7109: Close fetch sessions on close of consumer

dajac commented on code in PR #12590:
URL: https://github.com/apache/kafka/pull/12590#discussion_r976472636


##########
clients/src/main/java/org/apache/kafka/clients/FetchSessionHandler.java:
##########
@@ -590,6 +595,14 @@ public boolean handleResponse(FetchResponse response, short version) {
         }
     }
 
+    /**
+     * The client will initiate the session close on the next fetch request.
+     */
+    public void notifyClose() {
+        log.info("Set the metadata for the next fetch request to close the existing session ID={}", nextMetadata.sessionId());
+        nextMetadata = nextMetadata.nextCloseExisting();

Review Comment:
   What would happen if the session handler is reused after this is called? Should we add unit tests in `FetchSessionHandlerTest` to be complete?
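   A rough sketch of what such a test could look like (illustrative only: the test class and method names are made up, and the assertion assumes that `nextCloseExisting()` marks the next request with `FetchMetadata.FINAL_EPOCH`, which is not spelled out in the hunk above):
   
   ```java
   import org.apache.kafka.clients.FetchSessionHandler;
   import org.apache.kafka.common.utils.LogContext;
   import org.junit.jupiter.api.Test;
   
   import static org.apache.kafka.common.requests.FetchMetadata.FINAL_EPOCH;
   import static org.junit.jupiter.api.Assertions.assertEquals;
   
   public class FetchSessionHandlerCloseSketchTest {
       @Test
       public void testNotifyCloseMarksNextRequestAsFinal() {
           // Node id 1 is arbitrary; a fresh handler has no established session yet.
           FetchSessionHandler handler = new FetchSessionHandler(new LogContext(), 1);
   
           // After notifyClose(), the next request built from this handler should carry
           // the final epoch so that the broker closes the session (assumption based on
           // nextCloseExisting() in the diff above).
           handler.notifyClose();
           FetchSessionHandler.FetchRequestData data = handler.newBuilder().build();
           assertEquals(FINAL_EPOCH, data.metadata().epoch());
       }
   }
   ```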



##########
clients/src/main/java/org/apache/kafka/clients/consumer/internals/Fetcher.java:
##########
@@ -1933,11 +1941,79 @@ private Map<String, String> topicPartitionTags(TopicPartition tp) {
         }
     }
 
+    // Visible for testing
+    void maybeCloseFetchSessions(final Timer timer) {
+        final Cluster cluster = metadata.fetch();
+        final List<RequestFuture<ClientResponse>> requestFutures = new ArrayList<>();
+        for (final Map.Entry<Integer, FetchSessionHandler> entry : sessionHandlers.entrySet()) {

Review Comment:
   nit: Could we use `forEach`?
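   Something along these lines, i.e. `Map.forEach` with a `BiConsumer`; just an abbreviated, untested sketch of the loop above:
   
   ```java
   sessionHandlers.forEach((fetchTargetNodeId, sessionHandler) -> {
       sessionHandler.notifyClose();
       final Node fetchTarget = cluster.nodeById(fetchTargetNodeId);
       if (fetchTarget == null || client.isUnavailable(fetchTarget)) {
           log.debug("Skip sending close session request to broker {} since it is not reachable", fetchTarget);
           return; // plays the role of `continue` in the for loop
       }
       // ... build, send and track the close request exactly as in the diff ...
   });
   ```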



##########
clients/src/main/java/org/apache/kafka/clients/consumer/internals/Fetcher.java:
##########
@@ -1933,11 +1941,79 @@ private Map<String, String> topicPartitionTags(TopicPartition tp) {
         }
     }
 
+    // Visible for testing
+    void maybeCloseFetchSessions(final Timer timer) {
+        final Cluster cluster = metadata.fetch();
+        final List<RequestFuture<ClientResponse>> requestFutures = new ArrayList<>();
+        for (final Map.Entry<Integer, FetchSessionHandler> entry : sessionHandlers.entrySet()) {
+            final FetchSessionHandler sessionHandler = entry.getValue();
+            // Set the session handler to notify close. This will set the metadata of the next fetch request to close the existing session.
+            sessionHandler.notifyClose();
+
+            final int sessionId = sessionHandler.sessionId();
+            final Integer fetchTargetNodeId = entry.getKey();
+            // The fetch target node may not be available any more, e.g. because the connection has been
+            // disconnected. In such cases, we skip sending the close request.
+            final Node fetchTarget = cluster.nodeById(fetchTargetNodeId);
+            if (fetchTarget == null || client.isUnavailable(fetchTarget)) {
+                log.debug("Skip sending close session request to broker {} since it is not reachable", fetchTarget);
+                continue;
+            }
+
+            final RequestFuture<ClientResponse> responseFuture = sendFetchRequestToNode(sessionHandler.newBuilder().build(), fetchTarget);
+            responseFuture.addListener(new RequestFutureListener<ClientResponse>() {
+                @Override
+                public void onSuccess(ClientResponse value) {
+                    log.debug("Successfully sent a close message for fetch session: {} to node: {}", sessionId, fetchTarget);
+                }
+
+                @Override
+                public void onFailure(RuntimeException e) {
+                    log.info("Unable to a close message for fetch session: {} to node: {}. " +
+                        "This may result in unnecessary fetch sessions at the broker.", sessionId, fetchTarget, e);
+                }
+            });
+
+            requestFutures.add(responseFuture);
+        }
+
+        // Poll to ensure that request has been written to the socket. Wait until either the timer has expired or until
+        // all requests have received a response.
+        do {
+            client.poll(timer, null, true);
+        } while (timer.notExpired() && !requestFutures.stream().allMatch(RequestFuture::isDone));
+
+        if (!requestFutures.stream().allMatch(RequestFuture::isDone)) {
+            // we ran out of time before completing all futures. It is ok since we don't want to block the shutdown
+            // here.
+            log.warn("All requests couldn't be sent in the specific timeout period {}ms. " +
+                "This may result in unnecessary fetch sessions at the broker. Consider increasing the timeout passed for " +
+                "KafkaConsumer.close(Duration timeout)", timer.timeoutMs());

Review Comment:
   ditto.
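   As an aside on the waiting logic itself: it boils down to "poll until the deadline passes or every request is done". A self-contained sketch of that shape, purely for illustration, with `CompletableFuture` standing in for `RequestFuture` and a sleep standing in for `client.poll`:
   
   ```java
   import java.util.Arrays;
   import java.util.List;
   import java.util.concurrent.CompletableFuture;
   import java.util.concurrent.TimeUnit;
   
   public class BoundedWaitSketch {
   
       public static void main(String[] args) throws InterruptedException {
           // Two fake "in-flight requests": one finishes quickly, one outlives the deadline.
           List<CompletableFuture<Void>> futures = Arrays.asList(
               CompletableFuture.runAsync(() -> sleep(100)),
               CompletableFuture.runAsync(() -> sleep(5_000)));
   
           // Keep "polling" until either the deadline passes or every future is done,
           // mirroring the do/while loop around client.poll(timer, ...) in the hunk above.
           long deadlineMs = System.currentTimeMillis() + 500;
           while (System.currentTimeMillis() < deadlineMs
                   && !futures.stream().allMatch(CompletableFuture::isDone)) {
               TimeUnit.MILLISECONDS.sleep(10);
           }
   
           if (!futures.stream().allMatch(CompletableFuture::isDone)) {
               // Out of time: report and move on instead of blocking shutdown, as the diff does.
               System.out.println("Timed out before all requests completed; continuing anyway.");
           }
       }
   
       private static void sleep(long ms) {
           try {
               Thread.sleep(ms);
           } catch (InterruptedException e) {
               Thread.currentThread().interrupt();
           }
       }
   }
   ```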



##########
clients/src/main/java/org/apache/kafka/clients/consumer/internals/Fetcher.java:
##########
@@ -1933,11 +1941,79 @@ private Map<String, String> topicPartitionTags(TopicPartition tp) {
         }
     }
 
+    // Visible for testing
+    void maybeCloseFetchSessions(final Timer timer) {
+        final Cluster cluster = metadata.fetch();
+        final List<RequestFuture<ClientResponse>> requestFutures = new ArrayList<>();
+        for (final Map.Entry<Integer, FetchSessionHandler> entry : sessionHandlers.entrySet()) {
+            final FetchSessionHandler sessionHandler = entry.getValue();
+            // Set the session handler to notify close. This will set the metadata of the next fetch request to close the existing session.
+            sessionHandler.notifyClose();
+
+            final int sessionId = sessionHandler.sessionId();
+            final Integer fetchTargetNodeId = entry.getKey();
+            // The fetch target node may not be available any more, e.g. because the connection has been
+            // disconnected. In such cases, we skip sending the close request.
+            final Node fetchTarget = cluster.nodeById(fetchTargetNodeId);
+            if (fetchTarget == null || client.isUnavailable(fetchTarget)) {
+                log.debug("Skip sending close session request to broker {} since it is not reachable", fetchTarget);
+                continue;
+            }
+
+            final RequestFuture<ClientResponse> responseFuture = sendFetchRequestToNode(sessionHandler.newBuilder().build(), fetchTarget);
+            responseFuture.addListener(new RequestFutureListener<ClientResponse>() {
+                @Override
+                public void onSuccess(ClientResponse value) {
+                    log.debug("Successfully sent a close message for fetch session: {} to node: {}", sessionId, fetchTarget);
+                }
+
+                @Override
+                public void onFailure(RuntimeException e) {
+                    log.info("Unable to a close message for fetch session: {} to node: {}. " +
+                        "This may result in unnecessary fetch sessions at the broker.", sessionId, fetchTarget, e);

Review Comment:
   I wonder if this is really useful for end users. I mean, it is good to close those sessions, but they will eventually be evicted by the broker, so it is not a catastrophe if they are not. I think the risk is that ordinary users will see this message, won't understand it, and will ask questions.
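   One option, just as a sketch of what I mean rather than a concrete ask, would be to keep the diagnostic but demote it to debug, since the broker will clean up the session on its own eventually (the snippet reuses `sessionId` and `fetchTarget` from the surrounding method above, so it is not standalone):
   
   ```java
   @Override
   public void onFailure(RuntimeException e) {
       // The broker eventually evicts the session anyway, so this is only interesting
       // when debugging fetch session behaviour, not for ordinary users.
       log.debug("Unable to send a close message for fetch session: {} to node: {}. " +
           "This may result in unnecessary fetch sessions at the broker.", sessionId, fetchTarget, e);
   }
   ```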



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscribe@kafka.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org