Posted to dev@kafka.apache.org by "dhiraj prajapati (JIRA)" <ji...@apache.org> on 2017/05/22 12:50:04 UTC

[jira] [Comment Edited] (KAFKA-4477) Node reduces its ISR to itself, and doesn't recover. Other nodes do not take leadership, cluster remains sick until node is restarted.

    [ https://issues.apache.org/jira/browse/KAFKA-4477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16019536#comment-16019536 ] 

dhiraj prajapati edited comment on KAFKA-4477 at 5/22/17 12:49 PM:
-------------------------------------------------------------------

Hi all,
We have a 3-node cluster in our production environment. We recently upgraded Kafka from 0.9.0.1 to 0.10.1.0 and we are seeing a similar issue of intermittent disconnection. We never had this issue in 0.9.0.1.

Is this issue fixed in later versions? I am asking this because I saw a similar thread for version 0.10.2:
https://issues.apache.org/jira/browse/KAFKA-5153

Please assist.


> Node reduces its ISR to itself, and doesn't recover. Other nodes do not take leadership, cluster remains sick until node is restarted.
> --------------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: KAFKA-4477
>                 URL: https://issues.apache.org/jira/browse/KAFKA-4477
>             Project: Kafka
>          Issue Type: Bug
>          Components: core
>    Affects Versions: 0.10.1.0
>         Environment: RHEL7
> java version "1.8.0_66"
> Java(TM) SE Runtime Environment (build 1.8.0_66-b17)
> Java HotSpot(TM) 64-Bit Server VM (build 25.66-b17, mixed mode)
>            Reporter: Michael Andre Pearce
>            Assignee: Apurva Mehta
>            Priority: Critical
>              Labels: reliability
>             Fix For: 0.10.1.1
>
>         Attachments: 2016_12_15.zip, 72_Server_Thread_Dump.txt, 73_Server_Thread_Dump.txt, 74_Server_Thread_Dump, issue_node_1001_ext.log, issue_node_1001.log, issue_node_1002_ext.log, issue_node_1002.log, issue_node_1003_ext.log, issue_node_1003.log, kafka.jstack, server_1_72server.log, server_2_73_server.log, server_3_74Server.log, state_change_controller.tar.gz
>
>
> We have encountered a critical issue that has recurred in different physical environments. We have not worked out what is going on, but we do have a nasty workaround to keep the service alive.
> We have not had this issue on clusters still running 0.9.0.1.
> We have noticed a node randomly shrinking the ISRs for the partitions it leads down to just itself; moments later we see other nodes reporting disconnects, followed finally by application issues, where producing to these partitions is blocked.
> It seems that only restarting the Kafka broker's Java process resolves the issue.
> This has occurred multiple times, and according to all network and machine monitoring the machine never left the network or had any other glitches.
> Below are logs seen during the issue.
> Node 7:
> [2016-12-01 07:01:28,112] INFO Partition [com_ig_trade_v1_position_event--demo--compacted,10] on broker 7: Shrinking ISR for partition [com_ig_trade_v1_position_event--demo--compacted,10] from 1,2,7 to 7 (kafka.cluster.Partition)
> All other nodes:
> [2016-12-01 07:01:38,172] WARN [ReplicaFetcherThread-0-7], Error in fetch kafka.server.ReplicaFetcherThread$FetchRequest@5aae6d42 (kafka.server.ReplicaFetcherThread)
> java.io.IOException: Connection to 7 was disconnected before the response was read
> All clients:
> java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.NetworkException: The server disconnected before a response was received.
> After this occurs, we then suddenly see on the sick machine an increasing number of CLOSE_WAIT connections and open file descriptors.
> As a workaround to keep the service available, we are currently putting in place an automated process that tails the broker log and matches it against the regex below; when new_partitions contains only the node itself, we restart the node (a rough sketch of such a watcher follows the regex).
> "\[(?P<time>.+)\] INFO Partition \[.*\] on broker .* Shrinking ISR for partition \[.*\] from (?P<old_partitions>.+) to (?P<new_partitions>.+) \(kafka.cluster.Partition\)"



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)