Posted to jira@kafka.apache.org by "Igor (Jira)" <ji...@apache.org> on 2019/08/20 13:05:00 UTC

[jira] [Commented] (KAFKA-7248) Kafka creating topic with no leader. Issue started showing up after unkerberizing the cluster

    [ https://issues.apache.org/jira/browse/KAFKA-7248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16911303#comment-16911303 ] 

Igor commented on KAFKA-7248:
-----------------------------

Hello, I ran into the same problem after mass-deleting topics; some of them got stuck as marked for deletion. For now, restarting all brokers fixed it for me.
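
In case it helps anyone else, below is a minimal sketch of the checks I would run to confirm the stuck state before restarting. It assumes the Confluent 2.1 CLI wrappers and a ZooKeeper at localhost:2181; adjust the connect string and topic name for your cluster.
{noformat}
# Sketch only: assumes the Confluent 2.1 CLI wrappers and ZooKeeper on localhost:2181.

# Topics that stay listed here long after deletion are the ones stuck "marked for deletion".
zookeeper-shell localhost:2181 ls /admin/delete_topics

# Check which broker currently holds the controller role (leader election is driven from there).
zookeeper-shell localhost:2181 get /controller

# Confirm the missing leader / empty ISR for an affected topic.
kafka-topics --zookeeper localhost:2181 --describe --topic topic_v1
{noformat}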

confluent-kafka-2.11-2.1.1cp1-1.noarch

Topic count: 578

Partition count: 4250

The authorizer is disabled; my broker config:
{noformat}
auto.create.topics.enable=true
controlled.shutdown.enable=true
default.replication.factor=3
delete.topic.enable=true
inter.broker.protocol.version=2.1
log.flush.interval.messages=20000
log.flush.interval.ms=10000
log.flush.scheduler.interval.ms=2000
log.message.format.version=2.1
log.retention.minutes=10000
min.insync.replicas=2
num.partitions=2
num.recovery.threads.per.data.dir=6
offsets.retention.minutes=10080
unclean.leader.election.enable=false{noformat}
 

Output of --describe for one of the affected topics:

 
{noformat}
Topic:topic_v1 PartitionCount:2 ReplicationFactor:3 Configs:
Topic: topic_v1 Partition: 0 Leader: none Replicas: 1,2,3 Isr:
Topic: topic_v1 Partition: 1 Leader: none Replicas: 2,3,1 Isr:{noformat}
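
An empty Isr together with Leader: none means the controller never completed a leader election for these partitions. As a rough cross-check (the znode layout is the standard Kafka 2.1 tree; the connect string is an assumption), the per-partition state the controller wrote can be read directly from ZooKeeper:
{noformat}
# Assumes ZooKeeper on localhost:2181; path follows the standard Kafka znode layout.
zookeeper-shell localhost:2181 get /brokers/topics/topic_v1/partitions/0/state
# Expected shape when the znode exists but no leader was assigned:
# {"controller_epoch":N,"leader":-1,"version":1,"leader_epoch":0,"isr":[]}
{noformat}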
 

What I found in the logs. Broker 1:

 
{noformat}
[2019-08-20 11:55:01,678] INFO Topic creation Map(topic_v1-1 -> ArrayBuffer(2, 3, 1), topic_v1-0 -> ArrayBuffer(1, 2, 3)) (kafka.zk.AdminZkClient)
[2019-08-20 11:55:01,680] INFO [KafkaApi-1] Auto creation of topic topic_v1 with 2 partitions and replication factor 3 is successful (kafka.server.KafkaApis)
[2019-08-20 12:06:26,114] INFO [Admin Manager on Broker 1]: Error processing create topic request for topic topic_v1 with arguments (numPartitions=15, replicationFactor=1, replicasAssignments={}, configs={}) (kafka.server.AdminManager)
org.apache.kafka.common.errors.TopicExistsException: Topic 'topic_v1' already exists.
[2019-08-20 12:13:31,640] INFO [Admin Manager on Broker 1]: Error processing create topic request for topic topic_v1 with arguments (numPartitions=15, replicationFactor=1, replicasAssignments={}, configs={}) (kafka.server.AdminManager)
org.apache.kafka.common.errors.TopicExistsException: Topic 'topic_v1' already exists.{noformat}
Broker 2:

 
{noformat}
[2019-08-20 12:51:33,169] INFO [ReplicaFetcherManager on broker 2] Removed fetcher for partitions Set(...topic_v1-0,topic_v1-1,...
[2019-08-20 12:51:33,288] INFO [Log partition=topic_v1-0, dir=/app/kafka/log] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2019-08-20 12:51:33,288] INFO [Log partition=topic_v1-0, dir=/app/kafka/log] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 0 ms (kafka.log.Log)
[2019-08-20 12:51:33,288] INFO Created log for partition topic_v1-0 in /app/kafka/log with properties {compression.type -> producer, message.format.version -> 2.1-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, message.downconversion.enable -> true, min.insync.replicas -> 2, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> [delete], flush.ms -> 10000, segment.ms -> 604800000, segment.bytes -> 1073741824, retention.ms -> 600000000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 20000}. (kafka.log.LogManager)
[2019-08-20 12:51:33,292] INFO [Partition topic_v1-0 broker=2] No checkpointed highwatermark is found for partition topic_v1-0 (kafka.cluster.Partition)
[2019-08-20 12:51:33,292] INFO Replica loaded for partition topic_v1-0 with initial high watermark 0 (kafka.cluster.Replica)
[2019-08-20 12:51:33,292] INFO Replica loaded for partition topic_v1-0 with initial high watermark 0 (kafka.cluster.Replica)
[2019-08-20 12:51:33,292] INFO Replica loaded for partition topic_v1-0 with initial high watermark 0 (kafka.cluster.Replica)
[2019-08-20 12:51:33,292] INFO [Partition topic_v1-0 broker=2] topic_v1-0 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2019-08-20 12:51:33,458] INFO [Log partition=topic_v1-1, dir=/app/kafka/log] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2019-08-20 12:51:33,459] INFO [Log partition=topic_v1-1, dir=/app/kafka/log] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 1 ms (kafka.log.Log)
[2019-08-20 12:51:33,459] INFO Created log for partition topic_v1-1 in /app/kafka/log with properties {compression.type -> producer, message.format.version -> 2.1-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, message.downconversion.enable -> true, min.insync.replicas -> 2, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> [delete], flush.ms -> 10000, segment.ms -> 604800000, segment.bytes -> 1073741824, retention.ms -> 600000000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 20000}. (kafka.log.LogManager)
[2019-08-20 12:51:33,462] INFO [Partition topic_v1-1 broker=2] No checkpointed highwatermark is found for partition topic_v1-1 (kafka.cluster.Partition)
[2019-08-20 12:51:33,462] INFO Replica loaded for partition topic_v1-1 with initial high watermark 0 (kafka.cluster.Replica)
[2019-08-20 12:51:33,462] INFO Replica loaded for partition topic_v1-1 with initial high watermark 0 (kafka.cluster.Replica)
[2019-08-20 12:51:33,462] INFO Replica loaded for partition topic_v1-1 with initial high watermark 0 (kafka.cluster.Replica)
[2019-08-20 12:51:33,462] INFO [Partition topic_v1-1 broker=2] topic_v1-1 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition){noformat}
Broker 3:
{noformat}
/var/log/kafka/server.log:[2019-08-20 12:51:39,305] INFO Replica loaded for partition topic_v1-0 with initial high watermark 0 (kafka.cluster.Replica)
/var/log/kafka/server.log:[2019-08-20 12:51:39,305] INFO Replica loaded for partition topic_v1-0 with initial high watermark 0 (kafka.cluster.Replica)
/var/log/kafka/server.log:[2019-08-20 12:51:39,306] INFO [Log partition=topic_v1-0, dir=/app/kafka/log] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
/var/log/kafka/server.log:[2019-08-20 12:51:39,306] INFO [Log partition=topic_v1-0, dir=/app/kafka/log] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 0 ms (kafka.log.Log)
/var/log/kafka/server.log:[2019-08-20 12:51:39,307] INFO Created log for partition topic_v1-0 in /app/kafka/log with properties {compression.type -> producer, message.format.version -> 2.1-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, message.downconversion.enable -> true, min.insync.replicas -> 2, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> [delete], flush.ms -> 10000, segment.ms -> 604800000, segment.bytes -> 1073741824, retention.ms -> 600000000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 20000}. (kafka.log.LogManager)
/var/log/kafka/server.log:[2019-08-20 12:51:39,310] INFO [Partition topic_v1-0 broker=3] No checkpointed highwatermark is found for partition topic_v1-0 (kafka.cluster.Partition)
/var/log/kafka/server.log:[2019-08-20 12:51:39,310] INFO Replica loaded for partition topic_v1-0 with initial high watermark 0 (kafka.cluster.Replica)
/var/log/kafka/server.log:[2019-08-20 12:51:39,330] INFO Replica loaded for partition topic_v1-1 with initial high watermark 0 (kafka.cluster.Replica)
/var/log/kafka/server.log:[2019-08-20 12:51:39,330] INFO [Log partition=topic_v1-1, dir=/app/kafka/log] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
/var/log/kafka/server.log:[2019-08-20 12:51:39,330] INFO [Log partition=topic_v1-1, dir=/app/kafka/log] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 0 ms (kafka.log.Log)
/var/log/kafka/server.log:[2019-08-20 12:51:39,331] INFO Created log for partition topic_v1-1 in /app/kafka/log with properties {compression.type -> producer, message.format.version -> 2.1-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, message.downconversion.enable -> true, min.insync.replicas -> 2, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> [delete], flush.ms -> 10000, segment.ms -> 604800000, segment.bytes -> 1073741824, retention.ms -> 600000000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 20000}. (kafka.log.LogManager)
/var/log/kafka/server.log:[2019-08-20 12:51:39,334] INFO [Partition topic_v1-1 broker=3] No checkpointed highwatermark is found for partition topic_v1-1 (kafka.cluster.Partition)
/var/log/kafka/server.log:[2019-08-20 12:51:39,334] INFO Replica loaded for partition topic_v1-1 with initial high watermark 0 (kafka.cluster.Replica)
/var/log/kafka/server.log:[2019-08-20 12:51:39,334] INFO Replica loaded for partition topic_v1-1 with initial high watermark 0 (kafka.cluster.Replica)
/var/log/kafka/server.log:[2019-08-20 12:51:39,383] INFO [ReplicaFetcherManager on broker 3] Removed fetcher for partitions Set(...
/var/log/kafka/server.log:[2019-08-20 12:51:39,644] INFO [ReplicaFetcher replicaId=3, leaderId=2, fetcherId=0] Truncating partition topic_v1-0 to local high watermark 0 (kafka.server.ReplicaFetcherThread)
/var/log/kafka/server.log:[2019-08-20 12:51:39,644] INFO [Log partition=topic_v1-0, dir=/app/kafka/log] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.Log)
/var/log/kafka/server.log:[2019-08-20 12:51:39,650] INFO [ReplicaFetcher replicaId=3, leaderId=2, fetcherId=0] Truncating partition topic_v1-1 to local high watermark 0 (kafka.server.ReplicaFetcherThread)
/var/log/kafka/server.log:[2019-08-20 12:51:39,650] INFO [Log partition=topic_v1-1, dir=/app/kafka/log] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.Log){noformat}
 

 

> Kafka creating topic with no leader. Issue started showing up after unkerberizing the cluster
> ---------------------------------------------------------------------------------------------
>
>                 Key: KAFKA-7248
>                 URL: https://issues.apache.org/jira/browse/KAFKA-7248
>             Project: Kafka
>          Issue Type: Bug
>          Components: clients, core
>    Affects Versions: 1.0.0
>         Environment: Azure Cluster with HDP : 2.6.5 & HDF 3.1
>            Reporter: Umesh Bansod
>            Priority: Critical
>
> Kafka fails to assign a leader to topics created by Atlas or manually (from the command line).
> This started happening after disabling Kerberos on the cluster.
> I have tried:
>  * Cleaning ZooKeeper
>  * Cleaning the Kafka metadata
>  * Syncing the config with all node clients
> *Kafka Version*: 1.0.0.2
> *HDP version*: 2.6.5
> *Kafka Cli:*
> --describe --topic ATLAS_HOOK
> Topic:ATLAS_HOOK PartitionCount:1 ReplicationFactor:3 Configs: MarkedForDeletion:true
>  Topic: ATLAS_HOOK Partition: 0 Leader: *none* Replicas: 1001,1002,1003 Isr:
> */var/log/kafka/server/log:*
> TRACE [Kafka Request Handler 5 on Broker 1002], Kafka request handler 5 on broker 1002 handling request Request(processor=0, connectionId=10.165.132.8:6667-10.165.132.4:39378-0, session=Session(User:ANONYMOUS,/10.165.132.4), listenerName=ListenerName(PLAINTEXT), securityProtocol=PLAINTEXT, buffer=null) (kafka.server.KafkaRequestHandler)
> [2018-08-05 13:47:51,209] TRACE [KafkaApi-1002] Handling request:RequestHeader(apiKey=METADATA, apiVersion=5, clientId=producer-1, correlationId=34390) -- \{topics=[ATLAS_HOOK],allow_auto_topic_creation=true} from connection 10.165.132.8:6667-10.165.132.4:39378-0;securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (kafka.server.KafkaApis)
>  
> *zookeeper error:* 
> INFO [main-SendThread(localhost:2181):ClientCnxn$SendThread@1142] - Unable to read additional data from server sessionid 0x36505af3670006a, likely server has closed socket, closing socket connection and attempting reconnect
> WATCHER::
> WatchedEvent state:Disconnected type:None path:null
> 2018-08-05 14:33:19,172 - INFO [main-SendThread(localhost:2181):ClientCnxn$SendThread@1019] - Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
> 2018-08-05 14:33:19,174 - INFO [main-SendThread(localhost:2181):ClientCnxn$SendThread@864] - Socket connection established, initiating session, client: /127.0.0.1:57774, server: localhost/127.0.0.1:2181
> 2018-08-05 14:33:20,027 - INFO [main-SendThread(localhost:2181):ClientCnxn$SendThread@1142] - Unable to read additional data from server sessionid 0x36505af3670006a, likely server has closed socket, closing socket connection and attempting reconnect
> 2018-08-05 14:33:21,457 - INFO [main-SendThread(localhost:2181):ClientCnxn$SendThread@1019] - Opening socket connection to server localhost/0:0:0:0:0:0:0:1:2181. Will not attempt to authenticate using SASL (unknown error)


