Posted to dev@kafka.apache.org by "Pengwei (JIRA)" <ji...@apache.org> on 2017/02/23 14:44:44 UTC

[jira] [Commented] (KAFKA-4790) Kafka can't recover after a full disk

    [ https://issues.apache.org/jira/browse/KAFKA-4790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15880567#comment-15880567 ] 

Pengwei commented on KAFKA-4790:
--------------------------------

The maximum index file size is set to roughly 1 MB:
log.index.size.max.bytes=1024000
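Each offset index entry is 8 bytes (a 4-byte relative offset plus a 4-byte file position), so this setting allows at most 1024000 / 8 = 128000 entries, which matches the "full index (size = 128000)" in the error below.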


The cause I found is as follows:

1. The producer batches a large number of messages per write (for example 512 KB per batch), so every write is larger than 4 KB (log.index.interval.bytes, the index write interval) but adds at most one index entry. A full segment therefore ends up with relatively few index entries, for example about 2050.

2. At the same time the disk becomes full, and the broker dies before the recovery point is flushed to disk, so the segment has to be recovered on restart.

3. When Kafka restarts, the recovery function re-checks every message and re-appends index entries, this time writing one entry for roughly every 4 KB of messages. The rebuilt index therefore needs far more than the original 2050 entries and can exceed the index file's maximum number of entries, which fails recovery with the "Attempt to append to a full index" error below (the sketch after this list walks through the numbers).
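
A rough back-of-the-envelope sketch of the arithmetic. Only the 1 MB index size, the ~512 KB batch size and the 4 KB index interval come from the description above; the 1 GB segment size is an assumed default and the names are for illustration only:

    // Illustrative entry counts; the 1 GB segment size is an assumed default,
    // everything else is taken from the description above.
    object IndexOverflowSketch {
      val indexMaxBytes = 1024000                          // log.index.size.max.bytes
      val entryBytes    = 8                                 // OffsetIndex entry: 4-byte offset + 4-byte position
      val maxEntries    = indexMaxBytes / entryBytes        // 128000, the "size" in the error below

      val segmentBytes  = 1024L * 1024 * 1024               // assumed default log.segment.bytes (1 GB)
      val batchBytes    = 512 * 1024                        // example producer batch size
      val indexInterval = 4096                              // log.index.interval.bytes

      // Normal appends add at most one index entry per (large) batch write.
      val entriesWhileAppending = segmentBytes / batchBytes     // ~2048, the "2050" above
      // Recovery re-adds one entry for roughly every 4 KB of messages.
      val entriesDuringRecovery = segmentBytes / indexInterval  // ~262144 > 128000 => "full index"

      def main(args: Array[String]): Unit =
        println(s"max=$maxEntries append=$entriesWhileAppending recovery=$entriesDuringRecovery")
    }

So a segment that fit comfortably in the index while it was being appended to can no longer be re-indexed within the same 1 MB limit when recovery walks it message by message.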

> Kafka can't recover after a full disk
> --------------------------------------
>
>                 Key: KAFKA-4790
>                 URL: https://issues.apache.org/jira/browse/KAFKA-4790
>             Project: Kafka
>          Issue Type: Bug
>    Affects Versions: 0.9.0.1, 0.10.1.1
>            Reporter: Pengwei
>              Labels: reliability
>             Fix For: 0.10.2.1
>
>
> [2017-02-23 18:43:57,736] INFO zookeeper state changed (SyncConnected) (org.I0Itec.zkclient.ZkClient)
> [2017-02-23 18:43:57,887] INFO Loading logs. (kafka.log.LogManager)
> [2017-02-23 18:43:57,935] INFO Recovering unflushed segment 0 in log test1-0. (kafka.log.Log)
> [2017-02-23 18:43:59,297] ERROR There was an error in one of the threads during logs loading: java.lang.IllegalArgumentException: requirement failed: Attempt to append to a full index (size = 128000). (kafka.log.LogManager)
> [2017-02-23 18:43:59,299] FATAL Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
> java.lang.IllegalArgumentException: requirement failed: Attempt to append to a full index (size = 128000).
> 	at scala.Predef$.require(Predef.scala:219)
> 	at kafka.log.OffsetIndex$$anonfun$append$1.apply$mcV$sp(OffsetIndex.scala:200)
> 	at kafka.log.OffsetIndex$$anonfun$append$1.apply(OffsetIndex.scala:199)
> 	at kafka.log.OffsetIndex$$anonfun$append$1.apply(OffsetIndex.scala:199)
> 	at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262)
> 	at kafka.log.OffsetIndex.append(OffsetIndex.scala:199)
> 	at kafka.log.LogSegment.recover(LogSegment.scala:191)
> 	at kafka.log.Log.recoverLog(Log.scala:259)
> 	at kafka.log.Log.loadSegments(Log.scala:234)
> 	at kafka.log.Log.<init>(Log.scala:92)
> 	at kafka.log.LogManager$$anonfun$loadLogs$2$$anonfun$4$$anonfun$apply$10$$anonfun$apply$1.apply$mcV$sp(LogManager.scala:201)
> 	at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:60)
> 	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> 	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> 	at java.lang.Thread.run(Thread.java:745)


