Posted to jira@kafka.apache.org by "Nenad Maric (JIRA)" <ji...@apache.org> on 2018/11/08 14:56:00 UTC

[jira] [Commented] (KAFKA-4972) Kafka 0.10.0 Found a corrupted index file during Kafka broker startup

    [ https://issues.apache.org/jira/browse/KAFKA-4972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16679867#comment-16679867 ] 

Nenad Maric commented on KAFKA-4972:
------------------------------------

Any news about this bug?
We have a similar problem on Kafka 1.1.0. Here is the log output:
{code:java}
[2018-11-08 10:45:04,471] WARN [Log partition=<PRODUCTION-TOPIC-21>, dir=/data] Found a corrupted index file corresponding to log file /data/<PRODUCTION-TOPIC-21>/00000000000006723263.log due to Corrupt index found, index file (/data/<PRODUCTION-TOPIC-21>/00000000000006723263.index) has non-zero size but the last offset is 6723263 which is no greater than the base offset 6723263.}, recovering segment and rebuilding index files... (kafka.log.Log){code}
{code:java}
[2018-11-08 10:46:28,351] ERROR There was an error in one of the threads during logs loading: java.lang.IllegalArgumentException: inconsistent range (kafka.log.LogManager)
[2018-11-08 10:46:28,356] ERROR [KafkaServer id=4] Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
java.lang.IllegalArgumentException: inconsistent range
 at java.util.concurrent.ConcurrentSkipListMap$SubMap.<init>(ConcurrentSkipListMap.java:2620)
 at java.util.concurrent.ConcurrentSkipListMap.subMap(ConcurrentSkipListMap.java:2078)
 at java.util.concurrent.ConcurrentSkipListMap.subMap(ConcurrentSkipListMap.java:2114)
 at kafka.log.Log$$anonfun$12.apply(Log.scala:1561)
 at kafka.log.Log$$anonfun$12.apply(Log.scala:1560)
 at scala.Option.map(Option.scala:146)
 at kafka.log.Log.logSegments(Log.scala:1560)
 at kafka.log.Log.kafka$log$Log$$recoverSegment(Log.scala:358)
 at kafka.log.Log$$anonfun$completeSwapOperations$1.apply(Log.scala:389)
 at kafka.log.Log$$anonfun$completeSwapOperations$1.apply(Log.scala:380)
 at scala.collection.immutable.Set$Set1.foreach(Set.scala:94)
 at kafka.log.Log.completeSwapOperations(Log.scala:380)
 at kafka.log.Log.loadSegments(Log.scala:408)
 at kafka.log.Log.<init>(Log.scala:216)
 at kafka.log.Log$.apply(Log.scala:1747)
 at kafka.log.LogManager.kafka$log$LogManager$$loadLog(LogManager.scala:255)
 at kafka.log.LogManager$$anonfun$loadLogs$2$$anonfun$11$$anonfun$apply$15$$anonfun$apply$2.apply$mcV$sp(LogManager.scala:335)
 at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:62)
 at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
 at java.util.concurrent.FutureTask.run(FutureTask.java:266)
 at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
 at java.lang.Thread.run(Thread.java:748)
[2018-11-08 10:46:28,402] INFO [KafkaServer id=4] shutting down (kafka.server.KafkaServer){code}
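The fatal `inconsistent range` error in the trace above is thrown by the JDK itself: `ConcurrentSkipListMap.subMap(fromKey, toKey)` rejects any range whose lower bound compares greater than its upper bound. During recovery, `kafka.log.Log.logSegments` builds such a submap over segment base offsets, so a corrupted index that leads recovery to compute an inverted offset range surfaces as exactly this exception. A minimal sketch reproducing the JDK behavior (the offset values are illustrative, not taken from the broker):

```java
import java.util.concurrent.ConcurrentSkipListMap;

public class SubMapRangeDemo {
    public static void main(String[] args) {
        // Models the broker's map of base offset -> segment (values are placeholders).
        ConcurrentSkipListMap<Long, String> segments = new ConcurrentSkipListMap<>();
        segments.put(6723263L, "00000000000006723263.log");

        // subMap(from, to) requires from <= to; an inverted range triggers
        // the same IllegalArgumentException seen during startup.
        try {
            segments.subMap(6723264L, 6723263L); // from > to
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage()); // prints "inconsistent range"
        }
    }
}
```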

> Kafka 0.10.0  Found a corrupted index file during Kafka broker startup
> ----------------------------------------------------------------------
>
>                 Key: KAFKA-4972
>                 URL: https://issues.apache.org/jira/browse/KAFKA-4972
>             Project: Kafka
>          Issue Type: Bug
>          Components: log
>    Affects Versions: 0.10.0.0
>         Environment: JDK: HotSpot  x64  1.7.0_80
> Tag: 0.10.0
>            Reporter: fangjinuo
>            Priority: Critical
>              Labels: reliability
>         Attachments: Snap3.png
>
>
> After force-shutting down all Kafka brokers one by one and then restarting them one by one, one broker failed to start.
> The following WARN-level log was found in the log file:
> found a corrupted index file, xxxx.index, delete it ...
> You can view details in the attachment.
> I looked at some code in the core module and found that the non-thread-safe method LogSegment.append(offset, messages) has two callers:
> 1) Log.append(messages)                          // holds a synchronized lock
> 2) LogCleaner.cleanInto(topicAndPartition, source, dest, map, retainDeletes, messageFormatVersion)   // does not
> So I guess this may be the reason for the repeated offsets in the 00000xx.log file (the log segment's .log file).
> Although this is just my inference, I hope this problem can be fixed quickly.
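The calling pattern the reporter describes can be sketched in Java. This is a hypothetical stand-in, not Kafka's actual code: `append` models the non-thread-safe `LogSegment.append`, `appendFromLog` models the lock-holding `Log.append` path, and `appendFromCleaner` models the `LogCleaner.cleanInto` path. The point is that a `synchronized` block in only one of two callers provides no mutual exclusion between them, so their writes can interleave.

```java
import java.util.ArrayList;
import java.util.List;

public class UnsafeAppendSketch {
    // Stands in for the segment's index/offset entries (hypothetical).
    private final List<Long> offsets = new ArrayList<>();
    private final Object lock = new Object();

    // Not thread-safe on its own (models LogSegment.append).
    private void append(long offset) {
        offsets.add(offset);
    }

    // Caller 1 (models Log.append): takes the lock before appending.
    public void appendFromLog(long offset) {
        synchronized (lock) {
            append(offset);
        }
    }

    // Caller 2 (models LogCleaner.cleanInto): appends WITHOUT the lock,
    // so it can run concurrently with appendFromLog and interleave writes.
    public void appendFromCleaner(long offset) {
        append(offset);
    }

    public int size() {
        return offsets.size();
    }
}
```

Locking only one caller means the JVM never serializes the two paths against each other; whether that is the actual root cause here is, as the reporter says, an inference.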



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)