Posted to jira@kafka.apache.org by "huxihx (JIRA)" <ji...@apache.org> on 2018/07/09 02:37:00 UTC

[jira] [Commented] (KAFKA-7078) Kafka 1.0.1 Broker version crashes when deleting log

    [ https://issues.apache.org/jira/browse/KAFKA-7078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16536498#comment-16536498 ] 

huxihx commented on KAFKA-7078:
-------------------------------

[~zxj1009] Did you change the cleanup policy for topic `__consumer_offsets`?
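
To double-check, something like the sketch below prints the effective cleanup.policy for the topic (Java AdminClient; the broker address is an assumption, and `kafka-configs.sh --describe` on the topic shows the same information):
{code}
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.Config;
import org.apache.kafka.common.config.ConfigResource;

public class CheckCleanupPolicy {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Assumption: adjust to your bootstrap servers.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            ConfigResource topic =
                new ConfigResource(ConfigResource.Type.TOPIC, "__consumer_offsets");
            Config config = admin.describeConfigs(Collections.singleton(topic))
                                 .all().get().get(topic);
            // __consumer_offsets is expected to report "compact" here.
            System.out.println("cleanup.policy = " + config.get("cleanup.policy").value());
        }
    }
}
{code}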

> Kafka 1.0.1 Broker version crashes when deleting log
> ----------------------------------------------------
>
>                 Key: KAFKA-7078
>                 URL: https://issues.apache.org/jira/browse/KAFKA-7078
>             Project: Kafka
>          Issue Type: Bug
>    Affects Versions: 1.0.1
>            Reporter: xiaojing zhou
>            Priority: Critical
>
> Hello,
> We have been running Kafka 1.0.1 on CentOS for 3 months. Today Kafka crashed. When we checked the server.log and log-cleaner.log files, we found the following entries.
> server.log
> {code}
> [2018-06-11 00:04:12,349] INFO Rolled new log segment for '__consumer_offsets-7' in 205 ms. (kafka.log.Log)
> [2018-06-11 00:04:23,282] ERROR Failed to clean up log for __consumer_offsets-7 in dir /nas/kafka_logs/lvsp01hkf001 due to IOException (kafka.server.LogDirFailureChannel)
> java.nio.file.NoSuchFileException: /nas/kafka_logs/lvsp01hkf001/__consumer_offsets-7/00000000019668089841.log
>  at sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
>  at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
>  at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
>  at sun.nio.fs.UnixCopyFile.move(UnixCopyFile.java:409)
>  at sun.nio.fs.UnixFileSystemProvider.move(UnixFileSystemProvider.java:262)
>  at java.nio.file.Files.move(Files.java:1395)
>  at org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:682)
>  at org.apache.kafka.common.record.FileRecords.renameTo(FileRecords.java:212)
>  at kafka.log.LogSegment.changeFileSuffixes(LogSegment.scala:398)
>  at kafka.log.Log.kafka$log$Log$$asyncDeleteSegment(Log.scala:1592)
> --
> --
>  at kafka.log.Log.replaceSegments(Log.scala:1639)
>  at kafka.log.Cleaner.cleanSegments(LogCleaner.scala:485)
>  at kafka.log.Cleaner$$anonfun$doClean$4.apply(LogCleaner.scala:396)
>  at kafka.log.Cleaner$$anonfun$doClean$4.apply(LogCleaner.scala:395)
>  at scala.collection.immutable.List.foreach(List.scala:392)
>  at kafka.log.Cleaner.doClean(LogCleaner.scala:395)
>  at kafka.log.Cleaner.clean(LogCleaner.scala:372)
>  at kafka.log.LogCleaner$CleanerThread.cleanOrSleep(LogCleaner.scala:263)
>  at kafka.log.LogCleaner$CleanerThread.doWork(LogCleaner.scala:243)
>  at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:64)
>  Suppressed: java.nio.file.NoSuchFileException: /nas/kafka_logs/lvsp01hkf001/__consumer_offsets-7/00000000019668089841.log -> /nas/kafka_logs/lvsp01hkf001/__consumer_offsets-7/00000000019668089841.log.deleted
>  at sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
>  at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
>  at sun.nio.fs.UnixCopyFile.move(UnixCopyFile.java:396)
>  at sun.nio.fs.UnixFileSystemProvider.move(UnixFileSystemProvider.java:262)
>  at java.nio.file.Files.move(Files.java:1395)
>  at org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:679)
>  ... 16 more
> [2018-06-11 00:04:23,338] INFO [ReplicaManager broker=0] Stopping serving replicas in dir /nas/kafka_logs/lvsp01hkf001 (kafka.server.ReplicaManager)
> {code}
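>
> For context, the rename that fails above follows an atomic-move-with-fallback pattern; the simplified sketch below is only an illustration of that pattern (not the actual Kafka source) and shows why the log carries a primary NoSuchFileException plus a suppressed one when the non-atomic retry fails as well:
> {code}
> import java.io.IOException;
> import java.nio.file.Files;
> import java.nio.file.Path;
> import java.nio.file.Paths;
> import java.nio.file.StandardCopyOption;
>
> public class AtomicMoveSketch {
>     // Try an atomic rename first; if that fails, retry with a plain move and
>     // attach the first failure as a suppressed exception.
>     static void atomicMoveWithFallback(Path source, Path target) throws IOException {
>         try {
>             Files.move(source, target, StandardCopyOption.ATOMIC_MOVE);
>         } catch (IOException outer) {
>             try {
>                 Files.move(source, target, StandardCopyOption.REPLACE_EXISTING);
>             } catch (IOException inner) {
>                 inner.addSuppressed(outer);
>                 throw inner;
>             }
>         }
>     }
>
>     public static void main(String[] args) {
>         try {
>             // Hypothetical paths mirroring the .log -> .log.deleted rename in the
>             // trace; neither file exists here, so both attempts fail.
>             atomicMoveWithFallback(
>                 Paths.get("00000000019668089841.log"),
>                 Paths.get("00000000019668089841.log.deleted"));
>         } catch (IOException e) {
>             System.out.println("primary: " + e);
>             for (Throwable s : e.getSuppressed()) {
>                 System.out.println("suppressed: " + s);
>             }
>         }
>     }
> }
> {code}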
>  
> log-cleaner.log
> {code}
> [2018-06-11 00:04:21,677] INFO Cleaner 0: Beginning cleaning of log __consumer_offsets-7. (kafka.log.LogCleaner)
> [2018-06-11 00:04:21,677] INFO Cleaner 0: Building offset map for __consumer_offsets-7... (kafka.log.LogCleaner)
> [2018-06-11 00:04:21,722] INFO Cleaner 0: Building offset map for log __consumer_offsets-7 for 1 segments in offset range [23914565941, 23915674371). (kafka.log.LogCleaner)
> [2018-06-11 00:04:23,212] INFO Cleaner 0: Offset map for log __consumer_offsets-7 complete. (kafka.log.LogCleaner)
> [2018-06-11 00:04:23,212] INFO Cleaner 0: Cleaning log __consumer_offsets-7 (cleaning prior to Mon Jun 11 00:04:12 UTC 2018, discarding tombstones prior to Sat Jun 09 23:17:35 UTC 2018)... (kafka.log.LogCleaner)
> [2018-06-11 00:04:23,216] INFO Cleaner 0: Cleaning segment 19668089841 in log __consumer_offsets-7 (largest timestamp Thu Jan 01 00:00:00 UTC 1970) into 19668089841, discarding deletes. (kafka.log.LogCleaner)
> [2018-06-11 00:04:23,220] INFO Cleaner 0: Swapping in cleaned segment 19668089841 for segment(s) 19668089841 in log __consumer_offsets-7. (kafka.log.LogCleaner)
> [2018-06-11 00:04:23,343] INFO Cleaner 0: Beginning cleaning of log __consumer_offsets-7. (kafka.log.LogCleaner)
> [2018-06-11 00:04:23,343] INFO Cleaner 0: Building offset map for __consumer_offsets-7... (kafka.log.LogCleaner)
> [2018-06-11 00:04:23,388] INFO Cleaner 0: Building offset map for log __consumer_offsets-7 for 1 segments in offset range [23914565941, 23915674371). (kafka.log.LogCleaner)
> {code}
>  
> Our log files are stored on an NAS volume. I checked /nas/kafka_logs/lvsp01hkf001/__consumer_offsets-7/00000000019668089841.log and the file does exist, so I am not sure why Kafka throws NoSuchFileException.
> Does anyone know what the issue might be?
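>
> For what it's worth, NoSuchFileException reflects the state of the path at the instant of the rename, not when the directory is listed afterwards. The contrived, Kafka-independent snippet below (hypothetical temp file names) reproduces the same exception when the source disappears just before the move:
> {code}
> import java.io.IOException;
> import java.nio.file.Files;
> import java.nio.file.NoSuchFileException;
> import java.nio.file.Path;
> import java.nio.file.Paths;
>
> public class RenameRaceDemo {
>     public static void main(String[] args) throws IOException {
>         // Hypothetical file, created only to be removed again before the rename.
>         Path source = Files.createTempFile("segment", ".log");
>         Path target = Paths.get(source.toString() + ".deleted");
>
>         // Simulate the file vanishing between the cleaner's swap and the rename.
>         Files.delete(source);
>
>         try {
>             Files.move(source, target);
>         } catch (NoSuchFileException e) {
>             // The rename fails because the source was gone at this exact moment.
>             System.out.println("rename failed: " + e.getMessage());
>         }
>     }
> }
> {code}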


