Posted to dev@kafka.apache.org by "Dhruvil Shah (JIRA)" <ji...@apache.org> on 2018/06/19 22:28:00 UTC
[jira] [Resolved] (KAFKA-6881) Kafka 1.1 Broker version crashes when deleting log
[ https://issues.apache.org/jira/browse/KAFKA-6881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Dhruvil Shah resolved KAFKA-6881.
---------------------------------
Resolution: Not A Bug
Closing this JIRA because /tmp was being used as the log directory. Files under /tmp can be removed by the operating system's periodic cleanup, so the log cleaner fails when it tries to rename segment files that no longer exist; Kafka then marks the log directory as failed and shuts down the broker.
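The fix is to point Kafka at a persistent log directory instead of /tmp. A minimal sketch of the relevant broker setting (the path /var/lib/kafka/logs is an illustrative choice, not a required one):

```
# server.properties
# Use a persistent directory that the OS will not clean up automatically.
# /tmp is subject to periodic cleanup (e.g. systemd-tmpfiles or tmpwatch),
# which can delete segment files out from under a running broker.
log.dirs=/var/lib/kafka/logs
```

After changing log.dirs, the broker must be restarted; existing data under /tmp/kafka-logs is not migrated automatically.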
> Kafka 1.1 Broker version crashes when deleting log
> --------------------------------------------------
>
> Key: KAFKA-6881
> URL: https://issues.apache.org/jira/browse/KAFKA-6881
> Project: Kafka
> Issue Type: Bug
> Environment: Linux
> Reporter: K B Parthasarathy
> Priority: Critical
>
> Hello
> We have been running Kafka 1.1 on Linux for the past 3 weeks. Today Kafka crashed. When we checked the server.log file, we found the following:
> [2018-05-07 16:53:06,721] ERROR Failed to clean up log for __consumer_offsets-24 in dir /tmp/kafka-logs due to IOException (kafka.server.LogDirFailureChannel)
> java.nio.file.NoSuchFileException: /tmp/kafka-logs/__consumer_offsets-24/00000000000000000000.log
> at sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
> at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
> at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
> at sun.nio.fs.UnixCopyFile.move(UnixCopyFile.java:409)
> at sun.nio.fs.UnixFileSystemProvider.move(UnixFileSystemProvider.java:262)
> at java.nio.file.Files.move(Files.java:1395)
> at org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:697)
> at org.apache.kafka.common.record.FileRecords.renameTo(FileRecords.java:212)
> at kafka.log.LogSegment.changeFileSuffixes(LogSegment.scala:415)
> at kafka.log.Log.asyncDeleteSegment(Log.scala:1601)
> at kafka.log.Log.$anonfun$replaceSegments$1(Log.scala:1653)
> at kafka.log.Log.$anonfun$replaceSegments$1$adapted(Log.scala:1648)
> at scala.collection.immutable.List.foreach(List.scala:389)
> at kafka.log.Log.replaceSegments(Log.scala:1648)
> at kafka.log.Cleaner.cleanSegments(LogCleaner.scala:535)
> at kafka.log.Cleaner.$anonfun$doClean$6(LogCleaner.scala:462)
> at kafka.log.Cleaner.$anonfun$doClean$6$adapted(LogCleaner.scala:461)
> at scala.collection.immutable.List.foreach(List.scala:389)
> at kafka.log.Cleaner.doClean(LogCleaner.scala:461)
> at kafka.log.Cleaner.clean(LogCleaner.scala:438)
> at kafka.log.LogCleaner$CleanerThread.cleanOrSleep(LogCleaner.scala:305)
> at kafka.log.LogCleaner$CleanerThread.doWork(LogCleaner.scala:291)
> at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:82)
> Suppressed: java.nio.file.NoSuchFileException: /tmp/kafka-logs/__consumer_offsets-24/00000000000000000000.log -> /tmp/kafka-logs/__consumer_offsets-24/00000000000000000000.log.deleted
> at sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
> at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
> at sun.nio.fs.UnixCopyFile.move(UnixCopyFile.java:396)
> at sun.nio.fs.UnixFileSystemProvider.move(UnixFileSystemProvider.java:262)
> at java.nio.file.Files.move(Files.java:1395)
> at org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:694)
> ... 16 more
> [2018-05-07 16:53:06,725] INFO [ReplicaManager broker=0] Stopping serving replicas in dir /tmp/kafka-logs (kafka.server.ReplicaManager)
> [2018-05-07 16:53:06,762] INFO Stopping serving logs in dir /tmp/kafka-logs (kafka.log.LogManager)
> [2018-05-07 16:53:07,032] ERROR Shutdown broker because all log dirs in /tmp/kafka-logs have failed (kafka.log.LogManager)
>
> Please let me know what the issue may be.
>
> Partha
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)