Posted to jira@kafka.apache.org by "Swaroop Kumar Sahu (Jira)" <ji...@apache.org> on 2020/02/20 11:34:00 UTC

[jira] [Commented] (KAFKA-1194) The kafka broker cannot delete the old log files after the configured time

    [ https://issues.apache.org/jira/browse/KAFKA-1194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17040878#comment-17040878 ] 

Swaroop Kumar Sahu commented on KAFKA-1194:
-------------------------------------------

Hi [~tqin],

The issue still exists in the latest stable version, 2.4.0.

Please update the Affects Versions field accordingly.

Logs:

at kafka.cluster.Partition.delete(Partition.scala:470)
 at kafka.server.ReplicaManager.stopReplica(ReplicaManager.scala:360)
 at kafka.server.ReplicaManager.$anonfun$stopReplicas$2(ReplicaManager.scala:404)
 at scala.collection.immutable.HashSet.foreach(HashSet.scala:932)
 at kafka.server.ReplicaManager.stopReplicas(ReplicaManager.scala:402)
 at kafka.server.KafkaApis.handleStopReplicaRequest(KafkaApis.scala:235)
 at kafka.server.KafkaApis.handle(KafkaApis.scala:131)
 at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:70)
 at java.lang.Thread.run(Thread.java:748)
 Suppressed: java.nio.file.AccessDeniedException: C:\kafka_2.13-2.4.0\data\kafka\second_topic-4 -> C:\kafka_2.13-2.4.0\data\kafka\second_topic-4.2f5254be07e947f7b0d999fa29a384f3-delete
 at sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:83)
 at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
 at sun.nio.fs.WindowsFileCopy.move(WindowsFileCopy.java:301)
 at sun.nio.fs.WindowsFileSystemProvider.move(WindowsFileSystemProvider.java:287)
 at java.nio.file.Files.move(Files.java:1395)
 at org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:792)
 ... 17 more
[2020-02-20 17:01:10,761] INFO [ReplicaFetcherManager on broker 0] Removed fetcher for partitions Set(first_topic-0, first_topic-1, first_topic-2) (kafka.server.ReplicaFetcherManager)
[2020-02-20 17:01:10,762] INFO [ReplicaAlterLogDirsManager on broker 0] Removed fetcher for partitions Set(first_topic-0, first_topic-1, first_topic-2) (kafka.server.ReplicaAlterLogDirsManager)
[2020-02-20 17:01:10,765] INFO [ReplicaManager broker=0] Broker 0 stopped fetcher for partitions first_topic-0,first_topic-1,first_topic-2 and stopped moving logs for partitions because they are in the failed log directory C:\kafka_2.13-2.4.0\data\kafka. (kafka.server.ReplicaManager)
[2020-02-20 17:01:10,766] INFO Stopping serving logs in dir C:\kafka_2.13-2.4.0\data\kafka (kafka.log.LogManager)
[2020-02-20 17:01:10,770] ERROR Shutdown broker because all log dirs in C:\kafka_2.13-2.4.0\data\kafka have failed (kafka.log.LogManager)

C:\kafka_2.13-2.4.0>
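
For context, the suppressed AccessDeniedException above can be reproduced outside Kafka with a minimal Java sketch (the class name MappedRenameRepro and the file names are mine, purely illustrative): it memory-maps a file the way Kafka maps its index files, closes the channel, and then attempts the same kind of Files.move that atomicMoveWithFallback performs. On Windows the live mapping keeps the file pinned, so the move fails; on Linux or macOS it succeeds.

    import java.io.IOException;
    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;
    import java.nio.file.AccessDeniedException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.StandardCopyOption;
    import java.nio.file.StandardOpenOption;

    public class MappedRenameRepro {
        public static void main(String[] args) throws IOException {
            Path dir = Files.createTempDirectory("kafka-1194");
            Path index = dir.resolve("00000000000000000000.index");
            Path deleted = dir.resolve("00000000000000000000.index.deleted");

            MappedByteBuffer mapped;
            try (FileChannel ch = FileChannel.open(index,
                    StandardOpenOption.CREATE, StandardOpenOption.READ,
                    StandardOpenOption.WRITE)) {
                // Kafka memory-maps its index files in the same way.
                mapped = ch.map(FileChannel.MapMode.READ_WRITE, 0, 4096);
            }
            // The channel is closed, but the mapping still references the
            // file. On Windows that alone is enough to block the rename.
            try {
                Files.move(index, deleted, StandardCopyOption.ATOMIC_MOVE);
                System.out.println("rename succeeded (expected on Linux/macOS)");
            } catch (AccessDeniedException e) {
                System.out.println("rename failed, as on Windows: " + e);
            }
            mapped.get(0); // keep the mapping reachable until after the move
        }
    }

Running this on Windows should print the AccessDeniedException branch; explicitly unmapping the buffer first (see the forceUnmap sketch quoted below) is what would let the move succeed.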

> The kafka broker cannot delete the old log files after the configured time
> --------------------------------------------------------------------------
>
>                 Key: KAFKA-1194
>                 URL: https://issues.apache.org/jira/browse/KAFKA-1194
>             Project: Kafka
>          Issue Type: Bug
>          Components: log
>    Affects Versions: 0.10.0.0, 0.11.0.0, 1.0.0
>         Environment: window
>            Reporter: Tao Qin
>            Priority: Critical
>              Labels: features, patch, windows
>         Attachments: KAFKA-1194.patch, RetentionExpiredWindows.txt, Untitled.jpg, image-2018-09-12-14-25-52-632.png, image-2018-11-26-10-18-59-381.png, kafka-1194-v1.patch, kafka-1194-v2.patch, kafka-bombarder.7z, screenshot-1.png
>
>   Original Estimate: 72h
>  Remaining Estimate: 72h
>
> We tested it in a Windows environment and set log.retention.hours to 24 hours:
> # The minimum age of a log file to be eligible for deletion
> log.retention.hours=24
> After several days, the Kafka broker still cannot delete the old log files, and we get the following exceptions:
> [2013-12-19 01:57:38,528] ERROR Uncaught exception in scheduled task 'kafka-log-retention' (kafka.utils.KafkaScheduler)
> kafka.common.KafkaStorageException: Failed to change the log file suffix from  to .deleted for log segment 1516723
>          at kafka.log.LogSegment.changeFileSuffixes(LogSegment.scala:249)
>          at kafka.log.Log.kafka$log$Log$$asyncDeleteSegment(Log.scala:638)
>          at kafka.log.Log.kafka$log$Log$$deleteSegment(Log.scala:629)
>          at kafka.log.Log$$anonfun$deleteOldSegments$1.apply(Log.scala:418)
>          at kafka.log.Log$$anonfun$deleteOldSegments$1.apply(Log.scala:418)
>          at scala.collection.LinearSeqOptimized$class.foreach(LinearSeqOptimized.scala:59)
>          at scala.collection.immutable.List.foreach(List.scala:76)
>          at kafka.log.Log.deleteOldSegments(Log.scala:418)
>          at kafka.log.LogManager.kafka$log$LogManager$$cleanupExpiredSegments(LogManager.scala:284)
>          at kafka.log.LogManager$$anonfun$cleanupLogs$3.apply(LogManager.scala:316)
>          at kafka.log.LogManager$$anonfun$cleanupLogs$3.apply(LogManager.scala:314)
>          at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:743)
>          at scala.collection.Iterator$class.foreach(Iterator.scala:772)
>          at scala.collection.JavaConversions$JIteratorWrapper.foreach(JavaConversions.scala:573)
>          at scala.collection.IterableLike$class.foreach(IterableLike.scala:73)
>          at scala.collection.JavaConversions$JListWrapper.foreach(JavaConversions.scala:615)
>          at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:742)
>          at kafka.log.LogManager.cleanupLogs(LogManager.scala:314)
>          at kafka.log.LogManager$$anonfun$startup$1.apply$mcV$sp(LogManager.scala:143)
>          at kafka.utils.KafkaScheduler$$anon$1.run(KafkaScheduler.scala:100)
>          at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>          at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
>          at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
>          at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>          at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>          at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>          at java.lang.Thread.run(Thread.java:724)
> I think this error happens because Kafka tries to rename the log file while it is still open, so we should close the file before renaming it.
> The index file uses a special data structure, the MappedByteBuffer. The Javadoc describes it as:
> A mapped byte buffer and the file mapping that it represents remain valid until the buffer itself is garbage-collected.
> Fortunately, I found a forceUnmap function in the Kafka code, and perhaps it can be used to free the MappedByteBuffer before the rename.
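> A minimal sketch of that idea on Java 8 (sun.misc.Cleaner is an internal JDK API reached via reflection, so this is illustrative rather than portable, and the helper below is not necessarily Kafka's actual forceUnmap):
>
>     import java.lang.reflect.Method;
>     import java.nio.MappedByteBuffer;
>
>     public class UnmapSketch {
>         // Explicitly release a MappedByteBuffer's file mapping instead of
>         // waiting for the buffer to be garbage-collected.
>         static void forceUnmap(MappedByteBuffer buffer) throws Exception {
>             Method cleanerMethod = buffer.getClass().getMethod("cleaner");
>             cleanerMethod.setAccessible(true);
>             Object cleaner = cleanerMethod.invoke(buffer); // sun.misc.Cleaner on Java 8
>             Method cleanMethod = cleaner.getClass().getMethod("clean");
>             cleanMethod.setAccessible(true);
>             cleanMethod.invoke(cleaner); // unmaps; never touch the buffer afterwards
>         }
>     }
>
> If the mapping is released like this (and the channel closed) before the rename, Windows should no longer consider the file in use when changeFileSuffixes runs.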



--
This message was sent by Atlassian Jira
(v8.3.4#803005)