Posted to jira@kafka.apache.org by "Law (JIRA)" <ji...@apache.org> on 2018/02/05 18:03:00 UTC

[jira] [Created] (KAFKA-6533) Kafka log cleaner stopped due to "cannot allocate memory" error

Law created KAFKA-6533:
--------------------------

             Summary: Kafka log cleaner stopped due to "cannot allocate memory" error
                 Key: KAFKA-6533
                 URL: https://issues.apache.org/jira/browse/KAFKA-6533
             Project: Kafka
          Issue Type: Bug
    Affects Versions: 0.10.2.0
            Reporter: Law


Hi,

I am running Kafka 0.10.2.0 and have an issue where the log cleaner runs fine for a while and then suddenly stops with a "cannot allocate memory" error.

Here is the error from the log-cleaner.log file (a note on where the "Cannot allocate memory" message comes from follows the trace):

[2018-02-04 02:57:41,343] INFO [kafka-log-cleaner-thread-0],
        Log cleaner thread 0 cleaned log __consumer_offsets-35 (dirty section = [31740820448, 31740820448])
        100.1 MB of log processed in 1.5 seconds (67.5 MB/sec).
        Indexed 100.0 MB in 0.8 seconds (131.8 Mb/sec, 51.2% of total time)
        Buffer utilization: 0.0%
        Cleaned 100.1 MB in 0.7 seconds (138.2 Mb/sec, 48.8% of total time)
        Start size: 100.1 MB (771,501 messages)
        End size: 0.1 MB (501 messages)
        99.9% size reduction (99.9% fewer messages)
 (kafka.log.LogCleaner)
[2018-02-04 02:57:41,348] INFO Cleaner 0: Beginning cleaning of log __consumer_offsets-15. (kafka.log.LogCleaner)
[2018-02-04 02:57:41,348] INFO Cleaner 0: Building offset map for __consumer_offsets-15... (kafka.log.LogCleaner)
[2018-02-04 02:57:41,359] INFO Cleaner 0: Building offset map for log __consumer_offsets-15 for 1 segments in offset range [19492717509, 19493524087). (kafka.log.LogCleaner)
[2018-02-04 02:57:42,067] INFO Cleaner 0: Offset map for log __consumer_offsets-15 complete. (kafka.log.LogCleaner)
[2018-02-04 02:57:42,067] INFO Cleaner 0: Cleaning log __consumer_offsets-15 (cleaning prior to Sun Feb 04 02:57:34 GMT 2018, discarding tombstones prior to Sat Feb 03 02:53:31 GMT 2018)... (kafka.log.LogCleaner)
[2018-02-04 02:57:42,068] INFO Cleaner 0: Cleaning segment 0 in log __consumer_offsets-15 (largest timestamp Sat Sep 02 15:26:15 GMT 2017) into 0, discarding deletes. (kafka.log.LogCleaner)
[2018-02-04 02:57:42,078] INFO Cleaner 0: Swapping in cleaned segment 0 for segment(s) 0 in log __consumer_offsets-15. (kafka.log.LogCleaner)
[2018-02-04 02:57:42,078] INFO Cleaner 0: Cleaning segment 2148231985 in log __consumer_offsets-15 (largest timestamp Thu Sep 28 15:50:19 GMT 2017) into 2148231985, discarding deletes. (kafka.log.LogCleaner)
[2018-02-04 02:57:42,080] INFO Cleaner 0: Swapping in cleaned segment 2148231985 for segment(s) 2148231985 in log __consumer_offsets-15. (kafka.log.LogCleaner)
[2018-02-04 02:57:42,081] INFO Cleaner 0: Cleaning segment 4296532622 in log __consumer_offsets-15 (largest timestamp Tue Oct 24 10:33:20 GMT 2017) into 4296532622, discarding deletes. (kafka.log.LogCleaner)
[2018-02-04 02:57:42,083] INFO Cleaner 0: Swapping in cleaned segment 4296532622 for segment(s) 4296532622 in log __consumer_offsets-15. (kafka.log.LogCleaner)
[2018-02-04 02:57:42,083] INFO Cleaner 0: Cleaning segment 6444525822 in log __consumer_offsets-15 (largest timestamp Mon Nov 20 11:33:30 GMT 2017) into 6444525822, discarding deletes. (kafka.log.LogCleaner)
[2018-02-04 02:57:42,085] INFO Cleaner 0: Swapping in cleaned segment 6444525822 for segment(s) 6444525822 in log __consumer_offsets-15. (kafka.log.LogCleaner)
[2018-02-04 02:57:42,086] INFO Cleaner 0: Cleaning segment 8592045249 in log __consumer_offsets-15 (largest timestamp Sat Dec 16 06:35:53 GMT 2017) into 8592045249, discarding deletes. (kafka.log.LogCleaner)
[2018-02-04 02:57:42,088] INFO Cleaner 0: Swapping in cleaned segment 8592045249 for segment(s) 8592045249 in log __consumer_offsets-15. (kafka.log.LogCleaner)
[2018-02-04 02:57:42,088] INFO Cleaner 0: Cleaning segment 10739582585 in log __consumer_offsets-15 (largest timestamp Wed Dec 27 21:15:44 GMT 2017) into 10739582585, discarding deletes. (kafka.log.LogCleaner)
[2018-02-04 02:57:42,091] INFO Cleaner 0: Swapping in cleaned segment 10739582585 for segment(s) 10739582585 in log __consumer_offsets-15. (kafka.log.LogCleaner)
[2018-02-04 02:57:42,096] ERROR [kafka-log-cleaner-thread-0], Error due to  (kafka.log.LogCleaner)
java.io.FileNotFoundException: /kafka/broker1-logs/__consumer_offsets-15/00000000012887210320.log.cleaned (Cannot allocate memory)
        at java.io.RandomAccessFile.open0(Native Method)
        at java.io.RandomAccessFile.open(RandomAccessFile.java:316)
        at java.io.RandomAccessFile.<init>(RandomAccessFile.java:243)
        at org.apache.kafka.common.record.FileRecords.openChannel(FileRecords.java:428)
        at org.apache.kafka.common.record.FileRecords.open(FileRecords.java:384)
        at org.apache.kafka.common.record.FileRecords.open(FileRecords.java:393)
        at kafka.log.Cleaner.cleanSegments(LogCleaner.scala:394)
        at kafka.log.Cleaner.$anonfun$clean$6(LogCleaner.scala:363)
        at kafka.log.Cleaner.$anonfun$clean$6$adapted(LogCleaner.scala:362)
        at scala.collection.immutable.List.foreach(List.scala:378)
        at kafka.log.Cleaner.clean(LogCleaner.scala:362)
        at kafka.log.LogCleaner$CleanerThread.cleanOrSleep(LogCleaner.scala:241)
        at kafka.log.LogCleaner$CleanerThread.doWork(LogCleaner.scala:220)
        at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:63)
[2018-02-04 02:57:42,096] INFO [kafka-log-cleaner-thread-0], Stopped  (kafka.log.LogCleaner)
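
For context on why an out-of-memory condition shows up as a FileNotFoundException: RandomAccessFile reports whatever error string the operating system returns from open(2) in the exception message, so an ENOMEM at the OS level surfaces as "<path> (Cannot allocate memory)". Below is a minimal Java sketch of the same call shape; this is not the Kafka source, just an illustration using the path from the trace above:

        import java.io.File;
        import java.io.FileNotFoundException;
        import java.io.RandomAccessFile;

        public class OpenCleanedSegment {
            public static void main(String[] args) throws Exception {
                // Same file the cleaner tried to open in the trace above.
                File f = new File("/kafka/broker1-logs/__consumer_offsets-15/00000000012887210320.log.cleaned");
                try (RandomAccessFile raf = new RandomAccessFile(f, "rw")) {
                    System.out.println("opened: " + raf.getChannel());
                } catch (FileNotFoundException e) {
                    // On Linux the JDK appends the native error string to the path,
                    // e.g. "... .log.cleaned (Cannot allocate memory)" for ENOMEM.
                    System.err.println("open failed: " + e.getMessage());
                }
            }
        }

So the filename in the exception is probably not the problem itself; it looks like the broker process hit an OS-level memory limit while opening the new .cleaned segment.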

 

Here are the log cleaner settings I have set (a rough capacity sketch follows the list):

        log.cleaner.backoff.ms = 15000
        log.cleaner.dedupe.buffer.size = 134217728
        log.cleaner.delete.retention.ms = 86400000
        log.cleaner.enable = true
        log.cleaner.io.buffer.load.factor = 0.9
        log.cleaner.io.buffer.size = 524288
        log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
        log.cleaner.min.cleanable.ratio = 0.5
        log.cleaner.min.compaction.lag.ms = 0
        log.cleaner.threads = 2
        log.cleanup.policy =
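
For reference, here is a back-of-the-envelope sketch of what these settings mean for the dedupe buffer. It assumes the buffer is split evenly across the cleaner threads and that each offset-map entry takes a 16-byte hash plus an 8-byte offset, scaled by the io buffer load factor; this is an estimate, not Kafka source:

        public class DedupeBufferCapacity {
            public static void main(String[] args) {
                long dedupeBufferSize = 134_217_728L; // log.cleaner.dedupe.buffer.size
                int cleanerThreads = 2;               // log.cleaner.threads
                double loadFactor = 0.9;              // log.cleaner.io.buffer.load.factor
                int bytesPerEntry = 16 + 8;           // assumed: 16-byte hash + 8-byte offset

                long perThread = dedupeBufferSize / cleanerThreads;
                long usableKeys = (long) ((perThread / bytesPerEntry) * loadFactor);

                System.out.printf("per-thread buffer: %d bytes, roughly %d unique keys per pass%n",
                        perThread, usableKeys);
            }
        }

With two threads that works out to about 64 MB per thread, or roughly 2.5 million unique keys per cleaning pass, and the log above shows 0.0% buffer utilization, so the dedupe buffer itself does not look like the culprit.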

 

I haven't found any explanation online for what might be causing this. Can anyone help?

The only way that I know of to fix this is to restart the broker. Is there a way to restart the log cleaner without restarting the broker?

Thanks in advance



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)