Posted to users@kafka.apache.org by Chen Song <ch...@gmail.com> on 2016/01/21 21:13:20 UTC

exception in log cleaner

I am testing the compact cleanup policy on a topic and got the following
exception in log-cleaner.log. It seems to be related to the size of a
ByteBuffer. Has anyone seen this error before, or is there a config I can
tune to increase it?


[2016-01-21 04:21:23,083] INFO Cleaner 0: Beginning cleaning of log topic-0. (kafka.log.LogCleaner)
[2016-01-21 04:21:23,083] INFO Cleaner 0: Building offset map for topic-0... (kafka.log.LogCleaner)
[2016-01-21 04:21:23,148] INFO Cleaner 0: Building offset map for log topic-0 for 6 segments in offset range [0, 56070). (kafka.log.LogCleaner)
[2016-01-21 04:21:23,441] INFO Cleaner 0: Offset map for log topic-0 complete. (kafka.log.LogCleaner)
[2016-01-21 04:21:23,442] INFO Cleaner 0: Cleaning log topic-0 (discarding tombstones prior to Thu Jan 01 00:00:00 UTC 1970)... (kafka.log.LogCleaner)
[2016-01-21 04:21:23,442] INFO Cleaner 0: Cleaning segment 0 in log topic-0 (last modified Wed Jan 20 18:38:33 UTC 2016) into 0, retaining deletes. (kafka.log.LogCleaner)
[2016-01-21 04:21:23,455] INFO Cleaner 0: Cleaning segment 6230 in log topic-0 (last modified Wed Jan 20 20:27:59 UTC 2016) into 0, retaining deletes. (kafka.log.LogCleaner)
[2016-01-21 04:21:23,467] INFO Cleaner 0: Cleaning segment 18690 in log topic-0 (last modified Wed Jan 20 21:26:58 UTC 2016) into 0, retaining deletes. (kafka.log.LogCleaner)
[2016-01-21 04:21:23,479] INFO Cleaner 0: Cleaning segment 24920 in log topic-0 (last modified Wed Jan 20 23:26:48 UTC 2016) into 0, retaining deletes. (kafka.log.LogCleaner)
[2016-01-21 04:21:23,489] INFO Cleaner 0: Cleaning segment 37380 in log topic-0 (last modified Thu Jan 21 00:25:16 UTC 2016) into 0, retaining deletes. (kafka.log.LogCleaner)
[2016-01-21 04:21:23,495] INFO Cleaner 0: Cleaning segment 43610 in log topic-0 (last modified Thu Jan 21 02:26:17 UTC 2016) into 0, retaining deletes. (kafka.log.LogCleaner)
[2016-01-21 04:21:23,513] ERROR [kafka-log-cleaner-thread-0], Error due to  (kafka.log.LogCleaner)
java.nio.BufferOverflowException
        at java.nio.Buffer.nextPutIndex(Buffer.java:519)
        at java.nio.HeapByteBuffer.putLong(HeapByteBuffer.java:417)
        at kafka.message.ByteBufferMessageSet$.writeMessage(ByteBufferMessageSet.scala:80)
        at kafka.log.Cleaner$$anonfun$cleanInto$1.apply(LogCleaner.scala:419)
        at kafka.log.Cleaner$$anonfun$cleanInto$1.apply(LogCleaner.scala:404)
        at scala.collection.Iterator$class.foreach(Iterator.scala:727)
        at kafka.utils.IteratorTemplate.foreach(IteratorTemplate.scala:32)
        at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
        at kafka.message.MessageSet.foreach(MessageSet.scala:67)
        at kafka.log.Cleaner.cleanInto(LogCleaner.scala:404)
        at kafka.log.Cleaner$$anonfun$cleanSegments$1.apply(LogCleaner.scala:358)
        at kafka.log.Cleaner$$anonfun$cleanSegments$1.apply(LogCleaner.scala:354)
        at scala.collection.immutable.List.foreach(List.scala:318)
        at kafka.log.Cleaner.cleanSegments(LogCleaner.scala:354)
        at kafka.log.Cleaner$$anonfun$clean$4.apply(LogCleaner.scala:321)
        at kafka.log.Cleaner$$anonfun$clean$4.apply(LogCleaner.scala:320)
        at scala.collection.immutable.List.foreach(List.scala:318)
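
For reference, the broker settings I can find that size the cleaner's
buffers are log.cleaner.io.buffer.size (total memory for the cleaner's
read/write I/O buffers across all cleaner threads) and
log.cleaner.dedupe.buffer.size (memory for the offset map used for
deduplication). I am not sure whether raising either of them actually
avoids this overflow; the values below are only for illustration, e.g. in
server.properties:

    # Illustrative values only; not a confirmed fix for this exception.
    # Total memory used for log cleaner I/O buffers across all cleaner threads:
    log.cleaner.io.buffer.size=1048576
    # Total memory used for log deduplication (the cleaner's offset map):
    log.cleaner.dedupe.buffer.size=134217728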

-- 
Chen Song

Re: exception in log cleaner

Posted by Chen Song <ch...@gmail.com>.
I read a bit of the code (see below) and am confused.

For each message in the message set generated from the read buffer, the
cleaner iterates and writes it into the write buffer. In theory, the number
of bytes written should never exceed the number of bytes read.

    val messages = new ByteBufferMessageSet(source.log.readInto(readBuffer, position))

    for (entry <- messages.shallowIterator) {
      ByteBufferMessageSet.writeMessage(writeBuffer, entry.message, entry.offset)
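
Reading the trace, the overflow is raised by the JDK rather than by Kafka:
the putLong inside ByteBufferMessageSet.writeMessage writes the message's
8-byte offset, and Buffer.nextPutIndex throws BufferOverflowException as
soon as fewer than 8 bytes remain in the write buffer. A minimal standalone
Scala sketch of just that mechanism (not the cleaner code itself):

    import java.nio.ByteBuffer

    object OverflowDemo extends App {
      // A destination buffer with only 4 bytes of capacity left.
      val writeBuffer = ByteBuffer.allocate(4)
      // putLong needs 8 bytes; Buffer.nextPutIndex finds too little room and
      // throws java.nio.BufferOverflowException, as in the cleaner trace.
      writeBuffer.putLong(42L)
    }

The sketch only reproduces the JDK behavior; it says nothing about why the
cleaner's write buffer ran out of room in the first place.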


On Thu, Jan 21, 2016 at 3:13 PM, Chen Song <ch...@gmail.com> wrote:


-- 
Chen Song