Posted to dev@kafka.apache.org by 黄杰斌 <jb...@gmail.com> on 2016/07/01 06:19:18 UTC

Brokers crash because the __consumer_offsets folders were deleted

Hi All,

Have you encountered the issue below when using kafka_2.11-0.10.0.0?
All of our brokers crashed because the __consumer_offsets folders were deleted.
Sample log:
[2016-06-30 12:46:32,579] FATAL [Replica Manager on Broker 2]: Halting due to unrecoverable I/O error while handling produce request: (kafka.server.ReplicaManager)
kafka.common.KafkaStorageException: I/O exception in append to log '__consumer_offsets-32'
        at kafka.log.Log.append(Log.scala:329)
        at kafka.cluster.Partition$$anonfun$11.apply(Partition.scala:443)
        at kafka.cluster.Partition$$anonfun$11.apply(Partition.scala:429)
        at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:231)
        at kafka.utils.CoreUtils$.inReadLock(CoreUtils.scala:237)
        at kafka.cluster.Partition.appendMessagesToLeader(Partition.scala:429)
        at kafka.server.ReplicaManager$$anonfun$appendToLocalLog$2.apply(ReplicaManager.scala:406)
        at kafka.server.ReplicaManager$$anonfun$appendToLocalLog$2.apply(ReplicaManager.scala:392)
        at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
        at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
        at scala.collection.immutable.Map$Map1.foreach(Map.scala:116)
        at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
        at scala.collection.AbstractTraversable.map(Traversable.scala:104)
        at kafka.server.ReplicaManager.appendToLocalLog(ReplicaManager.scala:392)
        at kafka.server.ReplicaManager.appendMessages(ReplicaManager.scala:328)
        at kafka.coordinator.GroupMetadataManager.store(GroupMetadataManager.scala:232)
        at kafka.coordinator.GroupCoordinator$$anonfun$handleCommitOffsets$9.apply(GroupCoordinator.scala:424)
        at kafka.coordinator.GroupCoordinator$$anonfun$handleCommitOffsets$9.apply(GroupCoordinator.scala:424)
        at scala.Option.foreach(Option.scala:257)
        at kafka.coordinator.GroupCoordinator.handleCommitOffsets(GroupCoordinator.scala:424)
        at kafka.server.KafkaApis.handleOffsetCommitRequest(KafkaApis.scala:310)
        at kafka.server.KafkaApis.handle(KafkaApis.scala:84)
        at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:60)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.FileNotFoundException: /tmp/kafka2-logs/__consumer_offsets-32/00000000000000000000.index (No such file or directory)
        at java.io.RandomAccessFile.open0(Native Method)
        at java.io.RandomAccessFile.open(RandomAccessFile.java:316)
        at java.io.RandomAccessFile.<init>(RandomAccessFile.java:243)
        at kafka.log.OffsetIndex$$anonfun$resize$1.apply(OffsetIndex.scala:286)
        at kafka.log.OffsetIndex$$anonfun$resize$1.apply(OffsetIndex.scala:285)
        at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:231)
        at kafka.log.OffsetIndex.resize(OffsetIndex.scala:285)
        at kafka.log.OffsetIndex$$anonfun$trimToValidSize$1.apply$mcV$sp(OffsetIndex.scala:274)
        at kafka.log.OffsetIndex$$anonfun$trimToValidSize$1.apply(OffsetIndex.scala:274)
        at kafka.log.OffsetIndex$$anonfun$trimToValidSize$1.apply(OffsetIndex.scala:274)
        at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:231)
        at kafka.log.OffsetIndex.trimToValidSize(OffsetIndex.scala:273)
        at kafka.log.Log.roll(Log.scala:655)
        at kafka.log.Log.maybeRoll(Log.scala:630)
        at kafka.log.Log.append(Log.scala:383)
        ... 23 more

No one removed those folders, and since the __consumer_offsets topic is
managed by the broker itself, no one should be able to delete it.
Do you know why this happened, and how to avoid it?

Best Regards,
Ben

Re: Brokers crash because the __consumer_offsets folders were deleted

Posted by "Tauzell, Dave" <Da...@surescripts.com>.
/var/log would be a better default.

Dave

> On Jul 2, 2016, at 07:09, Ismael Juma <is...@juma.me.uk> wrote:
>
> Hi Peter,
>
> It's a good question why `log.dir` defaults to `/tmp`. I assume it's to
> make it easier for people to get started with Kafka, but unsafe defaults
> should be avoided as much as possible in my opinion.
>
> Ismael
>
>> On Sat, Jul 2, 2016 at 5:15 AM, Peter Davis <da...@gmail.com> wrote:
>>
>>
>> Dear Community: why does log.dir default under /tmp?  It is unsafe as a
>> default.
>>
>> -Peter
>>
>>
>>> On Jun 30, 2016, at 11:19 PM, 黄杰斌 <jb...@gmail.com> wrote:
>>>
>>> [...]
>>

Re: Brokers crash because the __consumer_offsets folders were deleted

Posted by Ismael Juma <is...@juma.me.uk>.
Hi Peter,

It's a good question why `log.dir` defaults to `/tmp`. I assume it's to
make it easier for people to get started with Kafka, but unsafe defaults
should be avoided as much as possible in my opinion.

Ismael

On Sat, Jul 2, 2016 at 5:15 AM, Peter Davis <da...@gmail.com> wrote:

>
> Dear Community: why does log.dir default under /tmp?  It is unsafe as a
> default.
>
> -Peter
>
>
> > On Jun 30, 2016, at 11:19 PM, 黄杰斌 <jb...@gmail.com> wrote:
> >
> > [...]
>

Re: Brokers crash because the __consumer_offsets folders were deleted

Posted by 黄杰斌 <jb...@gmail.com>.
Hi Peter,

The server was not restarted. Anyway, I will change log.dir and monitor it
again. Thanks for your help.

Best Regards,
Ben
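
If /tmp contents can vanish while the server stays up, one plausible cause
(not confirmed in this thread) is a periodic temp-file cleaner: many Linux
distributions age out old /tmp entries on a schedule, via systemd-tmpfiles
or a tmpwatch cron job, with no reboot involved. A typical systemd rule
looks roughly like the sketch below; the exact file, type letter, and age
vary by distribution:

    # /usr/lib/tmpfiles.d/tmp.conf (illustrative; check your distribution's copy)
    # The trailing "10d" ages out entries under /tmp that have not been
    # used for ten days -- even though the machine never restarts.
    q /tmp 1777 root root 10d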

Peter Davis <da...@gmail.com> wrote on Sat, Jul 2, 2016 at 12:15 PM:

> Dear 黄杰斌:
>
> I am guessing your operating system is configured to delete your /tmp
> directory when you restart the server.
>
> You will need to change the "log.dir" property in your broker's
> server.properties file to someplace permanent.  Unfortunately, your data is
> lost unless you had a backup or had configured replication.
>
> log.dir -- The directory in which the log data is kept (supplemental for
> the log.dirs property). Type: string. Default: /tmp/kafka-logs. Importance: high.
>
>
> Dear Community: why does log.dir default under /tmp?  It is unsafe as a
> default.
>
> -Peter
>
>
> > On Jun 30, 2016, at 11:19 PM, 黄杰斌 <jb...@gmail.com> wrote:
> >
> > [...]
>

Re: Brokers crash because the __consumer_offsets folders were deleted

Posted by Peter Davis <da...@gmail.com>.
Dear 黄杰斌:

I am guessing your operating system is configured to delete your /tmp directory when you restart the server.

You will need to change the "log.dir" property in your broker's server.properties file to someplace permanent.  Unfortunately, your data is lost unless you had a backup or had configured replication. 

log.dir -- The directory in which the log data is kept (supplemental for the log.dirs property). Type: string. Default: /tmp/kafka-logs. Importance: high.
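
Since the default points at /tmp, a minimal fix is to repoint the broker at
durable storage. A sketch of the server.properties change, assuming a
dedicated data directory (the path below is illustrative, not from this
thread):

    # server.properties -- keep Kafka data out of /tmp
    # /var/lib/kafka/data is an example path; log.dirs accepts a
    # comma-separated list of persistent, broker-writable directories.
    log.dirs=/var/lib/kafka/data

Replication also limits the damage from one lost data directory: the broker
setting offsets.topic.replication.factor controls how many replicas the
offsets topic gets when it is first created.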


Dear Community: why does log.dir default under /tmp?  It is unsafe as a default.

-Peter


> On Jun 30, 2016, at 11:19 PM, 黄杰斌 <jb...@gmail.com> wrote:
>
> [...]
