Posted to users@kafka.apache.org by Yonghui Zhao <zh...@gmail.com> on 2015/01/06 13:46:45 UTC

kafka deleted old logs but not released

Hi,

We use kafka_2.10-0.8.1.1 on our server. Today we got a disk space alert.

We found that many Kafka data files have been deleted, but they are still
held open by Kafka.

such as:

_yellowpageV2-0/00000000000068170670.log (deleted)
java  8446  root  724u  REG  253,2  536937911  26087362  /home/work/data/soft/kafka-0.8/data/_oakbay_v2_search_topic_ypgsearch_yellowpageV2-0/00000000000068818668.log (deleted)
java  8446  root  725u  REG  253,2  536910838  26087364  /home/work/data/soft/kafka-0.8/data/_oakbay_v2_search_topic_ypgsearch_yellowpageV2-0/00000000000069457098.log (deleted)
java  8446  root  726u  REG  253,2  536917902  26087368  /home/work/data/soft/kafka-0.8/data/_oakbay_v2_search_topic_ypgsearch_yellowpageV2-0/00000000000070104914.log (deleted)
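
This is standard POSIX behavior rather than anything Kafka-specific:
deleting a file removes its directory entry, but the disk space is only
reclaimed once the last open handle is closed. A minimal sketch (the file
name segment.log is hypothetical):

import java.io.{File, RandomAccessFile}

// Deleting an open file removes its name, but the inode and its blocks
// survive until the last open handle is closed.
object DeletedButOpen extends App {
  val f = new File("segment.log")
  val raf = new RandomAccessFile(f, "rw")
  raf.writeBytes("some segment data\n")

  println(f.delete())     // true: the directory entry is gone
  println(f.exists())     // false: lsof now reports the file as "(deleted)"

  raf.seek(0)
  println(raf.readLine()) // the open handle still reads the data, and the
                          // blocks still count against the filesystem

  raf.close()             // only now does the kernel free the space
}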


Is there anything wrong, or is something misconfigured?

Re: kafka deleted old logs but not released

Posted by Jaikiran Pai <ja...@gmail.com>.
Created a JIRA for this: https://issues.apache.org/jira/browse/KAFKA-1853

-Jaikiran
On Thursday 08 January 2015 01:18 PM, Jaikiran Pai wrote:
> Apart from the fact that the file rename is failing (the API notes
> that there is a chance of the rename failing), it looks like the
> implementation of FileMessageSet's rename can cause a couple of
> issues, one of them being a leak.
>
> The implementation looks like this:
> https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/log/FileMessageSet.scala#L268
> Notice that the reference to the original file member variable is
> swapped with a new one, but the (old) FileChannel held in that
> FileMessageSet isn't closed. That, I think, explains the leak.
> Furthermore, a new FileChannel for the new File instance isn't being
> created either, and that's a separate issue.
>
> P.S.: I'm not very familiar with the Kafka code yet. The above
> explanation is just based on a quick look at that piece of code and
> doesn't take into account any other context there might be.
>
> -Jaikiran
>
>
> On Thursday 08 January 2015 11:19 AM, Yonghui Zhao wrote:
>> CentOS release 6.3 (Final)
>>
>>
>> 2015-01-07 22:18 GMT+08:00 Harsha <ka...@harsha.io>:
>>
>>> Yonghui,
>>>             Which OS are you running?
>>> -Harsha
>>>
>>> On Wed, Jan 7, 2015, at 01:38 AM, Yonghui Zhao wrote:
>>>> Yes, and I found the reason: the rename during deletion failed.
>>>> The files seem to be deleted during the rename, and then the
>>>> exception prevents the files from being closed in Kafka.
>>>> But I don't know how the rename failure can happen.
>>>>
>>>> [2015-01-07 00:10:48,685] ERROR Uncaught exception in scheduled task
>>>> 'kafka-log-retention' (kafka.utils.KafkaScheduler)
>>>> kafka.common.KafkaStorageException: Failed to change the log file suffix
>>>> from  to .deleted for log segment 70781650
>>>>         at kafka.log.LogSegment.changeFileSuffixes(LogSegment.scala:249)
>>>>         at kafka.log.Log.kafka$log$Log$$asyncDeleteSegment(Log.scala:636)
>>>>         at kafka.log.Log.kafka$log$Log$$deleteSegment(Log.scala:627)
>>>>         at kafka.log.Log$$anonfun$deleteOldSegments$1.apply(Log.scala:415)
>>>>         at kafka.log.Log$$anonfun$deleteOldSegments$1.apply(Log.scala:415)
>>>>         at scala.collection.immutable.List.foreach(List.scala:318)
>>>>         at kafka.log.Log.deleteOldSegments(Log.scala:415)
>>>>         at kafka.log.LogManager.kafka$log$LogManager$$cleanupExpiredSegments(LogManager.scala:325)
>>>>         at kafka.log.LogManager$$anonfun$cleanupLogs$3.apply(LogManager.scala:356)
>>>>         at kafka.log.LogManager$$anonfun$cleanupLogs$3.apply(LogManager.scala:354)
>>>>         at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:772)
>>>>         at scala.collection.Iterator$class.foreach(Iterator.scala:727)
>>>>         at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
>>>>         at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
>>>>         at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>>>>         at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771)
>>>>         at kafka.log.LogManager.cleanupLogs(LogManager.scala:354)
>>>>         at kafka.log.LogManager$$anonfun$startup$1.apply$mcV$sp(LogManager.scala:141)
>>>>         at kafka.utils.KafkaScheduler$$anon$1.run(KafkaScheduler.scala:100)
>>>>         at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439)
>>>>         at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:317)
>>>>         at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:150)
>>>>         at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:98)
>>>>         at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.runPeriodic(ScheduledThreadPoolExecutor.java:180)
>>>>         at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:204)
>>>>         at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>>>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>>>         at java.lang.Thread.run(Thread.java:662)
>>>>
>>>>
>>>> 2015-01-07 13:56 GMT+08:00 Jun Rao <ju...@confluent.io>:
>>>>
>>>>> Do you mean that the Kafka broker still holds a file handle on a
>>>>> deleted file? Do you see those files being deleted in the Kafka
>>>>> log4j log?
>>>>>
>>>>> Thanks,
>>>>>
>>>>> Jun


Re: kafka deleted old logs but not released

Posted by Jaikiran Pai <ja...@gmail.com>.
Hi Yonghui,

I have tried a few ways with different retention strategies to try to
reproduce this issue, but haven't been able to. Since it looks like you
can consistently reproduce this, would you be able to share a sample
application that reproduces it (maybe as a GitHub repo)?

-Jaikiran


Re: kafka deleted old logs but not released

Posted by nitin sharma <ku...@gmail.com>.
Hi all,

I am facing the same issue. I have kafka_2.9.2-0.8.1.1.jar deployed in my
application. My operations team reported a significant increase in the
directory where the Kafka log folder lives: the parent directory is 99 GB
(100%) utilized. This was very confusing, because when I added up the
sizes of the .log files of my partitions, it came to only about 40 GB.

Moreover, when I checked the Kafka logs, I saw the following error
message, which suggests something is wrong in the Kafka code.

Can anyone tell me where I can find the old log files that Kafka is still
pointing to? Are they in hidden folders?

[2015-01-23 19:00:11,233] ERROR Uncaught exception in scheduled task
'kafka-log-retention' (kafka.utils.KafkaScheduler)
kafka.common.KafkaStorageException: Failed to change the log file suffix
from  to .deleted for log segment 449238458
    at kafka.log.LogSegment.changeFileSuffixes(LogSegment.scala:249)
    at kafka.log.Log.kafka$log$Log$$asyncDeleteSegment(Log.scala:636)
    at kafka.log.Log.kafka$log$Log$$deleteSegment(Log.scala:627)
    at kafka.log.Log$$anonfun$deleteOldSegments$1.apply(Log.scala:415)
    at kafka.log.Log$$anonfun$deleteOldSegments$1.apply(Log.scala:415)

Regards,
Nitin Kumar Sharma.



Re: kafka deleted old logs but not released

Posted by Yonghui Zhao <zh...@gmail.com>.
I have fixed this issue with a patch along the lines of
https://reviews.apache.org/r/29755/diff/5/.

I find that the rename failure still happens:

server.log.2015-01-26-06:[2015-01-26 06:10:54,513] ERROR File rename failed, forcefully deleting file (kafka.log.Log)
server.log.2015-01-26-06:[2015-01-26 06:10:54,600] ERROR File rename failed, forcefully deleting file (kafka.log.Log)
server.log.2015-01-26-06:[2015-01-26 06:10:54,685] ERROR File rename failed, forcefully deleting file (kafka.log.Log)
server.log.2015-01-26-06:[2015-01-26 06:10:54,797] ERROR File rename failed, forcefully deleting file (kafka.log.Log)
....
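
The fallback behaves roughly like this (a simplified, hypothetical sketch
of the idea; the object and method names are made up and this is not the
actual diff from the review request above):

import java.io.{File, IOException}
import java.nio.channels.FileChannel

object ForcefulDelete {
  // If renaming a segment file to *.deleted fails, close the open channel
  // and delete the file in place, instead of throwing and leaving an open
  // handle to an unlinked file behind.
  def markSegmentDeleted(file: File, channel: FileChannel): Unit = {
    val target = new File(file.getPath + ".deleted")
    if (!file.renameTo(target)) {
      // This branch is what produces the "File rename failed, forcefully
      // deleting file" errors shown above.
      channel.close()
      if (!file.delete())
        throw new IOException("Could not delete " + file.getPath)
    }
  }
}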

And using lsof I can still find some files that are opened by Kafka but
deleted; however, those files' sizes are 0.

java  3228  root  34uw  REG  253,2  0  26084228  /home/work/data/soft/kafka-0.8/data/.lock (deleted)
java  3228  root  35u   REG  253,2  0  26084232  /home/work/data/soft/kafka-0.8/data/cube-0/00000000000000000000.log (deleted)
java  3228  root  36u   REG  253,2  0  26869778  /home/work/data/soft/kafka-0.8/data/_oakbay_v2_search_topic_misearch_appstore-search-0/00000000000000003116.log (deleted)
java  3228  root  37u   REG  253,2  0  26084234  /home/work/data/soft/kafka-0.8/data/_oakbay_v2_search_topic_mishop-search_mishop_v1-0/00000000000000000000.log (deleted)



Here is my configuration:

Binary: kafka_2.10-0.8.1.1
Retention config:

# The minimum age of a log file to be eligible for deletion
log.retention.hours=168

# A size-based retention policy for logs. Segments are pruned from the log
# as long as the remaining segments don't drop below log.retention.bytes.
#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new
# log segment will be created.
log.segment.bytes=536870912

# The interval at which log segments are checked to see if they can be
# deleted according to the retention policies
log.retention.check.interval.ms=60000

# By default the log cleaner is disabled and the log retention policy will
# default to just delete segments after their retention expires.
# If log.cleaner.enable=true is set the cleaner will be enabled and
# individual logs can then be marked for log compaction.
log.cleaner.enable=false



OS: CentOS release 6.4 (Final)
JDK:
java version "1.6.0_37"
Java(TM) SE Runtime Environment (build 1.6.0_37-b06)
Java HotSpot(TM) 64-Bit Server VM (build 20.12-b01, mixed mode)

The JDK is quite old, but I'm not sure whether that causes the rename failure.

Re: kafka deleted old logs but not released

Posted by Jay Kreps <ja...@gmail.com>.
Also, what is the configuration for the servers? In particular it would be
good to know the retention and/or log compaction settings as those delete
files.

-Jay


Re: kafka deleted old logs but not released

Posted by Jaikiran Pai <ja...@gmail.com>.
Hi Yonghui,

Do you still have this happening? If yes, can you tell us a bit more
about your setup? Is there something else that accesses or maybe
deletes these log files? For more context to this question, please read
the discussion related to this here:
http://mail-archives.apache.org/mod_mbox/kafka-dev/201501.mbox/%3C54C47E9B.5060401%40gmail.com%3E

-Jaikiran

>
> On Thursday 08 January 2015 11:19 AM, Yonghui Zhao wrote:
>> CentOS release 6.3 (Final)
>>
>>
>> 2015-01-07 22:18 GMT+08:00 Harsha <ka...@harsha.io>:
>>
>>> Yonghui,
>>>             Which OS you are running.
>>> -Harsha
>>>
>>> On Wed, Jan 7, 2015, at 01:38 AM, Yonghui Zhao wrote:
>>>> Yes  and I found the reason rename in deletion is failed.
>>>> In rename progress the files is deleted? and then exception blocks 
>>>> file
>>>> closed in kafka.
>>>> But I don't know how can rename failure happen,
>>>>
>>>> [2015-01-07 00:10:48,685] ERROR Uncaught exception in scheduled task
>>>> 'kafka-log-retention' (kafka.utils.KafkaScheduler)
>>>> kafka.common.KafkaStorageException: Failed to change the log file 
>>>> suffix
>>>> from  to .deleted for log segment 70781650
>>>>          at 
>>>> kafka.log.LogSegment.changeFileSuffixes(LogSegment.scala:249)
>>>>          at 
>>>> kafka.log.Log.kafka$log$Log$$asyncDeleteSegment(Log.scala:636)
>>>>          at kafka.log.Log.kafka$log$Log$$deleteSegment(Log.scala:627)
>>>>          at
>>>> kafka.log.Log$$anonfun$deleteOldSegments$1.apply(Log.scala:415)
>>>>          at
>>>> kafka.log.Log$$anonfun$deleteOldSegments$1.apply(Log.scala:415)
>>>>          at scala.collection.immutable.List.foreach(List.scala:318)
>>>>          at kafka.log.Log.deleteOldSegments(Log.scala:415)
>>>>          at
>>>>
>>> kafka.log.LogManager.kafka$log$LogManager$$cleanupExpiredSegments(LogManager.scala:325) 
>>>
>>>>          at
>>>> kafka.log.LogManager$$anonfun$cleanupLogs$3.apply(LogManager.scala:356) 
>>>>
>>>>          at
>>>> kafka.log.LogManager$$anonfun$cleanupLogs$3.apply(LogManager.scala:354) 
>>>>
>>>>          at
>>>>
>>> scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:772) 
>>>
>>>>          at 
>>>> scala.collection.Iterator$class.foreach(Iterator.scala:727)
>>>>          at 
>>>> scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
>>>>          at
>>>> scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
>>>>          at 
>>>> scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>>>>          at
>>>>
>>> scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771) 
>>>
>>>>          at kafka.log.LogManager.cleanupLogs(LogManager.scala:354)
>>>>          at
>>>>
>>> kafka.log.LogManager$$anonfun$startup$1.apply$mcV$sp(LogManager.scala:141) 
>>>
>>>>          at
>>>> kafka.utils.KafkaScheduler$$anon$1.run(KafkaScheduler.scala:100)
>>>>          at
>>>> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) 
>>>>
>>>>          at
>>>>
>>> java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:317) 
>>>
>>>>          at
>>>> java.util.concurrent.FutureTask.runAndReset(FutureTask.java:150)
>>>>          at
>>>>
>>> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:98) 
>>>
>>>>          at
>>>>
>>> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.runPeriodic(ScheduledThreadPoolExecutor.java:180) 
>>>
>>>>          at
>>>>
>>> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:204) 
>>>
>>>>          at
>>>>
>>> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) 
>>>
>>>>          at
>>>>
>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) 
>>>
>>>>          at java.lang.Thread.run(Thread.java:662)
>>>>
>>>>
>>>> 2015-01-07 13:56 GMT+08:00 Jun Rao <ju...@confluent.io>:
>>>>
>>>>> Do you mean that the Kafka broker still holds a file handler on a
>>> deleted
>>>>> file? Do you see those files being deleted in the Kafka log4j log?
>>>>>
>>>>> Thanks,
>>>>>
>>>>> Jun
>>>>>
>>>>> On Tue, Jan 6, 2015 at 4:46 AM, Yonghui Zhao <zh...@gmail.com>
>>>>> wrote:
>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> We use kafka_2.10-0.8.1.1 in our server. Today we found disk space
>>> alert.
>>>>>> We find many kafka data files are deleted, but still opened by 
>>>>>> kafka.
>>>>>>
>>>>>> such as:
>>>>>>
>>>>>> _yellowpageV2-0/00000000000068170670.log (deleted)
>>>>>> java       8446         root  724u      REG 253,2
>>> 536937911
>>>>>> 26087362
>>>>>>
>>>>>>
>>> /home/work/data/soft/kafka-0.8/data/_oakbay_v2_search_topic_ypgsearch_yellowpageV2-0/00000000000068818668.log 
>>>
>>>>>> (deleted)
>>>>>> java       8446         root  725u      REG 253,2
>>> 536910838
>>>>>> 26087364
>>>>>>
>>>>>>
>>> /home/work/data/soft/kafka-0.8/data/_oakbay_v2_search_topic_ypgsearch_yellowpageV2-0/00000000000069457098.log 
>>>
>>>>>> (deleted)
>>>>>> java       8446         root  726u      REG 253,2
>>> 536917902
>>>>>> 26087368
>>>>>>
>>>>>>
>>> /home/work/data/soft/kafka-0.8/data/_oakbay_v2_search_topic_ypgsearch_yellowpageV2-0/00000000000070104914.log 
>>>
>>>>>> (deleted)
>>>>>>
>>>>>>
>>>>>> Is there anything wrong or wrong configed?
>>>>>>
>


Re: kafka deleted old logs but not released

Posted by Jaikiran Pai <ja...@gmail.com>.
Apart from the fact that the file rename is failing (the API notes that
there is a chance of the rename failing), it looks like the
implementation of FileMessageSet's rename can cause a couple of issues,
one of them being a leak.

The implementation looks like this:
https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/log/FileMessageSet.scala#L268
Notice that the reference to the original file member variable is
swapped with a new one, but the (old) FileChannel held in that
FileMessageSet isn't closed. That, I think, explains the leak.
Furthermore, a new FileChannel for the new File instance isn't being
created either, and that's a separate issue.
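
To make that concrete, here is a minimal, hypothetical sketch of the
pattern (the class name is made up and this is not the actual
FileMessageSet code):

import java.io.{File, RandomAccessFile}
import java.nio.channels.FileChannel

class LeakyMessageSet(@volatile var file: File) {
  // Channel opened against the original file.
  val channel: FileChannel = new RandomAccessFile(file, "rw").getChannel

  def renameTo(f: File): Boolean = {
    val success = file.renameTo(f)
    // The File reference is swapped, but `channel` is never closed and
    // still refers to the old inode; if that file is later unlinked, the
    // process pins its disk space -- the "(deleted)" entries in lsof.
    file = f
    success
  }
}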

P.S.: I'm not very familiar with the Kafka code yet. The above
explanation is just based on a quick look at that piece of code and
doesn't take into account any other context there might be.

-Jaikiran




Re: kafka deleted old logs but not released

Posted by Yonghui Zhao <zh...@gmail.com>.
CentOS release 6.3 (Final)


2015-01-07 22:18 GMT+08:00 Harsha <ka...@harsha.io>:

> Yonghui,
>            Which OS are you running?
> -Harsha
>
> On Wed, Jan 7, 2015, at 01:38 AM, Yonghui Zhao wrote:
> > Yes  and I found the reason rename in deletion is failed.
> > In rename progress the files is deleted? and then exception blocks file
> > closed in kafka.
> > But I don't know how can rename failure happen,
> >
> > [2015-01-07 00:10:48,685] ERROR Uncaught exception in scheduled task
> > 'kafka-log-retention' (kafka.utils.KafkaScheduler)
> > kafka.common.KafkaStorageException: Failed to change the log file suffix
> > from  to .deleted for log segment 70781650
> >         at kafka.log.LogSegment.changeFileSuffixes(LogSegment.scala:249)
> >         at kafka.log.Log.kafka$log$Log$$asyncDeleteSegment(Log.scala:636)
> >         at kafka.log.Log.kafka$log$Log$$deleteSegment(Log.scala:627)
> >         at
> >         kafka.log.Log$$anonfun$deleteOldSegments$1.apply(Log.scala:415)
> >         at
> >         kafka.log.Log$$anonfun$deleteOldSegments$1.apply(Log.scala:415)
> >         at scala.collection.immutable.List.foreach(List.scala:318)
> >         at kafka.log.Log.deleteOldSegments(Log.scala:415)
> >         at
> >
> kafka.log.LogManager.kafka$log$LogManager$$cleanupExpiredSegments(LogManager.scala:325)
> >         at
> > kafka.log.LogManager$$anonfun$cleanupLogs$3.apply(LogManager.scala:356)
> >         at
> > kafka.log.LogManager$$anonfun$cleanupLogs$3.apply(LogManager.scala:354)
> >         at
> >
> scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:772)
> >         at scala.collection.Iterator$class.foreach(Iterator.scala:727)
> >         at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
> >         at
> > scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
> >         at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
> >         at
> >
> scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771)
> >         at kafka.log.LogManager.cleanupLogs(LogManager.scala:354)
> >         at
> >
> kafka.log.LogManager$$anonfun$startup$1.apply$mcV$sp(LogManager.scala:141)
> >         at
> >         kafka.utils.KafkaScheduler$$anon$1.run(KafkaScheduler.scala:100)
> >         at
> > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439)
> >         at
> >
> java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:317)
> >         at
> >         java.util.concurrent.FutureTask.runAndReset(FutureTask.java:150)
> >         at
> >
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:98)
> >         at
> >
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.runPeriodic(ScheduledThreadPoolExecutor.java:180)
> >         at
> >
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:204)
> >         at
> >
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> >         at
> >
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> >         at java.lang.Thread.run(Thread.java:662)
> >
> >
> > 2015-01-07 13:56 GMT+08:00 Jun Rao <ju...@confluent.io>:
> >
> > > Do you mean that the Kafka broker still holds a file handler on a
> deleted
> > > file? Do you see those files being deleted in the Kafka log4j log?
> > >
> > > Thanks,
> > >
> > > Jun
> > >
> > > On Tue, Jan 6, 2015 at 4:46 AM, Yonghui Zhao <zh...@gmail.com>
> > > wrote:
> > >
> > > > Hi,
> > > >
> > > > We use kafka_2.10-0.8.1.1 in our server. Today we found disk space
> alert.
> > > >
> > > > We find many kafka data files are deleted, but still opened by kafka.
> > > >
> > > > such as:
> > > >
> > > > _yellowpageV2-0/00000000000068170670.log (deleted)
> > > > java       8446         root  724u      REG              253,2 536937911 26087362 /home/work/data/soft/kafka-0.8/data/_oakbay_v2_search_topic_ypgsearch_yellowpageV2-0/00000000000068818668.log (deleted)
> > > > java       8446         root  725u      REG              253,2 536910838 26087364 /home/work/data/soft/kafka-0.8/data/_oakbay_v2_search_topic_ypgsearch_yellowpageV2-0/00000000000069457098.log (deleted)
> > > > java       8446         root  726u      REG              253,2 536917902 26087368 /home/work/data/soft/kafka-0.8/data/_oakbay_v2_search_topic_ypgsearch_yellowpageV2-0/00000000000070104914.log (deleted)
> > > >
> > > >
> > > > Is there anything wrong, or is something misconfigured?
> > > >
> > >
>

Re: kafka deleted old logs but not released

Posted by Harsha <ka...@harsha.io>.
Yonghui,
           Which OS are you running?
-Harsha

On Wed, Jan 7, 2015, at 01:38 AM, Yonghui Zhao wrote:
> Yes, and I found the reason: the rename during deletion failed.
> It seems the file gets deleted during the rename, and then the exception
> prevents Kafka from ever closing it.
> But I don't know how the rename failure can happen.
> 
> [2015-01-07 00:10:48,685] ERROR Uncaught exception in scheduled task 'kafka-log-retention' (kafka.utils.KafkaScheduler)
> kafka.common.KafkaStorageException: Failed to change the log file suffix from  to .deleted for log segment 70781650
>         at kafka.log.LogSegment.changeFileSuffixes(LogSegment.scala:249)
>         at kafka.log.Log.kafka$log$Log$$asyncDeleteSegment(Log.scala:636)
>         at kafka.log.Log.kafka$log$Log$$deleteSegment(Log.scala:627)
>         at kafka.log.Log$$anonfun$deleteOldSegments$1.apply(Log.scala:415)
>         at kafka.log.Log$$anonfun$deleteOldSegments$1.apply(Log.scala:415)
>         at scala.collection.immutable.List.foreach(List.scala:318)
>         at kafka.log.Log.deleteOldSegments(Log.scala:415)
>         at kafka.log.LogManager.kafka$log$LogManager$$cleanupExpiredSegments(LogManager.scala:325)
>         at kafka.log.LogManager$$anonfun$cleanupLogs$3.apply(LogManager.scala:356)
>         at kafka.log.LogManager$$anonfun$cleanupLogs$3.apply(LogManager.scala:354)
>         at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:772)
>         at scala.collection.Iterator$class.foreach(Iterator.scala:727)
>         at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
>         at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
>         at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>         at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771)
>         at kafka.log.LogManager.cleanupLogs(LogManager.scala:354)
>         at kafka.log.LogManager$$anonfun$startup$1.apply$mcV$sp(LogManager.scala:141)
>         at kafka.utils.KafkaScheduler$$anon$1.run(KafkaScheduler.scala:100)
>         at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439)
>         at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:317)
>         at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:150)
>         at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:98)
>         at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.runPeriodic(ScheduledThreadPoolExecutor.java:180)
>         at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:204)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>         at java.lang.Thread.run(Thread.java:662)
> 
> 
> 2015-01-07 13:56 GMT+08:00 Jun Rao <ju...@confluent.io>:
> 
> > Do you mean that the Kafka broker still holds a file handle on a deleted
> > file? Do you see those files being deleted in the Kafka log4j log?
> >
> > Thanks,
> >
> > Jun
> >
> > On Tue, Jan 6, 2015 at 4:46 AM, Yonghui Zhao <zh...@gmail.com>
> > wrote:
> >
> > > Hi,
> > >
> > > We use kafka_2.10-0.8.1.1 on our server. Today we got a disk space alert.
> > >
> > > We found that many Kafka data files had been deleted but are still held
> > > open by Kafka.
> > >
> > > For example:
> > >
> > > _yellowpageV2-0/00000000000068170670.log (deleted)
> > > java       8446         root  724u      REG              253,2 536937911 26087362 /home/work/data/soft/kafka-0.8/data/_oakbay_v2_search_topic_ypgsearch_yellowpageV2-0/00000000000068818668.log (deleted)
> > > java       8446         root  725u      REG              253,2 536910838 26087364 /home/work/data/soft/kafka-0.8/data/_oakbay_v2_search_topic_ypgsearch_yellowpageV2-0/00000000000069457098.log (deleted)
> > > java       8446         root  726u      REG              253,2 536917902 26087368 /home/work/data/soft/kafka-0.8/data/_oakbay_v2_search_topic_ypgsearch_yellowpageV2-0/00000000000070104914.log (deleted)
> > >
> > >
> > > Is there anything wrong, or is something misconfigured?
> > >
> >

Re: kafka deleted old logs but not released

Posted by Yonghui Zhao <zh...@gmail.com>.
Yes, and I found the reason: the rename during deletion failed.
It seems the file gets deleted during the rename, and then the exception
prevents Kafka from ever closing it (a sketch of this pattern follows the
stack trace below). But I don't know how the rename failure can happen.

[2015-01-07 00:10:48,685] ERROR Uncaught exception in scheduled task 'kafka-log-retention' (kafka.utils.KafkaScheduler)
kafka.common.KafkaStorageException: Failed to change the log file suffix from  to .deleted for log segment 70781650
        at kafka.log.LogSegment.changeFileSuffixes(LogSegment.scala:249)
        at kafka.log.Log.kafka$log$Log$$asyncDeleteSegment(Log.scala:636)
        at kafka.log.Log.kafka$log$Log$$deleteSegment(Log.scala:627)
        at kafka.log.Log$$anonfun$deleteOldSegments$1.apply(Log.scala:415)
        at kafka.log.Log$$anonfun$deleteOldSegments$1.apply(Log.scala:415)
        at scala.collection.immutable.List.foreach(List.scala:318)
        at kafka.log.Log.deleteOldSegments(Log.scala:415)
        at kafka.log.LogManager.kafka$log$LogManager$$cleanupExpiredSegments(LogManager.scala:325)
        at kafka.log.LogManager$$anonfun$cleanupLogs$3.apply(LogManager.scala:356)
        at kafka.log.LogManager$$anonfun$cleanupLogs$3.apply(LogManager.scala:354)
        at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:772)
        at scala.collection.Iterator$class.foreach(Iterator.scala:727)
        at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
        at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
        at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
        at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771)
        at kafka.log.LogManager.cleanupLogs(LogManager.scala:354)
        at kafka.log.LogManager$$anonfun$startup$1.apply$mcV$sp(LogManager.scala:141)
        at kafka.utils.KafkaScheduler$$anon$1.run(KafkaScheduler.scala:100)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439)
        at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:317)
        at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:150)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:98)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.runPeriodic(ScheduledThreadPoolExecutor.java:180)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:204)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        at java.lang.Thread.run(Thread.java:662)
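
Here is a minimal sketch of the pattern I suspect, written by hand for this
email; it is not the real Kafka source, and the names (SegmentSketch,
changeSuffix) are made up for illustration. The point is that the channel is
only ever closed after a successful rename, so a failed rename throws first
and the descriptor is leaked:

import java.io.{File, RandomAccessFile}

// Hypothetical stand-in for a log segment: one data file plus the
// channel the broker keeps open for reads and writes.
class SegmentSketch(@volatile var file: File) {
  val channel = new RandomAccessFile(file, "rw").getChannel

  // Rename the backing file, e.g. to append a ".deleted" suffix.
  def changeSuffix(suffix: String): Unit = {
    val renamed = new File(file.getPath + suffix)
    // File.renameTo just returns false on failure; turning that into an
    // exception makes the caller unwind (as in the stack trace above)
    // before anything calls close(), so the descriptor stays open.
    if (!file.renameTo(renamed))
      throw new RuntimeException(s"Failed to change the log file suffix to $suffix")
    file = renamed
  }

  def close(): Unit = channel.close() // never reached once changeSuffix throws
}

If the file then gets unlinked while that channel is still open, lsof reports
it as "(deleted)", but the kernel cannot reclaim the blocks until the process
closes the descriptor or exits, which would explain the disk space alert.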


2015-01-07 13:56 GMT+08:00 Jun Rao <ju...@confluent.io>:

> Do you mean that the Kafka broker still holds a file handle on a deleted
> file? Do you see those files being deleted in the Kafka log4j log?
>
> Thanks,
>
> Jun
>
> On Tue, Jan 6, 2015 at 4:46 AM, Yonghui Zhao <zh...@gmail.com>
> wrote:
>
> > Hi,
> >
> > We use kafka_2.10-0.8.1.1 on our server. Today we got a disk space alert.
> >
> > We found that many Kafka data files had been deleted but are still held
> > open by Kafka.
> >
> > For example:
> >
> > _yellowpageV2-0/00000000000068170670.log (deleted)
> > java       8446         root  724u      REG              253,2 536937911 26087362 /home/work/data/soft/kafka-0.8/data/_oakbay_v2_search_topic_ypgsearch_yellowpageV2-0/00000000000068818668.log (deleted)
> > java       8446         root  725u      REG              253,2 536910838 26087364 /home/work/data/soft/kafka-0.8/data/_oakbay_v2_search_topic_ypgsearch_yellowpageV2-0/00000000000069457098.log (deleted)
> > java       8446         root  726u      REG              253,2 536917902 26087368 /home/work/data/soft/kafka-0.8/data/_oakbay_v2_search_topic_ypgsearch_yellowpageV2-0/00000000000070104914.log (deleted)
> >
> >
> > Is there anything wrong, or is something misconfigured?
> >
>

Re: kafka deleted old logs but not released

Posted by Jun Rao <ju...@confluent.io>.
Do you mean that the Kafka broker still holds a file handle on a deleted
file? Do you see those files being deleted in the Kafka log4j log?

Thanks,

Jun

On Tue, Jan 6, 2015 at 4:46 AM, Yonghui Zhao <zh...@gmail.com> wrote:

> Hi,
>
> We use kafka_2.10-0.8.1.1 on our server. Today we got a disk space alert.
>
> We found that many Kafka data files had been deleted but are still held
> open by Kafka.
>
> For example:
>
> _yellowpageV2-0/00000000000068170670.log (deleted)
> java       8446         root  724u      REG              253,2 536937911 26087362 /home/work/data/soft/kafka-0.8/data/_oakbay_v2_search_topic_ypgsearch_yellowpageV2-0/00000000000068818668.log (deleted)
> java       8446         root  725u      REG              253,2 536910838 26087364 /home/work/data/soft/kafka-0.8/data/_oakbay_v2_search_topic_ypgsearch_yellowpageV2-0/00000000000069457098.log (deleted)
> java       8446         root  726u      REG              253,2 536917902 26087368 /home/work/data/soft/kafka-0.8/data/_oakbay_v2_search_topic_ypgsearch_yellowpageV2-0/00000000000070104914.log (deleted)
>
>
> Is there anything wrong, or is something misconfigured?
>