Posted to jira@kafka.apache.org by "Ismael Juma (JIRA)" <ji...@apache.org> on 2018/11/16 15:39:00 UTC

[jira] [Commented] (KAFKA-7637) Error while writing to checkpoint file due to too many open files

    [ https://issues.apache.org/jira/browse/KAFKA-7637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16689570#comment-16689570 ] 

Ismael Juma commented on KAFKA-7637:
------------------------------------

Thanks for the report. 65k is generally a low open-file limit for Kafka. It would be good to verify whether there is a leak of some sort here, although we have clusters that have been running for a long time and we haven't seen errors like this one.
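
One way to check is to sample the broker's open file descriptor count over time and see whether it keeps climbing under steady load. A rough sketch, assuming a Linux host and a known broker PID (the script name and one-minute interval are illustrative, not part of Kafka):

{noformat}
# fd_sampler.py -- hypothetical helper: log a process's fd count once a minute
import os
import sys
import time

pid = sys.argv[1]
while True:
    # /proc/<pid>/fd holds one symlink per open descriptor
    count = len(os.listdir(f"/proc/{pid}/fd"))
    print(f"{time.strftime('%Y-%m-%d %H:%M:%S')} pid={pid} open_fds={count}", flush=True)
    time.sleep(60)
{noformat}

A count that rises steadily under constant load points to a leak; a flat count sitting near the limit suggests the limit itself is simply too low for the workload.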

> Error while writing to checkpoint file due to too many open files
> -----------------------------------------------------------------
>
>                 Key: KAFKA-7637
>                 URL: https://issues.apache.org/jira/browse/KAFKA-7637
>             Project: Kafka
>          Issue Type: Bug
>    Affects Versions: 1.1.1
>         Environment: Red Hat Enterprise Linux Server release 7.4 (Maipo)
>            Reporter: Sander van Loo
>            Priority: Major
>
> We are running a 3-node Kafka cluster on version 1.1.1 on Red Hat Linux 7.
> Max open files is set to 65000.
> After running for a few days, the nodes have the following open file counts (a snapshot sketch follows the list):
>  * node01d: 2712
>  * node01e: 2770
>  * node01f: 4102
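> As a quick way to compare such counts against the configured limit on each node, something like the following can be run against a broker PID (a hypothetical helper, assuming Linux):
> {noformat}
> # fd_snapshot.py -- hypothetical helper: open descriptors vs. the soft limit
> import os
> import sys
>
> pid = sys.argv[1]
> used = len(os.listdir(f"/proc/{pid}/fd"))  # one symlink per open descriptor
> with open(f"/proc/{pid}/limits") as f:
>     line = next(l for l in f if l.startswith("Max open files"))
> soft_limit = int(line.split()[3])  # columns: Max open files <soft> <hard> files
> print(f"pid={pid} open={used} soft_limit={soft_limit}")
> {noformat}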
> After a few weeks of runtime, the cluster crashes with the following error:
>  
> {noformat}
> [2018-11-12 07:05:16,790] ERROR Error while writing to checkpoint file /var/lib/kafka/topics/replication-offset-checkpoint (kafka.server.LogDirFailureChannel)
> java.io.FileNotFoundException: /var/lib/kafka/topics/replication-offset-checkpoint.tmp (Too many open files)
>         at java.io.FileOutputStream.open0(Native Method)
>         at java.io.FileOutputStream.open(FileOutputStream.java:270)
>         at java.io.FileOutputStream.<init>(FileOutputStream.java:213)
>         at java.io.FileOutputStream.<init>(FileOutputStream.java:162)
>         at kafka.server.checkpoints.CheckpointFile.liftedTree1$1(CheckpointFile.scala:52)
>         at kafka.server.checkpoints.CheckpointFile.write(CheckpointFile.scala:50)
>         at kafka.server.checkpoints.OffsetCheckpointFile.write(OffsetCheckpointFile.scala:59)
>         at kafka.server.ReplicaManager.$anonfun$checkpointHighWatermarks$9(ReplicaManager.scala:1384)
>         at kafka.server.ReplicaManager.$anonfun$checkpointHighWatermarks$9$adapted(ReplicaManager.scala:1384)
>         at scala.Option.foreach(Option.scala:257)
>         at kafka.server.ReplicaManager.$anonfun$checkpointHighWatermarks$7(ReplicaManager.scala:1384)
>         at kafka.server.ReplicaManager.$anonfun$checkpointHighWatermarks$7$adapted(ReplicaManager.scala:1381)
>         at scala.collection.TraversableLike$WithFilter.$anonfun$foreach$1(TraversableLike.scala:789)
>         at scala.collection.immutable.Map$Map1.foreach(Map.scala:120)
>         at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:788)
>         at kafka.server.ReplicaManager.checkpointHighWatermarks(ReplicaManager.scala:1381)
>         at kafka.server.ReplicaManager.$anonfun$startHighWaterMarksCheckPointThread$1(ReplicaManager.scala:242)
>         at kafka.utils.KafkaScheduler.$anonfun$schedule$2(KafkaScheduler.scala:114)
>         at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:62)
>         at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>         at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
>         at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>         at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>         at java.lang.Thread.run(Thread.java:745)
> {noformat}
> followed by this one:
> {noformat}
> [2018-11-12 07:05:16,792] ERROR [ReplicaManager broker=3] Error while writing to highwatermark file in directory /var/lib/kafka/topics (kafka.server.ReplicaManager)
> org.apache.kafka.common.errors.KafkaStorageException: Error while writing to checkpoint file /var/lib/kafka/topics/replication-offset-checkpoint
> Caused by: java.io.FileNotFoundException: /var/lib/kafka/topics/replication-offset-checkpoint.tmp (Too many open files)
>         at java.io.FileOutputStream.open0(Native Method)
>         at java.io.FileOutputStream.open(FileOutputStream.java:270)
>         at java.io.FileOutputStream.<init>(FileOutputStream.java:213)
>         at java.io.FileOutputStream.<init>(FileOutputStream.java:162)
>         at kafka.server.checkpoints.CheckpointFile.liftedTree1$1(CheckpointFile.scala:52)
>         at kafka.server.checkpoints.CheckpointFile.write(CheckpointFile.scala:50)
>         at kafka.server.checkpoints.OffsetCheckpointFile.write(OffsetCheckpointFile.scala:59)
>         at kafka.server.ReplicaManager.$anonfun$checkpointHighWatermarks$9(ReplicaManager.scala:1384)
>         at kafka.server.ReplicaManager.$anonfun$checkpointHighWatermarks$9$adapted(ReplicaManager.scala:1384)
>         at scala.Option.foreach(Option.scala:257)
>         at kafka.server.ReplicaManager.$anonfun$checkpointHighWatermarks$7(ReplicaManager.scala:1384)
>         at kafka.server.ReplicaManager.$anonfun$checkpointHighWatermarks$7$adapted(ReplicaManager.scala:1381)
>         at scala.collection.TraversableLike$WithFilter.$anonfun$foreach$1(TraversableLike.scala:789)
>         at scala.collection.immutable.Map$Map1.foreach(Map.scala:120)
>         at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:788)
>         at kafka.server.ReplicaManager.checkpointHighWatermarks(ReplicaManager.scala:1381)
>         at kafka.server.ReplicaManager.$anonfun$startHighWaterMarksCheckPointThread$1(ReplicaManager.scala:242)
>         at kafka.utils.KafkaScheduler.$anonfun$schedule$2(KafkaScheduler.scala:114)
>         at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:62)
>         at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>         at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
>         at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>         at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>         at java.lang.Thread.run(Thread.java:745)
> {noformat}
> and the cluster never recovers from this.
>  
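> For context on where the open fails: the stack traces above show the broker writing its high-watermark checkpoint by creating a temporary file and renaming it over the live one, so once the process's descriptor table is full, this small open() is what surfaces the error. A minimal sketch of that write pattern (illustrative only, not Kafka's actual code):
> {noformat}
> # checkpoint_write.py -- illustrative write-temp-then-rename pattern
> import os
>
> def write_checkpoint(path, offsets):
>     tmp = path + ".tmp"
>     with open(tmp, "w") as f:  # fails with EMFILE when fds are exhausted
>         for topic_partition, offset in offsets.items():
>             f.write(f"{topic_partition} {offset}\n")
>         f.flush()
>         os.fsync(f.fileno())  # make the temp file durable before the swap
>     os.replace(tmp, path)  # atomic rename over the live checkpoint file
> {noformat}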
> The number of open files appears to creep up slowly as time progresses, and during normal operations we see many errors like the ones below:
> {noformat}
> [2018-11-16 00:46:35,082] ERROR [ReplicaFetcher replicaId=3, leaderId=2, fetcherId=0] Error for partition check_node01d.local-0 at offset 1 (kafka.server.ReplicaFetcherThread)
> [2018-11-16 00:49:34,947] ERROR [ReplicaFetcher replicaId=3, leaderId=2, fetcherId=0] Error for partition check_node01d.local-0 at offset 1 (kafka.server.ReplicaFetcherThread)
> [2018-11-16 01:11:01,754] ERROR [ReplicaFetcher replicaId=3, leaderId=1, fetcherId=0] Error for partition check_node01f.local-0 at offset 1 (kafka.server.ReplicaFetcherThread)
> [2018-11-16 01:28:34,982] ERROR [ReplicaFetcher replicaId=3, leaderId=2, fetcherId=0] Error for partition check_node01d.local-0 at offset 1 (kafka.server.ReplicaFetcherThread)
> [2018-11-16 01:38:48,423] ERROR [ReplicaFetcher replicaId=3, leaderId=1, fetcherId=0] Error for partition check_node01e.local-0 at offset 1 (kafka.server.ReplicaFetcherThread)
> [2018-11-16 01:40:35,052] ERROR [ReplicaFetcher replicaId=3, leaderId=1, fetcherId=0] Error for partition check_node01d.local-0 at offset 1 (kafka.server.ReplicaFetcherThread)
> [2018-11-16 02:11:01,734] ERROR [ReplicaFetcher replicaId=3, leaderId=1, fetcherId=0] Error for partition check_node01f.local-0 at offset 1 (kafka.server.ReplicaFetcherThread)
> [2018-11-16 04:01:35,036] ERROR [ReplicaFetcher replicaId=3, leaderId=2, fetcherId=0] Error for partition check_node01d.local-0 at offset 1 (kafka.server.ReplicaFetcherThread)
> [2018-11-16 04:19:35,013] ERROR [ReplicaFetcher replicaId=3, leaderId=2, fetcherId=0] Error for partition check_node01d.local-0 at offset 1 (kafka.server.ReplicaFetcherThread)
> [2018-11-16 04:59:48,750] ERROR [ReplicaFetcher replicaId=3, leaderId=1, fetcherId=0] Error for partition check_node01e.local-0 at offset 1 (kafka.server.ReplicaFetcherThread)
> [2018-11-16 05:08:01,681] ERROR [ReplicaFetcher replicaId=3, leaderId=2, fetcherId=0] Error for partition check_node01f.local-0 at offset 1 (kafka.server.ReplicaFetcherThread)
> [2018-11-16 05:56:01,536] ERROR [ReplicaFetcher replicaId=3, leaderId=2, fetcherId=0] Error for partition check_node01f.local-0 at offset 1 (kafka.server.ReplicaFetcherThread)
> [2018-11-16 06:17:48,516] ERROR [ReplicaFetcher replicaId=3, leaderId=2, fetcherId=0] Error for partition check_node01e.local-0 at offset 1 (kafka.server.ReplicaFetcherThread)
> [2018-11-16 06:20:01,709] ERROR [ReplicaFetcher replicaId=3, leaderId=2, fetcherId=0] Error for partition check_node01f.local-0 at offset 1 (kafka.server.ReplicaFetcherThread)
> {noformat}
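> To narrow down what is accumulating (e.g. fetcher sockets from the errors above versus log-segment files), the symlink targets under /proc/<pid>/fd can be tallied (again a hypothetical helper, assuming Linux):
> {noformat}
> # fd_types.py -- hypothetical helper: tally descriptor types for a PID
> import os
> import sys
> from collections import Counter
>
> pid = sys.argv[1]
> kinds = Counter()
> for fd in os.listdir(f"/proc/{pid}/fd"):
>     try:
>         target = os.readlink(f"/proc/{pid}/fd/{fd}")
>     except OSError:
>         continue  # descriptor closed between listdir and readlink
>     # sockets/pipes look like "socket:[123]"; regular files are paths
>     kinds["file" if target.startswith("/") else target.split(":")[0]] += 1
> print(dict(kinds))  # e.g. {'file': 2500, 'socket': 180, ...}
> {noformat}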
>  


