Posted to jira@kafka.apache.org by "Dong Lin (JIRA)" <ji...@apache.org> on 2018/07/12 22:32:00 UTC

[jira] [Commented] (KAFKA-6488) Prevent log corruption in case of OOM

    [ https://issues.apache.org/jira/browse/KAFKA-6488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16542271#comment-16542271 ] 

Dong Lin commented on KAFKA-6488:
---------------------------------

One reasonable solution is to add the JVM flag -XX:+ExitOnOutOfMemoryError so that the broker exits immediately on OOM. Stopping the broker immediately is reasonable because the broker is generally in an undefined state once it hits an OOM. It is also recommended to add the JVM flag -XX:+HeapDumpOnOutOfMemoryError so that the system administrator can investigate the root cause of the OOM.
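As a minimal sketch (not part of the original comment), these flags could be passed to the broker JVM via the KAFKA_OPTS environment variable, which the standard kafka-run-class.sh startup script appends to the java command line. The heap dump path below is a hypothetical example; adjust it for your deployment.

```shell
# Hypothetical example: configure the broker JVM to exit on OOM and dump the heap.
#   -XX:+ExitOnOutOfMemoryError     : JVM exits on the first OutOfMemoryError (JDK 8u92+)
#   -XX:+HeapDumpOnOutOfMemoryError : write an .hprof heap dump for post-mortem analysis
#   -XX:HeapDumpPath                : directory/file for the heap dump (example path)
export KAFKA_OPTS="-XX:+ExitOnOutOfMemoryError -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/log/kafka/heapdump.hprof"
echo "$KAFKA_OPTS"

# Then start the broker as usual; kafka-run-class.sh picks up KAFKA_OPTS:
#   bin/kafka-server-start.sh config/server.properties
```

A process supervisor (systemd, a container runtime, etc.) can then restart the broker cleanly after the exit, rather than leaving it running in an undefined state.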

> Prevent log corruption in case of OOM
> -------------------------------------
>
>                 Key: KAFKA-6488
>                 URL: https://issues.apache.org/jira/browse/KAFKA-6488
>             Project: Kafka
>          Issue Type: Bug
>            Reporter: Dong Lin
>            Assignee: Dong Lin
>            Priority: Major
>
> Currently we append the message to the log before updating the LEO (log end offset). However, if an OOM occurs between these two steps, the KafkaRequestHandler thread can append a message to the log without updating the LEO, and the next message may then be appended with the same offset as the first. This can prevent the broker from starting, because two messages in the log have the same offset.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)