Posted to dev@ambari.apache.org by "Andrew Onischuk (JIRA)" <ji...@apache.org> on 2015/10/02 11:44:26 UTC

[jira] [Updated] (AMBARI-13290) Set kafka userid ulimit open files to 32k for kafka broker through Ambari

     [ https://issues.apache.org/jira/browse/AMBARI-13290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Andrew Onischuk updated AMBARI-13290:
-------------------------------------
    Description: 
PROBLEM: Currently Ambari does not set any ulimit for the kafka user during
install.

    
    
    [2015-09-22 07:01:01,380] FATAL [Replica Manager on Broker 2]: Error writing to highwatermark file: (kafka.server.ReplicaManager) 
    java.io.FileNotFoundException: /mnt/data2/kafka-logs/replication-offset-checkpoint.tmp (Too many open files) 
    at java.io.FileOutputStream.open(Native Method) 
    at java.io.FileOutputStream.<init>(FileOutputStream.java:221) 
    at java.io.FileOutputStream.<init>(FileOutputStream.java:171) 
    .
    .
    [2015-09-22 07:01:01,380] ERROR Error in acceptor (kafka.network.Acceptor) 
    java.io.IOException: Too many open files 
    at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) 
    at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:241) 
    at kafka.network.Acceptor.accept(SocketServer.scala:200) 
    at kafka.network.Acceptor.run(SocketServer.scala:154) 
    at java.lang.Thread.run(Thread.java:745) 
    

The open files ulimit for the kafka userid was 1024. Raising it to 32000
resolved the error.
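
A minimal sketch of the kind of change this implies, assuming the fix is
delivered as a PAM limits drop-in written during broker setup (the path
/etc/security/limits.d/kafka.conf, the helper name and the idea of a
standalone script are illustrative assumptions, not the actual Ambari
implementation):

    #!/usr/bin/env python
    # Sketch: write a PAM limits drop-in raising the open-files limit for
    # the kafka user. The 32000 value mirrors the description above; the
    # file path and running this as root during install are assumptions.
    import os

    KAFKA_USER = "kafka"        # user the broker runs as
    NOFILE_LIMIT = 32000        # value that resolved the error above
    LIMITS_FILE = "/etc/security/limits.d/kafka.conf"  # hypothetical drop-in

    def write_kafka_limits():
        # '-' applies the value to both the soft and the hard limit
        line = "{0} - nofile {1}\n".format(KAFKA_USER, NOFILE_LIMIT)
        with open(LIMITS_FILE, "w") as f:
            f.write(line)
        os.chmod(LIMITS_FILE, 0o644)  # world-readable, writable only by root

    if __name__ == "__main__":
        write_kafka_limits()

Note that the broker has to be restarted through a path that honors PAM
limits (a login shell or an init/service script that applies them) before
the new value takes effect.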



  was:
PROBLEM: Currently Ambari does not set any ulimit for the kafka user during
install. This particular customer hit issues on the brokers:

    
    
    [2015-09-22 07:01:01,380] FATAL [Replica Manager on Broker 2]: Error writing to highwatermark file: (kafka.server.ReplicaManager) 
    java.io.FileNotFoundException: /mnt/data2/kafka-logs/replication-offset-checkpoint.tmp (Too many open files) 
    at java.io.FileOutputStream.open(Native Method) 
    at java.io.FileOutputStream.<init>(FileOutputStream.java:221) 
    at java.io.FileOutputStream.<init>(FileOutputStream.java:171) 
    .
    .
    [2015-09-22 07:01:01,380] ERROR Error in acceptor (kafka.network.Acceptor) 
    java.io.IOException: Too many open files 
    at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) 
    at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:241) 
    at kafka.network.Acceptor.accept(SocketServer.scala:200) 
    at kafka.network.Acceptor.run(SocketServer.scala:154) 
    at java.lang.Thread.run(Thread.java:745) 
    

The open files ulimit for the kafka userid was 1024. Raising it to 32000
resolved the error.




> Set kafka userid ulimit open files to 32k for kafka broker through Ambari
> -------------------------------------------------------------------------
>
>                 Key: AMBARI-13290
>                 URL: https://issues.apache.org/jira/browse/AMBARI-13290
>             Project: Ambari
>          Issue Type: Bug
>            Reporter: Andrew Onischuk
>            Assignee: Andrew Onischuk
>             Fix For: 2.1.3
>
>
> PROBLEM: Currently Ambari does not set any ulimit for the kafka user during
> install.
>     
>     
>     [2015-09-22 07:01:01,380] FATAL [Replica Manager on Broker 2]: Error writing to highwatermark file: (kafka.server.ReplicaManager) 
>     java.io.FileNotFoundException: /mnt/data2/kafka-logs/replication-offset-checkpoint.tmp (Too many open files) 
>     at java.io.FileOutputStream.open(Native Method) 
>     at java.io.FileOutputStream.<init>(FileOutputStream.java:221) 
>     at java.io.FileOutputStream.<init>(FileOutputStream.java:171) 
>     .
>     .
>     [2015-09-22 07:01:01,380] ERROR Error in acceptor (kafka.network.Acceptor) 
>     java.io.IOException: Too many open files 
>     at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) 
>     at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:241) 
>     at kafka.network.Acceptor.accept(SocketServer.scala:200) 
>     at kafka.network.Acceptor.run(SocketServer.scala:154) 
>     at java.lang.Thread.run(Thread.java:745) 
>     
> The open files ulimit for the kafka userid was 1024. Raising it to 32000
> resolved the error.
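
To confirm the raised limit is actually in effect for a running broker, the
soft and hard open-files limits can be read back from /proc. A small sketch
(the broker pid is passed explicitly and the script name in the usage
comment is only illustrative):

    # Sketch: print the soft and hard "Max open files" limits of a process.
    # Usage: python check_nofile.py <broker-pid>
    import sys

    def max_open_files(pid):
        with open("/proc/%s/limits" % pid) as f:
            for line in f:
                if line.startswith("Max open files"):
                    fields = line.split()
                    return fields[3], fields[4]   # (soft, hard)
        return None

    if __name__ == "__main__":
        print(max_open_files(sys.argv[1]))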



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)