Posted to dev@ambari.apache.org by Andrew Onischuk <ao...@hortonworks.com> on 2015/10/01 18:39:21 UTC
Review Request 38931: Set kafka userid ulimit open files to 32k for kafka broker through Ambari
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/38931/
-----------------------------------------------------------
Review request for Ambari and Vitalyi Brodetskyi.
Bugs: AMBARI-13290
https://issues.apache.org/jira/browse/AMBARI-13290
Repository: ambari
Description
-------
PROBLEM: Currently Ambari does not set any ulimit for the kafka user during
install. A customer hit this issue on their brokers:
[2015-09-22 07:01:01,380] FATAL [Replica Manager on Broker 2]: Error writing to highwatermark file: (kafka.server.ReplicaManager)
java.io.FileNotFoundException: /mnt/data2/kafka-logs/replication-offset-checkpoint.tmp (Too many open files)
at java.io.FileOutputStream.open(Native Method)
at java.io.FileOutputStream.<init>(FileOutputStream.java:221)
at java.io.FileOutputStream.<init>(FileOutputStream.java:171)
.
.
[2015-09-22 07:01:01,380] ERROR Error in acceptor (kafka.network.Acceptor)
java.io.IOException: Too many open files
at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:241)
at kafka.network.Acceptor.accept(SocketServer.scala:200)
at kafka.network.Acceptor.run(SocketServer.scala:154)
at java.lang.Thread.run(Thread.java:745)
The open files limit (ulimit -n) for the kafka user was set to 1024. Increasing
it to 32000 resolved the error.
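As a quick diagnostic, the limit a process actually runs with can be read from Python's standard resource module. This is a sketch for verifying the fix on a broker host, not part of the patch; the 32000 threshold mirrors the value chosen above:

```python
# Read the open-files limit ("ulimit -n") for the current process, the
# same limit that capped the Kafka broker at 1024 in the logs above.
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print("soft limit:", soft, "hard limit:", hard)

# A broker holds one descriptor per log segment file and per client
# connection, so a 1024 soft limit is exhausted quickly; 32000 leaves
# headroom.
NEEDED = 32000
if soft != resource.RLIM_INFINITY and soft < NEEDED:
    print("open-files limit below", NEEDED,
          "- raise it, e.g. via a file in /etc/security/limits.d/")
```

Run as the kafka user to see the limit the broker would inherit.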
Diffs
-----
ambari-server/src/main/resources/common-services/KAFKA/0.8.1.2.2/configuration/kafka-env.xml ecc0782
ambari-server/src/main/resources/common-services/KAFKA/0.8.1.2.2/package/scripts/kafka.py 11492a7
ambari-server/src/main/resources/common-services/KAFKA/0.8.1.2.2/package/scripts/params.py 26ea3e2
ambari-server/src/main/resources/common-services/KAFKA/0.8.1.2.2/package/templates/kafka.conf.j2 PRE-CREATION
Diff: https://reviews.apache.org/r/38931/diff/
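The PRE-CREATION kafka.conf.j2 in the Diffs list suggests the limit is applied through a pam_limits drop-in rendered into /etc/security/limits.d/. A minimal sketch of what such a template could contain, assuming the standard limits.conf format; the variable names here are illustrative guesses, not taken from the actual patch:

```
# Hypothetical sketch of kafka.conf.j2 (variable names illustrative).
# Rendered by Ambari into /etc/security/limits.d/kafka.conf.
{{kafka_user}}    -    nofile    {{kafka_user_nofile_limit}}
```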
Testing
-------
mvn clean test
Thanks,
Andrew Onischuk
Re: Review Request 38931: Set kafka userid ulimit open files to 32k for kafka broker through Ambari
Posted by Vitalyi Brodetskyi <vb...@hortonworks.com>.
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/38931/#review101311
-----------------------------------------------------------
Ship it!
Ship It!
- Vitalyi Brodetskyi
On Oct. 2, 2015, 9:43 a.m., Andrew Onischuk wrote:
>
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/38931/
> -----------------------------------------------------------
>
> (Updated Oct. 2, 2015, 9:43 a.m.)
>
>
> Review request for Ambari and Vitalyi Brodetskyi.
>
>
> Bugs: AMBARI-13290
> https://issues.apache.org/jira/browse/AMBARI-13290
>
>
> Repository: ambari
>
>
> Description
> -------
>
> Currently Ambari does not set any ulimit for the kafka user during
> install.
>
>
>
> [2015-09-22 07:01:01,380] FATAL [Replica Manager on Broker 2]: Error writing to highwatermark file: (kafka.server.ReplicaManager)
> java.io.FileNotFoundException: /mnt/data2/kafka-logs/replication-offset-checkpoint.tmp (Too many open files)
> at java.io.FileOutputStream.open(Native Method)
> at java.io.FileOutputStream.<init>(FileOutputStream.java:221)
> at java.io.FileOutputStream.<init>(FileOutputStream.java:171)
> .
> .
> [2015-09-22 07:01:01,380] ERROR Error in acceptor (kafka.network.Acceptor)
> java.io.IOException: Too many open files
> at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
> at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:241)
> at kafka.network.Acceptor.accept(SocketServer.scala:200)
> at kafka.network.Acceptor.run(SocketServer.scala:154)
> at java.lang.Thread.run(Thread.java:745)
>
>
> The open files limit (ulimit -n) for the kafka user was set to 1024. Increasing
> it to 32000 resolved the error.
>
>
> Diffs
> -----
>
> ambari-server/src/main/resources/common-services/KAFKA/0.8.1.2.2/configuration/kafka-env.xml ecc0782
> ambari-server/src/main/resources/common-services/KAFKA/0.8.1.2.2/package/scripts/kafka.py 11492a7
> ambari-server/src/main/resources/common-services/KAFKA/0.8.1.2.2/package/scripts/params.py 26ea3e2
> ambari-server/src/main/resources/common-services/KAFKA/0.8.1.2.2/package/templates/kafka.conf.j2 PRE-CREATION
>
> Diff: https://reviews.apache.org/r/38931/diff/
>
>
> Testing
> -------
>
> mvn clean test
>
>
> Thanks,
>
> Andrew Onischuk
>
>
Re: Review Request 38931: Set kafka userid ulimit open files to 32k for kafka broker through Ambari
Posted by Andrew Onischuk <ao...@hortonworks.com>.
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/38931/
-----------------------------------------------------------
(Updated Oct. 2, 2015, 9:43 a.m.)
Review request for Ambari and Vitalyi Brodetskyi.
Bugs: AMBARI-13290
https://issues.apache.org/jira/browse/AMBARI-13290
Repository: ambari
Description (updated)
-------
Currently Ambari does not set any ulimit for the kafka user during
install.
[2015-09-22 07:01:01,380] FATAL [Replica Manager on Broker 2]: Error writing to highwatermark file: (kafka.server.ReplicaManager)
java.io.FileNotFoundException: /mnt/data2/kafka-logs/replication-offset-checkpoint.tmp (Too many open files)
at java.io.FileOutputStream.open(Native Method)
at java.io.FileOutputStream.<init>(FileOutputStream.java:221)
at java.io.FileOutputStream.<init>(FileOutputStream.java:171)
.
.
[2015-09-22 07:01:01,380] ERROR Error in acceptor (kafka.network.Acceptor)
java.io.IOException: Too many open files
at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:241)
at kafka.network.Acceptor.accept(SocketServer.scala:200)
at kafka.network.Acceptor.run(SocketServer.scala:154)
at java.lang.Thread.run(Thread.java:745)
The open files limit (ulimit -n) for the kafka user was set to 1024. Increasing
it to 32000 resolved the error.
Diffs
-----
ambari-server/src/main/resources/common-services/KAFKA/0.8.1.2.2/configuration/kafka-env.xml ecc0782
ambari-server/src/main/resources/common-services/KAFKA/0.8.1.2.2/package/scripts/kafka.py 11492a7
ambari-server/src/main/resources/common-services/KAFKA/0.8.1.2.2/package/scripts/params.py 26ea3e2
ambari-server/src/main/resources/common-services/KAFKA/0.8.1.2.2/package/templates/kafka.conf.j2 PRE-CREATION
Diff: https://reviews.apache.org/r/38931/diff/
Testing
-------
mvn clean test
Thanks,
Andrew Onischuk
Re: Review Request 38931: Set kafka userid ulimit open files to 32k for kafka broker through Ambari
Posted by Aravindan Vijayan <av...@hortonworks.com>.
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/38931/#review101296
-----------------------------------------------------------
Ship it!
Ship It!
- Aravindan Vijayan
On Oct. 1, 2015, 4:39 p.m., Andrew Onischuk wrote:
>
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/38931/
> -----------------------------------------------------------
>
> (Updated Oct. 1, 2015, 4:39 p.m.)
>
>
> Review request for Ambari and Vitalyi Brodetskyi.
>
>
> Bugs: AMBARI-13290
> https://issues.apache.org/jira/browse/AMBARI-13290
>
>
> Repository: ambari
>
>
> Description
> -------
>
> PROBLEM: Currently Ambari does not set any ulimit for the kafka user during
> install. A customer hit this issue on their brokers:
>
>
>
> [2015-09-22 07:01:01,380] FATAL [Replica Manager on Broker 2]: Error writing to highwatermark file: (kafka.server.ReplicaManager)
> java.io.FileNotFoundException: /mnt/data2/kafka-logs/replication-offset-checkpoint.tmp (Too many open files)
> at java.io.FileOutputStream.open(Native Method)
> at java.io.FileOutputStream.<init>(FileOutputStream.java:221)
> at java.io.FileOutputStream.<init>(FileOutputStream.java:171)
> .
> .
> [2015-09-22 07:01:01,380] ERROR Error in acceptor (kafka.network.Acceptor)
> java.io.IOException: Too many open files
> at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
> at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:241)
> at kafka.network.Acceptor.accept(SocketServer.scala:200)
> at kafka.network.Acceptor.run(SocketServer.scala:154)
> at java.lang.Thread.run(Thread.java:745)
>
>
> The open files limit (ulimit -n) for the kafka user was set to 1024. Increasing
> it to 32000 resolved the error.
>
>
> Diffs
> -----
>
> ambari-server/src/main/resources/common-services/KAFKA/0.8.1.2.2/configuration/kafka-env.xml ecc0782
> ambari-server/src/main/resources/common-services/KAFKA/0.8.1.2.2/package/scripts/kafka.py 11492a7
> ambari-server/src/main/resources/common-services/KAFKA/0.8.1.2.2/package/scripts/params.py 26ea3e2
> ambari-server/src/main/resources/common-services/KAFKA/0.8.1.2.2/package/templates/kafka.conf.j2 PRE-CREATION
>
> Diff: https://reviews.apache.org/r/38931/diff/
>
>
> Testing
> -------
>
> mvn clean test
>
>
> Thanks,
>
> Andrew Onischuk
>
>