Posted to dev@kafka.apache.org by "Manikumar Reddy (JIRA)" <ji...@apache.org> on 2014/10/03 10:45:33 UTC

[jira] [Commented] (KAFKA-1666) Issue for sending more message to Kafka Broker

    [ https://issues.apache.org/jira/browse/KAFKA-1666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14157803#comment-14157803 ] 

Manikumar Reddy commented on KAFKA-1666:
----------------------------------------

The exception shows a "Too many open files" error. You need to increase the open-files limit on your machine:

http://askubuntu.com/questions/162229/how-do-i-increase-the-open-files-limit-for-a-non-root-user
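
A minimal sketch of checking and raising the limit on Ubuntu. The value 65536 and the user name "kafka" are assumed examples, not values from this thread; pick numbers suited to your broker's connection and log-segment count:

```shell
# Count how many file descriptors a process currently has open
# (replace <pid> with the broker or client JVM's PID, e.g. from `jps`):
#   lsof -p <pid> | wc -l

# Check the current per-process soft limit (a common Ubuntu default is 1024)
ulimit -n

# Raise the soft limit for the current shell session before starting the broker
# (65536 is an assumed example value)
ulimit -n 65536

# To make the change permanent for a user, add lines like these to
# /etc/security/limits.conf ("kafka" is an assumed example user):
#   kafka  soft  nofile  65536
#   kafka  hard  nofile  65536
```

Note that `ulimit -n` only affects the current shell and its children, so it must be run in the same session that launches the Kafka process.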

> Issue for sending more message to Kafka Broker
> ----------------------------------------------
>
>                 Key: KAFKA-1666
>                 URL: https://issues.apache.org/jira/browse/KAFKA-1666
>             Project: Kafka
>          Issue Type: Bug
>          Components: config
>    Affects Versions: 0.8.1.1
>         Environment: Ubuntu 14
>            Reporter: rajendram kathees
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> I tried to send 5000 messages to the Kafka broker using JMeter (10 threads, 500 messages per thread; each message is 105 bytes). After 2100 messages I get the following exception. I changed the buffer size (socket.request.max.bytes) in the server.properties file, but I still get the same exception. When I send 2000 messages, all of them reach the Kafka broker. Can you give a solution?
> [2014-10-03 12:31:07,051] ERROR - Utils$ fetching topic metadata for topics [Set(test1)] from broker [ArrayBuffer(id:0,host:localhost,port:9092)] failed
> kafka.common.KafkaException: fetching topic metadata for topics [Set(test1)] from broker [ArrayBuffer(id:0,host:localhost,port:9092)] failed
> 	at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:67)
> 	at kafka.producer.BrokerPartitionInfo.updateInfo(BrokerPartitionInfo.scala:82)
> 	at kafka.producer.async.DefaultEventHandler$$anonfun$handle$2.apply$mcV$sp(DefaultEventHandler.scala:78)
> 	at kafka.utils.Utils$.swallow(Utils.scala:167)
> 	at kafka.utils.Logging$class.swallowError(Logging.scala:106)
> 	at kafka.utils.Utils$.swallowError(Utils.scala:46)
> 	at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:78)
> 	at kafka.producer.Producer.send(Producer.scala:76)
> 	at kafka.javaapi.producer.Producer.send(Producer.scala:33)
> 	at org.wso2.carbon.connector.KafkaProduce.send(KafkaProduce.java:71)
> 	at org.wso2.carbon.connector.KafkaProduce.connect(KafkaProduce.java:28)
> 	at org.wso2.carbon.connector.core.AbstractConnector.mediate(AbstractConnector.java:32)
> 	at org.apache.synapse.mediators.ext.ClassMediator.mediate(ClassMediator.java:78)
> 	at org.apache.synapse.mediators.AbstractListMediator.mediate(AbstractListMediator.java:77)
> 	at org.apache.synapse.mediators.AbstractListMediator.mediate(AbstractListMediator.java:47)
> 	at org.apache.synapse.mediators.template.TemplateMediator.mediate(TemplateMediator.java:77)
> 	at org.apache.synapse.mediators.template.InvokeMediator.mediate(InvokeMediator.java:129)
> 	at org.apache.synapse.mediators.template.InvokeMediator.mediate(InvokeMediator.java:78)
> 	at org.apache.synapse.mediators.AbstractListMediator.mediate(AbstractListMediator.java:77)
> 	at org.apache.synapse.mediators.AbstractListMediator.mediate(AbstractListMediator.java:47)
> 	at org.apache.synapse.mediators.base.SequenceMediator.mediate(SequenceMediator.java:131)
> 	at org.apache.synapse.core.axis2.ProxyServiceMessageReceiver.receive(ProxyServiceMessageReceiver.java:166)
> 	at org.apache.axis2.engine.AxisEngine.receive(AxisEngine.java:180)
> 	at org.apache.synapse.transport.passthru.ServerWorker.processNonEntityEnclosingRESTHandler(ServerWorker.java:344)
> 	at org.apache.synapse.transport.passthru.ServerWorker.processEntityEnclosingRequest(ServerWorker.java:385)
> 	at org.apache.synapse.transport.passthru.ServerWorker.run(ServerWorker.java:183)
> 	at org.apache.axis2.transport.base.threads.NativeWorkerPool$1.run(NativeWorkerPool.java:172)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> 	at java.lang.Thread.run(Thread.java:745)
> Caused by: java.net.SocketException: Too many open files
> 	at sun.nio.ch.Net.socket0(Native Method)
> 	at sun.nio.ch.Net.socket(Net.java:423)
> 	at sun.nio.ch.Net.socket(Net.java:416)
> 	at sun.nio.ch.SocketChannelImpl.<init>(SocketChannelImpl.java:104)
> 	at sun.nio.ch.SelectorProviderImpl.openSocketChannel(SelectorProviderImpl.java:60)
> 	at java.nio.channels.SocketChannel.open(SocketChannel.java:142)
> 	at kafka.network.BlockingChannel.connect(BlockingChannel.scala:48)
> 	at kafka.producer.SyncProducer.connect(SyncProducer.scala:141)
> 	at kafka.producer.SyncProducer.getOrMakeConnection(SyncProducer.scala:156)
> 	at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:68)
> 	at kafka.producer.SyncProducer.send(SyncProducer.scala:112)
> 	at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:53)
> 	... 29 more



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)