Posted to users@kafka.apache.org by Uwe Geercken <uw...@web.de> on 2016/09/26 20:22:50 UTC

Error: Failed to collate messages by topic, partition due to: fetching topic metadata for topics

Hello,

this is my first mail to this list and I am pretty new to Kafka, so I am looking for your professional help.

I have ZooKeeper and Kafka 2.11-0.9.0.0 running on my laptop under Fedora 24.

After some successful tests with the console producer and consumer, I started a project in Eclipse that does the following:

- reading a CSV file
- splitting each row of the file into fields and filling an object with the data from the fields
- sending the message to Kafka, which uses my own class to serialize the objects
- running a consumer that retrieves the messages and deserializes them back into the original objects
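
To illustrate the row-to-object step, here is a simplified sketch (the class and field names are just examples, not my real code, and the real record has more fields):

```java
import java.io.Serializable;

public class FlightRecordSketch {

    // Hypothetical value object filled from one CSV row.
    static class FlightRecord implements Serializable {
        String flightNumber;
        String origin;
        String destination;

        FlightRecord(String flightNumber, String origin, String destination) {
            this.flightNumber = flightNumber;
            this.origin = origin;
            this.destination = destination;
        }
    }

    // Split one CSV row into fields and fill the object from them.
    static FlightRecord fromCsvRow(String row) {
        String[] fields = row.split(",");
        return new FlightRecord(fields[0].trim(), fields[1].trim(), fields[2].trim());
    }

    public static void main(String[] args) {
        FlightRecord r = fromCsvRow("LH123, FRA, JFK");
        System.out.println(r.flightNumber + " " + r.origin + " -> " + r.destination);
    }
}
```

The resulting object is then passed to my serializer class and sent to the topic via the producer.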

This works fine, but after about 19,000 messages sent to Kafka I get the following error:

[2016-09-26 22:10:17,957] ERROR Failed to collate messages by topic, partition due to: fetching topic metadata for topics [Set(arrivals5)] from broker [ArrayBuffer(BrokerEndPoint(0,127.0.0.1,9092))] failed (kafka.producer.async.DefaultEventHandler:97)
[2016-09-26 22:10:18,080] ERROR fetching topic metadata for topics [Set(arrivals5)] from broker [ArrayBuffer(BrokerEndPoint(0,127.0.0.1,9092))] failed (kafka.utils.CoreUtils$:106)
kafka.common.KafkaException: fetching topic metadata for topics [Set(arrivals5)] from broker [ArrayBuffer(BrokerEndPoint(0,127.0.0.1,9092))] failed
	at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:73)
	at kafka.producer.BrokerPartitionInfo.updateInfo(BrokerPartitionInfo.scala:83)
	at kafka.producer.async.DefaultEventHandler$$anonfun$handle$1.apply$mcV$sp(DefaultEventHandler.scala:73)
	at kafka.utils.CoreUtils$.swallow(CoreUtils.scala:76)
	at kafka.utils.Logging$class.swallowError(Logging.scala:106)
	at kafka.utils.CoreUtils$.swallowError(CoreUtils.scala:47)
	at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:73)
	at kafka.producer.Producer.send(Producer.scala:78)
	at kafka.javaapi.producer.Producer.send(Producer.scala:35)
	at com.datamelt.kafka.message.flight.FlightRecordMessageProducer.sendMessage(FlightRecordMessageProducer.java:59)
	at com.datamelt.kafka.FlightRecordMessageProducerTest.main(FlightRecordMessageProducerTest.java:60)
Caused by: java.nio.channels.ClosedChannelException
	at kafka.network.BlockingChannel.send(BlockingChannel.scala:110)
	at kafka.producer.SyncProducer.liftedTree1$1(SyncProducer.scala:80)
	at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:79)
	at kafka.producer.SyncProducer.send(SyncProducer.scala:124)
	at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:59)
	... 10 more

In the server console I do not see any errors. Can somebody explain what could cause this? Since it basically works, I assume it is related to some server setting, memory, buffers or something like that.

Thanks for your feedback,

uwe