Posted to users@kafka.apache.org by Graeme Wallace <gr...@farecompare.com> on 2013/10/02 15:46:01 UTC
Lots of Exceptions showing up in logs
Hi,
Can anyone help me debug what's going on to cause these exceptions?
1. Getting lots of these IllegalArgumentExceptions (and there were a few
other nio.Buffer-related exceptions in our logs too)
19:22:34,597 WARN [kafka.consumer.ConsumerFetcherThread]
(ConsumerFetcherThread-hbaseApeConsumer_ape-aux109.dc.farecompare.com-1380666397017-66bf5430-0-2)
[ConsumerFetcherThread-hbaseApeConsumer_ape-aux109.dc.farecompare.com-1380666397017-66bf5430-0-2],
Error in fetch Name: FetchRequest; Version: 0; CorrelationId: 15; ClientId:
APEConsumer-ConsumerFetcherThread-hbaseApeConsumer_ape-aux109.dc.farecompare.com-1380666397017-66bf5430-0-2;
ReplicaId: -1; MaxWait: 400 ms; MinBytes: 1000 bytes; RequestInfo: [APE,1]
-> PartitionFetchInfo(120858,500000000),[APE,5] ->
PartitionFetchInfo(120858,500000000),[APE,9] ->
PartitionFetchInfo(120836,500000000),[APE,2] ->
PartitionFetchInfo(120858,500000000),[APE,11] ->
PartitionFetchInfo(120836,500000000),[APE,6] ->
PartitionFetchInfo(460329,500000000),[APE,0] ->
PartitionFetchInfo(124608,500000000),[APE,7] ->
PartitionFetchInfo(126691,500000000),[APE,3] ->
PartitionFetchInfo(127107,500000000),[APE,4] ->
PartitionFetchInfo(127107,500000000),[APE,10] ->
PartitionFetchInfo(2056250,500000000),[APE,8] ->
PartitionFetchInfo(469501,500000000): java.lang.IllegalArgumentException
at java.nio.Buffer.limit(Buffer.java:267) [rt.jar:1.7.0_25]
at kafka.api.FetchResponsePartitionData$.readFrom(FetchResponse.scala:33) [core-kafka-0.8.0-beta1.jar:0.8.0-beta1]
at kafka.api.TopicData$$anonfun$1.apply(FetchResponse.scala:87) [core-kafka-0.8.0-beta1.jar:0.8.0-beta1]
at kafka.api.TopicData$$anonfun$1.apply(FetchResponse.scala:85) [core-kafka-0.8.0-beta1.jar:0.8.0-beta1]
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:206) [scala-library-2.8.0.jar:]
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:206) [scala-library-2.8.0.jar:]
at scala.collection.immutable.Range$ByOne$class.foreach(Range.scala:282) [scala-library-2.8.0.jar:]
at scala.collection.immutable.Range$$anon$1.foreach(Range.scala:274) [scala-library-2.8.0.jar:]
at scala.collection.TraversableLike$class.map(TraversableLike.scala:206) [scala-library-2.8.0.jar:]
at scala.collection.immutable.Range.map(Range.scala:39) [scala-library-2.8.0.jar:]
at kafka.api.TopicData$.readFrom(FetchResponse.scala:85) [core-kafka-0.8.0-beta1.jar:0.8.0-beta1]
at kafka.api.FetchResponse$$anonfun$3.apply(FetchResponse.scala:146) [core-kafka-0.8.0-beta1.jar:0.8.0-beta1]
at kafka.api.FetchResponse$$anonfun$3.apply(FetchResponse.scala:145) [core-kafka-0.8.0-beta1.jar:0.8.0-beta1]
at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:227) [scala-library-2.8.0.jar:]
at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:227) [scala-library-2.8.0.jar:]
at scala.collection.immutable.Range$ByOne$class.foreach(Range.scala:285) [scala-library-2.8.0.jar:]
at scala.collection.immutable.Range$$anon$1.foreach(Range.scala:274) [scala-library-2.8.0.jar:]
at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:227) [scala-library-2.8.0.jar:]
at scala.collection.immutable.Range.flatMap(Range.scala:39) [scala-library-2.8.0.jar:]
at kafka.api.FetchResponse$.readFrom(FetchResponse.scala:145) [core-kafka-0.8.0-beta1.jar:0.8.0-beta1]
at kafka.consumer.SimpleConsumer.fetch(SimpleConsumer.scala:113) [core-kafka-0.8.0-beta1.jar:0.8.0-beta1]
at kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThread.scala:96) [core-kafka-0.8.0-beta1.jar:0.8.0-beta1]
at kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:88) [core-kafka-0.8.0-beta1.jar:0.8.0-beta1]
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:51) [core-kafka-0.8.0-beta1.jar:0.8.0-beta1]
2. We're running on a 10Gb network, with a couple of pretty beefy broker
boxes, so I don't understand why we would get a lot of
SocketTimeoutExceptions
19:27:28,900 INFO [kafka.consumer.SimpleConsumer]
(ConsumerFetcherThread-hbaseApeConsumer_ape-aux109.dc.farecompare.com-1380666397017-66bf5430-0-2)
Reconnect due to socket error: : java.net.SocketTimeoutException
at sun.nio.ch.SocketAdaptor$SocketInputStream.read(SocketAdaptor.java:226) [rt.jar:1.7.0_25]
at sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:103) [rt.jar:1.7.0_25]
at java.nio.channels.Channels$ReadableByteChannelImpl.read(Channels.java:385) [rt.jar:1.7.0_25]
at kafka.utils.Utils$.read(Utils.scala:394) [core-kafka-0.8.0-beta1.jar:0.8.0-beta1]
at kafka.network.BoundedByteBufferReceive.readFrom(BoundedByteBufferReceive.scala:67) [core-kafka-0.8.0-beta1.jar:0.8.0-beta1]
at kafka.network.Receive$class.readCompletely(Transmission.scala:56) [core-kafka-0.8.0-beta1.jar:0.8.0-beta1]
at kafka.network.BoundedByteBufferReceive.readCompletely(BoundedByteBufferReceive.scala:29) [core-kafka-0.8.0-beta1.jar:0.8.0-beta1]
at kafka.network.BlockingChannel.receive(BlockingChannel.scala:100) [core-kafka-0.8.0-beta1.jar:0.8.0-beta1]
at kafka.consumer.SimpleConsumer.liftedTree1$1(SimpleConsumer.scala:73) [core-kafka-0.8.0-beta1.jar:0.8.0-beta1]
at kafka.consumer.SimpleConsumer.kafka$consumer$SimpleConsumer$$sendRequest(SimpleConsumer.scala:71) [core-kafka-0.8.0-beta1.jar:0.8.0-beta1]
at kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(SimpleConsumer.scala:110) [core-kafka-0.8.0-beta1.jar:0.8.0-beta1]
at kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply(SimpleConsumer.scala:110) [core-kafka-0.8.0-beta1.jar:0.8.0-beta1]
at kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply(SimpleConsumer.scala:110) [core-kafka-0.8.0-beta1.jar:0.8.0-beta1]
at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33) [core-kafka-0.8.0-beta1.jar:0.8.0-beta1]
at kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply$mcV$sp(SimpleConsumer.scala:109) [core-kafka-0.8.0-beta1.jar:0.8.0-beta1]
at kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply(SimpleConsumer.scala:109) [core-kafka-0.8.0-beta1.jar:0.8.0-beta1]
at kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply(SimpleConsumer.scala:109) [core-kafka-0.8.0-beta1.jar:0.8.0-beta1]
at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33) [core-kafka-0.8.0-beta1.jar:0.8.0-beta1]
at kafka.consumer.SimpleConsumer.fetch(SimpleConsumer.scala:108) [core-kafka-0.8.0-beta1.jar:0.8.0-beta1]
at kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThread.scala:96) [core-kafka-0.8.0-beta1.jar:0.8.0-beta1]
at kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:88) [core-kafka-0.8.0-beta1.jar:0.8.0-beta1]
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:51) [core-kafka-0.8.0-beta1.jar:0.8.0-beta1]
3. Then latterly (we've been testing out our message producing/consuming
cluster overnight) we've been getting at least one of these a second.
09:26:12,452 INFO [kafka.consumer.SimpleConsumer]
(ConsumerFetcherThread-hbaseApeConsumer_ape-aux109.dc.farecompare.com-1380666397017-66bf5430-0-1)
Reconnect due to socket error: : java.io.EOFException: Received -1 when
reading from channel, socket has likely been closed.
at kafka.utils.Utils$.read(Utils.scala:395) [core-kafka-0.8.0-beta1.jar:0.8.0-beta1]
at kafka.network.BoundedByteBufferReceive.readFrom(BoundedByteBufferReceive.scala:54) [core-kafka-0.8.0-beta1.jar:0.8.0-beta1]
at kafka.network.Receive$class.readCompletely(Transmission.scala:56) [core-kafka-0.8.0-beta1.jar:0.8.0-beta1]
at kafka.network.BoundedByteBufferReceive.readCompletely(BoundedByteBufferReceive.scala:29) [core-kafka-0.8.0-beta1.jar:0.8.0-beta1]
at kafka.network.BlockingChannel.receive(BlockingChannel.scala:100) [core-kafka-0.8.0-beta1.jar:0.8.0-beta1]
at kafka.consumer.SimpleConsumer.liftedTree1$1(SimpleConsumer.scala:73) [core-kafka-0.8.0-beta1.jar:0.8.0-beta1]
at kafka.consumer.SimpleConsumer.kafka$consumer$SimpleConsumer$$sendRequest(SimpleConsumer.scala:71) [core-kafka-0.8.0-beta1.jar:0.8.0-beta1]
at kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(SimpleConsumer.scala:110) [core-kafka-0.8.0-beta1.jar:0.8.0-beta1]
at kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply(SimpleConsumer.scala:110) [core-kafka-0.8.0-beta1.jar:0.8.0-beta1]
at kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply(SimpleConsumer.scala:110) [core-kafka-0.8.0-beta1.jar:0.8.0-beta1]
at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33) [core-kafka-0.8.0-beta1.jar:0.8.0-beta1]
at kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply$mcV$sp(SimpleConsumer.scala:109) [core-kafka-0.8.0-beta1.jar:0.8.0-beta1]
at kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply(SimpleConsumer.scala:109) [core-kafka-0.8.0-beta1.jar:0.8.0-beta1]
at kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply(SimpleConsumer.scala:109) [core-kafka-0.8.0-beta1.jar:0.8.0-beta1]
at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33) [core-kafka-0.8.0-beta1.jar:0.8.0-beta1]
at kafka.consumer.SimpleConsumer.fetch(SimpleConsumer.scala:108) [core-kafka-0.8.0-beta1.jar:0.8.0-beta1]
at kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThread.scala:96) [core-kafka-0.8.0-beta1.jar:0.8.0-beta1]
at kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:88) [core-kafka-0.8.0-beta1.jar:0.8.0-beta1]
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:51) [core-kafka-0.8.0-beta1.jar:0.8.0-beta1]
--
Graeme Wallace
CTO
FareCompare.com
O: 972 588 1414
M: 214 681 9018
Re: Lots of Exceptions showing up in logs
Posted by Jun Rao <ju...@gmail.com>.
Are you running the beta1 release?
1. That seems like a corrupted response for a fetch request.
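(For context: `java.nio.Buffer.limit(int)` throws IllegalArgumentException whenever the requested limit exceeds the buffer's capacity, so a corrupted size field in the response would produce exactly this trace. A minimal sketch of that failure mode, with made-up sizes:)

```java
import java.nio.ByteBuffer;

public class BufferLimitDemo {
    public static void main(String[] args) {
        // Hypothetical response buffer with a 16-byte capacity.
        ByteBuffer buf = ByteBuffer.allocate(16);
        // A corrupted length field larger than the capacity...
        int corruptedSize = 1024;
        try {
            buf.limit(corruptedSize); // ...is rejected by limit()
        } catch (IllegalArgumentException e) {
            System.out.println("limit > capacity rejected");
        }
    }
}
```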
2. Could you look at the request log and check the time the broker takes to
complete the fetch request? If the completion time is larger than the
socket timeout in the consumer, the fetch request will timeout.
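(The relevant 0.8 consumer knobs are `socket.timeout.ms`, how long the consumer waits for a fetch response, and `fetch.wait.max.ms`, the MaxWait the broker may hold the fetch -- 400 ms in the request above. The socket timeout has to comfortably exceed the max wait plus the broker's processing time. A sketch of that sanity check, with illustrative values rather than recommendations:)

```java
import java.util.Properties;

public class ConsumerTimeouts {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Illustrative values only; tune to observed broker completion times.
        props.put("fetch.wait.max.ms", "400");   // broker may hold the fetch this long
        props.put("socket.timeout.ms", "30000"); // consumer-side wait for the response
        long socketTimeoutMs = Long.parseLong(props.getProperty("socket.timeout.ms"));
        long fetchWaitMaxMs = Long.parseLong(props.getProperty("fetch.wait.max.ms"));
        // If this inequality fails, slow fetches will surface as SocketTimeoutException.
        if (socketTimeoutMs <= fetchWaitMaxMs) {
            throw new IllegalStateException("socket.timeout.ms must exceed fetch.wait.max.ms");
        }
        System.out.println("timeouts are consistent");
    }
}
```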
3. This means the broker closed the socket. If you look at the server log
on the broker, it will tell you the reason why it closed the socket.
It seems that all of the above could be related to the network. Anything
abnormal with the network?
Thanks,
Jun
On Wed, Oct 2, 2013 at 6:46 AM, Graeme Wallace <
graeme.wallace@farecompare.com> wrote: