Posted to users@kafka.apache.org by Jason Rosenberg <jb...@squareup.com> on 2013/05/08 09:16:12 UTC

expected exceptions?

I'm porting some unit tests from 0.7.2 to 0.8.0.  The test does the
following, all embedded in the same Java process:

-- spins up a zk instance
-- spins up a kafka server using a fresh log directory
-- creates a producer and sends a message
-- creates a high-level consumer and verifies that it can consume the message
-- shuts down the consumer
-- stops the kafka server
-- stops zk

The test seems to be working fine now; however, I consistently see the
following exceptions (which, from poking around the mailing list, seem to be
expected?).  If these are expected, can we suppress the logging of these
exceptions?  They clutter the output of tests and, presumably, the logs of
running servers/consumers during clean startup and shutdown.

When I call producer.send(), I get:

kafka.common.LeaderNotAvailableException: No leader for any partition
        at kafka.producer.async.DefaultEventHandler.kafka$producer$async$DefaultEventHandler$$getPartition(DefaultEventHandler.scala:212)
        at kafka.producer.async.DefaultEventHandler$$anonfun$partitionAndCollate$1.apply(DefaultEventHandler.scala:150)
        at kafka.producer.async.DefaultEventHandler$$anonfun$partitionAndCollate$1.apply(DefaultEventHandler.scala:148)
        at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:57)
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:43)
        at kafka.producer.async.DefaultEventHandler.partitionAndCollate(DefaultEventHandler.scala:148)
        at kafka.producer.async.DefaultEventHandler.dispatchSerializedData(DefaultEventHandler.scala:94)
        at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:72)
        at kafka.producer.Producer.send(Producer.scala:74)
        at kafka.javaapi.producer.Producer.send(Producer.scala:32)
        ...
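
(Editor's note: with topic auto-creation in 0.8, the first send can race leader election, so the old producer exposes retry settings. A minimal sketch of the relevant producer properties, assuming the 0.8-era producer config names; adjust values to taste:)

```properties
# Hedged sketch (0.8 old producer config): retry the send while
# leader election for the auto-created topic completes.
message.send.max.retries=3
retry.backoff.ms=100
```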

When I call consumerConnector.shutdown(), I get:

java.nio.channels.ClosedByInterruptException
        at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
        at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:543)
        at kafka.network.BlockingChannel.connect(BlockingChannel.scala:57)
        at kafka.consumer.SimpleConsumer.connect(SimpleConsumer.scala:47)
        at kafka.consumer.SimpleConsumer.reconnect(SimpleConsumer.scala:60)
        at kafka.consumer.SimpleConsumer.liftedTree1$1(SimpleConsumer.scala:81)
        at kafka.consumer.SimpleConsumer.kafka$consumer$SimpleConsumer$$sendRequest(SimpleConsumer.scala:73)
        at kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(SimpleConsumer.scala:112)
        at kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply(SimpleConsumer.scala:112)
        at kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply(SimpleConsumer.scala:112)
        at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
        at kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply$mcV$sp(SimpleConsumer.scala:111)
        at kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply(SimpleConsumer.scala:111)
        at kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply(SimpleConsumer.scala:111)
        at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
        at kafka.consumer.SimpleConsumer.fetch(SimpleConsumer.scala:110)
        at kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThread.scala:96)
        at kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:88)
        at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:51)
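
(Editor's note: if these exceptions are indeed expected, one test-side workaround is to raise the log level for the classes that emit them. A minimal log4j.properties sketch, assuming log4j 1.x and that the messages are logged by the classes visible in the traces above; the logger names are an assumption based on those class names:)

```properties
# Hedged sketch (log4j 1.x): quiet expected startup/shutdown noise in tests
# by raising the level for the classes seen in the stack traces.
log4j.logger.kafka.producer.async.DefaultEventHandler=FATAL
log4j.logger.kafka.consumer.SimpleConsumer=FATAL
log4j.logger.kafka.server.AbstractFetcherThread=FATAL
```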

Jason

Re: expected exceptions?

Posted by Jason Rosenberg <jb...@squareup.com>.
Filed:

https://issues.apache.org/jira/browse/KAFKA-899
https://issues.apache.org/jira/browse/KAFKA-900

Jason



Re: expected exceptions?

Posted by Jun Rao <ju...@gmail.com>.
Yes, could you file a jira? Please include the log messages before those
exceptions.

Thanks,

Jun



Re: expected exceptions?

Posted by Jason Rosenberg <jb...@squareup.com>.
If they're expected, does it make sense to log them as full exception stack
traces?  Can we instead log something meaningful to the console, like:

"No leader was available, one will now be created"

or

"ConsumerConnector has shutdown"

etc.

Should I file JIRAs for these?

Jason



Re: expected exceptions?

Posted by Jun Rao <ju...@gmail.com>.
Yes, both are expected.

Thanks,

Jun

