Posted to dev@kafka.apache.org by "BalajiSeshadri (JIRA)" <ji...@apache.org> on 2013/04/26 00:52:17 UTC

[jira] [Comment Edited] (KAFKA-816) Reduce noise in Kafka server logs due to NotLeaderForPartitionException

    [ https://issues.apache.org/jira/browse/KAFKA-816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13642334#comment-13642334 ] 

BalajiSeshadri edited comment on KAFKA-816 at 4/25/13 10:51 PM:
----------------------------------------------------------------

Using the trunk below, I still see the error happening. Please let me know if this can be fixed.


https://github.com/apache/kafka.git

[2013-04-25 16:47:08,924] WARN [console-consumer-24019_MERD7-21964-1366930009136-8b7f9eb7-leader-finder-thread], Failed to add fetcher for [mytopic,0] to broker id:0,host:MERD7-21964.echostar.com,port:9092 (kafka.consumer.ConsumerFetcherManager$$anon$1)
kafka.common.NotLeaderForPartitionException
        at sun.reflect.GeneratedConstructorAccessor1.newInstance(Unknown Source)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:525)
        at java.lang.Class.newInstance0(Class.java:372)
        at java.lang.Class.newInstance(Class.java:325)
        at kafka.common.ErrorMapping$.exceptionFor(ErrorMapping.scala:72)
        at kafka.consumer.SimpleConsumer.earliestOrLatestOffset(SimpleConsumer.scala:163)
        at kafka.consumer.ConsumerFetcherThread.handleOffsetOutOfRange(ConsumerFetcherThread.scala:61)
        at kafka.server.AbstractFetcherThread.addPartition(AbstractFetcherThread.scala:167)
        at kafka.server.AbstractFetcherManager.addFetcher(AbstractFetcherManager.scala:48)
        at kafka.consumer.ConsumerFetcherManager$$anon$1$$anonfun$doWork$3.apply(ConsumerFetcherManager.scala:79)
        at kafka.consumer.ConsumerFetcherManager$$anon$1$$anonfun$doWork$3.apply(ConsumerFetcherManager.scala:75)
        at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:95)
        at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:95)
        at scala.collection.Iterator$class.foreach(Iterator.scala:772)
        at scala.collection.mutable.HashTable$$anon$1.foreach(HashTable.scala:157)
        at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:190)
        at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:45)
        at scala.collection.mutable.HashMap.foreach(HashMap.scala:95)
        at kafka.consumer.ConsumerFetcherManager$$anon$1.doWork(ConsumerFetcherManager.scala:75)
        at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:51)
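
For what it's worth, the exception above is raised on the consumer side when its cached metadata still points at a broker that is no longer the leader for the partition, so the usual recovery is to refresh metadata and retry. Below is a minimal illustrative sketch of that pattern; the LeaderRetry object, the withLeaderRetry helper, and the refreshMetadata callback are hypothetical and not part of the Kafka codebase:

import kafka.common.NotLeaderForPartitionException

object LeaderRetry {
  // Hypothetical helper: retry `op` after refreshing metadata when the broker
  // we contacted reports it is no longer the leader for the partition.
  def withLeaderRetry[T](maxRetries: Int)(refreshMetadata: () => Unit)(op: () => T): T = {
    var attempt = 0
    while (true) {
      try {
        return op()
      } catch {
        case e: NotLeaderForPartitionException =>
          attempt += 1
          if (attempt > maxRetries) throw e
          refreshMetadata()              // re-discover the current leader
          Thread.sleep(200L * attempt)   // simple linear backoff
      }
    }
    throw new IllegalStateException("unreachable") // while (true) never falls through
  }
}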

                
> Reduce noise in Kafka server logs due to NotLeaderForPartitionException
> -----------------------------------------------------------------------
>
>                 Key: KAFKA-816
>                 URL: https://issues.apache.org/jira/browse/KAFKA-816
>             Project: Kafka
>          Issue Type: Bug
>          Components: core
>    Affects Versions: 0.8
>            Reporter: Neha Narkhede
>            Assignee: Neha Narkhede
>            Priority: Blocker
>              Labels: kafka-0.8, p2
>         Attachments: kafka-816.patch, kafka-816-v2.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> NotLeaderForPartitionException is logged at the ERROR level with a full stack trace, but it is really just an informational message on the server: a client with stale metadata sent a request to the wrong leader for a partition. This floods the logs when there are either many clients or a few clients sending requests for many topics (migration tool or mirror maker). It should probably be logged at WARN and without the stack trace.
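
A hedged sketch (not the actual kafka-816.patch) of the kind of change the description suggests, in Scala since that is what the broker is written in: treat NotLeaderForPartitionException as an expected, metadata-staleness condition and log a single WARN line for it, keeping full ERROR logging with stack traces for genuinely unexpected failures. The object and method names below, and the topic/partition parameters, are made up for illustration:

import kafka.common.NotLeaderForPartitionException
import org.apache.log4j.Logger

object RequestErrorLogging {
  private val log = Logger.getLogger(getClass)

  def logRequestError(topic: String, partition: Int, t: Throwable): Unit = t match {
    case _: NotLeaderForPartitionException =>
      // Expected whenever a client holds stale metadata after a leader change:
      // one WARN line, no stack trace, so the server log stays readable.
      log.warn("Received request for [%s,%d] but this broker is not the leader".format(topic, partition))
    case e =>
      // Anything else is still an unexpected server-side error.
      log.error("Error while handling request for [%s,%d]".format(topic, partition), e)
  }
}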
