Posted to users@kafka.apache.org by Fredo Lee <bu...@gmail.com> on 2015/11/25 07:52:21 UTC

0.9.0.0[error]

The content below is an error report for Kafka.

When I try to fetch the coordinator broker, I always get error code 6.
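For readers decoding the numeric error above, here is a minimal Python sketch mapping a few Kafka protocol error codes to their names (codes taken from the 0.9 protocol guide; treat the exact mapping as an assumption, not part of this report):

```python
# Selected Kafka protocol error codes (assumed per the 0.9 protocol guide).
# A persistent code 6 would mean NOT_LEADER_FOR_PARTITION.
KAFKA_ERROR_NAMES = {
    0: "NONE",
    3: "UNKNOWN_TOPIC_OR_PARTITION",
    5: "LEADER_NOT_AVAILABLE",
    6: "NOT_LEADER_FOR_PARTITION",
    15: "GROUP_COORDINATOR_NOT_AVAILABLE",
    16: "NOT_COORDINATOR_FOR_GROUP",
}

def error_name(code):
    """Return the symbolic name for a Kafka protocol error code."""
    return KAFKA_ERROR_NAMES.get(code, f"UNKNOWN({code})")
```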



[2015-11-25 14:48:28,638] ERROR [KafkaApi-1] error when handling request Name: FetchRequest; Version: 1; CorrelationId: 643; ClientId: ReplicaFetcherThread-0-4; ReplicaId: 1; MaxWait: 500 ms; MinBytes: 1 bytes; RequestInfo: [__consumer_offsets,49] -> PartitionFetchInfo(0,1048576),[__consumer_offsets,17] -> PartitionFetchInfo(0,1048576),[__consumer_offsets,29] -> PartitionFetchInfo(0,1048576),[blcs,6] -> PartitionFetchInfo(0,1048576),[__consumer_offsets,41] -> PartitionFetchInfo(0,1048576),[__consumer_offsets,13] -> PartitionFetchInfo(0,1048576),[__consumer_offsets,5] -> PartitionFetchInfo(0,1048576),[__consumer_offsets,37] -> PartitionFetchInfo(0,1048576),[__consumer_offsets,25] -> PartitionFetchInfo(0,1048576),[__consumer_offsets,1] -> PartitionFetchInfo(0,1048576) (kafka.server.KafkaApis)
kafka.common.KafkaException: Should not set log end offset on partition [__consumer_offsets,49]'s local replica 1
        at kafka.cluster.Replica.logEndOffset_$eq(Replica.scala:66)
        at kafka.cluster.Replica.updateLogReadResult(Replica.scala:53)
        at kafka.cluster.Partition.updateReplicaLogReadResult(Partition.scala:240)
        at kafka.server.ReplicaManager$$anonfun$updateFollowerLogReadResults$2.apply(ReplicaManager.scala:852)
        at kafka.server.ReplicaManager$$anonfun$updateFollowerLogReadResults$2.apply(ReplicaManager.scala:849)
        at scala.collection.immutable.HashMap$HashMap1.foreach(HashMap.scala:221)
        at scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:428)
        at kafka.server.ReplicaManager.updateFollowerLogReadResults(ReplicaManager.scala:849)
        at kafka.server.ReplicaManager.fetchMessages(ReplicaManager.scala:467)
        at kafka.server.KafkaApis.handleFetchRequest(KafkaApis.scala:434)
        at kafka.server.KafkaApis.handle(KafkaApis.scala:69)
        at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:60)
        at java.lang.Thread.run(Thread.java:722)

Re: 0.9.0.0[error]

Posted by Fredo Lee <bu...@gmail.com>.
I think it has nothing to do with those clients.

Actually, I wrote a consumer client in the Erlang programming language, but I have not used it yet.

I just used the kafka-topics.sh script to create a topic named blcs, and then this error was reported.


Re: 0.9.0.0[error]

Posted by Jun Rao <ju...@confluent.io>.
Are you running any non-java client, especially a consumer?

Thanks,

Jun


Re: 0.9.0.0[error]

Posted by Fredo Lee <bu...@gmail.com>.
This is my config file: the original file with some changes made by me.

broker.id=1
listeners=PLAINTEXT://:9092
num.partitions=10
log.dirs=/tmp/kafka-logs1
zookeeper.connect=localhost:2181
zookeeper.connection.timeout.ms=2000
delete.topic.enable=true
default.replication.factor=2
auto.leader.rebalance.enable=true


If I change the listener port to 9093, it works! There is no process running on
that port.
I don't know why.
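The port observation above can be checked directly before starting the broker. A minimal Python sketch (a hypothetical helper, not part of Kafka) that reports whether something is already accepting TCP connections on a port:

```python
import socket

def port_in_use(port, host="127.0.0.1"):
    """Return True if something is already accepting TCP connections on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1.0)
        # connect_ex returns 0 on a successful connect, an errno otherwise
        return s.connect_ex((host, port)) == 0
```

For example, `port_in_use(9092)` before launching the broker would show whether the listener port is already taken on this host.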



Re: 0.9.0.0[error]

Posted by Jun Rao <ju...@confluent.io>.
Fredo,

Thanks for reporting this. Are you starting a brand new 0.9.0.0 cluster?
Are there steps that one can follow to reproduce this issue easily?

Jun


Re: 0.9.0.0[error]

Posted by Muqtafi Akhmad <mu...@traveloka.com>.
Hello Fredo,

Can you provide your program's code? There might be some clues there.




-- 
Muqtafi Akhmad
Software Engineer
Traveloka

Re: 0.9.0.0[error]

Posted by Fredo Lee <bu...@gmail.com>.
With four Kafka nodes, I get these errors; with one node, it works well.
