Posted to dev@kafka.apache.org by "Manikumar (JIRA)" <ji...@apache.org> on 2018/05/29 12:13:00 UTC

[jira] [Resolved] (KAFKA-3293) Consumers are not able to get messages.

     [ https://issues.apache.org/jira/browse/KAFKA-3293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Manikumar resolved KAFKA-3293.
------------------------------
    Resolution: Cannot Reproduce

 Closing inactive issue. Please reopen if the issue still exists in newer versions.

> Consumers are not able to get messages.
> ---------------------------------------
>
>                 Key: KAFKA-3293
>                 URL: https://issues.apache.org/jira/browse/KAFKA-3293
>             Project: Kafka
>          Issue Type: Bug
>          Components: consumer, offset manager
>    Affects Versions: 0.9.0.1
>         Environment: kafka: kafka_2.11-0.9.0.1
> java: jdk1.8.0_65
> OS: Linux stephen-T450s 3.19.0-51-generic #57~14.04.1-Ubuntu SMP Fri Feb 19 14:36:55 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
>            Reporter: Stephen Wong
>            Assignee: Neha Narkhede
>            Priority: Major
>
> Overview
> =======
> The results of the test are not consistent.
> Something is preventing the consumer from receiving the messages.
> Configuration
> ==========
> Server (only num.partitions is changed)
> diff config/server.properties config.backup/server.properties
> 65c65
> < num.partitions=8
> ---
> > num.partitions=1
> Producer
>     properties.put("bootstrap.servers", “localhost:9092”);
>     properties.put("acks", "all");
>     properties.put("key.serializer", "org.apache.kafka.common.serialization.LongSerializer");
>     properties.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
>     properties.put("partitioner.class", "kafkatest.sample2.SimplePartitioner");
> Consumer
>     properties.put("bootstrap.servers", “localhost:9092”);
>     properties.put("group.id", "testGroup");
>     properties.put("key.deserializer", "org.apache.kafka.common.serialization.LongDeserializer");
>     properties.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
>     properties.put("enable.auto.commit", "false");
> Steps to reproduce:
> ===============
> 1. started the zookeeper
> 2. started the kafka server
> 3. created topic
> $ bin/kafka-topics.sh --zookeeper localhost:2181 --create --replication-factor 1 --partitions 8 --topic testTopic4
> 4. Ran SimpleProducerDriver with 5 producers; the total number of messages produced is 50
> 5. Offset Status
> $ bin/kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list localhost:9092 --topic testTopic4 --time -1
> testTopic4:2:1
> testTopic4:5:27
> testTopic4:4:1
> testTopic4:7:2
> testTopic4:1:8
> testTopic4:3:0
> testTopic4:6:11
> testTopic4:0:0
> (the per-partition end offsets sum to 50, matching the number of messages produced)
> 6. waited until the producer driver completed; it took no more than a few seconds
> 7. ran the SimpleConsumerDriver a couple of times; no messages were received. The following DEBUG information was found:
> 2016-02-25 22:42:19 DEBUG [pool-1-thread-2] Fetcher: - Ignoring fetched records for partition testTopic4-3 since it is no longer fetchable
> 8. altered the consumer properties, commenting out the line that disables auto commit:
>     //properties.put("enable.auto.commit", "false");
> 9. ran the SimpleConsumerDriver a couple of times; still no messages were received.
> The following DEBUG information was found:
> 2016-02-25 22:47:23 DEBUG [pool-1-thread-2] ConsumerCoordinator: - Committed offset 8 for partition testTopic4-1
> It seems like the offset was updated?
> 10. re-enabled the auto commit; nothing changed.
> The following DEBUG information was found:
> 2016-02-25 22:49:38 DEBUG [pool-1-thread-7] Fetcher: - Resetting offset for partition testTopic4-6 to the committed offset 11
> 11. ran the SimpleProducerDriver again; another 50 messages were published
> 12. ran the SimpleConsumerDriver again; 100 messages were consumed.
> 13. ran the SimpleConsumerDriver again; 50 messages were consumed.
> As auto commit is disabled, all messages (100) should have been consumed on every run.
> The results of the test are not consistent.
> Something is preventing the consumer from receiving the messages.
> Sometimes the only workaround was to run the producer while the consumers were still active.
> Once the consumers started consuming messages, the problem did not occur any more.
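> One behaviour consistent with the observations above (offered as a sketch, not a confirmed
> diagnosis of this report): the new consumer resumes from the group's committed offset when one
> exists, and otherwise falls back to auto.offset.reset, which defaults to "latest"; a run that
> starts at the log end therefore sees nothing until new messages are produced. A minimal way to
> force a run to read everything from the beginning, reusing the consumer properties above (the
> auto.offset.reset setting and the seekToBeginning call are assumptions, not part of the
> original drivers):
>     import java.util.Collections;
>     import java.util.Properties;
>     import org.apache.kafka.clients.consumer.ConsumerRecords;
>     import org.apache.kafka.clients.consumer.KafkaConsumer;
>     import org.apache.kafka.common.TopicPartition;
>
>     Properties properties = new Properties();
>     properties.put("bootstrap.servers", "localhost:9092");
>     properties.put("group.id", "testGroup");
>     properties.put("key.deserializer", "org.apache.kafka.common.serialization.LongDeserializer");
>     properties.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
>     properties.put("enable.auto.commit", "false");
>     // only applies when the group has no committed offset for a partition
>     properties.put("auto.offset.reset", "earliest");
>
>     try (KafkaConsumer<Long, String> consumer = new KafkaConsumer<>(properties)) {
>         consumer.subscribe(Collections.singletonList("testTopic4"));
>         consumer.poll(0);   // join the group and receive a partition assignment
>         // ignore any committed offsets and re-read every assigned partition from the start
>         // (0.9.x varargs signature of seekToBeginning)
>         consumer.seekToBeginning(consumer.assignment().toArray(new TopicPartition[0]));
>         ConsumerRecords<Long, String> records = consumer.poll(5000);
>         System.out.println("fetched " + records.count() + " records");
>     }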



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)