Posted to users@kafka.apache.org by iamabug <18...@163.com> on 2019/10/16 06:27:35 UTC

two SASL_PLAINTEXT listeners


Greetings,


Has anyone tried configuring two SASL_PLAINTEXT listeners, with internal and external traffic separated? I did so and something really strange happens: consuming and producing through the internal listener works fine, but consuming and producing through the external listener always fails with timeout errors.
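
Producing and consuming are done with the standard console tools (as the logs below suggest); a minimal sketch of the invocations against the external listener, with EXTERNAL_IP and client.properties as placeholders, looks like:

bin/kafka-console-producer.sh --broker-list EXTERNAL_IP:19092 --topic test --producer.config client.properties
bin/kafka-console-consumer.sh --bootstrap-server EXTERNAL_IP:19092 --topic test --consumer.config client.properties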


ERROR while producing:
[2019-10-16 14:14:45,810] ERROR Error when sending message to topic test with key: null, value: 1 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Expiring 1 record(s) for test-1: 24050 ms has passed since batch creation plus linger time


ERROR while consuming:
[2019-10-16 14:17:10,860] WARN [Consumer clientId=consumer-1, groupId=console-consumer-567] Synchronous auto-commit of offsets {test-1=OffsetAndMetadata{offset=8, metadata=''}, test-0=OffsetAndMetadata{offset=6, metadata=''}, test-2=OffsetAndMetadata{offset=4, metadata=''}} failed: Commit cannot be completed since the group has already rebalanced and assigned the partitions to another member. This means that the time between subsequent calls to poll() was longer than the configured max.poll.interval.ms, which typically implies that the poll loop is spending too much time message processing. You can address this either by increasing the session timeout or by reducing the maximum size of batches returned in poll() with max.poll.records. (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)


Related configs:
listeners = SASL_PLAINTEXT://INTERNAL_IP:9092,EXTERNAL://EXTERNAL_IP:19092
listener.security.protocol.map = SASL_PLAINTEXT:SASL_PLAINTEXT,EXTERNAL:SASL_PLAINTEXT
security.inter.broker.protocol = SASL_PLAINTEXT
sasl.enabled.mechanisms = PLAIN
sasl.mechanism.inter.broker.protocol = PLAIN
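
The clients use a properties file roughly like the sketch below (the PLAIN username and password are placeholders, not the actual credentials):

# client.properties - placeholders for the real PLAIN credentials
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="admin" \
  password="admin-secret";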


I am using Ambari to manage this three-broker Kafka 2.0.0 cluster, and I can also reproduce the problem described above on a single-broker Kafka. Could somebody give me some advice? It would be really helpful.