Posted to jira@kafka.apache.org by "Fernando Iury Alves Costa (Jira)" <ji...@apache.org> on 2019/10/30 13:26:00 UTC

[jira] [Commented] (KAFKA-6985) Error connection between cluster node

    [ https://issues.apache.org/jira/browse/KAFKA-6985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16963013#comment-16963013 ] 

Fernando Iury Alves Costa commented on KAFKA-6985:
--------------------------------------------------

I'm having the same issue running kafka_2.11-2.1.0

> Error connection between cluster node
> -------------------------------------
>
>                 Key: KAFKA-6985
>                 URL: https://issues.apache.org/jira/browse/KAFKA-6985
>             Project: Kafka
>          Issue Type: Bug
>          Components: KafkaConnect
>         Environment: Centos-7
>            Reporter: Ranjeet Ranjan
>            Priority: Major
>
> Hi, I have set up a multi-node Kafka cluster but get an error when one node connects to another, although there is no firewall or port issue: I am able to telnet between the nodes.
> WARN [ReplicaFetcherThread-0-1], Error in fetch kafka.server.ReplicaFetcherThread$FetchRequest@8395951 (kafka.server.ReplicaFetcherThread)
> java.io.IOException: Connection to kafka-1:9092 (id: 1 rack: null) failed
>  
> {code:java}
>  
> at kafka.utils.NetworkClientBlockingOps$.awaitReady$1(NetworkClientBlockingOps.scala:84)
> at kafka.utils.NetworkClientBlockingOps$.blockingReady$extension(NetworkClientBlockingOps.scala:94)
> at kafka.server.ReplicaFetcherThread.sendRequest(ReplicaFetcherThread.scala:244)
> at kafka.server.ReplicaFetcherThread.fetch(ReplicaFetcherThread.scala:234)
> at kafka.server.ReplicaFetcherThread.fetch(ReplicaFetcherThread.scala:42)
> at kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThread.scala:118)
> at kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:103)
> at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:63)
> {code}
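> The error shows the replica fetcher on this broker failing to reach kafka-1:9092. As a minimal, self-contained sketch (the class name and timeout are just for illustration), the following can be run on the broker that logs the failure to confirm that the advertised hostname resolves and that a plain TCP connection to the listener port succeeds:
> {code:java}
> import java.net.InetAddress;
> import java.net.InetSocketAddress;
> import java.net.Socket;
> 
> public class BrokerReachabilityCheck {
>     public static void main(String[] args) throws Exception {
>         String host = "kafka-1"; // hostname taken from the fetch error above
>         int port = 9092;         // listener port from server.properties
> 
>         // 1. Does the advertised hostname resolve on this node?
>         InetAddress addr = InetAddress.getByName(host);
>         System.out.println(host + " resolves to " + addr.getHostAddress());
> 
>         // 2. Can a TCP connection be opened to the advertised listener?
>         try (Socket socket = new Socket()) {
>             socket.connect(new InetSocketAddress(addr, port), 5000);
>             System.out.println("TCP connect to " + host + ":" + port + " succeeded");
>         }
>     }
> }
> {code}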
> Here are the server.properties files for each node.
> Node 1:
>  
> {code:java}
> ############################# Server Basics #############################
> # The id of the broker. This must be set to a unique integer for each broker.
> broker.id=1
> # Switch to enable topic deletion or not, default value is false
> delete.topic.enable=true
> ############################# Socket Server Settings #############################
> listeners=PLAINTEXT://kafka-1:9092
> advertised.listeners=PLAINTEXT://kafka-1:9092
> #listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
> # The number of threads handling network requests
> num.network.threads=3
> # The number of threads doing disk I/O
> num.io.threads=8
> # The send buffer (SO_SNDBUF) used by the socket server
> socket.send.buffer.bytes=102400
> # The receive buffer (SO_RCVBUF) used by the socket server
> socket.receive.buffer.bytes=102400
> # The maximum size of a request that the socket server will accept (protection against OOM)
> socket.request.max.bytes=104857600
> ############################# Log Basics #############################
> # A comma separated list of directories under which to store log files
> log.dirs=/var/log/kafka
> # The default number of log partitions per topic. More partitions allow greater
> # parallelism for consumption, but this will also result in more files across
> # the brokers.
> num.partitions=1
> # The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
> # This value is recommended to be increased for installations with data dirs located in RAID array.
> num.recovery.threads.per.data.dir=1
> ############################# Log Retention Policy #############################
> # The minimum age of a log file to be eligible for deletion due to age
> log.retention.hours=48
> # A size-based retention policy for logs. Segments are pruned from the log as long as the remaining
> # segments don't drop below log.retention.bytes. Functions independently of log.retention.hours.
> log.retention.bytes=1073741824
> # The maximum size of a log segment file. When this size is reached a new log segment will be created.
> log.segment.bytes=1073741824
> # The interval at which log segments are checked to see if they can be deleted according
> # to the retention policies
> log.retention.check.interval.ms=300000
> ############################# Zookeeper #############################
> # root directory for all kafka znodes.
> zookeeper.connect=10.130.82.28:2181
> # Timeout in ms for connecting to zookeeper
> zookeeper.connection.timeout.ms=6000
> {code}
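> Since each broker only advertises PLAINTEXT://kafka-<n>:9092, every other broker must be able to resolve that hostname. A small sketch using the Kafka AdminClient (requires kafka-clients on the classpath; the bootstrap hostname is taken from the configs above, and the class name is just for illustration) can list what each broker actually advertises to the rest of the cluster:
> {code:java}
> import java.util.Properties;
> 
> import org.apache.kafka.clients.admin.AdminClient;
> import org.apache.kafka.clients.admin.AdminClientConfig;
> import org.apache.kafka.common.Node;
> 
> public class DescribeClusterCheck {
>     public static void main(String[] args) throws Exception {
>         Properties props = new Properties();
>         // Bootstrap against any broker that is reachable from this machine.
>         props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka-1:9092");
> 
>         try (AdminClient admin = AdminClient.create(props)) {
>             // Each advertised host:port printed here must resolve from every other broker,
>             // because brokers use these addresses for replica fetching.
>             for (Node node : admin.describeCluster().nodes().get()) {
>                 System.out.println("broker " + node.id() + " -> " + node.host() + ":" + node.port());
>             }
>         }
>     }
> }
> {code}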
>  
>  
> Node 2:
> {code:java}
> ############################# Server Basics #############################
> # The id of the broker. This must be set to a unique integer for each broker.
> broker.id=2
> # Switch to enable topic deletion or not, default value is false
> delete.topic.enable=true
> ############################# Socket Server Settings #############################
> listeners=PLAINTEXT://kafka-2:9092
> advertised.listeners=PLAINTEXT://kafka-2:9092
> #listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
> # The number of threads handling network requests
> num.network.threads=3
> # The number of threads doing disk I/O
> num.io.threads=8
> # The send buffer (SO_SNDBUF) used by the socket server
> socket.send.buffer.bytes=102400
> # The receive buffer (SO_RCVBUF) used by the socket server
> socket.receive.buffer.bytes=102400
> # The maximum size of a request that the socket server will accept (protection against OOM)
> socket.request.max.bytes=104857600
> ############################# Log Basics #############################
> # A comma separated list of directories under which to store log files
> log.dirs=/var/log/kafka
> # The default number of log partitions per topic. More partitions allow greater
> # parallelism for consumption, but this will also result in more files across
> # the brokers.
> num.partitions=1
> # The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
> # This value is recommended to be increased for installations with data dirs located in RAID array.
> num.recovery.threads.per.data.dir=1
> ############################# Log Retention Policy #############################
> # The minimum age of a log file to be eligible for deletion due to age
> log.retention.hours=48
> # A size-based retention policy for logs. Segments are pruned from the log as long as the remaining
> # segments don't drop below log.retention.bytes. Functions independently of log.retention.hours.
> log.retention.bytes=1073741824
> # The maximum size of a log segment file. When this size is reached a new log segment will be created.
> log.segment.bytes=1073741824
> # The interval at which log segments are checked to see if they can be deleted according
> # to the retention policies
> log.retention.check.interval.ms=300000
> ############################# Zookeeper #############################
> # root directory for all kafka znodes.
> zookeeper.connect=10.130.82.28:2181
> # Timeout in ms for connecting to zookeeper
> zookeeper.connection.timeout.ms=6000
> {code}
>  
> Node 3:
>  
> {code:java}
> ############################# Server Basics #############################
> # The id of the broker. This must be set to a unique integer for each broker.
> broker.id=3
> # Switch to enable topic deletion or not, default value is false
> delete.topic.enable=true
> ############################# Socket Server Settings #############################
> listeners=PLAINTEXT://kafka-3:9092
> advertised.listeners=PLAINTEXT://kafka-3:9092
> #listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
> # The number of threads handling network requests
> num.network.threads=3
> # The number of threads doing disk I/O
> num.io.threads=8
> # The send buffer (SO_SNDBUF) used by the socket server
> socket.send.buffer.bytes=102400
> # The receive buffer (SO_RCVBUF) used by the socket server
> socket.receive.buffer.bytes=102400
> # The maximum size of a request that the socket server will accept (protection against OOM)
> socket.request.max.bytes=104857600
> ############################# Log Basics #############################
> # A comma separated list of directories under which to store log files
> log.dirs=/var/log/kafka
> # The default number of log partitions per topic. More partitions allow greater
> # parallelism for consumption, but this will also result in more files across
> # the brokers.
> num.partitions=1
> # The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
> # This value is recommended to be increased for installations with data dirs located in RAID array.
> num.recovery.threads.per.data.dir=1
> ############################# Log Retention Policy #############################
> # The minimum age of a log file to be eligible for deletion due to age
> log.retention.hours=48
> # A size-based retention policy for logs. Segments are pruned from the log as long as the remaining
> # segments don't drop below log.retention.bytes. Functions independently of log.retention.hours.
> log.retention.bytes=1073741824
> # The maximum size of a log segment file. When this size is reached a new log segment will be created.
> log.segment.bytes=1073741824
> # The interval at which log segments are checked to see if they can be deleted according
> # to the retention policies
> log.retention.check.interval.ms=300000
> ############################# Zookeeper #############################
> # root directory for all kafka znodes.
> zookeeper.connect=10.130.82.28:2181
> # Timeout in ms for connecting to zookeeper
> zookeeper.connection.timeout.ms=6000
> {code}
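> All three configs advertise hostnames rather than IP addresses, so each broker host (and any client host) needs working name resolution for kafka-1, kafka-2 and kafka-3, either through DNS or /etc/hosts. A hypothetical /etc/hosts sketch (the addresses below are placeholders, not the real ones):
> {code:java}
> # /etc/hosts on every broker host (placeholder addresses)
> 10.0.0.1   kafka-1
> 10.0.0.2   kafka-2
> 10.0.0.3   kafka-3
> {code}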



--
This message was sent by Atlassian Jira
(v8.3.4#803005)