Posted to users@kafka.apache.org by "M. Manna" <ma...@gmail.com> on 2018/01/02 10:18:07 UTC

Unable to start 1st broker in a 3 node configuration

Hi All,

Firstly a very Happy New Year!

I have set up my 3-node configuration where each broker has an identical
configuration. They are on three different servers, but within the same
domain.

I have a very simple Windows script that does the following (a rough sketch
follows the list):

1) Starts each ZooKeeper instance with a 5-second delay.
2) Once the ZooKeepers are running, waits for 10 seconds.
3) Starts all brokers with a 5-second delay.
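
The sketch below is an approximation rather than the exact script; the install
path is the one from the log further down, and the config file names assume the
stock layout of the Kafka zip:

    start "zk1" cmd /c C:\kafka_2.10-0.10.2.1\bin\windows\zookeeper-server-start.bat C:\kafka_2.10-0.10.2.1\config\zookeeper.properties
    timeout /t 5 /nobreak >nul
    rem ...same for the other ZooKeeper instances, then give the ensemble time to settle
    timeout /t 10 /nobreak >nul
    start "kafka1" cmd /c C:\kafka_2.10-0.10.2.1\bin\windows\kafka-server-start.bat C:\kafka_2.10-0.10.2.1\config\server.properties
    timeout /t 5 /nobreak >nul
    rem ...same for the other brokers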

Just from today, I am unable to start my first broker. I tried to stop and
start the broker individually, but it didn't help. I also tried a full cleanup
(i.e. removing all ZK and Kafka logs) and starting again, but the first broker
still seems to be causing issues.

All of this is on kafka_2.10-0.10.2.1.

I suspect that the machine has somehow bound port 9092 to something and got
stuck in the process. At least, from the logs, this is what I am getting:

>
> log4j:ERROR Failed to rename [C:\kafka_2.10-0.10.2.1/logs/server.log] to
> [C:\kafka_2.10-0.10.2.1/logs/server.log.2018-01-02-09].
> [2018-01-02 10:04:28,204] INFO KafkaConfig values:
>         advertised.host.name = null
>         advertised.listeners = PLAINTEXT://localhost:9092
>         advertised.port = null
>         authorizer.class.name =
>         auto.create.topics.enable = true
>         auto.leader.rebalance.enable = true
>         background.threads = 10
>         broker.id = 1
>         broker.id.generation.enable = true
>         broker.rack = null
>         compression.type = gzip
>         connections.max.idle.ms = 600000
>         controlled.shutdown.enable = true
>         controlled.shutdown.max.retries = 10
>         controlled.shutdown.retry.backoff.ms = 3000
>         controller.socket.timeout.ms = 30000
>         create.topic.policy.class.name = null
>         default.replication.factor = 1
>         delete.topic.enable = true
>         fetch.purgatory.purge.interval.requests = 1000
>         group.max.session.timeout.ms = 300000
>         group.min.session.timeout.ms = 6000
>         host.name =
>         inter.broker.listener.name = null
>         inter.broker.protocol.version = 0.10.2-IV0
>         leader.imbalance.check.interval.seconds = 300
>         leader.imbalance.per.broker.percentage = 10
>         listener.security.protocol.map = PLAINTEXT:PLAINTEXT,SSL:SSL
>         listeners = null
>         log.cleaner.backoff.ms = 15000
>         log.cleaner.dedupe.buffer.size = 134217728
>         log.cleaner.delete.retention.ms = 86400000
>         log.cleaner.enable = true
>         log.cleaner.io.buffer.load.factor = 0.9
>         log.cleaner.io.buffer.size = 524288
>         log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
>         log.cleaner.min.cleanable.ratio = 0.5
>         log.cleaner.min.compaction.lag.ms = 0
>         log.cleaner.threads = 1
>         log.cleanup.policy = [delete]
>         log.dir = /tmp/kafka-logs
>         log.dirs = /kafka1
>         log.flush.interval.messages = 9223372036854775807
>         log.flush.interval.ms = null
>         log.flush.offset.checkpoint.interval.ms = 60000
>         log.flush.scheduler.interval.ms = 9223372036854775807
>         log.index.interval.bytes = 4096
>         log.index.size.max.bytes = 10485760
>         log.message.format.version = 0.10.2-IV0
>         log.message.timestamp.difference.max.ms = 9223372036854775807
>         log.message.timestamp.type = CreateTime
>         log.preallocate = false
>         log.retention.bytes = 20971520
>         log.retention.check.interval.ms = 300000
>         log.retention.hours = 2
>         log.retention.minutes = 15
>         log.retention.ms = null
>         log.roll.hours = 1
>         log.roll.jitter.hours = 0
>         log.roll.jitter.ms = null
>         log.roll.ms = null
>         log.segment.bytes = 10485760
>         log.segment.delete.delay.ms = 60000
>         max.connections.per.ip = 2147483647
>         max.connections.per.ip.overrides =
>         message.max.bytes = 1000012
>         metric.reporters = []
>         metrics.num.samples = 2
>         metrics.recording.level = INFO
>         metrics.sample.window.ms = 30000
>         min.insync.replicas = 1
>         num.io.threads = 24
>         num.network.threads = 9
>         num.partitions = 1
>         num.recovery.threads.per.data.dir = 1
>         num.replica.fetchers = 1
>         offset.metadata.max.bytes = 4096
>         offsets.commit.required.acks = -1
>         offsets.commit.timeout.ms = 5000
>         offsets.load.buffer.size = 5242880
>         offsets.retention.check.interval.ms = 300000
>         offsets.retention.minutes = 2880
>         offsets.topic.compression.codec = 0
>         offsets.topic.num.partitions = 50
>         offsets.topic.replication.factor = 3
>         offsets.topic.segment.bytes = 104857600
>         port = 9092
>         principal.builder.class = class
> org.apache.kafka.common.security.auth.DefaultPrincipalBuilder
>         producer.purgatory.purge.interval.requests = 1000
>         queued.max.requests = 500
>         quota.consumer.default = 9223372036854775807
>         quota.producer.default = 9223372036854775807
>         quota.window.num = 11
>         quota.window.size.seconds = 1
>         replica.fetch.backoff.ms = 1000
>         replica.fetch.max.bytes = 1048576
>         replica.fetch.min.bytes = 1
>         replica.fetch.response.max.bytes = 10485760
>         replica.fetch.wait.max.ms = 500
>         replica.high.watermark.checkpoint.interval.ms = 5000
>         replica.lag.time.max.ms = 10000
>         replica.socket.receive.buffer.bytes = 65536
>         replica.socket.timeout.ms = 30000
>         replication.quota.window.num = 11
>         replication.quota.window.size.seconds = 1
>         request.timeout.ms = 45000
>         reserved.broker.max.id = 1000
>         sasl.enabled.mechanisms = [GSSAPI]
>         sasl.kerberos.kinit.cmd = /usr/bin/kinit
>         sasl.kerberos.min.time.before.relogin = 60000
>         sasl.kerberos.principal.to.local.rules = [DEFAULT]
>         sasl.kerberos.service.name = null
>         sasl.kerberos.ticket.renew.jitter = 0.05
>         sasl.kerberos.ticket.renew.window.factor = 0.8
>         sasl.mechanism.inter.broker.protocol = GSSAPI
>         security.inter.broker.protocol = PLAINTEXT
>         socket.receive.buffer.bytes = 102400
>         socket.request.max.bytes = 999999999
>         socket.send.buffer.bytes = 102400
>         ssl.cipher.suites = null
>         ssl.client.auth = none
>         ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
>         ssl.endpoint.identification.algorithm = null
>         ssl.key.password = null
>         ssl.keymanager.algorithm = SunX509
>         ssl.keystore.location = null
>         ssl.keystore.password = null
>         ssl.keystore.type = JKS
>         ssl.protocol = TLS
>         ssl.provider = null
>         ssl.secure.random.implementation = null
>         ssl.trustmanager.algorithm = PKIX
>         ssl.truststore.location = null
>         ssl.truststore.password = null
>         ssl.truststore.type = JKS
>         unclean.leader.election.enable = true
>         zookeeper.connect =
> localhost:2181,eg2-pp-ifs-245:2181,eg2-pp-ifs-219:9092
>         zookeeper.connection.timeout.ms = 35000
>         zookeeper.session.timeout.ms = 20000
>         zookeeper.set.acl = false
>         zookeeper.sync.time.ms = 2000
>  (kafka.server.KafkaConfig)
> [2018-01-02 10:04:28,282] INFO starting (kafka.server.KafkaServer)
> [2018-01-02 10:04:28,297] INFO Connecting to zookeeper on
> localhost:2181,eg2-pp-ifs-245:2181,eg2-pp-ifs-219:9092
> (kafka.server.KafkaServer)
> [2018-01-02 10:04:28,313] INFO Starting ZkClient event thread.
> (org.I0Itec.zkclient.ZkEventThread)
> [2018-01-02 10:04:28,313] INFO Client
> environment:zookeeper.version=3.4.9-1757313, built on 08/23/2016 06:50 GMT
> (org.apache.zookeeper.ZooKeeper)
> [2018-01-02 10:04:28,313] INFO Client environment:host.name=
> localhost.eg2pp.net (org.apache.zookeeper.ZooKeeper)
> [2018-01-02 10:04:28,313] INFO Client environment:java.version=1.8.0_121
> (org.apache.zookeeper.ZooKeeper)
> [2018-01-02 10:04:28,313] INFO Client environment:java.vendor=Oracle
> Corporation (org.apache.zookeeper.ZooKeeper)
> [2018-01-02 10:04:28,313] INFO Client
> environment:java.home=C:\jdk1.8.0\jre (org.apache.zookeeper.ZooKeeper)
> [2018-01-02 10:04:28,313] INFO Client
> environment:java.class.path=C:\kafka_2.10-0.10.2.1\libs\aopalliance-repackaged-2.5.0-b05.jar;C:\kafka_2.10-0.10.2.1\libs\argparse4j-0.7.0.jar;C:\kafka_2.10-0.10.2.1\libs\connect-api-0.10.2.1.jar;C:\kafka_2.10-0.10.2.1\libs\connect-file-0.10.2.1.jar;C:\kafka_2.10-0.10.2.1\libs\connect-json-0.10.2.1.jar;C:\kafka_2.10-0.10.2.1\libs\connect-runtime-0.10.2.1.jar;C:\kafka_2.10-0.10.2.1\libs\connect-transforms-0.10.2.1.jar;C:\kafka_2.10-0.10.2.1\libs\guava-18.0.jar;C:\kafka_2.10-0.10.2.1\libs\hk2-api-2.5.0-b05.jar;C:\kafka_2.10-0.10.2.1\libs\hk2-locator-2.5.0-b05.jar;C:\kafka_2.10-0.10.2.1\libs\hk2-utils-2.5.0-b05.jar;C:\kafka_2.10-0.10.2.1\libs\jackson-annotations-2.8.0.jar;C:\kafka_2.10-0.10.2.1\libs\jackson-annotations-2.8.5.jar;C:\kafka_2.10-0.10.2.1\libs\jackson-core-2.8.5.jar;C:\kafka_2.10-0.10.2.1\libs\jackson-databind-2.8.5.jar;C:\kafka_2.10-0.10.2.1\libs\jackson-jaxrs-base-2.8.5.jar;C:\kafka_2.10-0.10.2.1\libs\jackson-jaxrs-json-provider-2.8.5.jar;C:\kafka_2.10-0.10.2.1\libs\jackson-module-jaxb-annotations-2.8.5.jar;C:\kafka_2.10-0.10.2.1\libs\javassist-3.20.0-GA.jar;C:\kafka_2.10-0.10.2.1\libs\javax.annotation-api-1.2.jar;C:\kafka_2.10-0.10.
>
> 2.1\libs\javax.inject-1.jar;C:\kafka_2.10-0.10.2.1\libs\javax.inject-2.5.0-b05.jar;C:\kafka_2.10-0.10.2.1\libs\javax.servlet-api-3.1.0.jar;C:\kafka_2.10-0.10.2.1\libs\javax.ws.rs-api-2.0.1.jar;C:\kafka_2.10-0.10.2.1\libs\jersey-client-2.24.jar;C:\kafka_2.10-0.10.2.1\libs\jersey-common-2.24.jar;C:\kafka_2.10-0.10.2.1\libs\jersey-container-servlet-2.24.jar;C:\kafka_2.10-0.10.2.1\libs\jersey-container-servlet-core-2.24.jar;C:\kafka_2.10-0.10.2.1\libs\jersey-guava-2.24.jar;C:\kafka_2.10-0.10.2.1\libs\jersey-media-jaxb-2.24.jar;C:\kafka_2.10-0.10.2.1\libs\jersey-server-2.24.jar;C:\kafka_2.10-0.10.2.1\libs\jetty-continuation-9.2.15.v20160210.jar;C:\kafka_2.10-0.10.2.1\libs\jetty-http-9.2.15.v20160210.jar;C:\kafka_2.10-0.10.2.1\libs\jetty-io-9.2.15.v20160210.jar;C:\kafka_2.10-0.10.2.1\libs\jetty-security-9.2.15.v20160210.jar;C:\kafka_2.10-0.10.2.1\libs\jetty-server-9.2.15.v20160210.jar;C:\kafka_2.10-0.10.2.1\libs\jetty-servlet-9.2.15.v20160210.jar;C:\kafka_2.10-0.10.2.1\libs\jetty-servlets-9.2.15.v20160210.jar;C:\kafka_2.10-0.10.2.1\libs\jetty-util-9.2.15.v20160210.jar;C:\kafka_2.10-0.10.2.1\libs\jopt-simple-5.0.3.jar;C:\kafka_2.10-0.10.2.1\libs\kafka-clients-0.10.2.1.jar;C:\kafka_2.10-0.
>
> 10.2.1\libs\kafka-log4j-appender-0.10.2.1.jar;C:\kafka_2.10-0.10.2.1\libs\kafka-streams-0.10.2.1.jar;C:\kafka_2.10-0.10.2.1\libs\kafka-streams-examples-0.10.2.1.jar;C:\kafka_2.10-0.10.2.1\libs\kafka-tools-0.10.2.1.jar;C:\kafka_2.10-0.10.2.1\libs\kafka_2.10-0.10.2.1-javadoc.jar;C:\kafka_2.10-0.10.2.1\libs\kafka_2.10-0.10.2.1-javadoc.jar.asc;C:\kafka_2.10-0.10.2.1\libs\kafka_2.10-0.10.2.1-scaladoc.jar;C:\kafka_2.10-0.10.2.1\libs\kafka_2.10-0.10.2.1-scaladoc.jar.asc;C:\kafka_2.10-0.10.2.1\libs\kafka_2.10-0.10.2.1-sources.jar;C:\kafka_2.10-0.10.2.1\libs\kafka_2.10-0.10.2.1-sources.jar.asc;C:\kafka_2.10-0.10.2.1\libs\kafka_2.10-0.10.2.1-test-sources.jar;C:\kafka_2.10-0.10.2.1\libs\kafka_2.10-0.10.2.1-test-sources.jar.asc;C:\kafka_2.10-0.10.2.1\libs\kafka_2.10-0.10.2.1-test.jar;C:\kafka_2.10-0.10.2.1\libs\kafka_2.10-0.10.2.1-test.jar.asc;C:\kafka_2.10-0.10.2.1\libs\kafka_2.10-0.10.2.1.jar;C:\kafka_2.10-0.10.2.1\libs\kafka_2.10-0.10.2.1.jar.asc;C:\kafka_2.10-0.10.2.1\libs\log4j-1.2.17.jar;C:\kafka_2.10-0.10.2.1\libs\lz4-1.3.0.jar;C:\kafka_2.10-0.10.2.1\libs\metrics-core-2.2.0.jar;C:\kafka_2.10-0.10.2.1\libs\osgi-resource-locator-1.0.1.jar;C:\kafka_2.10-0.10.2.1\libs\reflections-0.9.10.jar
> ;C:\kafka_2.10-0.10.2.1\libs\rocksdbjni-5.0.1.jar;C:\kafka_2.10-0.10.2.1\libs\scala-library-2.10.6.jar;C:\kafka_2.10-0.10.2.1\libs\slf4j-api-1.7.21.jar;C:\kafka_2.10-0.10.2.1\libs\slf4j-log4j12-1.7.21.jar;C:\kafka_2.10-0.10.2.1\libs\snappy-java-1.1.2.6.jar;C:\kafka_2.10-0.10.2.1\libs\validation-api-1.1.0.Final.jar;C:\kafka_2.10-0.10.2.1\libs\zkclient-0.10.jar;C:\kafka_2.10-0.10.2.1\libs\zookeeper-3.4.9.jar
> (org.apache.zookeeper.ZooKeeper)
> [2018-01-02 10:04:28,313] INFO Client
> environment:java.library.path=C:\jdk1.8.0\bin;C:\Windows\Sun\Java\bin;C:\Windows\system32;C:\Windows;C:\Program
> Files
> (x86)\Integrity\IntegrityClient10\bin;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;C:\PROGRA~2\INTEGR~1\Toolkit\mksnt;C:\CustomCommands\;C:\jdk1.8.0\bin;C:\apache-maven-3.5.0\bin;C:\apache-ant-1.9.6\bin;c:\choco\bin;C:\Program
> Files (x86)\Microsoft SQL Server\Client SDK\ODBC\130\Tools\Binn\;C:\Program
> Files (x86)\Microsoft SQL Server\130\Tools\Binn\;C:\Program Files
> (x86)\Microsoft SQL Server\130\DTS\Binn\;C:\Program Files (x86)\Microsoft
> SQL Server\130\Tools\Binn\ManagementStudio\;C:\Program Files\Microsoft SQL
> Server\130\DTS\Binn\;C:\Program Files\Microsoft SQL Server\Client
> SDK\ODBC\130\Tools\Binn\;C:\Program Files\Microsoft SQL
> Server\130\Tools\Binn\;. (org.apache.zookeeper.ZooKeeper)
> [2018-01-02 10:04:28,329] INFO Client
> environment:java.io.tmpdir=C:\Users\I318433\AppData\Local\Temp\3\
> (org.apache.zookeeper.ZooKeeper)
> [2018-01-02 10:04:28,329] INFO Client environment:java.compiler=<NA>
> (org.apache.zookeeper.ZooKeeper)
> [2018-01-02 10:04:28,329] INFO Client environment:os.name=Windows Server
> 2012 R2 (org.apache.zookeeper.ZooKeeper)
> [2018-01-02 10:04:28,329] INFO Client environment:os.arch=amd64
> (org.apache.zookeeper.ZooKeeper)
> [2018-01-02 10:04:28,329] INFO Client environment:os.version=6.3
> (org.apache.zookeeper.ZooKeeper)
> [2018-01-02 10:04:28,329] INFO Client environment:user.name=I318433
> (org.apache.zookeeper.ZooKeeper)
> [2018-01-02 10:04:28,329] INFO Client
> environment:user.home=C:\Users\I318433 (org.apache.zookeeper.ZooKeeper)
> [2018-01-02 10:04:28,329] INFO Client
> environment:user.dir=C:\kafka_2.10-0.10.2.1\bin\windows
> (org.apache.zookeeper.ZooKeeper)
> [2018-01-02 10:04:28,329] INFO Initiating client connection,
> connectString=localhost:2181,eg2-pp-ifs-245:2181,eg2-pp-ifs-219:9092
> sessionTimeout=20000 watcher=org.I0Itec.zkclient.ZkClient@a514af7
> (org.apache.zookeeper.ZooKeeper)
> [2018-01-02 10:04:28,360] INFO Waiting for keeper state SyncConnected
> (org.I0Itec.zkclient.ZkClient)
> [2018-01-02 10:04:28,360] INFO Opening socket connection to server
> eg2-pp-ifs-219.eg2pp.net/10.44.201.219:9092. Will not attempt to
> authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
> [2018-01-02 10:04:28,360] INFO Socket connection established to
> eg2-pp-ifs-219.eg2pp.net/10.44.201.219:9092, initiating session
> (org.apache.zookeeper.ClientCnxn)
> [2018-01-02 10:04:33,368] WARN Client session timed out, have not heard
> from server in 5008ms for sessionid 0x0 (org.apache.zookeeper.ClientCnxn)
> [2018-01-02 10:04:33,368] INFO Client session timed out, have not heard
> from server in 5008ms for sessionid 0x0, closing socket connection and
> attempting reconnect (org.apache.zookeeper.ClientCnxn)
> [2018-01-02 10:04:33,742] INFO Opening socket connection to server
> eg2-pp-ifs-245.eg2pp.net/10.44.201.245:2181. Will not attempt to
> authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
> [2018-01-02 10:04:33,742] INFO Socket connection established to
> eg2-pp-ifs-245.eg2pp.net/10.44.201.245:2181, initiating session
> (org.apache.zookeeper.ClientCnxn)
> [2018-01-02 10:04:33,742] INFO Session establishment complete on server
> eg2-pp-ifs-245.eg2pp.net/10.44.201.245:2181, sessionid =
> 0x260b64e7ac90002, negotiated timeout = 20000
> (org.apache.zookeeper.ClientCnxn)
> [2018-01-02 10:04:33,742] INFO zookeeper state changed (SyncConnected)
> (org.I0Itec.zkclient.ZkClient)
> [2018-01-02 10:04:33,851] INFO Cluster ID = u4Z2d2gUS8O9SLG5c51ljA
> (kafka.server.KafkaServer)
> [2018-01-02 10:04:33,851] WARN No meta.properties file under dir
> C:\kafka1\meta.properties (kafka.server.BrokerMetadataCheckpoint)
> [2018-01-02 10:04:33,883] INFO [ThrottledRequestReaper-Fetch], Starting
> (kafka.server.ClientQuotaManager$ThrottledRequestReaper)
> [2018-01-02 10:04:33,883] INFO [ThrottledRequestReaper-Produce], Starting
> (kafka.server.ClientQuotaManager$ThrottledRequestReaper)
> [2018-01-02 10:04:33,945] INFO Loading logs. (kafka.log.LogManager)
> [2018-01-02 10:04:33,945] INFO Logs loading complete in 0 ms.
> (kafka.log.LogManager)
> [2018-01-02 10:04:34,023] INFO Starting log cleanup with a period of
> 300000 ms. (kafka.log.LogManager)
> [2018-01-02 10:04:34,023] INFO Starting log flusher with a default period
> of 9223372036854775807 ms. (kafka.log.LogManager)
> log4j:ERROR Failed to rename [C:\kafka_2.10-0.10.2.1/logs/log-cleaner.log]
> to [C:\kafka_2.10-0.10.2.1/logs/log-cleaner.log.2018-01-02-09].
> [2018-01-02 10:04:34,086] FATAL [Kafka Server 1], Fatal error during
> KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
> kafka.common.KafkaException: Socket server failed to bind to 0.0.0.0:9092:
> Address already in use: bind.
>         at kafka.network.Acceptor.openServerSocket(SocketServer.scala:330)
>         at kafka.network.Acceptor.<init>(SocketServer.scala:255)
>         at
> kafka.network.SocketServer$$anonfun$startup$1.apply(SocketServer.scala:98)
>         at
> kafka.network.SocketServer$$anonfun$startup$1.apply(SocketServer.scala:90)
>         at
> scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
>         at
> scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
>         at kafka.network.SocketServer.startup(SocketServer.scala:90)
>         at kafka.server.KafkaServer.startup(KafkaServer.scala:215)
>         at
> kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:39)
>         at kafka.Kafka$.main(Kafka.scala:67)
>         at kafka.Kafka.main(Kafka.scala)
> Caused by: java.net.BindException: Address already in use: bind
>         at sun.nio.ch.Net.bind0(Native Method)
>         at sun.nio.ch.Net.bind(Net.java:433)
>         at sun.nio.ch.Net.bind(Net.java:425)
>         at
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
>         at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
>         at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:67)
>         at kafka.network.Acceptor.openServerSocket(SocketServer.scala:326)
>         ... 10 more
> [2018-01-02 10:04:34,101] INFO [Kafka Server 1], shutting down
> (kafka.server.KafkaServer)
> [2018-01-02 10:04:34,101] INFO [Socket Server on Broker 1], Shutting down
> (kafka.network.SocketServer)
> [2018-01-02 10:04:34,117] INFO [Socket Server on Broker 1], Shutdown
> completed (kafka.network.SocketServer)
> [2018-01-02 10:04:34,117] INFO Shutting down. (kafka.log.LogManager)
> [2018-01-02 10:04:34,117] INFO Shutdown complete. (kafka.log.LogManager)
> [2018-01-02 10:04:34,117] INFO Terminate ZkClient event thread.
> (org.I0Itec.zkclient.ZkEventThread)
> [2018-01-02 10:04:34,133] INFO Session: 0x260b64e7ac90002 closed
> (org.apache.zookeeper.ZooKeeper)
> [2018-01-02 10:04:34,133] INFO EventThread shut down for session:
> 0x260b64e7ac90002 (org.apache.zookeeper.ClientCnxn)
> [2018-01-02 10:04:34,133] INFO [Kafka Server 1], shut down completed
> (kafka.server.KafkaServer)
> [2018-01-02 10:04:34,133] FATAL Fatal error during KafkaServerStartable
> startup. Prepare to shutdown (kafka.server.KafkaServerStartable)
> kafka.common.KafkaException: Socket server failed to bind to 0.0.0.0:9092:
> Address already in use: bind.
>         at kafka.network.Acceptor.openServerSocket(SocketServer.scala:330)
>         at kafka.network.Acceptor.<init>(SocketServer.scala:255)
>         at
> kafka.network.SocketServer$$anonfun$startup$1.apply(SocketServer.scala:98)
>         at
> kafka.network.SocketServer$$anonfun$startup$1.apply(SocketServer.scala:90)
>         at
> scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
>         at
> scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
>         at kafka.network.SocketServer.startup(SocketServer.scala:90)
>         at kafka.server.KafkaServer.startup(KafkaServer.scala:215)
>         at
> kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:39)
>         at kafka.Kafka$.main(Kafka.scala:67)
>         at kafka.Kafka.main(Kafka.scala)
> Caused by: java.net.BindException: Address already in use: bind
>         at sun.nio.ch.Net.bind0(Native Method)
>         at sun.nio.ch.Net.bind(Net.java:433)
>         at sun.nio.ch.Net.bind(Net.java:425)
>         at
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
>         at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
>         at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:67)
>         at kafka.network.Acceptor.openServerSocket(SocketServer.scala:326)
>         ... 10 more
> [2018-01-02 10:04:34,148] INFO [Kafka Server 1], shutting down
> (kafka.server.KafkaServer)
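
To rule out another process already holding 9092, I am going to check from a
command prompt on that host, roughly like this (the PID is whatever netstat
reports, not a real value):

    netstat -ano | findstr :9092
    tasklist /FI "PID eq <PID reported by netstat>"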




Could someone please advise whether this is a typical problem? Also, is this
specific to Windows, or could it happen on Linux too?

Best Regards,

Re: Unable to start 1st broker in a 3 node configuration

Posted by Jordan Pilat <jr...@gmail.com>.
The logs say,

log4j:ERROR Failed to rename [C:\kafka_2.10-0.10.2.1/logs/log-cleaner.log] to [C:\kafka_2.10-0.10.2.1/logs/log-cleaner.log.2018-01-02-09].

Does that second filename already exist?
Does the user starting Kafka have permission to create files in that directory?
Does the user starting Kafka have permission to rename the first filename there?
Is that device writeable?
Is there disk space left on that device?
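
A quick way to check most of those from a command prompt on that box (the path
is taken from your log output; adjust if your install lives elsewhere):

    rem does the rotated file already exist, and how many bytes are free on the drive?
    dir "C:\kafka_2.10-0.10.2.1\logs"
    rem does the account running Kafka have modify rights on the log directory?
    icacls "C:\kafka_2.10-0.10.2.1\logs"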

- Jordan Pilat


On 2018-01-02 04:18, "M. Manna" <ma...@gmail.com> wrote: 
> [quoted text of the original message trimmed]

Re: Unable to start 1st broker in a 3 node configuration

Posted by Ted Yu <yu...@gmail.com>.
bq. zookeeper.connect = localhost:2181,eg2-pp-ifs-245:2181,eg2-pp-ifs-219:*9092*

Why did 9092 appear in the zookeeper setting?
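
If that is a typo, the last entry presumably should have been the ZooKeeper
client port (2181) like the other two; the log above also shows the client
opening a socket to eg2-pp-ifs-219:9092 and timing out after ~5 seconds before
falling back to eg2-pp-ifs-245:2181. Something like this in server.properties
(host names taken from your log):

    zookeeper.connect=localhost:2181,eg2-pp-ifs-245:2181,eg2-pp-ifs-219:2181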

Cheers

On Tue, Jan 2, 2018 at 2:18 AM, M. Manna <ma...@gmail.com> wrote:

> Hi All,
>
> Firstly a very Happy New Year!
>
> I set up my 3 node configuration where each of the broker is set to have
> identical configurations. They are in in three different servers, but
> within the same domain.
>
> I have got a very simply Windows script that does the following:
>
> 1) Starts each zookeeper instances with a 5 seconds delay.
> 2) Once the zookeepers are running, wait for 10 seconds.
> 3) Start all Brokers with a 5 seconds delay.
>
> Just from today, I am unable to start my first broker. I tried to
> individually stop and start the broker but it didn't help. Also, I tried to
> do a full cleanpup (i.e. remove all ZK and Kafka logs) to start again. But
> the first broker seem to be causing issues.
>
> All these are in Kafka_2.10-0.10.2.1.
>
> I suspect that the machine has somehow bound 9092 port to something and got
> stuck in the process. At least, from the logs this is what I am getting:
>
> >
> > log4j:ERROR Failed to rename [C:\kafka_2.10-0.10.2.1/logs/server.log] to
> > [C:\kafka_2.10-0.10.2.1/logs/server.log.2018-01-02-09].
> > [2018-01-02 10:04:28,204] INFO KafkaConfig values:
> >         advertised.host.name = null
> >         advertised.listeners = PLAINTEXT://localhost:9092
> >         advertised.port = null
> >         authorizer.class.name =
> >         auto.create.topics.enable = true
> >         auto.leader.rebalance.enable = true
> >         background.threads = 10
> >         broker.id = 1
> >         broker.id.generation.enable = true
> >         broker.rack = null
> >         compression.type = gzip
> >         connections.max.idle.ms = 600000
> >         controlled.shutdown.enable = true
> >         controlled.shutdown.max.retries = 10
> >         controlled.shutdown.retry.backoff.ms = 3000
> >         controller.socket.timeout.ms = 30000
> >         create.topic.policy.class.name = null
> >         default.replication.factor = 1
> >         delete.topic.enable = true
> >         fetch.purgatory.purge.interval.requests = 1000
> >         group.max.session.timeout.ms = 300000
> >         group.min.session.timeout.ms = 6000
> >         host.name =
> >         inter.broker.listener.name = null
> >         inter.broker.protocol.version = 0.10.2-IV0
> >         leader.imbalance.check.interval.seconds = 300
> >         leader.imbalance.per.broker.percentage = 10
> >         listener.security.protocol.map = PLAINTEXT:PLAINTEXT,SSL:SSL
> >         listeners = null
> >         log.cleaner.backoff.ms = 15000
> >         log.cleaner.dedupe.buffer.size = 134217728
> >         log.cleaner.delete.retention.ms = 86400000
> >         log.cleaner.enable = true
> >         log.cleaner.io.buffer.load.factor = 0.9
> >         log.cleaner.io.buffer.size = 524288
> >         log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
> >         log.cleaner.min.cleanable.ratio = 0.5
> >         log.cleaner.min.compaction.lag.ms = 0
> >         log.cleaner.threads = 1
> >         log.cleanup.policy = [delete]
> >         log.dir = /tmp/kafka-logs
> >         log.dirs = /kafka1
> >         log.flush.interval.messages = 9223372036854775807
> >         log.flush.interval.ms = null
> >         log.flush.offset.checkpoint.interval.ms = 60000
> >         log.flush.scheduler.interval.ms = 9223372036854775807
> >         log.index.interval.bytes = 4096
> >         log.index.size.max.bytes = 10485760
> >         log.message.format.version = 0.10.2-IV0
> >         log.message.timestamp.difference.max.ms = 9223372036854775807
> >         log.message.timestamp.type = CreateTime
> >         log.preallocate = false
> >         log.retention.bytes = 20971520
> >         log.retention.check.interval.ms = 300000
> >         log.retention.hours = 2
> >         log.retention.minutes = 15
> >         log.retention.ms = null
> >         log.roll.hours = 1
> >         log.roll.jitter.hours = 0
> >         log.roll.jitter.ms = null
> >         log.roll.ms = null
> >         log.segment.bytes = 10485760
> >         log.segment.delete.delay.ms = 60000
> >         max.connections.per.ip = 2147483647
> >         max.connections.per.ip.overrides =
> >         message.max.bytes = 1000012
> >         metric.reporters = []
> >         metrics.num.samples = 2
> >         metrics.recording.level = INFO
> >         metrics.sample.window.ms = 30000
> >         min.insync.replicas = 1
> >         num.io.threads = 24
> >         num.network.threads = 9
> >         num.partitions = 1
> >         num.recovery.threads.per.data.dir = 1
> >         num.replica.fetchers = 1
> >         offset.metadata.max.bytes = 4096
> >         offsets.commit.required.acks = -1
> >         offsets.commit.timeout.ms = 5000
> >         offsets.load.buffer.size = 5242880
> >         offsets.retention.check.interval.ms = 300000
> >         offsets.retention.minutes = 2880
> >         offsets.topic.compression.codec = 0
> >         offsets.topic.num.partitions = 50
> >         offsets.topic.replication.factor = 3
> >         offsets.topic.segment.bytes = 104857600
> >         port = 9092
> >         principal.builder.class = class
> > org.apache.kafka.common.security.auth.DefaultPrincipalBuilder
> >         producer.purgatory.purge.interval.requests = 1000
> >         queued.max.requests = 500
> >         quota.consumer.default = 9223372036854775807
> >         quota.producer.default = 9223372036854775807
> >         quota.window.num = 11
> >         quota.window.size.seconds = 1
> >         replica.fetch.backoff.ms = 1000
> >         replica.fetch.max.bytes = 1048576
> >         replica.fetch.min.bytes = 1
> >         replica.fetch.response.max.bytes = 10485760
> >         replica.fetch.wait.max.ms = 500
> >         replica.high.watermark.checkpoint.interval.ms = 5000
> >         replica.lag.time.max.ms = 10000
> >         replica.socket.receive.buffer.bytes = 65536
> >         replica.socket.timeout.ms = 30000
> >         replication.quota.window.num = 11
> >         replication.quota.window.size.seconds = 1
> >         request.timeout.ms = 45000
> >         reserved.broker.max.id = 1000
> >         sasl.enabled.mechanisms = [GSSAPI]
> >         sasl.kerberos.kinit.cmd = /usr/bin/kinit
> >         sasl.kerberos.min.time.before.relogin = 60000
> >         sasl.kerberos.principal.to.local.rules = [DEFAULT]
> >         sasl.kerberos.service.name = null
> >         sasl.kerberos.ticket.renew.jitter = 0.05
> >         sasl.kerberos.ticket.renew.window.factor = 0.8
> >         sasl.mechanism.inter.broker.protocol = GSSAPI
> >         security.inter.broker.protocol = PLAINTEXT
> >         socket.receive.buffer.bytes = 102400
> >         socket.request.max.bytes = 999999999
> >         socket.send.buffer.bytes = 102400
> >         ssl.cipher.suites = null
> >         ssl.client.auth = none
> >         ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
> >         ssl.endpoint.identification.algorithm = null
> >         ssl.key.password = null
> >         ssl.keymanager.algorithm = SunX509
> >         ssl.keystore.location = null
> >         ssl.keystore.password = null
> >         ssl.keystore.type = JKS
> >         ssl.protocol = TLS
> >         ssl.provider = null
> >         ssl.secure.random.implementation = null
> >         ssl.trustmanager.algorithm = PKIX
> >         ssl.truststore.location = null
> >         ssl.truststore.password = null
> >         ssl.truststore.type = JKS
> >         unclean.leader.election.enable = true
> >         zookeeper.connect =
> > localhost:2181,eg2-pp-ifs-245:2181,eg2-pp-ifs-219:9092
> >         zookeeper.connection.timeout.ms = 35000
> >         zookeeper.session.timeout.ms = 20000
> >         zookeeper.set.acl = false
> >         zookeeper.sync.time.ms = 2000
> >  (kafka.server.KafkaConfig)
> > [2018-01-02 10:04:28,282] INFO starting (kafka.server.KafkaServer)
> > [2018-01-02 10:04:28,297] INFO Connecting to zookeeper on
> > localhost:2181,eg2-pp-ifs-245:2181,eg2-pp-ifs-219:9092
> > (kafka.server.KafkaServer)
> > [2018-01-02 10:04:28,313] INFO Starting ZkClient event thread.
> > (org.I0Itec.zkclient.ZkEventThread)
> > [2018-01-02 10:04:28,313] INFO Client
> > environment:zookeeper.version=3.4.9-1757313, built on 08/23/2016 06:50
> GMT
> > (org.apache.zookeeper.ZooKeeper)
> > [2018-01-02 10:04:28,313] INFO Client environment:host.name=
> > localhost.eg2pp.net (org.apache.zookeeper.ZooKeeper)
> > [2018-01-02 10:04:28,313] INFO Client environment:java.version=1.8.0_121
> > (org.apache.zookeeper.ZooKeeper)
> > [2018-01-02 10:04:28,313] INFO Client environment:java.vendor=Oracle
> > Corporation (org.apache.zookeeper.ZooKeeper)
> > [2018-01-02 10:04:28,313] INFO Client
> > environment:java.home=C:\jdk1.8.0\jre (org.apache.zookeeper.ZooKeeper)
> > [2018-01-02 10:04:28,313] INFO Client environment:java.class.path=C:\kafka_2.10-0.10.2.1\libs\aopalliance-repackaged-2.5.0-b05.jar;C:\kafka_2.10-0.10.2.1\libs\argparse4j-0.7.0.jar;C:\kafka_2.10-0.10.2.1\libs\connect-api-0.10.2.1.jar;C:\kafka_2.10-0.10.2.1\libs\connect-file-0.10.2.1.jar;C:\kafka_2.10-0.10.2.1\libs\connect-json-0.10.2.1.jar;C:\kafka_2.10-0.10.2.1\libs\connect-runtime-0.10.2.1.jar;C:\kafka_2.10-0.10.2.1\libs\connect-transforms-0.10.2.1.jar;C:\kafka_2.10-0.10.2.1\libs\guava-18.0.jar;C:\kafka_2.10-0.10.2.1\libs\hk2-api-2.5.0-b05.jar;C:\kafka_2.10-0.10.2.1\libs\hk2-locator-2.5.0-b05.jar;C:\kafka_2.10-0.10.2.1\libs\hk2-utils-2.5.0-b05.jar;C:\kafka_2.10-0.10.2.1\libs\jackson-annotations-2.8.0.jar;C:\kafka_2.10-0.10.2.1\libs\jackson-annotations-2.8.5.jar;C:\kafka_2.10-0.10.2.1\libs\jackson-core-2.8.5.jar;C:\kafka_2.10-0.10.2.1\libs\jackson-databind-2.8.5.jar;C:\kafka_2.10-0.10.2.1\libs\jackson-jaxrs-base-2.8.5.jar;C:\kafka_2.10-0.10.2.1\libs\jackson-jaxrs-json-provider-2.8.5.jar;C:\kafka_2.10-0.10.2.1\libs\jackson-module-jaxb-annotations-2.8.5.jar;C:\kafka_2.10-0.10.2.1\libs\javassist-3.20.0-GA.jar;C:\kafka_2.10-0.10.2.1\libs\javax.annotation-api-1.2.jar;C:\kafka_2.10-0.10.2.1\libs\javax.inject-1.jar;C:\kafka_2.10-0.10.2.1\libs\javax.inject-2.5.0-b05.jar;C:\kafka_2.10-0.10.2.1\libs\javax.servlet-api-3.1.0.jar;C:\kafka_2.10-0.10.2.1\libs\javax.ws.rs-api-2.0.1.jar;C:\kafka_2.10-0.10.2.1\libs\jersey-client-2.24.jar;C:\kafka_2.10-0.10.2.1\libs\jersey-common-2.24.jar;C:\kafka_2.10-0.10.2.1\libs\jersey-container-servlet-2.24.jar;C:\kafka_2.10-0.10.2.1\libs\jersey-container-servlet-core-2.24.jar;C:\kafka_2.10-0.10.2.1\libs\jersey-guava-2.24.jar;C:\kafka_2.10-0.10.2.1\libs\jersey-media-jaxb-2.24.jar;C:\kafka_2.10-0.10.2.1\libs\jersey-server-2.24.jar;C:\kafka_2.10-0.10.2.1\libs\jetty-continuation-9.2.15.v20160210.jar;C:\kafka_2.10-0.10.2.1\libs\jetty-http-9.2.15.v20160210.jar;C:\kafka_2.10-0.10.2.1\libs\jetty-io-9.2.15.v20160210.jar;C:\kafka_2.10-0.10.2.1\libs\jetty-security-9.2.15.v20160210.jar;C:\kafka_2.10-0.10.2.1\libs\jetty-server-9.2.15.v20160210.jar;C:\kafka_2.10-0.10.2.1\libs\jetty-servlet-9.2.15.v20160210.jar;C:\kafka_2.10-0.10.2.1\libs\jetty-servlets-9.2.15.v20160210.jar;C:\kafka_2.10-0.10.2.1\libs\jetty-util-9.2.15.v20160210.jar;C:\kafka_2.10-0.10.2.1\libs\jopt-simple-5.0.3.jar;C:\kafka_2.10-0.10.2.1\libs\kafka-clients-0.10.2.1.jar;C:\kafka_2.10-0.10.2.1\libs\kafka-log4j-appender-0.10.2.1.jar;C:\kafka_2.10-0.10.2.1\libs\kafka-streams-0.10.2.1.jar;C:\kafka_2.10-0.10.2.1\libs\kafka-streams-examples-0.10.2.1.jar;C:\kafka_2.10-0.10.2.1\libs\kafka-tools-0.10.2.1.jar;C:\kafka_2.10-0.10.2.1\libs\kafka_2.10-0.10.2.1-javadoc.jar;C:\kafka_2.10-0.10.2.1\libs\kafka_2.10-0.10.2.1-javadoc.jar.asc;C:\kafka_2.10-0.10.2.1\libs\kafka_2.10-0.10.2.1-scaladoc.jar;C:\kafka_2.10-0.10.2.1\libs\kafka_2.10-0.10.2.1-scaladoc.jar.asc;C:\kafka_2.10-0.10.2.1\libs\kafka_2.10-0.10.2.1-sources.jar;C:\kafka_2.10-0.10.2.1\libs\kafka_2.10-0.10.2.1-sources.jar.asc;C:\kafka_2.10-0.10.2.1\libs\kafka_2.10-0.10.2.1-test-sources.jar;C:\kafka_2.10-0.10.2.1\libs\kafka_2.10-0.10.2.1-test-sources.jar.asc;C:\kafka_2.10-0.10.2.1\libs\kafka_2.10-0.10.2.1-test.jar;C:\kafka_2.10-0.10.2.1\libs\kafka_2.10-0.10.2.1-test.jar.asc;C:\kafka_2.10-0.10.2.1\libs\kafka_2.10-0.10.2.1.jar;C:\kafka_2.10-0.10.2.1\libs\kafka_2.10-0.10.2.1.jar.asc;C:\kafka_2.10-0.10.2.1\libs\log4j-1.2.17.jar;C:\kafka_2.10-0.10.2.1\libs\lz4-1.3.0.jar;C:\kafka_2.10-0.10.2.1\libs\metrics-core-2.2.0.jar;C:\kafka_2.10-0.10.2.1\libs\osgi-resource-locator-1.0.1.jar;C:\kafka_2.10-0.10.2.1\libs\reflections-0.9.10.jar;C:\kafka_2.10-0.10.2.1\libs\rocksdbjni-5.0.1.jar;C:\kafka_2.10-0.10.2.1\libs\scala-library-2.10.6.jar;C:\kafka_2.10-0.10.2.1\libs\slf4j-api-1.7.21.jar;C:\kafka_2.10-0.10.2.1\libs\slf4j-log4j12-1.7.21.jar;C:\kafka_2.10-0.10.2.1\libs\snappy-java-1.1.2.6.jar;C:\kafka_2.10-0.10.2.1\libs\validation-api-1.1.0.Final.jar;C:\kafka_2.10-0.10.2.1\libs\zkclient-0.10.jar;C:\kafka_2.10-0.10.2.1\libs\zookeeper-3.4.9.jar (org.apache.zookeeper.ZooKeeper)
> > [2018-01-02 10:04:28,313] INFO Client environment:java.library.path=C:\jdk1.8.0\bin;C:\Windows\Sun\Java\bin;C:\Windows\system32;C:\Windows;C:\Program Files (x86)\Integrity\IntegrityClient10\bin;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;C:\PROGRA~2\INTEGR~1\Toolkit\mksnt;C:\CustomCommands\;C:\jdk1.8.0\bin;C:\apache-maven-3.5.0\bin;C:\apache-ant-1.9.6\bin;c:\choco\bin;C:\Program Files (x86)\Microsoft SQL Server\Client SDK\ODBC\130\Tools\Binn\;C:\Program Files (x86)\Microsoft SQL Server\130\Tools\Binn\;C:\Program Files (x86)\Microsoft SQL Server\130\DTS\Binn\;C:\Program Files (x86)\Microsoft SQL Server\130\Tools\Binn\ManagementStudio\;C:\Program Files\Microsoft SQL Server\130\DTS\Binn\;C:\Program Files\Microsoft SQL Server\Client SDK\ODBC\130\Tools\Binn\;C:\Program Files\Microsoft SQL Server\130\Tools\Binn\;. (org.apache.zookeeper.ZooKeeper)
> > [2018-01-02 10:04:28,329] INFO Client
> > environment:java.io.tmpdir=C:\Users\I318433\AppData\Local\Temp\3\
> > (org.apache.zookeeper.ZooKeeper)
> > [2018-01-02 10:04:28,329] INFO Client environment:java.compiler=<NA>
> > (org.apache.zookeeper.ZooKeeper)
> > [2018-01-02 10:04:28,329] INFO Client environment:os.name=Windows Server
> > 2012 R2 (org.apache.zookeeper.ZooKeeper)
> > [2018-01-02 10:04:28,329] INFO Client environment:os.arch=amd64
> > (org.apache.zookeeper.ZooKeeper)
> > [2018-01-02 10:04:28,329] INFO Client environment:os.version=6.3
> > (org.apache.zookeeper.ZooKeeper)
> > [2018-01-02 10:04:28,329] INFO Client environment:user.name=I318433
> > (org.apache.zookeeper.ZooKeeper)
> > [2018-01-02 10:04:28,329] INFO Client
> > environment:user.home=C:\Users\I318433 (org.apache.zookeeper.ZooKeeper)
> > [2018-01-02 10:04:28,329] INFO Client
> > environment:user.dir=C:\kafka_2.10-0.10.2.1\bin\windows
> > (org.apache.zookeeper.ZooKeeper)
> > [2018-01-02 10:04:28,329] INFO Initiating client connection,
> > connectString=localhost:2181,eg2-pp-ifs-245:2181,eg2-pp-ifs-219:9092
> > sessionTimeout=20000 watcher=org.I0Itec.zkclient.ZkClient@a514af7
> > (org.apache.zookeeper.ZooKeeper)
> > [2018-01-02 10:04:28,360] INFO Waiting for keeper state SyncConnected
> > (org.I0Itec.zkclient.ZkClient)
> > [2018-01-02 10:04:28,360] INFO Opening socket connection to server eg2-pp-ifs-219.eg2pp.net/10.44.201.219:9092. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
> > [2018-01-02 10:04:28,360] INFO Socket connection established to
> > eg2-pp-ifs-219.eg2pp.net/10.44.201.219:9092, initiating session
> > (org.apache.zookeeper.ClientCnxn)
> > [2018-01-02 10:04:33,368] WARN Client session timed out, have not heard from server in 5008ms for sessionid 0x0 (org.apache.zookeeper.ClientCnxn)
> > [2018-01-02 10:04:33,368] INFO Client session timed out, have not heard
> > from server in 5008ms for sessionid 0x0, closing socket connection and
> > attempting reconnect (org.apache.zookeeper.ClientCnxn)
> > [2018-01-02 10:04:33,742] INFO Opening socket connection to server eg2-pp-ifs-245.eg2pp.net/10.44.201.245:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
> > [2018-01-02 10:04:33,742] INFO Socket connection established to
> > eg2-pp-ifs-245.eg2pp.net/10.44.201.245:2181, initiating session
> > (org.apache.zookeeper.ClientCnxn)
> > [2018-01-02 10:04:33,742] INFO Session establishment complete on server
> > eg2-pp-ifs-245.eg2pp.net/10.44.201.245:2181, sessionid =
> > 0x260b64e7ac90002, negotiated timeout = 20000
> > (org.apache.zookeeper.ClientCnxn)
> > [2018-01-02 10:04:33,742] INFO zookeeper state changed (SyncConnected)
> > (org.I0Itec.zkclient.ZkClient)
> > [2018-01-02 10:04:33,851] INFO Cluster ID = u4Z2d2gUS8O9SLG5c51ljA
> > (kafka.server.KafkaServer)
> > [2018-01-02 10:04:33,851] WARN No meta.properties file under dir
> > C:\kafka1\meta.properties (kafka.server.BrokerMetadataCheckpoint)
> > [2018-01-02 10:04:33,883] INFO [ThrottledRequestReaper-Fetch], Starting
> > (kafka.server.ClientQuotaManager$ThrottledRequestReaper)
> > [2018-01-02 10:04:33,883] INFO [ThrottledRequestReaper-Produce],
> Starting
> > (kafka.server.ClientQuotaManager$ThrottledRequestReaper)
> > [2018-01-02 10:04:33,945] INFO Loading logs. (kafka.log.LogManager)
> > [2018-01-02 10:04:33,945] INFO Logs loading complete in 0 ms.
> > (kafka.log.LogManager)
> > [2018-01-02 10:04:34,023] INFO Starting log cleanup with a period of
> > 300000 ms. (kafka.log.LogManager)
> > [2018-01-02 10:04:34,023] INFO Starting log flusher with a default period
> > of 9223372036854775807 ms. (kafka.log.LogManager)
> > log4j:ERROR Failed to rename [C:\kafka_2.10-0.10.2.1/logs/log-cleaner.log] to [C:\kafka_2.10-0.10.2.1/logs/log-cleaner.log.2018-01-02-09].
> > [2018-01-02 10:04:34,086] FATAL [Kafka Server 1], Fatal error during
> > KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
> > kafka.common.KafkaException: Socket server failed to bind to 0.0.0.0:9092: Address already in use: bind.
> >         at kafka.network.Acceptor.openServerSocket(SocketServer.scala:330)
> >         at kafka.network.Acceptor.<init>(SocketServer.scala:255)
> >         at kafka.network.SocketServer$$anonfun$startup$1.apply(SocketServer.scala:98)
> >         at kafka.network.SocketServer$$anonfun$startup$1.apply(SocketServer.scala:90)
> >         at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
> >         at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
> >         at kafka.network.SocketServer.startup(SocketServer.scala:90)
> >         at kafka.server.KafkaServer.startup(KafkaServer.scala:215)
> >         at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:39)
> >         at kafka.Kafka$.main(Kafka.scala:67)
> >         at kafka.Kafka.main(Kafka.scala)
> > Caused by: java.net.BindException: Address already in use: bind
> >         at sun.nio.ch.Net.bind0(Native Method)
> >         at sun.nio.ch.Net.bind(Net.java:433)
> >         at sun.nio.ch.Net.bind(Net.java:425)
> >         at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
> >         at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
> >         at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:67)
> >         at kafka.network.Acceptor.openServerSocket(SocketServer.scala:326)
> >         ... 10 more
> > [2018-01-02 10:04:34,101] INFO [Kafka Server 1], shutting down
> > (kafka.server.KafkaServer)
> > [2018-01-02 10:04:34,101] INFO [Socket Server on Broker 1], Shutting down
> > (kafka.network.SocketServer)
> > [2018-01-02 10:04:34,117] INFO [Socket Server on Broker 1], Shutdown
> > completed (kafka.network.SocketServer)
> > [2018-01-02 10:04:34,117] INFO Shutting down. (kafka.log.LogManager)
> > [2018-01-02 10:04:34,117] INFO Shutdown complete. (kafka.log.LogManager)
> > [2018-01-02 10:04:34,117] INFO Terminate ZkClient event thread.
> > (org.I0Itec.zkclient.ZkEventThread)
> > [2018-01-02 10:04:34,133] INFO Session: 0x260b64e7ac90002 closed
> > (org.apache.zookeeper.ZooKeeper)
> > [2018-01-02 10:04:34,133] INFO EventThread shut down for session:
> > 0x260b64e7ac90002 (org.apache.zookeeper.ClientCnxn)
> > [2018-01-02 10:04:34,133] INFO [Kafka Server 1], shut down completed
> > (kafka.server.KafkaServer)
> > [2018-01-02 10:04:34,133] FATAL Fatal error during KafkaServerStartable
> > startup. Prepare to shutdown (kafka.server.KafkaServerStartable)
> > kafka.common.KafkaException: Socket server failed to bind to 0.0.0.0:9092: Address already in use: bind.
> >         at kafka.network.Acceptor.openServerSocket(SocketServer.scala:330)
> >         at kafka.network.Acceptor.<init>(SocketServer.scala:255)
> >         at kafka.network.SocketServer$$anonfun$startup$1.apply(SocketServer.scala:98)
> >         at kafka.network.SocketServer$$anonfun$startup$1.apply(SocketServer.scala:90)
> >         at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
> >         at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
> >         at kafka.network.SocketServer.startup(SocketServer.scala:90)
> >         at kafka.server.KafkaServer.startup(KafkaServer.scala:215)
> >         at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:39)
> >         at kafka.Kafka$.main(Kafka.scala:67)
> >         at kafka.Kafka.main(Kafka.scala)
> > Caused by: java.net.BindException: Address already in use: bind
> >         at sun.nio.ch.Net.bind0(Native Method)
> >         at sun.nio.ch.Net.bind(Net.java:433)
> >         at sun.nio.ch.Net.bind(Net.java:425)
> >         at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
> >         at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
> >         at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:67)
> >         at kafka.network.Acceptor.openServerSocket(SocketServer.scala:326)
> >         ... 10 more
> > [2018-01-02 10:04:34,148] INFO [Kafka Server 1], shutting down
> > (kafka.server.KafkaServer)
>
> Could someone please advise whether this is a known problem? Is it specific to
> Windows, or could the same bind failure happen on Linux as well?
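
For anyone hitting the same "Address already in use: bind" failure, a useful first step is to see which process already owns port 9092 on the affected host. Below is a minimal check from a Windows command prompt (Windows Server 2012 R2 per the log above), using only stock cmd tools; the PID value in the second command is a placeholder to be replaced with the number reported by netstat:

    rem Show every socket bound to port 9092; the last column of each line is the owning PID.
    netstat -ano | findstr ":9092"

    rem Replace 1234 with the PID reported above to see which program is holding the port.
    tasklist /FI "PID eq 1234"

The same BindException can occur on Linux whenever another process (often an earlier broker instance that never fully exited) still holds the port, so the failure itself is not Windows-specific. Separately, the connect string in the log above (localhost:2181,eg2-pp-ifs-245:2181,eg2-pp-ifs-219:9092) lists the third ZooKeeper host on the broker port 9092 rather than 2181, which appears to be why the first connection attempt timed out before the client fell back to eg2-pp-ifs-245:2181; that looks worth double-checking, and a sketch for verifying the configured values follows after the sign-off.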
>
> Best Regards,
>
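
Following on from the note above, a quick way to confirm what a broker is actually configured with is to print the relevant lines from its properties file. This is only a sketch: the path assumes the stock config layout under the install directory seen in the log, so adjust it if the file lives elsewhere.

    rem Print the ZooKeeper and listener settings for this broker.
    rem The config path is an assumption (standard layout under the install directory from the log).
    findstr /I "zookeeper.connect listeners port" C:\kafka_2.10-0.10.2.1\config\server.properties

All three brokers should point at the same ZooKeeper ensemble, with every zookeeper.connect entry using a ZooKeeper client port (2181 here) rather than a broker listener port.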