Posted to users@kafka.apache.org by Oliver Eckle <ie...@gmx.de> on 2019/11/16 19:20:41 UTC

Kafka Broker do not recover after crash

Hello,



I have a Kafka cluster running in Kubernetes with 3 brokers and all
replication factors (topic, offsets) set to 2.

For whatever reason, one of the brokers crashed and restarted, and since
then it has been stuck in a restart/crash loop.

Any idea how to recover?



The whole log file looks like this:



19:15:42.58

19:15:42.58 Welcome to the Bitnami kafka container

19:15:42.58 Subscribe to project updates by watching
https://github.com/bitnami/bitnami-docker-kafka

19:15:42.58 Submit issues and feature requests at
https://github.com/bitnami/bitnami-docker-kafka/issues

19:15:42.58 Send us your feedback at containers@bitnami.com

19:15:42.59

19:15:42.59 INFO  ==> ** Starting Kafka setup **

19:15:42.83 WARN  ==> You set the environment variable
ALLOW_PLAINTEXT_LISTENER=yes. For safety reasons, do not use this flag in a
production environment.

19:15:42.84 INFO  ==> Initializing Kafka...

19:15:42.84 INFO  ==> No injected configuration files found, creating
default config files

19:15:43.83 INFO  ==> ** Kafka setup finished! **



19:15:43.84 INFO  ==> ** Starting Kafka **

[2019-11-16 19:15:49,625] INFO Registered kafka:type=kafka.Log4jController
MBean (kafka.utils.Log4jControllerRegistration$)

[2019-11-16 19:15:52,933] INFO Registered signal handlers for TERM, INT, HUP
(org.apache.kafka.common.utils.LoggingSignalHandler)

[2019-11-16 19:15:52,934] INFO starting (kafka.server.KafkaServer)

[2019-11-16 19:15:52,935] INFO Connecting to zookeeper on kafka-zookeeper
(kafka.server.KafkaServer)

[2019-11-16 19:15:53,230] INFO [ZooKeeperClient Kafka server] Initializing a
new session to kafka-zookeeper. (kafka.zookeeper.ZooKeeperClient)

[2019-11-16 19:15:53,331] INFO Client
environment:zookeeper.version=3.4.14-4c25d480e66aadd371de8bd2fd8da255ac140bc
f, built on 03/06/2019 16:18 GMT (org.apache.zookeeper.ZooKeeper)

[2019-11-16 19:15:53,331] INFO Client
environment:host.name=kafka-1.kafka-headless.bd-iot.svc.cluster.local
(org.apache.zookeeper.ZooKeeper)

[2019-11-16 19:15:53,331] INFO Client environment:java.version=1.8.0_232
(org.apache.zookeeper.ZooKeeper)

[2019-11-16 19:15:53,331] INFO Client environment:java.vendor=AdoptOpenJDK
(org.apache.zookeeper.ZooKeeper)

[2019-11-16 19:15:53,332] INFO Client
environment:java.home=/opt/bitnami/java (org.apache.zookeeper.ZooKeeper)

[2019-11-16 19:15:53,332] INFO Client
environment:java.class.path=/opt/bitnami/kafka/bin/../libs/activation-1.1.1.
jar:/opt/bitnami/kafka/bin/../libs/aopalliance-repackaged-2.5.0.jar:/opt/bit
nami/kafka/bin/../libs/argparse4j-0.7.0.jar:/opt/bitnami/kafka/bin/../libs/a
udience-annotations-0.5.0.jar:/opt/bitnami/kafka/bin/../libs/commons-lang3-3
.8.1.jar:/opt/bitnami/kafka/bin/../libs/connect-api-2.3.1.jar:/opt/bitnami/k
afka/bin/../libs/connect-basic-auth-extension-2.3.1.jar:/opt/bitnami/kafka/b
in/../libs/connect-file-2.3.1.jar:/opt/bitnami/kafka/bin/../libs/connect-jso
n-2.3.1.jar:/opt/bitnami/kafka/bin/../libs/connect-runtime-2.3.1.jar:/opt/bi
tnami/kafka/bin/../libs/connect-transforms-2.3.1.jar:/opt/bitnami/kafka/bin/
../libs/guava-20.0.jar:/opt/bitnami/kafka/bin/../libs/hk2-api-2.5.0.jar:/opt
/bitnami/kafka/bin/../libs/hk2-locator-2.5.0.jar:/opt/bitnami/kafka/bin/../l
ibs/hk2-utils-2.5.0.jar:/opt/bitnami/kafka/bin/../libs/jackson-annotations-2
.10.0.jar:/opt/bitnami/kafka/bin/../libs/jackson-core-2.10.0.jar:/opt/bitnam
i/kafka/bin/../libs/jackson-databind-2.10.0.jar:/opt/bitnami/kafka/bin/../li
bs/jackson-dataformat-csv-2.10.0.jar:/opt/bitnami/kafka/bin/../libs/jackson-
datatype-jdk8-2.10.0.jar:/opt/bitnami/kafka/bin/../libs/jackson-jaxrs-base-2
.10.0.jar:/opt/bitnami/kafka/bin/../libs/jackson-jaxrs-json-provider-2.10.0.
jar:/opt/bitnami/kafka/bin/../libs/jackson-module-jaxb-annotations-2.10.0.ja
r:/opt/bitnami/kafka/bin/../libs/jackson-module-paranamer-2.10.0.jar:/opt/bi
tnami/kafka/bin/../libs/jackson-module-scala_2.11-2.10.0.jar:/opt/bitnami/ka
fka/bin/../libs/jakarta.activation-api-1.2.1.jar:/opt/bitnami/kafka/bin/../l
ibs/jakarta.annotation-api-1.3.4.jar:/opt/bitnami/kafka/bin/../libs/jakarta.
inject-2.5.0.jar:/opt/bitnami/kafka/bin/../libs/jakarta.ws.rs-api-2.1.5.jar:
/opt/bitnami/kafka/bin/../libs/jakarta.xml.bind-api-2.3.2.jar:/opt/bitnami/k
afka/bin/../libs/javassist-3.22.0-CR2.jar:/opt/bitnami/kafka/bin/../libs/jav
ax.servlet-api-3.1.0.jar:/opt/bitnami/kafka/bin/../libs/javax.ws.rs-api-2.1.
1.jar:/opt/bitnami/kafka/bin/../libs/jaxb-api-2.3.0.jar:/opt/bitnami/kafka/b
in/../libs/jersey-client-2.28.jar:/opt/bitnami/kafka/bin/../libs/jersey-comm
on-2.28.jar:/opt/bitnami/kafka/bin/../libs/jersey-container-servlet-2.28.jar
:/opt/bitnami/kafka/bin/../libs/jersey-container-servlet-core-2.28.jar:/opt/
bitnami/kafka/bin/../libs/jersey-hk2-2.28.jar:/opt/bitnami/kafka/bin/../libs
/jersey-media-jaxb-2.28.jar:/opt/bitnami/kafka/bin/../libs/jersey-server-2.2
8.jar:/opt/bitnami/kafka/bin/../libs/jetty-client-9.4.18.v20190429.jar:/opt/
bitnami/kafka/bin/../libs/jetty-continuation-9.4.18.v20190429.jar:/opt/bitna
mi/kafka/bin/../libs/jetty-http-9.4.18.v20190429.jar:/opt/bitnami/kafka/bin/
../libs/jetty-io-9.4.18.v20190429.jar:/opt/bitnami/kafka/bin/../libs/jetty-s
ecurity-9.4.18.v20190429.jar:/opt/bitnami/kafka/bin/../libs/jetty-server-9.4
.18.v20190429.jar:/opt/bitnami/kafka/bin/../libs/jetty-servlet-9.4.18.v20190
429.jar:/opt/bitnami/kafka/bin/../libs/jetty-servlets-9.4.18.v20190429.jar:/
opt/bitnami/kafka/bin/../libs/jetty-util-9.4.18.v20190429.jar:/opt/bitnami/k
afka/bin/../libs/jopt-simple-5.0.4.jar:/opt/bitnami/kafka/bin/../libs/jsr305
-3.0.2.jar:/opt/bitnami/kafka/bin/../libs/kafka-clients-2.3.1.jar:/opt/bitna
mi/kafka/bin/../libs/kafka-log4j-appender-2.3.1.jar:/opt/bitnami/kafka/bin/.
./libs/kafka-streams-2.3.1.jar:/opt/bitnami/kafka/bin/../libs/kafka-streams-
examples-2.3.1.jar:/opt/bitnami/kafka/bin/../libs/kafka-streams-scala_2.11-2
.3.1.jar:/opt/bitnami/kafka/bin/../libs/kafka-streams-test-utils-2.3.1.jar:/
opt/bitnami/kafka/bin/../libs/kafka-tools-2.3.1.jar:/opt/bitnami/kafka/bin/.
./libs/kafka_2.11-2.3.1-sources.jar:/opt/bitnami/kafka/bin/../libs/kafka_2.1
1-2.3.1.jar:/opt/bitnami/kafka/bin/../libs/log4j-1.2.17.jar:/opt/bitnami/kaf
ka/bin/../libs/lz4-java-1.6.0.jar:/opt/bitnami/kafka/bin/../libs/maven-artif
act-3.6.1.jar:/opt/bitnami/kafka/bin/../libs/metrics-core-2.2.0.jar:/opt/bit
nami/kafka/bin/../libs/osgi-resource-locator-1.0.1.jar:/opt/bitnami/kafka/bi
n/../libs/paranamer-2.8.jar:/opt/bitnami/kafka/bin/../libs/plexus-utils-3.2.
0.jar:/opt/bitnami/kafka/bin/../libs/reflections-0.9.11.jar:/opt/bitnami/kaf
ka/bin/../libs/rocksdbjni-5.18.3.jar:/opt/bitnami/kafka/bin/../libs/scala-li
brary-2.11.12.jar:/opt/bitnami/kafka/bin/../libs/scala-logging_2.11-3.9.0.ja
r:/opt/bitnami/kafka/bin/../libs/scala-reflect-2.11.12.jar:/opt/bitnami/kafk
a/bin/../libs/slf4j-api-1.7.26.jar:/opt/bitnami/kafka/bin/../libs/slf4j-log4
j12-1.7.26.jar:/opt/bitnami/kafka/bin/../libs/snappy-java-1.1.7.3.jar:/opt/b
itnami/kafka/bin/../libs/spotbugs-annotations-3.1.9.jar:/opt/bitnami/kafka/b
in/../libs/validation-api-2.0.1.Final.jar:/opt/bitnami/kafka/bin/../libs/zkc
lient-0.11.jar:/opt/bitnami/kafka/bin/../libs/zookeeper-3.4.14.jar:/opt/bitn
ami/kafka/bin/../libs/zstd-jni-1.4.0-1.jar (org.apache.zookeeper.ZooKeeper)

[2019-11-16 19:15:53,333] INFO Client
environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64
:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)

[2019-11-16 19:15:53,333] INFO Client environment:java.io.tmpdir=/tmp
(org.apache.zookeeper.ZooKeeper)

[2019-11-16 19:15:53,334] INFO Client environment:java.compiler=<NA>
(org.apache.zookeeper.ZooKeeper)

[2019-11-16 19:15:53,334] INFO Client environment:os.name=Linux
(org.apache.zookeeper.ZooKeeper)

[2019-11-16 19:15:53,334] INFO Client environment:os.arch=amd64
(org.apache.zookeeper.ZooKeeper)

[2019-11-16 19:15:53,335] INFO Client
environment:os.version=4.15.0-1060-azure (org.apache.zookeeper.ZooKeeper)

[2019-11-16 19:15:53,336] INFO Client environment:user.name=?
(org.apache.zookeeper.ZooKeeper)

[2019-11-16 19:15:53,336] INFO Client environment:user.home=?
(org.apache.zookeeper.ZooKeeper)

[2019-11-16 19:15:53,336] INFO Client environment:user.dir=/
(org.apache.zookeeper.ZooKeeper)

[2019-11-16 19:15:53,338] INFO Initiating client connection,
connectString=kafka-zookeeper sessionTimeout=6000
watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@31304f14
(org.apache.zookeeper.ZooKeeper)

[2019-11-16 19:15:53,528] INFO [ZooKeeperClient Kafka server] Waiting until
connected. (kafka.zookeeper.ZooKeeperClient)

[2019-11-16 19:15:53,545] INFO Opening socket connection to server
kafka-zookeeper/10.0.215.214:2181. Will not attempt to authenticate using
SASL (unknown error) (org.apache.zookeeper.ClientCnxn)

[2019-11-16 19:15:53,552] INFO Socket connection established to
kafka-zookeeper/10.0.215.214:2181, initiating session
(org.apache.zookeeper.ClientCnxn)

[2019-11-16 19:15:53,627] INFO Session establishment complete on server
kafka-zookeeper/10.0.215.214:2181, sessionid = 0x10000810b780070, negotiated
timeout = 6000 (org.apache.zookeeper.ClientCnxn)

[2019-11-16 19:15:53,630] INFO [ZooKeeperClient Kafka server] Connected.
(kafka.zookeeper.ZooKeeperClient)

[2019-11-16 19:15:55,034] INFO Cluster ID = dvSQ1W2US72rcqGef9tm6w
(kafka.server.KafkaServer)

[2019-11-16 19:15:55,637] INFO KafkaConfig values:

                advertised.host.name = null

                advertised.listeners =
PLAINTEXT://kafka-1.kafka-headless.bd-iot.svc.cluster.local:9092

                advertised.port = null

                alter.config.policy.class.name = null

                alter.log.dirs.replication.quota.window.num = 11

                alter.log.dirs.replication.quota.window.size.seconds = 1

                authorizer.class.name =

                auto.create.topics.enable = true

                auto.leader.rebalance.enable = true

                background.threads = 10

                broker.id = -1

                broker.id.generation.enable = true

                broker.rack = null

                client.quota.callback.class = null

                compression.type = producer

                connection.failed.authentication.delay.ms = 100

                connections.max.idle.ms = 600000

                connections.max.reauth.ms = 0

                control.plane.listener.name = null

                controlled.shutdown.enable = true

                controlled.shutdown.max.retries = 3

                controlled.shutdown.retry.backoff.ms = 5000

                controller.socket.timeout.ms = 30000

                create.topic.policy.class.name = null

                default.replication.factor = 2

                delegation.token.expiry.check.interval.ms = 3600000

                delegation.token.expiry.time.ms = 86400000

                delegation.token.master.key = null

                delegation.token.max.lifetime.ms = 604800000

                delete.records.purgatory.purge.interval.requests = 1

                delete.topic.enable = true

                fetch.purgatory.purge.interval.requests = 1000

                group.initial.rebalance.delay.ms = 0

                group.max.session.timeout.ms = 1800000

                group.max.size = 2147483647

                group.min.session.timeout.ms = 6000

                host.name =

                inter.broker.listener.name = null

                inter.broker.protocol.version = 2.3-IV1

                kafka.metrics.polling.interval.secs = 10

                kafka.metrics.reporters = []

                leader.imbalance.check.interval.seconds = 300

                leader.imbalance.per.broker.percentage = 10

                listener.security.protocol.map =
PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL

                listeners = PLAINTEXT://:9092

                log.cleaner.backoff.ms = 15000

                log.cleaner.dedupe.buffer.size = 134217728

                log.cleaner.delete.retention.ms = 86400000

                log.cleaner.enable = true

                log.cleaner.io.buffer.load.factor = 0.9

                log.cleaner.io.buffer.size = 524288

                log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308

                log.cleaner.max.compaction.lag.ms = 9223372036854775807

                log.cleaner.min.cleanable.ratio = 0.5

                log.cleaner.min.compaction.lag.ms = 0

                log.cleaner.threads = 1

                log.cleanup.policy = [delete]

                log.dir = /tmp/kafka-logs

                log.dirs = /bitnami/kafka/data

                log.flush.interval.messages = 10000

                log.flush.interval.ms = 1000

                log.flush.offset.checkpoint.interval.ms = 60000

                log.flush.scheduler.interval.ms = 9223372036854775807

                log.flush.start.offset.checkpoint.interval.ms = 60000

                log.index.interval.bytes = 4096

                log.index.size.max.bytes = 10485760

                log.message.downconversion.enable = true

                log.message.format.version = 2.3-IV1

                log.message.timestamp.difference.max.ms =
9223372036854775807

                log.message.timestamp.type = CreateTime

                log.preallocate = false

                log.retention.bytes = 1073741824

                log.retention.check.interval.ms = 300000

                log.retention.hours = 168

                log.retention.minutes = null

                log.retention.ms = null

                log.roll.hours = 168

                log.roll.jitter.hours = 0

                log.roll.jitter.ms = null

                log.roll.ms = null

                log.segment.bytes = 1073741824

                log.segment.delete.delay.ms = 60000

                max.connections = 2147483647

                max.connections.per.ip = 2147483647

                max.connections.per.ip.overrides =

                max.incremental.fetch.session.cache.slots = 1000

                message.max.bytes = 1000012

                metric.reporters = []

                metrics.num.samples = 2

                metrics.recording.level = INFO

                metrics.sample.window.ms = 30000

                min.insync.replicas = 1

                num.io.threads = 8

                num.network.threads = 3

                num.partitions = 1

                num.recovery.threads.per.data.dir = 1

                num.replica.alter.log.dirs.threads = null

                num.replica.fetchers = 1

                offset.metadata.max.bytes = 4096

                offsets.commit.required.acks = -1

                offsets.commit.timeout.ms = 5000

                offsets.load.buffer.size = 5242880

                offsets.retention.check.interval.ms = 600000

                offsets.retention.minutes = 10080

                offsets.topic.compression.codec = 0

                offsets.topic.num.partitions = 50

                offsets.topic.replication.factor = 2

                offsets.topic.segment.bytes = 104857600

                password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding

                password.encoder.iterations = 4096

                password.encoder.key.length = 128

                password.encoder.keyfactory.algorithm = null

                password.encoder.old.secret = null

                password.encoder.secret = null

                port = 9092

                principal.builder.class = null

                producer.purgatory.purge.interval.requests = 1000

                queued.max.request.bytes = -1

                queued.max.requests = 500

                quota.consumer.default = 9223372036854775807

                quota.producer.default = 9223372036854775807

                quota.window.num = 11

                quota.window.size.seconds = 1

                replica.fetch.backoff.ms = 1000

                replica.fetch.max.bytes = 1048576

                replica.fetch.min.bytes = 1

                replica.fetch.response.max.bytes = 10485760

                replica.fetch.wait.max.ms = 500

                replica.high.watermark.checkpoint.interval.ms = 5000

                replica.lag.time.max.ms = 10000

                replica.socket.receive.buffer.bytes = 65536

                replica.socket.timeout.ms = 30000

                replication.quota.window.num = 11

                replication.quota.window.size.seconds = 1

                request.timeout.ms = 30000

                reserved.broker.max.id = 1000

                sasl.client.callback.handler.class = null

                sasl.enabled.mechanisms = [GSSAPI]

                sasl.jaas.config = null

                sasl.kerberos.kinit.cmd = /usr/bin/kinit

                sasl.kerberos.min.time.before.relogin = 60000

                sasl.kerberos.principal.to.local.rules = [DEFAULT]

                sasl.kerberos.service.name = null

                sasl.kerberos.ticket.renew.jitter = 0.05

                sasl.kerberos.ticket.renew.window.factor = 0.8

                sasl.login.callback.handler.class = null

                sasl.login.class = null

                sasl.login.refresh.buffer.seconds = 300

                sasl.login.refresh.min.period.seconds = 60

                sasl.login.refresh.window.factor = 0.8

                sasl.login.refresh.window.jitter = 0.05

                sasl.mechanism.inter.broker.protocol = GSSAPI

                sasl.server.callback.handler.class = null

                security.inter.broker.protocol = PLAINTEXT

                socket.receive.buffer.bytes = 102400

                socket.request.max.bytes = 104857600

                socket.send.buffer.bytes = 102400

                ssl.cipher.suites = []

                ssl.client.auth = none

                ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]

                ssl.endpoint.identification.algorithm = https

                ssl.key.password = null

                ssl.keymanager.algorithm = SunX509

                ssl.keystore.location = null

                ssl.keystore.password = null

                ssl.keystore.type = JKS

                ssl.principal.mapping.rules = [DEFAULT]

                ssl.protocol = TLS

                ssl.provider = null

                ssl.secure.random.implementation = null

                ssl.trustmanager.algorithm = PKIX

                ssl.truststore.location = null

                ssl.truststore.password = null

                ssl.truststore.type = JKS

                transaction.abort.timed.out.transaction.cleanup.interval.ms
= 60000

                transaction.max.timeout.ms = 900000

                transaction.remove.expired.transaction.cleanup.interval.ms =
3600000

                transaction.state.log.load.buffer.size = 5242880

                transaction.state.log.min.isr = 2

                transaction.state.log.num.partitions = 50

                transaction.state.log.replication.factor = 2

                transaction.state.log.segment.bytes = 104857600

                transactional.id.expiration.ms = 604800000

                unclean.leader.election.enable = false

                zookeeper.connect = kafka-zookeeper

                zookeeper.connection.timeout.ms = 6000

                zookeeper.max.in.flight.requests = 10

                zookeeper.session.timeout.ms = 6000

                zookeeper.set.acl = false

                zookeeper.sync.time.ms = 2000

(kafka.server.KafkaConfig)

[2019-11-16 19:15:55,829] INFO KafkaConfig values: [same values as above,
logged a second time] (kafka.server.KafkaConfig)

[2019-11-16 19:15:56,039] INFO [ThrottledChannelReaper-Fetch]: Starting
(kafka.server.ClientQuotaManager$ThrottledChannelReaper)

[2019-11-16 19:15:56,044] INFO [ThrottledChannelReaper-Produce]: Starting
(kafka.server.ClientQuotaManager$ThrottledChannelReaper)

[2019-11-16 19:15:56,046] INFO [ThrottledChannelReaper-Request]: Starting
(kafka.server.ClientQuotaManager$ThrottledChannelReaper)

[2019-11-16 19:15:56,335] INFO Loading logs. (kafka.log.LogManager)

[2019-11-16 19:15:56,638] INFO [Log partition=__consumer_offsets-4,
dir=/bitnami/kafka/data] Recovering unflushed segment 0 (kafka.log.Log)

[2019-11-16 19:15:56,727] INFO [Log partition=__consumer_offsets-4,
dir=/bitnami/kafka/data] Loading producer state till offset 0 with message
format version 2 (kafka.log.Log)

[2019-11-16 19:15:56,931] INFO [Log partition=__consumer_offsets-4,
dir=/bitnami/kafka/data] Loading producer state till offset 0 with message
format version 2 (kafka.log.Log)

[2019-11-16 19:15:56,933] INFO [Log partition=__consumer_offsets-4,
dir=/bitnami/kafka/data] Completed load of log with 1 segments, log start
offset 0 and log end offset 0 in 399 ms (kafka.log.Log)

[2019-11-16 19:15:57,029] INFO [Log partition=__consumer_offsets-22,
dir=/bitnami/kafka/data] Recovering unflushed segment 0 (kafka.log.Log)

[2019-11-16 19:15:57,029] INFO [Log partition=__consumer_offsets-22,
dir=/bitnami/kafka/data] Loading producer state till offset 0 with message
format version 2 (kafka.log.Log)

[2019-11-16 19:15:57,031] INFO [Log partition=__consumer_offsets-22,
dir=/bitnami/kafka/data] Loading producer state till offset 0 with message
format version 2 (kafka.log.Log)

[2019-11-16 19:15:57,032] INFO [Log partition=__consumer_offsets-22,
dir=/bitnami/kafka/data] Completed load of log with 1 segments, log start
offset 0 and log end offset 0 in 6 ms (kafka.log.Log)

[2019-11-16 19:15:57,147] INFO [Log partition=__consumer_offsets-32,
dir=/bitnami/kafka/data] Recovering unflushed segment 0 (kafka.log.Log)

[2019-11-16 19:15:57,148] INFO [Log partition=__consumer_offsets-32,
dir=/bitnami/kafka/data] Loading producer state till offset 0 with message
format version 2 (kafka.log.Log)

[2019-11-16 19:15:57,150] INFO [Log partition=__consumer_offsets-32,
dir=/bitnami/kafka/data] Loading producer state till offset 0 with message
format version 2 (kafka.log.Log)

[2019-11-16 19:15:57,226] INFO [Log partition=__consumer_offsets-32,
dir=/bitnami/kafka/data] Completed load of log with 1 segments, log start
offset 0 and log end offset 0 in 189 ms (kafka.log.Log)

[2019-11-16 19:15:57,330] INFO [Log partition=__consumer_offsets-39,
dir=/bitnami/kafka/data] Recovering unflushed segment 0 (kafka.log.Log)

[2019-11-16 19:15:57,330] INFO [Log partition=__consumer_offsets-39,
dir=/bitnami/kafka/data] Loading producer state till offset 0 with message
format version 2 (kafka.log.Log)

[2019-11-16 19:15:57,333] INFO [Log partition=__consumer_offsets-39,
dir=/bitnami/kafka/data] Loading producer state till offset 0 with message
format version 2 (kafka.log.Log)

[2019-11-16 19:15:57,334] INFO [Log partition=__consumer_offsets-39,
dir=/bitnami/kafka/data] Completed load of log with 1 segments, log start
offset 0 and log end offset 0 in 6 ms (kafka.log.Log)

[2019-11-16 19:15:57,429] INFO [Log partition=__consumer_offsets-26,
dir=/bitnami/kafka/data] Recovering unflushed segment 0 (kafka.log.Log)

[2019-11-16 19:15:57,429] INFO [Log partition=__consumer_offsets-26,
dir=/bitnami/kafka/data] Loading producer state till offset 0 with message
format version 2 (kafka.log.Log)

[2019-11-16 19:15:57,431] INFO [Log partition=__consumer_offsets-26,
dir=/bitnami/kafka/data] Loading producer state till offset 0 with message
format version 2 (kafka.log.Log)

[2019-11-16 19:15:57,432] INFO [Log partition=__consumer_offsets-26,
dir=/bitnami/kafka/data] Completed load of log with 1 segments, log start
offset 0 and log end offset 0 in 5 ms (kafka.log.Log)

[2019-11-16 19:15:57,527] INFO [Log partition=__consumer_offsets-44,
dir=/bitnami/kafka/data] Recovering unflushed segment 0 (kafka.log.Log)

[2019-11-16 19:15:57,529] INFO [Log partition=__consumer_offsets-44,
dir=/bitnami/kafka/data] Loading producer state till offset 0 with message
format version 2 (kafka.log.Log)

[2019-11-16 19:15:57,533] INFO [Log partition=__consumer_offsets-44,
dir=/bitnami/kafka/data] Loading producer state till offset 0 with message
format version 2 (kafka.log.Log)

[2019-11-16 19:15:57,534] INFO [Log partition=__consumer_offsets-44,
dir=/bitnami/kafka/data] Completed load of log with 1 segments, log start
offset 0 and log end offset 0 in 8 ms (kafka.log.Log)

[2019-11-16 19:15:57,634] INFO [Log partition=__consumer_offsets-25,
dir=/bitnami/kafka/data] Recovering unflushed segment 0 (kafka.log.Log)

[2019-11-16 19:15:57,635] INFO [Log partition=__consumer_offsets-25,
dir=/bitnami/kafka/data] Loading producer state till offset 0 with message
format version 2 (kafka.log.Log)

[2019-11-16 19:15:57,637] INFO [Log partition=__consumer_offsets-25,
dir=/bitnami/kafka/data] Loading producer state till offset 0 with message
format version 2 (kafka.log.Log)

[2019-11-16 19:15:57,638] INFO [Log partition=__consumer_offsets-25,
dir=/bitnami/kafka/data] Completed load of log with 1 segments, log start
offset 0 and log end offset 0 in 7 ms (kafka.log.Log)

[2019-11-16 19:15:57,730] INFO [Log partition=__consumer_offsets-8,
dir=/bitnami/kafka/data] Recovering unflushed segment 0 (kafka.log.Log)

[2019-11-16 19:15:57,730] INFO [Log partition=__consumer_offsets-8,
dir=/bitnami/kafka/data] Loading producer state till offset 0 with message
format version 2 (kafka.log.Log)

[2019-11-16 19:15:57,733] INFO [Log partition=__consumer_offsets-8,
dir=/bitnami/kafka/data] Loading producer state till offset 0 with message
format version 2 (kafka.log.Log)

[2019-11-16 19:15:57,733] INFO [Log partition=__consumer_offsets-8,
dir=/bitnami/kafka/data] Completed load of log with 1 segments, log start
offset 0 and log end offset 0 in 5 ms (kafka.log.Log)

[2019-11-16 19:15:57,741] INFO [Log partition=batch.alarm-0,
dir=/bitnami/kafka/data] Recovering unflushed segment 0 (kafka.log.Log)

[2019-11-16 19:15:57,826] INFO [Log partition=batch.alarm-0,
dir=/bitnami/kafka/data] Loading producer state till offset 0 with message
format version 2 (kafka.log.Log)

[2019-11-16 19:15:57,830] INFO [Log partition=batch.alarm-0,
dir=/bitnami/kafka/data] Loading producer state till offset 0 with message
format version 2 (kafka.log.Log)

[2019-11-16 19:15:57,833] INFO [Log partition=batch.alarm-0,
dir=/bitnami/kafka/data] Completed load of log with 1 segments, log start
offset 0 and log end offset 0 in 94 ms (kafka.log.Log)

[2019-11-16 19:15:57,936] INFO [Log partition=__consumer_offsets-38,
dir=/bitnami/kafka/data] Recovering unflushed segment 33982499
(kafka.log.Log)

[2019-11-16 19:15:57,937] INFO [Log partition=__consumer_offsets-38,
dir=/bitnami/kafka/data] Loading producer state till offset 33982499 with
message format version 2 (kafka.log.Log)

[2019-11-16 19:15:57,941] INFO [ProducerStateManager
partition=__consumer_offsets-38] Loading producer state from snapshot file
'/bitnami/kafka/data/__consumer_offsets-38/00000000000033982499.snapshot'
(kafka.log.ProducerStateManager)

[2019-11-16 19:16:10,208] INFO Terminating process due to signal SIGTERM
(org.apache.kafka.common.utils.LoggingSignalHandler)

[2019-11-16 19:16:10,217] INFO [KafkaServer id=1012] shutting down
(kafka.server.KafkaServer)

[2019-11-16 19:16:10,226] ERROR [KafkaServer id=1012] Fatal error during
KafkaServer shutdown. (kafka.server.KafkaServer)

java.lang.IllegalStateException: Kafka server is still starting up, cannot
shut down!

                at kafka.server.KafkaServer.shutdown(KafkaServer.scala:584)

                at
kafka.server.KafkaServerStartable.shutdown(KafkaServerStartable.scala:48)

                at kafka.Kafka$$anon$1.run(Kafka.scala:81)

[2019-11-16 19:16:10,233] ERROR Halting Kafka.
(kafka.server.KafkaServerStartable)



Kind Regards

Oliver


Re: Kafka Broker do not recover after crash

Posted by Eric Azama <ea...@gmail.com>.
Hi Oliver,

The first line of your log has a timestamp of 19:15:42, and the last few
lines show that the container received a SIGTERM at 19:16:10. That looks
suspiciously close to 30 seconds after Kubernetes started the pod. Does
your deployment have a timeout that terminates the container if it's not
ready within 30 seconds?

Broker start-up times can get rather long depending on things like the
number of partitions, since every log has to be reloaded (and, after an
unclean shutdown, recovered -- your log shows exactly those "Recovering
unflushed segment" messages). You might need to adjust your readiness
timeout to accommodate that.
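
For example, something along these lines on the broker container gives it
more headroom (a sketch only -- the exact keys live in your StatefulSet or
Helm values, and the numbers are guesses you will need to tune):

    readinessProbe:
      tcpSocket:
        port: 9092              # the PLAINTEXT listener from your config
      initialDelaySeconds: 60   # allow more than ~30s before the first check
      periodSeconds: 10
      failureThreshold: 6
    livenessProbe:
      tcpSocket:
        port: 9092
      initialDelaySeconds: 120  # don't SIGTERM the pod mid log-recovery
      periodSeconds: 10
      failureThreshold: 6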

On Sun, Nov 17, 2019 at 2:59 AM M. Manna <ma...@gmail.com> wrote:

> Hi,
>
> On Sat, 16 Nov 2019 at 19:54, Oliver Eckle <ie...@gmx.de> wrote:
>
> > Hi,
> >
> > Yes, it is intentional, but only because I don't know any better and
> > wanted to save a few resources.
> >
>
> I never understood the benefit of having more brokers than replicas with
> the intention of saving resources. A lot of people do that, and the Kafka
> community seems to be okay with it (i.e. there is no documentation or
> caution against doing it). Please make sure you use your brokers to their
> full extent.
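>
> (If you do move to a replication factor of 3 later, note that only new
> topics pick it up from default.replication.factor; existing topics keep
> their factor until you run a partition reassignment. A sketch, with the
> extra broker ids and the JSON abbreviated purely as an illustration:
>
>     # reassign.json maps each partition to three broker ids, e.g.
>     # {"version":1,"partitions":[{"topic":"batch.alarm","partition":0,
>     #                             "replicas":[1001,1002,1012]}]}
>     kafka-reassign-partitions.sh --zookeeper kafka-zookeeper:2181 \
>       --reassignment-json-file reassign.json --execute
> )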
>
> For your case, I believe the log and index files stored on the affected
> broker (or rather, on the PV attached to it, if you have one) may have
> been corrupted. The best way (rather than debugging and investigating
> logs endlessly) is to simply delete the pod and let it start again. Also,
> make sure that it doesn't refer to the old files (if you have a
> PV/StatefulSet with it). It's important that upon restart the broker
> rebuilds all the data files itself rather than referring to the
> previously stored ones.
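>
> A minimal sketch of that, assuming the StatefulSet is named "kafka" in
> namespace "bd-iot" (as the hostnames in your log suggest) and that the
> PVC follows the usual data-<statefulset>-<ordinal> naming -- verify both
> with "kubectl get pods,pvc -n bd-iot" before running anything:
>
>     # Delete the wedged pod; the StatefulSet controller recreates it.
>     kubectl -n bd-iot delete pod kafka-1
>
>     # Only if it keeps crash-looping on the same corrupted files: drop
>     # its storage too, so the fresh broker re-replicates everything from
>     # the other brokers. With replication factor 2 this is only safe
>     # while the sibling replica of every partition is healthy.
>     kubectl -n bd-iot delete pvc data-kafka-1
>     kubectl -n bd-iot delete pod kafka-1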
>
> Try that and see how it goes.
>
> Thanks,
>
>
> From your answer I guess the preferred way is having a replication factor of 3?
> >
> >
> > -----Original Message-----
> > From: M. Manna <ma...@gmail.com>
> > Sent: Saturday, 16 November 2019 20:27
> > To: users@kafka.apache.org
> > Subject: Re: Kafka Broker do not recover after crash
> >
> > Hi,
> >
> > On Sat, 16 Nov 2019 at 19:21, Oliver Eckle <ie...@gmx.de> wrote:
> >
> > > Hello,
> > >
> > >
> > >
> > > I have a Kafka cluster running in Kubernetes with 3 brokers and all
> > > replication factors (topic, offsets) set to 2.
> >
> >
> > This sounds strange. You have 3 brokers and replication set to 2. Is this
> > intentional?
> >
> >
> > >
> > > For whatever reason, one of the brokers crashed and restarted, and
> > > since then it has been stuck in a restart/crash loop.
> > >
> > > Any idea how to recover?
> > >
> > >
> > >
> > > Whole Logfile is like that:
> > >
> > >
> > >
> > > [38;5;6m [38;5;5m19:15:42.58 [0m
> > >
> > > [38;5;6m [38;5;5m19:15:42.58 [0m[1mWelcome to the Bitnami kafka
> > > container[0m
> > >
> > > [38;5;6m [38;5;5m19:15:42.58 [0mSubscribe to project updates by
> > > watching [1mhttps://github.com/bitnami/bitnami-docker-kafka[0m
> <http://github.com/bitnami/bitnami-docker-kafka%5B0m>
> > <http://github.com/bitnami/bitnami-docker-kafka%5B0m>
> > > <http://github.com/bitnami/bitnami-docker-kafka%5B0m>
> > >
> > > [38;5;6m [38;5;5m19:15:42.58 [0mSubmit issues and feature requests at
> > > [1mhttps://github.com/bitnami/bitnami-docker-kafka/issues[0m
> <http://github.com/bitnami/bitnami-docker-kafka/issues%5B0m>
> > <http://github.com/bitnami/bitnami-docker-kafka/issues%5B0m>
> > > <http://github.com/bitnami/bitnami-docker-kafka/issues%5B0m>
> > >
> > > [38;5;6m [38;5;5m19:15:42.58 [0mSend us your feedback at
> > > [1mcontainers@bitnami.com[0m
> > >
> > > [38;5;6m [38;5;5m19:15:42.59 [0m
> > >
> > > [38;5;6m [38;5;5m19:15:42.59 [0m[38;5;2mINFO [0m ==> ** Starting Kafka
> > > setup
> > > **
> > >
> > > [38;5;6m [38;5;5m19:15:42.83 [0m[38;5;3mWARN [0m ==> You set the
> > > environment variable ALLOW_PLAINTEXT_LISTENER=yes. For safety reasons,
> > > do not use this flag in a production environment.
> > >
> > > [38;5;6m [38;5;5m19:15:42.84 [0m[38;5;2mINFO [0m ==> Initializing
> > Kafka...
> > >
> > > [38;5;6m [38;5;5m19:15:42.84 [0m[38;5;2mINFO [0m ==> No injected
> > > configuration files found, creating default config files
> > >
> > > [38;5;6m [38;5;5m19:15:43.83 [0m[38;5;2mINFO [0m ==> ** Kafka setup
> > > finished! **
> > >
> > >
> > >
> > > [38;5;6m [38;5;5m19:15:43.84 [0m[38;5;2mINFO [0m ==> ** Starting Kafka
> > > **
> > >
> > > [2019-11-16 19:15:49,625] INFO Registered
> > > kafka:type=kafka.Log4jController MBean
> > > (kafka.utils.Log4jControllerRegistration$)
> > >
> > > [2019-11-16 19:15:52,933] INFO Registered signal handlers for TERM,
> > > INT, HUP
> > > (org.apache.kafka.common.utils.LoggingSignalHandler)
> > >
> > > [2019-11-16 19:15:52,934] INFO starting (kafka.server.KafkaServer)
> > >
> > > [2019-11-16 19:15:52,935] INFO Connecting to zookeeper on
> > > kafka-zookeeper
> > > (kafka.server.KafkaServer)
> > >
> > > [2019-11-16 19:15:53,230] INFO [ZooKeeperClient Kafka server]
> > > Initializing a new session to kafka-zookeeper.
> > > (kafka.zookeeper.ZooKeeperClient)
> > >
> > > [2019-11-16 19:15:53,331] INFO Client
> > >
> > > environment:zookeeper.version=3.4.14-4c25d480e66aadd371de8bd2fd8da255a
> > > c140bc f, built on 03/06/2019 16:18 GMT
> > > (org.apache.zookeeper.ZooKeeper)
> > >
> > > [2019-11-16 19:15:53,331] INFO Client
> > > environment:host.name=kafka-1.kafka-headless.bd-iot.svc.cluster.local
> > > (org.apache.zookeeper.ZooKeeper)
> > >
> > > [2019-11-16 19:15:53,331] INFO Client
> > > environment:java.version=1.8.0_232
> > > (org.apache.zookeeper.ZooKeeper)
> > >
> > > [2019-11-16 19:15:53,331] INFO Client
> > > environment:java.vendor=AdoptOpenJDK
> > > (org.apache.zookeeper.ZooKeeper)
> > >
> > > [2019-11-16 19:15:53,332] INFO Client
> > > environment:java.home=/opt/bitnami/java
> > > (org.apache.zookeeper.ZooKeeper)
> > >
> > > [2019-11-16 19:15:53,332] INFO Client
> > >
> > >
> >
> environment:java.class.path=/opt/bitnami/kafka/bin/../libs/activation-1.1.1.
> > >
> > > jar:/opt/bitnami/kafka/bin/../libs/aopalliance-repackaged-2.5.0.jar:/o
> > > pt/bit
> > >
> > > nami/kafka/bin/../libs/argparse4j-0.7.0.jar:/opt/bitnami/kafka/bin/../
> > > libs/a
> > >
> > > udience-annotations-0.5.0.jar:/opt/bitnami/kafka/bin/../libs/commons-l
> > > ang3-3
> > >
> > > .8.1.jar:/opt/bitnami/kafka/bin/../libs/connect-api-2.3.1.jar:/opt/bit
> > > nami/k
> > >
> > > afka/bin/../libs/connect-basic-auth-extension-2.3.1.jar:/opt/bitnami/k
> > > afka/b
> > >
> > > in/../libs/connect-file-2.3.1.jar:/opt/bitnami/kafka/bin/../libs/conne
> > > ct-jso
> > >
> > > n-2.3.1.jar:/opt/bitnami/kafka/bin/../libs/connect-runtime-2.3.1.jar:/
> > > opt/bi
> > >
> > > tnami/kafka/bin/../libs/connect-transforms-2.3.1.jar:/opt/bitnami/kafk
> > > a/bin/
> > >
> > > ../libs/guava-20.0.jar:/opt/bitnami/kafka/bin/../libs/hk2-api-2.5.0.ja
> > > r:/opt
> > >
> > > /bitnami/kafka/bin/../libs/hk2-locator-2.5.0.jar:/opt/bitnami/kafka/bi
> > > n/../l
> > >
> > > ibs/hk2-utils-2.5.0.jar:/opt/bitnami/kafka/bin/../libs/jackson-annotat
> > > ions-2
> > >
> > > .10.0.jar:/opt/bitnami/kafka/bin/../libs/jackson-core-2.10.0.jar:/opt/
> > > bitnam
> > >
> > > i/kafka/bin/../libs/jackson-databind-2.10.0.jar:/opt/bitnami/kafka/bin
> > > /../li
> > >
> > > bs/jackson-dataformat-csv-2.10.0.jar:/opt/bitnami/kafka/bin/../libs/ja
> > > ckson-
> > >
> > > datatype-jdk8-2.10.0.jar:/opt/bitnami/kafka/bin/../libs/jackson-jaxrs-
> > > base-2
> > >
> > >
> >
> .10.0.jar:/opt/bitnami/kafka/bin/../libs/jackson-jaxrs-json-provider-2.10.0.
> > >
> > > jar:/opt/bitnami/kafka/bin/../libs/jackson-module-jaxb-annotations-2.1
> > > 0.0.ja
> > >
> > > r:/opt/bitnami/kafka/bin/../libs/jackson-module-paranamer-2.10.0.jar:/
> > > opt/bi
> > >
> > > tnami/kafka/bin/../libs/jackson-module-scala_2.11-2.10.0.jar:/opt/bitn
> > > ami/ka
> > >
> > > fka/bin/../libs/jakarta.activation-api-1.2.1.jar:/opt/bitnami/kafka/bi
> > > n/../l
> > >
> > >
> >
> ibs/jakarta.annotation-api-1.3.4.jar:/opt/bitnami/kafka/bin/../libs/jakarta.
> > > inject-2.5.0.jar:/opt/bitnami/kafka/bin/../libs/jakarta.ws
> > > .rs-api-2.1.5.jar:
> > >
> > > /opt/bitnami/kafka/bin/../libs/jakarta.xml.bind-api-2.3.2.jar:/opt/bit
> > > nami/k
> > >
> > > afka/bin/../libs/javassist-3.22.0-CR2.jar:/opt/bitnami/kafka/bin/../li
> > > bs/jav
> > >
> > >
> >
> ax.servlet-api-3.1.0.jar:/opt/bitnami/kafka/bin/../libs/javax.ws.rs-api-2.1.
> > >
> > > 1.jar:/opt/bitnami/kafka/bin/../libs/jaxb-api-2.3.0.jar:/opt/bitnami/k
> > > afka/b
> > >
> > > in/../libs/jersey-client-2.28.jar:/opt/bitnami/kafka/bin/../libs/jerse
> > > y-comm
> > >
> > > on-2.28.jar:/opt/bitnami/kafka/bin/../libs/jersey-container-servlet-2.
> > > 28.jar
> > >
> > > :/opt/bitnami/kafka/bin/../libs/jersey-container-servlet-core-2.28.jar
> > > :/opt/
> > >
> > > bitnami/kafka/bin/../libs/jersey-hk2-2.28.jar:/opt/bitnami/kafka/bin/.
> > > ./libs
> > >
> > > /jersey-media-jaxb-2.28.jar:/opt/bitnami/kafka/bin/../libs/jersey-serv
> > > er-2.2
> > >
> > > 8.jar:/opt/bitnami/kafka/bin/../libs/jetty-client-9.4.18.v20190429.jar
> > > :/opt/
> > >
> > > bitnami/kafka/bin/../libs/jetty-continuation-9.4.18.v20190429.jar:/opt
> > > /bitna
> > >
> > > mi/kafka/bin/../libs/jetty-http-9.4.18.v20190429.jar:/opt/bitnami/kafk
> > > a/bin/
> > >
> > > ../libs/jetty-io-9.4.18.v20190429.jar:/opt/bitnami/kafka/bin/../libs/j
> > > etty-s
> > >
> > > ecurity-9.4.18.v20190429.jar:/opt/bitnami/kafka/bin/../libs/jetty-serv
> > > er-9.4
> > >
> > > .18.v20190429.jar:/opt/bitnami/kafka/bin/../libs/jetty-servlet-9.4.18.
> > > v20190
> > >
> > > 429.jar:/opt/bitnami/kafka/bin/../libs/jetty-servlets-9.4.18.v20190429
> > > .jar:/
> > >
> > > opt/bitnami/kafka/bin/../libs/jetty-util-9.4.18.v20190429.jar:/opt/bit
> > > nami/k
> > >
> > > afka/bin/../libs/jopt-simple-5.0.4.jar:/opt/bitnami/kafka/bin/../libs/
> > > jsr305
> > >
> > > -3.0.2.jar:/opt/bitnami/kafka/bin/../libs/kafka-clients-2.3.1.jar:/opt
> > > /bitna
> > >
> > >
> >
> mi/kafka/bin/../libs/kafka-log4j-appender-2.3.1.jar:/opt/bitnami/kafka/bin/.
> > >
> > > ./libs/kafka-streams-2.3.1.jar:/opt/bitnami/kafka/bin/../libs/kafka-st
> > > reams-
> > >
> > > examples-2.3.1.jar:/opt/bitnami/kafka/bin/../libs/kafka-streams-scala_
> > > 2.11-2
> > >
> > > .3.1.jar:/opt/bitnami/kafka/bin/../libs/kafka-streams-test-utils-2.3.1
> > > .jar:/
> > >
> > >
> >
> opt/bitnami/kafka/bin/../libs/kafka-tools-2.3.1.jar:/opt/bitnami/kafka/bin/.
> > >
> > > ./libs/kafka_2.11-2.3.1-sources.jar:/opt/bitnami/kafka/bin/../libs/kaf
> > > ka_2.1
> > >
> > > 1-2.3.1.jar:/opt/bitnami/kafka/bin/../libs/log4j-1.2.17.jar:/opt/bitna
> > > mi/kaf
> > >
> > > ka/bin/../libs/lz4-java-1.6.0.jar:/opt/bitnami/kafka/bin/../libs/maven
> > > -artif
> > >
> > > act-3.6.1.jar:/opt/bitnami/kafka/bin/../libs/metrics-core-2.2.0.jar:/o
> > > pt/bit
> > >
> > > nami/kafka/bin/../libs/osgi-resource-locator-1.0.1.jar:/opt/bitnami/ka
> > > fka/bi
> > >
> > >
> >
> n/../libs/paranamer-2.8.jar:/opt/bitnami/kafka/bin/../libs/plexus-utils-3.2.
> > >
> > > 0.jar:/opt/bitnami/kafka/bin/../libs/reflections-0.9.11.jar:/opt/bitna
> > > mi/kaf
> > >
> > > ka/bin/../libs/rocksdbjni-5.18.3.jar:/opt/bitnami/kafka/bin/../libs/sc
> > > ala-li
> > >
> > > brary-2.11.12.jar:/opt/bitnami/kafka/bin/../libs/scala-logging_2.11-3.
> > > 9.0.ja
> > >
> > > r:/opt/bitnami/kafka/bin/../libs/scala-reflect-2.11.12.jar:/opt/bitnam
> > > i/kafk
> > >
> > > a/bin/../libs/slf4j-api-1.7.26.jar:/opt/bitnami/kafka/bin/../libs/slf4
> > > j-log4
> > >
> > > j12-1.7.26.jar:/opt/bitnami/kafka/bin/../libs/snappy-java-1.1.7.3.jar:
> > > /opt/b
> > >
> > > itnami/kafka/bin/../libs/spotbugs-annotations-3.1.9.jar:/opt/bitnami/k
> > > afka/b
> > >
> > > in/../libs/validation-api-2.0.1.Final.jar:/opt/bitnami/kafka/bin/../li
> > > bs/zkc
> > >
> > > lient-0.11.jar:/opt/bitnami/kafka/bin/../libs/zookeeper-3.4.14.jar:/op
> > > t/bitn ami/kafka/bin/../libs/zstd-jni-1.4.0-1.jar
> > > (org.apache.zookeeper.ZooKeeper)
> > >
> > > [2019-11-16 19:15:53,333] INFO Client
> > >
> > > environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:
> > > /lib64 :/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
> > >
> > > [2019-11-16 19:15:53,333] INFO Client environment:java.io.tmpdir=/tmp
> > > (org.apache.zookeeper.ZooKeeper)
> > >
> > > [2019-11-16 19:15:53,334] INFO Client environment:java.compiler=<NA>
> > > (org.apache.zookeeper.ZooKeeper)
> > >
> > > [2019-11-16 19:15:53,334] INFO Client environment:os.name=Linux
> > > (org.apache.zookeeper.ZooKeeper)
> > >
> > > [2019-11-16 19:15:53,334] INFO Client environment:os.arch=amd64
> > > (org.apache.zookeeper.ZooKeeper)
> > >
> > > [2019-11-16 19:15:53,335] INFO Client
> > > environment:os.version=4.15.0-1060-azure
> > > (org.apache.zookeeper.ZooKeeper)
> > >
> > > [2019-11-16 19:15:53,336] INFO Client environment:user.name=?
> > > (org.apache.zookeeper.ZooKeeper)
> > >
> > > [2019-11-16 19:15:53,336] INFO Client environment:user.home=?
> > > (org.apache.zookeeper.ZooKeeper)
> > >
> > > [2019-11-16 19:15:53,336] INFO Client environment:user.dir=/
> > > (org.apache.zookeeper.ZooKeeper)
> > >
> > > [2019-11-16 19:15:53,338] INFO Initiating client connection,
> > > connectString=kafka-zookeeper sessionTimeout=6000
> > > watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@31304f
> > > 14
> > > (org.apache.zookeeper.ZooKeeper)
> > >
> > > [2019-11-16 19:15:53,528] INFO [ZooKeeperClient Kafka server] Waiting
> > > until connected. (kafka.zookeeper.ZooKeeperClient)
> > >
> > > [2019-11-16 19:15:53,545] INFO Opening socket connection to server
> > > kafka-zookeeper/10.0.215.214:2181. Will not attempt to authenticate
> > > using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
> > >
> > > [2019-11-16 19:15:53,552] INFO Socket connection established to
> > > kafka-zookeeper/10.0.215.214:2181, initiating session
> > > (org.apache.zookeeper.ClientCnxn)
> > >
> > > [2019-11-16 19:15:53,627] INFO Session establishment complete on
> > > server kafka-zookeeper/10.0.215.214:2181, sessionid =
> > > 0x10000810b780070, negotiated timeout = 6000
> > > (org.apache.zookeeper.ClientCnxn)
> > >
> > > [2019-11-16 19:15:53,630] INFO [ZooKeeperClient Kafka server]
> Connected.
> > > (kafka.zookeeper.ZooKeeperClient)
> > >
> > > [2019-11-16 19:15:55,034] INFO Cluster ID = dvSQ1W2US72rcqGef9tm6w
> > > (kafka.server.KafkaServer)
> > >
> > > [2019-11-16 19:15:55,637] INFO KafkaConfig values:
> > >
> > >                 advertised.host.name = null
> > >
> > >                 advertised.listeners =
> > > PLAINTEXT://kafka-1.kafka-headless.bd-iot.svc.cluster.local:9092
> > >
> > >                 advertised.port = null
> > >
> > >                 alter.config.policy.class.name = null
> > >
> > >                 alter.log.dirs.replication.quota.window.num = 11
> > >
> > >                 alter.log.dirs.replication.quota.window.size.seconds =
> > > 1
> > >
> > >                 authorizer.class.name =
> > >
> > >                 auto.create.topics.enable = true
> > >
> > >                 auto.leader.rebalance.enable = true
> > >
> > >                 background.threads = 10
> > >
> > >                 broker.id = -1
> > >
> > >                 broker.id.generation.enable = true
> > >
> > >                 broker.rack = null
> > >
> > >                 client.quota.callback.class = null
> > >
> > >                 compression.type = producer
> > >
> > >                 connection.failed.authentication.delay.ms = 100
> > >
> > >                 connections.max.idle.ms = 600000
> > >
> > >                 connections.max.reauth.ms = 0
> > >
> > >                 control.plane.listener.name = null
> > >
> > >                 controlled.shutdown.enable = true
> > >
> > >                 controlled.shutdown.max.retries = 3
> > >
> > >                 controlled.shutdown.retry.backoff.ms = 5000
> > >
> > >                 controller.socket.timeout.ms = 30000
> > >
> > >                 create.topic.policy.class.name = null
> > >
> > >                 default.replication.factor = 2
> > >
> > >                 delegation.token.expiry.check.interval.ms = 3600000
> > >
> > >                 delegation.token.expiry.time.ms = 86400000
> > >
> > >                 delegation.token.master.key = null
> > >
> > >                 delegation.token.max.lifetime.ms = 604800000
> > >
> > >                 delete.records.purgatory.purge.interval.requests = 1
> > >
> > >                 delete.topic.enable = true
> > >
> > >                 fetch.purgatory.purge.interval.requests = 1000
> > >
> > >                 group.initial.rebalance.delay.ms = 0
> > >
> > >                 group.max.session.timeout.ms = 1800000
> > >
> > >                 group.max.size = 2147483647
> > >
> > >                 group.min.session.timeout.ms = 6000
> > >
> > >                 host.name =
> > >
> > >                 inter.broker.listener.name = null
> > >
> > >                 inter.broker.protocol.version = 2.3-IV1
> > >
> > >                 kafka.metrics.polling.interval.secs = 10
> > >
> > >                 kafka.metrics.reporters = []
> > >
> > >                 leader.imbalance.check.interval.seconds = 300
> > >
> > >                 leader.imbalance.per.broker.percentage = 10
> > >
> > >                 listener.security.protocol.map =
> > > PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SAS
> > > L_SSL
> > >
> > >                 listeners = PLAINTEXT://:9092
> > >
> > >                 log.cleaner.backoff.ms = 15000
> > >
> > >                 log.cleaner.dedupe.buffer.size = 134217728
> > >
> > >                 log.cleaner.delete.retention.ms = 86400000
> > >
> > >                 log.cleaner.enable = true
> > >
> > >                 log.cleaner.io.buffer.load.factor = 0.9
> > >
> > >                 log.cleaner.io.buffer.size = 524288
> > >
> > >                 log.cleaner.io.max.bytes.per.second =
> > > 1.7976931348623157E308
> > >
> > >                 log.cleaner.max.compaction.lag.ms =
> > > 9223372036854775807
> > >
> > >                 log.cleaner.min.cleanable.ratio = 0.5
> > >
> > >                 log.cleaner.min.compaction.lag.ms = 0
> > >
> > >                 log.cleaner.threads = 1
> > >
> > >                 log.cleanup.policy = [delete]
> > >
> > >                 log.dir = /tmp/kafka-logs
> > >
> > >                 log.dirs = /bitnami/kafka/data
> > >
> > >                 log.flush.interval.messages = 10000
> > >
> > >                 log.flush.interval.ms = 1000
> > >
> > >                 log.flush.offset.checkpoint.interval.ms = 60000
> > >
> > >                 log.flush.scheduler.interval.ms = 9223372036854775807
> > >
> > >                 log.flush.start.offset.checkpoint.interval.ms = 60000
> > >
> > >                 log.index.interval.bytes = 4096
> > >
> > >                 log.index.size.max.bytes = 10485760
> > >
> > >                 log.message.downconversion.enable = true
> > >
> > >                 log.message.format.version = 2.3-IV1
> > >
> > >                 log.message.timestamp.difference.max.ms = 9223372036854775807
> > >
> > >                 log.message.timestamp.type = CreateTime
> > >
> > >                 log.preallocate = false
> > >
> > >                 log.retention.bytes = 1073741824
> > >
> > >                 log.retention.check.interval.ms = 300000
> > >
> > >                 log.retention.hours = 168
> > >
> > >                 log.retention.minutes = null
> > >
> > >                 log.retention.ms = null
> > >
> > >                 log.roll.hours = 168
> > >
> > >                 log.roll.jitter.hours = 0
> > >
> > >                 log.roll.jitter.ms = null
> > >
> > >                 log.roll.ms = null
> > >
> > >                 log.segment.bytes = 1073741824
> > >
> > >                 log.segment.delete.delay.ms = 60000
> > >
> > >                 max.connections = 2147483647
> > >
> > >                 max.connections.per.ip = 2147483647
> > >
> > >                 max.connections.per.ip.overrides =
> > >
> > >                 max.incremental.fetch.session.cache.slots = 1000
> > >
> > >                 message.max.bytes = 1000012
> > >
> > >                 metric.reporters = []
> > >
> > >                 metrics.num.samples = 2
> > >
> > >                 metrics.recording.level = INFO
> > >
> > >                 metrics.sample.window.ms = 30000
> > >
> > >                 min.insync.replicas = 1
> > >
> > >                 num.io.threads = 8
> > >
> > >                 num.network.threads = 3
> > >
> > >                 num.partitions = 1
> > >
> > >                 num.recovery.threads.per.data.dir = 1
> > >
> > >                 num.replica.alter.log.dirs.threads = null
> > >
> > >                 num.replica.fetchers = 1
> > >
> > >                 offset.metadata.max.bytes = 4096
> > >
> > >                 offsets.commit.required.acks = -1
> > >
> > >                 offsets.commit.timeout.ms = 5000
> > >
> > >                 offsets.load.buffer.size = 5242880
> > >
> > >                 offsets.retention.check.interval.ms = 600000
> > >
> > >                 offsets.retention.minutes = 10080
> > >
> > >                 offsets.topic.compression.codec = 0
> > >
> > >                 offsets.topic.num.partitions = 50
> > >
> > >                 offsets.topic.replication.factor = 2
> > >
> > >                 offsets.topic.segment.bytes = 104857600
> > >
> > >                 password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
> > >
> > >                 password.encoder.iterations = 4096
> > >
> > >                 password.encoder.key.length = 128
> > >
> > >                 password.encoder.keyfactory.algorithm = null
> > >
> > >                 password.encoder.old.secret = null
> > >
> > >                 password.encoder.secret = null
> > >
> > >                 port = 9092
> > >
> > >                 principal.builder.class = null
> > >
> > >                 producer.purgatory.purge.interval.requests = 1000
> > >
> > >                 queued.max.request.bytes = -1
> > >
> > >                 queued.max.requests = 500
> > >
> > >                 quota.consumer.default = 9223372036854775807
> > >
> > >                 quota.producer.default = 9223372036854775807
> > >
> > >                 quota.window.num = 11
> > >
> > >                 quota.window.size.seconds = 1
> > >
> > >                 replica.fetch.backoff.ms = 1000
> > >
> > >                 replica.fetch.max.bytes = 1048576
> > >
> > >                 replica.fetch.min.bytes = 1
> > >
> > >                 replica.fetch.response.max.bytes = 10485760
> > >
> > >                 replica.fetch.wait.max.ms = 500
> > >
> > >                 replica.high.watermark.checkpoint.interval.ms = 5000
> > >
> > >                 replica.lag.time.max.ms = 10000
> > >
> > >                 replica.socket.receive.buffer.bytes = 65536
> > >
> > >                 replica.socket.timeout.ms = 30000
> > >
> > >                 replication.quota.window.num = 11
> > >
> > >                 replication.quota.window.size.seconds = 1
> > >
> > >                 request.timeout.ms = 30000
> > >
> > >                 reserved.broker.max.id = 1000
> > >
> > >                 sasl.client.callback.handler.class = null
> > >
> > >                 sasl.enabled.mechanisms = [GSSAPI]
> > >
> > >                 sasl.jaas.config = null
> > >
> > >                 sasl.kerberos.kinit.cmd = /usr/bin/kinit
> > >
> > >                 sasl.kerberos.min.time.before.relogin = 60000
> > >
> > >                 sasl.kerberos.principal.to.local.rules = [DEFAULT]
> > >
> > >                 sasl.kerberos.service.name = null
> > >
> > >                 sasl.kerberos.ticket.renew.jitter = 0.05
> > >
> > >                 sasl.kerberos.ticket.renew.window.factor = 0.8
> > >
> > >                 sasl.login.callback.handler.class = null
> > >
> > >                 sasl.login.class = null
> > >
> > >                 sasl.login.refresh.buffer.seconds = 300
> > >
> > >                 sasl.login.refresh.min.period.seconds = 60
> > >
> > >                 sasl.login.refresh.window.factor = 0.8
> > >
> > >                 sasl.login.refresh.window.jitter = 0.05
> > >
> > >                 sasl.mechanism.inter.broker.protocol = GSSAPI
> > >
> > >                 sasl.server.callback.handler.class = null
> > >
> > >                 security.inter.broker.protocol = PLAINTEXT
> > >
> > >                 socket.receive.buffer.bytes = 102400
> > >
> > >                 socket.request.max.bytes = 104857600
> > >
> > >                 socket.send.buffer.bytes = 102400
> > >
> > >                 ssl.cipher.suites = []
> > >
> > >                 ssl.client.auth = none
> > >
> > >                 ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
> > >
> > >                 ssl.endpoint.identification.algorithm = https
> > >
> > >                 ssl.key.password = null
> > >
> > >                 ssl.keymanager.algorithm = SunX509
> > >
> > >                 ssl.keystore.location = null
> > >
> > >                 ssl.keystore.password = null
> > >
> > >                 ssl.keystore.type = JKS
> > >
> > >                 ssl.principal.mapping.rules = [DEFAULT]
> > >
> > >                 ssl.protocol = TLS
> > >
> > >                 ssl.provider = null
> > >
> > >                 ssl.secure.random.implementation = null
> > >
> > >                 ssl.trustmanager.algorithm = PKIX
> > >
> > >                 ssl.truststore.location = null
> > >
> > >                 ssl.truststore.password = null
> > >
> > >                 ssl.truststore.type = JKS
> > >
> > >
> > >                 transaction.abort.timed.out.transaction.cleanup.interval.ms = 60000
> > >
> > >                 transaction.max.timeout.ms = 900000
> > >
> > >
> > >                 transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
> > >
> > >                 transaction.state.log.load.buffer.size = 5242880
> > >
> > >                 transaction.state.log.min.isr = 2
> > >
> > >                 transaction.state.log.num.partitions = 50
> > >
> > >                 transaction.state.log.replication.factor = 2
> > >
> > >                 transaction.state.log.segment.bytes = 104857600
> > >
> > >                 transactional.id.expiration.ms = 604800000
> > >
> > >                 unclean.leader.election.enable = false
> > >
> > >                 zookeeper.connect = kafka-zookeeper
> > >
> > >                 zookeeper.connection.timeout.ms = 6000
> > >
> > >                 zookeeper.max.in.flight.requests = 10
> > >
> > >                 zookeeper.session.timeout.ms = 6000
> > >
> > >                 zookeeper.set.acl = false
> > >
> > >                 zookeeper.sync.time.ms = 2000
> > >
> > > (kafka.server.KafkaConfig)
> > >
> > > [2019-11-16 19:15:55,829] INFO KafkaConfig values:
> > >
> > >                 advertised.host.name = null
> > >
> > >                 advertised.listeners = PLAINTEXT://kafka-1.kafka-headless.bd-iot.svc.cluster.local:9092
> > >
> > >                 advertised.port = null
> > >
> > >                 alter.config.policy.class.name = null
> > >
> > >                 alter.log.dirs.replication.quota.window.num = 11
> > >
> > >                 alter.log.dirs.replication.quota.window.size.seconds = 1
> > >
> > >                 authorizer.class.name =
> > >
> > >                 auto.create.topics.enable = true
> > >
> > >                 auto.leader.rebalance.enable = true
> > >
> > >                 background.threads = 10
> > >
> > >                 broker.id = -1
> > >
> > >                 broker.id.generation.enable = true
> > >
> > >                 broker.rack = null
> > >
> > >                 client.quota.callback.class = null
> > >
> > >                 compression.type = producer
> > >
> > >                 connection.failed.authentication.delay.ms = 100
> > >
> > >                 connections.max.idle.ms = 600000
> > >
> > >                 connections.max.reauth.ms = 0
> > >
> > >                 control.plane.listener.name = null
> > >
> > >                 controlled.shutdown.enable = true
> > >
> > >                 controlled.shutdown.max.retries = 3
> > >
> > >                 controlled.shutdown.retry.backoff.ms = 5000
> > >
> > >                 controller.socket.timeout.ms = 30000
> > >
> > >                 create.topic.policy.class.name = null
> > >
> > >                 default.replication.factor = 2
> > >
> > >                 delegation.token.expiry.check.interval.ms = 3600000
> > >
> > >                 delegation.token.expiry.time.ms = 86400000
> > >
> > >                 delegation.token.master.key = null
> > >
> > >                 delegation.token.max.lifetime.ms = 604800000
> > >
> > >                 delete.records.purgatory.purge.interval.requests = 1
> > >
> > >                 delete.topic.enable = true
> > >
> > >                 fetch.purgatory.purge.interval.requests = 1000
> > >
> > >                 group.initial.rebalance.delay.ms = 0
> > >
> > >                 group.max.session.timeout.ms = 1800000
> > >
> > >                 group.max.size = 2147483647
> > >
> > >                 group.min.session.timeout.ms = 6000
> > >
> > >                 host.name =
> > >
> > >                 inter.broker.listener.name = null
> > >
> > >                 inter.broker.protocol.version = 2.3-IV1
> > >
> > >                 kafka.metrics.polling.interval.secs = 10
> > >
> > >                 kafka.metrics.reporters = []
> > >
> > >                 leader.imbalance.check.interval.seconds = 300
> > >
> > >                 leader.imbalance.per.broker.percentage = 10
> > >
> > >                 listener.security.protocol.map = PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
> > >
> > >                 listeners = PLAINTEXT://:9092
> > >
> > >                 log.cleaner.backoff.ms = 15000
> > >
> > >                 log.cleaner.dedupe.buffer.size = 134217728
> > >
> > >                 log.cleaner.delete.retention.ms = 86400000
> > >
> > >                 log.cleaner.enable = true
> > >
> > >                 log.cleaner.io.buffer.load.factor = 0.9
> > >
> > >                 log.cleaner.io.buffer.size = 524288
> > >
> > >                 log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
> > >
> > >                 log.cleaner.max.compaction.lag.ms = 9223372036854775807
> > >
> > >                 log.cleaner.min.cleanable.ratio = 0.5
> > >
> > >                 log.cleaner.min.compaction.lag.ms = 0
> > >
> > >                 log.cleaner.threads = 1
> > >
> > >                 log.cleanup.policy = [delete]
> > >
> > >                 log.dir = /tmp/kafka-logs
> > >
> > >                 log.dirs = /bitnami/kafka/data
> > >
> > >                 log.flush.interval.messages = 10000
> > >
> > >                 log.flush.interval.ms = 1000
> > >
> > >                 log.flush.offset.checkpoint.interval.ms = 60000
> > >
> > >                 log.flush.scheduler.interval.ms = 9223372036854775807
> > >
> > >                 log.flush.start.offset.checkpoint.interval.ms = 60000
> > >
> > >                 log.index.interval.bytes = 4096
> > >
> > >                 log.index.size.max.bytes = 10485760
> > >
> > >                 log.message.downconversion.enable = true
> > >
> > >                 log.message.format.version = 2.3-IV1
> > >
> > >                 log.message.timestamp.difference.max.ms = 9223372036854775807
> > >
> > >                 log.message.timestamp.type = CreateTime
> > >
> > >                 log.preallocate = false
> > >
> > >                 log.retention.bytes = 1073741824
> > >
> > >                 log.retention.check.interval.ms = 300000
> > >
> > >                 log.retention.hours = 168
> > >
> > >                 log.retention.minutes = null
> > >
> > >                 log.retention.ms = null
> > >
> > >                 log.roll.hours = 168
> > >
> > >                 log.roll.jitter.hours = 0
> > >
> > >                 log.roll.jitter.ms = null
> > >
> > >                 log.roll.ms = null
> > >
> > >                 log.segment.bytes = 1073741824
> > >
> > >                 log.segment.delete.delay.ms = 60000
> > >
> > >                 max.connections = 2147483647
> > >
> > >                 max.connections.per.ip = 2147483647
> > >
> > >                 max.connections.per.ip.overrides =
> > >
> > >                 max.incremental.fetch.session.cache.slots = 1000
> > >
> > >                 message.max.bytes = 1000012
> > >
> > >                 metric.reporters = []
> > >
> > >                 metrics.num.samples = 2
> > >
> > >                 metrics.recording.level = INFO
> > >
> > >                 metrics.sample.window.ms = 30000
> > >
> > >                 min.insync.replicas = 1
> > >
> > >                 num.io.threads = 8
> > >
> > >                 num.network.threads = 3
> > >
> > >                 num.partitions = 1
> > >
> > >                 num.recovery.threads.per.data.dir = 1
> > >
> > >                 num.replica.alter.log.dirs.threads = null
> > >
> > >                 num.replica.fetchers = 1
> > >
> > >                 offset.metadata.max.bytes = 4096
> > >
> > >                 offsets.commit.required.acks = -1
> > >
> > >                 offsets.commit.timeout.ms = 5000
> > >
> > >                 offsets.load.buffer.size = 5242880
> > >
> > >                 offsets.retention.check.interval.ms = 600000
> > >
> > >                 offsets.retention.minutes = 10080
> > >
> > >                 offsets.topic.compression.codec = 0
> > >
> > >                 offsets.topic.num.partitions = 50
> > >
> > >                 offsets.topic.replication.factor = 2
> > >
> > >                 offsets.topic.segment.bytes = 104857600
> > >
> > >                 password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
> > >
> > >                 password.encoder.iterations = 4096
> > >
> > >                 password.encoder.key.length = 128
> > >
> > >                 password.encoder.keyfactory.algorithm = null
> > >
> > >                 password.encoder.old.secret = null
> > >
> > >                 password.encoder.secret = null
> > >
> > >                 port = 9092
> > >
> > >                 principal.builder.class = null
> > >
> > >                 producer.purgatory.purge.interval.requests = 1000
> > >
> > >                 queued.max.request.bytes = -1
> > >
> > >                 queued.max.requests = 500
> > >
> > >                 quota.consumer.default = 9223372036854775807
> > >
> > >                 quota.producer.default = 9223372036854775807
> > >
> > >                 quota.window.num = 11
> > >
> > >                 quota.window.size.seconds = 1
> > >
> > >                 replica.fetch.backoff.ms = 1000
> > >
> > >                 replica.fetch.max.bytes = 1048576
> > >
> > >                 replica.fetch.min.bytes = 1
> > >
> > >                 replica.fetch.response.max.bytes = 10485760
> > >
> > >                 replica.fetch.wait.max.ms = 500
> > >
> > >                 replica.high.watermark.checkpoint.interval.ms = 5000
> > >
> > >                 replica.lag.time.max.ms = 10000
> > >
> > >                 replica.socket.receive.buffer.bytes = 65536
> > >
> > >                 replica.socket.timeout.ms = 30000
> > >
> > >                 replication.quota.window.num = 11
> > >
> > >                 replication.quota.window.size.seconds = 1
> > >
> > >                 request.timeout.ms = 30000
> > >
> > >                 reserved.broker.max.id = 1000
> > >
> > >                 sasl.client.callback.handler.class = null
> > >
> > >                 sasl.enabled.mechanisms = [GSSAPI]
> > >
> > >                 sasl.jaas.config = null
> > >
> > >                 sasl.kerberos.kinit.cmd = /usr/bin/kinit
> > >
> > >                 sasl.kerberos.min.time.before.relogin = 60000
> > >
> > >                 sasl.kerberos.principal.to.local.rules = [DEFAULT]
> > >
> > >                 sasl.kerberos.service.name = null
> > >
> > >                 sasl.kerberos.ticket.renew.jitter = 0.05
> > >
> > >                 sasl.kerberos.ticket.renew.window.factor = 0.8
> > >
> > >                 sasl.login.callback.handler.class = null
> > >
> > >                 sasl.login.class = null
> > >
> > >                 sasl.login.refresh.buffer.seconds = 300
> > >
> > >                 sasl.login.refresh.min.period.seconds = 60
> > >
> > >                 sasl.login.refresh.window.factor = 0.8
> > >
> > >                 sasl.login.refresh.window.jitter = 0.05
> > >
> > >                 sasl.mechanism.inter.broker.protocol = GSSAPI
> > >
> > >                 sasl.server.callback.handler.class = null
> > >
> > >                 security.inter.broker.protocol = PLAINTEXT
> > >
> > >                 socket.receive.buffer.bytes = 102400
> > >
> > >                 socket.request.max.bytes = 104857600
> > >
> > >                 socket.send.buffer.bytes = 102400
> > >
> > >                 ssl.cipher.suites = []
> > >
> > >                 ssl.client.auth = none
> > >
> > >                 ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
> > >
> > >                 ssl.endpoint.identification.algorithm = https
> > >
> > >                 ssl.key.password = null
> > >
> > >                 ssl.keymanager.algorithm = SunX509
> > >
> > >                 ssl.keystore.location = null
> > >
> > >                 ssl.keystore.password = null
> > >
> > >                 ssl.keystore.type = JKS
> > >
> > >                 ssl.principal.mapping.rules = [DEFAULT]
> > >
> > >                 ssl.protocol = TLS
> > >
> > >                 ssl.provider = null
> > >
> > >                 ssl.secure.random.implementation = null
> > >
> > >                 ssl.trustmanager.algorithm = PKIX
> > >
> > >                 ssl.truststore.location = null
> > >
> > >                 ssl.truststore.password = null
> > >
> > >                 ssl.truststore.type = JKS
> > >
> > >
> > >                 transaction.abort.timed.out.transaction.cleanup.interval.ms = 60000
> > >
> > >                 transaction.max.timeout.ms = 900000
> > >
> > >
> > >                 transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
> > >
> > >                 transaction.state.log.load.buffer.size = 5242880
> > >
> > >                 transaction.state.log.min.isr = 2
> > >
> > >                 transaction.state.log.num.partitions = 50
> > >
> > >                 transaction.state.log.replication.factor = 2
> > >
> > >                 transaction.state.log.segment.bytes = 104857600
> > >
> > >                 transactional.id.expiration.ms = 604800000
> > >
> > >                 unclean.leader.election.enable = false
> > >
> > >                 zookeeper.connect = kafka-zookeeper
> > >
> > >                 zookeeper.connection.timeout.ms = 6000
> > >
> > >                 zookeeper.max.in.flight.requests = 10
> > >
> > >                 zookeeper.session.timeout.ms = 6000
> > >
> > >                 zookeeper.set.acl = false
> > >
> > >                 zookeeper.sync.time.ms = 2000
> > >
> > > (kafka.server.KafkaConfig)
> > >
> > > [2019-11-16 19:15:56,039] INFO [ThrottledChannelReaper-Fetch]:
> > > Starting
> > > (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
> > >
> > > [2019-11-16 19:15:56,044] INFO [ThrottledChannelReaper-Produce]:
> > > Starting
> > > (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
> > >
> > > [2019-11-16 19:15:56,046] INFO [ThrottledChannelReaper-Request]:
> > > Starting
> > > (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
> > >
> > > [2019-11-16 19:15:56,335] INFO Loading logs. (kafka.log.LogManager)
> > >
> > > [2019-11-16 19:15:56,638] INFO [Log partition=__consumer_offsets-4,
> > > dir=/bitnami/kafka/data] Recovering unflushed segment 0
> > > (kafka.log.Log)
> > >
> > > [2019-11-16 19:15:56,727] INFO [Log partition=__consumer_offsets-4,
> > > dir=/bitnami/kafka/data] Loading producer state till offset 0 with
> > > message format version 2 (kafka.log.Log)
> > >
> > > [2019-11-16 19:15:56,931] INFO [Log partition=__consumer_offsets-4,
> > > dir=/bitnami/kafka/data] Loading producer state till offset 0 with
> > > message format version 2 (kafka.log.Log)
> > >
> > > [2019-11-16 19:15:56,933] INFO [Log partition=__consumer_offsets-4,
> > > dir=/bitnami/kafka/data] Completed load of log with 1 segments, log
> > > start offset 0 and log end offset 0 in 399 ms (kafka.log.Log)
> > >
> > > [2019-11-16 19:15:57,029] INFO [Log partition=__consumer_offsets-22,
> > > dir=/bitnami/kafka/data] Recovering unflushed segment 0
> > > (kafka.log.Log)
> > >
> > > [2019-11-16 19:15:57,029] INFO [Log partition=__consumer_offsets-22,
> > > dir=/bitnami/kafka/data] Loading producer state till offset 0 with
> > > message format version 2 (kafka.log.Log)
> > >
> > > [2019-11-16 19:15:57,031] INFO [Log partition=__consumer_offsets-22,
> > > dir=/bitnami/kafka/data] Loading producer state till offset 0 with
> > > message format version 2 (kafka.log.Log)
> > >
> > > [2019-11-16 19:15:57,032] INFO [Log partition=__consumer_offsets-22,
> > > dir=/bitnami/kafka/data] Completed load of log with 1 segments, log
> > > start offset 0 and log end offset 0 in 6 ms (kafka.log.Log)
> > >
> > > [2019-11-16 19:15:57,147] INFO [Log partition=__consumer_offsets-32,
> > > dir=/bitnami/kafka/data] Recovering unflushed segment 0
> > > (kafka.log.Log)
> > >
> > > [2019-11-16 19:15:57,148] INFO [Log partition=__consumer_offsets-32,
> > > dir=/bitnami/kafka/data] Loading producer state till offset 0 with
> > > message format version 2 (kafka.log.Log)
> > >
> > > [2019-11-16 19:15:57,150] INFO [Log partition=__consumer_offsets-32,
> > > dir=/bitnami/kafka/data] Loading producer state till offset 0 with
> > > message format version 2 (kafka.log.Log)
> > >
> > > [2019-11-16 19:15:57,226] INFO [Log partition=__consumer_offsets-32,
> > > dir=/bitnami/kafka/data] Completed load of log with 1 segments, log
> > > start offset 0 and log end offset 0 in 189 ms (kafka.log.Log)
> > >
> > > [2019-11-16 19:15:57,330] INFO [Log partition=__consumer_offsets-39,
> > > dir=/bitnami/kafka/data] Recovering unflushed segment 0
> > > (kafka.log.Log)
> > >
> > > [2019-11-16 19:15:57,330] INFO [Log partition=__consumer_offsets-39,
> > > dir=/bitnami/kafka/data] Loading producer state till offset 0 with
> > > message format version 2 (kafka.log.Log)
> > >
> > > [2019-11-16 19:15:57,333] INFO [Log partition=__consumer_offsets-39,
> > > dir=/bitnami/kafka/data] Loading producer state till offset 0 with
> > > message format version 2 (kafka.log.Log)
> > >
> > > [2019-11-16 19:15:57,334] INFO [Log partition=__consumer_offsets-39,
> > > dir=/bitnami/kafka/data] Completed load of log with 1 segments, log
> > > start offset 0 and log end offset 0 in 6 ms (kafka.log.Log)
> > >
> > > [2019-11-16 19:15:57,429] INFO [Log partition=__consumer_offsets-26,
> > > dir=/bitnami/kafka/data] Recovering unflushed segment 0
> > > (kafka.log.Log)
> > >
> > > [2019-11-16 19:15:57,429] INFO [Log partition=__consumer_offsets-26,
> > > dir=/bitnami/kafka/data] Loading producer state till offset 0 with
> > > message format version 2 (kafka.log.Log)
> > >
> > > [2019-11-16 19:15:57,431] INFO [Log partition=__consumer_offsets-26,
> > > dir=/bitnami/kafka/data] Loading producer state till offset 0 with
> > > message format version 2 (kafka.log.Log)
> > >
> > > [2019-11-16 19:15:57,432] INFO [Log partition=__consumer_offsets-26,
> > > dir=/bitnami/kafka/data] Completed load of log with 1 segments, log
> > > start offset 0 and log end offset 0 in 5 ms (kafka.log.Log)
> > >
> > > [2019-11-16 19:15:57,527] INFO [Log partition=__consumer_offsets-44,
> > > dir=/bitnami/kafka/data] Recovering unflushed segment 0
> > > (kafka.log.Log)
> > >
> > > [2019-11-16 19:15:57,529] INFO [Log partition=__consumer_offsets-44,
> > > dir=/bitnami/kafka/data] Loading producer state till offset 0 with
> > > message format version 2 (kafka.log.Log)
> > >
> > > [2019-11-16 19:15:57,533] INFO [Log partition=__consumer_offsets-44,
> > > dir=/bitnami/kafka/data] Loading producer state till offset 0 with
> > > message format version 2 (kafka.log.Log)
> > >
> > > [2019-11-16 19:15:57,534] INFO [Log partition=__consumer_offsets-44,
> > > dir=/bitnami/kafka/data] Completed load of log with 1 segments, log
> > > start offset 0 and log end offset 0 in 8 ms (kafka.log.Log)
> > >
> > > [2019-11-16 19:15:57,634] INFO [Log partition=__consumer_offsets-25,
> > > dir=/bitnami/kafka/data] Recovering unflushed segment 0
> > > (kafka.log.Log)
> > >
> > > [2019-11-16 19:15:57,635] INFO [Log partition=__consumer_offsets-25,
> > > dir=/bitnami/kafka/data] Loading producer state till offset 0 with
> > > message format version 2 (kafka.log.Log)
> > >
> > > [2019-11-16 19:15:57,637] INFO [Log partition=__consumer_offsets-25,
> > > dir=/bitnami/kafka/data] Loading producer state till offset 0 with
> > > message format version 2 (kafka.log.Log)
> > >
> > > [2019-11-16 19:15:57,638] INFO [Log partition=__consumer_offsets-25,
> > > dir=/bitnami/kafka/data] Completed load of log with 1 segments, log
> > > start offset 0 and log end offset 0 in 7 ms (kafka.log.Log)
> > >
> > > [2019-11-16 19:15:57,730] INFO [Log partition=__consumer_offsets-8,
> > > dir=/bitnami/kafka/data] Recovering unflushed segment 0
> > > (kafka.log.Log)
> > >
> > > [2019-11-16 19:15:57,730] INFO [Log partition=__consumer_offsets-8,
> > > dir=/bitnami/kafka/data] Loading producer state till offset 0 with
> > > message format version 2 (kafka.log.Log)
> > >
> > > [2019-11-16 19:15:57,733] INFO [Log partition=__consumer_offsets-8,
> > > dir=/bitnami/kafka/data] Loading producer state till offset 0 with
> > > message format version 2 (kafka.log.Log)
> > >
> > > [2019-11-16 19:15:57,733] INFO [Log partition=__consumer_offsets-8,
> > > dir=/bitnami/kafka/data] Completed load of log with 1 segments, log
> > > start offset 0 and log end offset 0 in 5 ms (kafka.log.Log)
> > >
> > > [2019-11-16 19:15:57,741] INFO [Log partition=batch.alarm-0,
> > > dir=/bitnami/kafka/data] Recovering unflushed segment 0
> > > (kafka.log.Log)
> > >
> > > [2019-11-16 19:15:57,826] INFO [Log partition=batch.alarm-0,
> > > dir=/bitnami/kafka/data] Loading producer state till offset 0 with
> > > message format version 2 (kafka.log.Log)
> > >
> > > [2019-11-16 19:15:57,830] INFO [Log partition=batch.alarm-0,
> > > dir=/bitnami/kafka/data] Loading producer state till offset 0 with
> > > message format version 2 (kafka.log.Log)
> > >
> > > [2019-11-16 19:15:57,833] INFO [Log partition=batch.alarm-0,
> > > dir=/bitnami/kafka/data] Completed load of log with 1 segments, log
> > > start offset 0 and log end offset 0 in 94 ms (kafka.log.Log)
> > >
> > > [2019-11-16 19:15:57,936] INFO [Log partition=__consumer_offsets-38,
> > > dir=/bitnami/kafka/data] Recovering unflushed segment 33982499
> > > (kafka.log.Log)
> > >
> > > [2019-11-16 19:15:57,937] INFO [Log partition=__consumer_offsets-38,
> > > dir=/bitnami/kafka/data] Loading producer state till offset 33982499
> > > with message format version 2 (kafka.log.Log)
> > >
> > > [2019-11-16 19:15:57,941] INFO [ProducerStateManager
> > > partition=__consumer_offsets-38] Loading producer state from snapshot
> > > file
> > > '/bitnami/kafka/data/__consumer_offsets-38/00000000000033982499.snapshot'
> > > (kafka.log.ProducerStateManager)
> > >
> > > [2019-11-16 19:16:10,208] INFO Terminating process due to signal
> > > SIGTERM
> > > (org.apache.kafka.common.utils.LoggingSignalHandler)
> > >
> > > [2019-11-16 19:16:10,217] INFO [KafkaServer id=1012] shutting down
> > > (kafka.server.KafkaServer)
> > >
> > > [2019-11-16 19:16:10,226] ERROR [KafkaServer id=1012] Fatal error
> > > during KafkaServer shutdown. (kafka.server.KafkaServer)
> > >
> > > java.lang.IllegalStateException: Kafka server is still starting up,
> > > cannot shut down!
> > >
> > >                 at
> > > kafka.server.KafkaServer.shutdown(KafkaServer.scala:584)
> > >
> > >                 at
> > > kafka.server.KafkaServerStartable.shutdown(KafkaServerStartable.scala:
> > > 48)
> > >
> > >                 at kafka.Kafka$$anon$1.run(Kafka.scala:81)
> > >
> > > [2019-11-16 19:16:10,233] ERROR Halting Kafka.
> > > (kafka.server.KafkaServerStartable)
> > >
> > >
> > >
> > > Kind Regards
> > >
> > > Oliver
> > >
> > >
> >
> >
> >
>

Re: Kafka Broker do not recover after crash

Posted by "M. Manna" <ma...@gmail.com>.
Hi,

On Sat, 16 Nov 2019 at 19:54, Oliver Eckle <ie...@gmx.de> wrote:

> Hi,
>
> yes, it is intentional, but only because I don't know better and wanted to
> save a few resources.
>

I never understood the benefit of running more brokers than replicas with
the intention of saving resources. A lot of people do that, and the Kafka
community seems to be okay with it (i.e. there is no documentation or
caution against doing it). But since you are running three brokers anyway,
please make sure you use them to their full extent.
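
For example (just a sketch: the topic name and the bootstrap address are
placeholders), on a 3-broker cluster a topic that tolerates the loss of one
broker would be created like this:

  # Create a topic with one replica on each of the three brokers.
  kafka-topics.sh --bootstrap-server kafka:9092 --create \
    --topic example-topic --partitions 3 --replication-factor 3

Setting default.replication.factor=3, offsets.topic.replication.factor=3 and
min.insync.replicas=2 in server.properties makes auto-created and internal
topics follow the same rule.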

For your case, I believe the log and index files stored on the affected
broker (or rather, on the PV attached to it, if you have one) may have been
corrupted.
The best way (rather than debugging and investigating the logs endlessly) is
to simply delete the pod and let it start again. Also, make sure that it
doesn't refer to the old files (if you have a PV/StatefulSet with it). It's
important that upon restart the broker rebuilds its data files itself rather
than reading the previously stored ones.
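
Something like this might do it (just a sketch: the pod name and namespace
are taken from your log, while the PVC name "data-kafka-1" is only a guess
based on the chart's usual naming):

  # Delete the crash-looping pod; the StatefulSet recreates it.
  kubectl delete pod kafka-1 -n bd-iot
  # Only if the volume itself is corrupted: discard the local copy of the
  # data so the recreated broker re-replicates it from the other brokers.
  kubectl delete pvc data-kafka-1 -n bd-iot

With a replication factor of 2, check first that every partition on this
broker still has an in-sync replica on another broker before you discard
the volume.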

Try that and see how it goes.

Thanks,


> From your answer, I guess the preferred way is having a replication factor of 3?
>
>
> -----Original Message-----
> From: M. Manna <ma...@gmail.com>
> Sent: Saturday, 16 November 2019 20:27
> To: users@kafka.apache.org
> Subject: Re: Kafka Broker do not recover after crash
>
> Hi,
>
> On Sat, 16 Nov 2019 at 19:21, Oliver Eckle <ie...@gmx.de> wrote:
>
> > Hello,
> >
> >
> >
> > having a Kafka cluster running in Kubernetes with 3 brokers and all
> > replication factors (topic, offsets) set to 2.
>
>
> This sounds strange. You have 3 brokers and replication set to 2. Is this
> intentional?
>
>
> >
> > For whatever reason, one of the brokers crashed and restarted, and since
> > then it has been stuck in some kind of restart/crash loop.
> >
> > Any idea how to recover?
> >
> >
> >
> > Whole Logfile is like that:
> >
> >
> >
> > 19:15:42.58
> >
> > 19:15:42.58 Welcome to the Bitnami kafka container
> >
> > 19:15:42.58 Subscribe to project updates by watching
> > https://github.com/bitnami/bitnami-docker-kafka
> >
> > 19:15:42.58 Submit issues and feature requests at
> > https://github.com/bitnami/bitnami-docker-kafka/issues
> >
> > 19:15:42.58 Send us your feedback at containers@bitnami.com
> >
> > 19:15:42.59
> >
> > 19:15:42.59 INFO  ==> ** Starting Kafka setup **
> >
> > 19:15:42.83 WARN  ==> You set the environment variable
> > ALLOW_PLAINTEXT_LISTENER=yes. For safety reasons, do not use this flag
> > in a production environment.
> >
> > 19:15:42.84 INFO  ==> Initializing Kafka...
> >
> > 19:15:42.84 INFO  ==> No injected configuration files found, creating
> > default config files
> >
> > 19:15:43.83 INFO  ==> ** Kafka setup finished! **
> >
> >
> >
> > 19:15:43.84 INFO  ==> ** Starting Kafka **
> >
> > [2019-11-16 19:15:49,625] INFO Registered
> > kafka:type=kafka.Log4jController MBean
> > (kafka.utils.Log4jControllerRegistration$)
> >
> > [2019-11-16 19:15:52,933] INFO Registered signal handlers for TERM,
> > INT, HUP
> > (org.apache.kafka.common.utils.LoggingSignalHandler)
> >
> > [2019-11-16 19:15:52,934] INFO starting (kafka.server.KafkaServer)
> >
> > [2019-11-16 19:15:52,935] INFO Connecting to zookeeper on
> > kafka-zookeeper
> > (kafka.server.KafkaServer)
> >
> > [2019-11-16 19:15:53,230] INFO [ZooKeeperClient Kafka server]
> > Initializing a new session to kafka-zookeeper.
> > (kafka.zookeeper.ZooKeeperClient)
> >
> > [2019-11-16 19:15:53,331] INFO Client
> >
> > environment:zookeeper.version=3.4.14-4c25d480e66aadd371de8bd2fd8da255a
> > c140bcf, built on 03/06/2019 16:18 GMT
> > (org.apache.zookeeper.ZooKeeper)
> >
> > [2019-11-16 19:15:53,331] INFO Client
> > environment:host.name=kafka-1.kafka-headless.bd-iot.svc.cluster.local
> > (org.apache.zookeeper.ZooKeeper)
> >
> > [2019-11-16 19:15:53,331] INFO Client
> > environment:java.version=1.8.0_232
> > (org.apache.zookeeper.ZooKeeper)
> >
> > [2019-11-16 19:15:53,331] INFO Client
> > environment:java.vendor=AdoptOpenJDK
> > (org.apache.zookeeper.ZooKeeper)
> >
> > [2019-11-16 19:15:53,332] INFO Client
> > environment:java.home=/opt/bitnami/java
> > (org.apache.zookeeper.ZooKeeper)
> >
> > [2019-11-16 19:15:53,332] INFO Client
> >
> >
> environment:java.class.path=/opt/bitnami/kafka/bin/../libs/activation-1.1.1.
> >
> > jar:/opt/bitnami/kafka/bin/../libs/aopalliance-repackaged-2.5.0.jar:/o
> > pt/bit
> >
> > nami/kafka/bin/../libs/argparse4j-0.7.0.jar:/opt/bitnami/kafka/bin/../
> > libs/a
> >
> > udience-annotations-0.5.0.jar:/opt/bitnami/kafka/bin/../libs/commons-l
> > ang3-3
> >
> > .8.1.jar:/opt/bitnami/kafka/bin/../libs/connect-api-2.3.1.jar:/opt/bit
> > nami/k
> >
> > afka/bin/../libs/connect-basic-auth-extension-2.3.1.jar:/opt/bitnami/k
> > afka/b
> >
> > in/../libs/connect-file-2.3.1.jar:/opt/bitnami/kafka/bin/../libs/conne
> > ct-jso
> >
> > n-2.3.1.jar:/opt/bitnami/kafka/bin/../libs/connect-runtime-2.3.1.jar:/
> > opt/bi
> >
> > tnami/kafka/bin/../libs/connect-transforms-2.3.1.jar:/opt/bitnami/kafk
> > a/bin/
> >
> > ../libs/guava-20.0.jar:/opt/bitnami/kafka/bin/../libs/hk2-api-2.5.0.ja
> > r:/opt
> >
> > /bitnami/kafka/bin/../libs/hk2-locator-2.5.0.jar:/opt/bitnami/kafka/bi
> > n/../l
> >
> > ibs/hk2-utils-2.5.0.jar:/opt/bitnami/kafka/bin/../libs/jackson-annotat
> > ions-2
> >
> > .10.0.jar:/opt/bitnami/kafka/bin/../libs/jackson-core-2.10.0.jar:/opt/
> > bitnam
> >
> > i/kafka/bin/../libs/jackson-databind-2.10.0.jar:/opt/bitnami/kafka/bin
> > /../li
> >
> > bs/jackson-dataformat-csv-2.10.0.jar:/opt/bitnami/kafka/bin/../libs/ja
> > ckson-
> >
> > datatype-jdk8-2.10.0.jar:/opt/bitnami/kafka/bin/../libs/jackson-jaxrs-
> > base-2
> >
> >
> .10.0.jar:/opt/bitnami/kafka/bin/../libs/jackson-jaxrs-json-provider-2.10.0.
> >
> > jar:/opt/bitnami/kafka/bin/../libs/jackson-module-jaxb-annotations-2.1
> > 0.0.ja
> >
> > r:/opt/bitnami/kafka/bin/../libs/jackson-module-paranamer-2.10.0.jar:/
> > opt/bi
> >
> > tnami/kafka/bin/../libs/jackson-module-scala_2.11-2.10.0.jar:/opt/bitn
> > ami/ka
> >
> > fka/bin/../libs/jakarta.activation-api-1.2.1.jar:/opt/bitnami/kafka/bi
> > n/../l
> >
> >
> ibs/jakarta.annotation-api-1.3.4.jar:/opt/bitnami/kafka/bin/../libs/jakarta.
> > inject-2.5.0.jar:/opt/bitnami/kafka/bin/../libs/jakarta.ws
> > .rs-api-2.1.5.jar:
> >
> > /opt/bitnami/kafka/bin/../libs/jakarta.xml.bind-api-2.3.2.jar:/opt/bit
> > nami/k
> >
> > afka/bin/../libs/javassist-3.22.0-CR2.jar:/opt/bitnami/kafka/bin/../li
> > bs/jav
> >
> >
> ax.servlet-api-3.1.0.jar:/opt/bitnami/kafka/bin/../libs/javax.ws.rs-api-2.1.
> >
> > 1.jar:/opt/bitnami/kafka/bin/../libs/jaxb-api-2.3.0.jar:/opt/bitnami/k
> > afka/b
> >
> > in/../libs/jersey-client-2.28.jar:/opt/bitnami/kafka/bin/../libs/jerse
> > y-comm
> >
> > on-2.28.jar:/opt/bitnami/kafka/bin/../libs/jersey-container-servlet-2.
> > 28.jar
> >
> > :/opt/bitnami/kafka/bin/../libs/jersey-container-servlet-core-2.28.jar
> > :/opt/
> >
> > bitnami/kafka/bin/../libs/jersey-hk2-2.28.jar:/opt/bitnami/kafka/bin/.
> > ./libs
> >
> > /jersey-media-jaxb-2.28.jar:/opt/bitnami/kafka/bin/../libs/jersey-serv
> > er-2.2
> >
> > 8.jar:/opt/bitnami/kafka/bin/../libs/jetty-client-9.4.18.v20190429.jar
> > :/opt/
> >
> > bitnami/kafka/bin/../libs/jetty-continuation-9.4.18.v20190429.jar:/opt
> > /bitna
> >
> > mi/kafka/bin/../libs/jetty-http-9.4.18.v20190429.jar:/opt/bitnami/kafk
> > a/bin/
> >
> > ../libs/jetty-io-9.4.18.v20190429.jar:/opt/bitnami/kafka/bin/../libs/j
> > etty-s
> >
> > ecurity-9.4.18.v20190429.jar:/opt/bitnami/kafka/bin/../libs/jetty-serv
> > er-9.4
> >
> > .18.v20190429.jar:/opt/bitnami/kafka/bin/../libs/jetty-servlet-9.4.18.
> > v20190
> >
> > 429.jar:/opt/bitnami/kafka/bin/../libs/jetty-servlets-9.4.18.v20190429
> > .jar:/
> >
> > opt/bitnami/kafka/bin/../libs/jetty-util-9.4.18.v20190429.jar:/opt/bit
> > nami/k
> >
> > afka/bin/../libs/jopt-simple-5.0.4.jar:/opt/bitnami/kafka/bin/../libs/
> > jsr305
> >
> > -3.0.2.jar:/opt/bitnami/kafka/bin/../libs/kafka-clients-2.3.1.jar:/opt
> > /bitna
> >
> >
> mi/kafka/bin/../libs/kafka-log4j-appender-2.3.1.jar:/opt/bitnami/kafka/bin/.
> >
> > ./libs/kafka-streams-2.3.1.jar:/opt/bitnami/kafka/bin/../libs/kafka-st
> > reams-
> >
> > examples-2.3.1.jar:/opt/bitnami/kafka/bin/../libs/kafka-streams-scala_
> > 2.11-2
> >
> > .3.1.jar:/opt/bitnami/kafka/bin/../libs/kafka-streams-test-utils-2.3.1
> > .jar:/
> >
> >
> opt/bitnami/kafka/bin/../libs/kafka-tools-2.3.1.jar:/opt/bitnami/kafka/bin/.
> >
> > ./libs/kafka_2.11-2.3.1-sources.jar:/opt/bitnami/kafka/bin/../libs/kaf
> > ka_2.1
> >
> > 1-2.3.1.jar:/opt/bitnami/kafka/bin/../libs/log4j-1.2.17.jar:/opt/bitna
> > mi/kaf
> >
> > ka/bin/../libs/lz4-java-1.6.0.jar:/opt/bitnami/kafka/bin/../libs/maven
> > -artif
> >
> > act-3.6.1.jar:/opt/bitnami/kafka/bin/../libs/metrics-core-2.2.0.jar:/o
> > pt/bit
> >
> > nami/kafka/bin/../libs/osgi-resource-locator-1.0.1.jar:/opt/bitnami/ka
> > fka/bi
> >
> >
> n/../libs/paranamer-2.8.jar:/opt/bitnami/kafka/bin/../libs/plexus-utils-3.2.
> >
> > 0.jar:/opt/bitnami/kafka/bin/../libs/reflections-0.9.11.jar:/opt/bitna
> > mi/kaf
> >
> > ka/bin/../libs/rocksdbjni-5.18.3.jar:/opt/bitnami/kafka/bin/../libs/sc
> > ala-li
> >
> > brary-2.11.12.jar:/opt/bitnami/kafka/bin/../libs/scala-logging_2.11-3.
> > 9.0.ja
> >
> > r:/opt/bitnami/kafka/bin/../libs/scala-reflect-2.11.12.jar:/opt/bitnam
> > i/kafk
> >
> > a/bin/../libs/slf4j-api-1.7.26.jar:/opt/bitnami/kafka/bin/../libs/slf4
> > j-log4
> >
> > j12-1.7.26.jar:/opt/bitnami/kafka/bin/../libs/snappy-java-1.1.7.3.jar:
> > /opt/b
> >
> > itnami/kafka/bin/../libs/spotbugs-annotations-3.1.9.jar:/opt/bitnami/k
> > afka/b
> >
> > in/../libs/validation-api-2.0.1.Final.jar:/opt/bitnami/kafka/bin/../li
> > bs/zkc
> >
> > lient-0.11.jar:/opt/bitnami/kafka/bin/../libs/zookeeper-3.4.14.jar:/op
> > t/bitnami/kafka/bin/../libs/zstd-jni-1.4.0-1.jar
> > (org.apache.zookeeper.ZooKeeper)
> >
> > [2019-11-16 19:15:53,333] INFO Client
> >
> > environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:
> > /lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
> >
> > [2019-11-16 19:15:53,333] INFO Client environment:java.io.tmpdir=/tmp
> > (org.apache.zookeeper.ZooKeeper)
> >
> > [2019-11-16 19:15:53,334] INFO Client environment:java.compiler=<NA>
> > (org.apache.zookeeper.ZooKeeper)
> >
> > [2019-11-16 19:15:53,334] INFO Client environment:os.name=Linux
> > (org.apache.zookeeper.ZooKeeper)
> >
> > [2019-11-16 19:15:53,334] INFO Client environment:os.arch=amd64
> > (org.apache.zookeeper.ZooKeeper)
> >
> > [2019-11-16 19:15:53,335] INFO Client
> > environment:os.version=4.15.0-1060-azure
> > (org.apache.zookeeper.ZooKeeper)
> >
> > [2019-11-16 19:15:53,336] INFO Client environment:user.name=?
> > (org.apache.zookeeper.ZooKeeper)
> >
> > [2019-11-16 19:15:53,336] INFO Client environment:user.home=?
> > (org.apache.zookeeper.ZooKeeper)
> >
> > [2019-11-16 19:15:53,336] INFO Client environment:user.dir=/
> > (org.apache.zookeeper.ZooKeeper)
> >
> > [2019-11-16 19:15:53,338] INFO Initiating client connection,
> > connectString=kafka-zookeeper sessionTimeout=6000
> > watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@31304f
> > 14
> > (org.apache.zookeeper.ZooKeeper)
> >
> > [2019-11-16 19:15:53,528] INFO [ZooKeeperClient Kafka server] Waiting
> > until connected. (kafka.zookeeper.ZooKeeperClient)
> >
> > [2019-11-16 19:15:53,545] INFO Opening socket connection to server
> > kafka-zookeeper/10.0.215.214:2181. Will not attempt to authenticate
> > using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
> >
> > [2019-11-16 19:15:53,552] INFO Socket connection established to
> > kafka-zookeeper/10.0.215.214:2181, initiating session
> > (org.apache.zookeeper.ClientCnxn)
> >
> > [2019-11-16 19:15:53,627] INFO Session establishment complete on
> > server kafka-zookeeper/10.0.215.214:2181, sessionid =
> > 0x10000810b780070, negotiated timeout = 6000
> > (org.apache.zookeeper.ClientCnxn)
> >
> > [2019-11-16 19:15:53,630] INFO [ZooKeeperClient Kafka server] Connected.
> > (kafka.zookeeper.ZooKeeperClient)
> >
> > [2019-11-16 19:15:55,034] INFO Cluster ID = dvSQ1W2US72rcqGef9tm6w
> > (kafka.server.KafkaServer)
> >
> > [2019-11-16 19:15:55,637] INFO KafkaConfig values:
> >
> >                 advertised.host.name = null
> >
> >                 advertised.listeners = PLAINTEXT://kafka-1.kafka-headless.bd-iot.svc.cluster.local:9092
> >
> >                 advertised.port = null
> >
> >                 alter.config.policy.class.name = null
> >
> >                 alter.log.dirs.replication.quota.window.num = 11
> >
> >                 alter.log.dirs.replication.quota.window.size.seconds = 1
> >
> >                 authorizer.class.name =
> >
> >                 auto.create.topics.enable = true
> >
> >                 auto.leader.rebalance.enable = true
> >
> >                 background.threads = 10
> >
> >                 broker.id = -1
> >
> >                 broker.id.generation.enable = true
> >
> >                 broker.rack = null
> >
> >                 client.quota.callback.class = null
> >
> >                 compression.type = producer
> >
> >                 connection.failed.authentication.delay.ms = 100
> >
> >                 connections.max.idle.ms = 600000
> >
> >                 connections.max.reauth.ms = 0
> >
> >                 control.plane.listener.name = null
> >
> >                 controlled.shutdown.enable = true
> >
> >                 controlled.shutdown.max.retries = 3
> >
> >                 controlled.shutdown.retry.backoff.ms = 5000
> >
> >                 controller.socket.timeout.ms = 30000
> >
> >                 create.topic.policy.class.name = null
> >
> >                 default.replication.factor = 2
> >
> >                 delegation.token.expiry.check.interval.ms = 3600000
> >
> >                 delegation.token.expiry.time.ms = 86400000
> >
> >                 delegation.token.master.key = null
> >
> >                 delegation.token.max.lifetime.ms = 604800000
> >
> >                 delete.records.purgatory.purge.interval.requests = 1
> >
> >                 delete.topic.enable = true
> >
> >                 fetch.purgatory.purge.interval.requests = 1000
> >
> >                 group.initial.rebalance.delay.ms = 0
> >
> >                 group.max.session.timeout.ms = 1800000
> >
> >                 group.max.size = 2147483647
> >
> >                 group.min.session.timeout.ms = 6000
> >
> >                 host.name =
> >
> >                 inter.broker.listener.name = null
> >
> >                 inter.broker.protocol.version = 2.3-IV1
> >
> >                 kafka.metrics.polling.interval.secs = 10
> >
> >                 kafka.metrics.reporters = []
> >
> >                 leader.imbalance.check.interval.seconds = 300
> >
> >                 leader.imbalance.per.broker.percentage = 10
> >
> >                 listener.security.protocol.map = PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
> >
> >                 listeners = PLAINTEXT://:9092
> >
> >                 log.cleaner.backoff.ms = 15000
> >
> >                 log.cleaner.dedupe.buffer.size = 134217728
> >
> >                 log.cleaner.delete.retention.ms = 86400000
> >
> >                 log.cleaner.enable = true
> >
> >                 log.cleaner.io.buffer.load.factor = 0.9
> >
> >                 log.cleaner.io.buffer.size = 524288
> >
> >                 log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
> >
> >                 log.cleaner.max.compaction.lag.ms = 9223372036854775807
> >
> >                 log.cleaner.min.cleanable.ratio = 0.5
> >
> >                 log.cleaner.min.compaction.lag.ms = 0
> >
> >                 log.cleaner.threads = 1
> >
> >                 log.cleanup.policy = [delete]
> >
> >                 log.dir = /tmp/kafka-logs
> >
> >                 log.dirs = /bitnami/kafka/data
> >
> >                 log.flush.interval.messages = 10000
> >
> >                 log.flush.interval.ms = 1000
> >
> >                 log.flush.offset.checkpoint.interval.ms = 60000
> >
> >                 log.flush.scheduler.interval.ms = 9223372036854775807
> >
> >                 log.flush.start.offset.checkpoint.interval.ms = 60000
> >
> >                 log.index.interval.bytes = 4096
> >
> >                 log.index.size.max.bytes = 10485760
> >
> >                 log.message.downconversion.enable = true
> >
> >                 log.message.format.version = 2.3-IV1
> >
> >                 log.message.timestamp.difference.max.ms = 9223372036854775807
> >
> >                 log.message.timestamp.type = CreateTime
> >
> >                 log.preallocate = false
> >
> >                 log.retention.bytes = 1073741824
> >
> >                 log.retention.check.interval.ms = 300000
> >
> >                 log.retention.hours = 168
> >
> >                 log.retention.minutes = null
> >
> >                 log.retention.ms = null
> >
> >                 log.roll.hours = 168
> >
> >                 log.roll.jitter.hours = 0
> >
> >                 log.roll.jitter.ms = null
> >
> >                 log.roll.ms = null
> >
> >                 log.segment.bytes = 1073741824
> >
> >                 log.segment.delete.delay.ms = 60000
> >
> >                 max.connections = 2147483647
> >
> >                 max.connections.per.ip = 2147483647
> >
> >                 max.connections.per.ip.overrides =
> >
> >                 max.incremental.fetch.session.cache.slots = 1000
> >
> >                 message.max.bytes = 1000012
> >
> >                 metric.reporters = []
> >
> >                 metrics.num.samples = 2
> >
> >                 metrics.recording.level = INFO
> >
> >                 metrics.sample.window.ms = 30000
> >
> >                 min.insync.replicas = 1
> >
> >                 num.io.threads = 8
> >
> >                 num.network.threads = 3
> >
> >                 num.partitions = 1
> >
> >                 num.recovery.threads.per.data.dir = 1
> >
> >                 num.replica.alter.log.dirs.threads = null
> >
> >                 num.replica.fetchers = 1
> >
> >                 offset.metadata.max.bytes = 4096
> >
> >                 offsets.commit.required.acks = -1
> >
> >                 offsets.commit.timeout.ms = 5000
> >
> >                 offsets.load.buffer.size = 5242880
> >
> >                 offsets.retention.check.interval.ms = 600000
> >
> >                 offsets.retention.minutes = 10080
> >
> >                 offsets.topic.compression.codec = 0
> >
> >                 offsets.topic.num.partitions = 50
> >
> >                 offsets.topic.replication.factor = 2
> >
> >                 offsets.topic.segment.bytes = 104857600
> >
> >                 password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
> >
> >                 password.encoder.iterations = 4096
> >
> >                 password.encoder.key.length = 128
> >
> >                 password.encoder.keyfactory.algorithm = null
> >
> >                 password.encoder.old.secret = null
> >
> >                 password.encoder.secret = null
> >
> >                 port = 9092
> >
> >                 principal.builder.class = null
> >
> >                 producer.purgatory.purge.interval.requests = 1000
> >
> >                 queued.max.request.bytes = -1
> >
> >                 queued.max.requests = 500
> >
> >                 quota.consumer.default = 9223372036854775807
> >
> >                 quota.producer.default = 9223372036854775807
> >
> >                 quota.window.num = 11
> >
> >                 quota.window.size.seconds = 1
> >
> >                 replica.fetch.backoff.ms = 1000
> >
> >                 replica.fetch.max.bytes = 1048576
> >
> >                 replica.fetch.min.bytes = 1
> >
> >                 replica.fetch.response.max.bytes = 10485760
> >
> >                 replica.fetch.wait.max.ms = 500
> >
> >                 replica.high.watermark.checkpoint.interval.ms = 5000
> >
> >                 replica.lag.time.max.ms = 10000
> >
> >                 replica.socket.receive.buffer.bytes = 65536
> >
> >                 replica.socket.timeout.ms = 30000
> >
> >                 replication.quota.window.num = 11
> >
> >                 replication.quota.window.size.seconds = 1
> >
> >                 request.timeout.ms = 30000
> >
> >                 reserved.broker.max.id = 1000
> >
> >                 sasl.client.callback.handler.class = null
> >
> >                 sasl.enabled.mechanisms = [GSSAPI]
> >
> >                 sasl.jaas.config = null
> >
> >                 sasl.kerberos.kinit.cmd = /usr/bin/kinit
> >
> >                 sasl.kerberos.min.time.before.relogin = 60000
> >
> >                 sasl.kerberos.principal.to.local.rules = [DEFAULT]
> >
> >                 sasl.kerberos.service.name = null
> >
> >                 sasl.kerberos.ticket.renew.jitter = 0.05
> >
> >                 sasl.kerberos.ticket.renew.window.factor = 0.8
> >
> >                 sasl.login.callback.handler.class = null
> >
> >                 sasl.login.class = null
> >
> >                 sasl.login.refresh.buffer.seconds = 300
> >
> >                 sasl.login.refresh.min.period.seconds = 60
> >
> >                 sasl.login.refresh.window.factor = 0.8
> >
> >                 sasl.login.refresh.window.jitter = 0.05
> >
> >                 sasl.mechanism.inter.broker.protocol = GSSAPI
> >
> >                 sasl.server.callback.handler.class = null
> >
> >                 security.inter.broker.protocol = PLAINTEXT
> >
> >                 socket.receive.buffer.bytes = 102400
> >
> >                 socket.request.max.bytes = 104857600
> >
> >                 socket.send.buffer.bytes = 102400
> >
> >                 ssl.cipher.suites = []
> >
> >                 ssl.client.auth = none
> >
> >                 ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
> >
> >                 ssl.endpoint.identification.algorithm = https
> >
> >                 ssl.key.password = null
> >
> >                 ssl.keymanager.algorithm = SunX509
> >
> >                 ssl.keystore.location = null
> >
> >                 ssl.keystore.password = null
> >
> >                 ssl.keystore.type = JKS
> >
> >                 ssl.principal.mapping.rules = [DEFAULT]
> >
> >                 ssl.protocol = TLS
> >
> >                 ssl.provider = null
> >
> >                 ssl.secure.random.implementation = null
> >
> >                 ssl.trustmanager.algorithm = PKIX
> >
> >                 ssl.truststore.location = null
> >
> >                 ssl.truststore.password = null
> >
> >                 ssl.truststore.type = JKS
> >
> >
> >                 transaction.abort.timed.out.transaction.cleanup.interval.ms = 60000
> >
> >                 transaction.max.timeout.ms = 900000
> >
> >
> >                 transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
> >
> >                 transaction.state.log.load.buffer.size = 5242880
> >
> >                 transaction.state.log.min.isr = 2
> >
> >                 transaction.state.log.num.partitions = 50
> >
> >                 transaction.state.log.replication.factor = 2
> >
> >                 transaction.state.log.segment.bytes = 104857600
> >
> >                 transactional.id.expiration.ms = 604800000
> >
> >                 unclean.leader.election.enable = false
> >
> >                 zookeeper.connect = kafka-zookeeper
> >
> >                 zookeeper.connection.timeout.ms = 6000
> >
> >                 zookeeper.max.in.flight.requests = 10
> >
> >                 zookeeper.session.timeout.ms = 6000
> >
> >                 zookeeper.set.acl = false
> >
> >                 zookeeper.sync.time.ms = 2000
> >
> > (kafka.server.KafkaConfig)
> >
> > [2019-11-16 19:15:55,829] INFO KafkaConfig values:
> >
> >                 advertised.host.name = null
> >
> >                 advertised.listeners = PLAINTEXT://kafka-1.kafka-headless.bd-iot.svc.cluster.local:9092
> >
> >                 advertised.port = null
> >
> >                 alter.config.policy.class.name = null
> >
> >                 alter.log.dirs.replication.quota.window.num = 11
> >
> >                 alter.log.dirs.replication.quota.window.size.seconds = 1
> >
> >                 authorizer.class.name =
> >
> >                 auto.create.topics.enable = true
> >
> >                 auto.leader.rebalance.enable = true
> >
> >                 background.threads = 10
> >
> >                 broker.id = -1
> >
> >                 broker.id.generation.enable = true
> >
> >                 broker.rack = null
> >
> >                 client.quota.callback.class = null
> >
> >                 compression.type = producer
> >
> >                 connection.failed.authentication.delay.ms = 100
> >
> >                 connections.max.idle.ms = 600000
> >
> >                 connections.max.reauth.ms = 0
> >
> >                 control.plane.listener.name = null
> >
> >                 controlled.shutdown.enable = true
> >
> >                 controlled.shutdown.max.retries = 3
> >
> >                 controlled.shutdown.retry.backoff.ms = 5000
> >
> >                 controller.socket.timeout.ms = 30000
> >
> >                 create.topic.policy.class.name = null
> >
> >                 default.replication.factor = 2
> >
> >                 delegation.token.expiry.check.interval.ms = 3600000
> >
> >                 delegation.token.expiry.time.ms = 86400000
> >
> >                 delegation.token.master.key = null
> >
> >                 delegation.token.max.lifetime.ms = 604800000
> >
> >                 delete.records.purgatory.purge.interval.requests = 1
> >
> >                 delete.topic.enable = true
> >
> >                 fetch.purgatory.purge.interval.requests = 1000
> >
> >                 group.initial.rebalance.delay.ms = 0
> >
> >                 group.max.session.timeout.ms = 1800000
> >
> >                 group.max.size = 2147483647
> >
> >                 group.min.session.timeout.ms = 6000
> >
> >                 host.name =
> >
> >                 inter.broker.listener.name = null
> >
> >                 inter.broker.protocol.version = 2.3-IV1
> >
> >                 kafka.metrics.polling.interval.secs = 10
> >
> >                 kafka.metrics.reporters = []
> >
> >                 leader.imbalance.check.interval.seconds = 300
> >
> >                 leader.imbalance.per.broker.percentage = 10
> >
> >                 listener.security.protocol.map = PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
> >
> >                 listeners = PLAINTEXT://:9092
> >
> >                 log.cleaner.backoff.ms = 15000
> >
> >                 log.cleaner.dedupe.buffer.size = 134217728
> >
> >                 log.cleaner.delete.retention.ms = 86400000
> >
> >                 log.cleaner.enable = true
> >
> >                 log.cleaner.io.buffer.load.factor = 0.9
> >
> >                 log.cleaner.io.buffer.size = 524288
> >
> >                 log.cleaner.io.max.bytes.per.second =
> > 1.7976931348623157E308
> >
> >                 log.cleaner.max.compaction.lag.ms =
> > 9223372036854775807
> >
> >                 log.cleaner.min.cleanable.ratio = 0.5
> >
> >                 log.cleaner.min.compaction.lag.ms = 0
> >
> >                 log.cleaner.threads = 1
> >
> >                 log.cleanup.policy = [delete]
> >
> >                 log.dir = /tmp/kafka-logs
> >
> >                 log.dirs = /bitnami/kafka/data
> >
> >                 log.flush.interval.messages = 10000
> >
> >                 log.flush.interval.ms = 1000
> >
> >                 log.flush.offset.checkpoint.interval.ms = 60000
> >
> >                 log.flush.scheduler.interval.ms = 9223372036854775807
> >
> >                 log.flush.start.offset.checkpoint.interval.ms = 60000
> >
> >                 log.index.interval.bytes = 4096
> >
> >                 log.index.size.max.bytes = 10485760
> >
> >                 log.message.downconversion.enable = true
> >
> >                 log.message.format.version = 2.3-IV1
> >
> >                 log.message.timestamp.difference.max.ms =
> > 9223372036854775807
> >
> >                 log.message.timestamp.type = CreateTime
> >
> >                 log.preallocate = false
> >
> >                 log.retention.bytes = 1073741824
> >
> >                 log.retention.check.interval.ms = 300000
> >
> >                 log.retention.hours = 168
> >
> >                 log.retention.minutes = null
> >
> >                 log.retention.ms = null
> >
> >                 log.roll.hours = 168
> >
> >                 log.roll.jitter.hours = 0
> >
> >                 log.roll.jitter.ms = null
> >
> >                 log.roll.ms = null
> >
> >                 log.segment.bytes = 1073741824
> >
> >                 log.segment.delete.delay.ms = 60000
> >
> >                 max.connections = 2147483647
> >
> >                 max.connections.per.ip = 2147483647
> >
> >                 max.connections.per.ip.overrides =
> >
> >                 max.incremental.fetch.session.cache.slots = 1000
> >
> >                 message.max.bytes = 1000012
> >
> >                 metric.reporters = []
> >
> >                 metrics.num.samples = 2
> >
> >                 metrics.recording.level = INFO
> >
> >                 metrics.sample.window.ms = 30000
> >
> >                 min.insync.replicas = 1
> >
> >                 num.io.threads = 8
> >
> >                 num.network.threads = 3
> >
> >                 num.partitions = 1
> >
> >                 num.recovery.threads.per.data.dir = 1
> >
> >                 num.replica.alter.log.dirs.threads = null
> >
> >                 num.replica.fetchers = 1
> >
> >                 offset.metadata.max.bytes = 4096
> >
> >                 offsets.commit.required.acks = -1
> >
> >                 offsets.commit.timeout.ms = 5000
> >
> >                 offsets.load.buffer.size = 5242880
> >
> >                 offsets.retention.check.interval.ms = 600000
> >
> >                 offsets.retention.minutes = 10080
> >
> >                 offsets.topic.compression.codec = 0
> >
> >                 offsets.topic.num.partitions = 50
> >
> >                 offsets.topic.replication.factor = 2
> >
> >                 offsets.topic.segment.bytes = 104857600
> >
> >                 password.encoder.cipher.algorithm =
> > AES/CBC/PKCS5Padding
> >
> >                 password.encoder.iterations = 4096
> >
> >                 password.encoder.key.length = 128
> >
> >                 password.encoder.keyfactory.algorithm = null
> >
> >                 password.encoder.old.secret = null
> >
> >                 password.encoder.secret = null
> >
> >                 port = 9092
> >
> >                 principal.builder.class = null
> >
> >                 producer.purgatory.purge.interval.requests = 1000
> >
> >                 queued.max.request.bytes = -1
> >
> >                 queued.max.requests = 500
> >
> >                 quota.consumer.default = 9223372036854775807
> >
> >                 quota.producer.default = 9223372036854775807
> >
> >                 quota.window.num = 11
> >
> >                 quota.window.size.seconds = 1
> >
> >                 replica.fetch.backoff.ms = 1000
> >
> >                 replica.fetch.max.bytes = 1048576
> >
> >                 replica.fetch.min.bytes = 1
> >
> >                 replica.fetch.response.max.bytes = 10485760
> >
> >                 replica.fetch.wait.max.ms = 500
> >
> >                 replica.high.watermark.checkpoint.interval.ms = 5000
> >
> >                 replica.lag.time.max.ms = 10000
> >
> >                 replica.socket.receive.buffer.bytes = 65536
> >
> >                 replica.socket.timeout.ms = 30000
> >
> >                 replication.quota.window.num = 11
> >
> >                 replication.quota.window.size.seconds = 1
> >
> >                 request.timeout.ms = 30000
> >
> >                 reserved.broker.max.id = 1000
> >
> >                 sasl.client.callback.handler.class = null
> >
> >                 sasl.enabled.mechanisms = [GSSAPI]
> >
> >                 sasl.jaas.config = null
> >
> >                 sasl.kerberos.kinit.cmd = /usr/bin/kinit
> >
> >                 sasl.kerberos.min.time.before.relogin = 60000
> >
> >                 sasl.kerberos.principal.to.local.rules = [DEFAULT]
> >
> >                 sasl.kerberos.service.name = null
> >
> >                 sasl.kerberos.ticket.renew.jitter = 0.05
> >
> >                 sasl.kerberos.ticket.renew.window.factor = 0.8
> >
> >                 sasl.login.callback.handler.class = null
> >
> >                 sasl.login.class = null
> >
> >                 sasl.login.refresh.buffer.seconds = 300
> >
> >                 sasl.login.refresh.min.period.seconds = 60
> >
> >                 sasl.login.refresh.window.factor = 0.8
> >
> >                 sasl.login.refresh.window.jitter = 0.05
> >
> >                 sasl.mechanism.inter.broker.protocol = GSSAPI
> >
> >                 sasl.server.callback.handler.class = null
> >
> >                 security.inter.broker.protocol = PLAINTEXT
> >
> >                 socket.receive.buffer.bytes = 102400
> >
> >                 socket.request.max.bytes = 104857600
> >
> >                 socket.send.buffer.bytes = 102400
> >
> >                 ssl.cipher.suites = []
> >
> >                 ssl.client.auth = none
> >
> >                 ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
> >
> >                 ssl.endpoint.identification.algorithm = https
> >
> >                 ssl.key.password = null
> >
> >                 ssl.keymanager.algorithm = SunX509
> >
> >                 ssl.keystore.location = null
> >
> >                 ssl.keystore.password = null
> >
> >                 ssl.keystore.type = JKS
> >
> >                 ssl.principal.mapping.rules = [DEFAULT]
> >
> >                 ssl.protocol = TLS
> >
> >                 ssl.provider = null
> >
> >                 ssl.secure.random.implementation = null
> >
> >                 ssl.trustmanager.algorithm = PKIX
> >
> >                 ssl.truststore.location = null
> >
> >                 ssl.truststore.password = null
> >
> >                 ssl.truststore.type = JKS
> >
> >
> > transaction.abort.timed.out.transaction.cleanup.interval.ms
> > = 60000
> >
> >                 transaction.max.timeout.ms = 900000
> >
> >
> > transaction.remove.expired.transaction.cleanup.interval.ms
> > =
> > 3600000
> >
> >                 transaction.state.log.load.buffer.size = 5242880
> >
> >                 transaction.state.log.min.isr = 2
> >
> >                 transaction.state.log.num.partitions = 50
> >
> >                 transaction.state.log.replication.factor = 2
> >
> >                 transaction.state.log.segment.bytes = 104857600
> >
> >                 transactional.id.expiration.ms = 604800000
> >
> >                 unclean.leader.election.enable = false
> >
> >                 zookeeper.connect = kafka-zookeeper
> >
> >                 zookeeper.connection.timeout.ms = 6000
> >
> >                 zookeeper.max.in.flight.requests = 10
> >
> >                 zookeeper.session.timeout.ms = 6000
> >
> >                 zookeeper.set.acl = false
> >
> >                 zookeeper.sync.time.ms = 2000
> >
> > (kafka.server.KafkaConfig)
> >
> > [2019-11-16 19:15:56,039] INFO [ThrottledChannelReaper-Fetch]:
> > Starting
> > (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
> >
> > [2019-11-16 19:15:56,044] INFO [ThrottledChannelReaper-Produce]:
> > Starting
> > (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
> >
> > [2019-11-16 19:15:56,046] INFO [ThrottledChannelReaper-Request]:
> > Starting
> > (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
> >
> > [2019-11-16 19:15:56,335] INFO Loading logs. (kafka.log.LogManager)
> >
> > [2019-11-16 19:15:56,638] INFO [Log partition=__consumer_offsets-4,
> > dir=/bitnami/kafka/data] Recovering unflushed segment 0
> > (kafka.log.Log)
> >
> > [2019-11-16 19:15:56,727] INFO [Log partition=__consumer_offsets-4,
> > dir=/bitnami/kafka/data] Loading producer state till offset 0 with
> > message format version 2 (kafka.log.Log)
> >
> > [2019-11-16 19:15:56,931] INFO [Log partition=__consumer_offsets-4,
> > dir=/bitnami/kafka/data] Loading producer state till offset 0 with
> > message format version 2 (kafka.log.Log)
> >
> > [2019-11-16 19:15:56,933] INFO [Log partition=__consumer_offsets-4,
> > dir=/bitnami/kafka/data] Completed load of log with 1 segments, log
> > start offset 0 and log end offset 0 in 399 ms (kafka.log.Log)
> >
> > [2019-11-16 19:15:57,029] INFO [Log partition=__consumer_offsets-22,
> > dir=/bitnami/kafka/data] Recovering unflushed segment 0
> > (kafka.log.Log)
> >
> > [2019-11-16 19:15:57,029] INFO [Log partition=__consumer_offsets-22,
> > dir=/bitnami/kafka/data] Loading producer state till offset 0 with
> > message format version 2 (kafka.log.Log)
> >
> > [2019-11-16 19:15:57,031] INFO [Log partition=__consumer_offsets-22,
> > dir=/bitnami/kafka/data] Loading producer state till offset 0 with
> > message format version 2 (kafka.log.Log)
> >
> > [2019-11-16 19:15:57,032] INFO [Log partition=__consumer_offsets-22,
> > dir=/bitnami/kafka/data] Completed load of log with 1 segments, log
> > start offset 0 and log end offset 0 in 6 ms (kafka.log.Log)
> >
> > [2019-11-16 19:15:57,147] INFO [Log partition=__consumer_offsets-32,
> > dir=/bitnami/kafka/data] Recovering unflushed segment 0
> > (kafka.log.Log)
> >
> > [2019-11-16 19:15:57,148] INFO [Log partition=__consumer_offsets-32,
> > dir=/bitnami/kafka/data] Loading producer state till offset 0 with
> > message format version 2 (kafka.log.Log)
> >
> > [2019-11-16 19:15:57,150] INFO [Log partition=__consumer_offsets-32,
> > dir=/bitnami/kafka/data] Loading producer state till offset 0 with
> > message format version 2 (kafka.log.Log)
> >
> > [2019-11-16 19:15:57,226] INFO [Log partition=__consumer_offsets-32,
> > dir=/bitnami/kafka/data] Completed load of log with 1 segments, log
> > start offset 0 and log end offset 0 in 189 ms (kafka.log.Log)
> >
> > [2019-11-16 19:15:57,330] INFO [Log partition=__consumer_offsets-39,
> > dir=/bitnami/kafka/data] Recovering unflushed segment 0
> > (kafka.log.Log)
> >
> > [2019-11-16 19:15:57,330] INFO [Log partition=__consumer_offsets-39,
> > dir=/bitnami/kafka/data] Loading producer state till offset 0 with
> > message format version 2 (kafka.log.Log)
> >
> > [2019-11-16 19:15:57,333] INFO [Log partition=__consumer_offsets-39,
> > dir=/bitnami/kafka/data] Loading producer state till offset 0 with
> > message format version 2 (kafka.log.Log)
> >
> > [2019-11-16 19:15:57,334] INFO [Log partition=__consumer_offsets-39,
> > dir=/bitnami/kafka/data] Completed load of log with 1 segments, log
> > start offset 0 and log end offset 0 in 6 ms (kafka.log.Log)
> >
> > [2019-11-16 19:15:57,429] INFO [Log partition=__consumer_offsets-26,
> > dir=/bitnami/kafka/data] Recovering unflushed segment 0
> > (kafka.log.Log)
> >
> > [2019-11-16 19:15:57,429] INFO [Log partition=__consumer_offsets-26,
> > dir=/bitnami/kafka/data] Loading producer state till offset 0 with
> > message format version 2 (kafka.log.Log)
> >
> > [2019-11-16 19:15:57,431] INFO [Log partition=__consumer_offsets-26,
> > dir=/bitnami/kafka/data] Loading producer state till offset 0 with
> > message format version 2 (kafka.log.Log)
> >
> > [2019-11-16 19:15:57,432] INFO [Log partition=__consumer_offsets-26,
> > dir=/bitnami/kafka/data] Completed load of log with 1 segments, log
> > start offset 0 and log end offset 0 in 5 ms (kafka.log.Log)
> >
> > [2019-11-16 19:15:57,527] INFO [Log partition=__consumer_offsets-44,
> > dir=/bitnami/kafka/data] Recovering unflushed segment 0
> > (kafka.log.Log)
> >
> > [2019-11-16 19:15:57,529] INFO [Log partition=__consumer_offsets-44,
> > dir=/bitnami/kafka/data] Loading producer state till offset 0 with
> > message format version 2 (kafka.log.Log)
> >
> > [2019-11-16 19:15:57,533] INFO [Log partition=__consumer_offsets-44,
> > dir=/bitnami/kafka/data] Loading producer state till offset 0 with
> > message format version 2 (kafka.log.Log)
> >
> > [2019-11-16 19:15:57,534] INFO [Log partition=__consumer_offsets-44,
> > dir=/bitnami/kafka/data] Completed load of log with 1 segments, log
> > start offset 0 and log end offset 0 in 8 ms (kafka.log.Log)
> >
> > [2019-11-16 19:15:57,634] INFO [Log partition=__consumer_offsets-25,
> > dir=/bitnami/kafka/data] Recovering unflushed segment 0
> > (kafka.log.Log)
> >
> > [2019-11-16 19:15:57,635] INFO [Log partition=__consumer_offsets-25,
> > dir=/bitnami/kafka/data] Loading producer state till offset 0 with
> > message format version 2 (kafka.log.Log)
> >
> > [2019-11-16 19:15:57,637] INFO [Log partition=__consumer_offsets-25,
> > dir=/bitnami/kafka/data] Loading producer state till offset 0 with
> > message format version 2 (kafka.log.Log)
> >
> > [2019-11-16 19:15:57,638] INFO [Log partition=__consumer_offsets-25,
> > dir=/bitnami/kafka/data] Completed load of log with 1 segments, log
> > start offset 0 and log end offset 0 in 7 ms (kafka.log.Log)
> >
> > [2019-11-16 19:15:57,730] INFO [Log partition=__consumer_offsets-8,
> > dir=/bitnami/kafka/data] Recovering unflushed segment 0
> > (kafka.log.Log)
> >
> > [2019-11-16 19:15:57,730] INFO [Log partition=__consumer_offsets-8,
> > dir=/bitnami/kafka/data] Loading producer state till offset 0 with
> > message format version 2 (kafka.log.Log)
> >
> > [2019-11-16 19:15:57,733] INFO [Log partition=__consumer_offsets-8,
> > dir=/bitnami/kafka/data] Loading producer state till offset 0 with
> > message format version 2 (kafka.log.Log)
> >
> > [2019-11-16 19:15:57,733] INFO [Log partition=__consumer_offsets-8,
> > dir=/bitnami/kafka/data] Completed load of log with 1 segments, log
> > start offset 0 and log end offset 0 in 5 ms (kafka.log.Log)
> >
> > [2019-11-16 19:15:57,741] INFO [Log partition=batch.alarm-0,
> > dir=/bitnami/kafka/data] Recovering unflushed segment 0
> > (kafka.log.Log)
> >
> > [2019-11-16 19:15:57,826] INFO [Log partition=batch.alarm-0,
> > dir=/bitnami/kafka/data] Loading producer state till offset 0 with
> > message format version 2 (kafka.log.Log)
> >
> > [2019-11-16 19:15:57,830] INFO [Log partition=batch.alarm-0,
> > dir=/bitnami/kafka/data] Loading producer state till offset 0 with
> > message format version 2 (kafka.log.Log)
> >
> > [2019-11-16 19:15:57,833] INFO [Log partition=batch.alarm-0,
> > dir=/bitnami/kafka/data] Completed load of log with 1 segments, log
> > start offset 0 and log end offset 0 in 94 ms (kafka.log.Log)
> >
> > [2019-11-16 19:15:57,936] INFO [Log partition=__consumer_offsets-38,
> > dir=/bitnami/kafka/data] Recovering unflushed segment 33982499
> > (kafka.log.Log)
> >
> > [2019-11-16 19:15:57,937] INFO [Log partition=__consumer_offsets-38,
> > dir=/bitnami/kafka/data] Loading producer state till offset 33982499
> > with message format version 2 (kafka.log.Log)
> >
> > [2019-11-16 19:15:57,941] INFO [ProducerStateManager
> > partition=__consumer_offsets-38] Loading producer state from snapshot
> > file
> '/bitnami/kafka/data/__consumer_offsets-38/00000000000033982499.snapshot'
> > (kafka.log.ProducerStateManager)
> >
> > [2019-11-16 19:16:10,208] INFO Terminating process due to signal
> > SIGTERM
> > (org.apache.kafka.common.utils.LoggingSignalHandler)
> >
> > [2019-11-16 19:16:10,217] INFO [KafkaServer id=1012] shutting down
> > (kafka.server.KafkaServer)
> >
> > [2019-11-16 19:16:10,226] ERROR [KafkaServer id=1012] Fatal error
> > during KafkaServer shutdown. (kafka.server.KafkaServer)
> >
> > java.lang.IllegalStateException: Kafka server is still starting up,
> > cannot shut down!
> >
> >                 at
> > kafka.server.KafkaServer.shutdown(KafkaServer.scala:584)
> >
> >                 at
> > kafka.server.KafkaServerStartable.shutdown(KafkaServerStartable.scala:
> > 48)
> >
> >                 at kafka.Kafka$$anon$1.run(Kafka.scala:81)
> >
> > [2019-11-16 19:16:10,233] ERROR Halting Kafka.
> > (kafka.server.KafkaServerStartable)
> >
> >
> >
> > Kind Regards
> >
> > Oliver
> >
> >
>
>
>

AW: Kafka Broker do not recover after crash

Posted by Oliver Eckle <ie...@gmx.de>.
Hi,

Yes, it is intentional, but only because I don't know any better and wanted to save a few resources. From your answer I take it the preferred way is a replication factor of 3?
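If so, I assume that would mean changing these broker settings (property names taken from the config dump in my log below; the values are just my guess for a 3-broker cluster, not something I have tested):

    default.replication.factor = 3
    offsets.topic.replication.factor = 3
    transaction.state.log.replication.factor = 3
    min.insync.replicas = 2
    transaction.state.log.min.isr = 2

That way one broker could be down and producers using acks=all would still get through, if I understand it correctly.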

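One more thing I noticed in the log: the broker receives SIGTERM at 19:16:10 while it is still recovering unflushed segments, and shutdown then fails with "Kafka server is still starting up, cannot shut down!". So maybe Kubernetes kills the pod before log recovery finishes? I would try giving the probes more time in the chart values, something like this (parameter names assumed from the usual Bitnami chart layout, not verified):

    livenessProbe:
      initialDelaySeconds: 120
      timeoutSeconds: 10
    readinessProbe:
      initialDelaySeconds: 60

Does that sound plausible?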

-----Original Message-----
From: M. Manna <ma...@gmail.com>
Sent: Saturday, 16 November 2019 20:27
To: users@kafka.apache.org
Subject: Re: Kafka Broker do not recover after crash

Hi,

On Sat, 16 Nov 2019 at 19:21, Oliver Eckle <ie...@gmx.de> wrote:

> Hello,
>
>
>
> having a Kafka Cluster running in Kubernetes with 3 Brokers and all 
> replikations (topic, offsets) set to 2.


This sounds strange. You have 3 brokers and replication set to 2. Is this intentional?
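You can check what a topic currently has with, for example:

    /opt/bitnami/kafka/bin/kafka-topics.sh --bootstrap-server localhost:9092 \
      --describe --topic batch.alarm

(the path is taken from the classpath in your log, and batch.alarm from the recovery messages; the output lists replication factor, leader and ISR for every partition).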


>
> For whatever reason one of the broker crash and restartes. And since 
> it circles in some kind of restart/crash loop.
>
> Any idea how to recover?
>
>
>
> Whole Logfile is like that:
>
>
>
> [38;5;6m [38;5;5m19:15:42.58 [0m
>
> [38;5;6m [38;5;5m19:15:42.58 [0m[1mWelcome to the Bitnami kafka 
> container[0m
>
> [38;5;6m [38;5;5m19:15:42.58 [0mSubscribe to project updates by 
> watching [1mhttps://github.com/bitnami/bitnami-docker-kafka[0m
> <http://github.com/bitnami/bitnami-docker-kafka%5B0m>
>
> [38;5;6m [38;5;5m19:15:42.58 [0mSubmit issues and feature requests at 
> [1mhttps://github.com/bitnami/bitnami-docker-kafka/issues[0m
> <http://github.com/bitnami/bitnami-docker-kafka/issues%5B0m>
>
> [38;5;6m [38;5;5m19:15:42.58 [0mSend us your feedback at 
> [1mcontainers@bitnami.com[0m
>
> [38;5;6m [38;5;5m19:15:42.59 [0m
>
> [38;5;6m [38;5;5m19:15:42.59 [0m[38;5;2mINFO [0m ==> ** Starting Kafka 
> setup
> **
>
> [38;5;6m [38;5;5m19:15:42.83 [0m[38;5;3mWARN [0m ==> You set the 
> environment variable ALLOW_PLAINTEXT_LISTENER=yes. For safety reasons, 
> do not use this flag in a production environment.
>
> [38;5;6m [38;5;5m19:15:42.84 [0m[38;5;2mINFO [0m ==> Initializing Kafka...
>
> [38;5;6m [38;5;5m19:15:42.84 [0m[38;5;2mINFO [0m ==> No injected 
> configuration files found, creating default config files
>
> [38;5;6m [38;5;5m19:15:43.83 [0m[38;5;2mINFO [0m ==> ** Kafka setup 
> finished! **
>
>
>
> [38;5;6m [38;5;5m19:15:43.84 [0m[38;5;2mINFO [0m ==> ** Starting Kafka 
> **
>
> [2019-11-16 19:15:49,625] INFO Registered 
> kafka:type=kafka.Log4jController MBean 
> (kafka.utils.Log4jControllerRegistration$)
>
> [2019-11-16 19:15:52,933] INFO Registered signal handlers for TERM, 
> INT, HUP
> (org.apache.kafka.common.utils.LoggingSignalHandler)
>
> [2019-11-16 19:15:52,934] INFO starting (kafka.server.KafkaServer)
>
> [2019-11-16 19:15:52,935] INFO Connecting to zookeeper on 
> kafka-zookeeper
> (kafka.server.KafkaServer)
>
> [2019-11-16 19:15:53,230] INFO [ZooKeeperClient Kafka server] 
> Initializing a new session to kafka-zookeeper. 
> (kafka.zookeeper.ZooKeeperClient)
>
> [2019-11-16 19:15:53,331] INFO Client
>
> environment:zookeeper.version=3.4.14-4c25d480e66aadd371de8bd2fd8da255a
> c140bc f, built on 03/06/2019 16:18 GMT 
> (org.apache.zookeeper.ZooKeeper)
>
> [2019-11-16 19:15:53,331] INFO Client
> environment:host.name=kafka-1.kafka-headless.bd-iot.svc.cluster.local
> (org.apache.zookeeper.ZooKeeper)
>
> [2019-11-16 19:15:53,331] INFO Client 
> environment:java.version=1.8.0_232
> (org.apache.zookeeper.ZooKeeper)
>
> [2019-11-16 19:15:53,331] INFO Client 
> environment:java.vendor=AdoptOpenJDK
> (org.apache.zookeeper.ZooKeeper)
>
> [2019-11-16 19:15:53,332] INFO Client
> environment:java.home=/opt/bitnami/java 
> (org.apache.zookeeper.ZooKeeper)
>
> [2019-11-16 19:15:53,332] INFO Client
>
> environment:java.class.path=/opt/bitnami/kafka/bin/../libs/activation-1.1.1.
>
> jar:/opt/bitnami/kafka/bin/../libs/aopalliance-repackaged-2.5.0.jar:/o
> pt/bit
>
> nami/kafka/bin/../libs/argparse4j-0.7.0.jar:/opt/bitnami/kafka/bin/../
> libs/a
>
> udience-annotations-0.5.0.jar:/opt/bitnami/kafka/bin/../libs/commons-l
> ang3-3
>
> .8.1.jar:/opt/bitnami/kafka/bin/../libs/connect-api-2.3.1.jar:/opt/bit
> nami/k
>
> afka/bin/../libs/connect-basic-auth-extension-2.3.1.jar:/opt/bitnami/k
> afka/b
>
> in/../libs/connect-file-2.3.1.jar:/opt/bitnami/kafka/bin/../libs/conne
> ct-jso
>
> n-2.3.1.jar:/opt/bitnami/kafka/bin/../libs/connect-runtime-2.3.1.jar:/
> opt/bi
>
> tnami/kafka/bin/../libs/connect-transforms-2.3.1.jar:/opt/bitnami/kafk
> a/bin/
>
> ../libs/guava-20.0.jar:/opt/bitnami/kafka/bin/../libs/hk2-api-2.5.0.ja
> r:/opt
>
> /bitnami/kafka/bin/../libs/hk2-locator-2.5.0.jar:/opt/bitnami/kafka/bi
> n/../l
>
> ibs/hk2-utils-2.5.0.jar:/opt/bitnami/kafka/bin/../libs/jackson-annotat
> ions-2
>
> .10.0.jar:/opt/bitnami/kafka/bin/../libs/jackson-core-2.10.0.jar:/opt/
> bitnam
>
> i/kafka/bin/../libs/jackson-databind-2.10.0.jar:/opt/bitnami/kafka/bin
> /../li
>
> bs/jackson-dataformat-csv-2.10.0.jar:/opt/bitnami/kafka/bin/../libs/ja
> ckson-
>
> datatype-jdk8-2.10.0.jar:/opt/bitnami/kafka/bin/../libs/jackson-jaxrs-
> base-2
>
> .10.0.jar:/opt/bitnami/kafka/bin/../libs/jackson-jaxrs-json-provider-2.10.0.
>
> jar:/opt/bitnami/kafka/bin/../libs/jackson-module-jaxb-annotations-2.1
> 0.0.ja
>
> r:/opt/bitnami/kafka/bin/../libs/jackson-module-paranamer-2.10.0.jar:/
> opt/bi
>
> tnami/kafka/bin/../libs/jackson-module-scala_2.11-2.10.0.jar:/opt/bitn
> ami/ka
>
> fka/bin/../libs/jakarta.activation-api-1.2.1.jar:/opt/bitnami/kafka/bi
> n/../l
>
> ibs/jakarta.annotation-api-1.3.4.jar:/opt/bitnami/kafka/bin/../libs/jakarta.
> inject-2.5.0.jar:/opt/bitnami/kafka/bin/../libs/jakarta.ws
> .rs-api-2.1.5.jar:
>
> /opt/bitnami/kafka/bin/../libs/jakarta.xml.bind-api-2.3.2.jar:/opt/bit
> nami/k
>
> afka/bin/../libs/javassist-3.22.0-CR2.jar:/opt/bitnami/kafka/bin/../li
> bs/jav
>
> ax.servlet-api-3.1.0.jar:/opt/bitnami/kafka/bin/../libs/javax.ws.rs-api-2.1.
>
> 1.jar:/opt/bitnami/kafka/bin/../libs/jaxb-api-2.3.0.jar:/opt/bitnami/k
> afka/b
>
> in/../libs/jersey-client-2.28.jar:/opt/bitnami/kafka/bin/../libs/jerse
> y-comm
>
> on-2.28.jar:/opt/bitnami/kafka/bin/../libs/jersey-container-servlet-2.
> 28.jar
>
> :/opt/bitnami/kafka/bin/../libs/jersey-container-servlet-core-2.28.jar
> :/opt/
>
> bitnami/kafka/bin/../libs/jersey-hk2-2.28.jar:/opt/bitnami/kafka/bin/.
> ./libs
>
> /jersey-media-jaxb-2.28.jar:/opt/bitnami/kafka/bin/../libs/jersey-serv
> er-2.2
>
> 8.jar:/opt/bitnami/kafka/bin/../libs/jetty-client-9.4.18.v20190429.jar
> :/opt/
>
> bitnami/kafka/bin/../libs/jetty-continuation-9.4.18.v20190429.jar:/opt
> /bitna
>
> mi/kafka/bin/../libs/jetty-http-9.4.18.v20190429.jar:/opt/bitnami/kafk
> a/bin/
>
> ../libs/jetty-io-9.4.18.v20190429.jar:/opt/bitnami/kafka/bin/../libs/j
> etty-s
>
> ecurity-9.4.18.v20190429.jar:/opt/bitnami/kafka/bin/../libs/jetty-serv
> er-9.4
>
> .18.v20190429.jar:/opt/bitnami/kafka/bin/../libs/jetty-servlet-9.4.18.
> v20190
>
> 429.jar:/opt/bitnami/kafka/bin/../libs/jetty-servlets-9.4.18.v20190429
> .jar:/
>
> opt/bitnami/kafka/bin/../libs/jetty-util-9.4.18.v20190429.jar:/opt/bit
> nami/k
>
> afka/bin/../libs/jopt-simple-5.0.4.jar:/opt/bitnami/kafka/bin/../libs/
> jsr305
>
> -3.0.2.jar:/opt/bitnami/kafka/bin/../libs/kafka-clients-2.3.1.jar:/opt
> /bitna
>
> mi/kafka/bin/../libs/kafka-log4j-appender-2.3.1.jar:/opt/bitnami/kafka/bin/.
>
> ./libs/kafka-streams-2.3.1.jar:/opt/bitnami/kafka/bin/../libs/kafka-st
> reams-
>
> examples-2.3.1.jar:/opt/bitnami/kafka/bin/../libs/kafka-streams-scala_
> 2.11-2
>
> .3.1.jar:/opt/bitnami/kafka/bin/../libs/kafka-streams-test-utils-2.3.1
> .jar:/
>
> opt/bitnami/kafka/bin/../libs/kafka-tools-2.3.1.jar:/opt/bitnami/kafka/bin/.
>
> ./libs/kafka_2.11-2.3.1-sources.jar:/opt/bitnami/kafka/bin/../libs/kaf
> ka_2.1
>
> 1-2.3.1.jar:/opt/bitnami/kafka/bin/../libs/log4j-1.2.17.jar:/opt/bitna
> mi/kaf
>
> ka/bin/../libs/lz4-java-1.6.0.jar:/opt/bitnami/kafka/bin/../libs/maven
> -artif
>
> act-3.6.1.jar:/opt/bitnami/kafka/bin/../libs/metrics-core-2.2.0.jar:/o
> pt/bit
>
> nami/kafka/bin/../libs/osgi-resource-locator-1.0.1.jar:/opt/bitnami/ka
> fka/bi
>
> n/../libs/paranamer-2.8.jar:/opt/bitnami/kafka/bin/../libs/plexus-utils-3.2.
>
> 0.jar:/opt/bitnami/kafka/bin/../libs/reflections-0.9.11.jar:/opt/bitna
> mi/kaf
>
> ka/bin/../libs/rocksdbjni-5.18.3.jar:/opt/bitnami/kafka/bin/../libs/sc
> ala-li
>
> brary-2.11.12.jar:/opt/bitnami/kafka/bin/../libs/scala-logging_2.11-3.
> 9.0.ja
>
> r:/opt/bitnami/kafka/bin/../libs/scala-reflect-2.11.12.jar:/opt/bitnam
> i/kafk
>
> a/bin/../libs/slf4j-api-1.7.26.jar:/opt/bitnami/kafka/bin/../libs/slf4
> j-log4
>
> j12-1.7.26.jar:/opt/bitnami/kafka/bin/../libs/snappy-java-1.1.7.3.jar:
> /opt/b
>
> itnami/kafka/bin/../libs/spotbugs-annotations-3.1.9.jar:/opt/bitnami/k
> afka/b
>
> in/../libs/validation-api-2.0.1.Final.jar:/opt/bitnami/kafka/bin/../li
> bs/zkc
>
> lient-0.11.jar:/opt/bitnami/kafka/bin/../libs/zookeeper-3.4.14.jar:/op
> t/bitn ami/kafka/bin/../libs/zstd-jni-1.4.0-1.jar 
> (org.apache.zookeeper.ZooKeeper)
>
> [2019-11-16 19:15:53,333] INFO Client
>
> environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:
> /lib64 :/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
>
> [2019-11-16 19:15:53,333] INFO Client environment:java.io.tmpdir=/tmp
> (org.apache.zookeeper.ZooKeeper)
>
> [2019-11-16 19:15:53,334] INFO Client environment:java.compiler=<NA>
> (org.apache.zookeeper.ZooKeeper)
>
> [2019-11-16 19:15:53,334] INFO Client environment:os.name=Linux
> (org.apache.zookeeper.ZooKeeper)
>
> [2019-11-16 19:15:53,334] INFO Client environment:os.arch=amd64
> (org.apache.zookeeper.ZooKeeper)
>
> [2019-11-16 19:15:53,335] INFO Client
> environment:os.version=4.15.0-1060-azure 
> (org.apache.zookeeper.ZooKeeper)
>
> [2019-11-16 19:15:53,336] INFO Client environment:user.name=?
> (org.apache.zookeeper.ZooKeeper)
>
> [2019-11-16 19:15:53,336] INFO Client environment:user.home=?
> (org.apache.zookeeper.ZooKeeper)
>
> [2019-11-16 19:15:53,336] INFO Client environment:user.dir=/
> (org.apache.zookeeper.ZooKeeper)
>
> [2019-11-16 19:15:53,338] INFO Initiating client connection, 
> connectString=kafka-zookeeper sessionTimeout=6000
> watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@31304f
> 14
> (org.apache.zookeeper.ZooKeeper)
>
> [2019-11-16 19:15:53,528] INFO [ZooKeeperClient Kafka server] Waiting 
> until connected. (kafka.zookeeper.ZooKeeperClient)
>
> [2019-11-16 19:15:53,545] INFO Opening socket connection to server 
> kafka-zookeeper/10.0.215.214:2181. Will not attempt to authenticate 
> using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
>
> [2019-11-16 19:15:53,552] INFO Socket connection established to 
> kafka-zookeeper/10.0.215.214:2181, initiating session
> (org.apache.zookeeper.ClientCnxn)
>
> [2019-11-16 19:15:53,627] INFO Session establishment complete on 
> server kafka-zookeeper/10.0.215.214:2181, sessionid = 
> 0x10000810b780070, negotiated timeout = 6000 
> (org.apache.zookeeper.ClientCnxn)
>
> [2019-11-16 19:15:53,630] INFO [ZooKeeperClient Kafka server] Connected.
> (kafka.zookeeper.ZooKeeperClient)
>
> [2019-11-16 19:15:55,034] INFO Cluster ID = dvSQ1W2US72rcqGef9tm6w
> (kafka.server.KafkaServer)
>
> [2019-11-16 19:15:55,637] INFO KafkaConfig values:
>
>                 advertised.host.name = null
>
>                 advertised.listeners =
> PLAINTEXT://kafka-1.kafka-headless.bd-iot.svc.cluster.local:9092
>
>                 advertised.port = null
>
>                 alter.config.policy.class.name = null
>
>                 alter.log.dirs.replication.quota.window.num = 11
>
>                 alter.log.dirs.replication.quota.window.size.seconds = 
> 1
>
>                 authorizer.class.name =
>
>                 auto.create.topics.enable = true
>
>                 auto.leader.rebalance.enable = true
>
>                 background.threads = 10
>
>                 broker.id = -1
>
>                 broker.id.generation.enable = true
>
>                 broker.rack = null
>
>                 client.quota.callback.class = null
>
>                 compression.type = producer
>
>                 connection.failed.authentication.delay.ms = 100
>
>                 connections.max.idle.ms = 600000
>
>                 connections.max.reauth.ms = 0
>
>                 control.plane.listener.name = null
>
>                 controlled.shutdown.enable = true
>
>                 controlled.shutdown.max.retries = 3
>
>                 controlled.shutdown.retry.backoff.ms = 5000
>
>                 controller.socket.timeout.ms = 30000
>
>                 create.topic.policy.class.name = null
>
>                 default.replication.factor = 2
>
>                 delegation.token.expiry.check.interval.ms = 3600000
>
>                 delegation.token.expiry.time.ms = 86400000
>
>                 delegation.token.master.key = null
>
>                 delegation.token.max.lifetime.ms = 604800000
>
>                 delete.records.purgatory.purge.interval.requests = 1
>
>                 delete.topic.enable = true
>
>                 fetch.purgatory.purge.interval.requests = 1000
>
>                 group.initial.rebalance.delay.ms = 0
>
>                 group.max.session.timeout.ms = 1800000
>
>                 group.max.size = 2147483647
>
>                 group.min.session.timeout.ms = 6000
>
>                 host.name =
>
>                 inter.broker.listener.name = null
>
>                 inter.broker.protocol.version = 2.3-IV1
>
>                 kafka.metrics.polling.interval.secs = 10
>
>                 kafka.metrics.reporters = []
>
>                 leader.imbalance.check.interval.seconds = 300
>
>                 leader.imbalance.per.broker.percentage = 10
>
>                 listener.security.protocol.map = 
> PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SAS
> L_SSL
>
>                 listeners = PLAINTEXT://:9092
>
>                 log.cleaner.backoff.ms = 15000
>
>                 log.cleaner.dedupe.buffer.size = 134217728
>
>                 log.cleaner.delete.retention.ms = 86400000
>
>                 log.cleaner.enable = true
>
>                 log.cleaner.io.buffer.load.factor = 0.9
>
>                 log.cleaner.io.buffer.size = 524288
>
>                 log.cleaner.io.max.bytes.per.second =
> 1.7976931348623157E308
>
>                 log.cleaner.max.compaction.lag.ms = 
> 9223372036854775807
>
>                 log.cleaner.min.cleanable.ratio = 0.5
>
>                 log.cleaner.min.compaction.lag.ms = 0
>
>                 log.cleaner.threads = 1
>
>                 log.cleanup.policy = [delete]
>
>                 log.dir = /tmp/kafka-logs
>
>                 log.dirs = /bitnami/kafka/data
>
>                 log.flush.interval.messages = 10000
>
>                 log.flush.interval.ms = 1000
>
>                 log.flush.offset.checkpoint.interval.ms = 60000
>
>                 log.flush.scheduler.interval.ms = 9223372036854775807
>
>                 log.flush.start.offset.checkpoint.interval.ms = 60000
>
>                 log.index.interval.bytes = 4096
>
>                 log.index.size.max.bytes = 10485760
>
>                 log.message.downconversion.enable = true
>
>                 log.message.format.version = 2.3-IV1
>
>                 log.message.timestamp.difference.max.ms =
> 9223372036854775807
>
>                 log.message.timestamp.type = CreateTime
>
>                 log.preallocate = false
>
>                 log.retention.bytes = 1073741824
>
>                 log.retention.check.interval.ms = 300000
>
>                 log.retention.hours = 168
>
>                 log.retention.minutes = null
>
>                 log.retention.ms = null
>
>                 log.roll.hours = 168
>
>                 log.roll.jitter.hours = 0
>
>                 log.roll.jitter.ms = null
>
>                 log.roll.ms = null
>
>                 log.segment.bytes = 1073741824
>
>                 log.segment.delete.delay.ms = 60000
>
>                 max.connections = 2147483647
>
>                 max.connections.per.ip = 2147483647
>
>                 max.connections.per.ip.overrides =
>
>                 max.incremental.fetch.session.cache.slots = 1000
>
>                 message.max.bytes = 1000012
>
>                 metric.reporters = []
>
>                 metrics.num.samples = 2
>
>                 metrics.recording.level = INFO
>
>                 metrics.sample.window.ms = 30000
>
>                 min.insync.replicas = 1
>
>                 num.io.threads = 8
>
>                 num.network.threads = 3
>
>                 num.partitions = 1
>
>                 num.recovery.threads.per.data.dir = 1
>
>                 num.replica.alter.log.dirs.threads = null
>
>                 num.replica.fetchers = 1
>
>                 offset.metadata.max.bytes = 4096
>
>                 offsets.commit.required.acks = -1
>
>                 offsets.commit.timeout.ms = 5000
>
>                 offsets.load.buffer.size = 5242880
>
>                 offsets.retention.check.interval.ms = 600000
>
>                 offsets.retention.minutes = 10080
>
>                 offsets.topic.compression.codec = 0
>
>                 offsets.topic.num.partitions = 50
>
>                 offsets.topic.replication.factor = 2
>
>                 offsets.topic.segment.bytes = 104857600
>
>                 password.encoder.cipher.algorithm = 
> AES/CBC/PKCS5Padding
>
>                 password.encoder.iterations = 4096
>
>                 password.encoder.key.length = 128
>
>                 password.encoder.keyfactory.algorithm = null
>
>                 password.encoder.old.secret = null
>
>                 password.encoder.secret = null
>
>                 port = 9092
>
>                 principal.builder.class = null
>
>                 producer.purgatory.purge.interval.requests = 1000
>
>                 queued.max.request.bytes = -1
>
>                 queued.max.requests = 500
>
>                 quota.consumer.default = 9223372036854775807
>
>                 quota.producer.default = 9223372036854775807
>
>                 quota.window.num = 11
>
>                 quota.window.size.seconds = 1
>
>                 replica.fetch.backoff.ms = 1000
>
>                 replica.fetch.max.bytes = 1048576
>
>                 replica.fetch.min.bytes = 1
>
>                 replica.fetch.response.max.bytes = 10485760
>
>                 replica.fetch.wait.max.ms = 500
>
>                 replica.high.watermark.checkpoint.interval.ms = 5000
>
>                 replica.lag.time.max.ms = 10000
>
>                 replica.socket.receive.buffer.bytes = 65536
>
>                 replica.socket.timeout.ms = 30000
>
>                 replication.quota.window.num = 11
>
>                 replication.quota.window.size.seconds = 1
>
>                 request.timeout.ms = 30000
>
>                 reserved.broker.max.id = 1000
>
>                 sasl.client.callback.handler.class = null
>
>                 sasl.enabled.mechanisms = [GSSAPI]
>
>                 sasl.jaas.config = null
>
>                 sasl.kerberos.kinit.cmd = /usr/bin/kinit
>
>                 sasl.kerberos.min.time.before.relogin = 60000
>
>                 sasl.kerberos.principal.to.local.rules = [DEFAULT]
>
>                 sasl.kerberos.service.name = null
>
>                 sasl.kerberos.ticket.renew.jitter = 0.05
>
>                 sasl.kerberos.ticket.renew.window.factor = 0.8
>
>                 sasl.login.callback.handler.class = null
>
>                 sasl.login.class = null
>
>                 sasl.login.refresh.buffer.seconds = 300
>
>                 sasl.login.refresh.min.period.seconds = 60
>
>                 sasl.login.refresh.window.factor = 0.8
>
>                 sasl.login.refresh.window.jitter = 0.05
>
>                 sasl.mechanism.inter.broker.protocol = GSSAPI
>
>                 sasl.server.callback.handler.class = null
>
>                 security.inter.broker.protocol = PLAINTEXT
>
>                 socket.receive.buffer.bytes = 102400
>
>                 socket.request.max.bytes = 104857600
>
>                 socket.send.buffer.bytes = 102400
>
>                 ssl.cipher.suites = []
>
>                 ssl.client.auth = none
>
>                 ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
>
>                 ssl.endpoint.identification.algorithm = https
>
>                 ssl.key.password = null
>
>                 ssl.keymanager.algorithm = SunX509
>
>                 ssl.keystore.location = null
>
>                 ssl.keystore.password = null
>
>                 ssl.keystore.type = JKS
>
>                 ssl.principal.mapping.rules = [DEFAULT]
>
>                 ssl.protocol = TLS
>
>                 ssl.provider = null
>
>                 ssl.secure.random.implementation = null
>
>                 ssl.trustmanager.algorithm = PKIX
>
>                 ssl.truststore.location = null
>
>                 ssl.truststore.password = null
>
>                 ssl.truststore.type = JKS
>
>
> transaction.abort.timed.out.transaction.cleanup.interval.ms
> = 60000
>
>                 transaction.max.timeout.ms = 900000
>
>                 
> transaction.remove.expired.transaction.cleanup.interval.ms
> =
> 3600000
>
>                 transaction.state.log.load.buffer.size = 5242880
>
>                 transaction.state.log.min.isr = 2
>
>                 transaction.state.log.num.partitions = 50
>
>                 transaction.state.log.replication.factor = 2
>
>                 transaction.state.log.segment.bytes = 104857600
>
>                 transactional.id.expiration.ms = 604800000
>
>                 unclean.leader.election.enable = false
>
>                 zookeeper.connect = kafka-zookeeper
>
>                 zookeeper.connection.timeout.ms = 6000
>
>                 zookeeper.max.in.flight.requests = 10
>
>                 zookeeper.session.timeout.ms = 6000
>
>                 zookeeper.set.acl = false
>
>                 zookeeper.sync.time.ms = 2000
>
> (kafka.server.KafkaConfig)
>
> [2019-11-16 19:15:55,829] INFO KafkaConfig values:
>
>                 advertised.host.name = null
>
>                 advertised.listeners =
> PLAINTEXT://kafka-1.kafka-headless.bd-iot.svc.cluster.local:9092
>
>                 advertised.port = null
>
>                 alter.config.policy.class.name = null
>
>                 alter.log.dirs.replication.quota.window.num = 11
>
>                 alter.log.dirs.replication.quota.window.size.seconds = 
> 1
>
>                 authorizer.class.name =
>
>                 auto.create.topics.enable = true
>
>                 auto.leader.rebalance.enable = true
>
>                 background.threads = 10
>
>                 broker.id = -1
>
>                 broker.id.generation.enable = true
>
>                 broker.rack = null
>
>                 client.quota.callback.class = null
>
>                 compression.type = producer
>
>                 connection.failed.authentication.delay.ms = 100
>
>                 connections.max.idle.ms = 600000
>
>                 connections.max.reauth.ms = 0
>
>                 control.plane.listener.name = null
>
>                 controlled.shutdown.enable = true
>
>                 controlled.shutdown.max.retries = 3
>
>                 controlled.shutdown.retry.backoff.ms = 5000
>
>                 controller.socket.timeout.ms = 30000
>
>                 create.topic.policy.class.name = null
>
>                 default.replication.factor = 2
>
>                 delegation.token.expiry.check.interval.ms = 3600000
>
>                 delegation.token.expiry.time.ms = 86400000
>
>                 delegation.token.master.key = null
>
>                 delegation.token.max.lifetime.ms = 604800000
>
>                 delete.records.purgatory.purge.interval.requests = 1
>
>                 delete.topic.enable = true
>
>                 fetch.purgatory.purge.interval.requests = 1000
>
>                 group.initial.rebalance.delay.ms = 0
>
>                 group.max.session.timeout.ms = 1800000
>
>                 group.max.size = 2147483647
>
>                 group.min.session.timeout.ms = 6000
>
>                 host.name =
>
>                 inter.broker.listener.name = null
>
>                 inter.broker.protocol.version = 2.3-IV1
>
>                 kafka.metrics.polling.interval.secs = 10
>
>                 kafka.metrics.reporters = []
>
>                 leader.imbalance.check.interval.seconds = 300
>
>                 leader.imbalance.per.broker.percentage = 10
>
>                 listener.security.protocol.map = 
> PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SAS
> L_SSL
>
>                 listeners = PLAINTEXT://:9092
>
>                 log.cleaner.backoff.ms = 15000
>
>                 log.cleaner.dedupe.buffer.size = 134217728
>
>                 log.cleaner.delete.retention.ms = 86400000
>
>                 log.cleaner.enable = true
>
>                 log.cleaner.io.buffer.load.factor = 0.9
>
>                 log.cleaner.io.buffer.size = 524288
>
>                 log.cleaner.io.max.bytes.per.second =
> 1.7976931348623157E308
>
>                 log.cleaner.max.compaction.lag.ms = 
> 9223372036854775807
>
>                 log.cleaner.min.cleanable.ratio = 0.5
>
>                 log.cleaner.min.compaction.lag.ms = 0
>
>                 log.cleaner.threads = 1
>
>                 log.cleanup.policy = [delete]
>
>                 log.dir = /tmp/kafka-logs
>
>                 log.dirs = /bitnami/kafka/data
>
>                 log.flush.interval.messages = 10000
>
>                 log.flush.interval.ms = 1000
>
>                 log.flush.offset.checkpoint.interval.ms = 60000
>
>                 log.flush.scheduler.interval.ms = 9223372036854775807
>
>                 log.flush.start.offset.checkpoint.interval.ms = 60000
>
>                 log.index.interval.bytes = 4096
>
>                 log.index.size.max.bytes = 10485760
>
>                 log.message.downconversion.enable = true
>
>                 log.message.format.version = 2.3-IV1
>
>                 log.message.timestamp.difference.max.ms =
> 9223372036854775807
>
>                 log.message.timestamp.type = CreateTime
>
>                 log.preallocate = false
>
>                 log.retention.bytes = 1073741824
>
>                 log.retention.check.interval.ms = 300000
>
>                 log.retention.hours = 168
>
>                 log.retention.minutes = null
>
>                 log.retention.ms = null
>
>                 log.roll.hours = 168
>
>                 log.roll.jitter.hours = 0
>
>                 log.roll.jitter.ms = null
>
>                 log.roll.ms = null
>
>                 log.segment.bytes = 1073741824
>
>                 log.segment.delete.delay.ms = 60000
>
>                 max.connections = 2147483647
>
>                 max.connections.per.ip = 2147483647
>
>                 max.connections.per.ip.overrides =
>
>                 max.incremental.fetch.session.cache.slots = 1000
>
>                 message.max.bytes = 1000012
>
>                 metric.reporters = []
>
>                 metrics.num.samples = 2
>
>                 metrics.recording.level = INFO
>
>                 metrics.sample.window.ms = 30000
>
>                 min.insync.replicas = 1
>
>                 num.io.threads = 8
>
>                 num.network.threads = 3
>
>                 num.partitions = 1
>
>                 num.recovery.threads.per.data.dir = 1
>
>                 num.replica.alter.log.dirs.threads = null
>
>                 num.replica.fetchers = 1
>
>                 offset.metadata.max.bytes = 4096
>
>                 offsets.commit.required.acks = -1
>
>                 offsets.commit.timeout.ms = 5000
>
>                 offsets.load.buffer.size = 5242880
>
>                 offsets.retention.check.interval.ms = 600000
>
>                 offsets.retention.minutes = 10080
>
>                 offsets.topic.compression.codec = 0
>
>                 offsets.topic.num.partitions = 50
>
>                 offsets.topic.replication.factor = 2
>
>                 offsets.topic.segment.bytes = 104857600
>
>                 password.encoder.cipher.algorithm = 
> AES/CBC/PKCS5Padding
>
>                 password.encoder.iterations = 4096
>
>                 password.encoder.key.length = 128
>
>                 password.encoder.keyfactory.algorithm = null
>
>                 password.encoder.old.secret = null
>
>                 password.encoder.secret = null
>
>                 port = 9092
>
>                 principal.builder.class = null
>
>                 producer.purgatory.purge.interval.requests = 1000
>
>                 queued.max.request.bytes = -1
>
>                 queued.max.requests = 500
>
>                 quota.consumer.default = 9223372036854775807
>
>                 quota.producer.default = 9223372036854775807
>
>                 quota.window.num = 11
>
>                 quota.window.size.seconds = 1
>
>                 replica.fetch.backoff.ms = 1000
>
>                 replica.fetch.max.bytes = 1048576
>
>                 replica.fetch.min.bytes = 1
>
>                 replica.fetch.response.max.bytes = 10485760
>
>                 replica.fetch.wait.max.ms = 500
>
>                 replica.high.watermark.checkpoint.interval.ms = 5000
>
>                 replica.lag.time.max.ms = 10000
>
>                 replica.socket.receive.buffer.bytes = 65536
>
>                 replica.socket.timeout.ms = 30000
>
>                 replication.quota.window.num = 11
>
>                 replication.quota.window.size.seconds = 1
>
>                 request.timeout.ms = 30000
>
>                 reserved.broker.max.id = 1000
>
>                 sasl.client.callback.handler.class = null
>
>                 sasl.enabled.mechanisms = [GSSAPI]
>
>                 sasl.jaas.config = null
>
>                 sasl.kerberos.kinit.cmd = /usr/bin/kinit
>
>                 sasl.kerberos.min.time.before.relogin = 60000
>
>                 sasl.kerberos.principal.to.local.rules = [DEFAULT]
>
>                 sasl.kerberos.service.name = null
>
>                 sasl.kerberos.ticket.renew.jitter = 0.05
>
>                 sasl.kerberos.ticket.renew.window.factor = 0.8
>
>                 sasl.login.callback.handler.class = null
>
>                 sasl.login.class = null
>
>                 sasl.login.refresh.buffer.seconds = 300
>
>                 sasl.login.refresh.min.period.seconds = 60
>
>                 sasl.login.refresh.window.factor = 0.8
>
>                 sasl.login.refresh.window.jitter = 0.05
>
>                 sasl.mechanism.inter.broker.protocol = GSSAPI
>
>                 sasl.server.callback.handler.class = null
>
>                 security.inter.broker.protocol = PLAINTEXT
>
>                 socket.receive.buffer.bytes = 102400
>
>                 socket.request.max.bytes = 104857600
>
>                 socket.send.buffer.bytes = 102400
>
>                 ssl.cipher.suites = []
>
>                 ssl.client.auth = none
>
>                 ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
>
>                 ssl.endpoint.identification.algorithm = https
>
>                 ssl.key.password = null
>
>                 ssl.keymanager.algorithm = SunX509
>
>                 ssl.keystore.location = null
>
>                 ssl.keystore.password = null
>
>                 ssl.keystore.type = JKS
>
>                 ssl.principal.mapping.rules = [DEFAULT]
>
>                 ssl.protocol = TLS
>
>                 ssl.provider = null
>
>                 ssl.secure.random.implementation = null
>
>                 ssl.trustmanager.algorithm = PKIX
>
>                 ssl.truststore.location = null
>
>                 ssl.truststore.password = null
>
>                 ssl.truststore.type = JKS
>
>
> transaction.abort.timed.out.transaction.cleanup.interval.ms
> = 60000
>
>                 transaction.max.timeout.ms = 900000
>
>                 
> transaction.remove.expired.transaction.cleanup.interval.ms
> =
> 3600000
>
>                 transaction.state.log.load.buffer.size = 5242880
>
>                 transaction.state.log.min.isr = 2
>
>                 transaction.state.log.num.partitions = 50
>
>                 transaction.state.log.replication.factor = 2
>
>                 transaction.state.log.segment.bytes = 104857600
>
>                 transactional.id.expiration.ms = 604800000
>
>                 unclean.leader.election.enable = false
>
>                 zookeeper.connect = kafka-zookeeper
>
>                 zookeeper.connection.timeout.ms = 6000
>
>                 zookeeper.max.in.flight.requests = 10
>
>                 zookeeper.session.timeout.ms = 6000
>
>                 zookeeper.set.acl = false
>
>                 zookeeper.sync.time.ms = 2000
>
> (kafka.server.KafkaConfig)
>
> [2019-11-16 19:15:56,039] INFO [ThrottledChannelReaper-Fetch]: 
> Starting
> (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
>
> [2019-11-16 19:15:56,044] INFO [ThrottledChannelReaper-Produce]: 
> Starting
> (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
>
> [2019-11-16 19:15:56,046] INFO [ThrottledChannelReaper-Request]: 
> Starting
> (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
>
> [2019-11-16 19:15:56,335] INFO Loading logs. (kafka.log.LogManager)
>
> [2019-11-16 19:15:56,638] INFO [Log partition=__consumer_offsets-4, 
> dir=/bitnami/kafka/data] Recovering unflushed segment 0 
> (kafka.log.Log)
>
> [2019-11-16 19:15:56,727] INFO [Log partition=__consumer_offsets-4, 
> dir=/bitnami/kafka/data] Loading producer state till offset 0 with 
> message format version 2 (kafka.log.Log)
>
> [2019-11-16 19:15:56,931] INFO [Log partition=__consumer_offsets-4, 
> dir=/bitnami/kafka/data] Loading producer state till offset 0 with 
> message format version 2 (kafka.log.Log)
>
> [2019-11-16 19:15:56,933] INFO [Log partition=__consumer_offsets-4, 
> dir=/bitnami/kafka/data] Completed load of log with 1 segments, log 
> start offset 0 and log end offset 0 in 399 ms (kafka.log.Log)
>
> [2019-11-16 19:15:57,029] INFO [Log partition=__consumer_offsets-22, 
> dir=/bitnami/kafka/data] Recovering unflushed segment 0 
> (kafka.log.Log)
>
> [2019-11-16 19:15:57,029] INFO [Log partition=__consumer_offsets-22, 
> dir=/bitnami/kafka/data] Loading producer state till offset 0 with 
> message format version 2 (kafka.log.Log)
>
> [2019-11-16 19:15:57,031] INFO [Log partition=__consumer_offsets-22, 
> dir=/bitnami/kafka/data] Loading producer state till offset 0 with 
> message format version 2 (kafka.log.Log)
>
> [2019-11-16 19:15:57,032] INFO [Log partition=__consumer_offsets-22, 
> dir=/bitnami/kafka/data] Completed load of log with 1 segments, log 
> start offset 0 and log end offset 0 in 6 ms (kafka.log.Log)
>
> [2019-11-16 19:15:57,147] INFO [Log partition=__consumer_offsets-32, 
> dir=/bitnami/kafka/data] Recovering unflushed segment 0 
> (kafka.log.Log)
>
> [2019-11-16 19:15:57,148] INFO [Log partition=__consumer_offsets-32, 
> dir=/bitnami/kafka/data] Loading producer state till offset 0 with 
> message format version 2 (kafka.log.Log)
>
> [2019-11-16 19:15:57,150] INFO [Log partition=__consumer_offsets-32, 
> dir=/bitnami/kafka/data] Loading producer state till offset 0 with 
> message format version 2 (kafka.log.Log)
>
> [2019-11-16 19:15:57,226] INFO [Log partition=__consumer_offsets-32, 
> dir=/bitnami/kafka/data] Completed load of log with 1 segments, log 
> start offset 0 and log end offset 0 in 189 ms (kafka.log.Log)
>
> [2019-11-16 19:15:57,330] INFO [Log partition=__consumer_offsets-39, 
> dir=/bitnami/kafka/data] Recovering unflushed segment 0 
> (kafka.log.Log)
>
> [2019-11-16 19:15:57,330] INFO [Log partition=__consumer_offsets-39, 
> dir=/bitnami/kafka/data] Loading producer state till offset 0 with 
> message format version 2 (kafka.log.Log)
>
> [2019-11-16 19:15:57,333] INFO [Log partition=__consumer_offsets-39, 
> dir=/bitnami/kafka/data] Loading producer state till offset 0 with 
> message format version 2 (kafka.log.Log)
>
> [2019-11-16 19:15:57,334] INFO [Log partition=__consumer_offsets-39, 
> dir=/bitnami/kafka/data] Completed load of log with 1 segments, log 
> start offset 0 and log end offset 0 in 6 ms (kafka.log.Log)
>
> [2019-11-16 19:15:57,429] INFO [Log partition=__consumer_offsets-26, 
> dir=/bitnami/kafka/data] Recovering unflushed segment 0 
> (kafka.log.Log)
>
> [2019-11-16 19:15:57,429] INFO [Log partition=__consumer_offsets-26, 
> dir=/bitnami/kafka/data] Loading producer state till offset 0 with 
> message format version 2 (kafka.log.Log)
>
> [2019-11-16 19:15:57,431] INFO [Log partition=__consumer_offsets-26, 
> dir=/bitnami/kafka/data] Loading producer state till offset 0 with 
> message format version 2 (kafka.log.Log)
>
> [2019-11-16 19:15:57,432] INFO [Log partition=__consumer_offsets-26, 
> dir=/bitnami/kafka/data] Completed load of log with 1 segments, log 
> start offset 0 and log end offset 0 in 5 ms (kafka.log.Log)
>
> [2019-11-16 19:15:57,527] INFO [Log partition=__consumer_offsets-44, 
> dir=/bitnami/kafka/data] Recovering unflushed segment 0 
> (kafka.log.Log)
>
> [2019-11-16 19:15:57,529] INFO [Log partition=__consumer_offsets-44, 
> dir=/bitnami/kafka/data] Loading producer state till offset 0 with 
> message format version 2 (kafka.log.Log)
>
> [2019-11-16 19:15:57,533] INFO [Log partition=__consumer_offsets-44, 
> dir=/bitnami/kafka/data] Loading producer state till offset 0 with 
> message format version 2 (kafka.log.Log)
>
> [2019-11-16 19:15:57,534] INFO [Log partition=__consumer_offsets-44, 
> dir=/bitnami/kafka/data] Completed load of log with 1 segments, log 
> start offset 0 and log end offset 0 in 8 ms (kafka.log.Log)
>
> [2019-11-16 19:15:57,634] INFO [Log partition=__consumer_offsets-25, 
> dir=/bitnami/kafka/data] Recovering unflushed segment 0 
> (kafka.log.Log)
>
> [2019-11-16 19:15:57,635] INFO [Log partition=__consumer_offsets-25, 
> dir=/bitnami/kafka/data] Loading producer state till offset 0 with 
> message format version 2 (kafka.log.Log)
>
> [2019-11-16 19:15:57,637] INFO [Log partition=__consumer_offsets-25, 
> dir=/bitnami/kafka/data] Loading producer state till offset 0 with 
> message format version 2 (kafka.log.Log)
>
> [2019-11-16 19:15:57,638] INFO [Log partition=__consumer_offsets-25, 
> dir=/bitnami/kafka/data] Completed load of log with 1 segments, log 
> start offset 0 and log end offset 0 in 7 ms (kafka.log.Log)
>
> [2019-11-16 19:15:57,730] INFO [Log partition=__consumer_offsets-8, 
> dir=/bitnami/kafka/data] Recovering unflushed segment 0 
> (kafka.log.Log)
>
> [2019-11-16 19:15:57,730] INFO [Log partition=__consumer_offsets-8, 
> dir=/bitnami/kafka/data] Loading producer state till offset 0 with 
> message format version 2 (kafka.log.Log)
>
> [2019-11-16 19:15:57,733] INFO [Log partition=__consumer_offsets-8, 
> dir=/bitnami/kafka/data] Loading producer state till offset 0 with 
> message format version 2 (kafka.log.Log)
>
> [2019-11-16 19:15:57,733] INFO [Log partition=__consumer_offsets-8, 
> dir=/bitnami/kafka/data] Completed load of log with 1 segments, log 
> start offset 0 and log end offset 0 in 5 ms (kafka.log.Log)
>
> [2019-11-16 19:15:57,741] INFO [Log partition=batch.alarm-0, 
> dir=/bitnami/kafka/data] Recovering unflushed segment 0 
> (kafka.log.Log)
>
> [2019-11-16 19:15:57,826] INFO [Log partition=batch.alarm-0, 
> dir=/bitnami/kafka/data] Loading producer state till offset 0 with 
> message format version 2 (kafka.log.Log)
>
> [2019-11-16 19:15:57,830] INFO [Log partition=batch.alarm-0, 
> dir=/bitnami/kafka/data] Loading producer state till offset 0 with 
> message format version 2 (kafka.log.Log)
>
> [2019-11-16 19:15:57,833] INFO [Log partition=batch.alarm-0, 
> dir=/bitnami/kafka/data] Completed load of log with 1 segments, log 
> start offset 0 and log end offset 0 in 94 ms (kafka.log.Log)
>
> [2019-11-16 19:15:57,936] INFO [Log partition=__consumer_offsets-38, 
> dir=/bitnami/kafka/data] Recovering unflushed segment 33982499
> (kafka.log.Log)
>
> [2019-11-16 19:15:57,937] INFO [Log partition=__consumer_offsets-38, 
> dir=/bitnami/kafka/data] Loading producer state till offset 33982499 
> with message format version 2 (kafka.log.Log)
>
> [2019-11-16 19:15:57,941] INFO [ProducerStateManager 
> partition=__consumer_offsets-38] Loading producer state from snapshot 
> file '/bitnami/kafka/data/__consumer_offsets-38/00000000000033982499.snapshot'
> (kafka.log.ProducerStateManager)
>
> [2019-11-16 19:16:10,208] INFO Terminating process due to signal SIGTERM
> (org.apache.kafka.common.utils.LoggingSignalHandler)
>
> [2019-11-16 19:16:10,217] INFO [KafkaServer id=1012] shutting down
> (kafka.server.KafkaServer)
>
> [2019-11-16 19:16:10,226] ERROR [KafkaServer id=1012] Fatal error
> during KafkaServer shutdown. (kafka.server.KafkaServer)
>
> java.lang.IllegalStateException: Kafka server is still starting up,
> cannot shut down!
>
>                 at kafka.server.KafkaServer.shutdown(KafkaServer.scala:584)
>
>                 at kafka.server.KafkaServerStartable.shutdown(KafkaServerStartable.scala:48)
>
>                 at kafka.Kafka$$anon$1.run(Kafka.scala:81)
>
> [2019-11-16 19:16:10,233] ERROR Halting Kafka.
> (kafka.server.KafkaServerStartable)
>
>
>
> Kind Regards
>
> Oliver
>
>



Re: Kafka Broker do not recover after crash

Posted by "M. Manna" <ma...@gmail.com>.
Hi,

On Sat, 16 Nov 2019 at 19:21, Oliver Eckle <ie...@gmx.de> wrote:

> Hello,
>
>
>
> having a Kafka Cluster running in Kubernetes with 3 Brokers and all
> replications (topic, offsets) set to 2.


This sounds strange. You have 3 brokers and replication set to 2. Is this
intentional?
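
For reference, these are the relevant settings from the KafkaConfig dump
further down in your log:

                default.replication.factor = 2
                min.insync.replicas = 1
                offsets.topic.replication.factor = 2
                transaction.state.log.min.isr = 2
                transaction.state.log.replication.factor = 2

A replication factor of 2 on 3 brokers is legal, but a partition only
survives the loss of one of the two brokers that host it, and with
min.insync.replicas = 1 an acks=all write can be acknowledged while only a
single replica has it. You can check the actual assignments with, e.g.:

    kafka-topics.sh --zookeeper kafka-zookeeper:2181 --describe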


>
> For whatever reason one of the brokers crashed and restarted. Since then
> it circles in some kind of restart/crash loop.
>
> Any idea how to recover?
>
>
>
> Whole Logfile is like that:
>
> [snip: full startup log, identical to the log in the original post above]
>
> Kind Regards
>
> Oliver
>
>
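
Regarding the crash loop itself: the tail of the log shows the broker
receiving SIGTERM at 19:16:10, less than 30 seconds after the container
started, while it is still replaying the unflushed segment of
__consumer_offsets-38 (recovery up to offset 33982499). The broker then
halts with "Kafka server is still starting up, cannot shut down!", the
pod restarts, and the same recovery starts from scratch, so it can never
finish. On Kubernetes that pattern is typically the liveness probe (or a
short termination grace period) killing the pod before log recovery
completes; the broker is healthy, just slow to come up after an unclean
shutdown.

If that is what is happening here, give the broker more headroom before
the probe fires. A rough sketch of the pod spec fields involved -- the
values are illustrative, not tuned, and if you deployed via the Bitnami
chart you would set the equivalent chart values rather than edit the pod
directly:

    spec:
      terminationGracePeriodSeconds: 300    # let a controlled shutdown finish
      containers:
        - name: kafka
          livenessProbe:
            tcpSocket:
              port: 9092                    # matches listeners = PLAINTEXT://:9092
            initialDelaySeconds: 300        # let log recovery complete first
            periodSeconds: 10
            failureThreshold: 6

Recovery itself can also be sped up: the posted config has
num.recovery.threads.per.data.dir = 1, so segment recovery runs
single-threaded; raising it towards the number of cores shortens startup
after a crash.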