Posted to users@kafka.apache.org by Andrzej Trzeciak <An...@exelaonline.com> on 2021/12/03 09:02:13 UTC

SASL_PLAIN configuration problems

Hi, does anyone know this problem, or rather the solution for it, as described below?

For our medium-sized application, we already have Confluent Kafka 6.2.1 deployed.
So far it has been running in a secure environment, so it has had no security enabled. In order to be more flexible in that regard, we want to add security features, starting with the simplest thing - user/password authentication. It is also a prerequisite that we don't introduce any additional infrastructure, so we chose SASL_PLAIN. We configured it to the best of our understanding of the online documentation ( [Configuring PLAIN | Confluent Documentation] ) without the use of a JAAS file. I will paste the configs at the bottom; first the problem:
when the client tries to connect, we get the following error:
NetworkClient - [Producer clientId=signings] Connection to node -1 (localhost/127.0.0.1:9093) failed authentication due to: Unexpected handshake request with client mechanism PLAIN, enabled mechanisms are []

on the client side we are using kafka-clients-2.7.1
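
For context, a minimal sketch of how our producer is wired up (the credentials here are placeholders, not our real ones):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.common.serialization.StringSerializer;

public class SaslProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // bootstrap against the broker's SASL_PLAINTEXT listener
        props.put("bootstrap.servers", "localhost:9093");
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.mechanism", "PLAIN");
        // inline JAAS config, so no external JAAS file is needed
        props.put("sasl.jaas.config",
            "org.apache.kafka.common.security.plain.PlainLoginModule required "
                + "username=\"admin\" password=\"admin-secret\";");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // producer.send(...) as usual
        }
    }
}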

server config (content of the server.properties file, omitting the lines that are commented out by default):
broker.id=0
sasl.enabled.mechanisms=PLAIN
sasl.mechanism.inter.broker.protocol=PLAIN
#security.inter.broker.protocol=SASL_PLAINTEXT
listeners=PLAINTEXT://localhost:9092,SASL_PLAINTEXT://localhost:9093
#listeners=SASL_PLAINTEXT
advertised.listeners=PLAINTEXT://localhost:9092,SASL_PLAINTEXT://localhost:9093
#advertised.listeners=SASL_PLAINTEXT
inter.broker.listener.name=SASL_PLAINTEXT
#inter.broker.listener.name=SASL_PLAINTEXT
listener.security.protocol.map=SASL_PLAINTEXT:PLAINTEXT,SASL_PLAINTEXT://localhost:9093:PLAINTEXT,PLAINTEXT:PLAINTEXT
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required
username="admin"
password="admin-secret"
user_admin="admin-secret"
user_kafkabroker1="kafkabroker1-secret";

num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
############################# Log Basics #############################
log.dirs=.../.../.../data/Kafka/data
num.partitions=1
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.minutes=10
log.retention.bytes=104857600
log.segment.bytes=104857600
log.retention.check.interval.ms=300000
############################# Zookeeper #############################
zookeeper.connect=localhost:2181
zookeeper.connection.timeout.ms=18000
group.initial.rebalance.delay.ms=0
offsets.topic.num.partitions=3
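
As an aside, the Confluent documentation also shows the broker-side JAAS config scoped to a specific listener via a prefixed property; a sketch of that variant, assuming our SASL_PLAINTEXT listener name and the same placeholder credentials as above:

listener.name.sasl_plaintext.plain.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="admin" \
  password="admin-secret" \
  user_admin="admin-secret" \
  user_kafkabroker1="kafkabroker1-secret";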

server config shown in the console:
advertised.host.name = null
advertised.listeners = PLAINTEXT://localhost:9092,SASL_PLAINTEXT://localhost:9093
advertised.port = null
alter.config.policy.class.name = null
alter.log.dirs.replication.quota.window.num = 11
alter.log.dirs.replication.quota.window.size.seconds = 1
authorizer.class.name =
auto.create.topics.enable = true
auto.leader.rebalance.enable = true
background.threads = 10
broker.heartbeat.interval.ms = 2000
broker.id = 0
broker.id.generation.enable = true
broker.rack = null
broker.session.timeout.ms = 9000
client.quota.callback.class = null
compression.type = producer
connection.failed.authentication.delay.ms = 100
connections.max.idle.ms = 600000
connections.max.reauth.ms = 0
control.plane.listener.name = null
controlled.shutdown.enable = true
controlled.shutdown.max.retries = 3
controlled.shutdown.retry.backoff.ms = 5000
controller.listener.names = null
controller.quorum.append.linger.ms = 25
controller.quorum.election.backoff.max.ms = 1000
controller.quorum.election.timeout.ms = 1000
controller.quorum.fetch.timeout.ms = 2000
controller.quorum.request.timeout.ms = 2000
controller.quorum.retry.backoff.ms = 20
controller.quorum.voters =
controller.quota.window.num = 11
controller.quota.window.size.seconds = 1
controller.socket.timeout.ms = 30000
create.topic.policy.class.name = null
default.replication.factor = 1
delegation.token.expiry.check.interval.ms = 3600000
delegation.token.expiry.time.ms = 86400000
delegation.token.master.key = null
delegation.token.max.lifetime.ms = 604800000
delegation.token.secret.key = null
delete.records.purgatory.purge.interval.requests = 1
delete.topic.enable = true
fetch.max.bytes = 57671680
fetch.purgatory.purge.interval.requests = 1000
group.initial.rebalance.delay.ms = 0
group.max.session.timeout.ms = 1800000
group.max.size = 2147483647
group.min.session.timeout.ms = 6000
host.name =
initial.broker.registration.timeout.ms = 60000
inter.broker.listener.name = SASL_PLAINTEXT
inter.broker.protocol.version = 2.8-IV1
kafka.metrics.polling.interval.secs = 10
kafka.metrics.reporters =
leader.imbalance.check.interval.seconds = 300
leader.imbalance.per.broker.percentage = 10
listener.security.protocol.map = SASL_PLAINTEXT:PLAINTEXT,SASL_PLAINTEXT://localhost:9093:PLAINTEXT,PLAINTEXT:PLAINTEXT
listeners = PLAINTEXT://localhost:9092,SASL_PLAINTEXT://localhost:9093
log.cleaner.backoff.ms = 15000
log.cleaner.dedupe.buffer.size = 134217728
log.cleaner.delete.retention.ms = 86400000
log.cleaner.enable = true
log.cleaner.io.buffer.load.factor = 0.9
log.cleaner.io.buffer.size = 524288
log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
log.cleaner.max.compaction.lag.ms = 9223372036854775807
log.cleaner.min.cleanable.ratio = 0.5
log.cleaner.min.compaction.lag.ms = 0
log.cleaner.threads = 1
log.cleanup.policy = [delete]
log.dir = /tmp/kafka-logs
log.dirs = .../.../.../data/Kafka/data
log.flush.interval.messages = 9223372036854775807
log.flush.interval.ms = null
log.flush.offset.checkpoint.interval.ms = 60000
log.flush.scheduler.interval.ms = 9223372036854775807
log.flush.start.offset.checkpoint.interval.ms = 60000
log.index.interval.bytes = 4096
log.index.size.max.bytes = 10485760
log.message.downconversion.enable = true
log.message.format.version = 2.8-IV1
log.message.timestamp.difference.max.ms = 9223372036854775807
log.message.timestamp.type = CreateTime
log.preallocate = false
log.retention.bytes = 104857600
log.retention.check.interval.ms = 300000
log.retention.hours = 168
log.retention.minutes = 10
log.retention.ms = null
log.roll.hours = 168
log.roll.jitter.hours = 0
log.roll.jitter.ms = null
log.roll.ms = null
log.segment.bytes = 104857600
log.segment.delete.delay.ms = 60000
max.connection.creation.rate = 2147483647
max.connections = 2147483647
max.connections.per.ip = 2147483647
max.connections.per.ip.overrides =
max.incremental.fetch.session.cache.slots = 1000
message.max.bytes = 1048588
metadata.log.dir = null
metric.reporters =
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
min.insync.replicas = 1
node.id = -1
num.io.threads = 8
num.network.threads = 3
num.partitions = 1
num.recovery.threads.per.data.dir = 1
num.replica.alter.log.dirs.threads = null
num.replica.fetchers = 1
offset.metadata.max.bytes = 4096
offsets.commit.required.acks = -1
offsets.commit.timeout.ms = 5000
offsets.load.buffer.size = 5242880
offsets.retention.check.interval.ms = 600000
offsets.retention.minutes = 10080
offsets.topic.compression.codec = 0
offsets.topic.num.partitions = 3
offsets.topic.replication.factor = 1
offsets.topic.segment.bytes = 104857600
password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
password.encoder.iterations = 4096
password.encoder.key.length = 128
password.encoder.keyfactory.algorithm = null
password.encoder.old.secret = null
password.encoder.secret = null
port = 9092
principal.builder.class = null
process.roles =
producer.purgatory.purge.interval.requests = 1000
queued.max.request.bytes = -1
queued.max.requests = 500
quota.consumer.default = 9223372036854775807
quota.producer.default = 9223372036854775807
quota.window.num = 11
quota.window.size.seconds = 1
replica.fetch.backoff.ms = 1000
replica.fetch.max.bytes = 1048576
replica.fetch.min.bytes = 1
replica.fetch.response.max.bytes = 10485760
replica.fetch.wait.max.ms = 500
replica.high.watermark.checkpoint.interval.ms = 5000
replica.lag.time.max.ms = 30000
replica.selector.class = null
replica.socket.receive.buffer.bytes = 65536
replica.socket.timeout.ms = 30000
replication.quota.window.num = 11
replication.quota.window.size.seconds = 1
request.timeout.ms = 30000
reserved.broker.max.id = 1000
sasl.client.callback.handler.class = null
sasl.enabled.mechanisms = [PLAIN]
sasl.jaas.config = [hidden]
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.principal.to.local.rules = [DEFAULT]
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism.controller.protocol = GSSAPI
sasl.mechanism.inter.broker.protocol = PLAIN
sasl.server.callback.handler.class = null
security.inter.broker.protocol = PLAINTEXT
security.providers = null
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
socket.receive.buffer.bytes = 102400
socket.request.max.bytes = 104857600
socket.send.buffer.bytes = 102400
ssl.cipher.suites =
ssl.client.auth = none
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.principal.mapping.rules = DEFAULT
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000
transaction.max.timeout.ms = 900000
transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
transaction.state.log.load.buffer.size = 5242880
transaction.state.log.min.isr = 1
transaction.state.log.num.partitions = 50
transaction.state.log.replication.factor = 1
transaction.state.log.segment.bytes = 104857600
transactional.id.expiration.ms = 604800000
unclean.leader.election.enable = false
zookeeper.clientCnxnSocket = null
zookeeper.connect = localhost:2181
zookeeper.connection.timeout.ms = 18000
zookeeper.max.in.flight.requests = 10
zookeeper.session.timeout.ms = 18000
zookeeper.set.acl = false
zookeeper.ssl.cipher.suites = null
zookeeper.ssl.client.enable = false
zookeeper.ssl.crl.enable = false
zookeeper.ssl.enabled.protocols = null
zookeeper.ssl.endpoint.identification.algorithm = HTTPS
zookeeper.ssl.keystore.location = null
zookeeper.ssl.keystore.password = null
zookeeper.ssl.keystore.type = null
zookeeper.ssl.ocsp.enable = false
zookeeper.ssl.protocol = TLSv1.2
zookeeper.ssl.truststore.location = null
zookeeper.ssl.truststore.password = null
zookeeper.ssl.truststore.type = null
zookeeper.sync.time.ms = 2000

client config shown in the console:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [localhost:9093]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = notifications
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = false
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = wvcs:7d65616877c7a7a6bb8c20ed06d293d9
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes =
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters =
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = [hidden]
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = PLAIN
security.protocol = SASL_PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 127000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class agb.serializer.NotificationDeserializer



kind regards,
Andrzej Trzeciak



Re: SASL_PLAIN configuration problems

Posted by Luke Chen <sh...@gmail.com>.
Hello Andrzej,

The error:
NetworkClient - [Producer clientId=signings] Connection to node -1 (localhost/127.0.0.1:9093) failed authentication due to: Unexpected handshake request with client mechanism PLAIN, enabled mechanisms are []

It is complaining that your client is configured to request the "PLAIN"
mechanism, but the server does not support it; the server's list of
supported mechanisms is empty. Checking your server configuration, I found
that this setting:

listener.security.protocol.map=SASL_PLAINTEXT:PLAINTEXT,SASL_PLAINTEXT://localhost:9093:PLAINTEXT,PLAINTEXT:PLAINTEXT

is not right. You mapped the listener name SASL_PLAINTEXT to the
PLAINTEXT security protocol, which is why your server isn't enabling any
mechanism.

I think you can just remove the listener.security.protocol.map setting
and use the default one (which maps SASL_PLAINTEXT:SASL_PLAINTEXT and
PLAINTEXT:PLAINTEXT). That should fix your problem.
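
For reference, a minimal sketch of the relevant server.properties lines with
that fix applied (assuming your existing two listeners; instead of dropping
the map you can also spell the default mapping out explicitly):

listeners=PLAINTEXT://localhost:9092,SASL_PLAINTEXT://localhost:9093
advertised.listeners=PLAINTEXT://localhost:9092,SASL_PLAINTEXT://localhost:9093
inter.broker.listener.name=SASL_PLAINTEXT
sasl.enabled.mechanisms=PLAIN
sasl.mechanism.inter.broker.protocol=PLAIN
# equivalent to the default map for these listener names:
listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SASL_PLAINTEXT:SASL_PLAINTEXT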

Good luck!

Thank you.
Luke

On Fri, Dec 3, 2021 at 5:03 PM Andrzej Trzeciak <
Andrzej.Trzeciak@exelaonline.com> wrote:

> [quoted original message trimmed]