Posted to users@kafka.apache.org by Srikrishna Alla <Sr...@aexp.com.INVALID> on 2016/02/18 23:49:26 UTC
Java Client connection errors with Kafka 0.9.0.0 when SSL is enabled
Hi,
We are getting the error below when trying to use the new Java producer client. Please let us know the reason for this error:
Error message:
[2016-02-18 15:41:06,182] DEBUG Accepted connection from /10.**.***.** on /10.**.***.**:9093. sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] (kafka.network.Acceptor)
[2016-02-18 15:41:06,183] DEBUG Processor 1 listening to new connection from /10.**.**.**:46419 (kafka.network.Processor)
[2016-02-18 15:41:06,283] DEBUG SSLEngine.closeInBound() raised an exception. (org.apache.kafka.common.network.SslTransportLayer)
javax.net.ssl.SSLException: Inbound closed before receiving peer's close_notify: possible truncation attack?
at sun.security.ssl.Alerts.getSSLException(Alerts.java:208)
at sun.security.ssl.SSLEngineImpl.fatal(SSLEngineImpl.java:1639)
at sun.security.ssl.SSLEngineImpl.fatal(SSLEngineImpl.java:1607)
at sun.security.ssl.SSLEngineImpl.closeInbound(SSLEngineImpl.java:1537)
at org.apache.kafka.common.network.SslTransportLayer.handshakeFailure(SslTransportLayer.java:723)
at org.apache.kafka.common.network.SslTransportLayer.handshake(SslTransportLayer.java:313)
at org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:68)
at org.apache.kafka.common.network.Selector.poll(Selector.java:281)
at kafka.network.Processor.run(SocketServer.scala:413)
at java.lang.Thread.run(Thread.java:722)
[2016-02-18 15:41:06,283] DEBUG Connection with l************.com/10.**.**.** disconnected (org.apache.kafka.common.network.Selector)
javax.net.ssl.SSLException: Unrecognized SSL message, plaintext connection?
at sun.security.ssl.EngineInputRecord.bytesInCompletePacket(EngineInputRecord.java:171)
at sun.security.ssl.SSLEngineImpl.readNetRecord(SSLEngineImpl.java:845)
at sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:758)
at javax.net.ssl.SSLEngine.unwrap(SSLEngine.java:624)
at org.apache.kafka.common.network.SslTransportLayer.handshakeUnwrap(SslTransportLayer.java:408)
at org.apache.kafka.common.network.SslTransportLayer.handshake(SslTransportLayer.java:269)
at org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:68)
at org.apache.kafka.common.network.Selector.poll(Selector.java:281)
at kafka.network.Processor.run(SocketServer.scala:413)
at java.lang.Thread.run(Thread.java:722)
Producer Java client code:
System.setProperty("javax.net.debug","ssl:handshake:verbose");
Properties props = new Properties();
props.put("bootstrap.servers", "************.com:9093");
props.put("acks", "all");
props.put("retries", "0");
props.put("batch.size", "16384");
props.put("linger.ms", "1");
props.put("buffer.memory", "33554432");
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("security.protocol", "SSL");
props.put("ssl.protocol", "SSL");
props.put("ssl.truststore.location", "/idn/home/salla8/ssl/kafka_client_truststore.jks");
props.put("ssl.truststore.password", "p@ssw0rd");
props.put("ssl.keystore.location", "/idn/home/salla8/ssl/kafka_client_keystore.jks");
props.put("ssl.keystore.password", "p@ssw0rd");
props.put("ssl.key.password", "p@ssw0rd");
Producer<String, String> producer = new KafkaProducer<String, String>(props);
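As an aside, the 0.9 Java client only logs a warning for configuration keys it does not recognize and then ignores them, so a misspelled ssl.* property silently takes no effect. A small defensive check can fail fast before the producer is built. This is an illustrative sketch, not a Kafka API; the key list below is a hand-maintained assumption based on the 0.9.0.0 documented property names:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Properties;
import java.util.Set;

public class SslPropsCheck {

    // SSL-related client property names as documented for Kafka 0.9.0.0.
    // (Assumed list for illustration; prefer the constants shipped in the
    // client jar over hand-typed strings where possible.)
    private static final Set<String> KNOWN_SSL_KEYS = new HashSet<>(Arrays.asList(
            "ssl.protocol",
            "ssl.truststore.location", "ssl.truststore.password",
            "ssl.keystore.location", "ssl.keystore.password",
            "ssl.key.password", "ssl.enabled.protocols",
            "ssl.truststore.type", "ssl.keystore.type"));

    /** Returns any ssl.* keys in props that are not recognized config names. */
    public static Set<String> unknownSslKeys(Properties props) {
        Set<String> unknown = new HashSet<>();
        for (String name : props.stringPropertyNames()) {
            if (name.startsWith("ssl.") && !KNOWN_SSL_KEYS.contains(name)) {
                unknown.add(name);
            }
        }
        return unknown;
    }
}
```

Running unknownSslKeys over the producer properties before constructing the KafkaProducer would surface any mistyped ssl.* key immediately instead of leaving it as a warning buried in the client log.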
Configuration - server.properties:
broker.id=0
listeners=SSL://:9093
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
security.inter.broker.protocol=SSL
ssl.keystore.location=/opt/kafka_2.11-0.9.0.0/config/ssl/kafka.server.keystore.jks
ssl.keystore.password=p@ssw0rd
ssl.key.password=p@ssw0rd
ssl.truststore.location=/opt/kafka_2.11-0.9.0.0/config/ssl/kafka.server.truststore.jks
ssl.truststore.password=p@ssw0rd
ssl.client.auth=required
log.dirs=/tmp/kafka-logs
num.partitions=1
num.recovery.threads.per.data.dir=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
log.cleaner.enable=false
zookeeper.connect=*********:5181/test900
zookeeper.connection.timeout.ms=6000
Logs - kafkaServer.out:
[2016-02-17 08:58:00,226] INFO KafkaConfig values:
request.timeout.ms = 30000
log.roll.hours = 168
inter.broker.protocol.version = 0.9.0.X
log.preallocate = false
security.inter.broker.protocol = SSL
controller.socket.timeout.ms = 30000
ssl.keymanager.algorithm = SunX509
ssl.key.password = null
log.cleaner.enable = false
num.recovery.threads.per.data.dir = 1
background.threads = 10
unclean.leader.election.enable = true
sasl.kerberos.kinit.cmd = /usr/bin/kinit
replica.lag.time.max.ms = 10000
ssl.endpoint.identification.algorithm = null
auto.create.topics.enable = true
zookeeper.sync.time.ms = 2000
ssl.client.auth = required
ssl.keystore.password = [hidden]
log.cleaner.io.buffer.load.factor = 0.9
offsets.topic.compression.codec = 0
log.retention.hours = 168
ssl.protocol = TLS
log.dirs = /tmp/kafka-logs
log.index.size.max.bytes = 10485760
sasl.kerberos.min.time.before.relogin = 60000
log.retention.minutes = null
connections.max.idle.ms = 600000
ssl.trustmanager.algorithm = PKIX
offsets.retention.minutes = 1440
max.connections.per.ip = 2147483647
replica.fetch.wait.max.ms = 500
metrics.num.samples = 2
port = 9092
offsets.retention.check.interval.ms = 600000
log.cleaner.dedupe.buffer.size = 524288000
log.segment.bytes = 1073741824
group.min.session.timeout.ms = 6000
producer.purgatory.purge.interval.requests = 1000
min.insync.replicas = 1
ssl.truststore.password = [hidden]
log.flush.scheduler.interval.ms = 9223372036854775807
socket.receive.buffer.bytes = 102400
leader.imbalance.per.broker.percentage = 10
num.io.threads = 8
offsets.topic.replication.factor = 3
zookeeper.connect = lpdbd0055:5181/test900
queued.max.requests = 500
replica.socket.timeout.ms = 30000
offsets.topic.segment.bytes = 104857600
replica.high.watermark.checkpoint.interval.ms = 5000
broker.id = 0
ssl.keystore.location = /opt/kafka_2.11-0.9.0.0/config/ssl/keystore.jks
listeners = SSL://:9093
log.flush.interval.messages = 9223372036854775807
principal.builder.class = class org.apache.kafka.common.security.auth.DefaultPrincipalBuilder
log.retention.ms = null
offsets.commit.required.acks = -1
sasl.kerberos.principal.to.local.rules = [DEFAULT]
group.max.session.timeout.ms = 30000
num.replica.fetchers = 1
advertised.listeners = null
replica.socket.receive.buffer.bytes = 65536
delete.topic.enable = false
log.index.interval.bytes = 4096
metric.reporters = []
compression.type = producer
log.cleanup.policy = delete
controlled.shutdown.max.retries = 3
log.cleaner.threads = 1
quota.window.size.seconds = 1
zookeeper.connection.timeout.ms = 6000
offsets.load.buffer.size = 5242880
zookeeper.session.timeout.ms = 6000
ssl.cipher.suites = null
authorizer.class.name =
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.service.name = null
controlled.shutdown.enable = true
offsets.topic.num.partitions = 50
quota.window.num = 11
message.max.bytes = 1000012
log.cleaner.backoff.ms = 15000
log.roll.jitter.hours = 0
log.retention.check.interval.ms = 300000
replica.fetch.max.bytes = 1048576
log.cleaner.delete.retention.ms = 86400000
fetch.purgatory.purge.interval.requests = 1000
log.cleaner.min.cleanable.ratio = 0.5
offsets.commit.timeout.ms = 5000
zookeeper.set.acl = false
log.retention.bytes = -1
offset.metadata.max.bytes = 4096
leader.imbalance.check.interval.seconds = 300
quota.consumer.default = 9223372036854775807
log.roll.jitter.ms = null
reserved.broker.max.id = 1000
replica.fetch.backoff.ms = 1000
advertised.host.name = null
quota.producer.default = 9223372036854775807
log.cleaner.io.buffer.size = 524288
controlled.shutdown.retry.backoff.ms = 5000
log.dir = /tmp/kafka-logs
log.flush.offset.checkpoint.interval.ms = 60000
log.segment.delete.delay.ms = 60000
num.partitions = 1
num.network.threads = 3
socket.request.max.bytes = 104857600
sasl.kerberos.ticket.renew.window.factor = 0.8
log.roll.ms = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
socket.send.buffer.bytes = 102400
log.flush.interval.ms = null
ssl.truststore.location = /opt/kafka_2.11-0.9.0.0/config/ssl/truststore.jks
log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
default.replication.factor = 1
metrics.sample.window.ms = 30000
auto.leader.rebalance.enable = true
host.name =
ssl.truststore.type = JKS
advertised.port = null
max.connections.per.ip.overrides =
replica.fetch.min.bytes = 1
ssl.keystore.type = JKS
(kafka.server.KafkaConfig)
Thanks,
Sri
American Express made the following annotations
******************************************************************************
"This message and any attachments are solely for the intended recipient and may contain confidential or privileged information. If you are not the intended recipient, any disclosure, copying, use, or distribution of the information included in this message and any attachments is prohibited. If you have received this communication in error, please notify us by reply e-mail and immediately and permanently delete this message and any attachments. Thank you."
******************************************************************************
Re: Java Client connection errors with Kafka 0.9.0.0 when SSL is enabled
Posted by Srikrishna Alla <al...@gmail.com>.
We have added the client's public certs into the broker truststore and vice
versa. We removed the keystore-related properties from the client code and
tried with ssl.client.auth set to both requested and none. We are still
getting the same error. Please let us know what else we can try.
Re: Java Client connection errors with Kafka 0.9.0.0 when SSL is enabled
Posted by Harsha <ka...@harsha.io>.
Did you try what Adam suggested in the earlier email? As a quick check, you
can also try removing the keystore and key.password configs from the
client side.
-Harsha
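Harsha's quick check amounts to an SSL client config with no keystore material, which is valid when the broker does not require client authentication (ssl.client.auth set to none or requested). A sketch of such a config; the host and paths are placeholders:

```java
import java.util.Properties;

public class MinimalSslClientProps {

    /** Build SSL client properties without keystore/key.password entries.
     *  Sufficient when the broker does not require client authentication. */
    public static Properties build(String bootstrap,
                                   String truststorePath,
                                   String truststorePassword) {
        Properties props = new Properties();
        props.put("bootstrap.servers", bootstrap);
        props.put("security.protocol", "SSL");
        // Truststore so the client can verify the broker's certificate.
        props.put("ssl.truststore.location", truststorePath);
        props.put("ssl.truststore.password", truststorePassword);
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        return props;
    }
}
```

If this reduced config still fails the handshake, the problem is on the trust path between client truststore and broker certificate rather than in the client-auth material.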
Re: Java Client connection errors with Kafka 0.9.0.0 when SSL is enabled
Posted by Adam Kunicki <ad...@streamsets.com>.
Just to be thorough, it seems you have client authentication enabled as
well.
This means that each broker must have your client's public certificate in
its truststore.
I felt like it might be easier to draw a diagram than write it out, but
this is what your setup should look like:
[image: Inline image 1]
Also, keep in mind that truststores are only required when using
certificates that are not signed by a CA the JRE trusts out of the box,
such as self-signed certs or some less expensive certs (e.g., Comodo
PositiveSSL).
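To double-check that a broker's truststore actually contains the client's
certificate, a small JKS inspection utility can help. This is an
illustrative sketch, not from the thread (the class name and
argument handling are assumptions); point it at the truststore path from
server.properties:

```java
import java.io.FileInputStream;
import java.security.KeyStore;
import java.util.Collections;
import java.util.List;

public class TruststoreCheck {

    // Lists the alias names stored in a JKS truststore so you can confirm
    // the client's certificate was actually imported into it.
    static List<String> aliases(KeyStore ts) throws Exception {
        return Collections.list(ts.aliases());
    }

    public static void main(String[] args) throws Exception {
        // args[0] = truststore path, args[1] = truststore password
        KeyStore ts = KeyStore.getInstance("JKS");
        try (FileInputStream in = new FileInputStream(args[0])) {
            ts.load(in, args[1].toCharArray());
        }
        for (String alias : aliases(ts)) {
            System.out.println(alias
                    + " (certificate entry: " + ts.isCertificateEntry(alias) + ")");
        }
    }
}
```

If the alias for the client's certificate is missing from the broker's
truststore, the handshake fails much like the DEBUG log in the original
message.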
--
Adam Kunicki
StreamSets | Field Engineer
mobile: 415.890.DATA (3282) | linkedin
<http://www.adamkunicki.com>
Re: Java Client connection errors with Kafka 0.9.0.0 when SSL is enabled
Posted by Srikrishna Alla <al...@gmail.com>.
That was a typo. I removed it and still get the same error.
Thanks,
Sri
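For anyone hitting the same handshake failure, a minimal SSL producer
configuration with client authentication might look like the sketch below.
The host, paths, and passwords are placeholders, and there is no
`ssl.protocal` entry (the valid, optional key is `ssl.protocol`):

```java
import java.util.Properties;

public class SslProducerProps {

    // Builds a minimal SSL producer configuration. All paths, hosts, and
    // passwords below are placeholders for illustration.
    static Properties build() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker.example.com:9093");
        props.put("security.protocol", "SSL");  // the switch that enables TLS
        props.put("ssl.truststore.location", "/path/to/client.truststore.jks");
        props.put("ssl.truststore.password", "changeit");
        // Needed only because the broker sets ssl.client.auth=required:
        props.put("ssl.keystore.location", "/path/to/client.keystore.jks");
        props.put("ssl.keystore.password", "changeit");
        props.put("ssl.key.password", "changeit");
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        return props;
    }

    public static void main(String[] args) {
        build().forEach((k, v) -> System.out.println(k + "=" + v));
    }
}
```

Pass the result to `new KafkaProducer<String, String>(props)` as in the
original code.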
> On Feb 18, 2016, at 4:21 PM, Adam Kunicki <ad...@streamsets.com> wrote:
>
> Ha! nice catch Gwen!
>
>> On Thu, Feb 18, 2016 at 3:20 PM, Gwen Shapira <gw...@confluent.io> wrote:
>>
>> props.put("ssl.protocal", "SSL"); <- looks like a typo.
>>
Re: Java Client connection errors with Kafka 0.9.0.0 when SSL is enabled
Posted by Adam Kunicki <ad...@streamsets.com>.
Ha! nice catch Gwen!
On Thu, Feb 18, 2016 at 3:20 PM, Gwen Shapira <gw...@confluent.io> wrote:
> props.put("ssl.protocal", "SSL"); <- looks like a typo.
>
> On Thu, Feb 18, 2016 at 2:49 PM, Srikrishna Alla <
> Srikrishna.Alla@aexp.com.invalid> wrote:
>
> > Hi,
> >
> > We are getting the below error when trying to use a Java new producer
> > client. Please let us know the reason for this error -
> >
> > Error message:
> > [2016-02-18 15:41:06,182] DEBUG Accepted connection from /10.**.***.** on
> > /10.**.***.**:9093. sendBufferSize [actual|requested]: [102400|102400]
> > recvBufferSize [actual|requested]: [102400|102400]
> (kafka.network.Acceptor)
> > [2016-02-18 15:41:06,183] DEBUG Processor 1 listening to new connection
> > from /10.**.**.**:46419 (kafka.network.Processor)
> > [2016-02-18 15:41:06,283] DEBUG SSLEngine.closeInBound() raised an
> > exception. (org.apache.kafka.common.network.SslTransportLayer)
> > javax.net.ssl.SSLException: Inbound closed before receiving peer's
> > close_notify: possible truncation attack?
> > at sun.security.ssl.Alerts.getSSLException(Alerts.java:208)
> > at sun.security.ssl.SSLEngineImpl.fatal(SSLEngineImpl.java:1639)
> > at sun.security.ssl.SSLEngineImpl.fatal(SSLEngineImpl.java:1607)
> > at sun.security.ssl.SSLEngineImpl.closeInbound(SSLEngineImpl.java:1537)
> > at
> >
> org.apache.kafka.common.network.SslTransportLayer.handshakeFailure(SslTransportLayer.java:723)
> > at
> >
> org.apache.kafka.common.network.SslTransportLayer.handshake(SslTransportLayer.java:313)
> > at
> >
> org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:68)
> > at org.apache.kafka.common.network.Selector.poll(Selector.java:281)
> > at kafka.network.Processor.run(SocketServer.scala:413)
> > at java.lang.Thread.run(Thread.java:722)
> > [2016-02-18 15:41:06,283] DEBUG Connection with
> > l************.com/10.**.**.** disconnected
> > (org.apache.kafka.common.network.Selector)
> > javax.net.ssl.SSLException: Unrecognized SSL message, plaintext
> connection?
> > at
> >
> sun.security.ssl.EngineInputRecord.bytesInCompletePacket(EngineInputRecord.java:171)
> > at sun.security.ssl.SSLEngineImpl.readNetRecord(SSLEngineImpl.java:845)
> > at sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:758)
> > at javax.net.ssl.SSLEngine.unwrap(SSLEngine.java:624)
> > at
> >
> org.apache.kafka.common.network.SslTransportLayer.handshakeUnwrap(SslTransportLayer.java:408)
> > at
> >
> org.apache.kafka.common.network.SslTransportLayer.handshake(SslTransportLayer.java:269)
> > at
> >
> org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:68)
> > at org.apache.kafka.common.network.Selector.poll(Selector.java:281)
> > at kafka.network.Processor.run(SocketServer.scala:413)
> > at java.lang.Thread.run(Thread.java:722)
> >
> > Producer Java client code:
> >
> > System.setProperty("javax.net.debug","ssl:handshake:verbose");
> > Properties props = new Properties();
> > props.put("bootstrap.servers", "************.com:9093");
> > props.put("acks", "all");
> > props.put("retries", "0");
> > props.put("batch.size", "16384");
> > props.put("linger.ms", "1");
> > props.put("buffer.memory", "33554432");
> > props.put("key.serializer",
> > "org.apache.kafka.common.serialization.StringSerializer");
> > props.put("value.serializer",
> > "org.apache.kafka.common.serialization.StringSerializer");
> > props.put("security.protocol", "SSL");
> > props.put("ssl.protocal", "SSL");
> > props.put("ssl.truststore.location",
> > "/idn/home/salla8/ssl/kafka_client_truststore.jks");
> > props.put("ssl.truststore.password", "p@ssw0rd");
> > props.put("ssl.keystore.location",
> > "/idn/home/salla8/ssl/kafka_client_keystore.jks");
> > props.put("ssl.keystore.password", "p@ssw0rd");
> > props.put("ssl.key.password", "p@ssw0rd");
> > Producer<String, String> producer = new
> > KafkaProducer<String, String>(props);
> >
> >
> > Configuration -server.properties:
> > broker.id=0
> > listeners=SSL://:9093
> > num.network.threads=3
> > num.io.threads=8
> > socket.send.buffer.bytes=102400
> > socket.receive.buffer.bytes=102400
> > socket.request.max.bytes=104857600
> > security.inter.broker.protocol=SSL
> >
> >
> ssl.keystore.location=/opt/kafka_2.11-0.9.0.0/config/ssl/kafka.server.keystore.jks
> > ssl.keystore.password=p@ssw0rd
> > ssl.key.password=p@ssw0rd
> >
> >
> ssl.truststore.location=/opt/kafka_2.11-0.9.0.0/config/ssl/kafka.server.truststore.jks
> > ssl.truststore.password=p@ssw0rd
> > ssl.client.auth=required
> > log.dirs=/tmp/kafka-logs
> > num.partitions=1
> > num.recovery.threads.per.data.dir=1
> > log.retention.hours=168
> > log.segment.bytes=1073741824
> > log.retention.check.interval.ms=300000
> > log.cleaner.enable=false
> > zookeeper.connect=*********:5181/test900
> > zookeeper.connection.timeout.ms=6000
> >
> >
> > Logs - kafkaServer.out:
> > [2016-02-17 08:58:00,226] INFO KafkaConfig values:
> > request.timeout.ms = 30000
> > log.roll.hours = 168
> > inter.broker.protocol.version = 0.9.0.X
> > log.preallocate = false
> > security.inter.broker.protocol = SSL
> > controller.socket.timeout.ms = 30000
> > ssl.keymanager.algorithm = SunX509
> > ssl.key.password = null
> > log.cleaner.enable = false
> > num.recovery.threads.per.data.dir = 1
> > background.threads = 10
> > unclean.leader.election.enable = true
> > sasl.kerberos.kinit.cmd = /usr/bin/kinit
> > replica.lag.time.max.ms = 10000
> > ssl.endpoint.identification.algorithm = null
> > auto.create.topics.enable = true
> > zookeeper.sync.time.ms = 2000
> > ssl.client.auth = required
> > ssl.keystore.password = [hidden]
> > log.cleaner.io.buffer.load.factor = 0.9
> > offsets.topic.compression.codec = 0
> > log.retention.hours = 168
> > ssl.protocol = TLS
> > log.dirs = /tmp/kafka-logs
> > log.index.size.max.bytes = 10485760
> > sasl.kerberos.min.time.before.relogin = 60000
> > log.retention.minutes = null
> > connections.max.idle.ms = 600000
> > ssl.trustmanager.algorithm = PKIX
> > offsets.retention.minutes = 1440
> > max.connections.per.ip = 2147483647
> > replica.fetch.wait.max.ms = 500
> > metrics.num.samples = 2
> > port = 9092
> > offsets.retention.check.interval.ms = 600000
> > log.cleaner.dedupe.buffer.size = 524288000
> > log.segment.bytes = 1073741824
> > group.min.session.timeout.ms = 6000
> > producer.purgatory.purge.interval.requests = 1000
> > min.insync.replicas = 1
> > ssl.truststore.password = [hidden]
> > log.flush.scheduler.interval.ms = 9223372036854775807
> > socket.receive.buffer.bytes = 102400
> > leader.imbalance.per.broker.percentage = 10
> > num.io.threads = 8
> > offsets.topic.replication.factor = 3
> > zookeeper.connect = lpdbd0055:5181/test900
> > queued.max.requests = 500
> > replica.socket.timeout.ms = 30000
> > offsets.topic.segment.bytes = 104857600
> > replica.high.watermark.checkpoint.interval.ms = 5000
> > broker.id = 0
> > ssl.keystore.location =
> > /opt/kafka_2.11-0.9.0.0/config/ssl/keystore.jks
> > listeners = SSL://:9093
> > log.flush.interval.messages = 9223372036854775807
> > principal.builder.class = class
> > org.apache.kafka.common.security.auth.DefaultPrincipalBuilder
> > log.retention.ms = null
> > offsets.commit.required.acks = -1
> > sasl.kerberos.principal.to.local.rules = [DEFAULT]
> > group.max.session.timeout.ms = 30000
> > num.replica.fetchers = 1
> > advertised.listeners = null
> > replica.socket.receive.buffer.bytes = 65536
> > delete.topic.enable = false
> > log.index.interval.bytes = 4096
> > metric.reporters = []
> > compression.type = producer
> > log.cleanup.policy = delete
> > controlled.shutdown.max.retries = 3
> > log.cleaner.threads = 1
> > quota.window.size.seconds = 1
> > zookeeper.connection.timeout.ms = 6000
> > offsets.load.buffer.size = 5242880
> > zookeeper.session.timeout.ms = 6000
> > ssl.cipher.suites = null
> > authorizer.class.name =
> > sasl.kerberos.ticket.renew.jitter = 0.05
> > sasl.kerberos.service.name = null
> > controlled.shutdown.enable = true
> > offsets.topic.num.partitions = 50
> > quota.window.num = 11
> > message.max.bytes = 1000012
> > log.cleaner.backoff.ms = 15000
> > log.roll.jitter.hours = 0
> > log.retention.check.interval.ms = 300000
> > replica.fetch.max.bytes = 1048576
> > log.cleaner.delete.retention.ms = 86400000
> > fetch.purgatory.purge.interval.requests = 1000
> > log.cleaner.min.cleanable.ratio = 0.5
> > offsets.commit.timeout.ms = 5000
> > zookeeper.set.acl = false
> > log.retention.bytes = -1
> > offset.metadata.max.bytes = 4096
> > leader.imbalance.check.interval.seconds = 300
> > quota.consumer.default = 9223372036854775807
> > log.roll.jitter.ms = null
> > reserved.broker.max.id = 1000
> > replica.fetch.backoff.ms = 1000
> > advertised.host.name = null
> > quota.producer.default = 9223372036854775807
> > log.cleaner.io.buffer.size = 524288
> > controlled.shutdown.retry.backoff.ms = 5000
> > log.dir = /tmp/kafka-logs
> > log.flush.offset.checkpoint.interval.ms = 60000
> > log.segment.delete.delay.ms = 60000
> > num.partitions = 1
> > num.network.threads = 3
> > socket.request.max.bytes = 104857600
> > sasl.kerberos.ticket.renew.window.factor = 0.8
> > log.roll.ms = null
> > ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
> > socket.send.buffer.bytes = 102400
> > log.flush.interval.ms = null
> > ssl.truststore.location = /opt/kafka_2.11-0.9.0.0/config/ssl/truststore.jks
> > log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
> > default.replication.factor = 1
> > metrics.sample.window.ms = 30000
> > auto.leader.rebalance.enable = true
> > host.name =
> > ssl.truststore.type = JKS
> > advertised.port = null
> > max.connections.per.ip.overrides =
> > replica.fetch.min.bytes = 1
> > ssl.keystore.type = JKS
> > (kafka.server.KafkaConfig)
> > Thanks,
> > Sri
> >
> >
> >
> > American Express made the following annotations
> >
> >
> >
> > ******************************************************************************
> >
> > "This message and any attachments are solely for the intended recipient
> > and may contain confidential or privileged information. If you are not the
> > intended recipient, any disclosure, copying, use, or distribution of the
> > information included in this message and any attachments is prohibited. If
> > you have received this communication in error, please notify us by reply
> > e-mail and immediately and permanently delete this message and any
> > attachments. Thank you."
> >
> > ******************************************************************************
> >
>
--
Adam Kunicki
StreamSets | Field Engineer
mobile: 415.890.DATA (3282) | linkedin
<http://www.adamkunicki.com>
Re: Java Client connection errors with Kafka 0.9.0.0 when SSL is enabled
Posted by Gwen Shapira <gw...@confluent.io>.
props.put("ssl.protocal", "SSL"); <- looks like a typo. The config key should
be spelled "ssl.protocol"; the client treats the misspelled key as an unknown
property and ignores it.
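For reference, a corrected version of the SSL-related producer settings could
look like the sketch below. It uses only java.util.Properties so it stands
alone; the broker host, file paths, and passwords are placeholders, not values
from the original post.

```java
import java.util.Properties;

public class SslProducerConfig {

    // Builds the SSL-related producer properties with the key spelled
    // correctly as "ssl.protocol". All host/path/password values here
    // are placeholders.
    public static Properties sslProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker.example.com:9093"); // placeholder
        props.put("security.protocol", "SSL");
        // The original post had "ssl.protocal", which the client ignores
        // as an unknown config. "TLS" is the documented default value.
        props.put("ssl.protocol", "TLS");
        props.put("ssl.truststore.location", "/path/to/kafka_client_truststore.jks");
        props.put("ssl.truststore.password", "changeit");
        props.put("ssl.keystore.location", "/path/to/kafka_client_keystore.jks");
        props.put("ssl.keystore.password", "changeit");
        props.put("ssl.key.password", "changeit");
        return props;
    }

    public static void main(String[] args) {
        Properties props = sslProps();
        // The corrected key is set; the misspelled one is absent.
        System.out.println("ssl.protocol = " + props.getProperty("ssl.protocol"));
        System.out.println("ssl.protocal = " + props.getProperty("ssl.protocal"));
    }
}
```

Note that an unknown key like "ssl.protocal" is not a hard error: the client
typically just logs a warning that the configuration was supplied but isn't a
known config, which makes this kind of typo easy to miss.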
On Thu, Feb 18, 2016 at 2:49 PM, Srikrishna Alla <
Srikrishna.Alla@aexp.com.invalid> wrote:
> Hi,
>
> We are getting the below error when trying to use a Java new producer
> client. Please let us know the reason for this error -
>
> Error message:
> [2016-02-18 15:41:06,182] DEBUG Accepted connection from /10.**.***.** on
> /10.**.***.**:9093. sendBufferSize [actual|requested]: [102400|102400]
> recvBufferSize [actual|requested]: [102400|102400] (kafka.network.Acceptor)
> [2016-02-18 15:41:06,183] DEBUG Processor 1 listening to new connection
> from /10.**.**.**:46419 (kafka.network.Processor)
> [2016-02-18 15:41:06,283] DEBUG SSLEngine.closeInBound() raised an
> exception. (org.apache.kafka.common.network.SslTransportLayer)
> javax.net.ssl.SSLException: Inbound closed before receiving peer's
> close_notify: possible truncation attack?
> at sun.security.ssl.Alerts.getSSLException(Alerts.java:208)
> at sun.security.ssl.SSLEngineImpl.fatal(SSLEngineImpl.java:1639)
> at sun.security.ssl.SSLEngineImpl.fatal(SSLEngineImpl.java:1607)
> at sun.security.ssl.SSLEngineImpl.closeInbound(SSLEngineImpl.java:1537)
> at org.apache.kafka.common.network.SslTransportLayer.handshakeFailure(SslTransportLayer.java:723)
> at org.apache.kafka.common.network.SslTransportLayer.handshake(SslTransportLayer.java:313)
> at org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:68)
> at org.apache.kafka.common.network.Selector.poll(Selector.java:281)
> at kafka.network.Processor.run(SocketServer.scala:413)
> at java.lang.Thread.run(Thread.java:722)
> [2016-02-18 15:41:06,283] DEBUG Connection with
> l************.com/10.**.**.** disconnected
> (org.apache.kafka.common.network.Selector)
> javax.net.ssl.SSLException: Unrecognized SSL message, plaintext connection?
> at sun.security.ssl.EngineInputRecord.bytesInCompletePacket(EngineInputRecord.java:171)
> at sun.security.ssl.SSLEngineImpl.readNetRecord(SSLEngineImpl.java:845)
> at sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:758)
> at javax.net.ssl.SSLEngine.unwrap(SSLEngine.java:624)
> at org.apache.kafka.common.network.SslTransportLayer.handshakeUnwrap(SslTransportLayer.java:408)
> at org.apache.kafka.common.network.SslTransportLayer.handshake(SslTransportLayer.java:269)
> at org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:68)
> at org.apache.kafka.common.network.Selector.poll(Selector.java:281)
> at kafka.network.Processor.run(SocketServer.scala:413)
> at java.lang.Thread.run(Thread.java:722)
>
> Producer Java client code:
>
> System.setProperty("javax.net.debug","ssl:handshake:verbose");
> Properties props = new Properties();
> props.put("bootstrap.servers", "************.com:9093");
> props.put("acks", "all");
> props.put("retries", "0");
> props.put("batch.size", "16384");
> props.put("linger.ms", "1");
> props.put("buffer.memory", "33554432");
> props.put("key.serializer",
> "org.apache.kafka.common.serialization.StringSerializer");
> props.put("value.serializer",
> "org.apache.kafka.common.serialization.StringSerializer");
> props.put("security.protocol", "SSL");
> props.put("ssl.protocal", "SSL");
> props.put("ssl.truststore.location",
> "/idn/home/salla8/ssl/kafka_client_truststore.jks");
> props.put("ssl.truststore.password", "p@ssw0rd");
> props.put("ssl.keystore.location",
> "/idn/home/salla8/ssl/kafka_client_keystore.jks");
> props.put("ssl.keystore.password", "p@ssw0rd");
> props.put("ssl.key.password", "p@ssw0rd");
> Producer<String, String> producer = new
> KafkaProducer<String, String>(props);
>
>
> Configuration -server.properties:
> broker.id=0
> listeners=SSL://:9093
> num.network.threads=3
> num.io.threads=8
> socket.send.buffer.bytes=102400
> socket.receive.buffer.bytes=102400
> socket.request.max.bytes=104857600
> security.inter.broker.protocol=SSL
>
> ssl.keystore.location=/opt/kafka_2.11-0.9.0.0/config/ssl/kafka.server.keystore.jks
> ssl.keystore.password=p@ssw0rd
> ssl.key.password=p@ssw0rd
>
> ssl.truststore.location=/opt/kafka_2.11-0.9.0.0/config/ssl/kafka.server.truststore.jks
> ssl.truststore.password=p@ssw0rd
> ssl.client.auth=required
> log.dirs=/tmp/kafka-logs
> num.partitions=1
> num.recovery.threads.per.data.dir=1
> log.retention.hours=168
> log.segment.bytes=1073741824
> log.retention.check.interval.ms=300000
> log.cleaner.enable=false
> zookeeper.connect=*********:5181/test900
> zookeeper.connection.timeout.ms=6000
>
>
> Logs - kafkaServer.out:
> [2016-02-17 08:58:00,226] INFO KafkaConfig values:
> request.timeout.ms = 30000
> log.roll.hours = 168
> inter.broker.protocol.version = 0.9.0.X
> log.preallocate = false
> security.inter.broker.protocol = SSL
> controller.socket.timeout.ms = 30000
> ssl.keymanager.algorithm = SunX509
> ssl.key.password = null
> log.cleaner.enable = false
> num.recovery.threads.per.data.dir = 1
> background.threads = 10
> unclean.leader.election.enable = true
> sasl.kerberos.kinit.cmd = /usr/bin/kinit
> replica.lag.time.max.ms = 10000
> ssl.endpoint.identification.algorithm = null
> auto.create.topics.enable = true
> zookeeper.sync.time.ms = 2000
> ssl.client.auth = required
> ssl.keystore.password = [hidden]
> log.cleaner.io.buffer.load.factor = 0.9
> offsets.topic.compression.codec = 0
> log.retention.hours = 168
> ssl.protocol = TLS
> log.dirs = /tmp/kafka-logs
> log.index.size.max.bytes = 10485760
> sasl.kerberos.min.time.before.relogin = 60000
> log.retention.minutes = null
> connections.max.idle.ms = 600000
> ssl.trustmanager.algorithm = PKIX
> offsets.retention.minutes = 1440
> max.connections.per.ip = 2147483647
> replica.fetch.wait.max.ms = 500
> metrics.num.samples = 2
> port = 9092
> offsets.retention.check.interval.ms = 600000
> log.cleaner.dedupe.buffer.size = 524288000
> log.segment.bytes = 1073741824
> group.min.session.timeout.ms = 6000
> producer.purgatory.purge.interval.requests = 1000
> min.insync.replicas = 1
> ssl.truststore.password = [hidden]
> log.flush.scheduler.interval.ms = 9223372036854775807
> socket.receive.buffer.bytes = 102400
> leader.imbalance.per.broker.percentage = 10
> num.io.threads = 8
> offsets.topic.replication.factor = 3
> zookeeper.connect = lpdbd0055:5181/test900
> queued.max.requests = 500
> replica.socket.timeout.ms = 30000
> offsets.topic.segment.bytes = 104857600
> replica.high.watermark.checkpoint.interval.ms = 5000
> broker.id = 0
> ssl.keystore.location = /opt/kafka_2.11-0.9.0.0/config/ssl/keystore.jks
> listeners = SSL://:9093
> log.flush.interval.messages = 9223372036854775807
> principal.builder.class = class org.apache.kafka.common.security.auth.DefaultPrincipalBuilder
> log.retention.ms = null
> offsets.commit.required.acks = -1
> sasl.kerberos.principal.to.local.rules = [DEFAULT]
> group.max.session.timeout.ms = 30000
> num.replica.fetchers = 1
> advertised.listeners = null
> replica.socket.receive.buffer.bytes = 65536
> delete.topic.enable = false
> log.index.interval.bytes = 4096
> metric.reporters = []
> compression.type = producer
> log.cleanup.policy = delete
> controlled.shutdown.max.retries = 3
> log.cleaner.threads = 1
> quota.window.size.seconds = 1
> zookeeper.connection.timeout.ms = 6000
> offsets.load.buffer.size = 5242880
> zookeeper.session.timeout.ms = 6000
> ssl.cipher.suites = null
> authorizer.class.name =
> sasl.kerberos.ticket.renew.jitter = 0.05
> sasl.kerberos.service.name = null
> controlled.shutdown.enable = true
> offsets.topic.num.partitions = 50
> quota.window.num = 11
> message.max.bytes = 1000012
> log.cleaner.backoff.ms = 15000
> log.roll.jitter.hours = 0
> log.retention.check.interval.ms = 300000
> replica.fetch.max.bytes = 1048576
> log.cleaner.delete.retention.ms = 86400000
> fetch.purgatory.purge.interval.requests = 1000
> log.cleaner.min.cleanable.ratio = 0.5
> offsets.commit.timeout.ms = 5000
> zookeeper.set.acl = false
> log.retention.bytes = -1
> offset.metadata.max.bytes = 4096
> leader.imbalance.check.interval.seconds = 300
> quota.consumer.default = 9223372036854775807
> log.roll.jitter.ms = null
> reserved.broker.max.id = 1000
> replica.fetch.backoff.ms = 1000
> advertised.host.name = null
> quota.producer.default = 9223372036854775807
> log.cleaner.io.buffer.size = 524288
> controlled.shutdown.retry.backoff.ms = 5000
> log.dir = /tmp/kafka-logs
> log.flush.offset.checkpoint.interval.ms = 60000
> log.segment.delete.delay.ms = 60000
> num.partitions = 1
> num.network.threads = 3
> socket.request.max.bytes = 104857600
> sasl.kerberos.ticket.renew.window.factor = 0.8
> log.roll.ms = null
> ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
> socket.send.buffer.bytes = 102400
> log.flush.interval.ms = null
> ssl.truststore.location = /opt/kafka_2.11-0.9.0.0/config/ssl/truststore.jks
> log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
> default.replication.factor = 1
> metrics.sample.window.ms = 30000
> auto.leader.rebalance.enable = true
> host.name =
> ssl.truststore.type = JKS
> advertised.port = null
> max.connections.per.ip.overrides =
> replica.fetch.min.bytes = 1
> ssl.keystore.type = JKS
> (kafka.server.KafkaConfig)
> Thanks,
> Sri
>
>
>
>