Posted to users@kafka.apache.org by Nazario Parsacala <do...@gmail.com> on 2016/02/01 17:44:51 UTC
Kafka SSL Configuration Problems
Hi,
We have been using Kafka for a while now, on the binary release 2.10-0.8.2.1. But we now need encrypted communication between our publishers and subscribers, so we moved to 2.10-0.9.0.0. This works very well with SSL disabled, but we currently have issues with SSL enabled.
So I configured SSL according to http://kafka.apache.org/documentation.html#security and made only the following changes in server.properties to enable SSL:
listeners=PLAINTEXT://servername:9092, SSL://servername:9093
# The port the socket server listens on
#port=9092
# Hostname the broker will bind to. If not set, the server will bind to all interfaces
host.name=servername
# SSL Stuff
#
ssl.client.auth=required
ssl.enabled.protocols=TLSv1.2,TLSv1.1,TLSv1
ssl.keystore.location=/pathto/certs/server.keystore.jks
ssl.keystore.password=123456
ssl.key.password=123456
ssl.truststore.location=/pathto/certs/server.truststore.jks
ssl.truststore.password=123456
At startup I see the following in the logs:
advertised.host.name = servername
metric.reporters = []
quota.producer.default = 9223372036854775807
offsets.topic.num.partitions = 50
log.flush.interval.messages = 9223372036854775807
auto.create.topics.enable = true
controller.socket.timeout.ms = 30000
log.flush.interval.ms = null
principal.builder.class = class org.apache.kafka.common.security.auth.DefaultPrincipalBuilder
replica.socket.receive.buffer.bytes = 65536
min.insync.replicas = 1
replica.fetch.wait.max.ms = 500
num.recovery.threads.per.data.dir = 1
ssl.keystore.type = JKS
default.replication.factor = 1
ssl.truststore.password = [hidden]
log.preallocate = false
sasl.kerberos.principal.to.local.rules = [DEFAULT]
fetch.purgatory.purge.interval.requests = 1000
ssl.endpoint.identification.algorithm = null
replica.socket.timeout.ms = 30000
message.max.bytes = 1000012
num.io.threads = 8
offsets.commit.required.acks = -1
log.flush.offset.checkpoint.interval.ms = 60000
delete.topic.enable = false
quota.window.size.seconds = 1
ssl.truststore.type = JKS
offsets.commit.timeout.ms = 5000
quota.window.num = 11
zookeeper.connect = servername:2181
authorizer.class.name =
num.replica.fetchers = 1
log.retention.ms = null
log.roll.jitter.hours = 0
log.cleaner.enable = false
offsets.load.buffer.size = 5242880
log.cleaner.delete.retention.ms = 86400000
ssl.client.auth = required
controlled.shutdown.max.retries = 3
queued.max.requests = 500
offsets.topic.replication.factor = 3
log.cleaner.threads = 1
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
socket.request.max.bytes = 104857600
ssl.trustmanager.algorithm = PKIX
zookeeper.session.timeout.ms = 6000
log.retention.bytes = -1
sasl.kerberos.min.time.before.relogin = 60000
zookeeper.set.acl = false
connections.max.idle.ms = 600000
offsets.retention.minutes = 1440
replica.fetch.backoff.ms = 1000
inter.broker.protocol.version = 0.9.0.X
log.retention.hours = 168
num.partitions = 4
listeners = PLAINTEXT://servername:9092, SSL://servername:9093
ssl.provider = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
log.roll.ms = null
log.flush.scheduler.interval.ms = 9223372036854775807
ssl.cipher.suites = null
log.index.size.max.bytes = 10485760
ssl.keymanager.algorithm = SunX509
security.inter.broker.protocol = PLAINTEXT
replica.fetch.max.bytes = 1048576
advertised.port = null
log.cleaner.dedupe.buffer.size = 524288000
replica.high.watermark.checkpoint.interval.ms = 5000
log.cleaner.io.buffer.size = 524288
sasl.kerberos.ticket.renew.window.factor = 0.8
zookeeper.connection.timeout.ms = 6000
controlled.shutdown.retry.backoff.ms = 5000
log.roll.hours = 168
log.cleanup.policy = delete
host.name = servername
log.roll.jitter.ms = null
max.connections.per.ip = 2147483647
offsets.topic.segment.bytes = 104857600
background.threads = 10
quota.consumer.default = 9223372036854775807
request.timeout.ms = 30000
log.index.interval.bytes = 4096
log.dir = /tmp/kafka-logs
log.segment.bytes = 1073741824
log.cleaner.backoff.ms = 15000
offset.metadata.max.bytes = 4096
ssl.truststore.location = /pathto/certs/server.truststore.jks
group.max.session.timeout.ms = 30000
ssl.keystore.password = [hidden]
zookeeper.sync.time.ms = 2000
port = 9092
log.retention.minutes = null
log.segment.delete.delay.ms = 60000
log.dirs = /pathto/logs/kafka
controlled.shutdown.enable = true
compression.type = producer
max.connections.per.ip.overrides =
sasl.kerberos.kinit.cmd = /usr/bin/kinit
log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
auto.leader.rebalance.enable = true
leader.imbalance.check.interval.seconds = 300
log.cleaner.min.cleanable.ratio = 0.5
replica.lag.time.max.ms = 10000
num.network.threads = 3
ssl.key.password = [hidden]
reserved.broker.max.id = 1000
metrics.num.samples = 2
socket.send.buffer.bytes = 102400
ssl.protocol = TLS
socket.receive.buffer.bytes = 102400
ssl.keystore.location = /pathto/certs/server.keystore.jks
replica.fetch.min.bytes = 1
unclean.leader.election.enable = true
group.min.session.timeout.ms = 6000
log.cleaner.io.buffer.load.factor = 0.9
offsets.retention.check.interval.ms = 600000
producer.purgatory.purge.interval.requests = 1000
So, as you can see, the listeners are supposedly set up as
listeners = PLAINTEXT://servername:9092, SSL://servername:9093
in the logs, which reflects what was set in server.properties.
However, further down the logs, only PLAINTEXT is being registered:
[2016-02-01 11:27:49,712] INFO Registered broker 0 at path /brokers/ids/0 with addresses: PLAINTEXT -> EndPoint(servername,9092,PLAINTEXT) (kafka.utils.ZkUtils)
not port 9093, nor SSL.
I have tried multiple permutations of this config, including clearing the entire Kafka and ZooKeeper data. Still no luck. I even forced SSL onto port 9092, with the same issue. The resulting effect is that the producer and consumer give me errors like:
[2016-02-01 10:58:41,001] WARN Error while fetching metadata with correlation id 57 : {test=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
[2016-02-01 10:58:41,103] WARN Error while fetching metadata with correlation id 58 : {test=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
[2016-02-01 10:58:41,205] WARN Error while fetching metadata with correlation id 59 : {test=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
Any help is appreciated.
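For context, the clients would also need SSL settings before they can use the 9093 listener; a sketch of a client properties file (illustrative paths; the keystore entries would be needed because the broker sets ssl.client.auth=required):

```
security.protocol=SSL
ssl.truststore.location=/pathto/certs/client.truststore.jks
ssl.truststore.password=123456
ssl.keystore.location=/pathto/certs/client.keystore.jks
ssl.keystore.password=123456
ssl.key.password=123456
```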
Re: Kafka SSL Configuration Problems
Posted by Nazario Parsacala <do...@gmail.com>.
Aha, got it. So that's where I got confused.
> On Feb 1, 2016, at 3:04 PM, Ismael Juma <is...@gmail.com> wrote:
>
> Hi Nazario,
>
> The problem in the original post is that you were setting
> advertised.host.name, which means that advertised.listeners won't fall back
> to listeners anymore. Yes, it's a bit confusing given how the configs
> evolved over time.
>
> I have configured several clusters to use SSL by setting listeners
> exclusively. It works, trust me. :)
>
> Ismael
> [rest of quoted thread snipped]
Re: Kafka SSL Configuration Problems
Posted by Ismael Juma <is...@gmail.com>.
Hi Nazario,
The problem in the original post is that you were setting
advertised.host.name, which means that advertised.listeners won't fall back
to listeners anymore. Yes, it's a bit confusing given how the configs
evolved over time.
I have configured several clusters to use SSL by setting listeners
exclusively. It works, trust me. :)
Ismael
On 1 Feb 2016 19:57, "Nazario Parsacala" <do...@gmail.com> wrote:
> I don't think that is the behavior I have seen. If I set listeners only
> (as per my original post), SSL never gets registered.
>
> [2016-02-01 11:27:49,712] INFO Registered broker 0 at path /brokers/ids/0
> with addresses: PLAINTEXT -> EndPoint(servername,9092,PLAINTEXT)
> (kafka.utils.ZkUtils)
>
>
> The only way I have been able to do this, is setting both.
>
>
>
> > [earlier messages snipped]
Re: Kafka SSL Configuration Problems
Posted by Nazario Parsacala <do...@gmail.com>.
I don't think that is the behavior I have seen. If I set listeners only (as per my original post), SSL never gets registered.
[2016-02-01 11:27:49,712] INFO Registered broker 0 at path /brokers/ids/0 with addresses: PLAINTEXT -> EndPoint(servername,9092,PLAINTEXT) (kafka.utils.ZkUtils)
The only way I have been able to make this work is by setting both.
> On Feb 1, 2016, at 2:54 PM, Ismael Juma <is...@juma.me.uk> wrote:
>
> On Mon, Feb 1, 2016 at 7:15 PM, Nazario Parsacala <do...@gmail.com>
> wrote:
>
>> So it looks like you need both listeners and advertised.listeners ..?
>>
>
> No, you always need to set `listeners` (`advertised.listeners` defaults to
> `listeners`). If you want `advertised.listeners` to be different than
> `listeners`, then you need to set both. A pull request to improve the
> documentation is welcome.
>
> Ismael
Re: Kafka SSL Configuration Problems
Posted by Ismael Juma <is...@juma.me.uk>.
On Mon, Feb 1, 2016 at 7:15 PM, Nazario Parsacala <do...@gmail.com>
wrote:
> So it looks like you need both listeners and advertised.listeners ..?
>
No, you always need to set `listeners` (`advertised.listeners` defaults to
`listeners`). If you want `advertised.listeners` to be different than
`listeners`, then you need to set both. A pull request to improve the
documentation is welcome.
Ismael
Re: Kafka SSL Configuration Problems
Posted by Nazario Parsacala <do...@gmail.com>.
So it looks like you need both listeners and advertised.listeners?
When I set both configs, it finally worked.
Maybe we can update the docs?
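For the record, the pair of settings that ended up working looks roughly like this (servername is a placeholder; advertised.listeners only needs to be set explicitly when advertised.host.name/advertised.port are also present, or when clients connect via a different address):

```
listeners=PLAINTEXT://servername:9092,SSL://servername:9093
advertised.listeners=PLAINTEXT://servername:9092,SSL://servername:9093
```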
> On Feb 1, 2016, at 1:59 PM, Nazario Parsacala <do...@gmail.com> wrote:
> [quoted messages snipped]
Re: Kafka SSL Configuration Problems
Posted by Nazario Parsacala <do...@gmail.com>.
So I made port 9092 SSL. But it seems like it is just opening it for PLAINTEXT, even though it has registered it as SSL:
[2016-02-01 13:42:20,536] INFO Registered broker 0 at path /brokers/ids/0 with addresses: SSL -> EndPoint(reactor.us.cixsoft.net,9092,SSL) (kafka.utils.ZkUtils)
An openssl test seems to indicate that this is not an SSL-enabled port:
openssl s_client -debug -connect servername:9092 -tls1
CONNECTED(00000003)
write to 0x1885950 [0x1890c23] (207 bytes => 207 (0xCF))
0000 - 16 03 01 00 ca 01 00 00-c6 03 01 06 72 23 1b e7 ............r#..
0010 - b2 9a 6f 2d 78 26 40 a0-38 db f1 1d 31 e4 f6 72 ..o-x&@.8...1..r
0020 - 0b 6e aa 6c c6 ef 29 1b-0e 2e f9 00 00 6c c0 14 .n.l..)......l..
0030 - c0 0a 00 39 00 38 00 37-00 36 00 88 00 87 00 86 ...9.8.7.6......
0040 - 00 85 c0 0f c0 05 00 35-00 84 c0 13 c0 09 00 33 .......5.......3
0050 - 00 32 00 31 00 30 00 9a-00 99 00 98 00 97 00 45 .2.1.0.........E
0060 - 00 44 00 43 00 42 c0 0e-c0 04 00 2f 00 96 00 41 .D.C.B...../...A
0070 - c0 11 c0 07 c0 0c c0 02-00 05 00 04 c0 12 c0 08 ................
0080 - 00 16 00 13 00 10 00 0d-c0 0d c0 03 00 0a 00 15 ................
0090 - 00 12 00 0f 00 0c 00 09-00 ff 01 00 00 31 00 0b .............1..
00a0 - 00 04 03 00 01 02 00 0a-00 1c 00 1a 00 17 00 19 ................
00b0 - 00 1c 00 1b 00 18 00 1a-00 16 00 0e 00 0d 00 0b ................
00c0 - 00 0c 00 09 00 0a 00 23-00 00 00 0f 00 01 01 .......#.......
read from 0x1885950 [0x188c6d3] (5 bytes => -1 (0xFFFFFFFFFFFFFFFF))
write:errno=104
---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 0 bytes and written 0 bytes
---
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
Protocol : TLSv1
Cipher : 0000
Session-ID:
Session-ID-ctx:
Master-Key:
Key-Arg : None
PSK identity: None
PSK identity hint: None
SRP username: None
Start Time: 1454352953
Timeout : 7200 (sec)
Verify return code: 0 (ok)
---
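The same check can be scripted; a small sketch (a hypothetical helper, not from the thread) that reports whether a port completes a TLS handshake at all:

```python
import socket
import ssl


def speaks_tls(host, port, timeout=3.0):
    """Return True if host:port completes a TLS handshake, False otherwise."""
    ctx = ssl.create_default_context()
    # The broker uses a self-signed cert here, so skip verification:
    # we only care whether the port speaks TLS at all.
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            with ctx.wrap_socket(sock, server_hostname=host):
                return True
    except (ssl.SSLError, OSError):
        return False
```

Running it against servername:9092 and servername:9093 shows which listener actually answers the handshake.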
> On Feb 1, 2016, at 1:39 PM, Nazario Parsacala <do...@gmail.com> wrote:
>
> Hmm. So I removed port 9092 and just used port 9093. So no PLAINTEXT, just SSL:
>
> advertised.listeners=SSL://reactor.us.cixsoft.net:9093
>
> Cleared the ZooKeeper and Kafka stores and restarted ..
>
> You can see that it is registering 9093 only:
> [2016-02-01 13:35:51,729] INFO Registered broker 0 at path /brokers/ids/0 with addresses: SSL -> EndPoint(servername,9093,SSL) (kafka.utils.ZkUtils)
>
>
> But lsof says ..
>
>
> lsof -p 7910 | grep LIST
> java 7910 bushido 67u IPv6 73382 0t0 TCP *:35878 (LISTEN)
> java 7910 bushido 92u IPv6 113423 0t0 TCP servername:9092 (LISTEN)
>
>
>> On Feb 1, 2016, at 1:02 PM, Anirudh P <panirudh2001@gmail.com> wrote:
>>
>> Hello Nazario,
>>
>> Could you try it by creating a new topic?
>>
>> Thank you,
>> Anirudh
>> That works. At least it is saying that it is registering now with the SSL
>> side.
>>
>>
>> [2016-02-01 12:29:40,184] INFO Registered broker 0 at path /brokers/ids/0
>> with addresses: PLAINTEXT -> EndPoint(servername,9092,PLAINTEXT),SSL ->
>> EndPoint(servername,9093,SSL) (kafka.utils.ZkUtils)
>>
>>
>> Thank you.
>>
>> Now to the next problem. :-) Still related to SSL.
>>
>>
>> The producer is no longer giving LEADER_NOT_AVAILABLE errors, but is now
>> having this problem instead.
>>
>> [2016-02-01 12:41:59,273] ERROR Error when sending message to topic test
>> with key: null, value: 5 bytes with error: Failed to update metadata after
>> 60000 ms. (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
>> [2016-02-01 12:42:59,274] ERROR Error when sending message to topic test
>> with key: null, value: 7 bytes with error: Failed to update metadata after
>> 60000 ms. (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
>> [2016-02-01 12:43:59,275] ERROR Error when sending message to topic test
>> with key: null, value: 0 bytes with error: Failed to update metadata after
>> 60000 ms. (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
>>
>>
>> The consumer is connecting too, but not receiving any data.
>>
>>
>>
>>
>>> On Feb 1, 2016, at 12:15 PM, Ismael Juma <ismael@juma.me.uk> wrote:
>>>
>>> Please use advertised.listeners instead of advertised.host.name. See this
>>> comment:
>>>
>>> https://github.com/apache/kafka/pull/793#issuecomment-174287124
>>>
>>> Ismael
>>>
>>> On Mon, Feb 1, 2016 at 4:44 PM, Nazario Parsacala <do...@gmail.com>
>>> wrote:
>>>
>>>> Hi,
>>>>
>>>> We were using kafka for a while now. We have been using the binary
>> release
>>>> 2.10-0.8.2.1 . But we have been needing a encrypted communication between
>>>> our publishers and subscribers. So we got 2.10-0.9.0.0. This works very
>>>> well with no SSL enabled. But currently have issues with SSL enabled.
>>>>
>>>> So configured SSL according to
>>>> http://kafka.apache.org/documentation.html#security <http://kafka.apache.org/documentation.html#security> . And only place the
>>>> following changes in the server.properties to enable SSL
>>>>
>>>> listeners=PLAINTEXT://servername:9092 <plaintext://servername:9092>, SSL://servername:9093 <ssl://servername:9093>
>>>>
>>>> # The port the socket server listens on
>>>> #port=9092
>>>>
>>>> # Hostname the broker will bind to. If not set, the server will bind to
>>>> all interfaces
>>>> host.name=servername
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> # SSL Stuff
>>>> #
>>>> ssl.client.auth=required
>>>> ssl.enabled.protocols=TLSv1.2,TLSv1.1,TLSv1
>>>> ssl.keystore.location=/pathto/certs/server.keystore.jks
>>>> ssl.keystore.password=123456
>>>> ssl.key.password=123456
>>>> ssl.truststore.location=/pathto/certs/server.truststore.jks
>>>> ssl.truststore.password=123456
>>>>
>>>>
>>>> At start up I see the following in the logs:
>>>>
>>>>
>>>> advertised.host.name = servername
>>>> metric.reporters = []
>>>> quota.producer.default = 9223372036854775807
>>>> offsets.topic.num.partitions = 50
>>>> log.flush.interval.messages = 9223372036854775807
>>>> auto.create.topics.enable = true
>>>> controller.socket.timeout.ms = 30000
>>>> log.flush.interval.ms = null
>>>> principal.builder.class = class
>>>> org.apache.kafka.common.security.auth.DefaultPrincipalBuilder
>>>> replica.socket.receive.buffer.bytes = 65536
>>>> min.insync.replicas = 1
>>>> replica.fetch.wait.max.ms = 500
>>>> num.recovery.threads.per.data.dir = 1
>>>> ssl.keystore.type = JKS
>>>> default.replication.factor = 1
>>>> ssl.truststore.password = [hidden]
>>>> log.preallocate = false
>>>> sasl.kerberos.principal.to.local.rules = [DEFAULT]
>>>> fetch.purgatory.purge.interval.requests = 1000
>>>> ssl.endpoint.identification.algorithm = null
>>>> replica.socket.timeout.ms = 30000
>>>> message.max.bytes = 1000012
>>>> num.io.threads = 8
>>>> offsets.commit.required.acks = -1
>>>> log.flush.offset.checkpoint.interval.ms = 60000
>>>> delete.topic.enable = false
>>>> quota.window.size.seconds = 1
>>>> ssl.truststore.type = JKS
>>>> offsets.commit.timeout.ms = 5000
>>>> quota.window.num = 11
>>>> zookeeper.connect = servername:2181
>>>> authorizer.class.name =
>>>> num.replica.fetchers = 1
>>>> log.retention.ms = null
>>>> log.roll.jitter.hours = 0
>>>> log.cleaner.enable = false
>>>> offsets.load.buffer.size = 5242880
>>>> log.cleaner.delete.retention.ms = 86400000
>>>> ssl.client.auth = required
>>>> controlled.shutdown.max.retries = 3
>>>> queued.max.requests = 500
>>>> offsets.topic.replication.factor = 3
>>>> log.cleaner.threads = 1
>>>> sasl.kerberos.service.name = null
>>>> sasl.kerberos.ticket.renew.jitter = 0.05
>>>> socket.request.max.bytes = 104857600
>>>> ssl.trustmanager.algorithm = PKIX
>>>> zookeeper.session.timeout.ms = 6000
>>>> log.retention.bytes = -1
>>>> sasl.kerberos.min.time.before.relogin = 60000
>>>> zookeeper.set.acl = false
>>>> connections.max.idle.ms = 600000
>>>> offsets.retention.minutes = 1440
>>>> replica.fetch.backoff.ms = 1000
>>>> inter.broker.protocol.version = 0.9.0.X
>>>> log.retention.hours = 168
>>>> num.partitions = 4
>>>> listeners = PLAINTEXT://servername:9092, SSL://servername:9093
>>>> ssl.provider = null
>>>> ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
>>>> log.roll.ms = null
>>>> log.flush.scheduler.interval.ms = 9223372036854775807
>>>> ssl.cipher.suites = null
>>>> log.index.size.max.bytes = 10485760
>>>> ssl.keymanager.algorithm = SunX509
>>>> security.inter.broker.protocol = PLAINTEXT
>>>> replica.fetch.max.bytes = 1048576
>>>> advertised.port = null
>>>> log.cleaner.dedupe.buffer.size = 524288000
>>>> replica.high.watermark.checkpoint.interval.ms = 5000
>>>> log.cleaner.io.buffer.size = 524288
>>>> sasl.kerberos.ticket.renew.window.factor = 0.8
>>>> zookeeper.connection.timeout.ms = 6000
>>>> controlled.shutdown.retry.backoff.ms = 5000
>>>> log.roll.hours = 168
>>>> log.cleanup.policy = delete
>>>> host.name = servername
>>>> log.roll.jitter.ms = null
>>>> max.connections.per.ip = 2147483647
>>>> offsets.topic.segment.bytes = 104857600
>>>> background.threads = 10
>>>> quota.consumer.default = 9223372036854775807
>>>> request.timeout.ms = 30000
>>>> log.index.interval.bytes = 4096
>>>> log.dir = /tmp/kafka-logs
>>>> log.segment.bytes = 1073741824
>>>> log.cleaner.backoff.ms = 15000
>>>> offset.metadata.max.bytes = 4096
>>>> ssl.truststore.location = /pathto/certs/server.truststore.jks
>>>> group.max.session.timeout.ms = 30000
>>>> ssl.keystore.password = [hidden]
>>>> zookeeper.sync.time.ms = 2000
>>>> port = 9092
>>>> log.retention.minutes = null
>>>> log.segment.delete.delay.ms = 60000
>>>> log.dirs = /pathto/logs/kafka
>>>> controlled.shutdown.enable = true
>>>> compression.type = producer
>>>> max.connections.per.ip.overrides =
>>>> sasl.kerberos.kinit.cmd = /usr/bin/kinit
>>>> log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
>>>> auto.leader.rebalance.enable = true
>>>> leader.imbalance.check.interval.seconds = 300
>>>> log.cleaner.min.cleanable.ratio = 0.5
>>>> replica.lag.time.max.ms = 10000
>>>> num.network.threads = 3
>>>> ssl.key.password = [hidden]
>>>> reserved.broker.max.id = 1000
>>>> metrics.num.samples = 2
>>>> socket.send.buffer.bytes = 102400
>>>> ssl.protocol = TLS
>>>> socket.receive.buffer.bytes = 102400
>>>> ssl.keystore.location = /pathto/certs/server.keystore.jks
>>>> replica.fetch.min.bytes = 1
>>>> unclean.leader.election.enable = true
>>>> group.min.session.timeout.ms = 6000
>>>> log.cleaner.io.buffer.load.factor = 0.9
>>>> offsets.retention.check.interval.ms = 600000
>>>> producer.purgatory.purge.interval.requests = 1000
>>>>
>>>>
>>>>
>>>> So as you can see, the listeners are supposedly set up as
>>>>
>>>> listeners = PLAINTEXT://servername:9092, SSL://servername:9093
>>>>
>>>> in the logs, which reflects what was set up in server.properties.
>>>>
>>>> However, further down in the logs only PLAINTEXT is being
>>>> registered ..
>>>>
>>>> [2016-02-01 11:27:49,712] INFO Registered broker 0 at path /brokers/ids/0
>>>> with addresses: PLAINTEXT -> EndPoint(servername,9092,PLAINTEXT)
>>>> (kafka.utils.ZkUtils)
>>>>
>>>>
>>>> not port 9093, nor SSL.
>>>>
>>>> I have done multiple permutations of this config, including clearing the
>>>> entire Kafka and ZooKeeper data. Still no luck. I even forced SSL
>>>> on port 9092, with the same issue. The resulting effect is that the
>>>> producer and consumer are giving me errors like:
>>>>
>>>> [2016-02-01 10:58:41,001] WARN Error while fetching metadata with
>>>> correlation id 57 : {test=LEADER_NOT_AVAILABLE}
>>>> (org.apache.kafka.clients.NetworkClient)
>>>> [2016-02-01 10:58:41,103] WARN Error while fetching metadata with
>>>> correlation id 58 : {test=LEADER_NOT_AVAILABLE}
>>>> (org.apache.kafka.clients.NetworkClient)
>>>> [2016-02-01 10:58:41,205] WARN Error while fetching metadata with
>>>> correlation id 59 : {test=LEADER_NOT_AVAILABLE}
>>>> (org.apache.kafka.clients.NetworkClient)
>>>>
>>>>
>>>> Any help is appreciated.
>>>>
>>>>
>
Re: Kafka SSL Configuration Problems
Posted by Nazario Parsacala <do...@gmail.com>.
Hmm. So I removed port 9092 and just used port 9093. So no PLAINTEXT, just SSL:
advertised.listeners=SSL://reactor.us.cixsoft.net:9093
Cleared the ZooKeeper and Kafka stores and restarted.
You can see that it is registering 9093 only:
[2016-02-01 13:35:51,729] INFO Registered broker 0 at path /brokers/ids/0 with addresses: SSL -> EndPoint(servername,9093,SSL) (kafka.utils.ZkUtils)
But lsof says ..
lsof -p 7910 | grep LIST
java 7910 bushido 67u IPv6 73382 0t0 TCP *:35878 (LISTEN)
java 7910 bushido 92u IPv6 113423 0t0 TCP servername:9092 (LISTEN)
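Worth noting here: in 0.9.0.x the ports a broker actually binds come from listeners, while advertised.listeners only changes what gets registered in ZooKeeper (it defaults to the listeners value). A minimal sketch of the distinction, with servername as a placeholder:

```properties
# Controls the sockets the broker opens -- this is what lsof will show
listeners=SSL://servername:9093

# Controls only the endpoints published to ZooKeeper for clients to use;
# if unset, it falls back to the value of listeners
advertised.listeners=SSL://servername:9093
```

So if listeners still contains a PLAINTEXT entry on 9092, the broker will keep binding 9092 regardless of what it advertises.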
> On Feb 1, 2016, at 1:02 PM, Anirudh P <pa...@gmail.com> wrote:
>
> Hello Nazario,
>
> Could you try it by creating a new topic?
>
> Thank you,
> Anirudh
> That works. At least it is saying that it is registering now with the SSL
> side.
>
>
> [2016-02-01 12:29:40,184] INFO Registered broker 0 at path /brokers/ids/0
> with addresses: PLAINTEXT -> EndPoint(servername,9092,PLAINTEXT),SSL ->
> EndPoint(servername,9093,SSL) (kafka.utils.ZkUtils)
>
>
> Thank you.
>
> Now to the next problem. :-) Still related to SSL.
>
>
> The producer is not giving any more LEADER_NOT_AVAILABLE errors, but is now
> having this problem instead.
>
> [2016-02-01 12:41:59,273] ERROR Error when sending message to topic test
> with key: null, value: 5 bytes with error: Failed to update metadata after
> 60000 ms. (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
> [2016-02-01 12:42:59,274] ERROR Error when sending message to topic test
> with key: null, value: 7 bytes with error: Failed to update metadata after
> 60000 ms. (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
> [2016-02-01 12:43:59,275] ERROR Error when sending message to topic test
> with key: null, value: 0 bytes with error: Failed to update metadata after
> 60000 ms. (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
>
>
> Consumer is connecting too but not receiving any data
>
>
>
>
>> On Feb 1, 2016, at 12:15 PM, Ismael Juma <is...@juma.me.uk> wrote:
>>
>> Please use advertised.listeners instead of advertised.host.name. See this
>> comment:
>>
>> https://github.com/apache/kafka/pull/793#issuecomment-174287124
>>
>> Ismael
>>
>> On Mon, Feb 1, 2016 at 4:44 PM, Nazario Parsacala <do...@gmail.com>
>> wrote:
>>
Re: Kafka SSL Configuration Problems
Posted by Nazario Parsacala <do...@gmail.com>.
OK, this is getting interesting. On the broker side, it says it is registering 9092 as PLAINTEXT and 9093 as SSL:
[2016-02-01 13:26:33,796] INFO Registered broker 0 at path /brokers/ids/0 with addresses: PLAINTEXT -> EndPoint(servername,9092,PLAINTEXT),SSL -> EndPoint(servername,9093,SSL) (kafka.utils.ZkUtils)
But if you check the ports opened by the broker, you only see port 9092:
lsof -p 7675 | grep LIST
java 7675 bushido 67u IPv6 110567 0t0 TCP *:45688 (LISTEN)
java 7675 bushido 96u IPv6 113359 0t0 TCP servername:9092 (LISTEN)
Why ..?
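One way to cross-check this, sketched below on the assumption of broker id 0 and ZooKeeper on localhost (both commands need the live cluster, so treat this as illustrative only): the ZooKeeper registration can be read back directly, while lsof shows the sockets actually bound. The two can differ, because registration follows advertised.listeners and binding follows listeners.

```
# Endpoints the broker registered (what clients will be told to connect to)
bin/zookeeper-shell.sh localhost:2181 get /brokers/ids/0

# Sockets the broker actually opened (what the listeners setting produced);
# <broker-pid> is a placeholder for the broker's process id
lsof -a -iTCP -sTCP:LISTEN -p <broker-pid>
```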
> On Feb 1, 2016, at 1:16 PM, Nazario Parsacala <do...@gmail.com> wrote:
>
> No juice.
>
> /kafka-topics.sh --describe --topic anotherone --zookeeper localhost:2181
> Topic:anotherone PartitionCount:4 ReplicationFactor:1 Configs:
> Topic: anotherone Partition: 0 Leader: 0 Replicas: 0 Isr: 0
> Topic: anotherone Partition: 1 Leader: 0 Replicas: 0 Isr: 0
> Topic: anotherone Partition: 2 Leader: 0 Replicas: 0 Isr: 0
> Topic: anotherone Partition: 3 Leader: 0 Replicas: 0 Isr: 0
>
> Same error.
>
> bin/kafka-console-producer.sh --broker-list servername:9093 --topic anotherone --producer.config config/client-ssl.properties
> [2016-02-01 13:09:45,205] ERROR Error when sending message to topic anotherone with key: null, value: 0 bytes with error: Failed to update metadata after 60000 ms. (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
> [2016-02-01 13:10:45,206] ERROR Error when sending message to topic anotherone with key: null, value: 0 bytes with error: Failed to update metadata after 60000 ms. (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
>
>
> I have read somewhere that you need to configure meta.broker.list? Is this true? Anyway, I tried setting that too with no luck.
>
>
>
>
Re: Kafka SSL Configuration Problems
Posted by Nazario Parsacala <do...@gmail.com>.
No juice.
/kafka-topics.sh --describe --topic anotherone --zookeeper localhost:2181
Topic:anotherone PartitionCount:4 ReplicationFactor:1 Configs:
Topic: anotherone Partition: 0 Leader: 0 Replicas: 0 Isr: 0
Topic: anotherone Partition: 1 Leader: 0 Replicas: 0 Isr: 0
Topic: anotherone Partition: 2 Leader: 0 Replicas: 0 Isr: 0
Topic: anotherone Partition: 3 Leader: 0 Replicas: 0 Isr: 0
Same error.
bin/kafka-console-producer.sh --broker-list servername:9093 --topic anotherone --producer.config config/client-ssl.properties
[2016-02-01 13:09:45,205] ERROR Error when sending message to topic anotherone with key: null, value: 0 bytes with error: Failed to update metadata after 60000 ms. (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
[2016-02-01 13:10:45,206] ERROR Error when sending message to topic anotherone with key: null, value: 0 bytes with error: Failed to update metadata after 60000 ms. (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
I have read somewhere that you need to configure meta.broker.list? Is this true? Anyway, I tried setting that too with no luck.
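For what it's worth, a client-ssl.properties for the 0.9 clients usually looks roughly like the following (paths and passwords are placeholders; security.protocol=SSL is the critical line, and the keystore entries matter here because the broker was started with ssl.client.auth=required):

```properties
security.protocol=SSL

# Trust the broker's certificate
ssl.truststore.location=/pathto/certs/client.truststore.jks
ssl.truststore.password=123456

# Client certificate, needed because the broker sets ssl.client.auth=required
ssl.keystore.location=/pathto/certs/client.keystore.jks
ssl.keystore.password=123456
ssl.key.password=123456
```

Without security.protocol=SSL, the client speaks plaintext to the SSL port, the metadata request never completes, and you get exactly the "Failed to update metadata" timeout shown above.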
> On Feb 1, 2016, at 1:02 PM, Anirudh P <pa...@gmail.com> wrote:
>
> Hello Nazario,
>
> Could you try it by creating a new topic?
>
> Thank you,
> Anirudh
> That works. At least it is saying that it is registering now with the SSL
> side.
>
>
> [2016-02-01 12:29:40,184] INFO Registered broker 0 at path /brokers/ids/0
> with addresses: PLAINTEXT -> EndPoint(servername,9092,PLAINTEXT),SSL ->
> EndPoint(servername,9093,SSL) (kafka.utils.ZkUtils)
>
>
> Thank you.
>
> Now to the next problem. :-) Still related to SSL.
>
>
> The producer is not giving any more LEADER_NOT_AVAILABLE errors. but is now
> having this problem instead.
>
> [2016-02-01 12:41:59,273] ERROR Error when sending message to topic test
> with key: null, value: 5 bytes with error: Failed to update metadata after
> 60000 ms. (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
> [2016-02-01 12:42:59,274] ERROR Error when sending message to topic test
> with key: null, value: 7 bytes with error: Failed to update metadata after
> 60000 ms. (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
> [2016-02-01 12:43:59,275] ERROR Error when sending message to topic test
> with key: null, value: 0 bytes with error: Failed to update metadata after
> 60000 ms. (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
>
>
> Consumer is connecting too but not receiving any data
>
>
>
>
>> On Feb 1, 2016, at 12:15 PM, Ismael Juma <is...@juma.me.uk> wrote:
>>
>> Please use advertised.listeners instead of advertised.host.name. See this
>> comment:
>>
>> https://github.com/apache/kafka/pull/793#issuecomment-174287124
>>
>> Ismael
>>
>> On Mon, Feb 1, 2016 at 4:44 PM, Nazario Parsacala <do...@gmail.com>
>> wrote:
>>
>>> Hi,
>>>
>>> We were using kafka for a while now. We have been using the binary
> release
>>> 2.10-0.8.2.1 . But we have been needing a encrypted communication between
>>> our publishers and subscribers. So we got 2.10-0.9.0.0. This works very
>>> well with no SSL enabled. But currently have issues with SSL enabled.
>>>
>>> So configured SSL according to
>>> http://kafka.apache.org/documentation.html#security . And only place the
>>> following changes in the server.properties to enable SSL
>>>
>>> listeners=PLAINTEXT://servername:9092, SSL://servername:9093
>>>
>>> # The port the socket server listens on
>>> #port=9092
>>>
>>> # Hostname the broker will bind to. If not set, the server will bind to
>>> all interfaces
>>> host.name=servername
>>>
>>>
>>>
>>>
>>>
>>> # SSL Stuff
>>> #
>>> ssl.client.auth=required
>>> ssl.enabled.protocols=TLSv1.2,TLSv1.1,TLSv1
>>> ssl.keystore.location=/pathto/certs/server.keystore.jks
>>> ssl.keystore.password=123456
>>> ssl.key.password=123456
>>> ssl.truststore.location=/pathto/certs/server.truststore.jks
>>> ssl.truststore.password=123456
>>>
>>>
>>> At start up I see the following in the logs:
>>>
>>>
>>> advertised.host.name = servername
>>> metric.reporters = []
>>> quota.producer.default = 9223372036854775807
>>> offsets.topic.num.partitions = 50
>>> log.flush.interval.messages = 9223372036854775807
>>> auto.create.topics.enable = true
>>> controller.socket.timeout.ms = 30000
>>> log.flush.interval.ms = null
>>> principal.builder.class = class
>>> org.apache.kafka.common.security.auth.DefaultPrincipalBuilder
>>> replica.socket.receive.buffer.bytes = 65536
>>> min.insync.replicas = 1
>>> replica.fetch.wait.max.ms = 500
>>> num.recovery.threads.per.data.dir = 1
>>> ssl.keystore.type = JKS
>>> default.replication.factor = 1
>>> ssl.truststore.password = [hidden]
>>> log.preallocate = false
>>> sasl.kerberos.principal.to.local.rules = [DEFAULT]
>>> fetch.purgatory.purge.interval.requests = 1000
>>> ssl.endpoint.identification.algorithm = null
>>> replica.socket.timeout.ms = 30000
>>> message.max.bytes = 1000012
>>> num.io.threads = 8
>>> offsets.commit.required.acks = -1
>>> log.flush.offset.checkpoint.interval.ms = 60000
>>> delete.topic.enable = false
>>> quota.window.size.seconds = 1
>>> ssl.truststore.type = JKS
>>> offsets.commit.timeout.ms = 5000
>>> quota.window.num = 11
>>> zookeeper.connect = servername:2181
>>> authorizer.class.name =
>>> num.replica.fetchers = 1
>>> log.retention.ms = null
>>> log.roll.jitter.hours = 0
>>> log.cleaner.enable = false
>>> offsets.load.buffer.size = 5242880
>>> log.cleaner.delete.retention.ms = 86400000
>>> ssl.client.auth = required
>>> controlled.shutdown.max.retries = 3
>>> queued.max.requests = 500
>>> offsets.topic.replication.factor = 3
>>> log.cleaner.threads = 1
>>> sasl.kerberos.service.name = null
>>> sasl.kerberos.ticket.renew.jitter = 0.05
>>> socket.request.max.bytes = 104857600
>>> ssl.trustmanager.algorithm = PKIX
>>> zookeeper.session.timeout.ms = 6000
>>> log.retention.bytes = -1
>>> sasl.kerberos.min.time.before.relogin = 60000
>>> zookeeper.set.acl = false
>>> connections.max.idle.ms = 600000
>>> offsets.retention.minutes = 1440
>>> replica.fetch.backoff.ms = 1000
>>> inter.broker.protocol.version = 0.9.0.X
>>> log.retention.hours = 168
>>> num.partitions = 4
>>> listeners = PLAINTEXT://servername:9092, SSL://servername:9093
>>> ssl.provider = null
>>> ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
>>> log.roll.ms = null
>>> log.flush.scheduler.interval.ms = 9223372036854775807
>>> ssl.cipher.suites = null
>>> log.index.size.max.bytes = 10485760
>>> ssl.keymanager.algorithm = SunX509
>>> security.inter.broker.protocol = PLAINTEXT
>>> replica.fetch.max.bytes = 1048576
>>> advertised.port = null
>>> log.cleaner.dedupe.buffer.size = 524288000
>>> replica.high.watermark.checkpoint.interval.ms = 5000
>>> log.cleaner.io.buffer.size = 524288
>>> sasl.kerberos.ticket.renew.window.factor = 0.8
>>> zookeeper.connection.timeout.ms = 6000
>>> controlled.shutdown.retry.backoff.ms = 5000
>>> log.roll.hours = 168
>>> log.cleanup.policy = delete
>>> host.name = servername
>>> log.roll.jitter.ms = null
>>> max.connections.per.ip = 2147483647
>>> offsets.topic.segment.bytes = 104857600
>>> background.threads = 10
>>> quota.consumer.default = 9223372036854775807
>>> request.timeout.ms = 30000
>>> log.index.interval.bytes = 4096
>>> log.dir = /tmp/kafka-logs
>>> log.segment.bytes = 1073741824
>>> log.cleaner.backoff.ms = 15000
>>> offset.metadata.max.bytes = 4096
>>> ssl.truststore.location = /pathto/certs/server.truststore.jks
>>> group.max.session.timeout.ms = 30000
>>> ssl.keystore.password = [hidden]
>>> zookeeper.sync.time.ms = 2000
>>> port = 9092
>>> log.retention.minutes = null
>>> log.segment.delete.delay.ms = 60000
>>> log.dirs = /pathto/logs/kafka
>>> controlled.shutdown.enable = true
>>> compression.type = producer
>>> max.connections.per.ip.overrides =
>>> sasl.kerberos.kinit.cmd = /usr/bin/kinit
>>> log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
>>> auto.leader.rebalance.enable = true
>>> leader.imbalance.check.interval.seconds = 300
>>> log.cleaner.min.cleanable.ratio = 0.5
>>> replica.lag.time.max.ms = 10000
>>> num.network.threads = 3
>>> ssl.key.password = [hidden]
>>> reserved.broker.max.id = 1000
>>> metrics.num.samples = 2
>>> socket.send.buffer.bytes = 102400
>>> ssl.protocol = TLS
>>> socket.receive.buffer.bytes = 102400
>>> ssl.keystore.location = /pathto/certs/server.keystore.jks
>>> replica.fetch.min.bytes = 1
>>> unclean.leader.election.enable = true
>>> group.min.session.timeout.ms = 6000
>>> log.cleaner.io.buffer.load.factor = 0.9
>>> offsets.retention.check.interval.ms = 600000
>>> producer.purgatory.purge.interval.requests = 1000
>>>
>>>
>>>
>>> So as you can see, the listeners are supposedly set up as
>>>
>>> listeners = PLAINTEXT://servername:9092, SSL://servername:9093
>>>
>>> in the logs, which reflects what was set in server.properties.
>>>
>>> However, further down the logs, only PLAINTEXT is being
>>> registered:
>>>
>>> [2016-02-01 11:27:49,712] INFO Registered broker 0 at path /brokers/ids/0
>>> with addresses: PLAINTEXT -> EndPoint(servername,9092,PLAINTEXT)
>>> (kafka.utils.ZkUtils)
>>>
>>>
>>> neither port 9093 nor SSL.
>>>
>>> I have tried multiple permutations of this config, including clearing
>>> all of the Kafka and ZooKeeper data. Still no luck. I even forced SSL
>>> onto port 9092, with the same result. The effect of this is that the
>>> producer and consumer are giving me errors like:
>>>
>>> [2016-02-01 10:58:41,001] WARN Error while fetching metadata with
>>> correlation id 57 : {test=LEADER_NOT_AVAILABLE}
>>> (org.apache.kafka.clients.NetworkClient)
>>> [2016-02-01 10:58:41,103] WARN Error while fetching metadata with
>>> correlation id 58 : {test=LEADER_NOT_AVAILABLE}
>>> (org.apache.kafka.clients.NetworkClient)
>>> [2016-02-01 10:58:41,205] WARN Error while fetching metadata with
>>> correlation id 59 : {test=LEADER_NOT_AVAILABLE}
>>> (org.apache.kafka.clients.NetworkClient)
>>>
>>>
>>> Any help is appreciated.
>>>
>>>
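A quick way to verify whether the SSL listener is actually accepting connections, independent of any Kafka client, is a TLS handshake against port 9093 (hostname as in the post above):

```shell
# Attempt a TLS handshake against the broker's SSL listener;
# a successful handshake prints the server's certificate chain
openssl s_client -connect servername:9093 </dev/null
```

If the handshake fails outright, the broker never bound the SSL endpoint; if it succeeds but Kafka clients still fail, the problem is more likely in the client-side SSL configuration.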
Fwd: Re: Kafka SSL Configuration Problems
Posted by Anirudh P <pa...@gmail.com>.
Hello Nazario,
Could you try it by creating a new topic?
Thank you,
Anirudh
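For example, with the topic tool that ships with 0.9.0.0 (topic name is illustrative; this assumes ZooKeeper is running at servername:2181 as in the posted config):

```shell
# Create a fresh topic so leadership is elected cleanly
# under the new listener configuration
bin/kafka-topics.sh --create \
  --zookeeper servername:2181 \
  --replication-factor 1 --partitions 1 \
  --topic ssl-test
```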
Re: Kafka SSL Configuration Problems
Posted by Nazario Parsacala <do...@gmail.com>.
That works. At least it now says it is registering the SSL endpoint:
[2016-02-01 12:29:40,184] INFO Registered broker 0 at path /brokers/ids/0 with addresses: PLAINTEXT -> EndPoint(servername,9092,PLAINTEXT),SSL -> EndPoint(servername,9093,SSL) (kafka.utils.ZkUtils)
Thank you.
Now to the next problem. :-) Still related to SSL.
The producer is no longer giving LEADER_NOT_AVAILABLE errors, but is now hitting this problem instead:
[2016-02-01 12:41:59,273] ERROR Error when sending message to topic test with key: null, value: 5 bytes with error: Failed to update metadata after 60000 ms. (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
[2016-02-01 12:42:59,274] ERROR Error when sending message to topic test with key: null, value: 7 bytes with error: Failed to update metadata after 60000 ms. (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
[2016-02-01 12:43:59,275] ERROR Error when sending message to topic test with key: null, value: 0 bytes with error: Failed to update metadata after 60000 ms. (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
The consumer connects too, but receives no data.
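One thing worth checking at this point: because the broker sets ssl.client.auth=required, the producer and consumer must themselves be configured for SSL and present a certificate the broker trusts. A minimal client config sketch (file name, paths, and passwords are illustrative, mirroring the broker setup in the original post):

```properties
# client-ssl.properties -- hypothetical client-side SSL settings
security.protocol=SSL
ssl.truststore.location=/pathto/certs/client.truststore.jks
ssl.truststore.password=123456
# Keystore entries are needed because the broker
# requires client authentication (ssl.client.auth=required)
ssl.keystore.location=/pathto/certs/client.keystore.jks
ssl.keystore.password=123456
ssl.key.password=123456
```

Such a file would be passed to the console tools via --producer.config / --consumer.config, pointing them at the SSL port (9093) rather than the PLAINTEXT port.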
> On Feb 1, 2016, at 12:15 PM, Ismael Juma <is...@juma.me.uk> wrote:
>
> Please use advertised.listeners instead of advertised.host.name. See this
> comment:
>
> https://github.com/apache/kafka/pull/793#issuecomment-174287124
>
> Ismael
>
Re: Kafka SSL Configuration Problems
Posted by Ismael Juma <is...@juma.me.uk>.
Please use advertised.listeners instead of advertised.host.name. See this
comment:
https://github.com/apache/kafka/pull/793#issuecomment-174287124
Ismael
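For reference, a sketch of what that change might look like in server.properties, using the hostnames from the original post (values are illustrative):

```properties
# Bind both listeners
listeners=PLAINTEXT://servername:9092,SSL://servername:9093
# Advertise both endpoints to clients via the ZooKeeper registration
advertised.listeners=PLAINTEXT://servername:9092,SSL://servername:9093
# host.name and advertised.host.name should be left unset;
# advertised.listeners supersedes them
```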