Posted to users@kafka.apache.org by karan alang <ka...@gmail.com> on 2017/07/25 19:19:41 UTC

Kafka with SSL enabled - Not able to publish messages (org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms)

hi - I've enabled SSL for Kafka & I'm trying to publish messages using the
console producer.

The error is shown below - any ideas?

/usr/hdp/2.5.3.0-37/kafka/bin/kafka-console-producer.sh --broker-list \
  nwk2-bdp-kafka-05.gdcs-qa.apple.com:6668,nwk2-bdp-kafka-04.gdcs-qa.apple.com:6668,nwk2-bdp-kafka-06.gdcs-qa.apple.com:6668 \
  --topic sslTopic1 --producer.config /tmp/ssl-kafka/client-ssl.properties \
  --security-protocol SSL

hi

[2017-07-25 19:10:54,750] ERROR Error when sending message to topic sslTopic1
with key: null, value: 2 bytes with error:
(org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.

client-ssl.properties


security.protocol=SSL
ssl.truststore.location=/tmp/ssl-kafka/client.truststore.jks
ssl.truststore.password=changeit
ssl.keystore.location=/tmp/ssl-kafka/client.keystore.jks
ssl.keystore.password=changeit
ssl.key.password=changeit
ssl.enabled.protocols=TLSv1.2,TLSv1.1,TLSv1
ssl.keystore.type=JKS
ssl.truststore.type=JKS

Attaching the server.properties
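
For reference - the client keystore & truststore can be sanity-checked with
keytool, using the paths & passwords above:

# should list one PrivateKeyEntry (the client certificate + key)
keytool -list -v -keystore /tmp/ssl-kafka/client.keystore.jks -storepass changeit

# should list the CA / broker certificates as trustedCertEntry
keytool -list -v -keystore /tmp/ssl-kafka/client.truststore.jks -storepass changeit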

Re: Kafka with SSL enabled - Not able to publish messages (org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms)

Posted by Manikumar <ma...@gmail.com>.
It looks like an SSL configuration issue - the brokers are not able to
authenticate with each other.
Hope you followed the instructions given at
http://kafka.apache.org/documentation/#security_configbroker
You can enable SSL debug logs by using the JVM flag -Djavax.net.debug=all.
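
For example, with the console producer the flag can be passed via KAFKA_OPTS
(assuming the stock kafka-run-class.sh wrapper, which picks up KAFKA_OPTS; the
same flag can be added to the broker JVM options to debug the inter-broker side):

export KAFKA_OPTS="-Djavax.net.debug=all"
/usr/hdp/2.5.3.0-37/kafka/bin/kafka-console-producer.sh --broker-list \
  nwk2-bdp-kafka-04.gdcs-qa.apple.com:6668 --topic sslTopic1 \
  --producer.config /tmp/ssl-kafka/client-ssl.properties --security-protocol SSL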

On Wed, Jul 26, 2017 at 11:38 AM, karan alang <ka...@gmail.com> wrote:


Re: Kafka with SSL enabled - Not able to publish messages (org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms)

Posted by karan alang <ka...@gmail.com>.
Hello, here is the update on this...

With security.inter.broker.protocol = PLAINTEXT, *I'm able to start the
console producer & consumer and publish & read messages*.

*However, when I set security.inter.broker.protocol = SSL, the errors start
(with the producer in both PLAINTEXT & SSL modes).*
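
For context, security.inter.broker.protocol sits in server.properties next to
the SSL listener settings - roughly along these lines (a sketch with
placeholder paths, not the actual attached file):

listeners=PLAINTEXT://nwk2-bdp-kafka-04.gdcs-qa.apple.com:6667,SSL://nwk2-bdp-kafka-04.gdcs-qa.apple.com:6668
security.inter.broker.protocol=SSL
ssl.keystore.location=/path/to/server.keystore.jks
ssl.keystore.password=changeit
ssl.key.password=changeit
ssl.truststore.location=/path/to/server.truststore.jks
ssl.truststore.password=changeit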

When I start the console producer in PLAINTEXT mode:

/usr/hdp/2.5.3.0-37/kafka/bin/kafka-console-producer.sh --broker-list
nwk2-bdp-kafka-04.gdcs-qa.ale.com:6667 --topic sslTopic3
--security-protocol PLAINTEXT
[2017-07-26 05:53:26,172] WARN Error while fetching metadata with
correlation id 17 : {sslTopic3=LEADER_NOT_AVAILABLE}
(org.apache.kafka.clients.NetworkClient)
[2017-07-26 05:53:26,277] WARN Error while fetching metadata with
correlation id 18 : {sslTopic3=LEADER_NOT_AVAILABLE}
(org.apache.kafka.clients.NetworkClient)
[2017-07-26 05:53:26,388] WARN Error while fetching metadata with
correlation id 19 : {sslTopic3=LEADER_NOT_AVAILABLE}
(org.apache.kafka.clients.NetworkClient)

When I start the console producer in SSL mode:

/usr/hdp/2.5.3.0-37/kafka/bin/kafka-console-producer.sh --broker-list
nwk2-bdp-kafka-04.gdcs-qa.ale.com:6668 --topic sslTopic3
--producer.config /tmp/ssl-kafka/client-ssl.properties
--security-protocol SSL
hi
HELLO
[2017-07-26 05:59:31,888] ERROR Error when sending message to topic
sslTopic3 with key: null, value: 2 bytes with error:
(org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Failed to update
metadata after 60000 ms.

Error in the controller.log file:

[2017-07-26 05:58:49,535] WARN
[Controller-1001-to-broker-1001-send-thread], Controller 1001's
connection to broker nwk2-bdp-kafka-04.gdcs-qa.apple.com:6668 (id:
1001 rack: null) was unsuccessful (kafka.controller.RequestSendThread)
java.io.IOException: Connection to
nwk2-bdp-kafka-04.gdcs-qa.apple.com:6668 (id: 1001 rack: null) failed
at kafka.utils.NetworkClientBlockingOps$$anonfun$blockingReady$extension$2.apply(NetworkClientBlockingOps.scala:63)
at kafka.utils.NetworkClientBlockingOps$$anonfun$blockingReady$extension$2.apply(NetworkClientBlockingOps.scala:59)
at kafka.utils.NetworkClientBlockingOps$.recursivePoll$1(NetworkClientBlockingOps.scala:112)
at kafka.utils.NetworkClientBlockingOps$.kafka$utils$NetworkClientBlockingOps$$pollUntil$extension(NetworkClientBlockingOps.scala:120)
at kafka.utils.NetworkClientBlockingOps$.blockingReady$extension(NetworkClientBlockingOps.scala:59)
at kafka.controller.RequestSendThread.brokerReady(ControllerChannelManager.scala:233)
at kafka.controller.RequestSendThread.liftedTree1$1(ControllerChannelManager.scala:182)
at kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:181)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:63)
[2017-07-26 05:58:49,853] WARN
[Controller-1001-to-broker-1001-send-thread], Controller 1001's
connection to broker nwk2-bdp-kafka-04.gdcs-qa.apple.com:6668 (id:
1001 rack: null) was unsuccessful (kafka.controller.RequestSendThread)
java.io.IOException: Connection to
nwk2-bdp-kafka-04.gdcs-qa.apple.com:6668 (id: 1001 rack: null) failed
at kafka.utils.NetworkClientBlockingOps$$anonfun$blockingReady$extension$2.apply(NetworkClientBlockingOps.scala:63)
at kafka.utils.NetworkClientBlockingOps$$anonfun$blockingReady$extension$2.apply(NetworkClientBlockingOps.scala:59)
at kafka.utils.NetworkClientBlockingOps$.recursivePoll$1(NetworkClientBlockingOps.scala:112)
at kafka.utils.NetworkClientBlockingOps$.kafka$utils$NetworkClientBlockingOps$$pollUntil$extension(NetworkClientBlockingOps.scala:120)
at kafka.utils.NetworkClientBlockingOps$.blockingReady$extension(NetworkClientBlockingOps.scala:59)
at kafka.controller.RequestSendThread.brokerReady(ControllerChannelManager.scala:233)
at kafka.controller.RequestSendThread.liftedTree1$1(ControllerChannelManager.scala:182)
at kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:181)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:63)

When I describe the topic, I see that the leader is 1001 & the ISR has only 1001:

/usr/hdp/2.5.3.0-37/kafka/bin/kafka-topics.sh --describe --zookeeper
nwk2-bdp-kafka-05.gdcs-qa.apple.com:2181,nwk2-bdp-kafka-04.gdcs-qa.apple.com:2181,nwk2-bdp-kafka-06.gdcs-qa.apple.com:2181
--topic sslTopic3
Topic: sslTopic3  PartitionCount: 3  ReplicationFactor: 3  Configs:
Topic: sslTopic3 Partition: 0 Leader: 1001 Replicas: 1003,1001,1002 Isr: 1001
Topic: sslTopic3 Partition: 1 Leader: 1001 Replicas: 1001,1002,1003 Isr: 1001
Topic: sslTopic3 Partition: 2 Leader: 1001 Replicas: 1002,1003,1001 Isr: 1001

It seems that setting security.inter.broker.protocol = SSL causes
connectivity issues between the controller (in this case 1001) & the
brokers (1001, 1002, 1003).
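
A quick way to test the TLS handshake against that port directly, assuming
openssl is available on the host:

# a good handshake prints the broker's certificate chain & a "Verify return code";
# a handshake_failure here would point at the broker-side keystore/truststore setup
openssl s_client -connect nwk2-bdp-kafka-04.gdcs-qa.apple.com:6668 -tls1_2 </dev/null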

The question is: why, & what needs to be done to fix this?

On Tue, Jul 25, 2017 at 10:31 PM, Manikumar <ma...@gmail.com>
wrote:


Re: Kafka with SSL enabled - Not able to publish messages (org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms)

Posted by Manikumar <ma...@gmail.com>.
Enable debug logs to find out the actual error.
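
For the console tools that usually means the log4j config - a minimal sketch,
assuming the stock scripts (which read config/tools-log4j.properties unless
KAFKA_LOG4J_OPTS points elsewhere):

# in <kafka-home>/config/tools-log4j.properties - raise the root logger from WARN
log4j.rootLogger=DEBUG, stderr

# or point the tools at a custom log4j file (this path is just an example)
export KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:/tmp/ssl-kafka/tools-log4j-debug.properties"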
