Posted to users@kafka.apache.org by prabhu v <pr...@gmail.com> on 2015/12/28 06:48:05 UTC

Consumer - Failed to find leader

Hi Experts,

I am getting the below error when running the consumer
"kafka-console-consumer.sh" .

I am using the new version 0.9.0.1.
Topic name: test


[2015-12-28 06:13:34,409] WARN
[console-consumer-61657_localhost-1451283204993-5512891d-leader-finder-thread],
Failed to find leader for Set([test,0])
(kafka.consumer.ConsumerFetcherManager$LeaderFinderThread)
kafka.common.BrokerEndPointNotAvailableException: End point PLAINTEXT not
found for broker 0
        at kafka.cluster.Broker.getBrokerEndPoint(Broker.scala:136)


Please find the current configuration below.

Configuration:


[root@localhost config]# grep -v "^#" consumer.properties
zookeeper.connect=localhost:2181
zookeeper.connection.timeout.ms=60000
group.id=test-consumer-group
security.protocol=SASL_PLAINTEXT
sasl.kerberos.service.name="kafka"
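
(As Ismael notes further down the thread, on 0.9 the console consumer must run with the new consumer to use SASL; the old consumer path looks up a PLAINTEXT endpoint, which these brokers do not expose. A sketch of an equivalent new-consumer setup — hostnames and file names are placeholders, and note the service name is given without quotes, since values in a .properties file are taken literally:

bootstrap.servers=localhost:9094
group.id=test-consumer-group
security.protocol=SASL_PLAINTEXT
sasl.kerberos.service.name=kafka

./kafka-console-consumer.sh --new-consumer --bootstrap-server localhost:9094 \
    --topic test --consumer.config consumer.properties
)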


[root@localhost config]# grep -v "^#" producer.properties
metadata.broker.list=localhost:9094,localhost:9095
producer.type=sync
compression.codec=none
serializer.class=kafka.serializer.DefaultEncoder
security.protocol=SASL_PLAINTEXT
sasl.kerberos.service.name="kafka"

[root@localhost config]# grep -v "^#" server1.properties

broker.id=0
listeners=SASL_PLAINTEXT://localhost:9094
delete.topic.enable=true
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/data/kafka_2.11-0.9.0.0/kafka-logs
num.partitions=1
num.recovery.threads.per.data.dir=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
log.cleaner.enable=false
zookeeper.connect=localhost:2181
zookeeper.connection.timeout.ms=60000
inter.broker.protocol.version=0.9.0.0
security.inter.broker.protocol=SASL_PLAINTEXT
allow.everyone.if.no.acl.found=true


[root@localhost config]# grep -v "^#" server4.properties
broker.id=1
listeners=SASL_PLAINTEXT://localhost:9095
delete.topic.enable=true
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/data/kafka_2.11-0.9.0.0/kafka-logs-1
num.partitions=1
num.recovery.threads.per.data.dir=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
log.cleaner.enable=false
zookeeper.connect=localhost:2181
zookeeper.connection.timeout.ms=60000
inter.broker.protocol.version=0.9.0.0
security.inter.broker.protocol=SASL_PLAINTEXT
zookeeper.sasl.client=zkclient

[root@localhost config]# grep -v "^#" zookeeper.properties
dataDir=/data/kafka_2.11-0.9.0.0/zookeeper
clientPort=2181
maxClientCnxns=0
requireClientAuthScheme=sasl
authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
jaasLoginRenew=3600000


Need your valuable inputs on this issue.
-- 
Regards,

Prabhu.V

Re: Consumer - Failed to find leader

Posted by Ismael Juma <is...@juma.me.uk>.
Prabhu, were you able to get this to work in the end?

Ismael

Re: Consumer - Failed to find leader

Posted by prabhu v <pr...@gmail.com>.
Hi Harsha/Ismael,

Any suggestions or inputs for the above issue?

When I run the producer client, I still get this error:

./kafka-console-producer.sh --broker-list hostname:9094 --topic topic3


[2016-01-05 10:16:20,272] ERROR Error when sending message to topic test
with key: null, value: 5 bytes with error: Failed to update metadata after
60000 ms.
(org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)

Also, I can see the below error in controller.log:

[2016-01-20 09:39:33,408] DEBUG [Controller 0]: preferred replicas by
broker Map(1 -> Map([topic3,0] -> List(1, 0)), 0 -> Map([topic3,1] ->
List(0, 1), [topic2,0] -> List(0), [topic1,0] -> List(0)))
(kafka.controller.KafkaController)
[2016-01-20 09:39:33,408] DEBUG [Controller 0]: topics not in preferred
replica Map() (kafka.controller.KafkaController)
[2016-01-20 09:39:33,408] TRACE [Controller 0]: leader imbalance ratio for
broker 1 is 0.000000 (kafka.controller.KafkaController)
[2016-01-20 09:39:33,408] DEBUG [Controller 0]: topics not in preferred
replica Map() (kafka.controller.KafkaController)
[2016-01-20 09:39:33,409] TRACE [Controller 0]: leader imbalance ratio for
broker 0 is 0.000000 (kafka.controller.KafkaController)


Tried reinstalling Kafka, but no luck. :(


Checked telnet also, I am able to connect to that port.
[root@blrd-cmgvapp46 logs]# telnet hostname 9094
Trying 172.31.31.186...
Connected to hostname (172.31.31.186).
Escape character is '^]'.

I can see the topic is created properly.

[root@hostname bin]# ./kafka-topics.sh --describe --zookeeper hostname:2181
--topic topic3
Topic:topic3    PartitionCount:2        ReplicationFactor:2     Configs:
        Topic: topic3   Partition: 0    Leader: 1       Replicas: 1,0   Isr: 1,0
        Topic: topic3   Partition: 1    Leader: 0       Replicas: 0,1   Isr: 0,1


Thanks in advance,


On Tue, Jan 5, 2016 at 3:17 PM, prabhu v <pr...@gmail.com> wrote:

> Hi Harsha,
>
> This is my Kafka_server_jaas.config file. This is passed as JVM param to
> the Kafka broker while start up.
>
> =============
> KafkaServer {
>     com.sun.security.auth.module.Krb5LoginModule required
>       useKeyTab=true
>        storeKey=true
>       serviceName="kafka"
>        keyTab="/etc/security/keytabs/kafka1.keytab"
>         useTicketCache=true
>         principal="kafka/hostname@realmname";
> };
>
> zkclient{
>
> com.sun.security.auth.module.Krb5LoginModule required
>       useKeyTab=true
>        storeKey=true
>       serviceName="zookeeper"
>        keyTab="/etc/security/keytabs/kafka1.keytab"
>         useTicketCache=true
>         principal="kafka@realmname";
>
> };
> =============
>
> Note: For security reasons, I changed my original FQDN to "hostname" and my
> original realm name to "realmname" in the below output.
>
> I am able to view the ticket using klist command as well. Please find
> below output.
>
> [root@localhost config]# kinit -k -t /etc/security/keytabs/kafka1.keytab
> kafka/hostname@realmname
> [root@localhost config]# klist
> Ticket cache: FILE:/tmp/krb5cc_0
> Default principal: kafka/hostname@realmname
>
> Valid starting     Expires            Service principal
> 01/05/16 08:14:28  01/06/16 08:14:28  krbtgt/realm@realm
>         renew until 01/05/16 08:14:28
>
>
>
>
>
>
> For the clients (topics, producer and consumer), I am using the below JAAS
> config:
>
> =============
>
> Client {
> com.sun.security.auth.module.Krb5LoginModule required
> useKeyTab=true
> keyTab="/etc/security/keytabs/kafka_client.keytab"
> storeKey=true
> useTicketCache=true
> serviceName="kafka"
> principal="kafkaclient/hostname@realmname";
> };
>
> =============
>
> I am able to view the ticket using klist command as well. Please find
> below output.
>
> [root@localhost config]# kinit -k -t
> /etc/security/keytabs/kafka_client.keytab kafkaclient/hostname@realmname
> [root@localhost config]# klist
> Ticket cache: FILE:/tmp/krb5cc_0
> Default principal: kafkaclient/hostname@realmname
>
> Valid starting     Expires            Service principal
> 01/05/16 08:14:28  01/06/16 08:14:28  krbtgt/realm@realm
>         renew until 01/05/16 08:14:28
>
> Error when running producer client:
>
> ./kafka-console-producer.sh --broker-list hostname:9095 --topic test
>
>
> [2016-01-05 10:16:20,272] ERROR Error when sending message to topic test
> with key: null, value: 5 bytes with error: Failed to update metadata after
> 60000 ms. (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
>
> Error when running topics.sh:
>
> [root@localhost bin]# ./kafka-topics.sh --list --zookeeper hostname:2181
> [2015-12-28 12:41:32,589] WARN SASL configuration failed:
> javax.security.auth.login.LoginException: No key to store Will continue
> connection to Zookeeper server without SASL authentication, if Zookeeper
> server allows it. (org.apache.zookeeper.ClientCnxn)
> ^Z
>
> Please let me know if i am missing anything.
>
>
>
>
> Thanks,
> Prabhu
>
>
>
>
> On Wed, Dec 30, 2015 at 9:28 PM, Harsha <ka...@harsha.io> wrote:
>
>> can you add your jaas file details. Your jaas file might have
>> useTicketCache=true and storeKey=true as well
>> example of KafkaServer jaas file
>>
>> KafkaServer {
>>
>> com.sun.security.auth.module.Krb5LoginModule required
>>
>> useKeyTab=true
>>
>> storeKey=true
>>
>> serviceName="kafka"
>>
>> keyTab="/vagrant/keytabs/kafka1.keytab"
>>
>> principal="kafka/kafka1.witzend.com@WITZEND.COM";
>> };
>>
>> and KafkaClient
>> KafkaClient {
>>
>> com.sun.security.auth.module.Krb5LoginModule required
>>
>> useTicketCache=true
>>
>> serviceName="kafka";
>>
>> };
>>
>> On Wed, Dec 30, 2015, at 03:10 AM, prabhu v wrote:
>>
>> Hi Harsha,
>>
>> I have used the Fully qualified domain name. Just for security concerns,
>> Before sending this mail,i have replaced our FQDN hostname to localhost.
>>
>> yes, i have tried KINIT and I am able to view the tickets using klist
>> command as well.
>>
>> Thanks,
>> Prabhu
>>
>> On Wed, Dec 30, 2015 at 11:27 AM, Harsha <ka...@harsha.io> wrote:
>>
>> Prabhu,
>>            When using SASL/kerberos always make sure you give FQDN of
>>            the hostname . In your command you are using --zookeeper
>>            localhost:2181 and make sure you change that hostname.
>>
>> "javax.security.auth.login.LoginException: No key to store Will continue
>> > connection to Zookeeper server without SASL authentication, if
>> Zookeeper"
>>
>> did you try  kinit with that keytab at the command line.
>>
>> -Harsha
>> On Mon, Dec 28, 2015, at 04:07 AM, prabhu v wrote:
>> > Thanks for the input Ismael.
>> >
>> > I will try and let you know.
>> >
>> > Also need your valuable inputs for the below issue:)
>> >
>> > i am not able to run kafka-topics.sh(0.9.0.0 version)
>> >
>> > [root@localhost bin]# ./kafka-topics.sh --list --zookeeper
>> localhost:2181
>> > [2015-12-28 12:41:32,589] WARN SASL configuration failed:
>> > javax.security.auth.login.LoginException: No key to store Will continue
>> > connection to Zookeeper server without SASL authentication, if Zookeeper
>> > server allows it. (org.apache.zookeeper.ClientCnxn)
>> > ^Z
>> >
>> > I am sure the key is present in its keytab file ( I have cross verified
>> > using kinit command as well).
>> >
>> > Am i missing anything while calling the kafka-topics.sh??
>> >
>> >
>> >
>> > On Mon, Dec 28, 2015 at 3:53 PM, Ismael Juma <is...@gmail.com> wrote:
>> >
>> > > Hi Prabhu,
>> > >
>> > > kafka-console-consumer.sh uses the old consumer by default, but only
>> the
>> > > new consumer supports security. Use --new-consumer to change this.
>> > >
>> > > Hope this helps.
>> > >
>> > > Ismael
>> > > On 28 Dec 2015 05:48, "prabhu v" <pr...@gmail.com> wrote:
>> > > > [original message quoted in full; trimmed]
>> > >
>> >
>> >
>> >
>> > --
>> > Regards,
>> >
>> > Prabhu.V
>>
>>
>>
>>
>> --
>> Regards,
>>
>> Prabhu.V
>>
>>
>>
>>
>
>
>
> --
> Regards,
>
> Prabhu.V
>
>



-- 
Regards,

Prabhu.V

Re: Consumer - Failed to find leader

Posted by prabhu v <pr...@gmail.com>.
Hi Harsha,

This is my Kafka_server_jaas.config file. It is passed as a JVM param to
the Kafka broker at startup.

=============
KafkaServer {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    serviceName="kafka"
    keyTab="/etc/security/keytabs/kafka1.keytab"
    useTicketCache=true
    principal="kafka/hostname@realmname";
};

zkclient {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    serviceName="zookeeper"
    keyTab="/etc/security/keytabs/kafka1.keytab"
    useTicketCache=true
    principal="kafka@realmname";
};
=============

Note: For security reasons, I changed my original FQDN to "hostname" and my
original realm name to "realmname" in the below output.

I am able to view the ticket using klist command as well. Please find below
output.

[root@localhost config]# kinit -k -t /etc/security/keytabs/kafka1.keytab
kafka/hostname@realmname
[root@localhost config]# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: kafka/hostname@realmname

Valid starting     Expires            Service principal
01/05/16 08:14:28  01/06/16 08:14:28  krbtgt/realm@realm
        renew until 01/05/16 08:14:28






For the clients (topics, producer and consumer), I am using the below JAAS config:

=============

Client {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
keyTab="/etc/security/keytabs/kafka_client.keytab"
storeKey=true
useTicketCache=true
serviceName="kafka"
principal="kafkaclient/hostname@realmname";
};

=============
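
(For the console tools, a client JAAS file like the one above is typically supplied through the JVM as well; a sketch, with a placeholder path:

export KAFKA_OPTS="-Djava.security.auth.login.config=/etc/kafka/kafka_client_jaas.conf"
./kafka-console-producer.sh --broker-list hostname:9095 --topic test
)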

I am able to view the ticket using klist command as well. Please find below
output.

[root@localhost config]# kinit -k -t
/etc/security/keytabs/kafka_client.keytab kafkaclient/hostname@realmname
[root@localhost config]# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: kafkaclient/hostname@realmname

Valid starting     Expires            Service principal
01/05/16 08:14:28  01/06/16 08:14:28  krbtgt/realm@realm
        renew until 01/05/16 08:14:28

Error when running producer client:

./kafka-console-producer.sh --broker-list hostname:9095 --topic test


[2016-01-05 10:16:20,272] ERROR Error when sending message to topic test
with key: null, value: 5 bytes with error: Failed to update metadata after
60000 ms. (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)

Error when running topics.sh:

[root@localhost bin]# ./kafka-topics.sh --list --zookeeper hostname:2181
[2015-12-28 12:41:32,589] WARN SASL configuration failed:
javax.security.auth.login.LoginException: No key to store Will continue
connection to Zookeeper server without SASL authentication, if Zookeeper
server allows it. (org.apache.zookeeper.ClientCnxn)
^Z
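
(One thing worth double-checking for this warning: by default ZooKeeper's client code reads the JAAS login context named "Client", so a section it would pick up looks roughly like this sketch — keytab path and principal are the placeholders used above:

Client {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    keyTab="/etc/security/keytabs/kafka1.keytab"
    principal="kafka/hostname@realmname";
};
)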

Please let me know if I am missing anything.




Thanks,
Prabhu




On Wed, Dec 30, 2015 at 9:28 PM, Harsha <ka...@harsha.io> wrote:

> can you add your jaas file details. Your jaas file might have
> useTicketCache=true and storeKey=true as well
> example of KafkaServer jaas file
>
> KafkaServer {
>
> com.sun.security.auth.module.Krb5LoginModule required
>
> useKeyTab=true
>
> storeKey=true
>
> serviceName="kafka"
>
> keyTab="/vagrant/keytabs/kafka1.keytab"
>
> principal="kafka/kafka1.witzend.com@WITZEND.COM";
> };
>
> and KafkaClient
> KafkaClient {
>
> com.sun.security.auth.module.Krb5LoginModule required
>
> useTicketCache=true
>
> serviceName="kafka";
>
> };
>
> On Wed, Dec 30, 2015, at 03:10 AM, prabhu v wrote:
>
> Hi Harsha,
>
> I have used the Fully qualified domain name. Just for security concerns,
> Before sending this mail,i have replaced our FQDN hostname to localhost.
>
> yes, i have tried KINIT and I am able to view the tickets using klist
> command as well.
>
> Thanks,
> Prabhu
>
> On Wed, Dec 30, 2015 at 11:27 AM, Harsha <ka...@harsha.io> wrote:
>
> Prabhu,
>            When using SASL/kerberos always make sure you give FQDN of
>            the hostname . In your command you are using --zookeeper
>            localhost:2181 and make sure you change that hostname.
>
> "javax.security.auth.login.LoginException: No key to store Will continue
> > connection to Zookeeper server without SASL authentication, if Zookeeper"
>
> did you try  kinit with that keytab at the command line.
>
> -Harsha
> On Mon, Dec 28, 2015, at 04:07 AM, prabhu v wrote:
> > Thanks for the input Ismael.
> >
> > I will try and let you know.
> >
> > Also need your valuable inputs for the below issue:)
> >
> > i am not able to run kafka-topics.sh(0.9.0.0 version)
> >
> > [root@localhost bin]# ./kafka-topics.sh --list --zookeeper
> localhost:2181
> > [2015-12-28 12:41:32,589] WARN SASL configuration failed:
> > javax.security.auth.login.LoginException: No key to store Will continue
> > connection to Zookeeper server without SASL authentication, if Zookeeper
> > server allows it. (org.apache.zookeeper.ClientCnxn)
> > ^Z
> >
> > I am sure the key is present in its keytab file ( I have cross verified
> > using kinit command as well).
> >
> > Am i missing anything while calling the kafka-topics.sh??
> >
> >
> >
> > On Mon, Dec 28, 2015 at 3:53 PM, Ismael Juma <is...@gmail.com> wrote:
> >
> > > Hi Prabhu,
> > >
> > > kafka-console-consumer.sh uses the old consumer by default, but only
> the
> > > new consumer supports security. Use --new-consumer to change this.
> > >
> > > Hope this helps.
> > >
> > > Ismael
> > > On 28 Dec 2015 05:48, "prabhu v" <pr...@gmail.com> wrote:
> > > > [original message quoted in full; trimmed]
> > >
> >
> >
> >
> > --
> > Regards,
> >
> > Prabhu.V
>
>
>
>
> --
> Regards,
>
> Prabhu.V
>
>
>
>



-- 
Regards,

Prabhu.V

Re: Consumer - Failed to find leader

Posted by Harsha <ka...@harsha.io>.
can you add your jaas file details. Your jaas file might have
useTicketCache=true and storeKey=true as well
example of KafkaServer jaas file

KafkaServer {

com.sun.security.auth.module.Krb5LoginModule required

useKeyTab=true

storeKey=true

serviceName="kafka"

keyTab="/vagrant/keytabs/kafka1.keytab"

principal="kafka/kafka1.witzend.com@WITZEND.COM";
};

and KafkaClient
KafkaClient {

com.sun.security.auth.module.Krb5LoginModule required

useTicketCache=true

serviceName="kafka";

};

On Wed, Dec 30, 2015, at 03:10 AM, prabhu v wrote:
> Hi Harsha,
>
> I have used the fully qualified domain name. Just for security concerns,
> before sending this mail I replaced our FQDN hostname with localhost.
>
> Yes, I have tried kinit and I am able to view the tickets using the klist
> command as well.
>
> Thanks,
> Prabhu
>
> On Wed, Dec 30, 2015 at 11:27 AM, Harsha <ka...@harsha.io> wrote:
>
>> Prabhu,
>>            When using SASL/kerberos always make sure you give FQDN of
>>            the hostname. In your command you are using --zookeeper
>>            localhost:2181 and make sure you change that hostname.
>>
>> "javax.security.auth.login.LoginException: No key to store Will continue
>> connection to Zookeeper server without SASL authentication, if Zookeeper"
>>
>> did you try kinit with that keytab at the command line.
>>
>> -Harsha
>> [remainder of quoted thread trimmed]
>
>
> --
> Regards,
>
> Prabhu.V

Re: Consumer - Failed to find leader

Posted by prabhu v <pr...@gmail.com>.
Hi Harsha,

I have used the fully qualified domain name. Just for security concerns,
before sending this mail I replaced our FQDN hostname with localhost.

Yes, I have tried kinit and I am able to view the tickets using the klist
command as well.

Thanks,
Prabhu

On Wed, Dec 30, 2015 at 11:27 AM, Harsha <ka...@harsha.io> wrote:

> Prabhu,
>            When using SASL/kerberos always make sure you give FQDN of
>            the hostname . In your command you are using --zookeeper
>            localhost:2181 and make sure you change that hostname.
>
> "javax.security.auth.login.LoginException: No key to store Will continue
> > connection to Zookeeper server without SASL authentication, if Zookeeper"
>
> did you try  kinit with that keytab at the command line.
>
> -Harsha



-- 
Regards,

Prabhu.V

Re: Consumer - Failed to find leader

Posted by Harsha <ka...@harsha.io>.
Prabhu,
           When using SASL/Kerberos, always make sure you use the FQDN of
           the host. In your command you are passing --zookeeper
           localhost:2181; change that hostname to the FQDN.

"javax.security.auth.login.LoginException: No key to store Will continue
> connection to Zookeeper server without SASL authentication, if Zookeeper"

Did you try kinit with that keytab at the command line?
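
(The kinit check suggested here can be sketched as follows; the principal
name and keytab path are placeholders, not values from this thread:)

```shell
# Validate the keytab and principal outside Kafka first.
# Principal and keytab path are examples; substitute your own.
kinit -kt /etc/security/keytabs/kafka_client.keytab kafka-client@EXAMPLE.COM
klist    # should show a valid TGT for the principal above
```

(If kinit succeeds but the Kafka tools still fail, the problem is usually
in the JAAS configuration the JVM is given, not in the keytab itself.)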

-Harsha
On Mon, Dec 28, 2015, at 04:07 AM, prabhu v wrote:
> Thanks for the input Ismael.
> 
> I will try and let you know.
> 
> Also need your valuable inputs for the below issue:)
> 
> i am not able to run kafka-topics.sh(0.9.0.0 version)
> 
> [root@localhost bin]# ./kafka-topics.sh --list --zookeeper localhost:2181
> [2015-12-28 12:41:32,589] WARN SASL configuration failed:
> javax.security.auth.login.LoginException: No key to store Will continue
> connection to Zookeeper server without SASL authentication, if Zookeeper
> server allows it. (org.apache.zookeeper.ClientCnxn)
> ^Z
> 
> I am sure the key is present in its keytab file ( I have cross verified
> using kinit command as well).
> 
> Am i missing anything while calling the kafka-topics.sh??
> 
> 
> 
> 
> 
> 
> -- 
> Regards,
> 
> Prabhu.V

Re: Consumer - Failed to find leader

Posted by prabhu v <pr...@gmail.com>.
Thanks for the input, Ismael.

I will try it and let you know.

I also need your valuable inputs on the issue below. :)

I am not able to run kafka-topics.sh (version 0.9.0.0):

[root@localhost bin]# ./kafka-topics.sh --list --zookeeper localhost:2181
[2015-12-28 12:41:32,589] WARN SASL configuration failed:
javax.security.auth.login.LoginException: No key to store Will continue
connection to Zookeeper server without SASL authentication, if Zookeeper
server allows it. (org.apache.zookeeper.ClientCnxn)
^Z

I am sure the key is present in the keytab file (I have cross-verified
this with kinit as well).

Am I missing anything when calling kafka-topics.sh?
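
(For context: this "No key to store" LoginException typically comes from
the JAAS "Client" login section that the tools use for the ZooKeeper
connection, e.g. when storeKey=true is requested but no usable keytab is
supplied there. A minimal sketch of such a JAAS file follows; the
principal and keytab path are placeholders:)

```
// kafka_client_jaas.conf -- passed to the tools via, for example:
//   export KAFKA_OPTS="-Djava.security.auth.login.config=/path/to/kafka_client_jaas.conf"
// "Client" is used for the ZooKeeper connection, "KafkaClient" for the
// brokers. Principal and keytab path below are examples only.
Client {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    keyTab="/etc/security/keytabs/kafka_client.keytab"
    principal="kafka-client@EXAMPLE.COM";
};

KafkaClient {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    keyTab="/etc/security/keytabs/kafka_client.keytab"
    principal="kafka-client@EXAMPLE.COM";
};
```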



On Mon, Dec 28, 2015 at 3:53 PM, Ismael Juma <is...@gmail.com> wrote:

> Hi Prabhu,
>
> kafka-console-consumer.sh uses the old consumer by default, but only the
> new consumer supports security. Use --new-consumer to change this.
>
> Hope this helps.
>
> Ismael



-- 
Regards,

Prabhu.V

Re: Consumer - Failed to find leader

Posted by Ismael Juma <is...@gmail.com>.
Hi Prabhu,

kafka-console-consumer.sh uses the old consumer by default, but only the
new consumer supports security. Use --new-consumer to change this.

Hope this helps.

Ismael
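
(Ismael's suggestion amounts to an invocation along these lines; the
JAAS file path is a placeholder, while the port, topic, and properties
file come from the configuration earlier in this thread:)

```shell
# Point the client JVM at its JAAS file (path is an example), then use
# the new consumer, which talks to a broker instead of ZooKeeper:
export KAFKA_OPTS="-Djava.security.auth.login.config=/path/to/kafka_client_jaas.conf"
bin/kafka-console-consumer.sh --new-consumer \
    --bootstrap-server localhost:9094 \
    --topic test \
    --consumer.config config/consumer.properties
```

(With --new-consumer, zookeeper.connect in consumer.properties is
ignored; the consumer needs bootstrap.servers, or the --bootstrap-server
flag, plus the security.protocol and sasl.kerberos.service.name
settings. Note also that in a Java .properties file quotes are taken
literally, so the value should normally be kafka, not "kafka".)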