Posted to users@kafka.apache.org by amir masood khezrain <am...@yahoo.com.INVALID> on 2017/01/20 00:36:38 UTC

Kerberos/SASL Enabled Kafka - broker fails with NoAuth due to ACL

Hi
I am planning to set up a Kerberos/SASL-enabled Kafka cluster with three brokers. Since “zookeeper.set.acl=true” is set, the first broker to start creates the required znodes and also sets ACLs on them, which locks those znodes down to the first broker. Here is the ACL on the “/brokers” znode after running the first broker:

'world,'anyone: r
'sasl,'mykafka/myhost1.name.dd.com@example.com: cdrwa
Then, when the other two brokers start, they fail because the “/brokers” znode is locked and they only have read access to it. This is the case for all znodes created by the first broker. How can I give access to the other two brokers? I don’t think manually setting ACLs on znodes makes sense, since some znodes, such as those for partitions, are created dynamically. Is there a way to resolve this while keeping “zookeeper.set.acl=true”? Note that the hostname has four segments.
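
For reference, the ACL above can be inspected with the zookeeper-shell tool that ships with Kafka (the ZooKeeper host/port is a placeholder, and against a SASL-secured ZooKeeper the shell may also need to be pointed at a JAAS file via KAFKA_OPTS):

bin/zookeeper-shell.sh zk1.example.com:2181
getAcl /brokers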

The solutions that I have tried but did not work:
1- As you may have noticed, I tried to give super access to all three brokers by setting “super.users”. However, this trick did not work!
2- I also tried “sasl.kerberos.principal.to.local.rules”, as you can see in the configuration below, which did not help either.
3- In addition, I tried to set “-Dzookeeper.sasl.client=mykafka”, which caused the broker to throw the exception below:
ERROR JAAS configuration is present, but system property zookeeper.sasl.client is set to false, which disables SASL in the ZooKeeper client (org.apache.kafka.common.security.JaasUtils)
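
As the error message itself suggests, zookeeper.sasl.client is a true/false switch that enables or disables SASL in the ZooKeeper client, so it cannot carry a principal or service name; the name the client expects is controlled separately by zookeeper.sasl.client.username, which is already set in the KAFKA_OPTS below. A minimal sketch of the intended combination, reusing the paths from this thread:

export KAFKA_OPTS="-Djava.security.auth.login.config=/path/to/the/jaas/file \
  -Dzookeeper.sasl.client=true \
  -Dzookeeper.sasl.client.username=mykafka"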

I would appreciate it if you could help me with this issue.

Below are my configurations: 
============ JAAS file
KafkaServer {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  storeKey=true
  useTicketCache=true
  ticketCache="/var/security/tickets/mykafka"
  keyTab="/var/security/keytabs/mykafka"
  serviceName="mykafka"
  principal="mykafka/myhost1.name.dd.com@example.com"
  debug=true;
};

Client {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  storeKey=true
  useTicketCache=true
  ticketCache="/var/security/tickets/mykafka"
  keyTab="/var/security/keytabs/mykafka"
  serviceName="mykafka"
  principal="mykafka/myhost1.name.dd.com@example.com"
  debug=true;
};

============ each broker’s configuration:

listeners=SASL_PLAINTEXT://:44310
advertised.listeners=SASL_PLAINTEXT://:44310
zookeeper.set.acl=true
allow.everyone.if.no.acl.found=true
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
sasl.kerberos.principal.to.local.rules=RULE:[2:$1](.*)s/@.*//,DEFAULT
num.partitions=120
security.inter.broker.protocol=SASL_PLAINTEXT
security.protocol=SASL_PLAINTEXT
sasl.kerberos.service.name=mykafka
inter.broker.protocol.version=0.10.0.0
zookeeper.connection.timeout.ms=60000
auto.create.topics.enable=false
delete.topic.enable=true
default.replication.factor=3
super.users=User:mykafka;User:mykafka/myhost1.name.dd.com@example.com;User:mykafka/myhost2.name.dd.com@example.com;User:mykafka/myhost3.name.dd.com@example.com

============
And finally:

export KAFKA_OPTS="\
  -Djava.security.auth.login.config=/path/to/the/jaas/file \
  -Djavax.security.auth.useSubjectCredsOnly=false \
  -Dzookeeper.sasl.client.username=mykafka"

Re: Kerberos/SASL Enabled Kafka - broker fails with NoAuth due to ACL

Posted by Stephane Maarek <st...@simplemachines.com.au>.
So the issue is that you need your kafka/fqdn@realm.com principal in the
KafkaServer JAAS section, but the same zkclient@realm.com principal, shared by
every broker, in the Client JAAS section. That should solve your issues.
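
A minimal sketch of what that looks like, borrowing names from this thread (the zkclient principal and its keytab path are illustrative; the key point is that the Client section is identical on every broker, while KafkaServer keeps the per-host principal):

KafkaServer {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  storeKey=true
  keyTab="/var/security/keytabs/mykafka"
  serviceName="mykafka"
  principal="mykafka/myhost1.name.dd.com@example.com"  // differs per broker host
  debug=true;
};

Client {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  storeKey=true
  keyTab="/var/security/keytabs/zkclient"  // hypothetical shared keytab
  principal="zkclient@example.com"  // identical on all brokers
  debug=true;
};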


Re: Kerberos/SASL Enabled Kafka - broker fails with NoAuth due to ACL

Posted by Ashish Bhushan <as...@gmail.com>.
Any help?


Re: Kerberos/SASL Enabled Kafka - broker fails with NoAuth due to ACL

Posted by Ashish Bhushan <as...@gmail.com>.
Hi,

I used the same principal and keytab across all brokers' JAAS files (Client section).

Still not working; now the second broker that starts throws an 'Authentication failure' exception.

Do I need to set sasl.kerberos.principal.to.local.rules to something on all brokers?

Re: Kerberos/SASL Enabled Kafka - broker fails with NoAuth due to ACL

Posted by Manikumar <ma...@gmail.com>.
It is necessary to have the same principal name (in the Client section of the
JAAS config) across all brokers.
I am not sure why we would need to modify sasl.kerberos.principal.to.local.rules
in this case.
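
One quick sanity check (a sketch; the keytab path and principal below simply mirror the ones used earlier in the thread, so adjust them to whatever backs your shared Client principal) is to confirm on every broker host that the keytab contains that principal and can obtain a ticket:

# list the principals stored in the keytab
klist -kt /var/security/keytabs/mykafka

# verify a ticket can actually be obtained with the shared Client principal
kinit -kt /var/security/keytabs/mykafka mykafka/myhost1.name.dd.com@example.com
klist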


Re: Kerberos/SASL Enabled Kafka - broker fails with NoAuth due to ACL

Posted by Ashish Bhushan <as...@gmail.com>.
Hi,

Were you able to resolve this problem?

