Posted to dev@ambari.apache.org by "Hari Sekhon (JIRA)" <ji...@apache.org> on 2015/01/07 12:59:34 UTC

[jira] [Updated] (AMBARI-9022) Kerberos config lost after adding Kafka service

     [ https://issues.apache.org/jira/browse/AMBARI-9022?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hari Sekhon updated AMBARI-9022:
--------------------------------
    Description: 
Adding the Kafka service to an existing kerberized HDP 2.2 cluster resulted in all the Kerberos-related fields in core-site.xml becoming blank or the literal string "null", which prevented all the HDFS and YARN instances from restarting. This caused a major outage - luckily this cluster isn't prod, but this is going to bite somebody badly.

Fields whose value in core-site.xml ended up as the literal string "null":
{code}hadoop.http.authentication.kerberos.keytab
hadoop.http.authentication.kerberos.principal
hadoop.security.auth_to_local{code}

Fields whose value in core-site.xml ended up blank (""):
{code}hadoop.http.authentication.cookie.domain
hadoop.http.authentication.cookie.path
hadoop.http.authentication.kerberos.name.rules
hadoop.http.authentication.signature.secret
hadoop.http.authentication.signature.secret.file
hadoop.http.authentication.signer.secret.provider
hadoop.http.authentication.signer.secret.provider.object
hadoop.http.authentication.token.validity
hadoop.http.filter.initializers{code}
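Not part of the original report - a minimal sketch of a check that would flag these clobbered values before attempting a restart, assuming the properties dict has been fetched from Ambari's configurations REST endpoint (GET /api/v1/clusters/<cluster>/configurations?type=core-site&tag=<tag>); the sample values here are illustrative:

```python
# Hedged sketch: flag core-site properties whose values came back blank or as
# the literal string "null" - the two failure modes described above.
# The core_site dict below is illustrative; in practice it would come from
# Ambari's configurations REST endpoint.

def find_clobbered(props):
    """Return the sorted list of property names with blank or 'null' values."""
    bad = ("", "null", None)
    return sorted(k for k, v in props.items() if v in bad)

core_site = {
    "hadoop.http.authentication.kerberos.principal": "null",   # literal "null"
    "hadoop.http.authentication.signature.secret.file": "",    # blanked
    "fs.defaultFS": "hdfs://nn1:8020",                         # intact
}
print(find_clobbered(core_site))
```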

Previous config revisions also showed these values as undefined, which was definitely not the case: this was a working, fully kerberized cluster for the past several months.

Removing the Kafka service via REST API calls and restarting ambari-server didn't make the config reappear either.
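For concreteness (not part of the original report), the removal calls referred to above follow Ambari's standard services API: stop the service by setting its state to INSTALLED, then DELETE it. The host, cluster name, and omission of credentials below are placeholder assumptions:

```python
# Hedged sketch of the Ambari REST calls to remove the Kafka service.
# Host and cluster name are placeholders; HTTP basic auth is omitted for
# brevity. Ambari requires the X-Requested-By header on modifying requests,
# and a service must be stopped (state INSTALLED) before it can be DELETEd.
import json
import urllib.request

AMBARI = "http://ambari-server:8080"   # placeholder host
CLUSTER = "mycluster"                  # placeholder cluster name

def build_request(method, path, body=None):
    """Build an Ambari API request (does not send it)."""
    req = urllib.request.Request(
        AMBARI + path,
        data=json.dumps(body).encode() if body else None,
        method=method,
    )
    req.add_header("X-Requested-By", "ambari")
    return req

stop = build_request("PUT", f"/api/v1/clusters/{CLUSTER}/services/KAFKA",
                     {"ServiceInfo": {"state": "INSTALLED"}})
delete = build_request("DELETE", f"/api/v1/clusters/{CLUSTER}/services/KAFKA")
print(stop.get_method(), stop.full_url)
print(delete.get_method(), delete.full_url)
```

Each request would be sent with urllib.request.urlopen once credentials are added.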

I had to de-kerberize and then re-kerberize the whole cluster in Ambari in order to get all 12 of those configuration settings re-populated.

A remaining side effect of this bug, even after recovering the cluster, is that all the previous config revisions are now ruined: their many undefined values would prevent the cluster from starting, so they are no longer viable as a backup to revert to. There doesn't seem to be any way to work around that.

Ironically, the Kafka brokers started up fine after breaking all the core components, since Kafka has no security of its own.

Regards,

Hari Sekhon
http://www.linkedin.com/in/harisekhon


> Kerberos config lost after adding Kafka service
> -----------------------------------------------
>
>                 Key: AMBARI-9022
>                 URL: https://issues.apache.org/jira/browse/AMBARI-9022
>             Project: Ambari
>          Issue Type: Bug
>    Affects Versions: 1.7.0
>         Environment: HDP 2.2
>            Reporter: Hari Sekhon
>            Priority: Critical
>



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)