Posted to issues@ambari.apache.org by "Vitaly Brodetskyi (JIRA)" <ji...@apache.org> on 2018/10/01 11:54:00 UTC

[jira] [Updated] (AMBARI-24718) STS fails after start, after stack upgrade from 3.0.1 to 3.0.3

     [ https://issues.apache.org/jira/browse/AMBARI-24718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Vitaly Brodetskyi updated AMBARI-24718:
---------------------------------------
    Description: 
This exception appears in the SHS log:
{code:java}
========================================
Warning: Master yarn-client is deprecated since 2.0. Please use master "yarn" with specified deploy mode instead.
Exception in thread "main" java.lang.IllegalArgumentException: requirement failed: Keytab file: none does not exist
 at scala.Predef$.require(Predef.scala:224)
 at org.apache.spark.deploy.SparkSubmit$.doPrepareSubmitEnvironment(SparkSubmit.scala:390)
 at org.apache.spark.deploy.SparkSubmit$.prepareSubmitEnvironment(SparkSubmit.scala:250)
 at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:171)
 at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:137)
 at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
{code}
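The failure mode can be sketched as follows. This is not the actual Spark code (the real check lives in Scala, in SparkSubmit.doPrepareSubmitEnvironment, as a require(...) call); it is a minimal Python illustration of the behavior: once spark.yarn.keytab is set at all, even to a placeholder value like "none", the referenced file must exist or submission aborts.

```python
import os

def prepare_submit_environment(conf):
    """Illustrative sketch (not Spark's actual code) of the check that
    fails in SparkSubmit: if spark.yarn.keytab is set -- even to a
    placeholder like "none" -- the file it points to must exist."""
    keytab = conf.get("spark.yarn.keytab")
    if keytab is not None:
        # Mirrors the Scala:
        # require(new File(keytab).exists, s"Keytab file: $keytab does not exist")
        if not os.path.isfile(keytab):
            raise ValueError(
                "requirement failed: Keytab file: %s does not exist" % keytab)
    return conf

# A non-kerberized cluster that carries the property after the upgrade fails:
try:
    prepare_submit_environment({"spark.yarn.keytab": "none"})
except ValueError as e:
    print(e)  # requirement failed: Keytab file: none does not exist
```

This matches the trace above: the property value is the literal string "none", which Spark then treats as a file path.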
After removing the spark.yarn.keytab/principal properties, it starts working fine. Note that this cluster is NOT kerberized, so it is strange that SHS tries to use these properties at all. At the same time, the spark.history.kerberos.keytab/principal properties are also present, but they cause no issues. As for why spark.yarn.keytab/principal were added during the stack upgrade even though the cluster is not kerberized, here is the answer:
{code:java}
<transfer operation="copy" from-type="spark2-defaults" from-key="spark.history.kerberos.keytab" to-key="spark.yarn.keytab" default-value="" if-type="spark2-thrift-sparkconf" if-key="spark.yarn.keytab" if-key-state="absent"/>
 <transfer operation="copy" from-type="spark2-defaults" from-key="spark.history.kerberos.principal" to-key="spark.yarn.principal" default-value="" if-type="spark2-thrift-sparkconf" if-key="spark.yarn.principal" if-key-state="absent"/>
{code}
The assumption was that if "spark.history.kerberos.keytab/principal" can be present in a non-kerberized cluster, then "spark.yarn.keytab/principal" could be added too. We have the same logic for many other components in Ambari. So the question is: should this be fixed on the Ambari side, i.e. add spark.yarn.keytab/principal only if Kerberos is enabled, or should a condition be modified/added on the SPARK side so it does not use these properties when Kerberos is disabled or the value is empty/"none"?
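The Ambari-side option could be sketched like this. The helper below is hypothetical (it does not exist in Ambari's codebase); it only shows the proposed guard: perform the upgrade-time transfer of the keytab/principal properties only when Kerberos is actually enabled and the source value is meaningful.

```python
def should_copy_keytab_property(security_enabled, history_keytab):
    """Hypothetical guard for the upgrade-pack transfer shown above:
    copy spark.history.kerberos.keytab -> spark.yarn.keytab only when
    Kerberos is enabled and the source value is a real path, not a
    placeholder such as "" or "none"."""
    if not security_enabled:
        return False
    return history_keytab not in (None, "", "none")

# Non-kerberized cluster with the "none" placeholder: skip the transfer.
print(should_copy_keytab_property(False, "none"))
# Kerberized cluster with a real keytab path: perform the transfer.
print(should_copy_keytab_property(True, "/etc/security/keytabs/spark.headless.keytab"))
```

The Spark-side alternative would be the mirror image: treat an empty or "none" value of spark.yarn.keytab as unset before validating that the file exists.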

 

Cluster with repro: http://104.196.75.237:8080 (GCE)


> STS fails after start, after stack upgrade from 3.0.1 to 3.0.3
> --------------------------------------------------------------
>
>                 Key: AMBARI-24718
>                 URL: https://issues.apache.org/jira/browse/AMBARI-24718
>             Project: Ambari
>          Issue Type: Bug
>            Reporter: Vitaly Brodetskyi
>            Assignee: Vitaly Brodetskyi
>            Priority: Blocker
>             Fix For: 2.7.3
>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)