Posted to reviews@spark.apache.org by ifilonenko <gi...@git.apache.org> on 2018/11/05 18:28:25 UTC

[GitHub] spark pull request #22911: [SPARK-25815][k8s] Support kerberos in client mod...

Github user ifilonenko commented on a diff in the pull request:

    https://github.com/apache/spark/pull/22911#discussion_r230860519
  
    --- Diff: resource-managers/kubernetes/core/src/main/scala/org/apache/spark/scheduler/cluster/k8s/KubernetesClusterSchedulerBackend.scala ---
    @@ -123,7 +126,11 @@ private[spark] class KubernetesClusterSchedulerBackend(
       }
     
       override def createDriverEndpoint(properties: Seq[(String, String)]): DriverEndpoint = {
    -    new KubernetesDriverEndpoint(rpcEnv, properties)
    +    new KubernetesDriverEndpoint(sc.env.rpcEnv, properties)
    +  }
    +
    +  override protected def createTokenManager(): Option[HadoopDelegationTokenManager] = {
    +    Some(new HadoopDelegationTokenManager(conf, sc.hadoopConfiguration))
    --- End diff --
    
    If we are introducing this change, I think it is important that we talk about the future of secret creation when using `--keytab` + `--principal`. Right now, secrets are created when a keytab is used by the client, or by the driver in client mode; this was primarily for testing (on my end) but also because this logic wasn't previously generalized across all cluster managers. Should we give the user an option to create a secret, or remove it entirely, since delegation token logic is now handled via the UpdateDelegationTokens message-passing framework? In essence, if we keep the ability to create a secret, we obtain a delegation token twice, which is redundant. And if we remove it, it makes sense to refactor the KerberosConfig logic to account for that removal. I was planning to do this in my token renewal PR, where I was also introducing this change, but it seems this PR will probably be merged before mine, so here would be a better place to do the refactor. Or maybe a separate PR that introduces this line and does the refactor, with this PR and mine following afterwards.
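    
    For reference, a rough sketch of the message-passing path I mean (names are approximate, modeled on CoarseGrainedSchedulerBackend; not the exact Spark internals): once the token manager obtains or renews tokens from the keytab, the driver just broadcasts the serialized credentials to every registered executor over RPC, so a separately created DT secret adds nothing.
    
        // Sketch only -- helper name and plumbing are approximate, not the exact Spark code.
        private def updateExecutorDelegationTokens(serializedTokens: Array[Byte]): Unit = {
          // Push renewed credentials to every registered executor over RPC,
          // instead of also writing them into a Kubernetes secret.
          executorDataMap.values.foreach { data =>
            data.executorEndpoint.send(UpdateDelegationTokens(serializedTokens))
          }
        }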
    
    thoughts, @vanzin?


---
