Posted to reviews@spark.apache.org by yaooqinn <gi...@git.apache.org> on 2017/11/06 08:30:07 UTC

[GitHub] spark pull request #19663: [SPARK-21888][YARN][SQL][Hive]add hadoop/hive/hba...

Github user yaooqinn commented on a diff in the pull request:

    https://github.com/apache/spark/pull/19663#discussion_r149016913
  
    --- Diff: resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/Client.scala ---
    @@ -705,6 +705,19 @@ private[spark] class Client(
           }
         }
     
    +    val confDir =
    +      sys.env.getOrElse("SPARK_CONF_DIR", sys.env("SPARK_HOME") + File.separator + "conf")
    +    val dir = new File(confDir)
    +    if (dir.isDirectory) {
    +      val files = dir.listFiles(new FileFilter {
    +        override def accept(pathname: File): Boolean = {
    +          pathname.isFile && pathname.getName.endsWith("xml")
    --- End diff --
    
    According to the doc:
    > Configuration of Hive is done by placing your hive-site.xml, core-site.xml (for security configuration), and hdfs-site.xml (for HDFS configuration) file in conf/.
    
    so with this filter we may pick up not only hive-site.xml but any other `*.xml` file in conf/, e.g. core-site.xml and hdfs-site.xml.
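    If the intent were to ship only the files the doc names, the filter could whitelist them explicitly instead of matching every `*.xml`. A minimal sketch of that idea (standalone illustration, not the PR's actual code; the helper name `listConfFiles` and the whitelist are assumptions):
    
    ```scala
    import java.io.{File, FileFilter}
    
    object ConfFileLister {
      // Hypothetical whitelist of the config files the Spark docs mention
      // should be placed in conf/; adjust as needed.
      val wantedConfFiles: Set[String] =
        Set("hive-site.xml", "core-site.xml", "hdfs-site.xml")
    
      // Return only the whitelisted files from confDir, or an empty
      // sequence if confDir is not a directory.
      def listConfFiles(confDir: File): Seq[File] = {
        if (!confDir.isDirectory) {
          Seq.empty
        } else {
          confDir.listFiles(new FileFilter {
            override def accept(pathname: File): Boolean =
              pathname.isFile && wantedConfFiles.contains(pathname.getName)
          }).toSeq
        }
      }
    }
    ```
    
    Whether a whitelist or the broader `endsWith("xml")` match is preferable depends on whether users are expected to drop additional Hadoop/Hive XML configs into conf/.
    
    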


---

---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org