Posted to issues@spark.apache.org by "Reynold Xin (JIRA)" <ji...@apache.org> on 2016/03/17 06:57:33 UTC

[jira] [Resolved] (SPARK-13403) HiveConf used for SparkSQL is not based on the Hadoop configuration

     [ https://issues.apache.org/jira/browse/SPARK-13403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Reynold Xin resolved SPARK-13403.
---------------------------------
       Resolution: Fixed
         Assignee: Ryan Blue
    Fix Version/s: 2.0.0

> HiveConf used for SparkSQL is not based on the Hadoop configuration
> -------------------------------------------------------------------
>
>                 Key: SPARK-13403
>                 URL: https://issues.apache.org/jira/browse/SPARK-13403
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 1.6.0
>            Reporter: Ryan Blue
>            Assignee: Ryan Blue
>             Fix For: 2.0.0
>
>
> The HiveConf instances used by HiveContext are not instantiated by passing in the SparkContext's Hadoop conf; they are instead built only from the config files found in the environment. Hadoop best practice is to instantiate a single Configuration from the environment and then pass that conf when instantiating others, so that modifications are not lost.
> Because of this, configuration variables prefixed with "spark.hadoop." in spark-defaults.conf, which Spark applies when creating {{sc.hadoopConfiguration}}, are not correctly passed through to the HiveConf.
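The pattern the description refers to can be sketched as follows. This is a minimal illustration, not the actual patch for SPARK-13403; it assumes Hive and Hadoop are on the classpath, and the helper name {{hiveConfFrom}} is hypothetical.

```scala
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.hive.conf.HiveConf

// Problematic: this reads only hive-site.xml etc. from the environment.
// Anything Spark set on sc.hadoopConfiguration (e.g. from "spark.hadoop.*"
// entries in spark-defaults.conf) is lost.
val standalone = new HiveConf()

// Hadoop best practice: seed the new conf from the existing Hadoop conf
// so those runtime modifications carry over. HiveConf provides a
// constructor that copies an existing Configuration.
def hiveConfFrom(hadoopConf: Configuration): HiveConf =
  new HiveConf(hadoopConf, classOf[HiveConf])
```

With the second form, a property such as one set via {{spark.hadoop.fs.s3a.endpoint}} in spark-defaults.conf would be visible to Hive code reading the resulting HiveConf.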



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
