Posted to reviews@spark.apache.org by yhuai <gi...@git.apache.org> on 2016/06/16 17:52:29 UTC

[GitHub] spark pull request #13711: [SPARK-15991] SparkContext.hadoopConfiguration sh...

GitHub user yhuai opened a pull request:

    https://github.com/apache/spark/pull/13711

    [SPARK-15991] SparkContext.hadoopConfiguration should be always the base of hadoop conf created by SessionState

    ## What changes were proposed in this pull request?
    Before this patch, once a SparkSession had been created, Hadoop conf entries set directly on SparkContext.hadoopConfiguration would not affect the Hadoop conf created by SessionState. This patch changes SessionState to always use SparkContext.hadoopConfiguration as the base.
    
    This patch also changes the behavior of the hive-site.xml support added in https://github.com/apache/spark/pull/12689/. With this patch, we load hive-site.xml into SparkContext.hadoopConfiguration.
    
    ## How was this patch tested?
    New test in SparkSessionBuilderSuite.
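    The new behavior can be sketched as follows. This is a hypothetical illustration, not code from the patch; it assumes access to the internal `sessionState` API (which is `private[sql]`, as in the suite's own tests), and `my.test.key` is an invented key.

    ```scala
    import org.apache.spark.sql.SparkSession

    object HadoopConfPropagationSketch {
      def main(args: Array[String]): Unit = {
        val session = SparkSession.builder().master("local").getOrCreate()
        // Mutate the shared base conf *after* the session has been created.
        session.sparkContext.hadoopConfiguration.set("my.test.key", "my-value")
        // With this patch, newHadoopConf() is seeded from
        // SparkContext.hadoopConfiguration, so the value is visible;
        // before the patch it was not.
        val conf = session.sessionState.newHadoopConf()
        assert(conf.get("my.test.key") == "my-value")
        session.stop()
      }
    }
    ```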

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/yhuai/spark SPARK-15991

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/13711.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #13711
    


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastructure@apache.org or file a JIRA ticket
with INFRA.
---

---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org


[GitHub] spark pull request #13711: [SPARK-15991] SparkContext.hadoopConfiguration sh...

Posted by yhuai <gi...@git.apache.org>.
Github user yhuai commented on a diff in the pull request:

    https://github.com/apache/spark/pull/13711#discussion_r67430972
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/internal/SharedState.scala ---
    @@ -43,23 +43,17 @@ private[sql] class SharedState(val sparkContext: SparkContext) extends Logging {
        */
       val listener: SQLListener = createListenerAndUI(sparkContext)
     
    -  /**
    -   * The base hadoop configuration which is shared among all spark sessions. It is based on the
    -   * default hadoop configuration of Spark, with custom configurations inside `hive-site.xml`.
    -   */
    -  val hadoopConf: Configuration = {
    --- End diff --
    
    Oh, I feel it is good to remove this extra layer since it does not really provide any benefit. I'd also like to avoid calling addResource multiple times.




[GitHub] spark issue #13711: [SPARK-15991] SparkContext.hadoopConfiguration should be...

Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the issue:

    https://github.com/apache/spark/pull/13711
  
    Test PASSed.
    Refer to this link for build results (access rights to CI server needed): 
    https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/60663/
    Test PASSed.




[GitHub] spark issue #13711: [SPARK-15991] SparkContext.hadoopConfiguration should be...

Posted by SparkQA <gi...@git.apache.org>.
Github user SparkQA commented on the issue:

    https://github.com/apache/spark/pull/13711
  
    **[Test build #60663 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/60663/consoleFull)** for PR 13711 at commit [`9c255df`](https://github.com/apache/spark/commit/9c255dfa01ccf660a3110642a1e663955769b0be).




[GitHub] spark issue #13711: [SPARK-15991] SparkContext.hadoopConfiguration should be...

Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the issue:

    https://github.com/apache/spark/pull/13711
  
    Merged build finished. Test PASSed.




[GitHub] spark issue #13711: [SPARK-15991] SparkContext.hadoopConfiguration should be...

Posted by SparkQA <gi...@git.apache.org>.
Github user SparkQA commented on the issue:

    https://github.com/apache/spark/pull/13711
  
    **[Test build #60651 has finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/60651/consoleFull)** for PR 13711 at commit [`f7e994a`](https://github.com/apache/spark/commit/f7e994a9d401966f4b33d9421f080e61abd425a6).
     * This patch passes all tests.
     * This patch merges cleanly.
     * This patch adds no public classes.




[GitHub] spark pull request #13711: [SPARK-15991] SparkContext.hadoopConfiguration sh...

Posted by zsxwing <gi...@git.apache.org>.
Github user zsxwing commented on a diff in the pull request:

    https://github.com/apache/spark/pull/13711#discussion_r67431627
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/internal/SessionState.scala ---
    @@ -49,7 +49,7 @@ private[sql] class SessionState(sparkSession: SparkSession) {
       lazy val conf: SQLConf = new SQLConf
     
       def newHadoopConf(): Configuration = {
    -    val hadoopConf = new Configuration(sparkSession.sharedState.hadoopConf)
    +    val hadoopConf = new Configuration(sparkSession.sparkContext.hadoopConfiguration)
    --- End diff --
    
    Got it. Thanks!




[GitHub] spark issue #13711: [SPARK-15991] SparkContext.hadoopConfiguration should be...

Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the issue:

    https://github.com/apache/spark/pull/13711
  
    Test PASSed.
    Refer to this link for build results (access rights to CI server needed): 
    https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/60651/
    Test PASSed.




[GitHub] spark issue #13711: [SPARK-15991] SparkContext.hadoopConfiguration should be...

Posted by cloud-fan <gi...@git.apache.org>.
Github user cloud-fan commented on the issue:

    https://github.com/apache/spark/pull/13711
  
    LGTM, should we document the difference somewhere?




[GitHub] spark pull request #13711: [SPARK-15991] SparkContext.hadoopConfiguration sh...

Posted by andrewor14 <gi...@git.apache.org>.
Github user andrewor14 commented on a diff in the pull request:

    https://github.com/apache/spark/pull/13711#discussion_r67425283
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/internal/SharedState.scala ---
    @@ -43,23 +43,17 @@ private[sql] class SharedState(val sparkContext: SparkContext) extends Logging {
        */
       val listener: SQLListener = createListenerAndUI(sparkContext)
     
    -  /**
    -   * The base hadoop configuration which is shared among all spark sessions. It is based on the
    -   * default hadoop configuration of Spark, with custom configurations inside `hive-site.xml`.
    -   */
    -  val hadoopConf: Configuration = {
    --- End diff --
    
    You could also just turn this into a `def`; then the patch could be much smaller.
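    The alternative suggested here can be sketched from the removed code shown in the diff: keep the extra layer but make it a `def`, so each call re-reads the mutable base `SparkContext.hadoopConfiguration` instead of freezing it in a `val`. A minimal sketch, assuming the surrounding `SharedState` members; note this would call `addResource` on every invocation, the cost the thread wants to avoid.

    ```scala
    // Hypothetical alternative (not what the patch does): re-derive the layered
    // conf on each call so base-conf mutations are always picked up.
    def hadoopConf: Configuration = {
      val conf = new Configuration(sparkContext.hadoopConfiguration)
      val configFile = Utils.getContextOrSparkClassLoader.getResource("hive-site.xml")
      if (configFile != null) {
        conf.addResource(configFile)
      }
      conf
    }
    ```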




[GitHub] spark issue #13711: [SPARK-15991] SparkContext.hadoopConfiguration should be...

Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the issue:

    https://github.com/apache/spark/pull/13711
  
    Merged build finished. Test PASSed.




[GitHub] spark issue #13711: [SPARK-15991] SparkContext.hadoopConfiguration should be...

Posted by SparkQA <gi...@git.apache.org>.
Github user SparkQA commented on the issue:

    https://github.com/apache/spark/pull/13711
  
    **[Test build #60663 has finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/60663/consoleFull)** for PR 13711 at commit [`9c255df`](https://github.com/apache/spark/commit/9c255dfa01ccf660a3110642a1e663955769b0be).
     * This patch passes all tests.
     * This patch merges cleanly.
     * This patch adds no public classes.




[GitHub] spark pull request #13711: [SPARK-15991] SparkContext.hadoopConfiguration sh...

Posted by cloud-fan <gi...@git.apache.org>.
Github user cloud-fan commented on a diff in the pull request:

    https://github.com/apache/spark/pull/13711#discussion_r67421204
  
    --- Diff: sql/core/src/test/scala/org/apache/spark/sql/SparkSessionBuilderSuite.scala ---
    @@ -102,4 +102,24 @@ class SparkSessionBuilderSuite extends SparkFunSuite {
         assert(session.sparkContext.conf.get("key2") == "value2")
         session.stop()
       }
    +
    +  test("SPARK-15887: hive-site.xml should be loaded") {
    +    val session = SparkSession.builder().master("local").getOrCreate()
    +    assert(session.sessionState.newHadoopConf().get("hive.in.test") == "true")
    +    assert(session.sparkContext.hadoopConfiguration.get("hive.in.test") == "true")
    +    session.stop()
    +  }
    +
    +  test("SPARK-15991: Set Hadoop conf through session.sparkContext.hadoopConfiguration") {
    +    val session = SparkSession.builder().master("local").getOrCreate()
    +    val mySpecialKey = "mai.special.key.15991"
    --- End diff --
    
    nit: `mai.special...` -> `my.special...`?




[GitHub] spark pull request #13711: [SPARK-15991] SparkContext.hadoopConfiguration sh...

Posted by rxin <gi...@git.apache.org>.
Github user rxin commented on a diff in the pull request:

    https://github.com/apache/spark/pull/13711#discussion_r67422345
  
    --- Diff: sql/core/src/test/scala/org/apache/spark/sql/SparkSessionBuilderSuite.scala ---
    @@ -102,4 +102,24 @@ class SparkSessionBuilderSuite extends SparkFunSuite {
         assert(session.sparkContext.conf.get("key2") == "value2")
         session.stop()
       }
    +
    +  test("SPARK-15887: hive-site.xml should be loaded") {
    +    val session = SparkSession.builder().master("local").getOrCreate()
    +    assert(session.sessionState.newHadoopConf().get("hive.in.test") == "true")
    +    assert(session.sparkContext.hadoopConfiguration.get("hive.in.test") == "true")
    +    session.stop()
    +  }
    +
    +  test("SPARK-15991: Set Hadoop conf through session.sparkContext.hadoopConfiguration") {
    --- End diff --
    
    Set global Hadoop conf




[GitHub] spark issue #13711: [SPARK-15991] SparkContext.hadoopConfiguration should be...

Posted by zsxwing <gi...@git.apache.org>.
Github user zsxwing commented on the issue:

    https://github.com/apache/spark/pull/13711
  
    LGTM. Merging to master and 2.0. Thanks!




[GitHub] spark pull request #13711: [SPARK-15991] SparkContext.hadoopConfiguration sh...

Posted by yhuai <gi...@git.apache.org>.
Github user yhuai commented on a diff in the pull request:

    https://github.com/apache/spark/pull/13711#discussion_r67431239
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/internal/SessionState.scala ---
    @@ -49,7 +49,7 @@ private[sql] class SessionState(sparkSession: SparkSession) {
       lazy val conf: SQLConf = new SQLConf
     
       def newHadoopConf(): Configuration = {
    -    val hadoopConf = new Configuration(sparkSession.sharedState.hadoopConf)
    +    val hadoopConf = new Configuration(sparkSession.sparkContext.hadoopConfiguration)
    --- End diff --
    
    @zsxwing When we call `newHadoopConf` here, we are about to launch a job to read/write data. So, it is fine.




[GitHub] spark issue #13711: [SPARK-15991] SparkContext.hadoopConfiguration should be...

Posted by SparkQA <gi...@git.apache.org>.
Github user SparkQA commented on the issue:

    https://github.com/apache/spark/pull/13711
  
    **[Test build #60651 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/60651/consoleFull)** for PR 13711 at commit [`f7e994a`](https://github.com/apache/spark/commit/f7e994a9d401966f4b33d9421f080e61abd425a6).




[GitHub] spark issue #13711: [SPARK-15991] SparkContext.hadoopConfiguration should be...

Posted by andrewor14 <gi...@git.apache.org>.
Github user andrewor14 commented on the issue:

    https://github.com/apache/spark/pull/13711
  
    LGTM




[GitHub] spark pull request #13711: [SPARK-15991] SparkContext.hadoopConfiguration sh...

Posted by asfgit <gi...@git.apache.org>.
Github user asfgit closed the pull request at:

    https://github.com/apache/spark/pull/13711




[GitHub] spark pull request #13711: [SPARK-15991] SparkContext.hadoopConfiguration sh...

Posted by zsxwing <gi...@git.apache.org>.
Github user zsxwing commented on a diff in the pull request:

    https://github.com/apache/spark/pull/13711#discussion_r67421764
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/internal/SessionState.scala ---
    @@ -49,7 +49,7 @@ private[sql] class SessionState(sparkSession: SparkSession) {
       lazy val conf: SQLConf = new SQLConf
     
       def newHadoopConf(): Configuration = {
    -    val hadoopConf = new Configuration(sparkSession.sharedState.hadoopConf)
    +    val hadoopConf = new Configuration(sparkSession.sparkContext.hadoopConfiguration)
    --- End diff --
    
    This creates a new `Configuration`, so settings applied through `sparkSession.sparkContext.hadoopConfiguration` later won't be seen in the instance returned from `newHadoopConf()`.
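    The copy semantics under discussion can be sketched with plain Hadoop `Configuration` objects (a hypothetical key `k`, not from the patch): the copy constructor takes a snapshot, so later mutations of the base are invisible to the copy.

    ```scala
    import org.apache.hadoop.conf.Configuration

    val base = new Configuration()
    base.set("k", "v1")
    val snapshot = new Configuration(base) // copy constructor: snapshot of base
    base.set("k", "v2")                    // mutate the base afterwards
    assert(snapshot.get("k") == "v1")      // the snapshot keeps the old value
    assert(base.get("k") == "v2")
    ```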




[GitHub] spark issue #13711: [SPARK-15991] SparkContext.hadoopConfiguration should be...

Posted by yhuai <gi...@git.apache.org>.
Github user yhuai commented on the issue:

    https://github.com/apache/spark/pull/13711
  
    We will document the change in the release notes.




[GitHub] spark pull request #13711: [SPARK-15991] SparkContext.hadoopConfiguration sh...

Posted by yhuai <gi...@git.apache.org>.
Github user yhuai commented on a diff in the pull request:

    https://github.com/apache/spark/pull/13711#discussion_r67393518
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/internal/SharedState.scala ---
    @@ -43,23 +43,17 @@ private[sql] class SharedState(val sparkContext: SparkContext) extends Logging {
        */
       val listener: SQLListener = createListenerAndUI(sparkContext)
     
    -  /**
    -   * The base hadoop configuration which is shared among all spark sessions. It is based on the
    -   * default hadoop configuration of Spark, with custom configurations inside `hive-site.xml`.
    -   */
    -  val hadoopConf: Configuration = {
    -    val conf = new Configuration(sparkContext.hadoopConfiguration)
    +  {
         val configFile = Utils.getContextOrSparkClassLoader.getResource("hive-site.xml")
         if (configFile != null) {
    -      conf.addResource(configFile)
    +      sparkContext.hadoopConfiguration.addResource(configFile)
    --- End diff --
    
    @rxin @cloud-fan This change will add `hive-site.xml` to `sparkContext.hadoopConfiguration`. This behavior is different from 1.6: if any Hadoop confs are set in this file, they will override the corresponding confs set in core-site.xml.
    
    We could also load hive-site.xml into the new Hadoop conf created in `SessionState.newHadoopConf`. But my concern with that approach is that reloading all resources (triggered by addResource) is pretty expensive.
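    The override concern can be illustrated with Hadoop's resource layering (hypothetical usage; the key name is invented): a resource added later shadows earlier resources for any key it defines, so once hive-site.xml is a resource of the shared base conf, its values win over core-site.xml for everything built on that base.

    ```scala
    import org.apache.hadoop.conf.Configuration

    // Build a conf without the default resources to make the layering explicit.
    val conf = new Configuration(false)
    conf.addResource(getClass.getClassLoader.getResource("core-site.xml"))
    conf.addResource(getClass.getClassLoader.getResource("hive-site.xml"))
    // If both files define the same key, the hive-site.xml value is returned,
    // which is the behavior difference from 1.6 called out in this comment.
    val value = conf.get("some.shared.key") // hypothetical key
    ```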




[GitHub] spark pull request #13711: [SPARK-15991] SparkContext.hadoopConfiguration sh...

Posted by rxin <gi...@git.apache.org>.
Github user rxin commented on a diff in the pull request:

    https://github.com/apache/spark/pull/13711#discussion_r67422315
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/internal/SessionState.scala ---
    @@ -49,7 +49,7 @@ private[sql] class SessionState(sparkSession: SparkSession) {
       lazy val conf: SQLConf = new SQLConf
     
       def newHadoopConf(): Configuration = {
    -    val hadoopConf = new Configuration(sparkSession.sharedState.hadoopConf)
    +    val hadoopConf = new Configuration(sparkSession.sparkContext.hadoopConfiguration)
    --- End diff --
    
    that's ok -- this is keeping a snapshot.


