Posted to commits@spark.apache.org by do...@apache.org on 2019/04/30 02:47:38 UTC

[spark] branch master updated: [SPARK-26936][MINOR][FOLLOWUP] Don't need the JobConf anymore, it seems

This is an automated email from the ASF dual-hosted git repository.

dongjoon pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
     new 25ee047  [SPARK-26936][MINOR][FOLLOWUP] Don't need the JobConf anymore, it seems
25ee047 is described below

commit 25ee0474f47d9c30d6f553a7892d9549f91071cf
Author: Sean Owen <se...@databricks.com>
AuthorDate: Mon Apr 29 19:47:20 2019 -0700

    [SPARK-26936][MINOR][FOLLOWUP] Don't need the JobConf anymore, it seems
    
    ## What changes were proposed in this pull request?
    
    On a second look at the comments, it seems the JobConf isn't needed here anymore. It was used inconsistently before, and I don't see any reason a Hadoop job config would be required at this call site anyway.
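    
    Since FileSystem.getLocal is declared against a plain org.apache.hadoop.conf.Configuration, a JobConf wrapper buys nothing here. A minimal sketch of the post-change pattern (assumes hadoop-common on the classpath; the object name and the /tmp path below are illustrative, not part of this commit):
    
        import org.apache.hadoop.conf.Configuration
        import org.apache.hadoop.fs.{FileSystem, LocalFileSystem, Path}
    
        object LocalFsSketch {
          def main(args: Array[String]): Unit = {
            // FileSystem.getLocal takes a plain Hadoop Configuration;
            // no mapred JobConf wrapper is required.
            val hadoopConf = new Configuration()
            val localFs: LocalFileSystem = FileSystem.getLocal(hadoopConf)
    
            // Qualify a local path, mirroring what InsertIntoHiveDirCommand does.
            val qualified: Path = localFs.makeQualified(new Path("/tmp/spark-dir-output"))
            println(qualified)
          }
        }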
    
    ## How was this patch tested?
    
    Existing tests.
    
    Closes #24491 from srowen/SPARK-26936.2.
    
    Authored-by: Sean Owen <se...@databricks.com>
    Signed-off-by: Dongjoon Hyun <dh...@apple.com>
---
 .../apache/spark/sql/hive/execution/InsertIntoHiveDirCommand.scala    | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/sql/hive/src/main/scala/org/apache/spark/sql/hive/execution/InsertIntoHiveDirCommand.scala b/sql/hive/src/main/scala/org/apache/spark/sql/hive/execution/InsertIntoHiveDirCommand.scala
index 24a67f9..b66c302 100644
--- a/sql/hive/src/main/scala/org/apache/spark/sql/hive/execution/InsertIntoHiveDirCommand.scala
+++ b/sql/hive/src/main/scala/org/apache/spark/sql/hive/execution/InsertIntoHiveDirCommand.scala
@@ -22,7 +22,6 @@ import org.apache.hadoop.hive.common.FileUtils
 import org.apache.hadoop.hive.ql.plan.TableDesc
 import org.apache.hadoop.hive.serde.serdeConstants
 import org.apache.hadoop.hive.serde2.`lazy`.LazySimpleSerDe
-import org.apache.hadoop.mapred._
 
 import org.apache.spark.SparkException
 import org.apache.spark.sql.{Row, SparkSession}
@@ -80,13 +79,12 @@ case class InsertIntoHiveDirCommand(
     )
 
     val hadoopConf = sparkSession.sessionState.newHadoopConf()
-    val jobConf = new JobConf(hadoopConf)
 
     val targetPath = new Path(storage.locationUri.get)
     val qualifiedPath = FileUtils.makeQualified(targetPath, hadoopConf)
     val (writeToPath: Path, fs: FileSystem) =
       if (isLocal) {
-        val localFileSystem = FileSystem.getLocal(jobConf)
+        val localFileSystem = FileSystem.getLocal(hadoopConf)
         (localFileSystem.makeQualified(targetPath), localFileSystem)
       } else {
         val dfs = qualifiedPath.getFileSystem(hadoopConf)
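
A note on why this change should be behavior-neutral: org.apache.hadoop.mapred.JobConf extends org.apache.hadoop.conf.Configuration, and FileSystem.getLocal accepts any Configuration, so both call sites should resolve the same local filesystem; dropping the JobConf simply avoids an unnecessary copy of the configuration and removes an unused import.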

