Posted to commits@spark.apache.org by ma...@apache.org on 2015/05/14 02:58:46 UTC

spark git commit: [HOTFIX] Use 'new Job' in fsBasedParquet.scala

Repository: spark
Updated Branches:
  refs/heads/master 32e27df41 -> 728af88cf


[HOTFIX] Use 'new Job' in fsBasedParquet.scala

Same issue as #6095

cc liancheng

Author: zsxwing <zs...@gmail.com>

Closes #6136 from zsxwing/hotfix and squashes the following commits:

4beea54 [zsxwing] Use 'new Job' in fsBasedParquet.scala


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/728af88c
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/728af88c
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/728af88c

Branch: refs/heads/master
Commit: 728af88cf6be4c25a732ab7e4fe66c1ed0041164
Parents: 32e27df
Author: zsxwing <zs...@gmail.com>
Authored: Wed May 13 17:58:29 2015 -0700
Committer: Michael Armbrust <mi...@databricks.com>
Committed: Wed May 13 17:58:29 2015 -0700

----------------------------------------------------------------------
 .../main/scala/org/apache/spark/sql/parquet/fsBasedParquet.scala   | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark/blob/728af88c/sql/core/src/main/scala/org/apache/spark/sql/parquet/fsBasedParquet.scala
----------------------------------------------------------------------
diff --git a/sql/core/src/main/scala/org/apache/spark/sql/parquet/fsBasedParquet.scala b/sql/core/src/main/scala/org/apache/spark/sql/parquet/fsBasedParquet.scala
index d810d6a..c83a9c3 100644
--- a/sql/core/src/main/scala/org/apache/spark/sql/parquet/fsBasedParquet.scala
+++ b/sql/core/src/main/scala/org/apache/spark/sql/parquet/fsBasedParquet.scala
@@ -231,7 +231,7 @@ private[sql] class FSBasedParquetRelation(
       filters: Array[Filter],
       inputPaths: Array[String]): RDD[Row] = {
 
-    val job = Job.getInstance(SparkHadoopUtil.get.conf)
+    val job = new Job(SparkHadoopUtil.get.conf)
     val conf = ContextUtil.getConfiguration(job)
 
     ParquetInputFormat.setReadSupportClass(job, classOf[RowReadSupport])
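
The one-line change above swaps the static factory `Job.getInstance(conf)` for the constructor `new Job(conf)`. This is likely a compatibility fix: `Job.getInstance(Configuration)` is not available in older Hadoop 1.x releases that Spark still supported at the time, whereas the (deprecated in Hadoop 2.x) constructor exists in both lines. The sketch below illustrates the general pattern with hypothetical stand-in classes (`LegacyJob`, `CompatDemo` are not Hadoop or Spark APIs), since the real `Job` class requires a Hadoop dependency:

```scala
// Hypothetical stand-in for Hadoop's `Job`: the constructor exists in all
// versions, while the static factory was only added in a newer release.
class LegacyJob(val conf: Map[String, String])

object LegacyJob {
  // Newer-style factory; in the real Hadoop API, calling this against an
  // old 1.x jar at runtime would throw NoSuchMethodError.
  def getInstance(conf: Map[String, String]): LegacyJob = new LegacyJob(conf)
}

object CompatDemo {
  def main(args: Array[String]): Unit = {
    val conf = Map("fs.defaultFS" -> "hdfs://localhost:9000")
    // Prefer the constructor when older library versions must be supported,
    // even if it is deprecated in newer ones -- the bytecode reference it
    // produces resolves against both.
    val job = new LegacyJob(conf)
    println(job.conf("fs.defaultFS"))
  }
}
```

The trade-off is a deprecation warning at compile time in exchange for binary compatibility across both Hadoop major versions.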


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@spark.apache.org
For additional commands, e-mail: commits-help@spark.apache.org