Posted to commits@spark.apache.org by li...@apache.org on 2015/05/13 02:33:32 UTC
spark git commit: [HOTFIX] Use the old Job API to support old Hadoop versions
Repository: spark
Updated Branches:
refs/heads/master 77f64c736 -> 247b70349
[HOTFIX] Use the old Job API to support old Hadoop versions
#5526 uses `Job.getInstance`, which does not exist in older Hadoop versions. Replace it with the `new Job` constructor, which is available in both old and new versions.
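As background (not part of this commit): Hadoop 2.x added the static factory `Job.getInstance(Configuration)` and deprecated the `new Job(Configuration)` constructor, while Hadoop 1.x only has the constructors. One generic way to stay source-compatible with both is to try the factory reflectively and fall back to the constructor. The sketch below is a hypothetical illustration of that pattern, not Spark's actual fix, and it uses JDK classes as stand-ins since Hadoop is not assumed to be on the classpath:

```scala
import java.lang.reflect.Modifier

// Hypothetical compatibility helper (not Spark's actual code): prefer a
// public static `getInstance(conf)` factory when the class provides one,
// otherwise fall back to a public one-argument constructor.
def newInstanceCompat(cls: Class[_], conf: AnyRef): AnyRef =
  try {
    val factory = cls.getMethod("getInstance", conf.getClass)
    require(Modifier.isStatic(factory.getModifiers), "getInstance must be static")
    factory.invoke(null, conf)
  } catch {
    case _: NoSuchMethodException =>
      // Old-style API: construct directly, analogous to `new Job(hadoopConf)`.
      cls.getConstructor(conf.getClass).newInstance(conf).asInstanceOf[AnyRef]
  }
```

For example, `java.util.Calendar` exposes a static `getInstance(Locale)` factory, so `newInstanceCompat(classOf[java.util.Calendar], java.util.Locale.US)` takes the factory path, while `java.lang.StringBuilder` has no such factory and falls back to its `StringBuilder(String)` constructor.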
cc liancheng
Author: zsxwing <zs...@gmail.com>
Closes #6095 from zsxwing/hotfix and squashes the following commits:
b0c2049 [zsxwing] Use the old Job API to support old Hadoop versions
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/247b7034
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/247b7034
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/247b7034
Branch: refs/heads/master
Commit: 247b70349c1e4413657359d626d92e0ffbc2b7f1
Parents: 77f64c7
Author: zsxwing <zs...@gmail.com>
Authored: Wed May 13 08:33:24 2015 +0800
Committer: Cheng Lian <li...@databricks.com>
Committed: Wed May 13 08:33:24 2015 +0800
----------------------------------------------------------------------
.../src/main/scala/org/apache/spark/sql/sources/commands.scala | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/spark/blob/247b7034/sql/core/src/main/scala/org/apache/spark/sql/sources/commands.scala
----------------------------------------------------------------------
diff --git a/sql/core/src/main/scala/org/apache/spark/sql/sources/commands.scala b/sql/core/src/main/scala/org/apache/spark/sql/sources/commands.scala
index 127133b..8372d2c 100644
--- a/sql/core/src/main/scala/org/apache/spark/sql/sources/commands.scala
+++ b/sql/core/src/main/scala/org/apache/spark/sql/sources/commands.scala
@@ -88,7 +88,7 @@ private[sql] case class InsertIntoFSBasedRelation(
     }

     if (doInsertion) {
-      val job = Job.getInstance(hadoopConf)
+      val job = new Job(hadoopConf)
       job.setOutputKeyClass(classOf[Void])
       job.setOutputValueClass(classOf[Row])
       FileOutputFormat.setOutputPath(job, qualifiedOutputPath)