Posted to commits@spark.apache.org by jo...@apache.org on 2016/02/18 00:08:37 UTC

spark git commit: [SPARK-12953][EXAMPLES] RDDRelation writer set overwrite mode

Repository: spark
Updated Branches:
  refs/heads/master 1eac38000 -> 97ee85daf


[SPARK-12953][EXAMPLES] RDDRelation writer set overwrite mode

https://issues.apache.org/jira/browse/SPARK-12953

Fix the error that occurs when running RDDRelation.main():
"path file:/Users/sjk/pair.parquet already exists"

Set the DataFrameWriter's save mode to SaveMode.Overwrite so repeated runs replace the existing output instead of failing.
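
As a rough illustration, here is a minimal, self-contained sketch of the pattern this commit applies, assuming the Spark 1.6-era SQLContext API used by the example. The object name below is a placeholder; the output path matches the one in the example:

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.{SaveMode, SQLContext}

    object OverwriteSketch {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("OverwriteSketch").setMaster("local[*]"))
        val sqlContext = new SQLContext(sc)
        import sqlContext.implicits._

        // Build a small DataFrame from an RDD of (key, value) pairs.
        val df = sc.parallelize(1 to 10).map(i => (i, s"val_$i")).toDF("key", "value")

        // The writer's default save mode is ErrorIfExists, so a second run fails with
        // "path ... already exists". Overwrite replaces the previous output instead.
        df.write.mode(SaveMode.Overwrite).parquet("pair.parquet")

        sc.stop()
      }
    }

With Overwrite, re-running the example is safe because the previous Parquet output is simply replaced.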

Author: shijinkui <sh...@163.com>

Closes #10864 from shijinkui/set_mode.


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/97ee85da
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/97ee85da
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/97ee85da

Branch: refs/heads/master
Commit: 97ee85daf68345cf5c3c11ae5bf288cc697bdf9e
Parents: 1eac380
Author: shijinkui <sh...@163.com>
Authored: Wed Feb 17 15:08:22 2016 -0800
Committer: Josh Rosen <jo...@databricks.com>
Committed: Wed Feb 17 15:08:22 2016 -0800

----------------------------------------------------------------------
 .../scala/org/apache/spark/examples/sql/RDDRelation.scala     | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark/blob/97ee85da/examples/src/main/scala/org/apache/spark/examples/sql/RDDRelation.scala
----------------------------------------------------------------------
diff --git a/examples/src/main/scala/org/apache/spark/examples/sql/RDDRelation.scala b/examples/src/main/scala/org/apache/spark/examples/sql/RDDRelation.scala
index 2cc56f0..a2f0fcd 100644
--- a/examples/src/main/scala/org/apache/spark/examples/sql/RDDRelation.scala
+++ b/examples/src/main/scala/org/apache/spark/examples/sql/RDDRelation.scala
@@ -19,8 +19,7 @@
 package org.apache.spark.examples.sql
 
 import org.apache.spark.{SparkConf, SparkContext}
-import org.apache.spark.sql.SQLContext
-import org.apache.spark.sql.functions._
+import org.apache.spark.sql.{SaveMode, SQLContext}
 
 // One method for defining the schema of an RDD is to make a case class with the desired column
 // names and types.
@@ -58,8 +57,8 @@ object RDDRelation {
     // Queries can also be written using a LINQ-like Scala DSL.
     df.where($"key" === 1).orderBy($"value".asc).select($"key").collect().foreach(println)
 
-    // Write out an RDD as a parquet file.
-    df.write.parquet("pair.parquet")
+    // Write out an RDD as a parquet file with overwrite mode.
+    df.write.mode(SaveMode.Overwrite).parquet("pair.parquet")
 
     // Read in parquet file.  Parquet files are self-describing so the schema is preserved.
     val parquetFile = sqlContext.read.parquet("pair.parquet")


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@spark.apache.org
For additional commands, e-mail: commits-help@spark.apache.org