Posted to reviews@spark.apache.org by jiangxb1987 <gi...@git.apache.org> on 2017/10/02 15:01:30 UTC

[GitHub] spark pull request #19388: [SPARK-22162] Executors and the driver should use...

Github user jiangxb1987 commented on a diff in the pull request:

    https://github.com/apache/spark/pull/19388#discussion_r142155725
  
    --- Diff: core/src/test/scala/org/apache/spark/rdd/PairRDDFunctionsSuite.scala ---
    @@ -524,6 +525,14 @@ class PairRDDFunctionsSuite extends SparkFunSuite with SharedSparkContext {
         pairs.saveAsNewAPIHadoopFile[ConfigTestFormat]("ignored")
       }
     
    +  test("The JobId on driver and executor should be the same during the commit") {
    +    // Create more than one rdd to mimic stageId not equal to rddId
    +    val pairs = sc.parallelize(Array((1, 2), (2, 3)), 2).
    +      map { p => (new Integer(p._1 + 1), new Integer(p._2 + 1)) }.filter { p => p._1 > 0 }
    --- End diff --
    
    nit:
    ```
    val pairs = sc.parallelize(Array((1, 2), (2, 3)), 2).map { p =>
      (new Integer(p._1 + 1), new Integer(p._2 + 1))
    }.filter { p => p._1 > 0 }
    ```
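For context, here is a self-contained sketch of how the suggested formatting reads in a runnable snippet. The local SparkContext setup is an assumption for illustration only; the actual test runs against the suite's SharedSparkContext, and the assertion here just checks the map/filter result, not the JobId behavior the test is about:

```scala
import org.apache.spark.{SparkConf, SparkContext}

object FormattingSketch {
  def main(args: Array[String]): Unit = {
    // Hypothetical local context standing in for the suite's SharedSparkContext.
    val sc = new SparkContext(
      new SparkConf().setMaster("local[2]").setAppName("formatting-sketch"))

    // The nit's formatting: break inside the multi-line closure rather than
    // ending a line with a trailing dot.
    val pairs = sc.parallelize(Array((1, 2), (2, 3)), 2).map { p =>
      (new Integer(p._1 + 1), new Integer(p._2 + 1))
    }.filter { p => p._1 > 0 }

    // Both elements survive the filter: (2, 3) and (3, 4).
    assert(pairs.collect().toSet == Set((2, 3), (3, 4)))
    sc.stop()
  }
}
```

The likely rationale for the nit: opening the `map { p =>` closure on the same line as the call, instead of continuing a chain with a dangling `.` at the end of a line, matches the prevailing style elsewhere in the Spark codebase.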


---

---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org