Posted to commits@spark.apache.org by do...@apache.org on 2020/06/11 00:37:34 UTC

[spark] branch master updated: [SPARK-31935][SQL][TESTS][FOLLOWUP] Fix the test case for Hadoop2/3

This is an automated email from the ASF dual-hosted git repository.

dongjoon pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
     new c7d45c0  [SPARK-31935][SQL][TESTS][FOLLOWUP] Fix the test case for Hadoop2/3
c7d45c0 is described below

commit c7d45c0e0b8c077da8ed4a902503a6102becf255
Author: Dongjoon Hyun <do...@apache.org>
AuthorDate: Wed Jun 10 17:36:32 2020 -0700

    [SPARK-31935][SQL][TESTS][FOLLOWUP] Fix the test case for Hadoop2/3
    
    ### What changes were proposed in this pull request?
    
    This PR updates the test case so it accepts the error messages from both Hadoop 2 and Hadoop 3 correctly.
    
    ### Why are the changes needed?
    
    SPARK-31935 (https://github.com/apache/spark/pull/28760) breaks the Hadoop 3.2 UT because Hadoop 2 and Hadoop 3 throw different exception messages.
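    
    A minimal, standalone sketch of the normalization used in the updated test. The two raw messages below are inferred from the characters the test strips (colon vs. quotes), so treat them as assumptions rather than the exact Hadoop output:
    
    ```
    // Hadoop 2.x and Hadoop 3.x report the missing scheme slightly differently
    // (colon vs. quotes, as implied by the characters stripped in the test).
    val hadoop2Message = "No FileSystem for scheme: nonexistFS"    // assumed Hadoop 2 form
    val hadoop3Message = "No FileSystem for scheme \"nonexistFS\"" // assumed Hadoop 3 form
    
    // Strip the version-specific punctuation so one expected string matches both.
    def normalize(msg: String): String = msg.filterNot(Set(':', '"').contains)
    
    val expectMessage = "No FileSystem for scheme nonexistFS"
    assert(normalize(hadoop2Message) == expectMessage)
    assert(normalize(hadoop3Message) == expectMessage)
    ```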
    
    ### Does this PR introduce _any_ user-facing change?
    
    No.
    
    ### How was this patch tested?
    
    Pass the Jenkins tests with both Hadoop 2 and Hadoop 3, or run the following manually.
    
    **Hadoop 2.7**
    ```
    $ build/sbt "sql/testOnly *.FileBasedDataSourceSuite -- -z SPARK-31935"
    ...
    [info] All tests passed.
    ```
    
    **Hadoop 3.2**
    ```
    $ build/sbt "sql/testOnly *.FileBasedDataSourceSuite -- -z SPARK-31935" -Phadoop-3.2
    ...
    [info] All tests passed.
    ```
    
    Closes #28791 from dongjoon-hyun/SPARK-31935.
    
    Authored-by: Dongjoon Hyun <do...@apache.org>
    Signed-off-by: Dongjoon Hyun <do...@apache.org>
---
 .../test/scala/org/apache/spark/sql/FileBasedDataSourceSuite.scala  | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/sql/core/src/test/scala/org/apache/spark/sql/FileBasedDataSourceSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/FileBasedDataSourceSuite.scala
index efc7cac..d8157d3 100644
--- a/sql/core/src/test/scala/org/apache/spark/sql/FileBasedDataSourceSuite.scala
+++ b/sql/core/src/test/scala/org/apache/spark/sql/FileBasedDataSourceSuite.scala
@@ -849,15 +849,15 @@ class FileBasedDataSourceSuite extends QueryTest
         withTempDir { dir =>
           val path = dir.getCanonicalPath
           val defaultFs = "nonexistFS://nonexistFS"
-          val expectMessage = "No FileSystem for scheme: nonexistFS"
+          val expectMessage = "No FileSystem for scheme nonexistFS"
           val message1 = intercept[java.io.IOException] {
             spark.range(10).write.option("fs.defaultFS", defaultFs).parquet(path)
           }.getMessage
-          assert(message1 == expectMessage)
+          assert(message1.filterNot(Set(':', '"').contains) == expectMessage)
           val message2 = intercept[java.io.IOException] {
             spark.read.option("fs.defaultFS", defaultFs).parquet(path)
           }.getMessage
-          assert(message2 == expectMessage)
+          assert(message2.filterNot(Set(':', '"').contains) == expectMessage)
         }
       }
     }

