Posted to reviews@spark.apache.org by HyukjinKwon <gi...@git.apache.org> on 2017/05/15 15:57:32 UTC

[GitHub] spark pull request #17987: [WIP][SPARK-19707][SPARK-18922][TESTS][SQL][CORE]...

GitHub user HyukjinKwon opened a pull request:

    https://github.com/apache/spark/pull/17987

    [WIP][SPARK-19707][SPARK-18922][TESTS][SQL][CORE] Fix test failures/the invalid path check for sc.addJar on Windows

    ## What changes were proposed in this pull request?
    
    This PR proposes two things:
    
    - A follow-up for SPARK-19707 (improving the invalid path check for `sc.addJar` on Windows as well).
    
    ```
    org.apache.spark.SparkContextSuite:
     - add jar with invalid path *** FAILED *** (32 milliseconds)
       2 was not equal to 1 (SparkContextSuite.scala:309)
       ...
    ```
    
    - Fix path vs URI related test failures on Windows.
    
    ```
    org.apache.spark.storage.LocalDirsSuite:
     - SPARK_LOCAL_DIRS override also affects driver *** FAILED *** (0 milliseconds)
       new java.io.File("/NONEXISTENT_PATH").exists() was true (LocalDirsSuite.scala:50)
       ...
    
     - Utils.getLocalDir() throws an exception if any temporary directory cannot be retrieved *** FAILED *** (15 milliseconds)
       Expected exception java.io.IOException to be thrown, but no exception was thrown. (LocalDirsSuite.scala:64)
       ...
    ```
    
    ```
    org.apache.spark.sql.hive.HiveSchemaInferenceSuite:
     - orc: schema should be inferred and saved when INFER_AND_SAVE is specified *** FAILED *** (203 milliseconds)
       java.net.URISyntaxException: Illegal character in opaque part at index 2: C:\projects\spark\target\tmp\spark-dae61ab3-a851-4dd3-bf4e-be97c501f254
       ...
    
     - parquet: schema should be inferred and saved when INFER_AND_SAVE is specified *** FAILED *** (203 milliseconds)
       java.net.URISyntaxException: Illegal character in opaque part at index 2: C:\projects\spark\target\tmp\spark-fa3aff89-a66e-4376-9a37-2a9b87596939
       ...
    
     - orc: schema should be inferred but not stored when INFER_ONLY is specified *** FAILED *** (141 milliseconds)
       java.net.URISyntaxException: Illegal character in opaque part at index 2: C:\projects\spark\target\tmp\spark-fb464e59-b049-481b-9c75-f53295c9fc2c
       ...
    
     - parquet: schema should be inferred but not stored when INFER_ONLY is specified *** FAILED *** (125 milliseconds)
       java.net.URISyntaxException: Illegal character in opaque part at index 2: C:\projects\spark\target\tmp\spark-9487568e-80a4-42b3-b0a5-d95314c4ccbc
       ...
    
     - orc: schema should not be inferred when NEVER_INFER is specified *** FAILED *** (156 milliseconds)
       java.net.URISyntaxException: Illegal character in opaque part at index 2: C:\projects\spark\target\tmp\spark-0d2dfa45-1b0f-4958-a8be-1074ed0135a
       ...
    
     - parquet: schema should not be inferred when NEVER_INFER is specified *** FAILED *** (547 milliseconds)
       java.net.URISyntaxException: Illegal character in opaque part at index 2: C:\projects\spark\target\tmp\spark-6d95d64e-613e-4a59-a0f6-d198c5aa51ee
       ...
    ```
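
    The `URISyntaxException` failures above can be reproduced on any OS, not just Windows: `java.net.URI` parses the `C:` prefix of a raw Windows path as a URI scheme, and the backslash that follows is an illegal character in the opaque part. A minimal sketch (illustration only, not part of the patch):

    ```scala
    import java.net.URI

    // "C:" parses as a scheme, so "\projects..." becomes the opaque part and
    // the backslash at index 2 is rejected.
    val thrown =
      try { new URI("C:\\projects\\spark\\target\\tmp"); false }
      catch { case e: java.net.URISyntaxException => e.getMessage.contains("opaque") }

    // The usual fix is to build the URI via java.io.File#toURI rather than
    // passing the raw OS path string to the URI constructor.
    ```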
    
    ```
    org.apache.spark.sql.execution.command.DDLSuite:
     - create temporary view using *** FAILED *** (15 milliseconds)
       org.apache.spark.sql.AnalysisException: Path does not exist: file:/C:projectsspark	arget	mpspark-3881d9ca-561b-488d-90b9-97587472b853	mp;
       ...
    
     - insert data to a data source table which has a non-existing location should succeed *** FAILED *** (109 milliseconds)
       file:/C:projectsspark%09arget%09mpspark-4cad3d19-6085-4b75-b407-fe5e9d21df54 did not equal file:///C:/projects/spark/target/tmp/spark-4cad3d19-6085-4b75-b407-fe5e9d21df54 (DDLSuite.scala:1869)
       ...
    
     - insert into a data source table with a non-existing partition location should succeed *** FAILED *** (94 milliseconds)
       file:/C:projectsspark%09arget%09mpspark-4b52e7de-e3aa-42fd-95d4-6d4d58d1d95d did not equal file:///C:/projects/spark/target/tmp/spark-4b52e7de-e3aa-42fd-95d4-6d4d58d1d95d (DDLSuite.scala:1910)
       ...
    
     - read data from a data source table which has a non-existing location should succeed *** FAILED *** (93 milliseconds)
       file:/C:projectsspark%09arget%09mpspark-f8c281e2-08c2-4f73-abbf-f3865b702c34 did not equal file:///C:/projects/spark/target/tmp/spark-f8c281e2-08c2-4f73-abbf-f3865b702c34 (DDLSuite.scala:1937)
       ...
    
     - read data from a data source table with non-existing partition location should succeed *** FAILED *** (110 milliseconds)
       java.lang.IllegalArgumentException: Can not create a Path from an empty string
       ...
    
     - create datasource table with a non-existing location *** FAILED *** (94 milliseconds)
       file:/C:projectsspark%09arget%09mpspark-387316ae-070c-4e78-9b78-19ebf7b29ec8 did not equal file:///C:/projects/spark/target/tmp/spark-387316ae-070c-4e78-9b78-19ebf7b29ec8 (DDLSuite.scala:1982)
       ...
    
     - CTAS for external data source table with a non-existing location *** FAILED *** (16 milliseconds)
       java.lang.IllegalArgumentException: Can not create a Path from an empty string
       ...
    
     - CTAS for external data source table with a existed location *** FAILED *** (15 milliseconds)
       java.lang.IllegalArgumentException: Can not create a Path from an empty string
       ...
    
     - data source table:partition column name containing a b *** FAILED *** (125 milliseconds)
       java.lang.IllegalArgumentException: Can not create a Path from an empty string
       ...
    
     - data source table:partition column name containing a:b *** FAILED *** (143 milliseconds)
       java.lang.IllegalArgumentException: Can not create a Path from an empty string
       ...
    
     - data source table:partition column name containing a%b *** FAILED *** (109 milliseconds)
       java.lang.IllegalArgumentException: Can not create a Path from an empty string
       ...
    
     - data source table:partition column name containing a,b *** FAILED *** (109 milliseconds)
       java.lang.IllegalArgumentException: Can not create a Path from an empty string
       ...
    
     - location uri contains a b for datasource table *** FAILED *** (94 milliseconds)
       file:/C:projectsspark%09arget%09mpspark-5739cda9-b702-4e14-932c-42e8c4174480a%20b did not equal file:///C:/projects/spark/target/tmp/spark-5739cda9-b702-4e14-932c-42e8c4174480/a%20b (DDLSuite.scala:2084)
       ...
    
     - location uri contains a:b for datasource table *** FAILED *** (78 milliseconds)
       file:/C:projectsspark%09arget%09mpspark-9bdd227c-840f-4f08-b7c5-4036638f098da:b did not equal file:///C:/projects/spark/target/tmp/spark-9bdd227c-840f-4f08-b7c5-4036638f098d/a:b (DDLSuite.scala:2084)
       ...
    
     - location uri contains a%b for datasource table *** FAILED *** (78 milliseconds)
       file:/C:projectsspark%09arget%09mpspark-62bb5f1d-fa20-460a-b534-cb2e172a3640a%25b did not equal file:///C:/projects/spark/target/tmp/spark-62bb5f1d-fa20-460a-b534-cb2e172a3640/a%25b (DDLSuite.scala:2084)
       ...
    
     - location uri contains a b for database *** FAILED *** (16 milliseconds)
       org.apache.spark.sql.AnalysisException: org.apache.hadoop.hive.ql.metadata.HiveException: MetaException(message:java.lang.IllegalArgumentException: Can not create a Path from an empty string);
       ...
    
     - location uri contains a:b for database *** FAILED *** (15 milliseconds)
       org.apache.spark.sql.AnalysisException: org.apache.hadoop.hive.ql.metadata.HiveException: MetaException(message:java.lang.IllegalArgumentException: Can not create a Path from an empty string);
       ...
    
     - location uri contains a%b for database *** FAILED *** (0 milliseconds)
       org.apache.spark.sql.AnalysisException: org.apache.hadoop.hive.ql.metadata.HiveException: MetaException(message:java.lang.IllegalArgumentException: Can not create a Path from an empty string);
       ...
    ```
    
    ```
    org.apache.spark.sql.hive.execution.HiveDDLSuite:
     - create hive table with a non-existing location *** FAILED *** (16 milliseconds)
       org.apache.spark.sql.AnalysisException: org.apache.hadoop.hive.ql.metadata.HiveException: MetaException(message:java.lang.IllegalArgumentException: Can not create a Path from an empty string);
       ...
    
     - CTAS for external hive table with a non-existing location *** FAILED *** (16 milliseconds)
       org.apache.spark.sql.AnalysisException: org.apache.hadoop.hive.ql.metadata.HiveException: MetaException(message:java.lang.IllegalArgumentException: Can not create a Path from an empty string);
       ...
    
     - CTAS for external hive table with a existed location *** FAILED *** (16 milliseconds)
       org.apache.spark.sql.AnalysisException: org.apache.hadoop.hive.ql.metadata.HiveException: MetaException(message:java.lang.IllegalArgumentException: Can not create a Path from an empty string);
       ...
    
     - partition column name of parquet table containing a b *** FAILED *** (156 milliseconds)
       java.lang.IllegalArgumentException: Can not create a Path from an empty string
       ...
    
     - partition column name of parquet table containing a:b *** FAILED *** (94 milliseconds)
       java.lang.IllegalArgumentException: Can not create a Path from an empty string
       ...
    
     - partition column name of parquet table containing a%b *** FAILED *** (125 milliseconds)
       java.lang.IllegalArgumentException: Can not create a Path from an empty string
       ...
    
     - partition column name of parquet table containing a,b *** FAILED *** (110 milliseconds)
       java.lang.IllegalArgumentException: Can not create a Path from an empty string
       ...
    
     - partition column name of hive table containing a b *** FAILED *** (15 milliseconds)
       org.apache.spark.sql.AnalysisException: org.apache.hadoop.hive.ql.metadata.HiveException: MetaException(message:java.lang.IllegalArgumentException: Can not create a Path from an empty string);
       ...
    
     - partition column name of hive table containing a:b *** FAILED *** (16 milliseconds)
       org.apache.spark.sql.AnalysisException: org.apache.hadoop.hive.ql.metadata.HiveException: MetaException(message:java.lang.IllegalArgumentException: Can not create a Path from an empty string);
       ...
    
     - partition column name of hive table containing a%b *** FAILED *** (16 milliseconds)
       org.apache.spark.sql.AnalysisException: org.apache.hadoop.hive.ql.metadata.HiveException: MetaException(message:java.lang.IllegalArgumentException: Can not create a Path from an empty string);
       ...
    
     - partition column name of hive table containing a,b *** FAILED *** (0 milliseconds)
       org.apache.spark.sql.AnalysisException: org.apache.hadoop.hive.ql.metadata.HiveException: MetaException(message:java.lang.IllegalArgumentException: Can not create a Path from an empty string);
       ...
    
     - hive table: location uri contains a b *** FAILED *** (0 milliseconds)
       org.apache.spark.sql.AnalysisException: org.apache.hadoop.hive.ql.metadata.HiveException: MetaException(message:java.lang.IllegalArgumentException: Can not create a Path from an empty string);
       ...
    
     - hive table: location uri contains a:b *** FAILED *** (0 milliseconds)
       org.apache.spark.sql.AnalysisException: org.apache.hadoop.hive.ql.metadata.HiveException: MetaException(message:java.lang.IllegalArgumentException: Can not create a Path from an empty string);
       ...
    
     - hive table: location uri contains a%b *** FAILED *** (0 milliseconds)
       org.apache.spark.sql.AnalysisException: org.apache.hadoop.hive.ql.metadata.HiveException: MetaException(message:java.lang.IllegalArgumentException: Can not create a Path from an empty string);
       ...
    ```
    
    ```
    org.apache.spark.sql.sources.PathOptionSuite:
     - path option also exist for write path *** FAILED *** (94 milliseconds)
       file:/C:projectsspark%09arget%09mpspark-2870b281-7ac0-43d6-b6b6-134e01ab6fdc did not equal file:///C:/projects/spark/target/tmp/spark-2870b281-7ac0-43d6-b6b6-134e01ab6fdc (PathOptionSuite.scala:98)
       ...
    ```
    
    ```
    org.apache.spark.sql.CachedTableSuite:
     - SPARK-19765: UNCACHE TABLE should un-cache all cached plans that refer to this table *** FAILED *** (110 milliseconds)
       java.lang.IllegalArgumentException: Can not create a Path from an empty string
       ...
    ```
    
    ```
    org.apache.spark.sql.execution.DataSourceScanExecRedactionSuite:
     - treeString is redacted *** FAILED *** (250 milliseconds)
       "file:/C:/projects/spark/target/tmp/spark-3ecc1fa4-3e76-489c-95f4-f0b0500eae28" did not contain "C:\projects\spark\target\tmp\spark-3ecc1fa4-3e76-489c-95f4-f0b0500eae28" (DataSourceScanExecRedactionSuite.scala:46)
       ...
    ```
    
    ## How was this patch tested?
    
    Each suite above was run via AppVeyor and confirmed to pass at least once. They should be retested via AppVeyor in this PR.


You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/HyukjinKwon/spark windows-20170515

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/17987.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #17987
    
----
commit ea113211b09d0b9b7876472d8bcb47476b6f598d
Author: hyukjinkwon <gu...@gmail.com>
Date:   2017-05-15T10:00:31Z

    Fix test failures on Windows

----


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastructure@apache.org or file a JIRA ticket
with INFRA.
---

---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org


[GitHub] spark pull request #17987: [WIP][SPARK-19707][SPARK-18922][TESTS][SQL][CORE]...

Posted by HyukjinKwon <gi...@git.apache.org>.
Github user HyukjinKwon commented on a diff in the pull request:

    https://github.com/apache/spark/pull/17987#discussion_r116534680
  
    --- Diff: sql/core/src/test/scala/org/apache/spark/sql/execution/command/DDLSuite.scala ---
    @@ -2041,15 +2046,22 @@ abstract class DDLSuite extends QueryTest with SQLTestUtils {
     
       Seq("a b", "a:b", "a%b").foreach { specialChars =>
         test(s"location uri contains $specialChars for datasource table") {
     +      // On Windows, a colon in a file name appears to be illegal by default. See
    +      // https://support.microsoft.com/en-us/help/289627
    +      assume(!Utils.isWindows || specialChars != "a:b")
    +
           withTable("t", "t1") {
             withTempDir { dir =>
               val loc = new File(dir, specialChars)
               loc.mkdir()
     +          // The parser does not recognize the backslashes on Windows as they are.
    +          // These currently should be escaped.
    +          val escapedLoc = loc.getAbsolutePath.replace("\\", "\\\\")
    --- End diff --
    
    Here, with special characters in the path, it seems we can't use a URI because of a double-encoding problem. For example, a white space ` ` becomes `%20` when the path is converted to a URI, and that `%20` becomes `%2520` after another round of encoding.
    
    So this change just turns `\` into `\\` so that the Windows path goes through the parser.
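
    Both points can be sketched outside Spark (a minimal illustration with made-up paths, not the PR's code):

    ```scala
    import java.io.File

    // Double-encoding: File#toURI percent-encodes once; feeding an already
    // encoded string through it encodes the '%' itself.
    val once  = new File("/tmp/a b").toURI.toString    // ends with "a%20b"
    val twice = new File("/tmp/a%20b").toURI.toString  // ends with "a%2520b"

    // Backslash doubling: escape Windows separators so a parser does not read
    // sequences like "\t" in "...\target\tmp" as control characters.
    val escaped = "C:\\projects\\spark\\target\\tmp".replace("\\", "\\\\")
    ```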




[GitHub] spark issue #17987: [SPARK-19707][SPARK-18922][TESTS][SQL][CORE] Fix test fa...

Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the issue:

    https://github.com/apache/spark/pull/17987
  
    Test PASSed.
    Refer to this link for build results (access rights to CI server needed): 
    https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/77269/
    Test PASSed.




[GitHub] spark issue #17987: [SPARK-19707][SPARK-18922][TESTS][SQL][CORE] Fix test fa...

Posted by SparkQA <gi...@git.apache.org>.
Github user SparkQA commented on the issue:

    https://github.com/apache/spark/pull/17987
  
    **[Test build #77269 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/77269/testReport)** for PR 17987 at commit [`1af7324`](https://github.com/apache/spark/commit/1af732442de7d002daf38a13aff72db335509ff2).




[GitHub] spark issue #17987: [SPARK-19707][SPARK-18922][TESTS][SQL][CORE] Fix test fa...

Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the issue:

    https://github.com/apache/spark/pull/17987
  
    Merged build finished. Test PASSed.




[GitHub] spark issue #17987: [SPARK-19707][SPARK-18922][TESTS][SQL][CORE] Fix test fa...

Posted by HyukjinKwon <gi...@git.apache.org>.
Github user HyukjinKwon commented on the issue:

    https://github.com/apache/spark/pull/17987
  
    retest this please




[GitHub] spark issue #17987: [WIP][SPARK-19707][SPARK-18922][TESTS][SQL][CORE] Fix te...

Posted by HyukjinKwon <gi...@git.apache.org>.
Github user HyukjinKwon commented on the issue:

    https://github.com/apache/spark/pull/17987
  
    There are many flaky tests on Windows (and somehow they seem to have become flakier than before), so I can't guarantee these are all the instances, but I believe this covers almost all of them.
    
    Some of the tests above will probably fail due to that flakiness; I will re-trigger them.




[GitHub] spark issue #17987: [WIP][SPARK-19707][SPARK-18922][TESTS][SQL][CORE] Fix te...

Posted by HyukjinKwon <gi...@git.apache.org>.
Github user HyukjinKwon commented on the issue:

    https://github.com/apache/spark/pull/17987
  
    Build started: [TESTS] `org.apache.spark.SparkContextSuite` [![PR-17987](https://ci.appveyor.com/api/projects/status/github/spark-test/spark?branch=D9FFA824-9327-4D37-9AB1-DE0E1C7FDBBC&svg=true)](https://ci.appveyor.com/project/spark-test/spark/branch/D9FFA824-9327-4D37-9AB1-DE0E1C7FDBBC)
    Build started: [TESTS] `org.apache.spark.storage.LocalDirsSuite` [![PR-17987](https://ci.appveyor.com/api/projects/status/github/spark-test/spark?branch=A4932623-3429-480A-892C-029304929171&svg=true)](https://ci.appveyor.com/project/spark-test/spark/branch/A4932623-3429-480A-892C-029304929171)
    Build started: [TESTS] `org.apache.spark.sql.hive.HiveSchemaInferenceSuite` [![PR-17987](https://ci.appveyor.com/api/projects/status/github/spark-test/spark?branch=B0CA574F-02D7-4742-AE19-C3AE5DE7C463&svg=true)](https://ci.appveyor.com/project/spark-test/spark/branch/B0CA574F-02D7-4742-AE19-C3AE5DE7C463)
    Build started: [TESTS] `org.apache.spark.sql.execution.command.DDLSuite` [![PR-17987](https://ci.appveyor.com/api/projects/status/github/spark-test/spark?branch=CDA64F15-1716-4A12-9127-6FFAE66B1721&svg=true)](https://ci.appveyor.com/project/spark-test/spark/branch/CDA64F15-1716-4A12-9127-6FFAE66B1721)
    Build started: [TESTS] `org.apache.spark.sql.hive.execution.HiveDDLSuite` [![PR-17987](https://ci.appveyor.com/api/projects/status/github/spark-test/spark?branch=2A5A4D81-01B9-4CB4-8444-11CA9C82AF24&svg=true)](https://ci.appveyor.com/project/spark-test/spark/branch/2A5A4D81-01B9-4CB4-8444-11CA9C82AF24)
    Build started: [TESTS] `org.apache.spark.sql.sources.PathOptionSuite` [![PR-17987](https://ci.appveyor.com/api/projects/status/github/spark-test/spark?branch=34354B28-942A-4DC7-931D-C708E10123D0&svg=true)](https://ci.appveyor.com/project/spark-test/spark/branch/34354B28-942A-4DC7-931D-C708E10123D0)
    Build started: [TESTS] `org.apache.spark.sql.CachedTableSuite` [![PR-17987](https://ci.appveyor.com/api/projects/status/github/spark-test/spark?branch=D244E646-8D34-4D87-B1BC-4650260216EC&svg=true)](https://ci.appveyor.com/project/spark-test/spark/branch/D244E646-8D34-4D87-B1BC-4650260216EC)
    Build started: [TESTS] `org.apache.spark.sql.execution.DataSourceScanExecRedactionSuite` [![PR-17987](https://ci.appveyor.com/api/projects/status/github/spark-test/spark?branch=70831B99-0DD3-48CB-BDB1-EED2B7F0BEB2&svg=true)](https://ci.appveyor.com/project/spark-test/spark/branch/70831B99-0DD3-48CB-BDB1-EED2B7F0BEB2)




[GitHub] spark issue #17987: [WIP][SPARK-19707][SPARK-18922][TESTS][SQL][CORE] Fix te...

Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the issue:

    https://github.com/apache/spark/pull/17987
  
    Merged build finished. Test PASSed.




[GitHub] spark issue #17987: [SPARK-19707][SPARK-18922][TESTS][SQL][CORE] Fix test fa...

Posted by SparkQA <gi...@git.apache.org>.
Github user SparkQA commented on the issue:

    https://github.com/apache/spark/pull/17987
  
    **[Test build #76965 has finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76965/testReport)** for PR 17987 at commit [`1af7324`](https://github.com/apache/spark/commit/1af732442de7d002daf38a13aff72db335509ff2).
     * This patch passes all tests.
     * This patch merges cleanly.
     * This patch adds no public classes.




[GitHub] spark pull request #17987: [WIP][SPARK-19707][SPARK-18922][TESTS][SQL][CORE]...

Posted by HyukjinKwon <gi...@git.apache.org>.
Github user HyukjinKwon commented on a diff in the pull request:

    https://github.com/apache/spark/pull/17987#discussion_r116531475
  
    --- Diff: core/src/test/scala/org/apache/spark/SparkContextSuite.scala ---
    @@ -301,13 +301,13 @@ class SparkContextSuite extends SparkFunSuite with LocalSparkContext with Eventu
         sc = new SparkContext(new SparkConf().setAppName("test").setMaster("local"))
         sc.addJar(tmpJar.getAbsolutePath)
     
    -    // Invaid jar path will only print the error log, will not add to file server.
    +    // Invalid jar path will only print the error log, will not add to file server.
         sc.addJar("dummy.jar")
         sc.addJar("")
         sc.addJar(tmpDir.getAbsolutePath)
     
    -    sc.listJars().size should be (1)
    -    sc.listJars().head should include (tmpJar.getName)
    +    assert(sc.listJars().size == 1)
    --- End diff --
    
    This gives a better error message.
    
    **Before**
    
    ```
    2 was not equal to 1 (SparkContextSuite.scala:309)
    ```
    
    **After**
    
    ```
    ArrayBuffer("spark://172.24.17.81:2411/jars/spark-7e45e7da-ca1b-4e33-8a64-1fb44860ee76", "spark://172.24.17.81:2411/jars/test7132431731623035882.jar") had size 2 instead of expected size 1 (SparkContextSuite.scala:309)
    ```




[GitHub] spark issue #17987: [SPARK-19707][SPARK-18922][TESTS][SQL][CORE] Fix test fa...

Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the issue:

    https://github.com/apache/spark/pull/17987
  
    Test PASSed.
    Refer to this link for build results (access rights to CI server needed): 
    https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/76965/
    Test PASSed.




[GitHub] spark issue #17987: [SPARK-19707][SPARK-18922][TESTS][SQL][CORE] Fix test fa...

Posted by SparkQA <gi...@git.apache.org>.
Github user SparkQA commented on the issue:

    https://github.com/apache/spark/pull/17987
  
    **[Test build #76965 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76965/testReport)** for PR 17987 at commit [`1af7324`](https://github.com/apache/spark/commit/1af732442de7d002daf38a13aff72db335509ff2).




[GitHub] spark pull request #17987: [WIP][SPARK-19707][SPARK-18922][TESTS][SQL][CORE]...

Posted by HyukjinKwon <gi...@git.apache.org>.
Github user HyukjinKwon commented on a diff in the pull request:

    https://github.com/apache/spark/pull/17987#discussion_r116535323
  
    --- Diff: sql/core/src/test/scala/org/apache/spark/sql/execution/command/DDLSuite.scala ---
    @@ -2080,27 +2095,39 @@ abstract class DDLSuite extends QueryTest with SQLTestUtils {
               assert(loc.listFiles().isEmpty)
               spark.sql("INSERT INTO TABLE t1 PARTITION(b=2) SELECT 1")
               val partFile = new File(loc, "b=2")
    -          assert(partFile.listFiles().length >= 1)
    +          assert(partFile.listFiles().nonEmpty)
               checkAnswer(spark.table("t1"), Row("1", "2") :: Nil)
     
               spark.sql("INSERT INTO TABLE t1 PARTITION(b='2017-03-03 12:13%3A14') SELECT 1")
               val partFile1 = new File(loc, "b=2017-03-03 12:13%3A14")
               assert(!partFile1.exists())
    -          val partFile2 = new File(loc, "b=2017-03-03 12%3A13%253A14")
    -          assert(partFile2.listFiles().length >= 1)
    -          checkAnswer(spark.table("t1"), Row("1", "2") :: Row("1", "2017-03-03 12:13%3A14") :: Nil)
    +
    +          if (!Utils.isWindows) {
    +            // Actual path becomes "b=2017-03-03%2012%3A13%253A14" on Windows.
    --- End diff --
    
    For example... `C:\projects\spark\target\tmp\spark-eb6b87cc-4a44-4de4-96f0-f4d4852fe3fc\a b\b=2017-03-03%2012%3A13%253A14`
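
    The escaped directory name above can be reproduced with a tiny sketch of this kind of percent-escaping (`escapePathName` here is a hypothetical stand-in covering only the characters in this example, not Spark's or Hive's actual helper):

    ```scala
    // Percent-escape the characters relevant here: ' ' -> %20, ':' -> %3A,
    // '%' -> %25 (so an already-escaped "%3A" becomes "%253A").
    def escapePathName(s: String): String = s.flatMap {
      case c if " :%".indexOf(c) >= 0 => f"%%${c.toInt}%02X"
      case c                          => c.toString
    }

    val dir = "b=" + escapePathName("2017-03-03 12:13%3A14")
    // dir == "b=2017-03-03%2012%3A13%253A14"
    ```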




[GitHub] spark issue #17987: [WIP][SPARK-19707][SPARK-18922][TESTS][SQL][CORE] Fix te...

Posted by SparkQA <gi...@git.apache.org>.
Github user SparkQA commented on the issue:

    https://github.com/apache/spark/pull/17987
  
    **[Test build #76945 has finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76945/testReport)** for PR 17987 at commit [`ea11321`](https://github.com/apache/spark/commit/ea113211b09d0b9b7876472d8bcb47476b6f598d).
     * This patch passes all tests.
     * This patch merges cleanly.
     * This patch adds no public classes.




[GitHub] spark issue #17987: [WIP][SPARK-19707][SPARK-18922][TESTS][SQL][CORE] Fix te...

Posted by SparkQA <gi...@git.apache.org>.
Github user SparkQA commented on the issue:

    https://github.com/apache/spark/pull/17987
  
    **[Test build #76945 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76945/testReport)** for PR 17987 at commit [`ea11321`](https://github.com/apache/spark/commit/ea113211b09d0b9b7876472d8bcb47476b6f598d).




[GitHub] spark pull request #17987: [WIP][SPARK-19707][SPARK-18922][TESTS][SQL][CORE]...

Posted by HyukjinKwon <gi...@git.apache.org>.
Github user HyukjinKwon commented on a diff in the pull request:

    https://github.com/apache/spark/pull/17987#discussion_r116532209
  
    --- Diff: core/src/test/scala/org/apache/spark/storage/LocalDirsSuite.scala ---
    @@ -37,27 +37,50 @@ class LocalDirsSuite extends SparkFunSuite with BeforeAndAfter {
         Utils.clearLocalRootDirs()
       }
     
    +  private def assumeNonExistentAndNotCreatable(f: File): Unit = {
    +    try {
    +      assume(!f.exists() && !f.mkdirs())
    +    } finally {
    +      Utils.deleteRecursively(f)
    +    }
    +  }
    +
       test("Utils.getLocalDir() returns a valid directory, even if some local dirs are missing") {
    --- End diff --
    
    The problem here is that `Utils.getLocalDir` -> `Utils.getOrCreateLocalRootDirs` actually creates the directory. So, even if the path does not exist, it can be created. I believe a non-existent directory usually cannot be created under the root. However, on Windows this is arguably more likely (at least it seems to work in AppVeyor), as the directory is created under `C:` as below:
    
    ```scala
    scala> val a = new java.io.File("/NONEXISTENT_PATH")
    a: java.io.File = \NONEXISTENT_PATH
    
    scala> a.exists()
    res3: Boolean = false
    
    scala> a.mkdirs()
    res4: Boolean = true
    ```




[GitHub] spark pull request #17987: [SPARK-19707][SPARK-18922][TESTS][SQL][CORE] Fix ...

Posted by asfgit <gi...@git.apache.org>.
Github user asfgit closed the pull request at:

    https://github.com/apache/spark/pull/17987




[GitHub] spark pull request #17987: [WIP][SPARK-19707][SPARK-18922][TESTS][SQL][CORE]...

Posted by HyukjinKwon <gi...@git.apache.org>.
Github user HyukjinKwon commented on a diff in the pull request:

    https://github.com/apache/spark/pull/17987#discussion_r116533688
  
    --- Diff: sql/core/src/test/scala/org/apache/spark/sql/execution/command/DDLSuite.scala ---
    @@ -2018,21 +2019,25 @@ abstract class DDLSuite extends QueryTest with SQLTestUtils {
     
       Seq("a b", "a:b", "a%b", "a,b").foreach { specialChars =>
         test(s"data source table:partition column name containing $specialChars") {
     +      // On Windows, it looks like a colon in the file name is illegal by default. See
    +      // https://support.microsoft.com/en-us/help/289627
    +      assume(!Utils.isWindows || specialChars != "a:b")
    --- End diff --
    
    It seems `:` is not allowed on Windows by default - https://support.microsoft.com/en-us/help/289627
    
    **Windows**
    
    ```scala
    scala> new java.io.File("a").mkdirs()
    res0: Boolean = true
    
    scala> new java.io.File("a:").mkdirs()
    res1: Boolean = false
    
    scala> new java.io.File("a:b").mkdirs()
    res2: Boolean = false
    
    scala> new java.io.File("a\\:b").mkdirs()
    res3: Boolean = false
    ```
    
    **Mac**
    
    ```scala
    scala> new java.io.File("a:b").mkdirs()
    res0: Boolean = true
    ```
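    A hypothetical helper (not in the PR) makes the effect of the `assume` guard explicit: only the `a:b` case is filtered out, and only on Windows:

```scala
// Hypothetical helper mirroring the assume(!Utils.isWindows || specialChars != "a:b")
// guard: on Windows, skip only the case whose name contains a colon.
def runnableSpecialChars(isWindows: Boolean): Seq[String] =
  Seq("a b", "a:b", "a%b", "a,b").filterNot(s => isWindows && s.contains(":"))
```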




[GitHub] spark issue #17987: [SPARK-19707][SPARK-18922][TESTS][SQL][CORE] Fix test fa...

Posted by srowen <gi...@git.apache.org>.
Github user srowen commented on the issue:

    https://github.com/apache/spark/pull/17987
  
    Merged to master/2.2




[GitHub] spark pull request #17987: [WIP][SPARK-19707][SPARK-18922][TESTS][SQL][CORE]...

Posted by HyukjinKwon <gi...@git.apache.org>.
Github user HyukjinKwon commented on a diff in the pull request:

    https://github.com/apache/spark/pull/17987#discussion_r116531008
  
    --- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
    @@ -1801,40 +1801,39 @@ class SparkContext(config: SparkConf) extends Logging {
        * an HTTP, HTTPS or FTP URI, or local:/path for a file on every worker node.
        */
       def addJar(path: String) {
    +    def addJarFile(file: File): String = {
    +      try {
    +        if (!file.exists()) {
    +          throw new FileNotFoundException(s"Jar ${file.getAbsolutePath} not found")
    +        }
    +        if (file.isDirectory) {
    +          throw new IllegalArgumentException(
    +            s"Directory ${file.getAbsoluteFile} is not allowed for addJar")
    +        }
    +        env.rpcEnv.fileServer.addJar(file)
    +      } catch {
    +        case NonFatal(e) =>
    +          logError(s"Failed to add $path to Spark environment", e)
    +          null
    +      }
    +    }
    +
         if (path == null) {
           logWarning("null specified as parameter to addJar")
         } else {
    -      var key = ""
    -      if (path.contains("\\")) {
    +      val key = if (path.contains("\\")) {
             // For local paths with backslashes on Windows, URI throws an exception
    -        key = env.rpcEnv.fileServer.addJar(new File(path))
    +        addJarFile(new File(path))
           } else {
             val uri = new URI(path)
             // SPARK-17650: Make sure this is a valid URL before adding it to the list of dependencies
             Utils.validateURL(uri)
    -        key = uri.getScheme match {
    +        uri.getScheme match {
               // A JAR file which exists only on the driver node
    -          case null | "file" =>
    -            try {
    --- End diff --
    
    Here, I tried to move this try-catch logic into `addJarFile` and used it for local paths with backslashes on Windows. This is covered by the `add jar with invalid path` test in `SparkContextSuite`.
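    A standalone sketch of the refactored shape (the real method registers the jar with `env.rpcEnv.fileServer.addJar(file)` and logs the failure before returning `null`; both are elided or replaced with stand-ins here):

```scala
import java.io.{File, FileNotFoundException}
import scala.util.control.NonFatal

// Sketch of the validate-then-register helper: reject missing files and
// directories, otherwise return a key for the jar; return null on failure.
def addJarFile(file: File): String = {
  try {
    if (!file.exists()) {
      throw new FileNotFoundException(s"Jar ${file.getAbsolutePath} not found")
    }
    if (file.isDirectory) {
      throw new IllegalArgumentException(
        s"Directory ${file.getAbsoluteFile} is not allowed for addJar")
    }
    file.toURI.toString // stand-in for env.rpcEnv.fileServer.addJar(file)
  } catch {
    case NonFatal(e) => null // the real code logs the error before returning null
  }
}
```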




[GitHub] spark issue #17987: [WIP][SPARK-19707][SPARK-18922][TESTS][SQL][CORE] Fix te...

Posted by HyukjinKwon <gi...@git.apache.org>.
Github user HyukjinKwon commented on the issue:

    https://github.com/apache/spark/pull/17987
  
    Thank you @srowen for your approval. (I will remove the `[WIP]` flag if the tests pass.)




[GitHub] spark issue #17987: [WIP][SPARK-19707][SPARK-18922][TESTS][SQL][CORE] Fix te...

Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the issue:

    https://github.com/apache/spark/pull/17987
  
    Test PASSed.
    Refer to this link for build results (access rights to CI server needed): 
    https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/76945/
    Test PASSed.




[GitHub] spark issue #17987: [SPARK-19707][SPARK-18922][TESTS][SQL][CORE] Fix test fa...

Posted by HyukjinKwon <gi...@git.apache.org>.
Github user HyukjinKwon commented on the issue:

    https://github.com/apache/spark/pull/17987
  
    @srowen, I took another look and think it is ready.




[GitHub] spark pull request #17987: [WIP][SPARK-19707][SPARK-18922][TESTS][SQL][CORE]...

Posted by HyukjinKwon <gi...@git.apache.org>.
Github user HyukjinKwon commented on a diff in the pull request:

    https://github.com/apache/spark/pull/17987#discussion_r116535923
  
    --- Diff: sql/core/src/test/scala/org/apache/spark/sql/execution/command/DDLSuite.scala ---
    @@ -2119,24 +2146,30 @@ abstract class DDLSuite extends QueryTest with SQLTestUtils {
         withTable("t", "t1") {
    --- End diff --
    
    This test seems to check that local paths are actually stored as fully qualified URIs. So, here, I used local paths.
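    For reference, a minimal sketch of the qualification being verified (the path below is illustrative): `java.io.File.toURI` turns a bare local path into a fully qualified `file:` URI.

```scala
// Sketch: a bare local path becomes a fully qualified file: URI.
val qualified = new java.io.File("/tmp/some-table-dir").toURI.toString
// yields a string of the form "file:/tmp/some-table-dir"
```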




[GitHub] spark pull request #17987: [WIP][SPARK-19707][SPARK-18922][TESTS][SQL][CORE]...

Posted by HyukjinKwon <gi...@git.apache.org>.
Github user HyukjinKwon commented on a diff in the pull request:

    https://github.com/apache/spark/pull/17987#discussion_r116532435
  
    --- Diff: core/src/test/scala/org/apache/spark/storage/LocalDirsSuite.scala ---
    @@ -37,27 +37,50 @@ class LocalDirsSuite extends SparkFunSuite with BeforeAndAfter {
         Utils.clearLocalRootDirs()
       }
     
    +  private def assumeNonExistentAndNotCreatable(f: File): Unit = {
    +    try {
    +      assume(!f.exists() && !f.mkdirs())
    +    } finally {
    +      Utils.deleteRecursively(f)
    +    }
    +  }
    +
       test("Utils.getLocalDir() returns a valid directory, even if some local dirs are missing") {
    --- End diff --
    
    I am not too sure which path I should provide here for Windows. So, I added `assumeNonExistentAndNotCreatable` here to skip the test when the directory turns out to be creatable.
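    A self-contained variant of that guard (the diff's helper uses ScalaTest's `assume` and Spark's `Utils.deleteRecursively`; this sketch returns a Boolean and does a plain single-level delete instead):

```scala
import java.io.File

// Returns true only when the path neither exists nor can be created,
// cleaning up the probe directory again if mkdirs() happened to succeed.
def nonExistentAndNotCreatable(f: File): Boolean = {
  val existed = f.exists()
  val created = !existed && f.mkdirs()
  if (created) f.delete() // remove the probe directory we just created
  !existed && !created
}
```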




[GitHub] spark issue #17987: [SPARK-19707][SPARK-18922][TESTS][SQL][CORE] Fix test fa...

Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the issue:

    https://github.com/apache/spark/pull/17987
  
    Merged build finished. Test PASSed.




[GitHub] spark issue #17987: [SPARK-19707][SPARK-18922][TESTS][SQL][CORE] Fix test fa...

Posted by SparkQA <gi...@git.apache.org>.
Github user SparkQA commented on the issue:

    https://github.com/apache/spark/pull/17987
  
    **[Test build #77269 has finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/77269/testReport)** for PR 17987 at commit [`1af7324`](https://github.com/apache/spark/commit/1af732442de7d002daf38a13aff72db335509ff2).
     * This patch passes all tests.
     * This patch merges cleanly.
     * This patch adds no public classes.

