Posted to issues@spark.apache.org by "Apache Spark (JIRA)" <ji...@apache.org> on 2016/12/16 07:04:58 UTC

[jira] [Commented] (SPARK-18895) Fix resource-closing-related and path-related test failures in identified ones on Windows

    [ https://issues.apache.org/jira/browse/SPARK-18895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15753671#comment-15753671 ] 

Apache Spark commented on SPARK-18895:
--------------------------------------

User 'HyukjinKwon' has created a pull request for this issue:
https://github.com/apache/spark/pull/16305

> Fix resource-closing-related and path-related test failures in identified ones on Windows
> -----------------------------------------------------------------------------------------
>
>                 Key: SPARK-18895
>                 URL: https://issues.apache.org/jira/browse/SPARK-18895
>             Project: Spark
>          Issue Type: Sub-task
>          Components: Tests
>            Reporter: Hyukjin Kwon
>            Priority: Minor
>
> There are several tests failing on Windows because of resource-closing and path-related problems, as shown below.
> - {{RPackageUtilsSuite}}:
> {code}
> - build an R package from a jar end to end *** FAILED *** (1 second, 625 milliseconds)
>   java.io.IOException: Unable to delete file: C:\projects\spark\target\tmp\1481729427517-0\a\dep2\d\dep2-d.jar
>   at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2279)
>   at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653)
>   at org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535)
> - faulty R package shows documentation *** FAILED *** (359 milliseconds)
>   java.io.IOException: Unable to delete file: C:\projects\spark\target\tmp\1481729428970-0\dep1-c.jar
>   at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2279)
>   at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653)
>   at org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535)
> - SparkR zipping works properly *** FAILED *** (47 milliseconds)
>   java.util.regex.PatternSyntaxException: Unknown character property name {r} near index 4
> C:\projects\spark\target\tmp\1481729429282-0
>     ^
>   at java.util.regex.Pattern.error(Pattern.java:1955)
>   at java.util.regex.Pattern.charPropertyNodeFor(Pattern.java:2781)
> {code}
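> Both failure modes above are Windows-specific. The {{Unable to delete file}} errors happen because Windows, unlike POSIX systems, refuses to delete a file that is still held open (for example by an unclosed {{JarFile}}). The {{PatternSyntaxException}} comes from using a raw Windows path in a regex position: in {{C:\projects\...}}, {{\p}} starts a character-property class, which is exactly the "Unknown character property name {r} near index 4" above. A minimal sketch of both fixes, with illustrative paths rather than the suite's exact code:
> {code}
> import java.util.jar.JarFile
> import java.util.regex.Pattern
>
> // 1) Close handles before cleanup: an unclosed JarFile makes
> //    FileUtils.deleteDirectory throw "Unable to delete file" on Windows.
> val jar = new JarFile("C:\\projects\\spark\\target\\tmp\\dep1-c.jar")
> try {
>   // ... inspect entries ...
> } finally {
>   jar.close()
> }
>
> // 2) Quote paths used as regexes. Unquoted, "\p" in "C:\projects" is
> //    parsed as a character-property class and the pattern fails to compile.
> val tmpDir = "C:\\projects\\spark\\target\\tmp\\1481729429282-0"
> val fullPath = tmpDir + "\\a\\dep2\\d\\dep2-d.jar"
> val relative = fullPath.replaceAll(Pattern.quote(tmpDir), "")
> {code}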
> - {{InputOutputMetricsSuite}}:
> {code}
> - input metrics for old hadoop with coalesce *** FAILED *** (240 milliseconds)
>   java.io.IOException: Not a file: file:/C:/projects/spark/core/ignored
>   at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:277)
>   at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:202)
>   at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
> - input metrics with cache and coalesce *** FAILED *** (109 milliseconds)
>   java.io.IOException: Not a file: file:/C:/projects/spark/core/ignored
>   at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:277)
>   at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:202)
>   at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
> - input metrics for new Hadoop API with coalesce *** FAILED *** (0 milliseconds)
>   java.lang.IllegalArgumentException: Wrong FS: file://C:\projects\spark\target\tmp\spark-9366ec94-dac7-4a5c-a74b-3e7594a692ab\test\InputOutputMetricsSuite.txt, expected: file:///
>   at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:642)
>   at org.apache.hadoop.fs.FileSystem.makeQualified(FileSystem.java:462)
>   at org.apache.hadoop.fs.FilterFileSystem.makeQualified(FilterFileSystem.java:114)
> - input metrics when reading text file *** FAILED *** (110 milliseconds)
>   java.io.IOException: Not a file: file:/C:/projects/spark/core/ignored
>   at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:277)
>   at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:202)
>   at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
> - input metrics on records read - simple *** FAILED *** (125 milliseconds)
>   java.io.IOException: Not a file: file:/C:/projects/spark/core/ignored
>   at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:277)
>   at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:202)
>   at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
> - input metrics on records read - more stages *** FAILED *** (110 milliseconds)
>   java.io.IOException: Not a file: file:/C:/projects/spark/core/ignored
>   at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:277)
>   at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:202)
>   at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
> - input metrics on records - New Hadoop API *** FAILED *** (16 milliseconds)
>   java.lang.IllegalArgumentException: Wrong FS: file://C:\projects\spark\target\tmp\spark-3f10a1a4-7820-4772-b821-25fd7523bf6f\test\InputOutputMetricsSuite.txt, expected: file:///
>   at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:642)
>   at org.apache.hadoop.fs.FileSystem.makeQualified(FileSystem.java:462)
>   at org.apache.hadoop.fs.FilterFileSystem.makeQualified(FilterFileSystem.java:114)
> - input metrics on records read with cache *** FAILED *** (93 milliseconds)
>   java.io.IOException: Not a file: file:/C:/projects/spark/core/ignored
>   at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:277)
>   at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:202)
>   at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
> - input read/write and shuffle read/write metrics all line up *** FAILED *** (93 milliseconds)
>   java.io.IOException: Not a file: file:/C:/projects/spark/core/ignored
>   at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:277)
>   at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:202)
>   at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
> - input metrics with interleaved reads *** FAILED *** (0 milliseconds)
>   java.lang.IllegalArgumentException: Wrong FS: file://C:\projects\spark\target\tmp\spark-2638d893-e89b-47ce-acd0-bbaeee78dd9b\InputOutputMetricsSuite_cart.txt, expected: file:///
>   at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:642)
>   at org.apache.hadoop.fs.FileSystem.makeQualified(FileSystem.java:462)
>   at org.apache.hadoop.fs.FilterFileSystem.makeQualified(FilterFileSystem.java:114)
> - input metrics with old CombineFileInputFormat *** FAILED *** (157 milliseconds)
>   17947 was not greater than or equal to 300000 (InputOutputMetricsSuite.scala:324)
>   org.scalatest.exceptions.TestFailedException:
>   at org.scalatest.Assertions$class.newAssertionFailedException(Assertions.scala:500)
>   at org.scalatest.FunSuite.newAssertionFailedException(FunSuite.scala:1555)
>   at org.scalatest.Assertions$AssertionsHelper.macroAssert(Assertions.scala:466)
> - input metrics with new CombineFileInputFormat *** FAILED *** (16 milliseconds)
>   java.lang.IllegalArgumentException: Wrong FS: file://C:\projects\spark\target\tmp\spark-11920c08-19d8-4c7c-9fba-28ed72b79f80\test\InputOutputMetricsSuite.txt, expected: file:///
>   at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:642)
>   at org.apache.hadoop.fs.FileSystem.makeQualified(FileSystem.java:462)
>   at org.apache.hadoop.fs.FilterFileSystem.makeQualified(FilterFileSystem.java:114)
> {code}
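> Most of these are path-construction problems. The {{Wrong FS: file://C:\..., expected: file:///}} errors are the classic symptom of building a file URI by string concatenation: {{"file://" + absolutePath}} turns the Windows drive letter into the URI authority, which Hadoop's local filesystem rejects. The {{Not a file}} errors are likewise Windows path-resolution issues. A minimal sketch of the portable construction, assuming the suite concatenates {{"file://"}} with an absolute path (illustrative, not the suite's exact code):
> {code}
> import java.io.File
>
> val tmpFile = new File("C:\\projects\\spark\\target\\tmp\\test\\InputOutputMetricsSuite.txt")
>
> // Broken on Windows: yields "file://C:\projects\...", where Hadoop
> // parses "C:" as the URI authority and fails with
> // "Wrong FS: ..., expected: file:///".
> // val inputPath = "file://" + tmpFile.getAbsolutePath
>
> // Portable: File.toURI normalizes separators and the drive letter,
> // producing "file:/C:/projects/...", which the local FS accepts.
> val inputPath = tmpFile.toURI.toString
> {code}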
> - {{ReplayListenerSuite}}:
> {code}
> - End-to-end replay *** FAILED *** (121 milliseconds)
>   java.io.IOException: No FileSystem for scheme: C
>   at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2421)
>   at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2428)
> - End-to-end replay with compression *** FAILED *** (516 milliseconds)
>   java.io.IOException: No FileSystem for scheme: C
>   at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2421)
>   at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2428)
>   at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:88)
> {code}
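> {{No FileSystem for scheme: C}} appears whenever a raw Windows path reaches Hadoop as a URI string: {{C:}} parses as the URI scheme, so Hadoop looks up a {{FileSystem}} implementation named "C" and fails. A minimal sketch of the failure and the portable alternative (illustrative path, not the suite's exact code):
> {code}
> import java.io.File
> import java.net.URI
> import org.apache.hadoop.conf.Configuration
> import org.apache.hadoop.fs.FileSystem
>
> val logFile = new File("C:\\projects\\spark\\target\\tmp\\events")
>
> // Broken on Windows: "C:" becomes the URI scheme, so Hadoop throws
> // "java.io.IOException: No FileSystem for scheme: C".
> // val fs = FileSystem.get(new URI("C:/projects/spark/target/tmp/events"), new Configuration())
>
> // Portable: File.toURI always yields the "file" scheme.
> val fs = FileSystem.get(logFile.toURI, new Configuration())
> {code}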
> - {{EventLoggingListenerSuite}}:
> {code}
> - End-to-end event logging *** FAILED *** (7 seconds, 435 milliseconds)
>   java.io.IOException: No FileSystem for scheme: C
>   at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2421)
>   at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2428)
>   at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:88)
> - End-to-end event logging with compression *** FAILED *** (1 second)
>   java.io.IOException: No FileSystem for scheme: C
>   at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2421)
>   at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2428)
>   at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:88)
> - Event log name *** FAILED *** (16 milliseconds)
>   "file:/[]base-dir/app1" did not equal "file:/[C:/]base-dir/app1" (EventLoggingListenerSuite.scala:123)
>   org.scalatest.exceptions.TestFailedException:
>   at org.scalatest.Assertions$class.newAssertionFailedException(Assertions.scala:500)
>   at org.scalatest.FunSuite.newAssertionFailedException(FunSuite.scala:1555)
>   at org.scalatest.Assertions$AssertionsHelper.macroAssert(Assertions.scala:466)
> {code}
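> The {{Event log name}} failure is an assertion written around POSIX resolution: on Windows, resolving {{/base-dir}} to a URI prepends the drive letter, so the computed path is {{file:/C:/base-dir/app1}} while the test expects the hardcoded {{file:/base-dir/app1}}. A minimal sketch of a platform-independent expectation, assuming it can be derived via {{java.io.File}} (hypothetical, not the suite's exact fix):
> {code}
> import java.io.File
>
> // Derive the expected prefix from the same resolution the code under
> // test performs, instead of hardcoding "file:/base-dir": on Windows
> // this yields "file:/C:/base-dir", elsewhere "file:/base-dir".
> val expectedBase = new File("/base-dir").toURI.toString.stripSuffix("/")
> val expectedLogPath = expectedBase + "/app1"
> // assert(logPath === expectedLogPath)  // logPath is hypothetical here
> {code}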


