Posted to issues@spark.apache.org by "Hunter Morgan (JIRA)" <ji...@apache.org> on 2015/07/29 17:47:05 UTC

[jira] [Commented] (SPARK-5079) Detect failed jobs / batches in Spark Streaming unit tests

    [ https://issues.apache.org/jira/browse/SPARK-5079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14646301#comment-14646301 ] 

Hunter Morgan commented on SPARK-5079:
--------------------------------------

Would it be feasible to restrict the test dependencies so that only a single slf4j implementation is present, make that implementation slf4j-test (http://projects.lidalia.org.uk/slf4j-test/), and then scan the captured log events for errors to detect job failures?
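
To sketch the idea (a rough illustration, not a tested implementation): assuming slf4j-test is the only slf4j binding on the test classpath, the events it captures for a logger can be inspected once the batches have run. The logger name below is only a guess at where Spark would report job failures, and this assumes I am reading the slf4j-test API correctly:

    import scala.collection.JavaConverters._

    import uk.org.lidalia.slf4jext.Level
    import uk.org.lidalia.slf4jtest.TestLoggerFactory

    // Inspect the events slf4j-test captured for the given logger and
    // fail the test if anything was logged at ERROR level. We use
    // getAllLoggingEvents because Spark logs from scheduler threads,
    // not the test's own thread.
    def assertNoErrorsLogged(loggerName: String): Unit = {
      val logger = TestLoggerFactory.getTestLogger(loggerName)
      val errors =
        logger.getAllLoggingEvents.asScala.filter(_.getLevel == Level.ERROR)
      assert(errors.isEmpty, s"unexpected ERROR log events: $errors")
    }

    // e.g. assertNoErrorsLogged("org.apache.spark.streaming.scheduler.JobScheduler")
    // Call TestLoggerFactory.clearAll() between tests so events from one
    // test do not leak into the next.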

> Detect failed jobs / batches in Spark Streaming unit tests
> ----------------------------------------------------------
>
>                 Key: SPARK-5079
>                 URL: https://issues.apache.org/jira/browse/SPARK-5079
>             Project: Spark
>          Issue Type: Bug
>          Components: Streaming
>            Reporter: Josh Rosen
>            Assignee: Ilya Ganelin
>
> Currently, it is possible to write Spark Streaming unit tests in which Spark jobs fail but the streaming tests still pass: we rely on wall-clock time plus output comparison to decide whether a test has passed, so errors that do not affect those results can go undetected.  We should strengthen the tests to check that no job failures occurred while processing batches (a rough sketch of such a check follows below).
> See https://github.com/apache/spark/pull/3832#issuecomment-68580794 for additional context.
> The StreamingTestWaiter in https://github.com/apache/spark/pull/3801 might also fix this.
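
For context on the kind of check the issue calls for, here is a minimal sketch against the public listener API; the listener class and its registration are illustrative, not the approach taken in the linked PRs:

    import java.util.concurrent.atomic.AtomicInteger

    import org.apache.spark.SparkContext
    import org.apache.spark.scheduler.{JobSucceeded, SparkListener, SparkListenerJobEnd}

    // Counts every Spark job that ends with anything other than JobSucceeded.
    class JobFailureCounter extends SparkListener {
      val failedJobs = new AtomicInteger(0)
      override def onJobEnd(jobEnd: SparkListenerJobEnd): Unit = {
        if (jobEnd.jobResult != JobSucceeded) {
          failedJobs.incrementAndGet()
        }
      }
    }

    // In a test: register the counter before starting the StreamingContext,
    // run the batches, then assert counter.failedJobs.get() == 0.
    def watchForFailures(sc: SparkContext): JobFailureCounter = {
      val counter = new JobFailureCounter
      sc.addSparkListener(counter)
      counter
    }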



