Posted to issues@spark.apache.org by "Hyukjin Kwon (JIRA)" <ji...@apache.org> on 2019/05/21 05:37:41 UTC

[jira] [Resolved] (SPARK-5079) Detect failed jobs / batches in Spark Streaming unit tests

     [ https://issues.apache.org/jira/browse/SPARK-5079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hyukjin Kwon resolved SPARK-5079.
---------------------------------
    Resolution: Incomplete

> Detect failed jobs / batches in Spark Streaming unit tests
> ----------------------------------------------------------
>
>                 Key: SPARK-5079
>                 URL: https://issues.apache.org/jira/browse/SPARK-5079
>             Project: Spark
>          Issue Type: Bug
>          Components: DStreams
>            Reporter: Josh Rosen
>            Assignee: Ilya Ganelin
>            Priority: Major
>              Labels: bulk-closed
>
> Currently, it is possible to write Spark Streaming unit tests in which Spark jobs fail but the streaming tests still succeed: because we rely on wall-clock time plus output comparison to decide whether a test has passed, we may miss errors that do not affect those results.  We should strengthen the tests to check that no job failures occurred while processing batches.
> See https://github.com/apache/spark/pull/3832#issuecomment-68580794 for additional context.
> The StreamingTestWaiter in https://github.com/apache/spark/pull/3801 might also fix this.
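The strengthened check described above amounts to recording job failures as batches complete and asserting on that record at the end of the test, instead of trusting output comparison alone. The sketch below illustrates the idea only; the class and method names (TestJobListener, onJobCompleted, assertNoFailures) are hypothetical stand-ins and do not reflect Spark's actual StreamingListener API or the StreamingTestWaiter in the linked pull request.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Optional;

// Minimal sketch: record per-batch job failures so a streaming test can fail
// even when wall-clock time plus output comparison alone would let it pass.
// Hypothetical names; not Spark's real listener API.
public class TestJobListener {
    private final List<String> failures = new ArrayList<>();

    // Called by the (simulated) scheduler when a batch's job finishes.
    public void onJobCompleted(long batchId, Optional<String> error) {
        error.ifPresent(e -> failures.add("batch " + batchId + ": " + e));
    }

    // A test calls this after all batches have been processed.
    public void assertNoFailures() {
        if (!failures.isEmpty()) {
            throw new AssertionError("jobs failed: " + String.join("; ", failures));
        }
    }

    public static void main(String[] args) {
        TestJobListener listener = new TestJobListener();
        listener.onJobCompleted(0L, Optional.empty());    // healthy batch
        listener.onJobCompleted(1L, Optional.of("boom")); // failed job
        boolean detected;
        try {
            listener.assertNoFailures();
            detected = false;
        } catch (AssertionError e) {
            detected = true;
        }
        // The failed job is surfaced even though no output was compared.
        System.out.println("failure detected: " + detected);
    }
}
```

In a real test harness the listener would be registered with the streaming scheduler before the test runs, and assertNoFailures would be called once the expected number of batches has completed.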



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org