Posted to issues-all@impala.apache.org by "Philip Zeyliger (JIRA)" <ji...@apache.org> on 2019/01/08 04:08:00 UTC

[jira] [Commented] (IMPALA-8055) run-tests.py reports tests as passed even if they did not

    [ https://issues.apache.org/jira/browse/IMPALA-8055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16736703#comment-16736703 ] 

Philip Zeyliger commented on IMPALA-8055:
-----------------------------------------

Are you just running into {{$MAX_PYTEST_FAILURES}} or something else? That's the option that configures when to give up. The thorny issue is that sometimes you want to run every test, but if a test crashes Impala, the gazillion failures that follow the crash aren't particularly useful.
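
For context, here's a minimal sketch of how a cap like that is typically wired up. The plumbing below is an illustration, not the actual run-tests.py code: it just reads the variable from the environment and translates it into pytest's {{--maxfail}} option, which makes pytest stop once that many tests have failed.

{noformat}
import os
import pytest

# Illustrative only: read the failure cap from the environment.
# (The real run-tests.py plumbing may differ.)
max_failures = int(os.environ.get("MAX_PYTEST_FAILURES", "10"))

# --maxfail=N tells pytest to end the session after N test failures,
# which is why a crashed impalad doesn't bury you in follow-on failures.
pytest.main([
    "tests/metadata/test_explain.py",
    "--maxfail={0}".format(max_failures),
])
{noformat}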

A test shouldn't be reported as "passed" if it actually failed, so if you know how to reproduce that, let's figure it out. Do you have a repro handy?
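
For what it's worth, the usual way pytest ends up printing a diff but still reporting PASSED is a test that logs the mismatch without asserting on it. A hypothetical reduction of that pattern (not the actual Impala test framework) is below; only the asserting version shows up in the "N passed, M failed" summary line.

{noformat}
def compare(actual, expected):
    """Print a diff for any mismatched lines; return True only if all lines match."""
    ok = True
    for a, e in zip(actual, expected):
        if a != e:
            print("%r != %r" % (a, e))
            ok = False
    return ok

def test_explain_output_logs_only():
    # BUG: the diff is printed, but pytest still reports PASSED because
    # nothing asserts on the comparison result.
    compare(["|  in pipelines: 00(GETNEXT)"],
            ["|  row-size=402B cardinality=5.76M"])

def test_explain_output_asserts():
    # Asserting on the result is what plumbs the failure through to the
    # "N passed, M failed" summary line.
    assert compare(["|  in pipelines: 00(GETNEXT)"],
                   ["|  row-size=402B cardinality=5.76M"])
{noformat}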

> run-tests.py reports tests as passed even if they did not
> --------------------------------------------------------
>
>                 Key: IMPALA-8055
>                 URL: https://issues.apache.org/jira/browse/IMPALA-8055
>             Project: IMPALA
>          Issue Type: Bug
>          Components: Infrastructure
>    Affects Versions: Impala 3.1.0
>            Reporter: Paul Rogers
>            Priority: Minor
>
> I've been mucking about with the EXPLAIN output format, which required rebasing a bunch of tests on the new format. PlannerTest is fine: it clearly fails when the expected ".test" files don't match the new "actual" files.
> When run on Jenkins in "pre-review" mode, the build does fail if a Python end-to-end test fails. But the job seems to give up at that point rather than running the remaining tests and finding more problems. (There were three separate test cases that needed fixing; it took multiple runs to find them all.)
> When run on my dev box, I get the following (highly abbreviated) output:
> {noformat}
> '|  in pipelines: 00(GETNEXT)' != '|  row-size=402B cardinality=5.76M'
> ...
> [gw3] PASSED metadata/test_explain.py::TestExplain::test_explain_level0[protocol: beeswax | exec_option: {'batch_size': 0, 'num_nodes': 0, 'disable_codegen_rows_threshold': 0, 'disable_codegen': False, 'abort_on_error': 1, 'debug_action': None, 'exec_single_node_rows_threshold': 0} | table_format: text/none] 
> ...
> ==== 6 passed in 68.63 seconds =====
> {noformat}
> I've learned that "passed" means "maybe failed" and that I have to go back and inspect the actual output to figure out whether the test did, in fact, fail. I suspect "passed" means "didn't crash" rather than "the test worked."
> It would be very helpful to plumb the failure through to the summary line so it reads "3 passed, 3 failed" or whatever. That would be a huge time-saver.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-all-unsubscribe@impala.apache.org
For additional commands, e-mail: issues-all-help@impala.apache.org