Posted to issues@maven.apache.org by "Ralph Weires (Jira)" <ji...@apache.org> on 2023/02/16 09:33:00 UTC

[jira] [Updated] (SUREFIRE-2151) Inconsistent console reporter output on failures for parameterized tests, with/without rerunFailingTestsCount

     [ https://issues.apache.org/jira/browse/SUREFIRE-2151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ralph Weires updated SUREFIRE-2151:
-----------------------------------
    Description: 
The way test failures are displayed by the console reporter is not ideal and partly inconsistent, in particular for parameterized tests (e.g. in JUnit 5).

Taking a small JUnit 5 dummy test as an example:
{code:java}
import org.junit.jupiter.api.Assertions;
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;

public class DummyTest {
  @ParameterizedTest
  @CsvSource({"yes", "no", "yes", "yes", "no"})
  public void dummyTest(String param) {
    testInternal(param);
  }

  private void testInternal(String arg) {
    if (arg.equals("no")) {
      Assertions.fail("If you say 'no', it's a no");
    }
  }
}{code}
Running this with Surefire displays the failures like this (in the summary at the end):
{code:java}
[...]

[INFO] Results:
[INFO]
[ERROR] Failures:
[ERROR]   DummyTest.dummyTest:16->testInternal:21 If you say 'no', it's a no
[ERROR]   DummyTest.dummyTest:16->testInternal:21 If you say 'no', it's a no
[INFO]
[ERROR] Tests run: 5, Failures: 2, Errors: 0, Skipped: 0
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------

[...]{code}
The failures do show parts of the problematic code path, but don't carry any information about which invocations of the parameterized test actually failed (in the example, invocations 2 and 5 of the 5). And while it is possible to see more details in the stack traces (i.e. by scrolling up in the output), it would be quite nice to see them right away.
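As a side note, a partial workaround at the test level (a sketch added here, not part of the original report) is to customize the invocation display name, so that each invocation at least carries its parameter value in tools that show display names, such as IDEs and XML reports. Whether the console summary lines pick this up is a separate question:
{code:java}
// Sketch: JUnit 5 supports a "name" attribute on @ParameterizedTest with
// placeholders such as {index} (invocation index) and {0} (first argument).
@ParameterizedTest(name = "[{index}] param={0}")
@CsvSource({"yes", "no", "yes", "yes", "no"})
public void dummyTest(String param) {
  testInternal(param);
}{code}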

If _rerunFailingTestsCount_ is used (here with a value of 2), the output does show more details right away, namely the actual problematic invocations:
{code:java}
[...]

[INFO] Results:
[INFO]
[ERROR] Failures:
[ERROR] test.DummyTest.dummyTest(String)[2]
[ERROR]   Run 1: DummyTest.dummyTest:16->testInternal:21 If you say 'no', it's a no
[ERROR]   Run 2: DummyTest.dummyTest:16->testInternal:21 If you say 'no', it's a no
[ERROR]   Run 3: DummyTest.dummyTest:16->testInternal:21 If you say 'no', it's a no
[INFO]
[ERROR] test.DummyTest.dummyTest(String)[5]
[ERROR]   Run 1: DummyTest.dummyTest:16->testInternal:21 If you say 'no', it's a no
[ERROR]   Run 2: DummyTest.dummyTest:16->testInternal:21 If you say 'no', it's a no
[ERROR]   Run 3: DummyTest.dummyTest:16->testInternal:21 If you say 'no', it's a no
[INFO]
[INFO]
[ERROR] Tests run: 5, Failures: 2, Errors: 0, Skipped: 0
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------

[...] {code}
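For reference, the rerun output above can be produced with a plugin configuration along these lines (a sketch; the version matches the affected release, and a value of 2 means each failing test is retried up to 2 additional times, hence the three "Run" lines):
{code:xml}
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <version>3.0.0-M9</version>
  <configuration>
    <!-- retry each failing test up to 2 more times -->
    <rerunFailingTestsCount>2</rerunFailingTestsCount>
  </configuration>
</plugin>{code}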
In fact, this is currently the main reason for us to use the _rerunFailingTestsCount_ flag at all - regardless of what that flag is actually meant for - which feels rather odd.

Would it make sense to align the two outputs somehow?



> Inconsistent console reporter output on failures for parameterized tests, with/without rerunFailingTestsCount
> -------------------------------------------------------------------------------------------------------------
>
>                 Key: SUREFIRE-2151
>                 URL: https://issues.apache.org/jira/browse/SUREFIRE-2151
>             Project: Maven Surefire
>          Issue Type: Bug
>          Components: Maven Surefire Plugin
>    Affects Versions: 3.0.0-M9
>            Reporter: Ralph Weires
>            Priority: Major



--
This message was sent by Atlassian Jira
(v8.20.10#820010)