Posted to dev@mynewt.apache.org by Kevin Townsend <ke...@adafruit.com> on 2016/10/03 13:08:32 UTC

Improve Unit Tests and Test Suite Output

I was wondering if there were any suggestions on how it might be 
possible to improve the output of the unit tests to be a bit more 
verbose, following some of the other frameworks out there for embedded 
systems like CMOCK.

Unit testing and test simulation are an important part of the system for 
any professionally maintained project, but they could be even more useful 
with a little bit of refinement.

Personally, I find it useful to see a list of the tests being run (and 
maybe spot if I missed a module), to know how many tests were run, and so 
on. So perhaps something like this when running the test suite(s):

    Running 'TestSuiteName' test suite:
       Running 'Test Name' ... [OK][FAILED]
       Running 'Test Name' ... [OK][FAILED]
       [n] unit tests passed, [n] failed

    Running 'TestSuiteName' test suite:
       Running 'Test Name' ... [OK][FAILED]
       Running 'Test Name' ... [OK][FAILED]
       [n] unit tests passed, [n] failed

    Ran [n] unit tests in [n] test suites
    [n] unit tests passed, [n] failed

It's a poor example that needs more thought, but I was interested in 
getting the discussion started.
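
Something like this is all the reporting logic would really need (the 
struct and function names below are made up, purely to illustrate the 
accounting):

    #include <stdio.h>

    /* Hypothetical counters a test runner could keep per run. */
    struct test_stats {
        int run;
        int passed;
        int failed;
        int suites;
    };

    void
    report_case(struct test_stats *st, const char *name, int ok)
    {
        printf("   Running '%s' ... %s\n", name, ok ? "[OK]" : "[FAILED]");
        st->run++;
        if (ok) {
            st->passed++;
        } else {
            st->failed++;
        }
    }

    void
    report_totals(const struct test_stats *st)
    {
        printf("Ran %d unit tests in %d test suites\n", st->run, st->suites);
        printf("%d unit tests passed, %d failed\n", st->passed, st->failed);
    }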

Also, having a 'startup' and 'teardown' function that runs before and 
after every unit test in the test suite may be nice as well to clear any 
variables or put things into a known state, but I'm also curious about 
opinions there.

Maybe have optional functions like this in every test suite module (this 
is taken from a project where we used CMock and Unity: 
http://www.throwtheswitch.org/#download-section):

    void setUp(void)
    {
       fifo_clear(&ff_non_overwritable);
    }

    void tearDown(void)
    {

    }
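
For reference, Unity calls these two hooks automatically around every 
RUN_TEST() invocation, so a minimal test module ends up looking roughly 
like the sketch below (fifo.h, fifo_t and fifo_count() are just 
placeholders to go with the snippet above):

    #include "unity.h"
    #include "fifo.h"   /* placeholder header providing fifo_t,
                           fifo_clear() and fifo_count() */

    static fifo_t ff_non_overwritable;

    void setUp(void)            /* runs before each test */
    {
        fifo_clear(&ff_non_overwritable);
    }

    void tearDown(void)         /* runs after each test */
    {
    }

    static void test_fifo_starts_empty(void)
    {
        TEST_ASSERT_EQUAL(0, fifo_count(&ff_non_overwritable));
    }

    int main(void)
    {
        UNITY_BEGIN();
        RUN_TEST(test_fifo_starts_empty);   /* wrapped by setUp()/tearDown() */
        return UNITY_END();
    }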

Happy to help here, but wanted to get a discussion started first.

K.


Re: Improve Unit Tests and Test Suite Output

Posted by Kevin Townsend <ke...@adafruit.com>.
Just to elaborate (though briefly, from my phone): setup and teardown are
useful when working with memory buffers or simulating peripherals or HW,
to put everything into a known state before and after a set of tests and
to avoid duplicating code or complex setups again and again.

You can of course manually call a custom function before and after each
test, though, and if people think it isn't useful to call it automatically
and save a couple of lines in each test, I'm OK with skipping it ... it's
just an option I found myself looking for when migrating old test code to
mynewt, but hardly critical.

K.

Re: Improve Unit Tests and Test Suite Output

Posted by Kevin Townsend <ke...@adafruit.com>.
Hi Chris,

I'll send a bigger reply tonight but ...

I for one would welcome all ideas and contributions to the testutil
> library.  Could you expand on the setup / teardown thoughts?  Would
> these be executed per test case, or just per suite?  Also, my
> understanding is that these function get executed automatically without
> the framework needing to be told about them, is that correct?


I think they should run automatically before and after individual tests if
present, yes. Anything specific to one test belongs in that test function,
in my opinion.

K.

Re: Improve Unit Tests and Test Suite Output

Posted by hathach <th...@tinyusb.org>.
Hi,

IMHO, we should at least include the number of tests that PASSED and 
FAILED as well, since it is super easy to forget to add a TEST_CASE to 
the TEST_SUITE body. At least I did that very often, thinking a case was 
already being tested.
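
Just to illustrate (this is a sketch from memory of the testutil macros, 
so the exact structure may differ a bit): a case that is defined but 
never called in the suite body is silently skipped, and nothing in the 
current output points that out:

    #include "testutil/testutil.h"

    TEST_CASE(fifo_push_pop)
    {
        TEST_ASSERT(1 == 1);
    }

    TEST_CASE(fifo_overflow)
    {
        TEST_ASSERT(1 == 1);
    }

    TEST_SUITE(fifo_suite)
    {
        fifo_push_pop();
        /* fifo_overflow() was never added here, so it never runs -- a
         * pass/fail count in the summary would make that easy to spot. */
    }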

On 04/10/2016 02:32, Sterling Hughes wrote:
> Hey,
>
>>
>> The thinking was that the user doesn't want to be bothered with a bunch
>> of text when there are no failures.  That said, I agree that more
>> verbose output in the success case would be useful in some cases.  You
>> can get something kind of like your example if you provide the -ldebug
>> command line option when you run the test, e.g.,
>>
>>     newt -ldebug test net/nimble/host
>>     Executing test:
>> /home/ccollins/repos/mynewt/core/bin/targets/unittest/net_nimble_host_test/test/net/nimble/host/test/net_nimble_host_test
>>     2016/10/03 08:00:49 [DEBUG]
>> /home/ccollins/repos/mynewt/core/bin/targets/unittest/net_nimble_host_test/test/net/nimble/host/test/net_nimble_host_test
>>     2016/10/03 08:00:50 [DEBUG] o=[pass]
>>     ble_att_clt_suite/ble_att_clt_test_tx_find_info
>>     [pass] ble_att_clt_suite/ble_att_clt_test_rx_find_info
>>     [pass] ble_att_clt_suite/ble_att_clt_test_tx_read
>>     [...]
>>     [pass] 
>> ble_sm_sc_test_suite/ble_sm_sc_peer_nc_iio1_rio1_b1_iat2_rat2_ik3_rk3
>>     [pass] 
>> ble_sm_sc_test_suite/ble_sm_sc_peer_pk_iio2_rio0_b1_iat2_rat2_ik7_rk3
>>     [pass] 
>> ble_sm_sc_test_suite/ble_sm_sc_us_jw_iio3_rio3_b1_iat2_rat2_ik3_rk3
>>     [pass] 
>> ble_sm_sc_test_suite/ble_sm_sc_us_nc_iio1_rio1_b1_iat2_rat2_ik3_rk3
>>     [pass] 
>> ble_sm_sc_test_suite/ble_sm_sc_us_pk_iio2_rio0_b1_iat2_rat2_ik7_rk3
>>     [pass] ble_uuid_test_suite/ble_uuid_test_128_to_16
>>
>>     Passed tests: [net/nimble/host/test]
>>     All tests passed
>>
>> The output is a bit rough, and -ldebug produces a lot of extra output
>> that is not relevant, so there is some work to do here.  As an aside, I
>> think newt is not very consistent with its "-v" and "-ldebug" options.
>> As I understand it, "-v" is supposed to produce extra output about the
>> user's project; "-ldebug" is meant for debugging the newt tool itself,
>> and is supposed to generate output relating to newt's internals.
>>
>
> yes, we should clean that up prior to 1.0-rel.
>
> for me, i’d prefer to have the default output show [pass] or [fail] 
> but not show “compiling…” when doing unit tests.  i’m much more 
> accustomed to the pass/fail messages than a whole bunch of compiler 
> output.  i’d then like -v to show me the compiling, and -vv show me 
> the commands.
>
> sterling
>
>


Re: Improve Unit Tests and Test Suite Output

Posted by Sterling Hughes <st...@apache.org>.
Hey,

>
> The thinking was that the user doesn't want to be bothered with a 
> bunch
> of text when there are no failures.  That said, I agree that more
> verbose output in the success case would be useful in some cases.  You
> can get something kind of like your example if you provide the -ldebug
> command line option when you run the test, e.g.,
>
>     newt -ldebug test net/nimble/host
>     Executing test:
>     /home/ccollins/repos/mynewt/core/bin/targets/unittest/net_nimble_host_test/test/net/nimble/host/test/net_nimble_host_test
>     2016/10/03 08:00:49 [DEBUG]
>     /home/ccollins/repos/mynewt/core/bin/targets/unittest/net_nimble_host_test/test/net/nimble/host/test/net_nimble_host_test
>     2016/10/03 08:00:50 [DEBUG] o=[pass]
>     ble_att_clt_suite/ble_att_clt_test_tx_find_info
>     [pass] ble_att_clt_suite/ble_att_clt_test_rx_find_info
>     [pass] ble_att_clt_suite/ble_att_clt_test_tx_read
>     [...]
>     [pass] 
> ble_sm_sc_test_suite/ble_sm_sc_peer_nc_iio1_rio1_b1_iat2_rat2_ik3_rk3
>     [pass] 
> ble_sm_sc_test_suite/ble_sm_sc_peer_pk_iio2_rio0_b1_iat2_rat2_ik7_rk3
>     [pass] 
> ble_sm_sc_test_suite/ble_sm_sc_us_jw_iio3_rio3_b1_iat2_rat2_ik3_rk3
>     [pass] 
> ble_sm_sc_test_suite/ble_sm_sc_us_nc_iio1_rio1_b1_iat2_rat2_ik3_rk3
>     [pass] 
> ble_sm_sc_test_suite/ble_sm_sc_us_pk_iio2_rio0_b1_iat2_rat2_ik7_rk3
>     [pass] ble_uuid_test_suite/ble_uuid_test_128_to_16
>
>     Passed tests: [net/nimble/host/test]
>     All tests passed
>
> The output is a bit rough, and -ldebug produces a lot of extra output
> that is not relevant, so there is some work to do here.  As an aside, 
> I
> think newt is not very consistent with its "-v" and "-ldebug" options.
> As I understand it, "-v" is supposed to produce extra output about the
> user's project; "-ldebug" is meant for debugging the newt tool itself,
> and is supposed to generate output relating to newt's internals.
>

yes, we should clean that up prior to 1.0-rel.

for me, i’d prefer to have the default output show [pass] or [fail] 
but not show “compiling…” when doing unit tests.  i’m much more 
accustomed to the pass/fail messages than a whole bunch of compiler 
output.  i’d then like -v to show me the compiling, and -vv show me 
the commands.

sterling

Re: Improve Unit Tests and Test Suite Output

Posted by hathach <ha...@gmail.com>.
Yeah, mocking is a great tool: we can test a high-level module based on 
the behavior of the lower ones without going down to bsp/peripheral 
simulation. Since I plan to port my library as a newt project, and it 
already has decent tests running under CMock, I will try to pull that off later.

On 06/10/2016 00:28, Sterling Hughes wrote:
> Hi,
>
> I don’t think we planned on providing a mock’ing framework in V1 of 
> Mynewt.  The approach to mocking has been to implement the lower 
> layers on sim, and then special case things where it only makes sense 
> for a particular regression or unit test.  While you won’t get the 
> control you have with mocking (i.e. guaranteed set of responses to 
> external function calls), it does allow for a fair number of 
> regression tests to run simulated — and should catch the vast majority 
> of cases.
>
> Going forward, it does sound like having this ability would be 
> useful.  If somebody wanted to provide a patch to newt, that allows it 
> to either use an external framework like CMock, or generate a set of 
> mock templates itself, I think it would be a great contribution!
>
> Sterling
>
> On 3 Oct 2016, at 12:26, hathach wrote:
>
>> Hi all,
>>
>> I previously used CMock & Unity as unit testing framework for my own 
>> project. CMock is rather complex since it allows mocking the lower 
>> layers thus isolating module and making it easy for testing/probing 
>> its behavior.
>>
>> For example, when testing an service-adding function, all we care 
>> about is the ble_gatts_register_svcs()  finally invoked with the 
>> exact same svc_def. Behavior of ble_gatts_register_svcs() is subject 
>> to its own unit testing.
>>
>> Though newt's testutil is still in developing stage, do we have a 
>> plan to implement some level of mocking framework like CMock. Since 
>> it will be a challenge to simulate lower layer and stimulate some 
>> certain scenario.
>>
>> PS: I found even with mocking, it is also hard to do decent coverage 
>> of unit test with stuffs like peripherals. And the integration test 
>> is completely out of control :(
>>
>> On 03/10/2016 22:42, Christopher Collins wrote:
>>> Hi Kevin,
>>>
>>> On Mon, Oct 03, 2016 at 03:08:32PM +0200, Kevin Townsend wrote:
>>>> I was wondering if there were any suggestions on how it might be
>>>> possible to improve the output of the unit tests to be a bit more
>>>> verbose, following some of the other frameworks out there for embedded
>>>> systems like CMOCK.
>>>>
>>>> Unit testing and test simulation is an important part of the system 
>>>> for
>>>> any professionally maintained project, but could be even more useful
>>>> with a little bit of refinement.
>>>>
>>>> Personally, I find it useful to see a list of tests being run and 
>>>> maybe
>>>> spot if I missed a module, and to know how many tests were run, 
>>>> etc., so
>>>> something like this when running the test suite(s)?
>>>>
>>>>      Running 'TestSuiteName' test suite:
>>>>         Running 'Test Name' ... [OK][FAILED]
>>>>         Running 'Test Name' ... [OK][FAILED]
>>>>         [n] unit tests passed, [n] failed
>>>>
>>>>      Running 'TestSuiteName' test suite:
>>>>         Running 'Test Name' ... [OK][FAILED]
>>>>         Running 'Test Name' ... [OK][FAILED]
>>>>         [n] unit tests passed, [n] failed
>>>>
>>>>      Ran [n] unit tests in [n] test suites
>>>>      [n] unit tests passed, [n] failed
>>>>
>>>> It's a poor example that needs more thought, but I was interested in
>>>> getting the discussion started.
>>> The thinking was that the user doesn't want to be bothered with a bunch
>>> of text when there are no failures.  That said, I agree that more
>>> verbose output in the success case would be useful in some cases.  You
>>> can get something kind of like your example if you provide the -ldebug
>>> command line option when you run the test, e.g.,
>>>
>>>      newt -ldebug test net/nimble/host
>>>      Executing test:
>>> /home/ccollins/repos/mynewt/core/bin/targets/unittest/net_nimble_host_test/test/net/nimble/host/test/net_nimble_host_test
>>>      2016/10/03 08:00:49 [DEBUG]
>>> /home/ccollins/repos/mynewt/core/bin/targets/unittest/net_nimble_host_test/test/net/nimble/host/test/net_nimble_host_test
>>>      2016/10/03 08:00:50 [DEBUG] o=[pass]
>>>      ble_att_clt_suite/ble_att_clt_test_tx_find_info
>>>      [pass] ble_att_clt_suite/ble_att_clt_test_rx_find_info
>>>      [pass] ble_att_clt_suite/ble_att_clt_test_tx_read
>>>      [...]
>>>      [pass] 
>>> ble_sm_sc_test_suite/ble_sm_sc_peer_nc_iio1_rio1_b1_iat2_rat2_ik3_rk3
>>>      [pass] 
>>> ble_sm_sc_test_suite/ble_sm_sc_peer_pk_iio2_rio0_b1_iat2_rat2_ik7_rk3
>>>      [pass] 
>>> ble_sm_sc_test_suite/ble_sm_sc_us_jw_iio3_rio3_b1_iat2_rat2_ik3_rk3
>>>      [pass] 
>>> ble_sm_sc_test_suite/ble_sm_sc_us_nc_iio1_rio1_b1_iat2_rat2_ik3_rk3
>>>      [pass] 
>>> ble_sm_sc_test_suite/ble_sm_sc_us_pk_iio2_rio0_b1_iat2_rat2_ik7_rk3
>>>      [pass] ble_uuid_test_suite/ble_uuid_test_128_to_16
>>>
>>>      Passed tests: [net/nimble/host/test]
>>>      All tests passed
>>>
>>> The output is a bit rough, and -ldebug produces a lot of extra output
>>> that is not relevant, so there is some work to do here.  As an aside, I
>>> think newt is not very consistent with its "-v" and "-ldebug" options.
>>> As I understand it, "-v" is supposed to produce extra output about the
>>> user's project; "-ldebug" is meant for debugging the newt tool itself,
>>> and is supposed to generate output relating to newt's internals.
>>>
>>>> Also, having a 'startup' and 'teardown' function that runs before and
>>>> after every unit test in the test suite may be nice as well to clear
>>>> any variables or put things into a known state, but I'm also curious
>>>> about opinions there.
>>>>
>>>> Maybe have optional functions like this in every test suite module
>>>> (this is taken from a project where we used CMOCK and UNITY:
>>>> http://www.throwtheswitch.org/#download-section)
>>>>
>>>>     void setUp(void)
>>>>     {
>>>>        fifo_clear(&ff_non_overwritable);
>>>>     }
>>>>
>>>>     void tearDown(void)
>>>>     {
>>>>
>>>>     }
>>> I agree.  Again, this is kind of half implemented currently, but it
>>> needs some more work.  The testutil library exposes the following:
>>>
>>>      typedef void tu_post_test_fn_t(void *arg);
>>>      void tu_suite_set_post_test_cb(tu_post_test_fn_t *cb, void 
>>> *cb_arg);
>>>
>>> So, there is a "teardown" function at the suite level, but no startup
>>> functions, and nothing at the individual case level.  Also, this
>>> function doesn't get executed automatically unless testutil is
>>> configured to do so.
>>>
>>> Long ago when I was working on the testutil library, I consciously
>>> avoided adding this type of functionality.  I wanted the unit tests to
>>> be easy to understand and debug, so I strived for a small API and
>>> nothing automatic.  In retrospect, after writing several unit tests, I
>>> do think automatic setup and teardown functions are useful enough to
>>> include in the API.
>>>
>>> I also recall looking at CMock a while back when I was searching for
>>> ideas.  I think it provides a lot of useful functionality, but it 
>>> looked
>>> like it did way more than we were interested in at the time. Now that
>>> the project is a bit more mature, it might be a good time to add some
>>> needed functionality to the unit testing framework.
>>>
>>>> Happy to help here, but wanted to get a discussion started first.
>>> I for one would welcome all ideas and contributions to the testutil
>>> library.  Could you expand on the setup / teardown thoughts? Would
>>> these be executed per test case, or just per suite?  Also, my
>>> understanding is that these function get executed automatically without
>>> the framework needing to be told about them, is that correct?
>>>
>>> Thanks,
>>> Chris
>>>
>>>
>>


Re: Improve Unit Tests and Test Suite Output

Posted by Peter Snyder <pe...@peterfs.net>.
Hey all,

Yes, I’ve been working on a framework that will make it easy(er) to develop tests that can run on native HW as well as in a simulated environment (ie “newt test …”). There’s actually a fair amount of good functional test code, but it's not easy to package it to run on target devices. I’m modifying the existing frameworks to allow control over pre- and post-setups for the various environments, as well as reporting of results. The tests will be split up in a way so that one can pick and choose which tests to run in a given test application.  Stay tuned, I’ll have something ready shortly and I’ll look for comments to make it more usable.

- peter

> On Oct 5, 2016, at 7:24 PM, Sterling Hughes <st...@apache.org> wrote:
> 
> Indeed - Peter has been working on making this a little easier to do for the various hardware platforms that Mynewt runs on (breaking up unit tests to make sure they run well on physical hardware, making it easier to include only specific tests on given platforms, etc.)
> 
> Peter: can you chime in on some of the changes that are pending to develop?
> 
> On 5 Oct 2016, at 18:47, Kevin Townsend wrote:
> 
>> Sorry, to answer my own question the 'test' app in apache-mynewt-core show how tests can be run on native HW.
>> 
>> On 06/10/16 00:16, Kevin Townsend wrote:
>>> Hi Sterling,
>>> 
>>> Are you able to run the unit tests on real HW via newt to pipe the results back to the console? That would probably remove the need to 'mock' peripherals in most cases, and be a significantly easier way to run a set of tests that have specific HW requirements, such as BLE which isn't available in the simulator today (understandably). Or are the unit tests currently limited to the simulator running as a native binary? The latter was my understanding but I haven't dug very deeply into it either.
>>> 
>>> Using the native BLE HW on OS X or via Bluez with the simulator would of course be /amazing/, but I think there are a WHOLE LOT of other higher priority features to add before that. :)
>>> 
>>> K.
>>> 
>> 


Re: Improve Unit Tests and Test Suite Output

Posted by Sterling Hughes <st...@apache.org>.
Indeed - Peter has been working on making this a little easier to do for 
the various hardware platforms that Mynewt runs on (breaking up unit 
tests to make sure they run well on physical hardware, making it easier 
to include only specific tests on given platforms, etc.)

Peter: can you chime in on some of the changes that are pending to 
develop?

On 5 Oct 2016, at 18:47, Kevin Townsend wrote:

> Sorry, to answer my own question the 'test' app in apache-mynewt-core 
> show how tests can be run on native HW.
>
> On 06/10/16 00:16, Kevin Townsend wrote:
>> Hi Sterling,
>>
>> Are you able to run the unit tests on real HW via newt to pipe the 
>> results back to the console? That would probably remove the need to 
>> 'mock' peripherals in most cases, and be a significantly easier way 
>> to run a set of tests that have specific HW requirements, such as BLE 
>> which isn't available in the simulator today (understandably). Or are 
>> the unit tests currently limited to the simulator running as a native 
>> binary? The latter was my understanding but I haven't dug very deeply 
>> into it either.
>>
>> Using the native BLE HW on OS X or via Bluez with the simulator would 
>> of course be /amazing/, but I think there are a WHOLE LOT of other 
>> higher priority features to add before that. :)
>>
>> K.
>>
>

Re: Improve Unit Tests and Test Suite Output

Posted by Kevin Townsend <ke...@adafruit.com>.
Sorry, to answer my own question: the 'test' app in apache-mynewt-core 
shows how tests can be run on native HW.


On 06/10/16 00:16, Kevin Townsend wrote:
> Hi Sterling,
>
> Are you able to run the unit tests on real HW via newt to pipe the 
> results back to the console? That would probably remove the need to 
> 'mock' peripherals in most cases, and be a significantly easier way to 
> run a set of tests that have specific HW requirements, such as BLE 
> which isn't available in the simulator today (understandably). Or are 
> the unit tests currently limited to the simulator running as a native 
> binary? The latter was my understanding but I haven't dug very deeply 
> into it either.
>
> Using the native BLE HW on OS X or via Bluez with the simulator would 
> of course be /amazing/, but I think there are a WHOLE LOT of other 
> higher priority features to add before that. :)
>
> K.
>


Re: Improve Unit Tests and Test Suite Output

Posted by Kevin Townsend <ke...@adafruit.com>.
Hi Sterling,

Are you able to run the unit tests on real HW via newt to pipe the 
results back to the console? That would probably remove the need to 
'mock' peripherals in most cases, and be a significantly easier way to 
run a set of tests that have specific HW requirements, such as BLE which 
isn't available in the simulator today (understandably). Or are the unit 
tests currently limited to the simulator running as a native binary? The 
latter was my understanding but I haven't dug very deeply into it either.

Using the native BLE HW on OS X or via Bluez with the simulator would of 
course be /amazing/, but I think there are a WHOLE LOT of other higher 
priority features to add before that. :)

K.


On 05/10/16 19:28, Sterling Hughes wrote:
> Hi,
>
> I don’t think we planned on providing a mock’ing framework in V1 of 
> Mynewt.  The approach to mocking has been to implement the lower 
> layers on sim, and then special case things where it only makes sense 
> for a particular regression or unit test.  While you won’t get the 
> control you have with mocking (i.e. guaranteed set of responses to 
> external function calls), it does allow for a fair number of 
> regression tests to run simulated — and should catch the vast majority 
> of cases.
>
> Going forward, it does sound like having this ability would be 
> useful.  If somebody wanted to provide a patch to newt, that allows it 
> to either use an external framework like CMock, or generate a set of 
> mock templates itself, I think it would be a great contribution!
>
> Sterling
>
> On 3 Oct 2016, at 12:26, hathach wrote:
>
>> Hi all,
>>
>> I previously used CMock & Unity as unit testing framework for my own 
>> project. CMock is rather complex since it allows mocking the lower 
>> layers thus isolating module and making it easy for testing/probing 
>> its behavior.
>>
>> For example, when testing an service-adding function, all we care 
>> about is the ble_gatts_register_svcs()  finally invoked with the 
>> exact same svc_def. Behavior of ble_gatts_register_svcs() is subject 
>> to its own unit testing.
>>
>> Though newt's testutil is still in developing stage, do we have a 
>> plan to implement some level of mocking framework like CMock. Since 
>> it will be a challenge to simulate lower layer and stimulate some 
>> certain scenario.
>>
>> PS: I found even with mocking, it is also hard to do decent coverage 
>> of unit test with stuffs like peripherals. And the integration test 
>> is completely out of control :(
>>
>> On 03/10/2016 22:42, Christopher Collins wrote:
>>> Hi Kevin,
>>>
>>> On Mon, Oct 03, 2016 at 03:08:32PM +0200, Kevin Townsend wrote:
>>>> I was wondering if there were any suggestions on how it might be
>>>> possible to improve the output of the unit tests to be a bit more
>>>> verbose, following some of the other frameworks out there for embedded
>>>> systems like CMOCK.
>>>>
>>>> Unit testing and test simulation is an important part of the system 
>>>> for
>>>> any professionally maintained project, but could be even more useful
>>>> with a little bit of refinement.
>>>>
>>>> Personally, I find it useful to see a list of tests being run and 
>>>> maybe
>>>> spot if I missed a module, and to know how many tests were run, 
>>>> etc., so
>>>> something like this when running the test suite(s)?
>>>>
>>>>      Running 'TestSuiteName' test suite:
>>>>         Running 'Test Name' ... [OK][FAILED]
>>>>         Running 'Test Name' ... [OK][FAILED]
>>>>         [n] unit tests passed, [n] failed
>>>>
>>>>      Running 'TestSuiteName' test suite:
>>>>         Running 'Test Name' ... [OK][FAILED]
>>>>         Running 'Test Name' ... [OK][FAILED]
>>>>         [n] unit tests passed, [n] failed
>>>>
>>>>      Ran [n] unit tests in [n] test suites
>>>>      [n] unit tests passed, [n] failed
>>>>
>>>> It's a poor example that needs more thought, but I was interested in
>>>> getting the discussion started.
>>> The thinking was that the user doesn't want to be bothered with a bunch
>>> of text when there are no failures.  That said, I agree that more
>>> verbose output in the success case would be useful in some cases.  You
>>> can get something kind of like your example if you provide the -ldebug
>>> command line option when you run the test, e.g.,
>>>
>>>      newt -ldebug test net/nimble/host
>>>      Executing test:
>>> /home/ccollins/repos/mynewt/core/bin/targets/unittest/net_nimble_host_test/test/net/nimble/host/test/net_nimble_host_test
>>>      2016/10/03 08:00:49 [DEBUG]
>>> /home/ccollins/repos/mynewt/core/bin/targets/unittest/net_nimble_host_test/test/net/nimble/host/test/net_nimble_host_test
>>>      2016/10/03 08:00:50 [DEBUG] o=[pass]
>>>      ble_att_clt_suite/ble_att_clt_test_tx_find_info
>>>      [pass] ble_att_clt_suite/ble_att_clt_test_rx_find_info
>>>      [pass] ble_att_clt_suite/ble_att_clt_test_tx_read
>>>      [...]
>>>      [pass] 
>>> ble_sm_sc_test_suite/ble_sm_sc_peer_nc_iio1_rio1_b1_iat2_rat2_ik3_rk3
>>>      [pass] 
>>> ble_sm_sc_test_suite/ble_sm_sc_peer_pk_iio2_rio0_b1_iat2_rat2_ik7_rk3
>>>      [pass] 
>>> ble_sm_sc_test_suite/ble_sm_sc_us_jw_iio3_rio3_b1_iat2_rat2_ik3_rk3
>>>      [pass] 
>>> ble_sm_sc_test_suite/ble_sm_sc_us_nc_iio1_rio1_b1_iat2_rat2_ik3_rk3
>>>      [pass] 
>>> ble_sm_sc_test_suite/ble_sm_sc_us_pk_iio2_rio0_b1_iat2_rat2_ik7_rk3
>>>      [pass] ble_uuid_test_suite/ble_uuid_test_128_to_16
>>>
>>>      Passed tests: [net/nimble/host/test]
>>>      All tests passed
>>>
>>> The output is a bit rough, and -ldebug produces a lot of extra output
>>> that is not relevant, so there is some work to do here.  As an aside, I
>>> think newt is not very consistent with its "-v" and "-ldebug" options.
>>> As I understand it, "-v" is supposed to produce extra output about the
>>> user's project; "-ldebug" is meant for debugging the newt tool itself,
>>> and is supposed to generate output relating to newt's internals.
>>>
>>>> Also, having a 'startup' and 'teardown' function that runs before and
>>>> after every unit test in the test suite may be nice as well to clear
>>>> any variables or put things into a known state, but I'm also curious
>>>> about opinions there.
>>>>
>>>> Maybe have optional functions like this in every test suite module
>>>> (this is taken from a project where we used CMOCK and UNITY:
>>>> http://www.throwtheswitch.org/#download-section)
>>>>
>>>>     void setUp(void)
>>>>     {
>>>>        fifo_clear(&ff_non_overwritable);
>>>>     }
>>>>
>>>>     void tearDown(void)
>>>>     {
>>>>
>>>>     }
>>> I agree.  Again, this is kind of half implemented currently, but it
>>> needs some more work.  The testutil library exposes the following:
>>>
>>>      typedef void tu_post_test_fn_t(void *arg);
>>>      void tu_suite_set_post_test_cb(tu_post_test_fn_t *cb, void 
>>> *cb_arg);
>>>
>>> So, there is a "teardown" function at the suite level, but no startup
>>> functions, and nothing at the individual case level.  Also, this
>>> function doesn't get executed automatically unless testutil is
>>> configured to do so.
>>>
>>> Long ago when I was working on the testutil library, I consciously
>>> avoided adding this type of functionality.  I wanted the unit tests to
>>> be easy to understand and debug, so I strived for a small API and
>>> nothing automatic.  In retrospect, after writing several unit tests, I
>>> do think automatic setup and teardown functions are useful enough to
>>> include in the API.
>>>
>>> I also recall looking at CMock a while back when I was searching for
>>> ideas.  I think it provides a lot of useful functionality, but it 
>>> looked
>>> like it did way more than we were interested in at the time. Now that
>>> the project is a bit more mature, it might be a good time to add some
>>> needed functionality to the unit testing framework.
>>>
>>>> Happy to help here, but wanted to get a discussion started first.
>>> I for one would welcome all ideas and contributions to the testutil
>>> library.  Could you expand on the setup / teardown thoughts? Would
>>> these be executed per test case, or just per suite?  Also, my
>>> understanding is that these function get executed automatically without
>>> the framework needing to be told about them, is that correct?
>>>
>>> Thanks,
>>> Chris
>>>
>>>
>>


Re: Improve Unit Tests and Test Suite Output

Posted by Sterling Hughes <st...@apache.org>.
Hi,

I don’t think we planned on providing a mock’ing framework in V1 of 
Mynewt.  The approach to mocking has been to implement the lower layers 
on sim, and then special case things where it only makes sense for a 
particular regression or unit test.  While you won’t get the control 
you have with mocking (i.e. guaranteed set of responses to external 
function calls), it does allow for a fair number of regression tests to 
run simulated — and should catch the vast majority of cases.

Going forward, it does sound like having this ability would be useful.  
If somebody wanted to provide a patch to newt, that allows it to either 
use an external framework like CMock, or generate a set of mock 
templates itself, I think it would be a great contribution!

Sterling

On 3 Oct 2016, at 12:26, hathach wrote:

> Hi all,
>
> I previously used CMock & Unity as unit testing framework for my own 
> project. CMock is rather complex since it allows mocking the lower 
> layers thus isolating module and making it easy for testing/probing 
> its behavior.
>
> For example, when testing an service-adding function, all we care 
> about is the ble_gatts_register_svcs()  finally invoked with the exact 
> same svc_def. Behavior of ble_gatts_register_svcs() is subject to its 
> own unit testing.
>
> Though newt's testutil is still in developing stage, do we have a plan 
> to implement some level of mocking framework like CMock. Since it will 
> be a challenge to simulate lower layer and stimulate some certain 
> scenario.
>
> PS: I found even with mocking, it is also hard to do decent coverage 
> of unit test with stuffs like peripherals. And the integration test is 
> completely out of control :(
>
> On 03/10/2016 22:42, Christopher Collins wrote:
>> Hi Kevin,
>>
>> On Mon, Oct 03, 2016 at 03:08:32PM +0200, Kevin Townsend wrote:
>>> I was wondering if there were any suggestions on how it might be
>>> possible to improve the output of the unit tests to be a bit more
>>> verbose, following some of the other frameworks out there for 
>>> embedded
>>> systems like CMOCK.
>>>
>>> Unit testing and test simulation is an important part of the system 
>>> for
>>> any professionally maintained project, but could be even more useful
>>> with a little bit of refinement.
>>>
>>> Personally, I find it useful to see a list of tests being run and 
>>> maybe
>>> spot if I missed a module, and to know how many tests were run, 
>>> etc., so
>>> something like this when running the test suite(s)?
>>>
>>>      Running 'TestSuiteName' test suite:
>>>         Running 'Test Name' ... [OK][FAILED]
>>>         Running 'Test Name' ... [OK][FAILED]
>>>         [n] unit tests passed, [n] failed
>>>
>>>      Running 'TestSuiteName' test suite:
>>>         Running 'Test Name' ... [OK][FAILED]
>>>         Running 'Test Name' ... [OK][FAILED]
>>>         [n] unit tests passed, [n] failed
>>>
>>>      Ran [n] unit tests in [n] test suites
>>>      [n] unit tests passed, [n] failed
>>>
>>> It's a poor example that needs more thought, but I was interested in
>>> getting the discussion started.
>> The thinking was that the user doesn't want to be bothered with a 
>> bunch
>> of text when there are no failures.  That said, I agree that more
>> verbose output in the success case would be useful in some cases.  
>> You
>> can get something kind of like your example if you provide the 
>> -ldebug
>> command line option when you run the test, e.g.,
>>
>>      newt -ldebug test net/nimble/host
>>      Executing test:
>>      /home/ccollins/repos/mynewt/core/bin/targets/unittest/net_nimble_host_test/test/net/nimble/host/test/net_nimble_host_test
>>      2016/10/03 08:00:49 [DEBUG]
>>      /home/ccollins/repos/mynewt/core/bin/targets/unittest/net_nimble_host_test/test/net/nimble/host/test/net_nimble_host_test
>>      2016/10/03 08:00:50 [DEBUG] o=[pass]
>>      ble_att_clt_suite/ble_att_clt_test_tx_find_info
>>      [pass] ble_att_clt_suite/ble_att_clt_test_rx_find_info
>>      [pass] ble_att_clt_suite/ble_att_clt_test_tx_read
>>      [...]
>>      [pass] 
>> ble_sm_sc_test_suite/ble_sm_sc_peer_nc_iio1_rio1_b1_iat2_rat2_ik3_rk3
>>      [pass] 
>> ble_sm_sc_test_suite/ble_sm_sc_peer_pk_iio2_rio0_b1_iat2_rat2_ik7_rk3
>>      [pass] 
>> ble_sm_sc_test_suite/ble_sm_sc_us_jw_iio3_rio3_b1_iat2_rat2_ik3_rk3
>>      [pass] 
>> ble_sm_sc_test_suite/ble_sm_sc_us_nc_iio1_rio1_b1_iat2_rat2_ik3_rk3
>>      [pass] 
>> ble_sm_sc_test_suite/ble_sm_sc_us_pk_iio2_rio0_b1_iat2_rat2_ik7_rk3
>>      [pass] ble_uuid_test_suite/ble_uuid_test_128_to_16
>>
>>      Passed tests: [net/nimble/host/test]
>>      All tests passed
>>
>> The output is a bit rough, and -ldebug produces a lot of extra output
>> that is not relevant, so there is some work to do here.  As an aside, 
>> I
>> think newt is not very consistent with its "-v" and "-ldebug" 
>> options.
>> As I understand it, "-v" is supposed to produce extra output about 
>> the
>> user's project; "-ldebug" is meant for debugging the newt tool 
>> itself,
>> and is supposed to generate output relating to newt's internals.
>>
>>> Also, having a 'startup' and 'teardown' function that runs before 
>>> and
>>> after every unit test in the test suite may be nice as well to clear
>>> any variables or put things into a known state, but I'm also curious
>>> about opinions there.
>>>
>>> Maybe have optional functions like this in every test suite module
>>> (this is taken from a project where we used CMOCK and UNITY:
>>> http://www.throwtheswitch.org/#download-section)
>>>
>>>     void setUp(void)
>>>     {
>>>        fifo_clear(&ff_non_overwritable);
>>>     }
>>>
>>>     void tearDown(void)
>>>     {
>>>
>>>     }
>> I agree.  Again, this is kind of half implemented currently, but it
>> needs some more work.  The testutil library exposes the following:
>>
>>      typedef void tu_post_test_fn_t(void *arg);
>>      void tu_suite_set_post_test_cb(tu_post_test_fn_t *cb, void 
>> *cb_arg);
>>
>> So, there is a "teardown" function at the suite level, but no startup
>> functions, and nothing at the individual case level.  Also, this
>> function doesn't get executed automatically unless testutil is
>> configured to do so.
>>
>> Long ago when I was working on the testutil library, I consciously
>> avoided adding this type of functionality.  I wanted the unit tests 
>> to
>> be easy to understand and debug, so I strived for a small API and
>> nothing automatic.  In retrospect, after writing several unit tests, 
>> I
>> do think automatic setup and teardown functions are useful enough to
>> include in the API.
>>
>> I also recall looking at CMock a while back when I was searching for
>> ideas.  I think it provides a lot of useful functionality, but it 
>> looked
>> like it did way more than we were interested in at the time.  Now 
>> that
>> the project is a bit more mature, it might be a good time to add some
>> needed functionality to the unit testing framework.
>>
>>> Happy to help here, but wanted to get a discussion started first.
>> I for one would welcome all ideas and contributions to the testutil
>> library.  Could you expand on the setup / teardown thoughts?  Would
>> these be executed per test case, or just per suite?  Also, my
>> understanding is that these function get executed automatically 
>> without
>> the framework needing to be told about them, is that correct?
>>
>> Thanks,
>> Chris
>>
>>
>

Re: Improve Unit Tests and Test Suite Output

Posted by hathach <th...@tinyusb.org>.
Hi all,

I previously used CMock & Unity as the unit testing framework for my own 
project. CMock is rather complex since it allows mocking the lower 
layers, thus isolating a module and making it easy to test/probe its 
behavior.

For example, when testing a service-adding function, all we care about 
is that ble_gatts_register_svcs() is finally invoked with the exact same 
svc_def. The behavior of ble_gatts_register_svcs() is subject to its own 
unit testing.
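
A rough sketch of what that looks like with CMock (the mock name follows 
CMock's usual <function>_ExpectAndReturn convention; the signature of 
ble_gatts_register_svcs() is simplified here and my_service_add() is a 
made-up function under test):

    #include "unity.h"
    #include "mock_ble_gatts.h"   /* CMock-generated from a header declaring
                                     int ble_gatts_register_svcs(
                                         const struct ble_gatt_svc_def *svcs);
                                     (simplified signature, for illustration) */

    void test_service_add_registers_svc_def(void)
    {
        static const struct ble_gatt_svc_def svc_def[] = { { 0 } };

        /* Expect exactly one call, made with this svc_def, returning 0. */
        ble_gatts_register_svcs_ExpectAndReturn(svc_def, 0);

        my_service_add(svc_def);   /* made-up function under test */
    }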

Though newt's testutil is still at the development stage, do we have a 
plan to implement some level of mocking framework like CMock? Otherwise 
it will be a challenge to simulate the lower layers and stimulate certain scenarios.

PS: I found that even with mocking, it is still hard to get decent unit 
test coverage of things like peripherals. And the integration tests are 
completely out of control :(

On 03/10/2016 22:42, Christopher Collins wrote:
> Hi Kevin,
>
> On Mon, Oct 03, 2016 at 03:08:32PM +0200, Kevin Townsend wrote:
>> I was wondering if there were any suggestions on how it might be
>> possible to improve the output of the unit tests to be a bit more
>> verbose, following some of the other frameworks out there for embedded
>> systems like CMOCK.
>>
>> Unit testing and test simulation is an important part of the system for
>> any professionally maintained project, but could be even more useful
>> with a little bit of refinement.
>>
>> Personally, I find it useful to see a list of tests being run and maybe
>> spot if I missed a module, and to know how many tests were run, etc., so
>> something like this when running the test suite(s)?
>>
>>      Running 'TestSuiteName' test suite:
>>         Running 'Test Name' ... [OK][FAILED]
>>         Running 'Test Name' ... [OK][FAILED]
>>         [n] unit tests passed, [n] failed
>>
>>      Running 'TestSuiteName' test suite:
>>         Running 'Test Name' ... [OK][FAILED]
>>         Running 'Test Name' ... [OK][FAILED]
>>         [n] unit tests passed, [n] failed
>>
>>      Ran [n] unit tests in [n] test suites
>>      [n] unit tests passed, [n] failed
>>
>> It's a poor example that needs more thought, but I was interested in
>> getting the discussion started.
> The thinking was that the user doesn't want to be bothered with a bunch
> of text when there are no failures.  That said, I agree that more
> verbose output in the success case would be useful in some cases.  You
> can get something kind of like your example if you provide the -ldebug
> command line option when you run the test, e.g.,
>
>      newt -ldebug test net/nimble/host
>      Executing test:
>      /home/ccollins/repos/mynewt/core/bin/targets/unittest/net_nimble_host_test/test/net/nimble/host/test/net_nimble_host_test
>      2016/10/03 08:00:49 [DEBUG]
>      /home/ccollins/repos/mynewt/core/bin/targets/unittest/net_nimble_host_test/test/net/nimble/host/test/net_nimble_host_test
>      2016/10/03 08:00:50 [DEBUG] o=[pass]
>      ble_att_clt_suite/ble_att_clt_test_tx_find_info
>      [pass] ble_att_clt_suite/ble_att_clt_test_rx_find_info
>      [pass] ble_att_clt_suite/ble_att_clt_test_tx_read
>      [...]
>      [pass] ble_sm_sc_test_suite/ble_sm_sc_peer_nc_iio1_rio1_b1_iat2_rat2_ik3_rk3
>      [pass] ble_sm_sc_test_suite/ble_sm_sc_peer_pk_iio2_rio0_b1_iat2_rat2_ik7_rk3
>      [pass] ble_sm_sc_test_suite/ble_sm_sc_us_jw_iio3_rio3_b1_iat2_rat2_ik3_rk3
>      [pass] ble_sm_sc_test_suite/ble_sm_sc_us_nc_iio1_rio1_b1_iat2_rat2_ik3_rk3
>      [pass] ble_sm_sc_test_suite/ble_sm_sc_us_pk_iio2_rio0_b1_iat2_rat2_ik7_rk3
>      [pass] ble_uuid_test_suite/ble_uuid_test_128_to_16
>
>      Passed tests: [net/nimble/host/test]
>      All tests passed
>
> The output is a bit rough, and -ldebug produces a lot of extra output
> that is not relevant, so there is some work to do here.  As an aside, I
> think newt is not very consistent with its "-v" and "-ldebug" options.
> As I understand it, "-v" is supposed to produce extra output about the
> user's project; "-ldebug" is meant for debugging the newt tool itself,
> and is supposed to generate output relating to newt's internals.
>
>> Also, having a 'startup' and 'teardown' function that runs before and
>> after every unit test in the test suite may be nice as well to clear
>> any variables or put things into a known state, but I'm also curious
>> about opinions there.
>>
>> Maybe have optional functions like this in every test suite module
>> (this is taken from a project where we used CMOCK and UNITY:
>> http://www.throwtheswitch.org/#download-section)
>>
>>     void setUp(void)
>>     {
>>        fifo_clear(&ff_non_overwritable);
>>     }
>>
>>     void tearDown(void)
>>     {
>>
>>     }
> I agree.  Again, this is kind of half implemented currently, but it
> needs some more work.  The testutil library exposes the following:
>
>      typedef void tu_post_test_fn_t(void *arg);
>      void tu_suite_set_post_test_cb(tu_post_test_fn_t *cb, void *cb_arg);
>
> So, there is a "teardown" function at the suite level, but no startup
> functions, and nothing at the individual case level.  Also, this
> function doesn't get executed automatically unless testutil is
> configured to do so.
>
> Long ago when I was working on the testutil library, I consciously
> avoided adding this type of functionality.  I wanted the unit tests to
> be easy to understand and debug, so I strived for a small API and
> nothing automatic.  In retrospect, after writing several unit tests, I
> do think automatic setup and teardown functions are useful enough to
> include in the API.
>
> I also recall looking at CMock a while back when I was searching for
> ideas.  I think it provides a lot of useful functionality, but it looked
> like it did way more than we were interested in at the time.  Now that
> the project is a bit more mature, it might be a good time to add some
> needed functionality to the unit testing framework.
>
>> Happy to help here, but wanted to get a discussion started first.
> I for one would welcome all ideas and contributions to the testutil
> library.  Could you expand on the setup / teardown thoughts?  Would
> these be executed per test case, or just per suite?  Also, my
> understanding is that these function get executed automatically without
> the framework needing to be told about them, is that correct?
>
> Thanks,
> Chris
>
>


Re: Improve Unit Tests and Test Suite Output

Posted by Christopher Collins <cc...@apache.org>.
Hi Kevin,

On Mon, Oct 03, 2016 at 03:08:32PM +0200, Kevin Townsend wrote:
> I was wondering if there were any suggestions on how it might be 
> possible to improve the output of the unit tests to be a bit more 
> verbose, following some of the other frameworks out there for embedded 
> systems like CMOCK.
> 
> Unit testing and test simulation is an important part of the system for 
> any professionally maintained project, but could be even more useful 
> with a little bit of refinement.
> 
> Personally, I find it useful to see a list of tests being run and maybe 
> spot if I missed a module, and to know how many tests were run, etc., so 
> something like this when running the test suite(s)?
> 
>     Running 'TestSuiteName' test suite:
>        Running 'Test Name' ... [OK][FAILED]
>        Running 'Test Name' ... [OK][FAILED]
>        [n] unit tests passed, [n] failed
> 
>     Running 'TestSuiteName' test suite:
>        Running 'Test Name' ... [OK][FAILED]
>        Running 'Test Name' ... [OK][FAILED]
>        [n] unit tests passed, [n] failed
> 
>     Ran [n] unit tests in [n] test suites
>     [n] unit tests passed, [n] failed
> 
> It's a poor example that needs more thought, but I was interested in 
> getting the discussion started.

The thinking was that the user doesn't want to be bothered with a bunch
of text when there are no failures.  That said, I agree that more
verbose output in the success case would be useful in some cases.  You
can get something kind of like your example if you provide the -ldebug
command line option when you run the test, e.g.,

    newt -ldebug test net/nimble/host
    Executing test:
    /home/ccollins/repos/mynewt/core/bin/targets/unittest/net_nimble_host_test/test/net/nimble/host/test/net_nimble_host_test
    2016/10/03 08:00:49 [DEBUG]
    /home/ccollins/repos/mynewt/core/bin/targets/unittest/net_nimble_host_test/test/net/nimble/host/test/net_nimble_host_test
    2016/10/03 08:00:50 [DEBUG] o=[pass]
    ble_att_clt_suite/ble_att_clt_test_tx_find_info
    [pass] ble_att_clt_suite/ble_att_clt_test_rx_find_info
    [pass] ble_att_clt_suite/ble_att_clt_test_tx_read
    [...]
    [pass] ble_sm_sc_test_suite/ble_sm_sc_peer_nc_iio1_rio1_b1_iat2_rat2_ik3_rk3
    [pass] ble_sm_sc_test_suite/ble_sm_sc_peer_pk_iio2_rio0_b1_iat2_rat2_ik7_rk3
    [pass] ble_sm_sc_test_suite/ble_sm_sc_us_jw_iio3_rio3_b1_iat2_rat2_ik3_rk3
    [pass] ble_sm_sc_test_suite/ble_sm_sc_us_nc_iio1_rio1_b1_iat2_rat2_ik3_rk3
    [pass] ble_sm_sc_test_suite/ble_sm_sc_us_pk_iio2_rio0_b1_iat2_rat2_ik7_rk3
    [pass] ble_uuid_test_suite/ble_uuid_test_128_to_16

    Passed tests: [net/nimble/host/test]
    All tests passed

The output is a bit rough, and -ldebug produces a lot of extra output
that is not relevant, so there is some work to do here.  As an aside, I
think newt is not very consistent with its "-v" and "-ldebug" options.
As I understand it, "-v" is supposed to produce extra output about the
user's project; "-ldebug" is meant for debugging the newt tool itself,
and is supposed to generate output relating to newt's internals.

> Also, having a 'startup' and 'teardown' function that runs before and
> after every unit test in the test suite may be nice as well to clear
> any variables or put things into a known state, but I'm also curious
> about opinions there.
>
> Maybe have optional functions like this in every test suite module
> (this is taken from a project where we used CMOCK and UNITY:
> http://www.throwtheswitch.org/#download-section)
> 
>    void setUp(void)
>    {
>       fifo_clear(&ff_non_overwritable);
>    }
> 
>    void tearDown(void)
>    {
> 
>    }

I agree.  Again, this is kind of half implemented currently, but it
needs some more work.  The testutil library exposes the following:

    typedef void tu_post_test_fn_t(void *arg);
    void tu_suite_set_post_test_cb(tu_post_test_fn_t *cb, void *cb_arg);

So, there is a "teardown" function at the suite level, but no startup
functions, and nothing at the individual case level.  Also, this
function doesn't get executed automatically unless testutil is
configured to do so.
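
Usage looks roughly like this (a sketch only -- the TEST_SUITE body and 
fifo names are borrowed from Kevin's example, and as noted the callback 
only fires if testutil is configured to invoke it):

    static void
    fifo_post_test(void *arg)
    {
        /* Suite-level "teardown": put the fifo back into a known state. */
        fifo_clear((fifo_t *)arg);
    }

    TEST_SUITE(fifo_suite)
    {
        tu_suite_set_post_test_cb(fifo_post_test, &ff_non_overwritable);

        fifo_push_pop();
        fifo_overflow();
    }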

Long ago when I was working on the testutil library, I consciously
avoided adding this type of functionality.  I wanted the unit tests to
be easy to understand and debug, so I strived for a small API and
nothing automatic.  In retrospect, after writing several unit tests, I
do think automatic setup and teardown functions are useful enough to
include in the API.

I also recall looking at CMock a while back when I was searching for
ideas.  I think it provides a lot of useful functionality, but it looked
like it did way more than we were interested in at the time.  Now that
the project is a bit more mature, it might be a good time to add some
needed functionality to the unit testing framework.

> Happy to help here, but wanted to get a discussion started first.

I for one would welcome all ideas and contributions to the testutil
library.  Could you expand on the setup / teardown thoughts?  Would
these be executed per test case, or just per suite?  Also, my
understanding is that these functions get executed automatically without
the framework needing to be told about them, is that correct?

Thanks,
Chris