Posted to commits@nuttx.apache.org by "fjpanag (via GitHub)" <gi...@apache.org> on 2023/01/31 13:54:56 UTC

[GitHub] [nuttx] fjpanag commented on issue #8017: Create a Software Qualification Testing for Release new versions

fjpanag commented on issue #8017:
URL: https://github.com/apache/nuttx/issues/8017#issuecomment-1410399666

   Some thoughts on enhancing testing of NuttX:
   
   # Expect
   When we want to test the external interfaces of the system (e.g. console, protocols, etc.), we make heavy use of `expect` scripts:
   
   https://linux.die.net/man/1/expect  
   https://likegeeks.com/expect-command/
   
   We run the application in the sim, and the script interacts with the system through NSH. Practically any manual interaction can be scripted this way, easily.
   
   Our CI runs these scripts, ensuring that at least the basic functionality of the system and its interfaces is OK.
   
   Another way we use it is for testing protocols.
   We configure the sim to use a virtual serial port, and `expect` interacts with the system through this port,
   exercising all system commands.
   
   I am also planning to use `expect` for lower-level tests, like device drivers.
   I am thinking of having it send dummy data to mocked devices and evaluating the system's behavior.
   
   All in all, I believe that all (or at least most) NSH commands and NSH-triggered functionality can be easily
   tested with `expect`.
   
   _I can provide more information on how to test the system with `expect`, but unfortunately I cannot share my existing scripts verbatim, as they refer to proprietary projects._
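   
   As a generic illustration (not one of my actual scripts), a minimal `expect` script driving the simulator's NSH console could look like the sketch below. The `nsh> ` prompt string, the `./nuttx` binary path, and the choice of the `free` command are all assumptions for the example:
   
   ```tcl
   #!/usr/bin/expect -f
   # Sketch: spawn the NuttX simulator and exercise one NSH command.
   # Prompt string, binary path, and command are illustrative assumptions.
   
   set timeout 10
   spawn ./nuttx
   
   # Wait for the NSH prompt; fail the test if it never appears.
   expect {
       "nsh> "  {}
       timeout  { puts "FAIL: no NSH prompt"; exit 1 }
   }
   
   # Run a command and check that it produces the expected output.
   send "free\r"
   expect {
       "total"  { puts "PASS: free reported memory statistics" }
       timeout  { puts "FAIL: no output from free"; exit 1 }
   }
   
   exit 0
   ```
   
   The same pattern scales to arbitrary interaction sequences: each `send`/`expect` pair encodes one step of what would otherwise be a manual console session.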
   
   
   # Code Coverage
   Code coverage data can be a useful metric to evaluate the coverage of the tests.
   NuttX in sim supports `gcov`, and we are already using it in all our CI tests.
   
   The simulator starts and executes its tests, either using `expect`, any other means of input, or purpose-built applications (like os-test).
   Upon finishing, the coverage data is created automatically.
   
   At the end of the CI pipeline, `lcov` is used to merge all reports into a single one.
   This single report provides the overall metrics of the tests.
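   
   For reference, the merge step is typically just a couple of `lcov` invocations; a sketch (the tracefile and directory names are assumptions):
   
   ```shell
   # Sketch: capture per-test coverage data and merge it into one report.
   # File and directory names here are illustrative assumptions.
   
   # After each test run, capture the gcov data into a tracefile:
   lcov --capture --directory . --output-file os-test.info
   lcov --capture --directory . --output-file fs-test.info
   
   # At the end of the pipeline, merge all tracefiles into a single one:
   lcov --add-tracefile os-test.info --add-tracefile fs-test.info \
        --output-file total.info
   
   # Generate a browsable HTML report with the overall metrics:
   genhtml total.info --output-directory coverage-report
   ```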
   
   I believe it would be nice to have this in NuttX, especially since it requires practically zero effort (everything is already there, enabled with Kconfigs).
   
   At the very least it would be an indication of what is tested and what may need improvement.
   We may even add a "rule" that each release must at least increase the test coverage (even marginally).
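   
   Such a rule could be enforced by a tiny CI check; a sketch, where the two percentages are placeholders that would in practice be parsed from the `lcov` summaries of the last release and the current run:
   
   ```shell
   # Sketch of a coverage "ratchet" check: fail CI if coverage drops.
   # The two numbers below are placeholder assumptions.
   previous=82.4
   current=82.7
   
   # awk does the floating-point comparison that plain sh cannot.
   if awk -v p="$previous" -v c="$current" 'BEGIN { exit !(c >= p) }'; then
       echo "coverage check passed: $current% >= $previous%"
   else
       echo "coverage regressed: $current% < $previous%" >&2
       exit 1
   fi
   ```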
   
   _I can provide my `lcov` scripts that combine the results of the various individual tests and generate the reports._
   
   
   # Tests Output
   One of the greatest issues with testing NuttX in an automated way is that the output of the tests
   (os-test, mm-test, fs-test, etc.) is inconsistent.
   
   Every test has its own output format.
   If I recall correctly, one test even uses the word "error" for a non-erroneous state (e.g. an expected error).
   
   Parsing the output is very hard, and adding tests can be tedious.
   At least once I had a test fail, but my CI script failed to recognize the issue.
   
   Standardizing the output of all tests would be greatly beneficial.
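   
   To illustrate the point, a uniform per-test result line would make CI parsing trivial. The `TEST <name>: PASS|FAIL` convention below is just an assumption for the example, not an existing NuttX format:
   
   ```shell
   # Sketch: with one standardized result line per test, a single grep
   # suffices for any test. The format itself is a hypothetical example.
   output='TEST os-test: PASS
   TEST mm-test: FAIL heap corruption detected
   TEST fs-test: PASS'
   
   # Count failures uniformly, regardless of which test produced them:
   fails=$(printf '%s\n' "$output" | grep -c ': FAIL')
   echo "failed tests: $fails"
   if [ "$fails" -ne 0 ]; then
       echo "CI would mark this run as failed"
   fi
   ```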
   
   _Note that all sim tests are currently broken for us, due to a regression. Something has changed recently; the outputs differ and all tests fail. I am still investigating._
   
   
   # Networking
   The most neglected part of NuttX (at least IMO) is networking.
   There is no automated testing I can do there.
   
   And it seems to be quite a problematic part of NuttX.
   I have been struggling with networking for a few months now, and it is still totally unstable for me.
   I also see various regressions occasionally; for example, the latest master hardfaults immediately when I try to use `sendfile`.
   
   Unfortunately, I never managed to get networking running on the simulator (let alone within CI),
   so no tests can be written for it at the moment.
   
   I believe that this is the part of NuttX that requires the most attention, testing-wise.
   
   Even basic HTTP, ftpc, ftpd, etc. communication tests would be a good indication that the system is still functioning properly.
   
   
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@nuttx.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org