Posted to dev@nuttx.apache.org by da...@gmail.com on 2019/12/17 09:27:11 UTC

[REQUIREMENTS- NuttX Workflow]


I am creating this thread to gather ONLY REQUIREMENTS. See [DISCUSS - NuttX
Workflow].

After the requirements are gathered in one place we can discuss the merits
and vote on them.

Re: [DISCUSS - NuttX Workflow]

Posted by Gregory Nutt <sp...@gmail.com>.
> As we fill in the details, this discussion will naturally blend in specifics of implementation and tools — I expect “git” might come up in the discussions ;)
But only when we are discussing details of implementation... AFTER we have 
established functional requirements.  Git discussions are basically out 
of place, disrupt communication, and usually derail the discussion of 
functionality.  It has happened repeatedly already.  Let's decide what 
we want to do, then deal with how to do it as the next step.

Re: [DISCUSS - NuttX Workflow]

Posted by "David S. Alessio" <da...@gmail.com>.
We’ve digressed a bit on this thread.  Let’s see if we can reboot DavidS’ Workflow thread and keep it on topic.

Let me start by stating a few [obvious] objectives:
- Keep things simple for those NuttX users who prefer to work with a zip’d release.
- Provide best-practice tools and workflow to maximize the productivity of developers living on the bleeding edge.
- Define a disciplined process that ensures the continued quality of the project.

As we fill in the details, this discussion will naturally blend in specifics of implementation and tools — I expect “git” might come up in the discussions ;)


Cheers,
-david





> On Dec 17, 2019, at 1:36 AM, david.sidrane@gmail.com wrote:
> 
> [DISCUSS - NuttX Workflow]
> 
> I am creating this thread to discuss what we as a community would like to
> have as NuttX Workflow. I have also created [REQUIREMENTS- NuttX Workflow]
> I am asking us to not add discussion to [REQUIREMENTS- NuttX Workflow].
> Please do that here.
> 
> As this discussion evolves we shall create requirements and add them
> to the [REQUIREMENTS- NuttX Workflow] thread.
> 
> Please use [DISCUSS - NuttX Workflow] to propose and discuss the ideas
> and experiences you have to offer.
> 
> Be detailed; give examples, list pros and cons, why you like it and why you
> don't.
> 
> Then, after the requirements are gathered in one place and discussed here,
> we can vote on them.
> 
> Thank you.
> 
> David


Re: [DISCUSS - NuttX Workflow]

Posted by "Juha Niskanen (Haltian)" <ju...@haltian.com>.
-1 for anything that has git submodules in it.

Didn't we try submodules at one time, and it did not work out and was abandoned? Why is this even being discussed now? We can do the Apache transition with the repositories as they are today and change the
workflow or source code organization later, right? There is not much sense in reorganizing everything at the same time.

Some of our projects don't need apps, some use heavily customized apps instead of the ones from Greg's tree, and some need just NSH. When one is supporting multiple NuttX-based products, there can be many apps trees, all different or of different versions. We use repo to integrate NuttX with other software. Using repo + submodules would just add an extra dimension of complexity for no reason. I don't want to change our company's CI and test systems because of submodules. There is no client to bill for the hours.

Best Regards,
   Juha Niskanen
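
For context, the "repo" flow Juha describes pins every project to an exact
revision in a manifest, which already gives an atomic, reproducible checkout.
A minimal sketch (the manifest URL and branch here are hypothetical):

    repo init -u https://example.com/manifests/product.git -b release-1.0
    repo sync   # checks out nuttx, apps, and the other integrated projects
                # at exactly the revisions pinned in the manifest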


________________________________
From: Sebastien Lorquet <se...@lorquet.fr>
Sent: Thursday, December 19, 2019 10:32 AM
To: dev@nuttx.apache.org <de...@nuttx.apache.org>
Subject: Re: [DISCUSS - NuttX Workflow]

Looks really complex to me, if any contributor has to master all of this
perfectly to contribute officially.

The submodule sync with these specific options is already too much.

Do you really realize all that has to be memorized just for a hat repo?


To put it another way: if you assure me that this hat repo is completely
optional and that I will never ever have to use it, I'm okay. Let me use my two
repos as usual and play with your hat submodules without annoying anyone else.


But, if this workflow requires such a complex string of git commands, including
rebases, any time I have to push anything to the apps or nuttx repo, I don't
want to do it.


Again just my opinion.

But the endless list of complex git commands with additional options is probably
a blocker for many other people too.

I don't even want to read it all.

Sebastien

On 18/12/2019 at 15:20, David Sidrane wrote:
>> what advantage does in fact the submodule method bring?
> See below
>
>> Even with a hat repository that contains two submodules (apps and nuttx),
>> you
>> will have to send separate pull requests for each submodule, right?
> Yes. But they commit in 1 atomic operation.
>
>
> Submodules 101
>
> This example is with write access on the repo - for committers
>
> git clone <url to knot> NuttX
> cd NuttX
> git checkout master
> git submodule sync --recursive && git submodule update --init --recursive
>
> git checkout -b master_add_tof_driver
>
> cd nuttx
> git checkout -b master_add_tof_driver
>
> #work and commit - rebase on self and remove cruft.
> git rebase -i master
> #reorder, squash and fixup the commits (learn about mv-changes; it is your
> friend) - you will look organized.
>
> cd ../apps
> git checkout -b master_add_tof_driver
>
> #work and commit - rebase on self and remove cruft and noise.
> git rebase -i master
> #reorder, squash and fixup the commits (learn about mv-changes; it is your
> friend) - you will look organized.
>
> cd ..  (back to the hat repo root)
> #Build and test locally.
> ## AOK
>
> cd apps
> git push origin master_add_tof_driver
>
> cd ../nuttx
> git push origin master_add_tof_driver
>
> cd ..  (back to NuttX, the hat repo)
> git add nuttx apps
> git commit -m "Update NuttX with TOF driver"
>
> git push origin master_add_tof_driver
>
> Ok so now (SHA1s simplified to compare them)
>
> NuttX master SHA1 0000 points to
>   \nuttx master SHA1 2222
>   \apps master SHA1 1111
>
> NuttX master_add_tof_driver SHA1 cccc points to
>   \nuttx master SHA1 aaa
>   \apps master SHA1 bbb
>
> merge PR from apps to master apps
> merge PR from nuttx to master nuttx
>
> NuttX master SHA1 0000 points to (still builds and runs)
>   \nuttx master SHA1 2222
>   \apps master SHA1 1111
>
> But the master branches of the submodules have moved:
>
>   \nuttx master SHA1 aaa
>   \apps master SHA1 bbb
>
>
> merge PR from NuttX to master NuttX (atomic replacement)
> NuttX master SHA1 zzzzz points to
>   \nuttx master SHA1 aaa
>   \apps master SHA1 bbb
>
>
>
> -----Original Message-----
> From: Sebastien Lorquet [mailto:sebastien@lorquet.fr]
> Sent: Wednesday, December 18, 2019 5:52 AM
> To: dev@nuttx.apache.org
> Subject: Re: [DISCUSS - NuttX Workflow]
>
> Wait,
>
> what advantage does in fact the submodule method bring?
>
> Even with a hat repository that contains two submodules (apps and nuttx),
> you
> will have to send separate pull requests for each submodule, right?
>
> Sebastien
>
> On 18/12/2019 at 14:40, Gregory Nutt wrote:
>> On 12/18/2019 4:23 AM, David Sidrane wrote:
>>> That is precisely what submodules do: submodules aggregate, under a single
>>> SHA1, N repositories.
>>>
>>> The problem is: How to have atomic checkout of the correct configuration
>>> without a temporal shift?
>>>
>>> Please describe how you would do this. Give detailed steps.
>> I don't see any difference in versioning with submodules.  You have to
>> explicitly state the UUID you are using in the submodule (unless there is
>> a GIT sub-module trick I don't know).
>>
>> So how would you check out the correct configuration with sub-modules?
>> Seems to me that it is the same issue.
>>
>> I would vote about 18 billion minus for this change.  But architecture
>> designs are not justified by blatant expediency.
>>
>> Let's not go this way.
>>
>>
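
For comparison, the consumer side of the hat-repo model David sketched needs
only this to get a temporally consistent checkout. A minimal sketch, reusing
the placeholder knot URL and SHA1s from his example above:

    git clone --recursive <url to knot> NuttX
    cd NuttX
    git checkout 0000                         # any hat commit, tag, or branch
    git submodule update --init --recursive   # pins nuttx to 2222 and apps to
                                              # 1111, exactly as recorded in
                                              # hat commit 0000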

RE: [DISCUSS - NuttX Workflow]

Posted by David Sidrane <Da...@nscdg.com>.
Greg, please read the first post again.

Re: [DISCUSS - NuttX Workflow]

Posted by Xiang Xiao <xi...@gmail.com>.
I think we can learn good practices from PX4, but we shouldn't take
everything without justification.
PX4 has a mature workflow; maybe we can extract the core OS part as
our base to boost the initial setup.
The important thing now is to define the high-level workflow and vote
on it in the community.
Then we may adapt the workflow from PX4, and even reuse the test
infrastructure if possible.

Thanks
Xiang

On Fri, Dec 20, 2019 at 7:45 PM Alan Carvalho de Assis
<ac...@gmail.com> wrote:
>
> Hi David,
>
> On 12/20/19, David Sidrane <da...@apache.org> wrote:
> > Hi Nathan,
> >
> > On 2019/12/20 02:51:56, Nathan Hartman <ha...@gmail.com> wrote:
> >> On Thu, Dec 19, 2019 at 6:24 PM Gregory Nutt <sp...@gmail.com> wrote:
> >> > >> ] A bad build system change can cause serious problems for a lot of
> >> people around the world.  A bad change in the core OS can destroy the
> >> good
> >> reputation of the OS.
> >> > > Why is this the case? Users should not be using unreleased code or be
> >> encouraged to use it. If they are, one solution is to make more frequent
> >> releases.
> >> > I don't think that the number of releases is the factor.  It is time in
> >> > people's hands.  Subtle corruption of OS real time behavior is not easily
> >> > tested.   You normally have to specially instrument the software and
> >> > set up a special test environment, perhaps with a logic analyzer, to
> >> > detect these errors.  Errors in the core OS can persist for months and,
> >> > in at least one case I am aware of, years, until someone sets up the
> >> > correct instrumented test.
> >>
> >> And:
> >>
> >> On Thu, Dec 19, 2019 at 4:20 PM Justin Mclean <ju...@classsoftware.com>
> >> wrote:
> >> > > ] A bad build system change can cause serious problems for a lot of
> >> people around the world.  A bad change in the core OS can destroy the
> >> good
> >> reputation of the OS.
> >> >
> >> > Why is this the case? Users should not be using unreleased code or be
> >> encouraged to use it. If they are, one solution is to make more frequent
> >> releases.
> >>
> >> Many users are only using released code. However, whatever is in "master"
> >> eventually gets released. So if problems creep in unnoticed, downstream
> >> users will be affected. It is only delayed.
> >>
> >> I can personally attest that those kinds of errors are extremely difficult
> >> to detect and trace. It does require a special setup with a logic analyzer
> >> or oscilloscope, and sometimes other tools, not to mention a whole setup to
> >> produce the right stimuli, several pieces of software that may have to be
> >> written specifically for the test....
> >>
> >> I have been wracking my brain on and off thinking about how we could set
> >> up an automated test system to find errors related to timing etc.
> >> Unfortunately, unlike ordinary software for which you can write an
> >> automated test suite, this sort of embedded RTOS will need specialized
> >> hardware to conduct the tests. That's a subject for another thread and I
> >> don't know if now is the time, but I will post my thoughts eventually.
> >>
> >> Nathan
> >>
> >
> > From the proposal
> >
> > "Community
> >
> > NuttX has a large, active community.  Communication is via a Google group at
> > https://groups.google.com/forum/#!forum/nuttx where there are 395 members as
> > of this writing.  Code is currently maintained at Bitbucket.org at
> > https://bitbucket.org/nuttx/.  Other communications are through Bitbucket
> > issues and also via Slack for focused, interactive discussions."
> >
> >
> >> Many users are only using released code.
> >
> > Can we ask the 395 members?
> >
> > I can only share my experience with NuttX since I began working on the
> > project in 2012 for multiple companies.
> >
> > Historically (based on my time on the project) releases were build tested
> > - by this I mean that the configurations were updated and thus created a
> > set of "Build Test Vectors" (BTV). Consider the number of permutations
> > based solely on
> > (http://nuttx.org/doku.php?id=documentation:configvars), which shows 95,338
> > CONFIG_* hits. Yes, there are duplicates on the page, and dependencies. This
> > is just meant to give a number of bits....
> >
> > The total space is very large.
> >
> > The BTV coverage of that space was very sparse.
> >
> > IIRC Greg gave the build testing task a day of time. It was repeated after
> > errors were found.  I am not aware of any other testing. Are you?
> >
> > There were no Release Candidate (rc), alpha, or beta tests that ran this
> > code on real systems, and very few, if any, Run Test Vectors (RTV) - I
> > have never seen a test report - has anyone?
> >
> > One way to look at this is Sporadic Integration (SI) with limited BTV and
> > minimal RTV.  Total Test Vector Coverage TTVC = BTV + RTV.  The ROI of this
> > way of working, from a reliability perspective, was and is very small.
> >
> > A herculean effort on Greg's part with little return: we released code with
> > many significant and critical errors in it. See the ReleaseNotes and the
> > commit log.
> >
> > Over the years Greg referred to TRUNK (yes, it was on SVN) and master as his
> > "own sandbox", stating it should not be considered stable or build-able. This
> > is evident in the commit log.
> >
>
> Please stop focusing on the people (Greg) and let's talk about the workflow.
> We are here to discuss how we can improve the process; we are not
> talking about throwing away the NuttX Build System and moving to PX4.
>
> You are picturing something that is not true.
>
> We have issues, as FreeRTOS, MBED and Zephyr also do. But it is not
> Greg's or the Build System's fault.
>
> Please, stop! It is disgusting!
>
> > I have personally never used a release from a tarball. Given the above, why
> > would I? It is less stable than master at TC = N
> > (https://www.electronics-tutorials.ws/rc/rc_1.html) where N is some number
> > of days after a release. Unfortunately, based on the current practices (a
> > very unprofessional workflow), N is also dictated by when apps and nuttx
> > are actually building for a given target's set of BTV.
> >
>
> It is not "unprofessional"; it was what we could do based on our
> hardware limitations.
>
> > With the tools and resources that exist in our work today, quite frankly:
> > this is unacceptable and an embarrassment.
> >
>
> Oh my Gosh! Please don't do it.
>
>
> > I suspect this is why there is a Tizen. The modern era - gets it.
> > (Disclaimer: I am an old dog - I am learning to get it.)
> >
>
> Tizen exists because companies want to have control.
> This is the same reason why Red Hat and others maintain their own Linux
> kernels themselves.
>
> > --- Disclaimer ---
> >
> > In the following, I am not bragging about PX4 or selling tools; I am
> > merely trying to share our experiences for the betterment of NuttX.
> >
> > From what I understand PX4 has the most instances of NuttX running on real
> > HW in the world. Over 300K. (I welcome other users to share their numbers)
> >
> > PX4's Total TTVC is still limited, but much, much greater than NuttX.
> >
> > We use Continuous Integration (CI) on NuttX in PX4 on every commit on PRs.
> >
> >       C/C++ CI / build (push) Successful in 3m
> >       Compile MacOS Pending — This commit is being built
> >       Compile All Boards — This commit looks good
> >       Hardware Test — This commit looks good
> >       SITL Tests — This commit looks good
> >       SITL Tests (code coverage) — This commit looks good
> >       ci/circleci — Your tests passed on CircleCI!
> >       continuous-integration/appveyor/pr — AppVeyor build succeeded
> >       continuous-integration/jenkins/pr-head — This commit looks good
> >
> >
> > We run tests on HW.
> >
> > http://ci.px4.io:8080/blue/organizations/jenkins/PX4_misc%2FFirmware-hardware/detail/pr-mag-str-preflt/1/pipeline
> >
> > I say limited because of the set of arch we use and the way we configure the
> > OS.
> >
> > I believe this to be true of all users.
> >
> > The benefit of a community is that the sum of all TTVC finds the
> > problems and fixes them.
> >
> > Why not maximize TTVC - if it will have a huge ROI and it is free:
> >
> > PX4 will contribute all that we have. We just need temporally
> > consistent builds. Yeah, he is on the submodule thing AGAIN :)
> >
>
> Just to make a long story short: we already have solutions for SW and HW CI.
>
> Besides the buildbot (https://buildbot.net) that was implemented and
> tested by Fabio Balzano, Xiaomi also has a build test for NuttX.
>
> At the end of the day, it is not only Greg testing the system; we all are
> testing it as well.
>
> Don't try to push PX4 down our throats; it will not work this way.
> Let's keep the Apache way, it is a democracy!
>
> BR,
>
> Alan

Re: [DISCUSS - NuttX Workflow]

Posted by Fabio Balzano <fa...@elfarolab.com>.
Buildbot is written in Python and the configuration is very flexible; you can do whatever you want, even make a coffee after a git event. The server keeps the configurations and can either poll or react to events from the git server.

In my configuration there is an AWS buildbot server with a build configuration for each board I have, a laptop (the "worker") that receives what to do, and the boards under test connected via a USB hub. Then, at the end of the build, Buildbot sends a command to a "quality controller" board to begin the hardware tests on the freshly flashed boards. The tests check:

-if realtime timings are consistent with the previous build
-if hardware behavior is still nominal
-if power consumption is still nominal

In case of any anomalies, the buildbot server gets informed, the build fails, and it is marked red. Then you can configure what to do next: re-attempt, drop the build, send notifications, burn a rocket...
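
For anyone who wants to reproduce a setup like this, the buildbot skeleton is
small. A minimal sketch (the hostname, directories, and worker credentials are
hypothetical; the build logic itself lives in the master's master.cfg, where
the "2 hours" Fabio mentions elsewhere in the thread is the scheduler's
treeStableTimer):

    pip install buildbot buildbot-worker
    buildbot create-master ~/nuttx-master      # creates master.cfg.sample to
                                               # edit: git poller, schedulers,
                                               # one builder per board config
    buildbot-worker create-worker ~/nuttx-worker \
        buildmaster.example.com laptop-worker secret-pass
    buildbot start ~/nuttx-master
    buildbot-worker start ~/nuttx-worker       # run on the laptop with the
                                               # boards attached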



> On 20 Dec 2019, at 13:28, David Sidrane <Da...@nscdg.com> wrote:
> 
> I am not familiar with buildbot or this sort of setup, so please forgive
> some simple-minded questions.
> 
> Is this SW CI or HW CI or both?
> 
> How does the RPi/BBB/Laptop fit into the picture?
> 
> Any Pictures?
> 
> David
> 
> -----Original Message-----
> From: Fabio Balzano [mailto:fabio@elfarolab.com]
> Sent: Friday, December 20, 2019 5:22 AM
> To: dev@nuttx.apache.org
> Subject: Re: [DISCUSS - NuttX Workflow]
> 
> 2 hours is a configured parameter; it is there to allow bursts of commits. It
> can be reduced to 0 if you need real-time building, and the buildbot server
> can also provision remote testing of the builds.
> 
>> On 20 Dec 2019, at 13:09, David Sidrane <Da...@nscdg.com> wrote:
>> 
>> Hi Fabio,
>> 
>> What are the capabilities?
>> 
>> Is this 1 RPi/BBB per NuttX board?
>> 
>> David
>> 
>> -----Original Message-----
>> From: Fabio Balzano [mailto:fabio@elfarolab.com]
>> Sent: Friday, December 20, 2019 5:06 AM
>> To: dev@nuttx.apache.org
>> Subject: Re: [DISCUSS - NuttX Workflow]
>> 
>> Hello,
>> 
>> Yes, the buildbot server is down; later today I will bring it up. Yes, you
>> can do remote builds using an RPi/BBB or similar, or local builds performed
>> by the server itself. I can set up and maintain the server for the NuttX
>> project in case you think it is useful.
>> 
>> Thank you so much
>> Fabio Balzano
>> 
>>> On 20 Dec 2019, at 13:00, Alan Carvalho de Assis <ac...@gmail.com>
>>> wrote:
>>> 
>>> Hi David,
>>> 
>>> Sorry for scolding you in public as well, but I think we don't need to
>>> assign guilt.
>>> 
>>> So, I got the impression you were doing it to promote PX4 test
>>> workflow as the best solution for all the NuttX issues.
>>> 
>>> And although 300K drones are a lot, there are many commercial products
>>> using NuttX: many Sony audio recorders, Moto Z Snaps, thermal
>>> printers, etc. We probably have products that exceed that number.
>>> 
>>> I think Fabio recently changed the buildbot link. BTW, I just remembered
>>> another alternative that Sebastien and I did about 3 years ago:
>>> 
>>> https://bitbucket.org/acassis/raspi-nuttx-farm/src/master/
>>> 
>>> The idea was to use low-cost Raspberry Pis as a distributed build test
>>> for NuttX. It worked fine! You just define a board file with the
>>> configuration you want to test and it is done.
>>> 
>>> BR,
>>> 
>>> Alan
>>> 
>>>> On 12/20/19, David Sidrane <da...@apache.org> wrote:
>>>> Hi Alan,
>>>> 
>>>> Sorry if my intent was misunderstood. I am merely stating facts on where
>>>> we are and how we got there. I am not assigning blame. I am not forcing
>>>> anything; I am giving some examples of how we can make the project
>>>> complete and better. We can use all of it, some of it, or none of it.
>>>> That is a group decision.
>>>> 
>>>> Also, please do fill us in on where we can see the SW CI & HW CI you
>>>> mentioned. Do you have links? Maybe we can use them now?
>>>> 
>>>> Again Sorry!
>>>> 
>>>> David
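
Alan's board-file idea sketches out to roughly this kind of loop. A purely
illustrative sketch, not the actual raspi-nuttx-farm script (boards.txt, the
PASS/FAIL logging, and the example board/config names are made up; it assumes
NuttX's usual tools/configure.sh <board>/<config> step):

    # boards.txt: one "board config" pair per line, e.g. "stm32f4discovery nsh"
    while read board config; do
      make distclean
      ./tools/configure.sh "$board/$config"
      make -j4 </dev/null && echo "PASS $board/$config" >> results.txt \
                          || echo "FAIL $board/$config" >> results.txt
    done < boards.txt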

RE: [DISCUSS - NuttX Workflow]

Posted by David Sidrane <Da...@nscdg.com>.
I am not familiar with buildbot or this sort of setup, so please forgive
some simple-minded questions.

Is this SW CI or HW CI or both?

How does the RPi/BBB/Laptop fit into the picture?

Any Pictures?

David


Re: [DISCUSS - NuttX Workflow]

Posted by Fabio Balzano <fa...@elfarolab.com>.
2 hours is a configured parameter; it is there to allow bursts of commits. It can be reduced to 0 if you need real-time building, and the buildbot server can also provision remote testing of the builds.


Re: [DISCUSS - NuttX Workflow]

Posted by Fabio Balzano <fa...@elfarolab.com>.
You can connect as many boards as you want via USB; it all depends on the processing power and the required deadline time. Personally, I use an old laptop, and I can provision and automate a fresh build of NuttX at every new git commit for 15 boards in 2 hours.
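
As a purely illustrative sketch of what one per-board hardware check in such a
setup might look like (the OpenOCD board config, flash offset, serial device,
and the "NuttShell" banner check are all hypothetical placeholders, not
Fabio's actual scripts):

    # Flash the freshly built image, then check that the NSH banner appears.
    openocd -f board/stm32f4discovery.cfg \
            -c "program nuttx/nuttx.bin verify reset exit 0x08000000"
    stty -F /dev/ttyUSB0 115200 raw
    timeout 10 grep -m1 "NuttShell" /dev/ttyUSB0 && echo PASS || echo FAIL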

> On 20 Dec 2019, at 13:09, David Sidrane <Da...@nscdg.com> wrote:
> 
> Hi Fabio,
> 
> What are the capabilities?
> 
> It this 1 RPi/BBB per board nuttx board?
> 
> David

RE: [DISCUSS - NuttX Workflow]

Posted by David Sidrane <Da...@nscdg.com>.
Hi Fabio,

What are the capabilities?

Is this 1 RPi/BBB per NuttX target board?

David

-----Original Message-----
From: Fabio Balzano [mailto:fabio@elfarolab.com]
Sent: Friday, December 20, 2019 5:06 AM
To: dev@nuttx.apache.org
Subject: Re: [DISCUSS - NuttX Workflow]

Hello,

Yes, the buildbot server is down; later today I will bring it up. You can
do remote builds using an RPi/BBB or similar, or local builds performed by
the server itself. I can set up and maintain the server for the NuttX project
if you think it is useful.

Thank you so much
Fabio Balzano


Re: [DISCUSS - NuttX Workflow]

Posted by Fabio Balzano <fa...@elfarolab.com>.
Hello,

Yes, the buildbot server is down; later today I will bring it up. You can do remote builds using an RPi/BBB or similar, or local builds performed by the server itself. I can set up and maintain the server for the NuttX project if you think it is useful.

Thank you so much
Fabio Balzano 
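
For reference, attaching a remote RPi/BBB as a build host like this needs only the stock buildbot-worker tooling; the master host name, worker name, and password below are placeholders:

    pip3 install buildbot-worker
    buildbot-worker create-worker ~/worker master.example.org:9989 \
        rpi-worker secret
    buildbot-worker start ~/worker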


Re: [DISCUSS - NuttX Workflow]

Posted by Alan Carvalho de Assis <ac...@gmail.com>.
Hi David,

Sorry for scolding you in public as well, but I think we don't need to
assign blame.

I got the impression you were doing it to promote the PX4 test
workflow as the best solution for all the NuttX issues.

And although 300K drones are a lot, there are many commercial products
using NuttX: many Sony audio recorders, Moto Z Snaps, thermal
printers, etc. We probably have products that exceed that number.

I think Fabio recently changed the buildbot link. BTW, I just remembered
another alternative that Sebastien and I built about 3 years ago:

https://bitbucket.org/acassis/raspi-nuttx-farm/src/master/

The idea was to use low-cost Raspberry Pis as a distributed build test
for NuttX. It worked fine! You just define a board file with the
configuration you want to test, and it is done.

BR,

Alan
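
To make the board-file idea concrete, here is a minimal sketch of such a driver. The board-file format ("<board> <config>" per line), the directory layout, and the NuttX checkout path are assumptions for illustration; the real raspi-nuttx-farm scripts may differ.

    #!/usr/bin/env python3
    # Hypothetical farm driver: read board files and build each configuration.
    # The "boards/*.board" layout and one "<board> <config>" pair per line
    # are assumptions for illustration only.
    import pathlib
    import subprocess

    NUTTX = pathlib.Path.home() / 'nuttx'   # assumed NuttX checkout

    def build(board, config):
        """Configure and build one board:config pair; return True on success."""
        subprocess.run(['make', 'distclean'], cwd=NUTTX)
        cfg = subprocess.run(['./tools/configure.sh', board + ':' + config],
                             cwd=NUTTX)
        if cfg.returncode != 0:
            return False
        return subprocess.run(['make', '-j4'], cwd=NUTTX).returncode == 0

    for board_file in sorted(pathlib.Path('boards').glob('*.board')):
        for line in board_file.read_text().splitlines():
            line = line.strip()
            if line and not line.startswith('#'):
                board, config = line.split()
                print(board, config, 'OK' if build(board, config) else 'FAIL')

Each Raspberry Pi in the farm can then run the same driver over its own subset of board files.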


Re: [DISCUSS - NuttX Workflow]

Posted by David Sidrane <da...@apache.org>.
Hi Alan,

Sorry if my intent was misunderstood. I am merely stating facts on where we are and how we got there. I am not assigning blame. I am not forcing anything; I am giving some examples of how we can make the project complete and better. We can use all of it, some of it, or none of it. That is a group decision.

Also, please do fill us in on where we can see the SW CI & HW CI you mentioned. Do you have links? Maybe we can use them now.

Again Sorry!

David


Re: [DISCUSS - NuttX Workflow]

Posted by Alan Carvalho de Assis <ac...@gmail.com>.
Hi David,

On 12/20/19, David Sidrane <da...@apache.org> wrote:
> Hi Nathan,
>
>
> From the proposal
>
> "Community
>
> NuttX has a large, active community.  Communication is via a Google group at
> https://groups.google.com/forum/#!forum/nuttx where there are 395 members as
> of this writing.  Code is currently maintained at Bitbucket.org at
> https://bitbucket.org/nuttx/.  Other communications are through Bitbucket
> issues and also via Slack for focused, interactive discussions."
>
>
>> Many users are only using released code.
>
> Can we ask the 395 members?
>
> I can only share my experience with NuttX since I began working on the
> project in 2012 for multiple companies.
>
> Historically (based on my time on the project), releases were build tested -
> by this I mean that the configurations were updated, and that created a
> set of "Build Test Vectors" (BTV). Consider the number of permutations,
> judging solely by the size of
> (http://nuttx.org/doku.php?id=documentation:configvars) with its 95,338
> CONFIG_* hits. Yes, there are duplicates on the page, and dependencies. This
> is just meant to give a number of bits....
>
> The total space is very large.
>
> The BTV coverage of that space was very sparse.
>
> IIRC Greg gave the build testing task a day of time. It was repeated after
> errors were found.  I am not aware of any other testing. Are you?
>
> There were no Release Candidate (RC), alpha, or beta tests that ran this
> code on real systems, and very few, if any, Run Test Vectors (RTV) - I
> have never seen a test report - has anyone?
>
> One way to look at this is Sporadic Integration (SI) with limited BTV and
> minimal RTV.  Total Test Vector Coverage: TTVC = BTV + RTV.  The ROI of this
> way of working, from a reliability perspective, was and is very small.
>
> A herculean effort on Greg's part with little return: we released code with
> many significant and critical errors in it. See the ReleaseNotes and the
> commit log.
>
> Over the years Greg referred to TRUNK (yes, it was on SVN) and master as his
> "own sandbox", stating it should not be considered stable or buildable. This
> is evident in the commit log.
>

Please stop focusing on the people (Greg) and let's talk about the workflow.
We are here to discuss how we can improve the process; we are not
talking about throwing away the NuttX build system and moving to PX4.

You are picturing something that is not true.

We have issues, as FreeRTOS, Mbed, and Zephyr also have. But it is not
Greg's or the build system's fault.

Please, stop! It is disgusting!

> I have personally never used a release from a tarball. Given the above, why
> would I? It is less stable than master at TC = N
> (https://www.electronics-tutorials.ws/rc/rc_1.html) where N is some number
> of days after a release. Unfortunately, based on the current practices (a
> very unprofessional workflow), N is also dictated by when apps and nuttx
> actually build for a given target's set of BTV.
>

It is not "unprofessional"; it was what we could do based on our
hardware limitations.

> With the tools and resources that exist in our world today, quite frankly:
> this is unacceptable and an embarrassment.
>

Oh my Gosh! Please don't do it.


> I suspect this is why there is a Tizen. The modern era - gets it.
> (Disclaimer: I am an old dog - I am learning to get it.)
>

Tizen exists because companies want to have control.
This is the same logic by which Red Hat and others maintain their own
Linux kernel trees themselves.

> --- Disclaimer ---
>
> In the following, I am not bragging about PX4 or selling tools; I am
> merely trying to share our experiences for the betterment of NuttX.
>
> From what I understand, PX4 has the most instances of NuttX running on real
> HW in the world - over 300K. (I welcome other users to share their numbers.)
>
> PX4's TTVC is still limited, but much, much greater than NuttX's.
>
> We use continuous integration (CI) on NuttX in PX4 on every commit to PRs.
>
> 	C/C++ CI / build (push) Successful in 3m
> 	Compile MacOS Pending — This commit is being built
> 	Compile All Boards — This commit looks good
> 	Hardware Test — This commit looks good
> 	SITL Tests — This commit looks good
> 	SITL Tests (code coverage) — This commit looks good
> 	ci/circleci — Your tests passed on CircleCI!
> 	continuous-integration/appveyor/pr — AppVeyor build succeeded
> 	continuous-integration/jenkins/pr-head — This commit looks good
>
>
> We run tests on HW.
>
> http://ci.px4.io:8080/blue/organizations/jenkins/PX4_misc%2FFirmware-hardware/detail/pr-mag-str-preflt/1/pipeline
>
> I say limited because of the set of archs we use and the way we configure
> the OS.
>
> I believe this to be true of all users.
>
> The benefit of a community is that it is the sum of all TTVC that finds the
> problems and fixes them.
>
> Why not maximize TTVC - it will have a huge ROI and it is free:
>
> PX4 will contribute all that we have. We just need a temporally
> consistent build. Yeah, he is on the submodule thing AGAIN :)
>

To make a long story short: we already have solutions for SW and HW CI.

Besides the buildbot (https://buildbot.net) that was implemented and
tested by Fabio Balzano, Xiaomi also has a build test for NuttX.

At the end of the day, it is not only Greg testing the system; we all are
testing it as well.

Don't try to push PX4 down our throats; it will not work this way.
Let's keep the Apache way, it is a democracy!

BR,

Alan

Re: [DISCUSS - NuttX Workflow]

Posted by David Sidrane <da...@apache.org>.
Hi Nathan,

On 2019/12/20 02:51:56, Nathan Hartman <ha...@gmail.com> wrote: 
> On Thu, Dec 19, 2019 at 6:24 PM Gregory Nutt <sp...@gmail.com> wrote:
> > >> ] A bad build system change can cause serious problems for a lot of
> people around the world.  A bad change in the core OS can destroy the good
> reputation of the OS.
> > > Why is this the case? Users should not be using unreleased code or be
> encouraged to use it. If they are, one solution is to make more frequent
> releases.
> > I don't think that the number of releases is the factor.  It is time in
> > people's hands.  Subtle corruption of OS real-time behavior is not easily
> > tested.   You normally have to specially instrument the software and
> > set up a special test environment, perhaps with a logic analyzer, to detect
> > these errors.  Errors in the core OS can persist for months and, in at
> > least one case I am aware of, years, until someone sets up the correct
> > instrumented test.
> 
> And:
> 
> On Thu, Dec 19, 2019 at 4:20 PM Justin Mclean <ju...@classsoftware.com>
> wrote:
> > > ] A bad build system change can cause serious problems for a lot of
> people around the world.  A bad change in the core OS can destroy the good
> reputation of the OS.
> >
> > Why is this the case? Users should not be using unreleased code or be
> encouraged to use it. If they are, one solution is to make more frequent
> releases.
> 
> Many users are only using released code. However, whatever is in "master"
> eventually gets released. So if problems creep in unnoticed, downstream
> users will be affected. It is only delayed.
> 
> I can personally attest that those kinds of errors are extremely difficult
> to detect and trace. It does require a special setup with logic analyzer or
> oscilloscope, and sometimes other tools, not to mention a whole setup to
> produce the right stimuli, several pieces of software that may have to be
> written specifically for the test....
> 
> I have been wracking my brain on and off thinking about how we could set up
> an automated test system to find errors related to timing etc.
> Unfortunately unlike ordinary software for which you can write an automated
> test suite, this sort of embedded RTOS will need specialized hardware to
> conduct the tests. That's a subject for another thread and I don't know if
> now is the time, but I will post my thoughts eventually.
> 
> Nathan
> 

From the proposal

"Community

NuttX has a large, active community.  Communication is via a Google group at https://groups.google.com/forum/#!forum/nuttx where there are 395 members as of this writing.  Code is currently maintained at Bitbucket.org at https://bitbucket.org/nuttx/.  Other communications are through Bitbucket issues and also via Slack for focused, interactive discussions."


> Many users are only using released code.

Can we ask the 395 members?

I can only share my experience with NuttX since I began working on the project in 2012 for multiple companies.

Historically (based on my time on the project), releases were build tested - by this I mean that the configurations were updated and thus created a set of "Build Test Vectors" (BTV). Consider the number of permutations: the configuration variables page (http://nuttx.org/doku.php?id=documentation:configvars) alone shows 95,338 CONFIG_* hits. Yes, there are duplicates on the page and there are dependencies; this is just meant to give a sense of the number of bits....

The total space is very large.
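
A rough way to gauge it - a sketch only, assuming a nuttx checkout, and just counting Kconfig symbol declarations rather than any official metric:

# count distinct CONFIG_* symbols declared in Kconfig files; each one is
# roughly one more dimension of the configuration space
grep -rhoE '^(config|menuconfig)[[:space:]]+[A-Z0-9_]+' --include='Kconfig*' . | awk '{print $2}' | sort -u | wc -l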

The BTV space had very sparse coverage.

IIRC Greg gave the build testing task a day of time. It was repeated after errors were found.  I am not aware of any other testing. Are you?

There were no Release Candidate (RC), alpha, or beta tests that ran this code on real systems, and very few, if any, Run Test Vectors (RTV) - I have never seen a test report - has anyone?

One way to look at this is Sporadic Integration (SI) with limited BTV and minimal RTV.  Total Test Vector Coverage: TTVC = BTV + RTV.  The ROI of this way of working, from a reliability perspective, was and is very small.

A herculean effort on Greg's part with little return: We released code with many significant and critical errors in it. See the ReleaseNotes and the commit log.

Over the years Greg referred to TRUNK (yes it was on SVN) and master as his "own sandbox", stating it should not be considered stable or buildable. This is evident in the commit log.

I have personally never used a release from a tarball. Given the above why would I? It is less stable than master at TC = N (https://www.electronics-tutorials.ws/rc/rc_1.html) where N is some number of days after a release. Unfortunately, based on the current practices (a very unprofessional workflow), N is also dictated by when apps and nuttx are actually building for a given target's set of BTV.

With the tools and resources that exist in our work today, quite frankly: this is unacceptable and an embarrassment.

I suspect this is why there is a Tizen. The modern era - gets it. (Disclaimer I am an old dog - I am learning to get it)

--- Disclaimer ---

In the following, I am not bragging about PX4 or selling tools; I am merely trying to share our experiences for the betterment of NuttX.
 
From what I understand, PX4 has the most instances of NuttX running on real HW in the world. Over 300K. (I welcome other users to share their numbers)

PX4's Total TTVC is still limited, but much, much greater than NuttX. 

We use Continuous Integration (CI) on NuttX in PX4 on every commit and PR.

	C/C++ CI / build (push) Successful in 3m
	Compile MacOS Pending — This commit is being built
	Compile All Boards — This commit looks good
	Hardware Test — This commit looks good
	SITL Tests — This commit looks good
	SITL Tests (code coverage) — This commit looks good
	ci/circleci — Your tests passed on CircleCI!
	continuous-integration/appveyor/pr — AppVeyor build succeeded
	continuous-integration/jenkins/pr-head — This commit looks good
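
A NuttX analogue of the "Compile All Boards" check could start small. A minimal sketch, assuming the standard nuttx tree with tools/configure.sh (the configurations named here are examples, and the exact board/config naming varies by NuttX version):

	# build a few representative configurations; fail the check on the
	# first broken build
	for cfg in sim/nsh stm32f4discovery/nsh; do
	    ./tools/configure.sh "$cfg" || exit 1
	    make -j"$(nproc)" || exit 1
	    make distclean
	done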


We run tests on HW.

http://ci.px4.io:8080/blue/organizations/jenkins/PX4_misc%2FFirmware-hardware/detail/pr-mag-str-preflt/1/pipeline

I say limited because of the set of architectures we use and the way we configure the OS.

I believe this to be true of all users. 

The benefit of a community is that it is the sum of all TTVC that finds the problems and fixes them.

Why not maximize TTVC - it will have a huge ROI and it is free:

PX4 will contribute all that we have. We just need a temporally consistent build. Yeah, he is on the submodule thing AGAIN :)


David


Re: [DISCUSS - NuttX Workflow]

Posted by Justin Mclean <ju...@classsoftware.com>.
Hi,

> I don't think that the number of releases is the factor.  It is time in people's hands.  Subtle corruption of OS real-time behavior is not easily tested.   You normally have to specially instrument the software and set up a special test environment, perhaps with a logic analyzer, to detect these errors.  Errors in the core OS can persist for months and, in at least one case I am aware of, years, until someone sets up the correct instrumented test.

Isn’t more reviewing / testing done at release time? I’m curious why the project thinks this way. If you want the project to grow beyond its current contributors you may need to change this. Anything set up can be changed, so perhaps it is best to revisit this discussion later, once the repos are moved, the web site is set up (anyone?), and people are contributing patches here.

Thanks,
Justin

Re: [DISCUSS - NuttX Workflow]

Posted by Nathan Hartman <ha...@gmail.com>.
On Thu, Dec 19, 2019 at 6:24 PM Gregory Nutt <sp...@gmail.com> wrote:
> >> ] A bad build system change can cause serious problems for a lot of
people around the world.  A bad change in the core OS can destroy the good
reputation of the OS.
> > Why is this the case? Users should not be using unreleased code or be
encouraged to use it. If they are, one solution is to make more frequent
releases.
> I don't think that the number of releases is the factor.  It is time in
> people's hands.  Subtle corruption of OS real-time behavior is not easily
> tested.   You normally have to specially instrument the software and
> set up a special test environment, perhaps with a logic analyzer, to detect
> these errors.  Errors in the core OS can persist for months and, in at
> least one case I am aware of, years, until someone sets up the correct
> instrumented test.

And:

On Thu, Dec 19, 2019 at 4:20 PM Justin Mclean <ju...@classsoftware.com>
wrote:
> > ] A bad build system change can cause serious problems for a lot of
people around the world.  A bad change in the core OS can destroy the good
reputation of the OS.
>
> Why is this the case? Users should not be using unreleased code or be
encouraged to use it. If they are, one solution is to make more frequent
releases.

Many users are only using released code. However, whatever is in "master"
eventually gets released. So if problems creep in unnoticed, downstream
users will be affected. It is only delayed.

I can personally attest that those kinds of errors are extremely difficult
to detect and trace. It does require a special setup with logic analyzer or
oscilloscope, and sometimes other tools, not to mention a whole setup to
produce the right stimuli, several pieces of software that may have to be
written specifically for the test....

I have been wracking my brain on and off thinking about how we could set up
an automated test system to find errors related to timing etc.
Unfortunately unlike ordinary software for which you can write an automated
test suite, this sort of embedded RTOS will need specialized hardware to
conduct the tests. That's a subject for another thread and I don't know if
now is the time, but I will post my thoughts eventually.

Nathan

Re: [DISCUSS - NuttX Workflow]

Posted by Gregory Nutt <sp...@gmail.com>.
>> ] A bad build system change can cause serious problems for a lot of people around the world.  A bad change in the core OS can destroy the good reputation of the OS.
> Why is this the case? Users should not be using unreleased code or be encouraged to use it. If they are, one solution is to make more frequent releases.
I don't think that the number of releases is the factor.  It is time in 
people's hands.  Subtle corruption of OS real-time behavior is not easily 
tested.   You normally have to specially instrument the software and 
set up a special test environment, perhaps with a logic analyzer, to detect 
these errors.  Errors in the core OS can persist for months and, in at 
least one case I am aware of, years, until someone sets up the correct 
instrumented test.



Re: [DISCUSS - NuttX Workflow]

Posted by Justin Mclean <ju...@classsoftware.com>.
Hi,

> ] A bad build system change can cause serious problems for a lot of people around the world.  A bad change in the core OS can destroy the good reputation of the OS.

Why is this the case? Users should not be using unreleased code or be encouraged to use it. If they are, one solution is to make more frequent releases.
 
Thanks,
Justin

Re: [DISCUSS - NuttX Workflow]

Posted by Gregory Nutt <sp...@gmail.com>.
>>> Changes that affect the build system should require three +1 binding
>>> votes and no vetoes from PMC members
> Other projects that I know of that have tried an approach like this seem to have a lot of difficulty getting those 3 +1 votes. This slows down development or, worse, forms groups of people that just all +1 each other’s patches without doing a real review. This project may be different and if it’s not working you can change it.
Sometimes it is good to slow down if there are modifications proposed to 
critical parts of the system.  A bad build system change can cause 
serious problems for a lot of people around the world.  A bad change in 
the core OS can destroy the good reputation of the OS.
> I see what your concern is (not breaking the build) but with any CTR (commit-then-review) system any commit can be easily reverted and you have known working points (releases) that users can use. How does a system like this help the users of NuttX?

It is true that build errors are usually found quickly.  Usually you 
will hear about it in a day or so.  But it is not good public relations 
to break people's builds; it is unprofessional.  A good qualification 
environment should build on several platforms first:  Linux, Windows 
(native, Cygwin, MSYS2), macOS, FreeBSD/OpenBSD, and others (those are the 
main platforms).  That would minimize the risk of those embarrassments.

Errors in the core OS are much, much more subtle.  You make changes that 
subtly damage scheduling, prioritization, interlocking, priority 
inheritance, or real-time performance and do not catch the problem for 
months.  That is because the effect is subtle; the OS just becomes a crappy 
OS for a few release cycles.  That is the big one that I worry most about, 
and a slow-down in the workflow for changes that risk those core OS 
features would be well worth it.

Greg



Re: [DISCUSS - NuttX Workflow]

Posted by Justin Mclean <ju...@classsoftware.com>.
Hi,

>> Changes that affect the build system should require three +1 binding
>> votes and no vetoes from PMC members 

Other projects that I know of that have tried an approach like this seem to have a lot of difficulty getting those 3 +1 votes. This slows down development or, worse, forms groups of people that just all +1 each other’s patches without doing a real review. This project may be different and if it’s not working you can change it.

My bigger concern is that this may also discourage new people from taking part in the project and set the committer bar too high. But each project is free to set that where they want.

It also seems a little complex: with different numbers of votes required for different areas, people are likely to make mistakes. What happens then?

I see what your concern is (not breaking the build) but with any CTR (commit-then-review) system any commit can be easily reverted and you have known working points (releases) that users can use. How does a system like this help the users of NuttX?

Thanks,
Justin

Re: [DISCUSS - NuttX Workflow]

Posted by David Sidrane <da...@apache.org>.
Hi Nathan,

You Rock!

On 2019/12/20 05:31:37, Nathan Hartman <ha...@gmail.com> wrote: 
> On Thu, Dec 19, 2019 at 4:40 PM David Sidrane <Da...@nscdg.com> wrote:
> > > Changes to code in MCU architectural support, board support, or features
> > > in the periphery of the OS should be at the discretion of the
> > > committer. Committers should use their best judgment and are
> > > encouraged to discuss anything they're not sure about. But these
> > > changes don't require as much oversight. These changes are much more
> > > limited in their exposure. They are usually developed by someone to
> > > scratch their own itch. Also these are allowed to be feature-
> > > incomplete: e.g., it is okay to have partial board support.
> >
> > I do not agree. MCU and board changes need better scrutiny for many reasons,
> > here are some:
> >
> > 1) Proper patterns applied. This gets missed a lot - consider the "ifdef
> > rash" and missed inviolables
> > 2) Proper scoping - this just happened in imxrt
> > 3) They still can break the build.
> > 4) They still can ruin the OS's reputation.
> > 5) This is where HW background is needed.
> >
> > You may want to consider separate levels of scrutiny for MCUs than for boards.
> 
> Acknowledged.
> 
> The issue with boards and MCUs is that the pool of contributors for
> these areas is much thinner than the pool of contributors for some of
> the more heavily exercised areas.
> 
> Now, I can see where a company that produces a board -- and wants to
> provide full NuttX support for it in order to sell those boards --
> could pay engineers to develop FULL tested and characterized support
> for the board and all its features.
> 
> But, if it's a volunteer like me, I would likely implement support for
> the parts I'm going to use. It's a "scratch your own itch" type of
> development. Although this is "incomplete," I would greatly prefer to
> have it contributed to NuttX, over rejecting it due to incompleteness,
> because it reduces effort for the next person who wants to use that
> board, even if they need to implement support for a missing feature.
> It's better to start with partial support and implement the extra part
> you need, then to start from zero.
> 
> Now, I agree that we need to check for proper patterns, proper
> scoping, and unbroken build. But we can't necessarily have PMC members
> testing changes to boards because they may not have the board. Also,
> testing changes may require a whole hardware test setup with
> oscilloscope / logic analyzer and we cannot expect PMC members to
> spend that much time testing a change. It will never get voted on and
> people will become discouraged and stop contributing. So, we must have
> a certain amount of trust that the person contributing changes is
> doing so in good faith. Perhaps we need a PMC vote, or, say, two non-
> PMC committers to agree that changes to board support follow the code
> conventions and rules, but their +1 doesn't have to imply that they
> have the hardware and actually tested the change. As long as it
> follows basic rules and seems legit, it should be committed.
> 

Right!  Boards are secondary - your itch, your blood if you scratch it wrong.

> As for protecting the OS's reputation, I think that each board should
> have some sort of "Status" score:
>

Yes! +(1.0/0.0) more than 18 billion :)
 
> * Boards with complete support that are widely exercised and known
>   to work correctly out of the box could be given a "Tier 1" status.
>   Recommended if you just want to focus on your application and have
>   all the hardware details taken care of for you.
> 
> * Boards with less support and testing could be categorized as
>   "Tier 2" status, meaning that NuttX's support for them might be
>   fine for some applications but some board features may be
>   unsupported, incomplete, or not well-exercised. Recommended for
>   the hardware savvy who don't mind if they have to fix a few issues
>   to finish their project.
> 
> * "Tier 3" could mean implementation in progress and many features
>   are missing or buggy. Recommended for those who want to hack on
>   support for the board.
> 
> How to arrive at such a status score? This could be problematic, since
> not all of us own every single board in NuttX, and even if we did, we
> could never volunteer the time that it would take to characterize and
> test every board and all its features with NuttX. So we may have to
> rely on the word of the implementer and community members who happen
> to use a particular board, or we could consider factors like the
> number of contributors who have worked on a board, the number of bugs
> reported, the number of bugs fixed, time between report and fix, and
> word of mouth from the community. Just a thought.
> 

We build this - and give it to the world for free.

http://ci.px4.io:8080/blue/organizations/jenkins/PX4_misc%2FFirmware-hardware/detail/master/1142/pipeline

You also _give_ them the design files (BOM, etc.) to build this:

https://docs.google.com/spreadsheets/d/1EdbT_deF-nqZ5bysBFqwQLaOa1irG3gsgE6za5GKap4/edit?folder=1cUEhrAQh-72D5Sgy5sRmXpoTqwqj6A1D#gid=0&range=1:1

Then we write real tests for each driver, and build configs with them.

The rest will fall into place.

> Regarding boards, I'd like to point out that at my job, all boards are
> custom boards designed by us. So it is entirely up to us to support a
> board. I am guessing that most companies that produce products with
> NuttX are doing so with custom boards anyway. I don't think they're
> incorporating LaunchPads or Nucleo boards into commercial products.
> But that's just my guess.

Then you and your company will want one of these!

> 
> As far as MCUs go, we could do something similar with "Tiers" (or
> other terminology, if you prefer). MCUs might be more heavily
> exercised than boards, because the same (or similar) MCU may appear on
> several different boards. Also MCUs in the same family tend to share
> many common themes with just minor differences. I believe that the
> NuttX website already contains some documentation about the caveats of
> various MCUs.
> 
> Again, because the pool of contributors for a particular MCU family
> may be thinner than for other parts of the system, we may have to
> check that basics like coding standard are followed, but +1 doesn't
> have to imply that you have the MCU and conducted in-depth tests
> before you voted. Again, if the change seems legit and the coding
> rules are followed, we will have to trust that changes are contributed
> in good faith.
> 
I do not agree on this:

To translate to ASF lingo, "one to many" meant that:

  Very skilled contributors, while scratching their own itch, can make others' itches bleed in ways they do not realize.

This is the same pattern that causes rejection of contributions to the OS. "Myopic" is the word Greg would use.

What is good for the OS is good for the Project as a whole - as stated by Greg on the not-yet-vaporized Slack:

He does not care about hardware. It is not interesting to him. His goal is making a perfect operating system.

I will call it "world class" using best practices in both design and implementation. 

Our goal as a community should be using best practices in both design and implementation of the PROJECT - the code, the tooling, the build system, testing, and HW support.

We should ask the 395: if there were no hardware support, would you use NuttX?


> Thoughts?
> 
> Cheers,
> Nathan
> 
David

Re: [DISCUSS - NuttX Workflow]

Posted by Gregory Nutt <sp...@gmail.com>.
>> You may want to consider separate levels of scrutiny for MCUs than for boards.
> Acknowledged.
>
> The issue with boards and MCUs is that the pool of contributors for
> these areas is much thinner than the pool of contributors for some of
> the more heavily exercised areas.

Depends on the popularity of the MCU.  The STMicro family has a large 
base of contributors.  i.MX RT has significant support now.  Others, much 
less.

In businesses that I worked in, QA talked about "exposure" to defects.  
That is basically what percentage of users would experience the 
problem.   The "exposure" criteria would be applied primarily for new 
bugs found just before a product release.  Bugs with major exposure 
would hold up the product release, others would slide through and get 
caught next time.

I think the concept of "exposure" applies here too.  Core OS bugs can 
affect everyone but may affect no one if they apply only to an 
obscure combination of configuration settings.  STMicro bugs might 
affect a large group of users (but never everyone).

Prioritizing bugs by sub-system is a crude but simple metric for these 
kinds of decisions.  Something like exposure would be better, but 
difficult to quantify.




Re: [DISCUSS - NuttX Workflow]

Posted by Nathan Hartman <ha...@gmail.com>.
On Thu, Dec 19, 2019 at 4:40 PM David Sidrane <Da...@nscdg.com> wrote:
> > Changes to code in MCU architectural support, board support, or features
> > in the periphery of the OS should be at the discretion of the
> > committer. Committers should use their best judgment and are
> > encouraged to discuss anything they're not sure about. But these
> > changes don't require as much oversight. These changes are much more
> > limited in their exposure. They are usually developed by someone to
> > scratch their own itch. Also these are allowed to be feature-
> > incomplete: e.g., it is okay to have partial board support.
>
> I do not agree. MCU and board changes need better scrutiny for many reasons,
> here are some:
>
> 1) Proper patterns applied. This gets missed a lot - consider the "ifdef
> rash" and missed inviolables
> 2) Proper scoping - this just happened in imxrt
> 3) They still can break the build.
> 4) They still can ruin the OS's reputation.
> 5) This is where HW background is needed.
>
> You may want to consider separate levels of scrutiny for MCUs than for boards.

Acknowledged.

The issue with boards and MCUs is that the pool of contributors for
these areas is much thinner than the pool of contributors for some of
the more heavily exercised areas.

Now, I can see where a company that produces a board -- and wants to
provide full NuttX support for it in order to sell those boards --
could pay engineers to develop FULL tested and characterized support
for the board and all its features.

But, if it's a volunteer like me, I would likely implement support for
the parts I'm going to use. It's a "scratch your own itch" type of
development. Although this is "incomplete," I would greatly prefer to
have it contributed to NuttX, over rejecting it due to incompleteness,
because it reduces effort for the next person who wants to use that
board, even if they need to implement support for a missing feature.
It's better to start with partial support and implement the extra part
you need, then to start from zero.

Now, I agree that we need to check for proper patterns, proper
scoping, and unbroken build. But we can't necessarily have PMC members
testing changes to boards because they may not have the board. Also,
testing changes may require a whole hardware test setup with
oscilloscope / logic analyzer and we cannot expect PMC members to
spend that much time testing a change. It will never get voted on and
people will become discouraged and stop contributing. So, we must have
a certain amount of trust that the person contributing changes is
doing so in good faith. Perhaps we need a PMC vote, or, say, two non-
PMC committers to agree that changes to board support follow the code
conventions and rules, but their +1 doesn't have to imply that they
have the hardware and actually tested the change. As long as it
follows basic rules and seems legit, it should be committed.

As for protecting the OS's reputation, I think that each board should
have some sort of "Status" score:

* Boards with complete support that are widely exercised and known
  to work correctly out of the box could be given a "Tier 1" status.
  Recommended if you just want to focus on your application and have
  all the hardware details taken care of for you.

* Boards with less support and testing could be categorized as
  "Tier 2" status, meaning that NuttX's support for them might be
  fine for some applications but some board features may be
  unsupported, incomplete, or not well-exercised. Recommended for
  the hardware savvy who don't mind if they have to fix a few issues
  to finish their project.

* "Tier 3" could mean implementation in progress and many features
  are missing or buggy. Recommended for those who want to hack on
  support for the board.

How to arrive at such a status score? This could be problematic, since
not all of us own every single board in NuttX, and even if we did, we
could never volunteer the time that it would take to characterize and
test every board and all its features with NuttX. So we may have to
rely on the word of the implementer and community members who happen
to use a particular board, or we could consider factors like the
number of contributors who have worked on a board, the number of bugs
reported, the number of bugs fixed, time between report and fix, and
word of mouth from the community. Just a thought.

Regarding boards, I'd like to point out that at my job, all boards are
custom boards designed by us. So it is entirely up to us to support a
board. I am guessing that most companies that produce products with
NuttX are doing so with custom boards anyway. I don't think they're
incorporating LaunchPads or Nucleo boards into commercial products.
But that's just my guess.

As far as MCUs go, we could do something similar with "Tiers" (or
other terminology, if you prefer). MCUs might be more heavily
exercised than boards, because the same (or similar) MCU may appear on
several different boards. Also MCUs in the same family tend to share
many common themes with just minor differences. I believe that the
NuttX website already contains some documentation about the caveats of
various MCUs.

Again, because the pool of contributors for a particular MCU family
may be thinner than for other parts of the system, we may have to
check that basics like coding standard are followed, but +1 doesn't
have to imply that you have the MCU and conducted in-depth tests
before you voted. Again, if the change seems legit and the coding
rules are followed, we will have to trust that changes are contributed
in good faith.

Thoughts?

Cheers,
Nathan

RE: [DISCUSS - NuttX Workflow]

Posted by David Sidrane <Da...@nscdg.com>.
This reads like a past Slack discussion that ignored HW.
Is that really what an embedded system OS should do?

> Changes to code in MCU architectural support, board support, or features
> in the periphery of the OS should be at the discretion of the
> committer. Committers should use their best judgment and are
> encouraged to discuss anything they're not sure about. But these
> changes don't require as much oversight. These changes are much more
> limited in their exposure. They are usually developed by someone to
> scratch their own itch. Also these are allowed to be feature-
> incomplete: e.g., it is okay to have partial board support.

I do not agree. MCU and board changes need better scrutiny for many reasons,
here are some:

1) Proper patterns applied. This gets missed a lot - consider the "ifdef
rash" and missed inviolables
2) Proper scoping - this just happened in imxrt
3) They still can break the build.
4) They still can ruin the OS's reputation.
5) This is where HW background is needed.

You may want to consider separate levels of scrutiny for MCUs than for boards.

One is a many-to-1 relation.
One is a 1-to-1.

David

-----Original Message-----
From: Gregory Nutt [mailto:spudaneco@gmail.com]
Sent: Thursday, December 19, 2019 10:33 AM
To: dev@nuttx.apache.org
Subject: Re: [DISCUSS - NuttX Workflow]


>> I think only 5 emails in the whole list really address these functional
>> requirements.
> Let me add a 6th... (Without mentioning any "stupid" SCMs.)
>
> One thing missing from our earlier discussions is to decide how many
> approvals a change requires. I think this varies by area of the code
> being changed.
>
> As a starting point for further discussion, I suggest something along
> these lines:
>
> Changes that affect the build system should require three +1 binding
> votes and no vetoes from PMC members PLUS at least one report that
> NuttX builds successfully on each supported platform: Windows, Mac,
> Unix, and no reports of breakage caused by the change. Builds on
> Windows using a Unix compatibility layer would be considered Unix for
> this purpose. Any member of the community should be able to report
> whether it builds successfully and on which platform. Between the
> submitter of the patch, PMC members, and testers, this means that at
> least 7 pairs of eyes look at any change to the build system. This
> high number is necessary because breakage there affects everyone and
> is very disruptive!
>
> Changes to code that affect the core of the OS should require three +1
> binding votes and no vetoes from PMC members and should be accompanied
> by some rationale or justification for the change. If applicable,
> supporting data should be provided, e.g., if it's supposed to improve
> performance, is this backed up by measurements?
>
> Changes to code in MCU architectural support, board support, or features
> in the periphery of the OS should be at the discretion of the
> committer. Committers should use their best judgment and are
> encouraged to discuss anything they're not sure about. But these
> changes don't require as much oversight. These changes are much more
> limited in their exposure. They are usually developed by someone to
> scratch their own itch. Also these are allowed to be feature-
> incomplete: e.g., it is okay to have partial board support.
>
> In the apps repository: Changes to code in core apps (such as NSH)
> should require two +1 binding votes and no vetoes.
>
> Changes to other non-core areas of apps are at the discretion of the
> committer.
>
> Notwithstanding all of the above, there is the concept of an "obvious
> fix." Any committer may fix things like obvious typos, misspellings,
> grammar mistakes in comments, code formatting violations, that do not
> change the behavior of the code, without the need for voting and
> approvals, etc. Committers are expected to exercise their best
> judgment here.
>
> It is expected that when someone votes +1 on a change, it means that:
>
> * They have studied the change
>
> * Verified that the change meets INVIOLABLES.
>
> * Verified that it does not break POSIX compatibility or OS
> architectural boundaries
>
> * Done testing if feasible
>
> * Weighed any input from the community
>
> Please remember, the above are NOT rules, the above is a starting
> point for discussion as we hash out our requirements.
>
> Please participate, offer your thoughts, criticisms, etc.
>
> Thanks,
> Nathan

Those sound like good rules of thumb to me.  Certainly there are parts
of the OS that require more care and have broader impact than others.

Re: [DISCUSS - NuttX Workflow]

Posted by Gregory Nutt <sp...@gmail.com>.
>> I think only 5 emails in the whole list really address these functional
>> requirements.
> Let me add a 6th... (Without mentioning any "stupid" SCMs.)
>
> One thing missing from our earlier discussions is to decide how many
> approvals a change requires. I think this varies by area of the code
> being changed.
>
> As a starting point for further discussion, I suggest something along
> these lines:
>
> Changes that affect the build system should require three +1 binding
> votes and no vetoes from PMC members PLUS at least one report that
> NuttX builds successfully on each supported platform: Windows, Mac,
> Unix, and no reports of breakage caused by the change. Builds on
> Windows using a Unix compatibility layer would be considered Unix for
> this purpose. Any member of the community should be able to report
> whether it builds successfully and on which platform. Between the
> submitter of the patch, PMC members, and testers, this means that at
> least 7 pairs of eyes look at any change to the build system. This
> high number is necessary because breakage there affects everyone and
> is very disruptive!
>
> Changes to code that affect the core of the OS should require three +1
> binding votes and no vetoes from PMC members and should be accompanied
> by some rationale or justification for the change. If applicable,
> supporting data should be provided, e.g., if it's supposed to improve
> performance, is this backed up by measurements?
>
> Changes to code in MCU architectural support, board support, or features
> in the periphery of the OS should be at the discretion of the
> committer. Committers should use their best judgment and are
> encouraged to discuss anything they're not sure about. But these
> changes don't require as much oversight. These changes are much more
> limited in their exposure. They are usually developed by someone to
> scratch their own itch. Also these are allowed to be feature-
> incomplete: e.g., it is okay to have partial board support.
>
> In the apps repository: Changes to code in core apps (such as NSH)
> should require two +1 binding votes and no vetoes.
>
> Changes to other non-core areas of apps are at the discretion of the
> committer.
>
> Notwithstanding all of the above, there is the concept of an "obvious
> fix." Any committer may fix things like obvious typos, misspellings,
> grammar mistakes in comments, code formatting violations, that do not
> change the behavior of the code, without the need for voting and
> approvals, etc. Committers are expected to exercise their best
> judgment here.
>
> It is expected that when someone votes +1 on a change, it means that:
>
> * They have studied the change
>
> * Verified that the change meets INVIOLABLES.
>
> * Verified that it does not break POSIX compatibility or OS
> architectural boundaries
>
> * Done testing if feasible
>
> * Weighed any input from the community
>
> Please remember, the above are NOT rules, the above is a starting
> point for discussion as we hash out our requirements.
>
> Please participate, offer your thoughts, criticisms, etc.
>
> Thanks,
> Nathan

Those sound like good rules of thumb to me.  Certainly there are parts 
of the OS that require more care and have broader impact than others.



Re: [DISCUSS - NuttX Workflow]

Posted by Nathan Hartman <ha...@gmail.com>.
On Thu, Dec 19, 2019 at 8:30 AM Gregory Nutt <sp...@gmail.com> wrote:
> On Thu, Dec 19, 2019 at 3:32 AM Sebastien Lorquet <se...@lorquet.fr> wrote:
>> But the endless list of complex git commands with additional options is probably
>> a blocker for many other people too.
>>
>> I don't even want to read it all.
>
> You and me both.  The near term objective of the PPMC is just to come up
> with a list -- maybe one page double spaced -- that just summarizes the
> steps that changes will undergo going from a patch (or PR) to being
> merged into master.  Should be pretty simple. These would be the
> "functional" requirements of the workflow.
>
> I think only 5 emails in the whole list really address these functional
> requirements.

Let me add a 6th... (Without mentioning any "stupid" SCMs.)

One thing missing from our earlier discussions is to decide how many
approvals a change requires. I think this varies by area of the code
being changed.

As a starting point for further discussion, I suggest something along
these lines:

Changes that affect the build system should require three +1 binding
votes and no vetoes from PMC members PLUS at least one report that
NuttX builds successfully on each supported platform: Windows, Mac,
Unix, and no reports of breakage caused by the change. Builds on
Windows using a Unix compatibility layer would be considered Unix for
this purpose. Any member of the community should be able to report
whether it builds successfully and on which platform. Between the
submitter of the patch, PMC members, and testers, this means that at
least 7 pairs of eyes look at any change to the build system. This
high number is necessary because breakage there affects everyone and
is very disruptive!

Changes to code that affect the core of the OS should require three +1
binding votes and no vetoes from PMC members and should be accompanied
by some rationale or justification for the change. If applicable,
supporting data should be provided, e.g., if it's supposed to improve
performance, is this backed up by measurements?

Changes to code in MCU architectural support, board support, or features
in the periphery of the OS should be at the discretion of the
committer. Committers should use their best judgment and are
encouraged to discuss anything they're not sure about. But these
changes don't require as much oversight. These changes are much more
limited in their exposure. They are usually developed by someone to
scratch their own itch. Also these are allowed to be feature-
incomplete: e.g., it is okay to have partial board support.

In the apps repository: Changes to code in core apps (such as NSH)
should require two +1 binding votes and no vetoes.

Changes to other non-core areas of apps are at the discretion of the
committer.

Notwithstanding all of the above, there is the concept of an "obvious
fix." Any committer may fix things like obvious typos, misspellings,
grammar mistakes in comments, code formatting violations, that do not
change the behavior of the code, without the need for voting and
approvals, etc. Committers are expected to exercise their best
judgment here.

It is expected that when someone votes +1 on a change, it means that:

* They have studied the change

* Verified that the change meets INVIOLABLES.

* Verified that it does not break POSIX compatibility or OS
architectural boundaries

* Done testing if feasible

* Weighed any input from the community

Please remember, the above are NOT rules, the above is a starting
point for discussion as we hash out our requirements.

Please participate, offer your thoughts, criticisms, etc.

Thanks,
Nathan

Re: [DISCUSS - NuttX Workflow]

Posted by Gregory Nutt <sp...@gmail.com>.
> Looks really complex to me, if any contributor has to master all of this
> perfectly to contribute officially.
>
> The submodule sync with these specific options is already too much.
>
> Do you really realize all that has to be memorized just for a hat repo?
>
>
> to put it another way: if you assure me that this hat repo is completely
> optional and that I will never ever have to use it, I'm okay. Let me use my two
> repos as usual and play with your hat submodules without annoying anyone else.
>
>
> But, if this workflow requires such a complex string of git commands including
> rebase anytime I have to push anything to the apps or nuttx repo, I don't want to
> do it.
>
>
> Again just my opinion.
>
> But the endless list of complex git commands with additional options is probably
> a blocker for many other people too.
>
> I don't even want to read it all.

You and me both.  The near term objective of the PPMC is just to come up 
with a list -- maybe one page double spaced -- that just summarizes the 
steps that changes will undergo going from a patch (or PR) to being 
merged into master.  Should be pretty simple. These would be the 
"functional" requirements of the workflow.

I think only 5 emails in the whole list really address these functional 
requirements.   The rest is all rambling git and GitHub talk that 
completely buries the goal to establish clean functional requirements.  
Details of the use of git, any special testing setups in git, and all of 
that is part of the implementation phase.  Mixing implementation and 
functional specification is always a disaster.

You should not have to be concerned now and should not be having these 
conversations but, like everything else, you have been swept into the 
chaos vortex.  Abandon hope, all ye who enter here.



Re: [DISCUSS - NuttX Workflow]

Posted by Sebastien Lorquet <se...@lorquet.fr>.
Looks really complex to me, if any contributor has to master all of this
perfectly to contribute officially.

The submodule sync with these specific options is already too much.

Do you really realize all that has to be memorized just for a hat repo?


To put it another way: if you assure me that this hat repo is completely
optional and that I will never ever have to use it, I'm okay. Let me use my two
repos as usual and play with your hat submodules without annoying anyone else.


But, if this workflow requires such a complex string of git commands including
rebase anytime I have to push anything to the apps or nuttx repo, I don't want to
do it.


Again just my opinion.

But the endless list of complex git commands with additional options is probably
a blocker for many other people too.

I don't even want to read it all.

Sebastien

On 18/12/2019 at 15:20, David Sidrane wrote:
>> What advantage, in fact, does the submodule method bring?
> See below
>
>> Even with a hat repository that contains two submodules (apps and nuttx),
>> you
>> will have to send separate pull requests for each submodule, right?
> Yes. But they commit in 1 atomic operation.
>
>
> Submodules 101
>
> This example is with write access on the repo - for committers
>
> git clone <url to knot> NuttX
> cd NuttX
> git checkout master
> git submodule sync --recursive && git submodule update --init --recursive
>
> git checkout -b master_add_tof_driver
>
> cd nuttx
> git checkout -b master_add_tof_driver
>
> # work and commit - rebase on self and remove dribble.
> git rebase -i master
> # reorder, squash, and fixup the commits (learn about mv-changes; it is your
> friend) - you will look organized.
>
> cd apps
> git checkout -b master_add_tof_driver
>
> # work and commit - rebase on self and remove cruft and noise.
> git rebase -i master
> # reorder, squash, and fixup the commits (learn about mv-changes; it is your
> friend) - you will look organized.
>
> #Build and test locally.
> ## AOK
>
> cd apps
> git push origin master_add_tof_driver
>
> cd nuttx
> git push origin master_add_tof_driver
>
> cd .. (NuttX)
> git add nuttx apps
> git commit -m "Update NuttX with TOF driver"
>
> git push origin master_add_tof_driver
>
> OK, so now (SHAs simplified to compare them)
>
> NuttX master SHA 0000 points to
>   \nuttx master SHA 2222
>   \apps master SHA 1111
>
> NuttX master_add_tof_driver SHA cccc
>   \nuttx master SHA aaa
>   \apps master SHA bbb
>
> merge PR from apps to master apps
> merge PR from nuttx to master nuttx
>
> NuttX master SHA 0000 points to (still builds and runs)
>   \nuttx master SHA 2222
>   \apps master SHA 1111
>
> But the master branches of the submodules are now at
>
>   \nuttx master SHA aaa
>   \apps master SHA bbb
>
>
> merge PR from NuttX to master NuttX (atomic replacement)
> NuttX master SHA zzzzz points to
>   \nuttx master SHA aaa
>   \apps master SHA bbb
>
>
>
> -----Original Message-----
> From: Sebastien Lorquet [mailto:sebastien@lorquet.fr]
> Sent: Wednesday, December 18, 2019 5:52 AM
> To: dev@nuttx.apache.org
> Subject: Re: [DISCUSS - NuttX Workflow]
>
> Wait,
>
> What advantage, in fact, does the submodule method bring?
>
> Even with a hat repository that contains two submodules (apps and nuttx),
> you
> will have to send separate pull requests for each submodule, right?
>
> Sebastien
>
On 18/12/2019 at 14:40, Gregory Nutt wrote:
>> On 12/18/2019 4:23 AM, David Sidrane wrote:
>> That is precisely what submodules do: submodules aggregate N
>> repositories on a single SHA.
>>>
>> The problem is: how to have an atomic checkout of the correct configuration
>> without a temporal shift?
>>>
>>> Please describe how you would do this. Give detailed steps.
>> I don't see any difference in versioning with submodules.  You have to
>> explicitly state the UUID you are using in the submodule (unless there is
>> a
>> GIT sub-module trick I don't know).
>>
>> So how would you checkout the correct configuration with sub-modules.
>> Seems
>> to me that it is the same issue.
>>
>> I would vote about 18 billion minus for this change.  But architecture
>> designs are not justified by blatant expediency.
>>
>> Let's not go this way.
>>
>>

RE: [DISCUSS - NuttX Workflow]

Posted by David Sidrane <Da...@nscdg.com>.
>What advantage, in fact, does the submodule method bring?
See below

>Even with a hat repository that contains two submodules (apps and nuttx),
>you
>will have to send separate pull requests for each submodule, right?

Yes. But they commit in 1 atomic operation.


Submodules 101

This example is with write access on the repo - for committers

git clone <url to knot> NuttX
cd NuttX
git checkout master
git submodule sync --recursive && git submodule update --init --recursive

git checkout -b master_add_tof_driver

cd nuttx
git checkout -b master_add_tof_driver

# work and commit - rebase on self and remove dribble.
git rebase -i master
# reorder, squash, and fixup the commits (learn about mv-changes; it is your
friend) - you will look organized.

cd apps
git checkout -b master_add_tof_driver

# work and commit - rebase on self and remove cruft and noise.
git rebase -i master
# reorder, squash, and fixup the commits (learn about mv-changes; it is your
friend) - you will look organized.

#Build and test locally.
## AOK

cd apps
git push origin master_add_tof_driver

cd nuttx
git push origin master_add_tof_driver

cd .. (NuttX)
git add nuttx apps
git commit -m "Update NuttX with TOF driver"

git push origin master_add_tof_driver

OK, so now (SHAs simplified to compare them)

NuttX master SHA 0000 points to
  \nuttx master SHA 2222
  \apps master SHA 1111

NuttX master_add_tof_driver SHA cccc
  \nuttx master SHA aaa
  \apps master SHA bbb

merge PR from apps to master apps
merge PR from nuttx to master nuttx

NuttX master SHA 0000 points to (still builds and runs)
  \nuttx master SHA 2222
  \apps master SHA 1111

But the master branches of the submodules are now at

  \nuttx master SHA aaa
  \apps master SHA bbb


merge PR from NuttX to master NuttX (atomic replacement)
NuttX master SHA zzzzz points to
  \nuttx master SHA aaa
  \apps master SHA bbb
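
For anyone consuming the hat repo, the payoff is that a single checkout
reproduces both trees at one point in time. A minimal sketch (the URL and
tag are placeholders, not a real repo):

git clone --recurse-submodules <url to hat repo> NuttX
cd NuttX
git checkout <release-tag-or-sha>
git submodule update --init --recursive
# nuttx/ and apps/ now sit at the exact commits recorded by that hat SHA -
# no temporal shift between the two trees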



-----Original Message-----
From: Sebastien Lorquet [mailto:sebastien@lorquet.fr]
Sent: Wednesday, December 18, 2019 5:52 AM
To: dev@nuttx.apache.org
Subject: Re: [DISCUSS - NuttX Workflow]

Wait,

What advantage, in fact, does the submodule method bring?

Even with a hat repository that contains two submodules (apps and nuttx),
you
will have to send separate pull requests for each submodule, right?

Sebastien

On 18/12/2019 at 14:40, Gregory Nutt wrote:
> On 12/18/2019 4:23 AM, David Sidrane wrote:
>> That is precisely what submodules do: submodules aggregate N
>> repositories on a single SHA.
>>
>> The problem is: how to have an atomic checkout of the correct configuration
>> without a temporal shift?
>>
>> Please describe how you would do this. Give detailed steps.
>
> I don't see any difference in versioning with submodules.  You have to
> explicitly state the UUID you are using in the submodule (unless there is
> a
> GIT sub-module trick I don't know).
>
> So how would you checkout the correct configuration with sub-modules.
> Seems
> to me that it is the same issue.
>
>> I would vote about 18 billion minus for this change.  But architecture
>> designs are not justified by blatant expediency.
>
> Let's not go this way.
>
>

RE: [DISCUSS - NuttX Workflow]

Posted by David Sidrane <Da...@nscdg.com>.
GOOD and BAD does not say why. It just says how one feels about something.

I get that the system architecture has a clear separation between OS and apps. No
question there. I also see the value in NOT having them in 1 repo.

But please bear with me, and let me tease out some "Why" answers on this
thread.

What I am not getting is: how are 2 folders in the same directory in a repo
the system architecture?

What is it about the location in a folder structure that is perceived as the
system architecture?

Would you please help us to understand this POV (Point Of View).

David

-----Original Message-----
From: Nathan Hartman [mailto:hartman.nathan@gmail.com]
Sent: Wednesday, December 18, 2019 6:49 AM
To: dev@nuttx.apache.org
Subject: Re: [DISCUSS - NuttX Workflow]

On Wed, Dec 18, 2019 at 9:05 AM Gregory Nutt <sp...@gmail.com> wrote:

> There are three different concepts being discussed here that I think we
> should separate.  I know that I get confused about which is which.
>
>  1. Two repositories apps/ and nuttx/ -- GOOD
>  2. One repository with apps/ and nuttx/ as folders -- VERY, VERY BAD
>  3. Three repositories, apps/, nuttx/ and, say, testing/.  Where testing
>     has the apps/ and nuttx/ as submodules -- WORTH CONSIDERING


Thanks for summing this up. I think this correctly summarizes what was said
before.

I agree that #2 is very very bad.

Nathan

Re: [DISCUSS - NuttX Workflow]

Posted by Gregory Nutt <sp...@gmail.com>.
>> There are three different concepts being discussed here that I think we
>> should separate.  I know that I get confused about which is which.
>>
>>   1. Two repositories apps/ and nuttx/ -- GOOD
>>   2. One repository with apps/ and nuttx/ as folders -- VERY, VERY BAD
>>   3. Three repositories, apps/, nuttx/ and, say, testing/.  Where testing
>>      has the apps/ and nuttx/ as submodules -- WORTH CONSIDERING
>
> Thanks for summing this up. I think this correctly summarizes what was said
> before.
>
> I agree that #2 is very very bad.

#3 should be thought of an implementation strategy.  In this thread, we 
are trying to summarize and agree to the high level, logical steps of 
the workflow.  I don't think that implementation details should even be 
a part of the conversation.  Mixing discussions of implementation into a 
description of functional requirements is bad engineering and cannot 
lead to a successful definition of the functional requirements.

My preference is that we forget about github altogether for now for the 
purpose of communicating functional requirements clearly. Let's forget 
want what is the cleanest and easiest implementation for now.  Let's 
specify the behavior that we want, the nicest thing in the show window, 
not the lowest cost thing in the bargain bin.

Greg



Re: [DISCUSS - NuttX Workflow]

Posted by Nathan Hartman <ha...@gmail.com>.
On Wed, Dec 18, 2019 at 9:05 AM Gregory Nutt <sp...@gmail.com> wrote:

> There are three different concepts being discussed here that I think we
> should separate.  I know that I get confused about which is which.
>
>  1. Two repositories apps/ and nuttx/ -- GOOD
>  2. One repository with apps/ and nuttx/ as folders -- VERY, VERY BAD
>  3. Three repositories, apps/, nuttx/ and, say, testing/.  Where testing
>     has the apps/ and nuttx/ as submodules -- WORTH CONSIDERING


Thanks for summing this up. I think this correctly summarizes what was said
before.

I agree that #2 is very very bad.

Nathan

RE: [DISCUSS - NuttX Workflow]

Posted by David Sidrane <Da...@nscdg.com>.
The 2 and 3 are to contrast HARD against EASY. This is so ALL of us can
realize we are not suggesting doing what is expedient*; what is being
suggested is doing what is right. I am having a really hard time getting you
to see this is not about EASY. It is subtle.

>I don't know if my understanding of the proposal is correct (I think
I've confused 2 and 3 a couple of times).  But I can't imagine a problem
with the testing/ repository that holds sub-modules.  The user would not
be impacted by such a thing in any way.

Yes they are: an end-user needs to pull nuttx and apps at any instant in time
and have them be in sync.

*there is no intent to violate the inviolate

-----Original Message-----
From: Gregory Nutt [mailto:spudaneco@gmail.com]
Sent: Wednesday, December 18, 2019 6:06 AM
To: dev@nuttx.apache.org
Subject: Re: [DISCUSS - NuttX Workflow]

So I hope that we do not go too far down the GitHub rabbit hole at this
level.  Every time we have tried to address and agree to the functional
workflow, we get derailed by GitHub technical implementation details.
I think this discussion is still relevant, but we are on the edge of losing
focus on the functional workflow and talking only about GitHub
implementation (and have crossed the edge at times).

> What advantage, in fact, does the submodule method bring?
>
> Even with a hat repository that contains two submodules (apps and nuttx),
> you
> will have to send separate pull requests for each submodule, right?

There are three different concepts being discussed here that I think we
should separate.  I know that I get confused about which is which.

 1. Two repositories apps/ and nuttx/ -- GOOD
 2. One repository with apps/ and nuttx/ as folders -- VERY, VERY BAD
 3. Three repositories, apps/, nuttx/ and, say, testing/.  Where testing
    has the apps/ and nuttx/ as submodules -- WORTH CONSIDERING

Number 3 would simply be a mechanization to support the workflow.  The
end user would never clone it or ever need to be concerned about it in any
way.  From the end-user point of view apps/ and nuttx/ are the only
repositories.

I don't know if my understanding of the proposal is correct (I think
I've confused 2 and 3 a couple of times).  But I can't imagine a problem
with the testing/ repository that holds sub-modules.  The user would not
be impacted by such a thing in any way.

If there is no user impact and no smearing of architectural entities,
then I retract the bad things I said about sub-modules in any previous
discussion.

Greg

Re: [DISCUSS - NuttX Workflow]

Posted by Gregory Nutt <sp...@gmail.com>.
So I hope that we do not go too far down the GitHub rabbit hole at this 
level.  Every time we have tried to address and agree to the functional 
workflow, we get derailed by GitHub technical implementation details.  
I think this discussion is still relevant, but we are on the edge of losing 
focus on the functional workflow and talking only about GitHub 
implementation (and have crossed the edge at times).

> What advantage, in fact, does the submodule method bring?
>
> Even with a hat repository that contains two submodules (apps and nuttx), you
> will have to send separate pull requests for each submodule, right?

There are three different concepts being discussed here that I think we 
should separate.  I know that I get confused about which is which.

 1. Two repositories apps/ and nuttx/ -- GOOD
 2. One repository with apps/ and nuttx/ as folders -- VERY, VERY BAD
 3. Three repositories, apps/, nuttx/ and, say, testing/.  Where testing
    has the apps/and nuttx/ as submodules -- WORTH CONSIDERING

Number 3 would simply be a mechanization to support the workflow.  The 
end user would never clone it or ever need to be concerned about it in any 
way.  From the end-user point of view apps/ and nuttx/ are the only 
repositories.

I don't know if my understanding of the proposal is correct (I think 
I've confused 2 and 3 a couple of times).  But I can't imagine a problem 
with the testing/ repository that holds sub-modules.  The user would not 
be impacted by such a thing in any way.

If there is no user impact and no smearing of architectural entities, 
then I retract bad things I said about sub-modules in any previous 
discussion.

Greg




Re: [DISCUSS - NuttX Workflow]

Posted by Sebastien Lorquet <se...@lorquet.fr>.
Wait,

what advantage does in fact the submodule method bring?

Even with a hat repository that contains two submodules (apps and nuttx), you
will have to send separate pull requests for each submodule, right?

Sebastien

Le 18/12/2019 à 14:40, Gregory Nutt a écrit :
> On 12/18/2019 4:23 AM, David Sidrane wrote:
>> That is precisely what submodules do: submodules aggregate N repositories
>> on a single SHA.
>>
>> The problem is: How do we get an atomic checkout of the correct
>> configuration without a temporal shift?
>>
>> Please describe how you would do this. Give detailed steps.
>
> I don't see any difference in versioning with submodules.  You have to
> explicitly state the UUID you are using in the submodule (unless there is a
> GIT sub-module trick I don't know).
>
> So how would you check out the correct configuration with sub-modules?  Seems
> to me that it is the same issue.
>
> I would vote about 18 billion minus for this change.  But architecture designs
> are not justified by blatant expediency.
>
> Let's not go this way.
>
>

RE: [DISCUSS - NuttX Workflow]

Posted by David Sidrane <Da...@nscdg.com>.
>(unless there is a GIT sub-module trick I don't know)

I believe this to be == TRUE. See the steps; try them and then you will
understand.
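
One such trick, as a sketch (the submodule and branch names here are
assumptions):

    # Tell git that the submodule should track a branch, not a pinned SHA:
    git config -f .gitmodules submodule.apps.branch master

    # Then one command moves the submodule to the tip of that branch:
    git submodule update --remote apps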


-----Original Message-----
From: Gregory Nutt [mailto:spudaneco@gmail.com]
Sent: Wednesday, December 18, 2019 5:40 AM
To: dev@nuttx.apache.org
Subject: Re: [DISCUSS - NuttX Workflow]

On 12/18/2019 4:23 AM, David Sidrane wrote:
> That is precisely what submodules do: submodules aggregate N repositories
> on a single SHA.
>
> The problem is: How do we get an atomic checkout of the correct
> configuration without a temporal shift?
>
> Please describe how you would do this. Give detailed steps.

I don't see any difference in versioning with submodules.  You have to
explicitly state the UUID you are using in the submodule (unless there
is a GIT sub-module trick I don't know).

So how would you check out the correct configuration with sub-modules?
Seems to me that it is the same issue.

I would vote about 18 billion minus for this change.  But architecture
designs are not justified by blatant expediency.

Let's not go this way.

Re: [DISCUSS - NuttX Workflow]

Posted by Gregory Nutt <sp...@gmail.com>.
On 12/18/2019 4:23 AM, David Sidrane wrote:
> That is precisely what submodules do: submodules aggregate N repositories on a single SHA.
>
> The problem is: How do we get an atomic checkout of the correct configuration without a temporal shift?
>
> Please describe how you would do this. Give detailed steps.

I don't see any difference in versioning with submodules.  You have to 
explicitly state the UUID you are using in the submodule (unless there 
is a GIT sub-module trick I don't know).

So how would you check out the correct configuration with sub-modules?  
Seems to me that it is the same issue.

I would vote about 18 billion minus for this change.  But architecture 
designs are not justified by blatant expediency.

Let's not go this way.



Re: [DISCUSS - NuttX Workflow]

Posted by David Sidrane <da...@apache.org>.
That is precisely what submodules do: submodules aggregate N repositories on a single SHA.

The problem is: How do we get an atomic checkout of the correct configuration without a temporal shift?

Please describe how you would do this. Give detailed steps.
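
As a minimal sketch of those steps (the knot repository URL is
hypothetical):

    # One clone yields apps/ and nuttx/ pinned to exact, matched SHAs:
    git clone --recurse-submodules https://example.org/nuttx-knot.git

    # Any recorded configuration can later be checked out atomically:
    git checkout <knot-sha>
    git submodule update --init --recursive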

On 2019/12/18 10:09:26, Alan Carvalho de Assis <ac...@gmail.com> wrote: 
> Hi Liu,
> 
> On Wednesday, December 18, 2019, Haitao Liu <li...@gmail.com> wrote:
> > How about just keep two separate git repositories (apps and nuttx
> > projects) instead
> > of add a parent knot repo with apps and nuttx as sub-modules?
> > As to jenkins CI, I haven’t found proper github plugin to get PRs from
> > multiple repos(especially PRs dependency in apps & nuttx ) in one Jenkins
> > job.  Before that, I wonder whether we could keep it simple and
> > directly, create
> > one jenkins job for apps and another  jenkins job for nuttx to process PR
> > trigger accordingly.  Just make sure the jenkins pipeline or build script
> > to sync both apps and nuttx repos, then pick the apps or nuttx PR to do
> > full build.
> >
> > Since nuttx and apps projects keeps same as before, developers adapt to
> > github workflow as usual:
> > 1 fork the official apache nuttx & apps projects in github
> > 2 git clone your fork projects locally
> > 3 edit locally and then git commit to local branch
> > 4 git push to your github fork nuttx/apps branch
> > 5 issue one pull request from your fork nuttx/apps to apache nuttx/apps
> > master branch
> > 6 jenkins CI auto-trigger: style check, build or test, if failed, go to
> > step 3, continue 3 ~ 7
> > 7 PMC start to review PR, review ok, merge to master; or review failed, go
> > to step 3, continue 3~7
> >
> > Detailed info about GitHub workflow:
> >
> https://help.github.com/en/github/collaborating-with-issues-and-pull-requests
> >
> 
> I agree! Using two repositories is better than creating submodules.
> 
> We just need to guarantee that users will clone both directories. The build
> system can do it when the user tries to build without the ../apps.
> 
> BR,
> 
> Alan
> 

Re: [DISCUSS - NuttX Workflow]

Posted by Gregory Nutt <sp...@gmail.com>.
> why completely change what has worked for years?
>
> 2 repos as always. Submodules are an absolute pain to manage when you have branches.
>
> people have always been cloning two repos.
I agree.  Let's not change that.

RE: [DISCUSS - NuttX Workflow]

Posted by David Sidrane <Da...@nscdg.com>.
Hi Sebastien,

I agree submodules are a PAIN! But I do not agree this is hard; it is just
more steps.

This is why: because on a busy project it will break the build and/or cause
code to not get run.
It will waste time debugging ghosts and create problem posts to the list for
issues that are not reproducible.

No matter what anyone thinks architecturally, there are dependencies from
apps to nuttx and from nuttx to apps.

Two examples:

1 ) Change: CONFIG_EXAMPLE_IRBLASTER -> CONFIG_EXAMPLE_IR_BLASTER

In all the defconfig files on nuttx then on apps


push nuttx
->>>>>>>>You pull
push apps

Oh, my code is broken; it does not even run the IR blaster

OR

2) Add a new OS syscall

push apps
->>>>>>>>You pull
push nuttx

Oh the build is broken
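
With the knot, by contrast, both pointers move in one commit, so the same
pull is race-free (a sketch):

    git -C knot pull --recurse-submodules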


If you want to roll the dice you can - do your old work flow. NOTHING is
stopping you from it; just check out the two repos, not the knot repo, and
do it all by hand.

David


-----Original Message-----
From: Sebastien Lorquet [mailto:sebastien@lorquet.fr]
Sent: Wednesday, December 18, 2019 2:58 AM
To: dev@nuttx.apache.org
Subject: Re: [DISCUSS - NuttX Workflow]

why completely change what has worked for years?

2 repos as always. Submodules are an absolute pain to manage when you have
branches.

people have always been cloning two repos.

devs were sending patches for one of them.

Now they send pull requests instead. Better tracking, ability to fix while
being reviewed...

pull requests require branches, that will be annoying with submodules. This
will
still require separate pull requests for apps and nuttx.

I have NEVER seen any contribution that really required an exactly atomic
update
to both repos.

People often send patches for nuttx, and sometimes for apps.

Why change that?

Sebastien

Le 18/12/2019 à 11:46, David Sidrane a écrit :
>> 5 issue one pull request from your fork nuttx/apps to apache nuttx/apps
>> master branch
> Are you suggesting we have one repo NuttX with 2 folders apps and nuttx?
>
> That will simplify everything! - but I suspect we will receive STRONG
> arguments against it.
>
> So you  say "one pull request"
>
> Where? You have 2 repos. PR are against a single repo.
>
> This is what the Knot does. - It is the where
>
> On 2019/12/18 10:09:26, Alan Carvalho de Assis <ac...@gmail.com> wrote:
>> Hi Liu,
>>
>> On Wednesday, December 18, 2019, Haitao Liu <li...@gmail.com> wrote:
>>> How about just keep two separate git repositories (apps and nuttx
>>> projects) instead
>>> of add a parent knot repo with apps and nuttx as sub-modules?
>>> As to jenkins CI, I haven’t found proper github plugin to get PRs from
>>> multiple repos(especially PRs dependency in apps & nuttx ) in one
>>> Jenkins
>>> job.  Before that, I wonder whether we could keep it simple and
>>> directly, create
>>> one jenkins job for apps and another  jenkins job for nuttx to process
>>> PR
>>> trigger accordingly.  Just make sure the jenkins pipeline or build
>>> script
>>> to sync both apps and nuttx repos, then pick the apps or nuttx PR to do
>>> full build.
>>>
>>> Since nuttx and apps projects keeps same as before, developers adapt to
>>> github workflow as usual:
>>> 1 fork the official apache nuttx & apps projects in github
>>> 2 git clone your fork projects locally
>>> 3 edit locally and then git commit to local branch
>>> 4 git push to your github fork nuttx/apps branch
>>> 5 issue one pull request from your fork nuttx/apps to apache nuttx/apps
>>> master branch
>>> 6 jenkins CI auto-trigger: style check, build or test, if failed, go to
>>> step 3, continue 3 ~ 7
>>> 7 PMC start to review PR, review ok, merge to master; or review failed,
>>> go
>>> to step 3, continue 3~7
>>>
>>> Detailed info about GitHub workflow:
>>>
>> https://help.github.com/en/github/collaborating-with-issues-and-pull-requests
>> I agree! Using two repositories is better than creating submodules.
>>
>> We just need to guarantee that users will clone both directories. The
>> build
>> system can do it when the user tries to build without the ../apps.
>>
>> BR,
>>
>> Alan
>>

Re: [DISCUSS - NuttX Workflow]

Posted by Sebastien Lorquet <se...@lorquet.fr>.
why completely change what has worked for years?

2 repos as always. Submodules are an absolute pain to manage when you have branches.

people have always been cloning two repos.

devs were sending patches for one of them.

Now they send pull requests instead. Better tracking, ability to fix while being
reviewed...

pull requests require branches, that will be annoying with submodules. This will
still require separate pull requests for apps and nuttx.

I have NEVER seen any contribution that really required an exactly atomic update
to both repos.

People often send patches for nuttx, and sometimes for apps.

Why change that?

Sebastien

Le 18/12/2019 à 11:46, David Sidrane a écrit :
>> 5 issue one pull request from your fork nuttx/apps to apache nuttx/apps master branch
> Are you suggesting we have one repo NuttX with 2 folders apps and nuttx?
>
> That will simplify everything! - but I suspect we will receive STRONG arguments against it.
>  
> So you  say "one pull request" 
>
> Where? You have 2 repos. PR are against a single repo.
>
> This is what the Knot does. - It is the where
>
> On 2019/12/18 10:09:26, Alan Carvalho de Assis <ac...@gmail.com> wrote: 
>> Hi Liu,
>>
>> On Wednesday, December 18, 2019, Haitao Liu <li...@gmail.com> wrote:
>>> How about just keep two separate git repositories (apps and nuttx
>>> projects) instead
>>> of add a parent knot repo with apps and nuttx as sub-modules?
>>> As to jenkins CI, I haven’t found proper github plugin to get PRs from
>>> multiple repos(especially PRs dependency in apps & nuttx ) in one Jenkins
>>> job.  Before that, I wonder whether we could keep it simple and
>>> directly, create
>>> one jenkins job for apps and another  jenkins job for nuttx to process PR
>>> trigger accordingly.  Just make sure the jenkins pipeline or build script
>>> to sync both apps and nuttx repos, then pick the apps or nuttx PR to do
>>> full build.
>>>
>>> Since nuttx and apps projects keeps same as before, developers adapt to
>>> github workflow as usual:
>>> 1 fork the official apache nuttx & apps projects in github
>>> 2 git clone your fork projects locally
>>> 3 edit locally and then git commit to local branch
>>> 4 git push to your github fork nuttx/apps branch
>>> 5 issue one pull request from your fork nuttx/apps to apache nuttx/apps
>>> master branch
>>> 6 jenkins CI auto-trigger: style check, build or test, if failed, go to
>>> step 3, continue 3 ~ 7
>>> 7 PMC start to review PR, review ok, merge to master; or review failed, go
>>> to step 3, continue 3~7
>>>
>>> Detailed info about GitHub workflow:
>>>
>> https://help.github.com/en/github/collaborating-with-issues-and-pull-requests
>> I agree! Using two repositories is better than creating submodules.
>>
>> We just need to guarantee that users will clone both directories. The build
>> system can do it when the user tries to build without the ../apps.
>>
>> BR,
>>
>> Alan
>>

Re: [DISCUSS - NuttX Workflow]

Posted by Justin Mclean <ju...@classsoftware.com>.
Hi,

> However, to work around the fact that one Jenkins job cannot receive
> webhook triggers from two GitHub repos (or I haven't found a way yet :( ),
> assign two Jenkins jobs instead.

There are a number of ways around this, e.g., off the top of my head: a multiple-step pipeline, or having Jenkins run a shell script to do the checkout. There are many other Apache projects with multiple repos that use CI, so we’ll just need to ask around.

Thanks,
Justin 

Re: [DISCUSS - NuttX Workflow]

Posted by Haitao Liu <li...@gmail.com>.
David, sorry that my expression was not clear. What I meant is to keep only
two repositories, apps/ and nuttx/, instead of sub-modules.

However, to work around the fact that one Jenkins job cannot receive
webhook triggers from two GitHub repos (or I haven't found a way yet :( ),
assign two Jenkins jobs instead.
One job for the apps/ repo and another for the nuttx/ repo.  The specific
CI-related info may need more discussion in another thread.
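
As a sketch, the build script of either job could simply sync both trees
itself before building (the URLs and the configuration name here are
assumptions, not settled choices):

    git clone https://github.com/apache/incubator-nuttx.git nuttx
    git clone https://github.com/apache/incubator-nuttx-apps.git apps
    cd nuttx
    ./tools/configure.sh sim:nsh    # configuration name illustrative
    make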


David Sidrane <da...@apache.org> 于2019年12月18日周三 下午6:46写道:

> > 5 issue one pull request from your fork nuttx/apps to apache nuttx/apps
> master branch
>
> Are you suggesting we have one repo NuttX with 2 folders apps and nuttx?
>
> That will simplify everything! - but I suspect we will receive STRONG
> arguments against it.
>
> So you  say "one pull request"
>
> Where? You have 2 repos. PR are against a single repo.
>
> This is what the Knot does. - It is the where
>
> On 2019/12/18 10:09:26, Alan Carvalho de Assis <ac...@gmail.com> wrote:
> > Hi Liu,
> >
> > On Wednesday, December 18, 2019, Haitao Liu <li...@gmail.com> wrote:
> > > How about just keep two separate git repositories (apps and nuttx
> > > projects) instead
> > > of add a parent knot repo with apps and nuttx as sub-modules?
> > > As to jenkins CI, I haven’t found proper github plugin to get PRs from
> > > multiple repos(especially PRs dependency in apps & nuttx ) in one
> Jenkins
> > > job.  Before that, I wonder whether we could keep it simple and
> > > directly, create
> > > one jenkins job for apps and another  jenkins job for nuttx to process
> PR
> > > trigger accordingly.  Just make sure the jenkins pipeline or build
> script
> > > to sync both apps and nuttx repos, then pick the apps or nuttx PR to do
> > > full build.
> > >
> > > Since nuttx and apps projects keeps same as before, developers adapt to
> > > github workflow as usual:
> > > 1 fork the official apache nuttx & apps projects in github
> > > 2 git clone your fork projects locally
> > > 3 edit locally and then git commit to local branch
> > > 4 git push to your github fork nuttx/apps branch
> > > 5 issue one pull request from your fork nuttx/apps to apache nuttx/apps
> > > master branch
> > > 6 jenkins CI auto-trigger: style check, build or test, if failed, go to
> > > step 3, continue 3 ~ 7
> > > 7 PMC start to review PR, review ok, merge to master; or review
> failed, go
> > > to step 3, continue 3~7
> > >
> > > Detailed info about GitHub workflow:
> > >
> >
> https://help.github.com/en/github/collaborating-with-issues-and-pull-requests
> > >
> >
> > I agree! Using two repositories is better than creating submodules.
> >
> > We just need to guarantee that users will clone both directories. The
> build
> > system can do it when the user tries to build without the ../apps.
> >
> > BR,
> >
> > Alan
> >
>

Re: [DISCUSS - NuttX Workflow]

Posted by Gregory Nutt <sp...@gmail.com>.
> Where do you see any reference to github (In a url as an example?)
>
> This is all pure git.
>
> Are we going to continue using git?
I will not discuss git or github in this thread.  I suggest you start a 
new thread on that subject.

RE: [DISCUSS - NuttX Workflow]

Posted by David Sidrane <Da...@nscdg.com>.
Greg,

Where do you see any reference to github (In a url as an example?)

This is all pure git.

Are we going to continue using git?


David
-----Original Message-----
From: Gregory Nutt [mailto:spudaneco@gmail.com]
Sent: Wednesday, December 18, 2019 8:07 AM
To: dev@nuttx.apache.org
Subject: Re: [DISCUSS - NuttX Workflow]


> What about the people who are just learning Nuttx? Simple is relative. I
> can see how a check out of one folder would make it hard in your setup and
> simple for the new folks. Isn't that why we are here: to grow the project?
users should not need to learn details of the workflow
> BTW: your argument is solved by sub-modules. You would just check out from
> nuttx repo
> It is also very helpful to have multiple remotes
> nuttx
>   nuttx ASF nuttx repo
> apps
>    nat   nathan's apps repo
>    nuttx ASF apps repo
>
> git fetch nuttx
> git log nuttx/apps - hmm that changed in make in afd890
> git reset --hard nat/apps
> git cherry-pick afd890
>
Please no... save the github chatter for another day, another thread.  I
refuse to even look at that

RE: [DISCUSS - NuttX Workflow]

Posted by David Sidrane <Da...@nscdg.com>.
Agreed!

Gosh, I am hoping I am not talking down to people; I just remember my own
learning curve with git. If we are continuing to use git, and I assume we
are, my comments are meant to help people who do not understand how to use
git for the process evaluate what they will have to do if we choose a
specific implementation.

Would you please lay out the steps for using patches and educate us on that
process as well?


David

-----Original Message-----
From: Gregory Nutt [mailto:spudaneco@gmail.com]
Sent: Wednesday, December 18, 2019 8:19 AM
To: dev@nuttx.apache.org
Subject: Re: [DISCUSS - NuttX Workflow]


>> What about the people who are just learning Nuttx? Simple is
>> relative. I can see how a check out of one folder would make it hard
>> in your setup and simple for the new folks. Isn't that why we are here:
>> to grow the project?
> users should not need to learn details of the workflow
>> BTW: your argument is solved by sub-modules. You would just check out
>> from nuttx repo
>> It is also very helpful to have multiple remotes
>> nuttx
>>   nuttx ASF nuttx repo
>> apps
>>    nat   nathan's apps repo
>>    nuttx ASF apps repo
>>
>> git fetch nuttx
>> git log nuttx/apps - hmm that changed in make in afd890
>> git reset --hard nat/apps
>> git cherry-pick afd890
>>
> Please no... save the github chatter for another day, another thread.
> I refuse to even look at that

Requirements specification is a top-down activity.  It is only driven by
end users' needs and project objectives.  NOT by implementation.  That is
the nature of System Engineering: top-down

Design for an implementation, on the other hand, is usually a bottom-up
activity:  You implement the lowest level foundations of the system and
build on top of that to complete the full functional requirements.

It is extremely bad engineering to drive system functional
requirements based on a pre-determined implementation of the lowest level.
It is a terrible, unprofessional practice.  We need to keep proper
top-down system engineering practices, and not get derailed by this low
level stuff.

We do indeed need to pick the nicest dress from the show room window based
upon what we really want, not go rummaging through the bargain bin for the
cheapest thing.

Let's get professional!

Greg

Re: [DISCUSS - NuttX Workflow]

Posted by Nathan Hartman <ha...@gmail.com>.
On Wed, Dec 18, 2019 at 3:51 PM David Sidrane <Da...@nscdg.com> wrote:
> Hi Nathan,
>
> Great list!
>
> I can +1 on most of them, but isn't it correct that the PPMC will need to all
> agree on these?

Hi David,

Thank you.

As I said, I was:

> Just throwing some thoughts out here as a starting point for that
> top-down discussion:

In other words, I wasn't stating that any of that is decided, final,
or even preliminary. I just wrote down some points in the hopes that
it would help get the conversation going, as Greg suggests (correctly
in my opinion) that we do before delving into git operational details.

More below...

> > When they wish to contribute, they can do so:
> > * Via a pull request
> > * Via a patch transmitted to us by some method
>
> Is this an ASF edict?

Nope.

More below...

> > Regardless of the method, we would convert the pull request and/or
> > patch into a form that is useful for us. For example, if we work with
> > pull requests and we are given a patch, we convert the patch into a
> > pull request.
>
> Where is the ability to have a group review? How is it done?

It sounds like the general direction is to use GitHub and its
mechanisms. Is that correct? If so, then that's where the group review
would take place.

As for patches that come by email, I suppose that whoever decides to
look over a patch can make a judgement call, and if they'd like to,
they can create a PR out of it. I don't think there's any formal
"process" for this part, but it would be nice if they'd reply to the
email to tell everyone that it has been moved to a PR and give the
link to it. That should also prevent (or minimize the chance of)
duplicated PRs from the same patch by multiple people.

More below...

> > Contributions may be based on:
> > * Master.
> > * Or the latest release. When contributions are based on the latest
> > release, we should rebase them onto master.
>
> What if the fix is on master? Would it need to be backported to the release?
> How do you see the decision made on backports and who does this?

I don't know how other projects do it but I can tell you what we do
over in Subversionland. Once a release branch is formed, we never
commit code changes directly to it. All fixes are committed to trunk
(i.e., "master") and are backported to the branch. The decision to
backport a change is made as follows (summarizing roughly): A
committer nominates a commit for backport by listing it in a STATUS
file. This is an ordinary text file. Other committers may vote for (or
against) the backport by adding their vote to the same file. Once the
requisite number of votes have accumulated with no vetoes etc., the
commit is approved for backport. (While this could be done manually,
we have custom tooling to assist with adding items to STATUS and
voting on items, and there's a bot that runs every night, checks
STATUS, and does the actual merging.) If you're interested in the
details: https://subversion.apache.org/docs/community-guide/releasing.html#release-stabilization
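
A nominated entry in STATUS looks roughly like this (the revision, summary,
and names are made up):

 * r1234567
   Fix crash in foo when bar is NULL.
   Justification:
     Affects all users of foo.
   Votes:
     +1: alice, bob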

Hope this helps,
Nathan

Re: [DISCUSS - NuttX Workflow]

Posted by Justin Mclean <ju...@classsoftware.com>.
Hi,

To answer a couple of your questions.

> over 2 weeks nuttx.apache.org has content?

It would be good to start discussing what to do about this in another thread. Having a website, even if minimal to start with, is sort of important.

> Jira is up?

From what I can see it is not decided whether JIRA is needed or not. The project could use GitHub issues, for instance, but if the project needs JIRA it’s very easy to set up and would be live in 10 minutes.

Thanks,
Justin

Re: [DISCUSS - NuttX Workflow]

Posted by Gregory Nutt <sp...@gmail.com>.
Those of you that are used to the fast-response, shoot-from-the-hip 
style of the old BSD NuttX will be in for a rude awakening.  There will be 
some bureaucracy, some overhead, and lots of processes.  The processes 
are, I believe, good for NuttX. It is now too large and too critical to 
depend on how I am feeling in the morning when I handle the night's 
patches.  Good engineering processes will help the OS in the long run... 
especially in the not-so-distant future when I will not be so deeply 
involved.

I assume that you trusted me to make good decisions in the past. You 
will now need to trust the group think.

In a democracy, you will always have politics in any endeavor with 3 or 
more people.  As soon as 3 people are involved, one will have to lobby, 
bamboozle, or overwhelm the others to get a majority.  That is just the 
way that the human creature works. Voting is a way to make sure that 
everyone is heard, everyone has their voice, and no dominant person can 
take charge.  It is also a good thing.

The future is upon us.  Best to embrace it.

Greg


On 12/18/2019 4:38 PM, Disruptive Solutions wrote:
> Do we really have to do some sharing in what we did in the past to get each other's trust or something? I really do not care about egos and stuff like that. Focus and get things done, please... politics are another matter, right?
>
> Apache: please tell us which milestones are set?
> over 2 weeks nuttx.apache.org has content?
> We can contribute in patches?
> Test strategy is set?
> Jira is up?
> Is there a strategy for how Nuttx gets over 100 committers in 6 months?
> Etc etc?
>
> I think focus is the key....
>
> Verstuurd vanaf mijn iPhone
>
>> Op 18 dec. 2019 om 23:31 heeft Gregory Nutt <sp...@gmail.com> het volgende geschreven:
>>
>> With Nathan's workflow on another thread, DavidS's workflow early in this thread, Nathan's workflow on this thread, Nathan's workflow with my appended workflow, and Justin's comments ... Do we have enough to define an initial workflow?  I think so.  Some of it is a little inconsistent (but not wildly so), some has a little longer lead time like a reliable beautifier and hardware/simulator in loop testing, but I think it is generally resolvable over time.  Do you think we have enough to put together a straw man work flow and get consensus on it?
>>
>> We should not discuss or consider any git/github implementation at this time.  We should have just a clean, simple list of English sentences that describe what the workflow is.  I propose that we get consensus through a less formal vote of the PPMC (binding) and we should also hear what everyone else thinks in the list (non-binding).
>>
>> Who wants to summarize and call the vote?  I would like to see some volunteer from the other, less vocal members of the PPMC.  We need to get everyone on board.
>>
>> I think I should specifically stand back and let it happen.
>>
>> Once we have nailed the workflow, then it will be time to talk git and github topics to generate the top-level design.  You can then all 'break-a-leg' with git discussions!  The top-level design (e.g., how many repositories, for example) should be subject to consensus as well, I think.  But let's let the implementers have a more-or-less free hand with the detailed design.
>>
>> Thoughts?
>>
>> Greg
>>
>>> On 12/18/2019 3:51 PM, Justin Mclean wrote:
>>> Hi,
>>>
>>> I can +1 on most of them, but isn't it correct that the PPMC will need to all
>>>> agree on these?
>> We need to reach consensus; that doesn’t mean all need to 100% agree, but that all are OK with the proposed workflow.
>>>
>>>>> When they wish to contribute, they can do so:
>>>>> * Via a pull request
>>>>> * Via a patch transmitted to us by some method
>>>> Is this an ASF edict?
>> Nope, we don’t care how contributions come in; some projects may have their own requirements. But for significant contributions we do like people to sign an ICLA, and once they are a committer an ICLA is needed.
>>>
>>> Thanks,
>>> Justin

Re: [DISCUSS - NuttX Workflow]

Posted by Disruptive Solutions <di...@gmail.com>.
Do we really have to do some sharing in what we did in the past to get each other's trust or something? I really do not care about egos and stuff like that. Focus and get things done, please... politics are another matter, right?

Apache: please tell us which milestones are set?
over 2 weeks nuttx.apache.org has content?
We can contribute in patches?
Test strategy is set?
Jira is up?
Is there a strategy for how Nuttx gets over 100 committers in 6 months?
Etc etc?

I think focus is the key....

Verstuurd vanaf mijn iPhone

> Op 18 dec. 2019 om 23:31 heeft Gregory Nutt <sp...@gmail.com> het volgende geschreven:
> 
> With Nathan's workflow on another thread, DavidS's workflow early in this thread, Nathan's workflow on this thread, Nathan's workflow with my appended workflow, and Justin's comments ... Do we have enough to define an initial workflow?  I think so.  Some of it is a little inconsistent (but not wildly so), some has a little longer lead time like a reliable beautifier and hardware/simulator in loop testing, but I think it is generally resolvable over time.  Do you think we have enough to put together a straw man work flow and get consensus on it?
> 
> We should not discuss or consider any git/github implementation at this time.  We should have just a clean, simple list of English sentences that describe what the workflow is.  I propose that we get consensus through a less formal vote of the PPMC (binding) and we should also hear what everyone else thinks in the list (non-binding).
> 
> Who wants to summarize and call the vote?  I would like to see some volunteer from the other, less vocal members of the PPMC.  We need to get everyone on board.
> 
> I think I should specifically stand back and let it happen.
> 
> Once we have nailed the workflow, then it will be time to talk git and github topics to generate the top-level design.  You can then all 'break-a-leg' with git discussions!  The top-level design (e.g., how many repositories, for example) should be subject to consensus as well, I think.  But let's let the implementers have a more-or-less free hand with the detailed design.
> 
> Thoughts?
> 
> Greg
> 
>> On 12/18/2019 3:51 PM, Justin Mclean wrote:
>> Hi,
>> 
>>> I can +1 on most of them, but isn't it correct that the PPMC will need to all
>>> agree on these?
>> We need to reach consensus; that doesn’t mean all need to 100% agree, but that all are OK with the proposed workflow.
>> 
>>>> When they wish to contribute, they can do so:
>>>> * Via a pull request
>>>> * Via a patch transmitted to us by some method
>>> Is this an ASF edict?
>> Nope, we don’t care how contributions come in; some projects may have their own requirements. But for significant contributions we do like people to sign an ICLA, and once they are a committer an ICLA is needed.
>> 
>> Thanks,
>> Justin

Re: [DISCUSS - NuttX Workflow]

Posted by Gregory Nutt <sp...@gmail.com>.
With Nathan's workflow on another thread, DavidS's workflow early in 
this thread, Nathan's workflow on this thread, Nathan's workflow with my 
appended workflow, and Justin's comments ... Do we have enough to define 
an initial workflow?  I think so.  Some of it is a little inconsistent 
(but not wildly so), some has a little longer lead time like a reliable 
beautifier and hardware/simulator in loop testing, but I think it is 
generally resolvable over time.  Do you think we have enough to put 
together a straw man work flow and get consensus on it?

We should not discuss or consider any git/github implementation at this 
time.  We should have just a clean, simple list of English sentences 
that describe what the workflow is.  I propose that we get consensus 
through a less formal vote of the PPMC (binding) and we should also hear 
what everyone else thinks in the list (non-binding).

Who wants to summarize and call the vote?  I would like to see some 
volunteer from the other, less vocal members of the PPMC.  We need to 
get everyone on board.

I think I should specifically stand back and let it happen.

Once we have nailed the workflow, then it will be time to talk git and 
github topics to generate the top-level design.  You can then all 
'break-a-leg' with git discussions!  The top-level design (e.g., how 
many repositories, for example) should be subject to consensus as well, 
I think.  But let's let the implementers have a more-or-less free hand 
with the detailed design.

Thoughts?

Greg

On 12/18/2019 3:51 PM, Justin Mclean wrote:
> Hi,
>
>> I can +1 on most of them, but isn't it correct that the PPMC will need to all
>> agree on these?
> We need to reach consensus; that doesn’t mean all need to 100% agree, but that all are OK with the proposed workflow.
>
>>> When they wish to contribute, they can do so:
>>> * Via a pull request
>>> * Via a patch transmitted to us by some method
>> Is this an ASF edict?
> Nope, we don’t care how contributions come in; some projects may have their own requirements. But for significant contributions we do like people to sign an ICLA, and once they are a committer an ICLA is needed.
>
> Thanks,
> Justin

Re: [DISCUSS - NuttX Workflow]

Posted by Justin Mclean <ju...@classsoftware.com>.
Hi,

> I can +1 on most of them, but isn't it correct that the PPMC will need to all
> agree on these?

We need to reach consensus; that doesn’t mean all need to 100% agree, but that all are OK with the proposed workflow.

>> When they wish to contribute, they can do so:
>> * Via a pull request
>> * Via a patch transmitted to us by some method
> 
> Is this an ASF edict?

Nope, we don’t care how contributions come in; some projects may have their own requirements. But for significant contributions we do like people to sign an ICLA, and once they are a committer an ICLA is needed.

Thanks,
Justin

RE: [DISCUSS - NuttX Workflow]

Posted by David Sidrane <Da...@nscdg.com>.
Hi Nathan,

Great list!

I can +1 on most of them, but isn't it correct that the PPMC will need to all
agree on these?

> When they wish to contribute, they can do so:
> * Via a pull request
> * Via a patch transmitted to us by some method

Is this an ASF edict?

> Regardless of the method, we would convert the pull request and/or
> patch into a form that is useful for us. For example, if we work with
> pull requests and we are given a patch, we convert the patch into a
> pull request.

Where is the ability to have a group review? How is it done?

> Contributions may be based on:
> * Master.
> * Or the latest release. When contributions are based on the latest
> release, we should rebase them onto master.

What if the fix is on master? Would it need to be backported to the release?
How do you see the decision made on backports and who does this?

David


-----Original Message-----
From: Nathan Hartman [mailto:hartman.nathan@gmail.com]
Sent: Wednesday, December 18, 2019 8:56 AM
To: dev@nuttx.apache.org
Subject: Re: [DISCUSS - NuttX Workflow]

On Wed, Dec 18, 2019 at 11:18 AM Gregory Nutt <sp...@gmail.com> wrote:
> Requirements specification is a top-down activity.  It is only driven by
> end users' needs and project objectives.  NOT by implementation.  That is
> the nature of System Engineering: top-down

Just throwing some thoughts out here as a starting point for that
top-down discussion:

Users of NuttX can:
* Use NuttX with our Apps
* Use NuttX by itself and provide their own Apps

For the toolchain, they can:
* Use the toolchains we provide with buildroot
* Use their own toolchains

They can get NuttX and/or apps:
* From Git
* From source release tarballs

If getting from Git, they can:
* Live on the bleeding edge with Master
* Work from a branch or tag for more stability

When they wish to contribute, they can do so:
* Via a pull request
* Via a patch transmitted to us by some method

Regardless of the method, we would convert the pull request and/or
patch into a form that is useful for us. For example, if we work with
pull requests and we are given a patch, we convert the patch into a
pull request.

Contributions may be based on:
* Master.
* Or the latest release. When contributions are based on the latest
release, we should rebase them onto master.

Re: [DISCUSS - NuttX Workflow]

Posted by Justin Mclean <ju...@classsoftware.com>.
HI,

> 3. PMC should triage and assign the change to a committer.  PMC may
>    also review for conformance with the Inviolables.  If this review
>    fails, the change is declined.

Most of the Apache projects I’m on let committers select what they want to review and work on rather than being assigned it. It’s often referred to as “scratch your own itch”. That’s not to say that this project can’t do it differently, but the workflow may need to consider that people are volunteers and their availability may vary.

Thanks,
Justin

Re: [DISCUSS - NuttX Workflow]

Posted by Gregory Nutt <sp...@gmail.com>.
>> Requirements specification is a top-down activity.  It is only driven by
>> end users' needs and project objectives.  NOT by implementation.  That is
>> the nature of System Engineering: top-down
> Just throwing some thoughts out here as a starting point for that
> top-down discussion:
>
> Users of NuttX can:
> * Use NuttX with our Apps
> * Use NuttX by itself and provide their own Apps
>
> For the toolchain, they can:
> * Use the toolchains we provide with buildroot
> * Use their own toolchains
>
> They can get NuttX and/or apps:
> * From Git
> * From source release tarballs
>
> If getting from Git, they can:
> * Live on the bleeding edge with Master
> * Work from a branch or tag for more stability
>
> When they wish to contribute, they can do so:
> * Via a pull request
> * Via a patch transmitted to us by some method
>
> Regardless of the method, we would convert the pull request and/or
> patch into a form that is useful for us. For example, if we work with
> pull requests and we are given a patch, we convert the patch into a
> pull request.
>
> Contributions may be based on:
> * Master.
> * Or the latest release. When contributions are based on the latest
> release, we should rebase them onto master.

I wrote this a long time ago:


  Proposed Steps from Contribution to Commit

  I think the work flow should be like this:

 1. PR or patch received (basically what Nathan wrote above)

 2. Triggers automated checking:

    a. Verify that it follows the coding standard, and
    b. verify that the build is not broken.

    If either fails, ask the contributor to fix the problem and resubmit
    the change.

 3. PMC should triage and assign the change to a committer.  PMC may
    also review for conformance with the Inviolables.  If this review
    fails, the change is declined.

 4. Committer performs final review for technical correctness and
    conformance to the Inviolables.  If this review fails, the change is
    declined; otherwise the committer commits the change.



    Step 1

Changes should include some information about how to test the change.  
For modifications to code that is tested by existing configurations, we 
would need to know the relevant configuration settings.  From that we 
should be able to select a set of relevant test configurations.


If the change is a new feature, then it may not be testable using any 
existing configuration.  In that case, we will have to insist that the 
change be accompanied by a configuration that can be used for testing.


    Step 2

For now we just need this minimum, but this should extend in the future 
as we aim for a higher level of quality assurance.

Step 2a) The NuttX style verification tool, nxstyle, should be used to 
check coding style.  If the submission does not follow the NuttX coding 
style, we will need to ask the contributor to update the change so that 
it does.

Nxstyle is an imperfect tool, however.  We probably need to manually 
check any failed output to verify that the failures are not false alarms.
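
As a sketch, the check itself amounts to (file path illustrative):

    gcc -o tools/nxstyle tools/nxstyle.c
    tools/nxstyle drivers/serial/serial.c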

Step 2b) The brute force Jenkins-style testing is not useful here.  
Rather, we need a smarter build. We need to build configurations that 
ACTUALLY build the code that is changed by the contribution.  Per Step 1,
the contributor has either (1) provided a new test configuration (which
should be included as a part of the change), or (2) provided the relevant
configuration settings for testing the change.

In the latter case, we should be able to build a test configuration list
by selecting existing configurations that include these configuration
settings.
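
As a sketch, the selection can be as simple as this (option name borrowed
from an example earlier in this thread):

    # List the board configurations that enable the changed option:
    grep -l "CONFIG_EXAMPLE_IR_BLASTER=y" boards/*/*/*/configs/*/defconfig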

Step 2c)  This is where we may want to add hardware-in-the-loop testing 
with a reference board in the future.


    Step 3

There should be a list of all committers with the areas in which they 
have the best expertise.  Assigning a change to a committer should be 
simply picking the person with the best expertise, but also accounting 
for any backlog.  Other committers may need to take up the slack.


    Step 4

Ultimately, it is the committer who is responsible for assuring that (1) 
the change is technically correct, complete, and of the highest 
quality.  And that (2) the change is consistent with all of the 
principles of the Inviolables: The change must not violate the portable 
POSIX interface, the change must conform to the architectural principles 
of the OS, the change must not expose any platform dependencies that 
would have any impact on other users of NuttX.


At this point, the committer should be confident that the change is in 
full compliance with the coding standard and will not break the build.



Re: [DISCUSS - NuttX Workflow]

Posted by Nathan Hartman <ha...@gmail.com>.
On Wed, Dec 18, 2019 at 11:18 AM Gregory Nutt <sp...@gmail.com> wrote:
> Requirements specification is a top-down activity.  It is only driven by
> end users' needs and project objectives.  NOT by implementation.  That is
> the nature of System Engineering: top-down

Just throwing some thoughts out here as a starting point for that
top-down discussion:

Users of NuttX can:
* Use NuttX with our Apps
* Use NuttX by itself and provide their own Apps

For the toolchain, they can:
* Use the toolchains we provide with buildroot
* Use their own toolchains

They can get NuttX and/or apps:
* From Git
* From source release tarballs

If getting from Git, they can:
* Live on the bleeding edge with Master
* Work from a branch or tag for more stability

When they wish to contribute, they can do so:
* Via a pull request
* Via a patch transmitted to us by some method

Regardless of the method, we would convert the pull request and/or
patch into a form that is useful for us. For example, if we work with
pull requests and we are given a patch, we convert the patch into a
pull request.

Contributions may be based on:
* Master.
* Or the latest release. When contributions are based on the latest
release, we should rebase them onto master.
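
For the last point, the rebase itself would be something like this sketch
(branch names illustrative):

    git fetch origin
    git rebase --onto origin/master nuttx-8.2 contribution-branch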

Re: [DISCUSS - NuttX Workflow]

Posted by Gregory Nutt <sp...@gmail.com>.
>> What about the people who are just learning Nuttx? Simple is 
>> relative. I can see how a check out of one folder would make it hard 
>> in your setup and simple for the new folks. Isn't that why we are here: 
>> to grow the project?
> users should not need to learn details of the workflow
>> BTW: your argument is solved by sub-modules. You would just check out 
>> from nuttx repo
>> It is also very helpful to have multiple remotes
>> nuttx
>>   nuttx ASF nuttx repo
>> apps
>>    nat   nathan's apps repo
>>    nuttx ASF apps repo
>>
>> git fetch nuttx
>> git log nuttx/apps - hmm that changed in make in afd890
>> git reset --hard nat/apps
>> git cherry-pick afd890
>>
> Please no... save the github chatter for another day, another thread.  
> I refuse to even look at that

Requirements specification is a top-down activity.  It is only driven by 
end users' needs and project objectives.  NOT by implementation.  That is 
the nature of System Engineering: top-down

Design for an implementation, on the other hand, is usually a bottom-up 
activity:  You implement the lowest level foundations of the system and 
build on top of that to complete the full functional requirements.

It is extremely bad engineering to drive system functional 
requirements based on a pre-determined implementation of the lowest level.  
It is a terrible, unprofessional practice.  We need to keep proper 
top-down system engineering practices, and not get derailed by this low 
level stuff.

We do indeed need to pick the nicest dress from the show room window based 
upon what we really want, not go rummaging through the bargain bin for the 
cheapest thing.

Let's get professional!

Greg



Re: [DISCUSS - NuttX Workflow]

Posted by Gregory Nutt <sp...@gmail.com>.
> What about the people who are just learning Nuttx? Simple is relative. I can see how a check out of one folder would make it hard in your setup and simple for the new folks. Isn't that why we are here: to grow the project?
users should not need to learn details of the workflow
> BTW: your argument is solved by sub-modules. You would just check out from nuttx repo
> It is also very helpful to have multiple remotes
> nuttx
>   nuttx ASF nuttx repo
> apps
>    nat   nathan's apps repo
>    nuttx ASF apps repo
>
> git fetch nuttx
> git log nuttx/apps - hmm that changed in make in afd890
> git reset --hard nat/apps
> git cherry-pick afd890
>
Please no... save the github chatter for another day, another thread.  I 
refuse to even look at that



Re: nuttx.events

Posted by Flavio Junqueira <fp...@apache.org>.
Hi Greg,

It is ok to discuss separately the organization of events, they don't need to happen on the project mailing lists. If the event is open to the community with an open call for abstracts, then you might want to announce it on the user/dev lists. Some communities do not like it, though. If there is a company backing such an event, people could complain that it is not a neutral event and as such should not be advertised on the Apache lists.

As a sample point, I have never organized events using the Apache lists or any other channels, but we have announced project meetups to let people sign up to attend and present.

-Flavio

> On 18 Dec 2019, at 20:06, Gregory Nutt <sp...@gmail.com> wrote:
> 
> This is a question for Justin or any other mentor.
> 
> There are a couple of associated web sites that are dedicated for NuttX event planning:  nuttx2019.org and https://nuttx.events/. nuttx2019.org now just re-directs to nuttx.events.  The event planners used to use a private channel in the NuttX Slack for communication.  However, the NuttX project can no longer host private conversations in the NuttX Slack.
> 
> nuttx.events is managed by Dave Marples.  It is fairly open, but they need to have some private discussions related to events, event planning, sponsor relationships, etc.  For example, planning is (or at least was) underway for the NuttX2020 event in Tokyo in May.  I am not sure what they are doing now.  I hope I didn't undermine them too badly.
> 
> So the question is, after having booted event planning out into the cold, is there some way to bring it back into the fold?  There must be other projects that host events and must have similar planning needs.  Do you know what the standard practice is for such things?  I imagine that an independent group like nuttx.events is necessary, but is there any way to coordinate?  Slack private channels worked very well for this.  I don't know what the replacement could be and would be open to suggestions.
> 
> Greg
> 
> 


Re: nuttx.events

Posted by "张铎 (Duo Zhang)" <pa...@gmail.com>.
For HBaseCon, usually the discussion will first be held on the private
mailing list, to get enough PMC members' support; then you can post to
the public mailing list to get the PC members to review and accept the
submissions. Also, a PMC member must send an email to the trademarks
mailing list to acquire permission to use the 'Apache NuttX' trademarks
on the materials for this event. The website is fine; usually it will be
hosted by the company which hosts the event.

Justin Mclean <ju...@classsoftware.com> 于2019年12月19日周四 上午5:35写道:

> Hi,
>
> It’s best if the PMC are involved in some way, ideally they would have a
> hand in selecting the speakers, and the event would have the ASF as a
> community sponsor. For more details see [1]. Given this is an event already
> in the planning process and you're an incubating project there’s going to
> be some leeway but I don’t see anything there that would be too hard to
> take into consideration (says someone not on the planning committee :-) )
> Out of interest, how large is the event expected to be? You might also
> want to consider having the event added here [2].
>
> Thanks,
> Justin
>
> 1. https://www.apache.org/foundation/marks/events.html
> 2. http://community.apache.org/calendars/

Re: nuttx.events

Posted by Justin Mclean <ju...@classsoftware.com>.
Hi,

It’s best if the PMC are involved in some way; ideally they would have a hand in selecting the speakers, and the event would have the ASF as a community sponsor. For more details see [1]. Given this is an event already in the planning process and you're an incubating project there’s going to be some leeway, but I don’t see anything there that would be too hard to take into consideration (says someone not on the planning committee :-) ) Out of interest, how large is the event expected to be? You might also want to consider having the event added here [2].

Thanks,
Justin

1. https://www.apache.org/foundation/marks/events.html
2. http://community.apache.org/calendars/

nuttx.events

Posted by Gregory Nutt <sp...@gmail.com>.
This is a question for Justin or any other mentor.

There are a couple of associated web sites that are dedicated for NuttX 
event planning:  nuttx2019.org and https://nuttx.events/. nuttx2019.org 
now just re-directs to nuttx.events.  The event planners used to use a 
private channel in the NuttX Slack for communication.  However, the 
NuttX project can no longer host private conversations in the NuttX Slack.

nuttx.events is managed by Dave Marples.  It is fairly open, but they 
need to have some private discussions related to events, event planning, 
sponsor relationships, etc.  For example, planning is (or at least was) 
underway for the NuttX2020 event in Tokyo in May.  I am not sure what 
they are doing now.  I hope I didn't undermine them too badly.

So the question is, after having booted event planning out into the 
cold, is there some way to bring it back into the fold?  There must be 
other projects that host events and must have similar planning needs.  
Do you know what the standard practice is for such things?  I imagine 
that an independent group like nuttx.events is necessary, but is there 
any way to coordinate?  Slack private channels worked very well for 
this.  I don't know what the replacement could be and would be open to 
suggestions.

Greg



RE: Getting Started (Was Re: [DISCUSS - NuttX Workflow])

Posted by di...@gmail.com.
My current state is that the outline is formed, the filling in per "chapter" is on its way, and the purpose of the book is set. Its audience also includes people who come across NuttX for the first time, who have not seen movies or other tutorials about NuttX and did not have the privilege to meet all of you as I have; say, people who start with a platform like Arduino. So there is a global introduction to all the aspects which I already mentioned: Architecture, Concepts, etc.

Also, the outline is based on real use cases, and the book will take you through hands-on situations. So you do not only read about the architecture of NuttX; you are also taken on a journey by really making things happen. It also covers setting up a development process (IDE, debugging, etc.). Arduino has its IDE. I am writing all the examples in Visual Studio Code, and here I have to get the debugging solid. In Eclipse, it looks more advanced. Using a Segger is also an option, but OpenOCD is free and open for STM32, so a choice is made here. It has to be around for many years and not have vendor lock-in.

But before I can write out the use cases, I have to do them myself so that all the written use cases and code are tested thoroughly. There is nothing more killing than referencing something in a book that does not work. Also, I had to choose a reference platform for the book on which all the use cases are implemented. Even though I know what an RTOS is and it's not about hardware... one has to choose a reference platform for the book, and for this I have chosen STM32 hardware. I do not have any commitment to a hardware supplier, but one has to make a choice, and STM32 scores well for real beginners... especially in cost vs. functionality.

So I am doing a lot of work daily (next to a job I have to do) to get all these things done. But writing a book has to be fun and a learning experience. I am also in contact with Alan concerning the use cases and testing, and with a publisher who has understanding for the previous matters. Better to do things right than in chaos.

If someone wants an update, I am always willing to share this. I am also very anxious to see the progress on NuttX, and I do not see a roadmap here. My roadmap is clear... but right now all my links point to NuttX on the Bitbucket repo and the BSD platform. So I cannot bring out anything until I know how, what, and when the Apache release will be ready in a state like the previous platform was, where I could commit my patches, which are now linked with this writing process.

Good luck to the PMC and the people who are doing this great job; I am reading all the emails about status and progress.

Ben

-----Original Message-----
From: Gregory Nutt <sp...@gmail.com> 
Sent: Wednesday, December 18, 2019 20:31
To: dev@nuttx.apache.org
Subject: Re: Getting Started (Was Re: [DISCUSS - NuttX Workflow])

Ben vd Veen has been working on a NuttX getting started book for a few months.  I don't know the current state.  There was a channel on the old NuttX Slack devoted to the book, but it has not been updated in a very long time.  Perhaps Ben could fill us in on the current state, progress, and where it is going.

BTW: I have archived the channels and deactivated most of the members. It is no longer viable for anything and I may as well shut down that Slack now.  There is nothing to see there now.

On 12/18/2019 1:34 PM, Abdelatif Guettouche wrote:
>> I'd prefer that the Getting Started guide should be reachable by one 
>> click from the front page of the NuttX website (which doesn't exist 
>> yet), so that a TOTAL newbie who hasn't even gotten the code yet can 
>> read and get a feel for what's involved.
> Agree.
> I wanted to point out that much of the content needed to make such a 
> document is already in place.
>
>> Yes, much of the information is in the README file. Perhaps we can 
>> modify text files like that to be in Markdown format, which unlike 
>> HTML, leaves the file looking like a normal ASCII file, but allows 
>> the file to be converted to other formats, including HTML, using 
>> automated tools. Then we could convert that information and display 
>> it directly on the website.
> That would be helpful.
> Some of the readme files are already (almost) in Markdown format.
>
>
> On Wed, Dec 18, 2019 at 5:46 PM Nathan Hartman <ha...@gmail.com> wrote:
>> On Wed, Dec 18, 2019 at 12:01 PM Gregory Nutt <sp...@gmail.com> wrote:
>>> There is this: http://www.nuttx.org/doku.php?id=wiki:getting-started
>>> where the "external tutorials" is quite extensive:
>>> http://www.nuttx.org/doku.php?id=wiki:getting-started:external-tutor
>>> ials
>> That's great but I think we need our own basic Getting Started guide 
>> that gets a total newbie off the ground quickly; it can, of course, 
>> have an "Additional Resources" section with links to all of these 
>> other resources.
>>
>> On Wed, Dec 18, 2019 at 12:31 PM Abdelatif Guettouche 
>> <ab...@gmail.com> wrote:
>>> Boards readme files contain all the information needed to get 
>>> started with a particular board.
>> Again, that's great, but it presumes that you have the code, know 
>> about the board READMEs, know where they are...
>>
>> I'd prefer that the Getting Started guide should be reachable by one 
>> click from the front page of the NuttX website (which doesn't exist 
>> yet), so that a TOTAL newbie who hasn't even gotten the code yet can 
>> read and get a feel for what's involved.
>>
>> Yes, much of the information is in the README file. Perhaps we can 
>> modify text files like that to be in Markdown format, which unlike 
>> HTML, leaves the file looking like a normal ASCII file, but allows 
>> the file to be converted to other formats, including HTML, using 
>> automated tools. Then we could convert that information and display 
>> it directly on the website.
>>
>> Nathan




Re: Getting Started (Was Re: [DISCUSS - NuttX Workflow])

Posted by Gregory Nutt <sp...@gmail.com>.
Ben vd Veen has been working on a NuttX getting started book for a few 
months.  I don't know the current state.  There was a channel on the old 
NuttX Slack devoted to the book, but it has not been updated in a very 
long time.  Perhaps Ben could fill us in on the current state, progress, 
and where it is going.

BTW: I have archived the channels and deactivated most of the members.  
It is no longer viable for anything and I may as well shut down that 
Slack now.  There is nothing to see there now.

On 12/18/2019 1:34 PM, Abdelatif Guettouche wrote:
>> I'd prefer that the Getting Started guide should be reachable by one
>> click from the front page of the NuttX website (which doesn't exist
>> yet), so that a TOTAL newbie who hasn't even gotten the code yet can
>> read and get a feel for what's involved.
> Agree.
> I wanted to point out that much of the content needed to make such a
> document is already in place.
>
>> Yes, much of the information is in the README file. Perhaps we can
>> modify text files like that to be in Markdown format, which unlike
>> HTML, leaves the file looking like a normal ASCII file, but allows the
>> file to be converted to other formats, including HTML, using automated
>> tools. Then we could convert that information and display it directly
>> on the website.
> That would be helpful.
> Some of the readme files are already (almost) in Markdown format.
>
>
> On Wed, Dec 18, 2019 at 5:46 PM Nathan Hartman <ha...@gmail.com> wrote:
>> On Wed, Dec 18, 2019 at 12:01 PM Gregory Nutt <sp...@gmail.com> wrote:
>>> There is this: http://www.nuttx.org/doku.php?id=wiki:getting-started
>>> where the "external tutorials" is quite extensive:
>>> http://www.nuttx.org/doku.php?id=wiki:getting-started:external-tutorials
>> That's great but I think we need our own basic Getting Started guide
>> that gets a total newbie off the ground quickly; it can, of course,
>> have an "Additional Resources" section with links to all of these
>> other resources.
>>
>> On Wed, Dec 18, 2019 at 12:31 PM Abdelatif Guettouche
>> <ab...@gmail.com> wrote:
>>> Boards readme files contain all the information needed to get started
>>> with a particular board.
>> Again, that's great, but it presumes that you have the code, know
>> about the board READMEs, know where they are...
>>
>> I'd prefer that the Getting Started guide should be reachable by one
>> click from the front page of the NuttX website (which doesn't exist
>> yet), so that a TOTAL newbie who hasn't even gotten the code yet can
>> read and get a feel for what's involved.
>>
>> Yes, much of the information is in the README file. Perhaps we can
>> modify text files like that to be in Markdown format, which unlike
>> HTML, leaves the file looking like a normal ASCII file, but allows the
>> file to be converted to other formats, including HTML, using automated
>> tools. Then we could convert that information and display it directly
>> on the website.
>>
>> Nathan



Re: Getting Started (Was Re: [DISCUSS - NuttX Workflow])

Posted by Abdelatif Guettouche <ab...@gmail.com>.
> I'd prefer that the Getting Started guide should be reachable by one
> click from the front page of the NuttX website (which doesn't exist
> yet), so that a TOTAL newbie who hasn't even gotten the code yet can
> read and get a feel for what's involved.

Agree.
I wanted to point out that much of the content needed to make such a
document is already in place.

> Yes, much of the information is in the README file. Perhaps we can
> modify text files like that to be in Markdown format, which unlike
> HTML, leaves the file looking like a normal ASCII file, but allows the
> file to be converted to other formats, including HTML, using automated
> tools. Then we could convert that information and display it directly
> on the website.

That would be helpful.
Some of the readme files are already (almost) in Markdown format.


On Wed, Dec 18, 2019 at 5:46 PM Nathan Hartman <ha...@gmail.com> wrote:
>
> On Wed, Dec 18, 2019 at 12:01 PM Gregory Nutt <sp...@gmail.com> wrote:
> > There is this: http://www.nuttx.org/doku.php?id=wiki:getting-started
> > where the "external tutorials" is quite extensive:
> > http://www.nuttx.org/doku.php?id=wiki:getting-started:external-tutorials
>
> That's great but I think we need our own basic Getting Started guide
> that gets a total newbie off the ground quickly; it can, of course,
> have an "Additional Resources" section with links to all of these
> other resources.
>
> On Wed, Dec 18, 2019 at 12:31 PM Abdelatif Guettouche
> <ab...@gmail.com> wrote:
> > Boards readme files contain all the information needed to get started
> > with a particular board.
>
> Again, that's great, but it presumes that you have the code, know
> about the board READMEs, know where they are...
>
> I'd prefer that the Getting Started guide should be reachable by one
> click from the front page of the NuttX website (which doesn't exist
> yet), so that a TOTAL newbie who hasn't even gotten the code yet can
> read and get a feel for what's involved.
>
> Yes, much of the information is in the README file. Perhaps we can
> modify text files like that to be in Markdown format, which unlike
> HTML, leaves the file looking like a normal ASCII file, but allows the
> file to be converted to other formats, including HTML, using automated
> tools. Then we could convert that information and display it directly
> on the website.
>
> Nathan

Re: Getting Started (Was Re: [DISCUSS - NuttX Workflow])

Posted by Nathan Hartman <ha...@gmail.com>.
On Wed, Dec 18, 2019 at 12:01 PM Gregory Nutt <sp...@gmail.com> wrote:
> There is this: http://www.nuttx.org/doku.php?id=wiki:getting-started
> where the "external tutorials" is quite extensive:
> http://www.nuttx.org/doku.php?id=wiki:getting-started:external-tutorials

That's great but I think we need our own basic Getting Started guide
that gets a total newbie off the ground quickly; it can, of course,
have an "Additional Resources" section with links to all of these
other resources.

On Wed, Dec 18, 2019 at 12:31 PM Abdelatif Guettouche
<ab...@gmail.com> wrote:
> Boards readme files contain all the information needed to get started
> with a particular board.

Again, that's great, but it presumes that you have the code, know
about the board READMEs, know where they are...

I'd prefer that the Getting Started guide should be reachable by one
click from the front page of the NuttX website (which doesn't exist
yet), so that a TOTAL newbie who hasn't even gotten the code yet can
read and get a feel for what's involved.

Yes, much of the information is in the README file. Perhaps we can
modify text files like that to be in Markdown format, which unlike
HTML, leaves the file looking like a normal ASCII file, but allows the
file to be converted to other formats, including HTML, using automated
tools. Then we could convert that information and display it directly
on the website.

Nathan
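
A minimal sketch of the automated conversion described above, assuming
pandoc as the conversion tool (the file names are illustrative):

# render a Markdown README as a standalone HTML page for the website
pandoc --from gfm --to html5 --standalone README.md -o readme.html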

Re: Getting Started (Was Re: [DISCUSS - NuttX Workflow])

Posted by Gregory Nutt <sp...@gmail.com>.
> Boards readme files contain all the information needed to get started
> with a particular board.

Many do.  They all should.  They vary in quality.  Some board README 
files have no useful information at all; some have old information.  
Some consist of only a few lines, some are thousands of lines.  Some 
boards don't have any README files.

The board README files are very helpful.  Even for me.  If I have not 
worked with a board for a couple of years, the README file brings me 
back up to speed very quickly.

Unfortunately the content is not consistent, controlled, or properly 
maintained.

Greg



Re: Getting Started (Was Re: [DISCUSS - NuttX Workflow])

Posted by Abdelatif Guettouche <ab...@gmail.com>.
Boards readme files contain all the information needed to get started
with a particular board.

On Wed, Dec 18, 2019 at 5:01 PM Gregory Nutt <sp...@gmail.com> wrote:
>
> On 12/18/2019 10:47 AM, Nathan Hartman wrote:
> > On Wed, Dec 18, 2019 at 11:04 AM David Sidrane <da...@apache.org> wrote:
> >> What about the people who are just learning Nuttx? Simple is relative.
> > We never had a Getting Started guide. We need one. And because it's so
> > hard for someone "in the know" not to assume knowledge, we may need
> > the help of some total n00bs to get this guide written -- to see where
> > they get stuck and waste time looking for answers, and write those
> > things in the guide. But that's a subject for another thread.
>
> There is this: http://www.nuttx.org/doku.php?id=wiki:getting-started
> where the "external tutorials" is quite extensive:
> http://www.nuttx.org/doku.php?id=wiki:getting-started:external-tutorials
>

Getting Started (Was Re: [DISCUSS - NuttX Workflow])

Posted by Gregory Nutt <sp...@gmail.com>.
On 12/18/2019 10:47 AM, Nathan Hartman wrote:
> On Wed, Dec 18, 2019 at 11:04 AM David Sidrane <da...@apache.org> wrote:
>> What about the people who are just learning Nuttx? Simple is relative.
> We never had a Getting Started guide. We need one. And because it's so
> hard for someone "in the know" not to assume knowledge, we may need
> the help of some total n00bs to get this guide written -- to see where
> they get stuck and waste time looking for answers, and write those
> things in the guide. But that's a subject for another thread.

There is this: http://www.nuttx.org/doku.php?id=wiki:getting-started  
where the "external tutorials" is quite extensive: 
http://www.nuttx.org/doku.php?id=wiki:getting-started:external-tutorials


Re: Getting Started Guide (Re: [DISCUSS - NuttX Workflow])

Posted by Gregory Nutt <sp...@gmail.com>.
We need to get this discussion onto a separate thread.  I just created a 
getting started thread, but I think we just crossed paths.

On 12/18/2019 4:00 PM, Disruptive Solutions wrote:
> Maybe someone can make a roadmap with milestones for when we can expect things to get back to “normal”. Who is doing what, and what has to be done?
>
> Maybe it's better to write a getting-started guide on how to contribute? Making patches, code conventions, etc.
>
> Ben
>
> Sent from my iPhone
>
>> On 18 Dec 2019, at 22:41, Justin Mclean <ju...@classsoftware.com> wrote:
>>
>> Hi,
>>
>>> We never had a Getting Started guide. We need one. And because it's so
>>> hard for someone "in the know" not to assume knowledge, we may need
>>> the help of some total n00bs to get this guide written -- to see where
>>> they get stuck and waste time looking for answers, and write those
>>> things in the guide. But that's a subject for another thread.
>> I can probably help there, being the total n00b that I am :-) I’ve not used NuttX but have some RTOS experience. I’m also a part-time teacher.
>>
>> Thanks,
>> Justin
>>

Re: [DISCUSS - NuttX Workflow]

Posted by Disruptive Solutions <di...@gmail.com>.
Maybe someone can make a roadmap with milestones for when we can expect things to get back to “normal”. Who is doing what, and what has to be done? 

Maybe it's better to write a getting-started guide on how to contribute? Making patches, code conventions, etc.

Ben

Sent from my iPhone

> On 18 Dec 2019, at 22:41, Justin Mclean <ju...@classsoftware.com> wrote:
> 
> Hi,
> 
>> We never had a Getting Started guide. We need one. And because it's so
>> hard for someone "in the know" not to assume knowledge, we may need
>> the help of some total n00bs to get this guide written -- to see where
>> they get stuck and waste time looking for answers, and write those
>> things in the guide. But that's a subject for another thread.
> 
> I can probably help there, being the total n00b that I am :-) I’ve not used NuttX but have some RTOS experience. I’m also a part-time teacher.
> 
> Thanks,
> Justin
> 

Re: Getting Started Guide (Re: [DISCUSS - NuttX Workflow])

Posted by Gregory Nutt <sp...@gmail.com>.
>
>> A little. :-)  I've worked on a few commercial projects in recent 
>> years and way back I did SCADA and real-time systems and have been 
>> involved as a hobbyist for 15 or so years. I’ve spoken at the Open 
>> Hardware Summit and given a number of IoT and MyNewt talks at various 
>> conferences, and run basic Arduino courses. I organise the IoT meetup 
>> here in Sydney. One of the reasons I put my hand up to mentor this 
>> project, but I’ve never used NuttX.
>
> You will find it is a little "beefier" than Mynewt or Arduino. It is a 
> full Linux-compatible RTOS (but much, much smaller than Linux).  So 
> working with NuttX is a little more like working with Linux than with 
> other really tiny RTOSs.  Somewhere in between. NuttX apps are 
> definitely like Linux apps.  That is a consequence of the portable 
> POSIX OS interface.  Most Linux code can be made to run on NuttX (but 
> often one has to bring in some less-than-standard Linux definitions 
> and deal with all of the libraries used in Linux development).
Aside from the standard POSIX/Unix interface, another big difference 
between NuttX and the very tiny RTOSs is that, like Linux/Unix, NuttX is 
very console oriented.  Certainly there are many NuttX embedded systems 
that are fielded "headless", but most include a USB, serial, or Telnet 
console and a tiny bash-like shell, at least during 
development.  Of course, the name of the shell is the NuttShell (NSH) ;-)

Re: Getting Started Guide (Re: [DISCUSS - NuttX Workflow])

Posted by Gregory Nutt <sp...@gmail.com>.
> A little. :-)  I've worked on a few commercial projects in recent years and way back I did SCADA and real-time systems and have been involved as a hobbyist for 15 or so years. I’ve spoken at the Open Hardware Summit and given a number of IoT and MyNewt talks at various conferences, and run basic Arduino courses. I organise the IoT meetup here in Sydney. One of the reasons I put my hand up to mentor this project, but I’ve never used NuttX.

You will find it is a little "beefier" than Mynewt or Arduino. It is a 
full Linux-compatible RTOS (but much, much smaller than Linux).  So 
working with NuttX is a little more like working with Linux than with 
other really tiny RTOSs.  Somewhere in between. NuttX apps are 
definitely like Linux apps.  That is a consequence of the portable POSIX 
OS interface.  Most Linux code can be made to run on NuttX (but often 
one has to bring in some less-than-standard Linux definitions and deal 
with all of the libraries used in Linux development).



Re: Getting Started Guide (Re: [DISCUSS - NuttX Workflow])

Posted by Brennan Ashton <ba...@brennanashton.com>.
On Wed, Dec 18, 2019, 5:06 PM Justin Mclean <ju...@me.com.invalid>
wrote:

> Hi,
>
> > This conversation got me thinking: would we want a repository to write a
> > community-written-and-maintained NuttX book?
> >
> > I think this would have a couple of advantages:
> >
> > (1) Keeps the "official" documentation within the umbrella of the
> project.
> >
> > (2) Provides an additional way for people to contribute to Apache NuttX.
> > (Not all contributions need to be code; not all contributors need to be
> > software people.)
>
> Excellent idea. You might want to look at what the Apache Training project
> is doing or perhaps consider (if you have not) using asciidoctor. [1]
>
> Thanks,
> Justin
>
> 1. https://asciidoctor.org


I posted this previously but it got split between the mailing lists.

This is actually an area where the Rust team has done an amazing job. There
are several books, both online and sometimes in print, that are up to date,
covering both intro and domain-specific topics.

https://doc.rust-lang.org/book/
https://doc.rust-lang.org/rust-by-example/
https://rust-cli.github.io/book/
https://rust-embedded.github.io/book/


As long as we use some form of Markdown there are lots of options: GitBook,
mdBook, Sphinx, etc...

--Brennan
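
A minimal sketch using one of the options listed above, mdBook (the book
name is an illustrative placeholder):

# scaffold, build, and preview a Markdown book with mdBook
mdbook init nuttx-book     # creates book.toml and a src/ tree
cd nuttx-book
mdbook build               # renders static HTML into book/
mdbook serve               # previews at http://localhost:3000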

Re: Getting Started Guide (Re: [DISCUSS - NuttX Workflow])

Posted by Justin Mclean <ju...@me.com.INVALID>.
Hi,

> This conversation got me thinking: would we want a repository to write a
> community-written-and-maintained NuttX book?
> 
> I think this would have a couple of advantages:
> 
> (1) Keeps the "official" documentation within the umbrella of the project.
> 
> (2) Provides an additional way for people to contribute to Apache NuttX.
> (Not all contributions need to be code; not all contributors need to be
> software people.)

Excellent idea. You might want to look at what the Apache Training project is doing or perhaps consider (if you have not) using asciidoctor. [1]

Thanks,
Justin

1. https://asciidoctor.org

Re: Getting Started Guide (Re: [DISCUSS - NuttX Workflow])

Posted by Gregory Nutt <sp...@gmail.com>.
>>> This conversation got me thinking: would we want a repository to write a
>>> community-written-and-maintained NuttX book?
>> Like https://nuttx_projects.gitlab.io/nuttx_book/
>>
> And whose project is that?
It is very old.  phreakuencies is v01d and, I think, that is the old 
handle for Matias N. (Nitzche).

Re: Getting Started Guide (Re: [DISCUSS - NuttX Workflow])

Posted by Nathan Hartman <ha...@gmail.com>.
On Wed, Dec 18, 2019 at 6:45 PM Gregory Nutt <sp...@gmail.com> wrote:

>
> >>> Ben vd Veen has started a Getting Started Book and, as I recall, even
> >> has a publisher in mind.  He posted his status in the original thread
> that I
> >> renamed.  Perhaps you could discuss ways to collaborate?
> >>
> >> I saw that. I have published a book (on Android), writing it was a large
> >> amount of work and so I might be able to help out, but my bandwidth is
> >> limited.
> > This conversation got me thinking: would we want a repository to write a
> > community-written-and-maintained NuttX book?
> Like https://nuttx_projects.gitlab.io/nuttx_book/
>
And whose project is that?

Re: Getting Started Guide (Re: [DISCUSS - NuttX Workflow])

Posted by Gregory Nutt <sp...@gmail.com>.
>>> Ben vd Veen has started a Getting Started Book and, as I recall, even
>> has a publisher in mind.  He posted his status in the original thread that I
>> renamed.  Perhaps you could discuss ways to collaborate?
>>
>> I saw that. I have published a book (on Android), writing it was a large
>> amount of work and so I might be able to help out, but my bandwidth is
>> limited.
> This conversation got me thinking: would we want a repository to write a
> community-written-and-maintained NuttX book?
Like https://nuttx_projects.gitlab.io/nuttx_book/

Re: Getting Started Guide (Re: [DISCUSS - NuttX Workflow])

Posted by Nathan Hartman <ha...@gmail.com>.
On Wed, Dec 18, 2019 at 5:20 PM Justin Mclean <ju...@classsoftware.com>
wrote:

> > Ben vd Veen has started a Getting Started Book and, as I recall, even
> has a publisher in mind.  He posted his status in the original thread that I
> renamed.  Perhaps you could discuss ways to collaborate?
>
> I saw that. I have published a book (on Android), writing it was a large
> amount of work and so I might be able to help out, but my bandwidth is
> limited.


This conversation got me thinking: would we want a repository to write a
community-written-and-maintained NuttX book?

I think this would have a couple of advantages:

(1) Keeps the "official" documentation within the umbrella of the project.

(2) Provides an additional way for people to contribute to Apache NuttX.
(Not all contributions need to be code; not all contributors need to be
software people.)

Just a thought,
Nathan

Re: Getting Started Guide (Re: [DISCUSS - NuttX Workflow])

Posted by Justin Mclean <ju...@classsoftware.com>.
Hi,

> That is great that you could commit.  I notice that you are the Chair of the Mynewt project, so you must have some familiarity.

A little. :-)  I've worked on a few commercial projects in recent years and way back I did SCADA and real-time systems and have been involved as a hobbyist for 15 or so years. I’ve spoken at the Open Hardware Summit and given a number of IoT and MyNewt talks at various conferences, and run basic Arduino courses. I organise the IoT meetup here in Sydney. One of the reasons I put my hand up to mentor this project, but I’ve never used NuttX.

> Ben vd Veen has started a Getting Started Book and, as I recall, even has a publisher in mind.  He posted his status in the original thread that I renamed.  Perhaps you could discuss ways to collaborate? 

I saw that. I have published a book (on Android), writing it was a large amount of work and so I might be able to help out, but my bandwidth is limited.

Thanks,
Justin

Getting Started Guide (Re: [DISCUSS - NuttX Workflow])

Posted by Gregory Nutt <sp...@gmail.com>.
>> We never had a Getting Started guide. We need one. And because it's so
>> hard for someone "in the know" not to assume knowledge, we may need
>> the help of some total n00bs to get this guide written -- to see where
>> they get stuck and waste time looking for answers, and write those
>> things in the guide. But that's a subject for another thread.
> I can probably help there, being the total n00b that I am :-) I’ve not used NuttX but have some RTOS experience. I’m also a part-time teacher.

That is great that you could commit.  I notice that you are the Chair of 
the Mynewt project, so you must have some familiarity. Ben vd Veen has 
started a Getting Started Book and, as I recall, even has a publisher in 
mind.  He posted his status in the original thread that I renamed.  
Perhaps you could discuss ways to collaborate?

OT:  I now recall the name of the person who discussed incorporating Mynewt 
IoT components into NuttX.  It was James Pace (and another person).  But 
I do not see him on the roster.  I thought he was involved in the 
project, but now I think he just wanted to incorporate the components 
for some other reason.


Re: [DISCUSS - NuttX Workflow]

Posted by Justin Mclean <ju...@classsoftware.com>.
Hi,

> We never had a Getting Started guide. We need one. And because it's so
> hard for someone "in the know" not to assume knowledge, we may need
> the help of some total n00bs to get this guide written -- to see where
> they get stuck and waste time looking for answers, and write those
> things in the guide. But that's a subject for another thread.

I can probably help there, being the total n00b that I am :-) I’ve not used NuttX but have some RTOS experience. I’m also a part-time teacher.

Thanks,
Justin


Re: [DISCUSS - NuttX Workflow]

Posted by Gregory Nutt <sp...@gmail.com>.
>> What about the people who are just learning Nuttx? Simple is relative.
> We never had a Getting Started guide. We need one.
The top-level README.txt file has been the only authoritative, supported 
getting started guide.  It has the most complete discussion without 
focusing on any particular architecture.  Any prettier getting started 
guide should start with the README.txt file.

Re: [DISCUSS - NuttX Workflow]

Posted by Nathan Hartman <ha...@gmail.com>.
On Wed, Dec 18, 2019 at 11:04 AM David Sidrane <da...@apache.org> wrote:
> What about the people who are just learning Nuttx? Simple is relative.

We never had a Getting Started guide. We need one. And because it's so
hard for someone "in the know" not to assume knowledge, we may need
the help of some total n00bs to get this guide written -- to see where
they get stuck and waste time looking for answers, and write those
things in the guide. But that's a subject for another thread.

Re: [DISCUSS - NuttX Workflow]

Posted by David Sidrane <da...@apache.org>.
What about the people who are just learning NuttX? Simple is relative. I can see how a checkout of one folder would make things hard in your setup and simple for the new folks; isn't that why we are here, to grow the project?

BTW: your argument is solved by submodules. You would just check out from the nuttx repo.
It is also very helpful to have multiple remotes:

nuttx repo:
  nuttx - the ASF nuttx repo
apps repo:
  nat   - Nathan's apps repo
  nuttx - the ASF apps repo

git fetch nuttx              # fetch the ASF remote
git log nuttx/apps           # hmm, that changed in make in afd890
git reset --hard nat/apps    # reset to Nathan's apps branch
git cherry-pick afd890       # pick up just that one change

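A minimal sketch of setting up those remotes from inside a local apps
clone (the URLs and fork owner are illustrative placeholders):

git remote add nuttx https://github.com/apache/incubator-nuttx-apps.git   # ASF apps repo (placeholder URL)
git remote add nat   https://github.com/<nathan>/nuttx-apps.git           # Nathan's fork (placeholder)
git fetch --all                                                           # fetch branches from every remote
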

On 2019/12/18 12:50:17, Nathan Hartman <ha...@gmail.com> wrote: 
> On Wed, Dec 18, 2019 at 5:46 AM David Sidrane <da...@apache.org> wrote:
> 
> > > 5 issue one pull request from your fork nuttx/apps to apache nuttx/apps
> > master branch
> >
> > Are you suggesting we have one repo NuttX with 2 folders apps and nuttx?
> >
> > That will simplify everything! - but I suspect we will receive STRONG
> > arguments against it.
> 
> 
> Yes, such as what will those of us do who have our own custom apps? Not
> everyone uses e.g. NSH. Some products are more deeply embedded than others.
> 
> I would oppose combining those two repos into one because I agree with the
> concept that we should not make the user's life harder for our convenience.
> 
> Most changes only affect only one repository or the other. For the much
> smaller number of changes that affect both, we should have some special
> handling.
> 
> Nathan
> 

Re: [DISCUSS - NuttX Workflow]

Posted by Alan Carvalho de Assis <ac...@gmail.com>.
On 12/18/19, Nathan Hartman <ha...@gmail.com> wrote:
> On Wed, Dec 18, 2019 at 5:46 AM David Sidrane <da...@apache.org> wrote:
>
>> > 5 issue one pull request from your fork nuttx/apps to apache nuttx/apps
>> master branch
>>
>> Are you suggesting we have one repo NuttX with 2 folders apps and nuttx?
>>
>> That will simplify everything! - but I suspect we will receive STRONG
>> arguments against it.
>
>
> Yes, such as what will those of us do who have our own custom apps? Not
> everyone uses e.g. NSH. Some products are more deeply embedded than others.
>
> I would oppose combining those two repos into one because I agree with the
> concept that we should not make the user's life harder for our convenience.
>
> Most changes only affect only one repository or the other. For the much
> smaller number of changes that affect both, we should have some special
> handling.
>

Good point Nathan!

So even if using a single repository with submodules makes our life easier,
it brings other issues.

I also vote to keep the original configuration: apps and nuttx
repositories separated.

BR,

Alan

Re: [DISCUSS - NuttX Workflow]

Posted by Gregory Nutt <sp...@gmail.com>.
> I would oppose combining those two repos into one because I agree with the
> concept that we should not make the user's life harder for our convenience.
I am also (very) opposed to combining repositories.  Smearing 
functionality is just bad system architecture.  Separateness and 
modularity are always the best way to go.
> Most changes only affect only one repository or the other. For the much
> smaller number of changes that affect both, we should have some special
> handling.

That is not always the case.  Sometimes a new driver is accompanied by a 
new test application.  But, technically, those are still separate changes.



Re: [DISCUSS - NuttX Workflow]

Posted by Gregory Nutt <sp...@gmail.com>.
>>> 5 issue one pull request from your fork nuttx/apps to apache nuttx/apps
>> master branch
>>
>> Are you suggesting we have one repo NuttX with 2 folders apps and nuttx?
>>
>> That will simplify everything! - but I suspect we will receive STRONG
>> arguments against it.

Isn't this just another case where taking shortcuts in tool design is 
undermining good architecture decisions?  Tools should support the 
architecture, not trash it.  The tools should be driven by the 
architecture and not vice versa.

This is the primary enemy of NuttX and this is the kind of thinking that 
destroys its clean modular design.

Let's stop thinking about how to make things easy and talk instead about 
how to do it right... regardless of whether it is easy or difficult.  
Let's not be afraid of difficult.  Nothing of value comes easily.



Re: [DISCUSS - NuttX Workflow]

Posted by Nathan Hartman <ha...@gmail.com>.
On Wed, Dec 18, 2019 at 5:46 AM David Sidrane <da...@apache.org> wrote:

> > 5 issue one pull request from your fork nuttx/apps to apache nuttx/apps
> master branch
>
> Are you suggesting we have one repo NuttX with 2 folders apps and nuttx?
>
> That will simplify everything! - but I suspect we will receive STRONG
> arguments against it.


Yes, such as what will those of us do who have our own custom apps? Not
everyone uses e.g. NSH. Some products are more deeply embedded than others.

I would oppose combining those two repos into one because I agree with the
concept that we should not make the user's life harder for our convenience.

Most changes only affect only one repository or the other. For the much
smaller number of changes that affect both, we should have some special
handling.

Nathan

Re: [DISCUSS - NuttX Workflow]

Posted by David Sidrane <da...@apache.org>.
> 5 issue one pull request from your fork nuttx/apps to apache nuttx/apps master branch

Are you suggesting we have one repo NuttX with 2 folders apps and nuttx?

That will simplify everything! - but I suspect we will receive STRONG arguments against it.
 
So you say "one pull request".

Where? You have 2 repos. PRs are against a single repo.

This is what the Knot does. It is the "where".

On 2019/12/18 10:09:26, Alan Carvalho de Assis <ac...@gmail.com> wrote: 
> Hi Liu,
> 
> On Wednesday, December 18, 2019, Haitao Liu <li...@gmail.com> wrote:
> > How about we just keep two separate git repositories (apps and nuttx
> > projects) instead
> > of adding a parent knot repo with apps and nuttx as submodules?
> > As to Jenkins CI, I haven’t found a proper GitHub plugin to get PRs from
> > multiple repos (especially PR dependencies in apps & nuttx) in one Jenkins
> > job.  Before that, I wonder whether we could keep it simple and
> > direct: create
> > one Jenkins job for apps and another Jenkins job for nuttx to process PR
> > triggers accordingly.  Just make sure the Jenkins pipeline or build script
> > syncs both apps and nuttx repos, then picks the apps or nuttx PR to do a
> > full build.
> >
> > Since the nuttx and apps projects keep the same as before, developers adapt to
> > the GitHub workflow as usual:
> > 1 fork the official apache nuttx & apps projects in github
> > 2 git clone your fork projects locally
> > 3 edit locally and then git commit to local branch
> > 4 git push to your github fork nuttx/apps branch
> > 5 issue one pull request from your fork nuttx/apps to apache nuttx/apps
> > master branch
> > 6 jenkins CI auto-trigger: style check, build or test, if failed, go to
> > step 3, continue 3 ~ 7
> > 7 PMC start to review PR, review ok, merge to master; or review failed, go
> > to step 3, continue 3~7
> >
> > Detailed info about GitHub workflow:
> >
> > https://help.github.com/en/github/collaborating-with-issues-and-pull-requests
> >
> 
> I agree! Using two repositories is better than creating submodules.
> 
> We just need to guarantee that users will clone both repositories. The build
> system can do it when the user tries to build without ../apps.
> 
> BR,
> 
> Alan
> 

Re: [DISCUSS - NuttX Workflow]

Posted by Alan Carvalho de Assis <ac...@gmail.com>.
Hi Liu,

On Wednesday, December 18, 2019, Haitao Liu <li...@gmail.com> wrote:
> How about we just keep two separate git repositories (apps and nuttx
> projects) instead
> of adding a parent knot repo with apps and nuttx as submodules?
> As to Jenkins CI, I haven’t found a proper GitHub plugin to get PRs from
> multiple repos (especially PR dependencies in apps & nuttx) in one Jenkins
> job.  Before that, I wonder whether we could keep it simple and
> direct: create
> one Jenkins job for apps and another Jenkins job for nuttx to process PR
> triggers accordingly.  Just make sure the Jenkins pipeline or build script
> syncs both apps and nuttx repos, then picks the apps or nuttx PR to do a
> full build.
>
> Since the nuttx and apps projects keep the same as before, developers adapt to
> the GitHub workflow as usual:
> 1 fork the official apache nuttx & apps projects in github
> 2 git clone your fork projects locally
> 3 edit locally and then git commit to local branch
> 4 git push to your github fork nuttx/apps branch
> 5 issue one pull request from your fork nuttx/apps to apache nuttx/apps
> master branch
> 6 jenkins CI auto-trigger: style check, build or test, if failed, go to
> step 3, continue 3 ~ 7
> 7 PMC start to review PR, review ok, merge to master; or review failed, go
> to step 3, continue 3~7
>
> Detailed info about GitHub workflow:
>
> https://help.github.com/en/github/collaborating-with-issues-and-pull-requests
>

I agree! Using two repositories is better than creating submodules.

We just need to guarantee that users will clone both repositories. The build
system can do it when the user tries to build without ../apps.

BR,

Alan

Re: [DISCUSS - NuttX Workflow]

Posted by David Sidrane <da...@apache.org>.
> I haven’t found a proper GitHub plugin to get PRs from multiple repos (especially PR dependencies 

1) How would you create a way to do this? 

How about we add a file to the repo with the two SHA1s in it and hand edit it before every push?

["NuttX/nuttx"]
	path = NuttX/nuttx
	url = https://github.com/NuttX/NuttX.git
	SHA1 = 2757647897a6f1c3451180b4c242aec25185523e
["NuttX/apps"]
	path = NuttX/apps
	url = https://github.com/NuttX/apps.git
	SHA1 = 01818b505f898f33176bf90f9563e84942ea56cf

2) Why would this exist if git supports it already?

This is what submodules are:

https://github.com/PX4/Firmware/blob/master/.gitmodules
..
[submodule "platforms/nuttx/NuttX/nuttx"]
	path = platforms/nuttx/NuttX/nuttx
	url = https://github.com/PX4/NuttX.git
	branch = px4_firmware_nuttx-8.2
[submodule "platforms/nuttx/NuttX/apps"]
	path = platforms/nuttx/NuttX/apps
	url = https://github.com/PX4/NuttX-apps.git
	branch = px4_firmware_nuttx-8.2

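For reference, a minimal sketch of how such pinned submodule SHAs are
consumed and updated with standard git commands (the superproject URL and
the new SHA are placeholders):

# clone a superproject and check out the exact nuttx/apps commits it pins
git clone --recurse-submodules https://github.com/<org>/<superproject>.git

# or, in an existing clone, fetch and check out the pinned submodule commits
git submodule update --init --recursive

# move a pin: check out a new commit inside the submodule, then record it
cd platforms/nuttx/NuttX/nuttx
git checkout <new-sha>
cd ../../../..
git add platforms/nuttx/NuttX/nuttx
git commit -m "Update nuttx submodule pin"
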

On 2019/12/18 09:51:45, Haitao Liu <li...@gmail.com> wrote: 
> How about we just keep two separate git repositories (apps and nuttx
> projects) instead
> of adding a parent knot repo with apps and nuttx as submodules?
> As to Jenkins CI, I haven’t found a proper GitHub plugin to get PRs from
> multiple repos (especially PR dependencies in apps & nuttx) in one Jenkins
> job.  Before that, I wonder whether we could keep it simple and
> direct: create
> one Jenkins job for apps and another Jenkins job for nuttx to process PR
> triggers accordingly.  Just make sure the Jenkins pipeline or build script
> syncs both apps and nuttx repos, then picks the apps or nuttx PR to do a
> full build.
> 
> Since the nuttx and apps projects keep the same as before, developers adapt to
> the GitHub workflow as usual:
> 1 fork the official apache nuttx & apps projects in github
> 2 git clone your fork projects locally
> 3 edit locally and then git commit to local branch
> 4 git push to your github fork nuttx/apps branch
> 5 issue one pull request from your fork nuttx/apps to apache nuttx/apps
> master branch
> 6 jenkins CI auto-trigger: style check, build or test, if failed, go to
> step 3, continue 3 ~ 7
> 7 PMC start to review PR, review ok, merge to master; or review failed, go
> to step 3, continue 3~7
> 
> Detailed info about GitHub workflow:
> https://help.github.com/en/github/collaborating-with-issues-and-pull-requests
> 
> <da...@gmail.com> wrote on Tue, Dec 17, 2019 at 5:36 PM:
> 
> >  [DISCUSS - NuttX Workflow]
> >
> > I am creating this thread to discuss what we as a community would like to
> > have as NuttX Workflow. I have also created [REQUIREMENTS- NuttX Workflow]
> > I am asking us to not add discussion to [REQUIREMENTS- NuttX Workflow].
> > Please do that here.
> >
> > As this discussion evolves we shall create requirements and add them
> > to the [REQUIREMENTS-
> > NuttX Workflow] thread.
> >
> > Please use [DISCUSS - NuttX Workflow] to propose and discuss the ideas
> > and experiences
> > you have to offer.
> >
> > Be detailed; give examples, list pros and cons, why you like it and why you
> > don't.
> >
> > Then after the requirements are gathered in one place and discussed here
> > we can then vote on them.
> >
> > Thank you.
> >
> > David
> >
> 

Re: [DISCUSS - NuttX Workflow]

Posted by Haitao Liu <li...@gmail.com>.
How about we just keep two separate git repositories (apps and nuttx
projects) instead of adding a parent knot repo with apps and nuttx as
submodules?
As to Jenkins CI, I haven’t found a proper GitHub plugin to get PRs from
multiple repos (especially PR dependencies in apps & nuttx) in one Jenkins
job.  Before that, I wonder whether we could keep it simple and
direct: create one Jenkins job for apps and another Jenkins job for nuttx
to process PR triggers accordingly.  Just make sure the Jenkins pipeline
or build script syncs both apps and nuttx repos, then picks the apps or
nuttx PR to do a full build.

Since the nuttx and apps projects keep the same as before, developers adapt
to the GitHub workflow as usual (steps 2-5 are sketched below):
1 fork the official Apache nuttx & apps projects on GitHub
2 git clone your forked projects locally
3 edit locally and then git commit to a local branch
4 git push to your GitHub fork's nuttx/apps branch
5 issue one pull request from your fork's nuttx/apps to the Apache nuttx/apps
master branch
6 Jenkins CI auto-triggers: style check, build or test; if it fails, go to
step 3, continue 3~7
7 PMC starts to review the PR; if the review is OK, merge to master; if it
fails, go to step 3, continue 3~7
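
A minimal sketch of steps 2-5 in plain git commands (the fork URL and
branch name are illustrative placeholders; step 1, forking, is done in
the GitHub web UI):

# 2: clone your fork locally (placeholder URL)
git clone https://github.com/<you>/nuttx.git
cd nuttx

# 3: edit locally, then commit to a local branch
git checkout -b my-fix
git add path/to/changed_file.c
git commit -m "Describe the change"

# 4: push the branch to your fork on GitHub
git push origin my-fix

# 5: open a pull request from <you>:my-fix to the Apache master branch
#    (done in the GitHub web UI)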

Detailed info about GitHub workflow:
https://help.github.com/en/github/collaborating-with-issues-and-pull-requests

<da...@gmail.com> wrote on Tue, Dec 17, 2019 at 5:36 PM:

>  [DISCUSS - NuttX Workflow]
>
> I am creating this thread to discuss what we as a community would like to
> have as NuttX Workflow. I have also created [REQUIREMENTS- NuttX Workflow]
> I am asking us to not add discussion to [REQUIREMENTS- NuttX Workflow].
> Please do that here.
>
> As this discussion evolves we shall create requirements and add them
> to the [REQUIREMENTS-
> NuttX Workflow] thread.
>
> Please use [DISCUSS - NuttX Workflow] to propose and discuss the ideas
> and experiences
> you have to offer.
>
> Be detailed; give examples, list pros and cons, why you like it and why you
> don't.
>
> Then after the requirements are gathered in one place and discussed here
> we can then vote on them.
>
> Thank you.
>
> David
>

Re: [DISCUSS - NuttX Workflow]

Posted by Gregory Nutt <sp...@gmail.com>.
> Option d)  Make minimal coding standard changes that can be 100% supported by option a.*
>
> *) Greg suggested this in the bar at NuttX2019 - caveat it was in the BAR!

No one should be held accountable for what they say in a bar 8-)

A lot depends on the nature of the coding standard change.  If you make 
small coding standard changes, the existing indent.sh is nearly perfect 
(nearly).  As a general principle, I think that the coding standard 
should not change to match a tool.  Changing code to match a tool is 
a little bothersome as a concept because it is the "tail wagging the dog."

The Inviolables.txt addresses this:

    o Strict conformance to the NuttX coding style.  No "revolutionary" 
    changes to the coding standard (but perhaps some "evolutionary" 
    changes).

That is open to some interpretation.  I'm not sure if fixing the 
behavior of a tool by changing the coding standard is correctly 
"evolutionary" or not.  It is more of a kludge.  The Inviolables also say:

    o Expediency is not a justification for violating the coding standard.

Together, I would take that to mean that we should consider changing the 
coding standard only if we have exhausted all other possibilities.  If 
it is difficult to make a pretty printer behave properly, then that is 
not enough.  It must be impossible.  "Short cuts" are the enemy in the 
Inviolables.txt.

I would add that one person cannot change the Inviolables or any NuttX 
standards.  That really must be a vote of the PPMC.  And, I think like a 
constitutional change or an impeachment, it probably should require more 
than a simple majority to change any standard.  If a super-majority is 
in favor of any change, then the others just need to accept it.

Greg





Re: [DISCUSS - NuttX Workflow]

Posted by David Sidrane <da...@apache.org>.
Hi, 

Sharing my thoughts here for discussion.

=== Source code checking ===

Prior to submission, the submission shall be checked by a source code beautifier. 

REQ1: The submission shall not be possible without a local check passing.
REQ2: A tool shall be used to check the NuttX coding standard.
REQ3: A tool shall be used to check for ASF license compliance.
REQ4: A tool shall be used to check for blank lines at the end of files.

DREQ1) A gold-standard source code file needs to be created to validate the tool.

Option a) Enhance nxstyle to:
    i. be complete
   ii. support classes of errors: errors, warnings, info
  iii. support format options that fix the files
   iv. at a minimum, give compiler-style error messages that allow rapid fixing of the source in a compiler-output-aware editor: vi, UE, VC, Eclipse.....

Option b) Use a mature tool such as Astyle, Uncrustify, or clang-format; train it with https://github.com/mikr/whatstyle.

Option c) Cascade a combination of a & b to get the last 2% that option a cannot.

Option d)  Make minimal coding standard changes that can be 100% supported by option a.*

*) Greg suggested this in the bar at NuttX2019 - caveat it was in the BAR!

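A minimal sketch of the local check behind REQ1, assuming nxstyle's
conventional build-and-run usage from the nuttx tools/ directory (the
checked file path is a placeholder):

# build the checker once (nxstyle ships as a single C file in tools/)
gcc -o tools/nxstyle tools/nxstyle.c

# run it over a changed file; a non-zero exit status would block submission
tools/nxstyle drivers/sensors/example_driver.c || exit 1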

On 2019/12/17 09:36:28, david.sidrane@gmail.com wrote: 
>  [DISCUSS - NuttX Workflow]
> 
> I am creating this thread to discuss what we as a community would like to
> have as NuttX Workflow. I have also created [REQUIREMENTS- NuttX Workflow]
> I am asking us to not add discussion to [REQUIREMENTS- NuttX Workflow].
> Please do that here.
> 
> As this discussion evolves we shall create requirements and add them
> to the [REQUIREMENTS-
> NuttX Workflow] thread.
> 
> Please use [DISCUSS - NuttX Workflow] to propose and discuss the ideas
> and experiences
> you have to offer.
> 
> Be detailed; give examples, list pros and cons, why you like it and why you
> don't.
> 
> Then after the requirements are gathered in one place and discussed here
> we can then vote on them.
> 
> Thank you.
> 
> David
>