Posted to commits@mynewt.apache.org by GitBox <gi...@apache.org> on 2020/12/10 02:22:28 UTC

[GitHub] [mynewt-core] wes3 commented on pull request #2430: standardize OS_TICKS_PER_SEC definition

wes3 commented on pull request #2430:
URL: https://github.com/apache/mynewt-core/pull/2430#issuecomment-742192110


   @caspermeijn I can answer some of the questions you posed.
   
   1) Why would someone want to change this value?
   Someone may want to change this value so that the time resolution of the OS tick is finer or coarser than the default. Some folks may want a 1 msec OS tick resolution whereas others may want something longer. If you use 128 ticks per second, the period is 7.8125 msecs per tick (1000 / 128).
   2) Why do all MCUs define their own value?
   The main reason for this, I believe, is the default timer used for generating the OS tick. We wanted to be able to generate the tick period exactly. Consider a 1 MHz timer and a 32768 Hz timer. For the 32768 Hz timer, a 10 msec OS tick cannot be generated from an integer number of timer counts. That is why 128 was chosen for those MCUs: the 7.8125 msec period is an integer number of timer counts (256) for a 32768 Hz timer (see the sketch after this list). For the 1 MHz timer, a 10 msec resolution was chosen since that is an integer number of timer counts for that timer.
   3) Would it make sense to standardize the actual value?
   Given the reason all MCUs define their own values, I think not. There are really only two values in use that could be considered "standard": 128 and 100. So this seems ok to me.
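   
   As a rough illustration of point 2, here is a minimal standalone sketch (not Mynewt code; the timer frequency and tick macros below are stand-ins for illustration only) showing why 128 ticks per second divides a 32768 Hz timer evenly while 100 does not:
   
       #include <stdint.h>
       #include <stdio.h>
   
       /* Hypothetical stand-ins for illustration only; the real values come
        * from the MCU's tick timer and its OS_TICKS_PER_SEC definition. */
       #define TIMER_FREQ_HZ     32768u  /* e.g. a 32.768 kHz low-power timer */
       #define OS_TICKS_PER_SEC  128u    /* 1000 / 128 = 7.8125 msec per tick */
   
       int main(void)
       {
           /* Timer counts per OS tick; the remainder must be zero for the
            * tick period to be generated exactly. */
           uint32_t counts = TIMER_FREQ_HZ / OS_TICKS_PER_SEC;
           uint32_t rem = TIMER_FREQ_HZ % OS_TICKS_PER_SEC;
   
           printf("%u counts per tick, remainder %u\n", counts, rem);
           /* 32768 / 128 = 256 exactly. With 100 ticks/sec the remainder
            * would be 68 counts, so a 10 msec tick cannot be generated
            * exactly from a 32768 Hz timer. */
           return 0;
       }
   
   Running the same check with a 1 MHz timer and 100 ticks per second gives exactly 10000 counts per tick, which is why 100 (10 msec) is the natural choice for MCUs driven by that kind of timer.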

