Posted to commits@mynewt.apache.org by GitBox <gi...@apache.org> on 2020/05/18 16:13:58 UTC

[GitHub] [mynewt-nimble] haukepetersen opened a new issue #818: L2CAP COC: link throughput inconsistent and fluctuating

haukepetersen opened a new issue #818:
URL: https://github.com/apache/mynewt-nimble/issues/818


   For my research I have lately been conducting experiments that measure the raw data throughput of BLE links between selected nodes. While doing so, I noticed that the throughput between two nodes is quite unreliable and shows some odd patterns. To investigate, I simplified my setup to the following:
   - open a BLE connection between two nodes
   - open a connection oriented channel using NimBLE's L2CAP API
   - let the L2CAP client (`source` node) send a defined number of chunks of a defined size to the L2CAP server (`sink` node)
   
   Below are some exemplary numbers when sending 25000 chunks of 100-byte payload each from the `source` to the `sink`. Each number denotes the count of chunks received at the `sink` per 250 ms, counting every `BLE_L2CAP_EVENT_COC_DATA_RECEIVED` event. One can clearly see that, for some reason, the link is throttled to ~8 kB/s (80 chunks per second) for a random amount of time. But then something happens and the remainder of the chunks are sent at a much higher speed of ~65 kB/s (~650 chunks per second), which is actually close to what I would expect as a sensible throughput on nRF52-based platforms...
   
   ```
   0 0 0 0 0 0 0 0 36 20 20 20 20 20 20 20 20 23 20 22 22 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 21 21
   22 20 20 20 20 20 20 20 20 20 20 23 20 20 20 20 18 20 19 136 165 125 137 165 165 162 165 166 165 162 165 165 154 165 165 165 166 165 112 165
   109 152 165 165 166 165 165 165 165 162 165 165 161 166 166 165 165 165 165 128 165 165 165 162 165 165 163 148 149 166 165 165 165 165 152 164 163 166 159 165
   165 133 165 162 162 165 165 141 156 162 165 166 165 148 149 140 160 165 162 151 146 165 164 163 162 164 166 165 162 147 166 149 148 165 135 165 165 165 158 165
   158 165 165 165 165 152 164 163 162 165 165 166 142 158 165 165 165 163 75 149 162 165 166 165 132 136 165 165 165 165 162 165 162 166 165 165 165 165 162 165
   126 161 165 165 129 149 147 165 150 156 11 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
   ```
   Next run (no node was restarted etc.; I simply triggered a shell command to start spamming the link again):
   ```
   0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 21 19 20 20 18 20
   20 20 20 20 25 22 20 20 20 20 20 20 23 20 20 20 20 20 20 20 20 20 20 20 20 22 21 21 20 21 20 20 20 20 20 20 20 20 23 20
   20 20 20 20 20 21 21 22 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 21 25 21 20 20 20 20 20 20 20 20 20 20 20
   20 20 20 20 20 20 20 22 21 21 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 21 21 22 20 20 20 20 20 20 20 20 20 20
   20 20 20 20 20 20 20 21 21 22 20 20 26 20 20 20 20 20 20 20 20 20 20 20 20 20 20 22 22 20 20 20 20 20 20 20 20 20 20 20
   20 20 20 20 20 20 20 20 22 21 76 163 162 155 165 165 133 151 162 141 151 165 139 167 165 165 165 165 165 165 165 166 132 155 165 165 165 165 137 166
   164 133 166 165 132 165 165 164 132 165 140 165 165 165 165 161 164 160 166 165 165 165 165 165 166 149 147 163 165 165 165 160 162 146 165 165 165 165 165 165
   165 154 147 165 165 165 165 146 162 165 109 165 165 133 136 143 147 166 165 143 165 162 165 150 166 162 165 152 152 165 165 165 165 143 148 165 165 165 162 166
   159 163 164 159 165 165 164 165 165 165 92 165 155 165 165 165 147 165 165 162 164 166 165 162 165 71 0 0 0 0 0 0 0 0 0 0 0 0 0 0
   ```
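   For reference, the per-250 ms counts above translate into raw throughput as follows. This is a quick standalone sketch (not part of the test firmware); the 100-byte chunk size is taken from the test setup described above:
   ```c
   #include <stdio.h>

   /* Convert a per-250 ms chunk count into bytes per second,
    * using the 100-byte chunk size from the test above. */
   static unsigned bytes_per_sec(unsigned chunks_per_250ms)
   {
       /* 4 intervals of 250 ms per second, 100 bytes per chunk */
       return chunks_per_250ms * 4U * 100U;
   }

   int main(void)
   {
       printf("slow mode: %u B/s\n", bytes_per_sec(20));  /* 8000 B/s, ~8 kB/s */
       printf("fast mode: %u B/s\n", bytes_per_sec(165)); /* 66000 B/s, ~66 kB/s */
       return 0;
   }
   ```
   So the counts of ~20 and ~165 chunks per interval correspond directly to the ~8 kB/s and ~65 kB/s figures mentioned above.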
   The time it takes for the link to pick up speed seems to be sporadic; sometimes it happens quite quickly, sometimes it takes quite a bit longer (1st run ~15 s, 2nd run ~55 s).
   
   I can't really explain why the throughput behaves so erratically; I would expect a constant rate, as the link is not used for anything else. Also, there are no other modules running on the nodes that could steal CPU time or hardware peripherals from NimBLE, so there should not be any interference from the software side...
   
   One guess is that this may have something to do with how the controller schedules its timeslots? Any pointers or ideas on how to debug this further are highly welcome!
   
   ### Setup
   
   Some more info on my platform:
   - as always using NimBLE (host + controller) on RIOT :-)
   - measured between two `nrf52dk` boards situated on my desk, not much radio traffic besides my own
   - using the following config overrides:
   ```
   MYNEWT_VAL_BLE_L2CAP_COC_MAX_NUM=1
   MYNEWT_VAL_BLE_L2CAP_COC_MPS=200
   MYNEWT_VAL_BLE_MAX_CONNECTIONS=1
   MYNEWT_VAL_MSYS_1_BLOCK_COUNT=50
   MYNEWT_VAL_MSYS_1_BLOCK_SIZE=298
   MYNEWT_VAL_BLE_LL_CFG_FEAT_DATA_LEN_EXT=1
   MYNEWT_VAL_BLE_LL_MAX_PKT_SIZE=251
   ```
   - the send function looks like this:
   ```c
   /* called from the shell command */
   static void _do_run(size_t csize, unsigned cnum)
   {
   ...
       for (unsigned i = 0; i < cnum; i++) {
           _send(PKT_HDR_DATA, (uint8_t)i, csize);
       }
   ...
   }
   
   static void _send(uint8_t type, uint8_t seq, size_t len)
   {
   ...
       do {
           res = ble_l2cap_send(_coc, txd);
           if (res == BLE_HS_EBUSY) {
               thread_flags_wait_all(FLAG_TX_UNSTALLED);
           }
       } while (res == BLE_HS_EBUSY);
   ...
   }
   
   static int _on_l2cap_evt(struct ble_l2cap_event *event, void *arg)
   {
       (void)arg;

       switch (event->type) {
   ...
           case BLE_L2CAP_EVENT_COC_TX_UNSTALLED:
               thread_flags_set(_main, FLAG_TX_UNSTALLED);
               break;
   ...
       }
       return 0;
   }
   ```


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [mynewt-nimble] haukepetersen commented on issue #818: L2CAP COC: link throughput inconsistent and fluctuating

haukepetersen commented on issue #818:
URL: https://github.com/apache/mynewt-nimble/issues/818#issuecomment-630288259


   Forgot to mention: this might be related to #373, but I have not been able to establish the exact connection (yet).





[GitHub] [mynewt-nimble] haukepetersen commented on issue #818: L2CAP COC: link throughput inconsistent and fluctuating

haukepetersen commented on issue #818:
URL: https://github.com/apache/mynewt-nimble/issues/818#issuecomment-638873496


   Update: I still cannot specifically explain the throughput fluctuation. However, I was able to find a fix that yields a stable throughput. By default, the `MYNEWT_VAL_ACL_BUF_COUNT` variable for the RIOT port was set to `4`. Simply setting it to `5` (or any larger value) leads to a stable throughput of around ~62 kB/s.
   
   The issue above seems to be closely related to the value of `MYNEWT_VAL_ACL_BUF_COUNT`. If this value is too small, the throughput decreases significantly:
   - `MYNEWT_VAL_ACL_BUF_COUNT` := 1 --> ~2 kB/s
   - `MYNEWT_VAL_ACL_BUF_COUNT` := 2 --> ~4 kB/s
   - `MYNEWT_VAL_ACL_BUF_COUNT` := 3 --> ~6 kB/s
   - `MYNEWT_VAL_ACL_BUF_COUNT` := 4 --> ~8 kB/s **BUT with temporary peaks to ~65 kB/s**
   - `MYNEWT_VAL_ACL_BUF_COUNT` >= 5 --> ~65 kB/s
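   One way to read the linear scaling above: if (hypothetically) the host hands at most `ACL_BUF_COUNT` chunks to the controller per connection event and then stalls until the buffers are returned, the slow-mode throughput grows linearly with the buffer count. The 50 ms connection interval in the sketch below is purely an assumption (the issue does not state the actual interval); it is chosen because it reproduces the measured steps:
   ```c
   #include <stdio.h>

   /* Hypothetical slow-mode model: the host queues at most `acl_bufs`
    * chunks per connection event, then stalls until the controller
    * returns the buffers at the next event. */
   static double slow_mode_bps(unsigned acl_bufs, unsigned chunk_size,
                               double conn_itvl_s)
   {
       return (double)(acl_bufs * chunk_size) / conn_itvl_s;
   }

   int main(void)
   {
       /* With an assumed 50 ms connection interval and 100-byte chunks,
        * this reproduces the ~2/4/6/8 kB/s steps measured above. */
       for (unsigned bufs = 1; bufs <= 4; bufs++) {
           printf("%u ACL buffers -> %.0f B/s\n", bufs,
                  slow_mode_bps(bufs, 100, 0.050));
       }
       return 0;
   }
   ```
   This is only a plausibility check, not an explanation of why the link sporadically switches into the fast mode with 4 buffers.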
   
   So the question remains: why does the throughput for `MYNEWT_VAL_ACL_BUF_COUNT := 4` fluctuate between a slow and a fast mode?

