Posted to issues@trafficserver.apache.org by GitBox <gi...@apache.org> on 2021/02/26 01:59:02 UTC

[GitHub] [trafficserver] bryancall commented on issue #7546: LogObject::_checkout_write using a lot of CPU on many core servers

bryancall commented on issue #7546:
URL: https://github.com/apache/trafficserver/issues/7546#issuecomment-786356094


   Here is output from the package built with the GCC compiler - the profile isn't as bad as with AOCC, but LogObject::_checkout_write is still by far the most expensive operation:
   
   ```
   **http2load**
   finished in 32.31s, 92838.15 req/s, 1.44GB/s
   requests: 3000000 total, 3000000 started, 3000000 done, 3000000 succeeded, 0 failed, 0 errored, 0 timeout
   status codes: 3000000 2xx, 0 3xx, 0 4xx, 0 5xx
   traffic: 46.49GB (49917000000) total, 583.65MB (612000000) headers (space savings 0.00%), 45.78GB (49152000000) data
                        min         max         mean         sd        +/- sd
   time for request:      178us    864.37ms      1.88ms      6.56ms    95.16%
   time for connect:      179us     60.27ms     30.83ms     24.62ms    32.00%
   time to 1st byte:     7.67ms    252.08ms     93.93ms     50.69ms    68.00%
   req/s           :     469.10      651.57      573.43       54.94    64.50%
   
   **dstat**
   You did not select any stats, using -cdngy by default.
    ----total-cpu-usage---- -dsk/total- ---net/lo-- -net/total- ---paging-- ---system--
   usr sys idl wai hiq siq| read  writ| recv  send: recv  send|  in   out | int   csw
     1   0  99   0   0   0|2531B  210k|   0     0 :   0     0 |   0     0 |  28k   84k
    56   6  38   0   0   1|   0   726k|1590M 1590M:  11M  903k|   0     0 |1097k  307k
    56   7  36   0   0   1|  43k   24M|1657M 1657M:  11M  779k|   0     0 | 960k  303k
    41   6  53   0   0   1|  56k 1315k|1548M 1548M:  11M 1048k|   0     0 | 767k  270k
   **perf stat**
   
    Performance counter stats for process id '7940':
   
         2,090,891.52 msec task-clock                #   52.006 CPUs utilized
            1,812,158      context-switches          #    0.867 K/sec
              173,492      cpu-migrations            #    0.083 K/sec
            2,323,668      page-faults               #    0.001 M/sec
    6,216,710,543,871      cycles                    #    2.973 GHz                      (66.78%)
      673,283,513,494      stalled-cycles-frontend   #   10.83% frontend cycles idle     (66.79%)
      305,534,719,417      stalled-cycles-backend    #    4.91% backend cycles idle      (66.81%)
    2,089,728,448,270      instructions              #    0.34  insn per cycle
                                                     #    0.32  stalled cycles per insn  (66.80%)
      452,605,506,651      branches                  #  216.465 M/sec                    (66.81%)
       16,368,538,783      branch-misses             #    3.62% of all branches          (66.81%)
   
         40.205123665 seconds time elapsed
   
   **perf report**
   # Total Lost Samples: 0
   #
   # Samples: 11M of event 'cycles'
   # Event count (approx.): 7027789530465
   #
   #   Overhead  Shared Object         Symbol
   # ..........  ....................  ..................................................
   #
         12.97%  traffic_server        [.] LogObject::_checkout_write
          4.54%  [kernel.kallsyms]     [k] native_queued_spin_lock_slowpath
          3.92%  traffic_server        [.] TSHttpTxnReenable
          3.13%  libtscore.so.9.0.1    [.] malloc_new
          3.08%  libtscore.so.9.0.1    [.] malloc_free
          2.87%  traffic_server        [.] HttpSM::state_api_callout
          2.69%  libtscore.so.9.0.1    [.] ink_freelist_new
          2.63%  traffic_server        [.] HttpHookState::getNext
          2.59%  libtscore.so.9.0.1    [.] ink_freelist_free
   ```
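
   For reference, here is a sketch of how output like the above can be collected. The exact invocations are not part of this comment, so the h2load connection/stream/thread counts and the proxy URL below are placeholders; PID 7940 is taken from the perf stat header.

   ```
   # Drive load with h2load from nghttp2 (3,000,000 requests matches the run
   # above; the client/stream/thread counts here are guesses).
   h2load -n 3000000 -c 100 -m 100 -t 8 https://proxy.example.com/

   # System-wide CPU/disk/network view while the test runs (defaults to -cdngy).
   dstat

   # Whole-process counters for the traffic_server process.
   perf stat -p 7940 -- sleep 40

   # Sample call stacks, then summarize the hottest symbols.
   perf record -g -p 7940 -- sleep 40
   perf report --stdio
   ```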

