Posted to notifications@couchdb.apache.org by gi...@git.apache.org on 2017/07/08 06:24:45 UTC

[GitHub] nickva commented on issue #610: Optimize ddoc cache

URL: https://github.com/apache/couchdb/pull/610#issuecomment-313837711
 
 
   
   I ran the `ddoc_cache_speed` benchmark with the following diff:
   
   ```diff
    spawn_workers(WorkerCount) ->
        Self = self(),
   -    WorkerDb = list_to_binary(integer_to_list(WorkerCount)),
   -    spawn(fun() ->
   +    WorkerId = WorkerCount, % rem ?RANGE,
   +    WorkerDb = integer_to_binary(WorkerId),
   +    spawn_link(fun() ->
            do_work(Self, WorkerDb, 0)
        end),
        spawn_workers(WorkerCount - 1).
    
    
   -do_work(Parent, WorkerDb, Count) when Count >= 25 ->
   +do_work(Parent, WorkerDb, Count) when Count >= 1000 ->
        Parent ! {done, Count},
        do_work(Parent, WorkerDb, 0);
    
        case timer:now_diff(Now, Start) of
            N when N > 1000000 ->
                {_, MQL} = process_info(whereis(ddoc_cache_lru), message_queue_len),
   -            io:format("~p ~p~n", [Count, MQL]),
   +            CacheSize = ets:info(ddoc_cache_lru, size),
   +            io:format("~p ~p ~p~n", [Count, MQL, CacheSize]),
                report(Now, 0);
            _ ->
                receive
   ```
   
   Then I ran `ddoc_cache_speed:go(2000).` and sampled the output (columns: opens per second, LRU message queue length, cache ets size):
   
   ```
   1403000 1000 997
   1441000 999 997
   1416000 1001 998
   1387000 1000 997
   1403000 1001 996
   1418000 1001 997
   ```
   I left it running for 30 minutes or so and saw 1M+ opens per second. The cache max size was left at its default of 1000. VM memory and process count stayed stable, under 100MB and 3500 respectively.
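   As a side note, the bounded-cache behavior this benchmark exercises can be sketched in a toy Python analogue (illustrative only; `BoundedLRU` and all names here are mine, not the PR's Erlang code). Each worker repeatedly "opens" its own distinct db name against a shared size-capped LRU table, so the table can never grow past its max size no matter how many workers there are:

   ```python
   # Toy analogue of the benchmark's access pattern (not the PR's code):
   # N workers hammer a shared bounded LRU, each with its own key, so the
   # table saturates at max_size and then only evicts/inserts.
   import threading
   from collections import OrderedDict

   class BoundedLRU:
       def __init__(self, max_size):
           self.max_size = max_size
           self.lock = threading.Lock()
           self.table = OrderedDict()

       def open(self, key):
           with self.lock:
               if key in self.table:
                   self.table.move_to_end(key)      # hit: refresh recency
                   return True
               self.table[key] = object()           # miss: insert entry
               if len(self.table) > self.max_size:
                   self.table.popitem(last=False)   # evict least recently used
               return False

   def run(num_workers=50, max_size=10, opens_per_worker=200):
       cache = BoundedLRU(max_size)

       def work(worker_id):
           # Each worker uses its own "db name", as in the benchmark diff.
           for _ in range(opens_per_worker):
               cache.open(worker_id)

       threads = [threading.Thread(target=work, args=(i,))
                  for i in range(num_workers)]
       for t in threads:
           t.start()
       for t in threads:
           t.join()
       return len(cache.table)

   print(run())  # table size ends at exactly max_size (here, 10)
   ```

   With more distinct keys than slots, every lookup past warm-up is a miss plus an eviction, which mirrors the worst-case churn the benchmark puts on `ddoc_cache_lru`.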
   
   The cache ets table size (the 3rd column) also stayed under 1000. Reducing the max cache size to 500 reduced the table size accordingly:
   ```
   1336000 1500 497
   1252000 1499 498
   1307000 1500 497
   1240000 1500 497
   ```
   
   The interesting observation is that the LRU message queue length settles into a steady state almost exactly equal to `NumberOfWorkers - MaxSize`. With `NumberOfWorkers=2000` and `MaxSize=1000`, the message queue length was 2000-1000=1000; when `MaxSize` was reduced to 500, it grew to 2000-500=1500.
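   That relationship is easy to sanity-check against the numbers above (my back-of-envelope reading, not something asserted in the PR): with N workers on N distinct db names, only `MaxSize` of them can ever be served from the cache at once, leaving roughly `N - MaxSize` lookups queued at the `ddoc_cache_lru` process at any instant.

   ```python
   # Back-of-envelope model of the observed steady state (illustrative,
   # not the PR's code): queue length ~= workers minus cache capacity.
   def expected_mql(num_workers, max_size):
       return num_workers - max_size

   print(expected_mql(2000, 1000))  # -> 1000, matching the first run's MQL column
   print(expected_mql(2000, 500))   # -> 1500, matching the second run's MQL column
   ```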
   
   +1 (but first see a few questions about ets delete and other minor nits).
   
   Very nice work!
 
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services