Posted to notifications@couchdb.apache.org by "nickva (via GitHub)" <gi...@apache.org> on 2023/01/23 18:24:24 UTC

[GitHub] [couchdb] nickva opened a new pull request, #4401: Enforce docs ids _changes filter optimization limit

nickva opened a new pull request, #4401:
URL: https://github.com/apache/couchdb/pull/4401

   It turns out the `changes_doc_ids_optimization_threshold` limit has never been applied to clustered changes feeds, so it was effectively unlimited. This commit enforces it, and also adds tests to ensure the limit works.
   
   Since we didn't have a good Erlang integration test suite for clustered changes feeds, which is how this case slipped through the cracks, this also adds a few more tests along the way covering the parameter combinations most likely to interact: single vs. multiple shards, continuous vs. normal feeds, reverse ordering, row limits, etc.
   
   The previous limit was 100, but since it was never actually applied, that's equivalent to not having one at all, so let's pick a new one. I chose 1000 after noticing that at Cloudant we hit fabric timeouts on a busy cluster close to 3000, so that seemed too high, and 1000 is about the ballpark of what a _bulk_get batch size might be. A benchmarking eunit test https://gist.github.com/nickva/a21ef04b7e4bdbed5fdeb708f1d613b5 showed about 50-75 msec to query batches of 1000 random (uuid) doc_ids for Q values 1 through 8.
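   
   In short, the clustered code path now picks the per-shard strategy roughly like this (a simplified sketch; `changes_strategy/1` is just an illustrative name, the actual decision lives in `fabric_rpc:do_changes/5`):
   
   ```erlang
   % Simplified sketch: take the _doc_ids optimization (direct by-id lookups)
   % only when the id count is at or below the configured limit; otherwise
   % fall back to folding the by-seq tree like a regular filtered feed.
   changes_strategy({doc_ids, _Style, DocIds}) ->
       case length(DocIds) =< couch_changes:doc_ids_limit() of
           true -> doc_ids_lookup;
           false -> by_seq_fold
       end;
   changes_strategy(_OtherFilter) ->
       by_seq_fold.
   ```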


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: notifications-unsubscribe@couchdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


[GitHub] [couchdb] jaydoane commented on a diff in pull request #4401: Enforce docs ids _changes filter optimization limit

Posted by "jaydoane (via GitHub)" <gi...@apache.org>.
jaydoane commented on code in PR #4401:
URL: https://github.com/apache/couchdb/pull/4401#discussion_r1084472219


##########
src/chttpd/test/eunit/chttpd_changes_test.erl:
##########
@@ -0,0 +1,663 @@
+% Licensed under the Apache License, Version 2.0 (the "License"); you may not
+% use this file except in compliance with the License. You may obtain a copy of
+% the License at
+%
+%   http://www.apache.org/licenses/LICENSE-2.0
+%
+% Unless required by applicable law or agreed to in writing, software
+% distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+% WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+% License for the specific language governing permissions and limitations under
+% the License.
+
+-module(chttpd_changes_test).
+
+-include_lib("couch/include/couch_db.hrl").
+-include_lib("couch/include/couch_eunit.hrl").
+
+-define(USER, "chttpd_changes_test_admin").
+-define(PASS, "pass").
+-define(AUTH, {basic_auth, {?USER, ?PASS}}).
+-define(JSON, {"Content-Type", "application/json"}).
+
+-define(DOC1, <<"doc1">>).
+-define(DDC2, <<"_design/doc2">>).
+-define(DOC3, <<"doc3">>).
+-define(REVA, <<"a">>).
+-define(REVB, <<"b">>).
+-define(REVC, <<"c">>).
+-define(DELETED, true).
+-define(LEAFREV, false).
+
+% doc1 starts as rev-a, then gets 2 conflicting revisions b and c
+% ddoc2 starts as deleted at rev-a, then gets re-created as rev-c
+% doc3 starts as rev-a, then gets deleted as rev-c
+%
+test_docs() ->
+    [
+        {?DOC1, [?REVA], ?LEAFREV},
+        {?DDC2, [?REVA], ?DELETED},
+        {?DOC3, [?REVA], ?LEAFREV},
+        {?DOC1, [?REVB, ?REVA], ?LEAFREV},
+        {?DOC1, [?REVC, ?REVA], ?LEAFREV},
+        {?DOC3, [?REVB, ?REVA], ?DELETED},
+        {?DDC2, [?REVC, ?REVA], ?LEAFREV}
+    ].
+
+% Thesa are run against a Q=1, N=1 db, so we can make
+% some stronger assumptions about the exact Seq prefixes
+% returned sequences will have
+%
+changes_test_() ->
+    {
+        setup,
+        fun setup_basic/0,
+        fun teardown_basic/1,
+        with([
+            ?TDEF(t_basic),
+            ?TDEF(t_basic_post),
+            ?TDEF(t_continuous),
+            ?TDEF(t_continuous_zero_timeout),
+            ?TDEF(t_longpoll),
+            ?TDEF(t_limit_zero),
+            ?TDEF(t_continuous_limit_zero),
+            ?TDEF(t_limit_one),
+            ?TDEF(t_since_now),
+            ?TDEF(t_continuous_since_now),
+            ?TDEF(t_longpoll_since_now),
+            ?TDEF(t_style_all_docs),
+            ?TDEF(t_reverse),
+            ?TDEF(t_continuous_reverse),
+            ?TDEF(t_reverse_limit_zero),
+            ?TDEF(t_reverse_limit_one),
+            ?TDEF(t_seq_interval),
+            ?TDEF(t_selector_filter),
+            ?TDEF(t_design_filter),
+            ?TDEF(t_docs_id_filter),
+            ?TDEF(t_docs_id_filter_over_limit)
+        ])
+    }.
+
+% For Q=8 sharded dbs, unlike Q=1, we cannot make strong
+% assumptions about the exact sequence IDs for each row
+% so we'll test all the changes return and that the sequences
+% are increasing.
+%
+changes_q8_test_() ->
+    {
+        setup,
+        fun setup_q8/0,
+        fun teardown_basic/1,
+        with([
+            ?TDEF(t_basic_q8),
+            ?TDEF(t_continuous_q8),
+            ?TDEF(t_limit_zero),
+            ?TDEF(t_limit_one_q8),
+            ?TDEF(t_since_now),
+            ?TDEF(t_longpoll_since_now),
+            ?TDEF(t_reverse_q8),
+            ?TDEF(t_reverse_limit_zero),
+            ?TDEF(t_reverse_limit_one_q8),
+            ?TDEF(t_selector_filter),
+            ?TDEF(t_design_filter),
+            ?TDEF(t_docs_id_filter_q8)
+        ])
+    }.
+
+% These tests are separate as they create aditional design docs
+% as they so technically would be order dependent as the sequence

Review Comment:
   > as they so technically would be order dependent as the sequence
   
   having difficulty understanding this



##########
src/chttpd/test/eunit/chttpd_changes_test.erl:
##########
@@ -0,0 +1,663 @@
+% Licensed under the Apache License, Version 2.0 (the "License"); you may not
+% use this file except in compliance with the License. You may obtain a copy of
+% the License at
+%
+%   http://www.apache.org/licenses/LICENSE-2.0
+%
+% Unless required by applicable law or agreed to in writing, software
+% distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+% WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+% License for the specific language governing permissions and limitations under
+% the License.
+
+-module(chttpd_changes_test).
+
+-include_lib("couch/include/couch_db.hrl").
+-include_lib("couch/include/couch_eunit.hrl").
+
+-define(USER, "chttpd_changes_test_admin").
+-define(PASS, "pass").
+-define(AUTH, {basic_auth, {?USER, ?PASS}}).
+-define(JSON, {"Content-Type", "application/json"}).
+
+-define(DOC1, <<"doc1">>).
+-define(DDC2, <<"_design/doc2">>).
+-define(DOC3, <<"doc3">>).
+-define(REVA, <<"a">>).
+-define(REVB, <<"b">>).
+-define(REVC, <<"c">>).
+-define(DELETED, true).
+-define(LEAFREV, false).
+
+% doc1 starts as rev-a, then gets 2 conflicting revisions b and c
+% ddoc2 starts as deleted at rev-a, then gets re-created as rev-c
+% doc3 starts as rev-a, then gets deleted as rev-c
+%
+test_docs() ->
+    [
+        {?DOC1, [?REVA], ?LEAFREV},
+        {?DDC2, [?REVA], ?DELETED},
+        {?DOC3, [?REVA], ?LEAFREV},
+        {?DOC1, [?REVB, ?REVA], ?LEAFREV},
+        {?DOC1, [?REVC, ?REVA], ?LEAFREV},
+        {?DOC3, [?REVB, ?REVA], ?DELETED},
+        {?DDC2, [?REVC, ?REVA], ?LEAFREV}
+    ].
+
+% Thesa are run against a Q=1, N=1 db, so we can make

Review Comment:
   s/Thesa/These/



##########
src/chttpd/test/eunit/chttpd_changes_test.erl:
##########
@@ -0,0 +1,663 @@
+% Licensed under the Apache License, Version 2.0 (the "License"); you may not
+% use this file except in compliance with the License. You may obtain a copy of
+% the License at
+%
+%   http://www.apache.org/licenses/LICENSE-2.0
+%
+% Unless required by applicable law or agreed to in writing, software
+% distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+% WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+% License for the specific language governing permissions and limitations under
+% the License.
+
+-module(chttpd_changes_test).
+
+-include_lib("couch/include/couch_db.hrl").
+-include_lib("couch/include/couch_eunit.hrl").
+
+-define(USER, "chttpd_changes_test_admin").
+-define(PASS, "pass").
+-define(AUTH, {basic_auth, {?USER, ?PASS}}).
+-define(JSON, {"Content-Type", "application/json"}).
+
+-define(DOC1, <<"doc1">>).
+-define(DDC2, <<"_design/doc2">>).
+-define(DOC3, <<"doc3">>).
+-define(REVA, <<"a">>).
+-define(REVB, <<"b">>).
+-define(REVC, <<"c">>).
+-define(DELETED, true).
+-define(LEAFREV, false).
+
+% doc1 starts as rev-a, then gets 2 conflicting revisions b and c
+% ddoc2 starts as deleted at rev-a, then gets re-created as rev-c
+% doc3 starts as rev-a, then gets deleted as rev-c
+%
+test_docs() ->
+    [
+        {?DOC1, [?REVA], ?LEAFREV},
+        {?DDC2, [?REVA], ?DELETED},
+        {?DOC3, [?REVA], ?LEAFREV},
+        {?DOC1, [?REVB, ?REVA], ?LEAFREV},
+        {?DOC1, [?REVC, ?REVA], ?LEAFREV},
+        {?DOC3, [?REVB, ?REVA], ?DELETED},
+        {?DDC2, [?REVC, ?REVA], ?LEAFREV}
+    ].
+
+% Thesa are run against a Q=1, N=1 db, so we can make
+% some stronger assumptions about the exact Seq prefixes
+% returned sequences will have
+%
+changes_test_() ->
+    {
+        setup,
+        fun setup_basic/0,
+        fun teardown_basic/1,
+        with([
+            ?TDEF(t_basic),
+            ?TDEF(t_basic_post),
+            ?TDEF(t_continuous),
+            ?TDEF(t_continuous_zero_timeout),
+            ?TDEF(t_longpoll),
+            ?TDEF(t_limit_zero),
+            ?TDEF(t_continuous_limit_zero),
+            ?TDEF(t_limit_one),
+            ?TDEF(t_since_now),
+            ?TDEF(t_continuous_since_now),
+            ?TDEF(t_longpoll_since_now),
+            ?TDEF(t_style_all_docs),
+            ?TDEF(t_reverse),
+            ?TDEF(t_continuous_reverse),
+            ?TDEF(t_reverse_limit_zero),
+            ?TDEF(t_reverse_limit_one),
+            ?TDEF(t_seq_interval),
+            ?TDEF(t_selector_filter),
+            ?TDEF(t_design_filter),
+            ?TDEF(t_docs_id_filter),
+            ?TDEF(t_docs_id_filter_over_limit)
+        ])
+    }.
+
+% For Q=8 sharded dbs, unlike Q=1, we cannot make strong
+% assumptions about the exact sequence IDs for each row
+% so we'll test all the changes return and that the sequences
+% are increasing.
+%
+changes_q8_test_() ->
+    {
+        setup,
+        fun setup_q8/0,
+        fun teardown_basic/1,
+        with([
+            ?TDEF(t_basic_q8),
+            ?TDEF(t_continuous_q8),
+            ?TDEF(t_limit_zero),
+            ?TDEF(t_limit_one_q8),
+            ?TDEF(t_since_now),
+            ?TDEF(t_longpoll_since_now),
+            ?TDEF(t_reverse_q8),
+            ?TDEF(t_reverse_limit_zero),
+            ?TDEF(t_reverse_limit_one_q8),
+            ?TDEF(t_selector_filter),
+            ?TDEF(t_design_filter),
+            ?TDEF(t_docs_id_filter_q8)
+        ])
+    }.
+
+% These tests are separate as they create aditional design docs
+% as they so technically would be order dependent as the sequence
+% would keep climbing up from test to test. To avoid that run them
+% in a foreach context so setup/teardown happens for each test case.
+%
+changes_js_filters_test_() ->
+    {
+        foreach,
+        fun setup_basic/0,
+        fun teardown_basic/1,
+        [
+            ?TDEF_FE(t_js_filter),
+            ?TDEF_FE(t_js_filter_no_match),
+            ?TDEF_FE(t_js_filter_with_query_param),
+            ?TDEF_FE(t_view_filter),
+            ?TDEF_FE(t_view_filter_no_match)
+        ]
+    }.
+
+t_basic({_, DbUrl}) ->
+    Res = {Seq, Pending, Rows} = changes(DbUrl),
+    ?assertEqual(7, Seq),
+    ?assertEqual(0, Pending),
+    ?assertEqual(
+        [
+            {5, {?DOC1, <<"2-c">>}, ?LEAFREV},
+            {6, {?DOC3, <<"2-b">>}, ?DELETED},
+            {7, {?DDC2, <<"2-c">>}, ?LEAFREV}
+        ],
+        Rows
+    ),
+    % since=0 is the default, so it should look exactly the same
+    ?assertEqual(Res, changes(DbUrl, "?since=0")).
+
+t_basic_q8({_, DbUrl}) ->
+    {Seq, Pending, Rows} = changes(DbUrl),
+    ?assertEqual(7, Seq),
+    ?assertEqual(0, Pending),
+    {Seqs, Revs, _Deleted} = lists:unzip3(Rows),
+    ?assertEqual(
+        [
+            {?DDC2, <<"2-c">>},
+            {?DOC1, <<"2-c">>},
+            {?DOC3, <<"2-b">>}
+        ],
+        lists:sort(Revs)
+    ),
+    ?assertEqual(Seqs, lists:sort(Seqs)).
+
+t_basic_post({_, DbUrl}) ->
+    {Seq, Pending, Rows} = changes_post(DbUrl, #{}),
+    ?assertEqual(7, Seq),
+    ?assertEqual(0, Pending),
+    ?assertEqual(
+        [
+            {5, {?DOC1, <<"2-c">>}, ?LEAFREV},
+            {6, {?DOC3, <<"2-b">>}, ?DELETED},
+            {7, {?DDC2, <<"2-c">>}, ?LEAFREV}
+        ],
+        Rows
+    ).
+
+t_continuous({_, DbUrl}) ->
+    Params = "?feed=continuous&timeout=10",
+    {Seq, Pending, Rows} = changes(DbUrl, Params),
+    ?assertEqual(7, Seq),
+    ?assertEqual(0, Pending),
+    ?assertEqual(
+        [
+            {5, {?DOC1, <<"2-c">>}, ?LEAFREV},
+            {6, {?DOC3, <<"2-b">>}, ?DELETED},
+            {7, {?DDC2, <<"2-c">>}, ?LEAFREV}
+        ],
+        Rows
+    ).
+
+t_continuous_q8({_, DbUrl}) ->
+    Params = "?feed=continuous&timeout=10",
+    {Seq, Pending, Rows} = changes(DbUrl, Params),
+    ?assertEqual(7, Seq),
+    ?assertEqual(0, Pending),
+    {Seqs, Revs, _Deleted} = lists:unzip3(Rows),
+    ?assertEqual(
+        [
+            {?DDC2, <<"2-c">>},
+            {?DOC1, <<"2-c">>},
+            {?DOC3, <<"2-b">>}
+        ],
+        lists:sort(Revs)
+    ),
+    ?assertEqual(Seqs, lists:sort(Seqs)).
+
+t_continuous_zero_timeout({_, DbUrl}) ->
+    Params = "?feed=continuous&timeout=0",
+    {Seq, Pending, Rows} = changes(DbUrl, Params),
+    ?assertEqual(7, Seq),
+    ?assertEqual(0, Pending),
+    ?assertEqual(
+        [
+            {5, {?DOC1, <<"2-c">>}, ?LEAFREV},
+            {6, {?DOC3, <<"2-b">>}, ?DELETED},
+            {7, {?DDC2, <<"2-c">>}, ?LEAFREV}
+        ],
+        Rows
+    ).
+
+t_longpoll({_, DbUrl}) ->
+    Params = "?feed=longpoll",
+    {Seq, Pending, Rows} = changes(DbUrl, Params),
+    ?assertEqual(7, Seq),
+    ?assertEqual(0, Pending),
+    ?assertEqual(
+        [
+            {5, {?DOC1, <<"2-c">>}, ?LEAFREV},
+            {6, {?DOC3, <<"2-b">>}, ?DELETED},
+            {7, {?DDC2, <<"2-c">>}, ?LEAFREV}
+        ],
+        Rows
+    ).
+
+t_limit_zero({_, DbUrl}) ->
+    Params = "?limit=0",
+    ?assertEqual({0, 3, []}, changes(DbUrl, Params)).
+
+t_continuous_limit_zero({_, DbUrl}) ->
+    Params = "?feed=continuous&timeout=10&limit=0",
+    ?assertEqual({0, 3, []}, changes(DbUrl, Params)).
+
+t_limit_one({_, DbUrl}) ->
+    Params = "?limit=1",
+    ?assertEqual(
+        {5, 2, [
+            {5, {?DOC1, <<"2-c">>}, ?LEAFREV}
+        ]},
+        changes(DbUrl, Params)
+    ).
+
+t_limit_one_q8({_, DbUrl}) ->
+    Params = "?limit=1",
+    ?assertMatch(
+        {_, _, [
+            {_, {<<_/binary>>, <<_/binary>>}, _}
+        ]},
+        changes(DbUrl, Params)
+    ).
+
+t_style_all_docs({_, DbUrl}) ->
+    Params = "?style=all_docs",
+    {Seq, Pending, Rows} = changes(DbUrl, Params),
+    ?assertEqual(7, Seq),
+    ?assertEqual(0, Pending),
+    ?assertEqual(
+        [
+            {5, {?DOC1, [<<"2-c">>, <<"2-b">>]}, ?LEAFREV},
+            {6, {?DOC3, <<"2-b">>}, ?DELETED},
+            {7, {?DDC2, <<"2-c">>}, ?LEAFREV}
+        ],
+        Rows
+    ).
+
+t_since_now({_, DbUrl}) ->
+    Params = "?since=now",
+    ?assertEqual({7, 0, []}, changes(DbUrl, Params)).
+
+t_continuous_since_now({_, DbUrl}) ->
+    Params = "?feed=continuous&timeout=10&since=now",
+    ?assertEqual({7, 0, []}, changes(DbUrl, Params)).
+
+t_longpoll_since_now({_, DbUrl}) ->
+    Params = "?feed=longpoll&timeout=10&since=now",
+    ?assertEqual({7, 0, []}, changes(DbUrl, Params)).
+
+t_reverse({_, DbUrl}) ->
+    Params = "?descending=true",
+    {Seq, Pending, Rows} = changes(DbUrl, Params),
+    ?assertEqual(5, Seq),
+    ?assertEqual(-3, Pending),
+    ?assertEqual(
+        [
+            {7, {?DDC2, <<"2-c">>}, ?LEAFREV},
+            {6, {?DOC3, <<"2-b">>}, ?DELETED},
+            {5, {?DOC1, <<"2-c">>}, ?LEAFREV}
+        ],
+        Rows
+    ).
+
+t_continuous_reverse({_, DbUrl}) ->
+    Params = "?feed=continuous&timeout=10&descending=true",
+    {Seq, Pending, Rows} = changes(DbUrl, Params),
+    ?assertEqual(5, Seq),
+    ?assertEqual(-3, Pending),
+    ?assertEqual(
+        [
+            {7, {?DDC2, <<"2-c">>}, ?LEAFREV},
+            {6, {?DOC3, <<"2-b">>}, ?DELETED},
+            {5, {?DOC1, <<"2-c">>}, ?LEAFREV}
+        ],
+        Rows
+    ).
+
+t_reverse_q8({_, DbUrl}) ->
+    Params = "?descending=true",
+    {Seq, Pending, Rows} = changes(DbUrl, Params),
+    ?assertEqual(7, Seq),
+    ?assertEqual(-3, Pending),
+    {Seqs, Revs, _Deleted} = lists:unzip3(Rows),
+    ?assertEqual(
+        [
+            {?DDC2, <<"2-c">>},
+            {?DOC1, <<"2-c">>},
+            {?DOC3, <<"2-b">>}
+        ],
+        lists:sort(Revs)
+    ),
+    ?assertEqual(Seqs, lists:sort(Seqs)).
+
+t_reverse_limit_zero({_, DbUrl}) ->
+    Params = "?descending=true&limit=0",
+    ?assertEqual({7, 0, []}, changes(DbUrl, Params)).
+
+t_reverse_limit_one({_, DbUrl}) ->
+    Params = "?descending=true&limit=1",
+    ?assertEqual(
+        {7, -1, [
+            {7, {?DDC2, <<"2-c">>}, ?LEAFREV}
+        ]},
+        changes(DbUrl, Params)
+    ).
+
+t_reverse_limit_one_q8({_, DbUrl}) ->
+    Params = "?descending=true&limit=1",
+    ?assertMatch(
+        {7, -1, [
+            {_, {<<_/binary>>, <<_/binary>>}, _}
+        ]},
+        changes(DbUrl, Params)
+    ).
+
+t_seq_interval({_, DbUrl}) ->
+    Params = "?seq_interval=3",
+    {Seq, Pending, Rows} = changes(DbUrl, Params),
+    ?assertEqual(7, Seq),
+    ?assertEqual(0, Pending),
+    ?assertEqual(
+        [
+            {null, {?DOC1, <<"2-c">>}, ?LEAFREV},
+            {6, {?DOC3, <<"2-b">>}, ?DELETED},
+            {null, {?DDC2, <<"2-c">>}, ?LEAFREV}
+        ],
+        Rows
+    ).
+
+t_selector_filter({_, DbUrl}) ->
+    Params = "?filter=_selector",
+    Body = #{<<"selector">> => #{<<"_id">> => ?DOC1}},
+    {Seq, Pending, Rows} = changes_post(DbUrl, Body, Params),
+    ?assertEqual(7, Seq),
+    ?assertEqual(0, Pending),
+    ?assertMatch([{_, {?DOC1, <<"2-c">>}, ?LEAFREV}], Rows).
+
+t_design_filter({_, DbUrl}) ->
+    Params = "?filter=_design",
+    {Seq, Pending, Rows} = changes(DbUrl, Params),
+    ?assertEqual(7, Seq),
+    ?assertEqual(2, Pending),
+    ?assertMatch([{_, {?DDC2, <<"2-c">>}, ?LEAFREV}], Rows).
+
+t_docs_id_filter({_, DbUrl}) ->
+    Params = "?filter=_doc_ids",
+    Body = #{<<"doc_ids">> => [?DOC3, ?DOC1]},
+    meck:reset(couch_changes),
+    {_, _, Rows} = changes_post(DbUrl, Body, Params),
+    ?assertEqual(1, meck:num_calls(couch_changes, send_changes_doc_ids, 6)),
+    ?assertEqual(
+        [
+            {5, {?DOC1, <<"2-c">>}, ?LEAFREV},
+            {6, {?DOC3, <<"2-b">>}, ?DELETED}
+        ],
+        Rows
+    ).
+
+t_docs_id_filter_q8({_, DbUrl}) ->
+    Params = "?filter=_doc_ids",
+    Body = #{<<"doc_ids">> => [?DOC3, ?DOC1]},
+    {_, _, Rows} = changes_post(DbUrl, Body, Params),
+    {Seqs, Revs, _Deleted} = lists:unzip3(Rows),
+    ?assertEqual(
+        [
+            {?DOC1, <<"2-c">>},
+            {?DOC3, <<"2-b">>}
+        ],
+        lists:sort(Revs)
+    ),
+    ?assertEqual(Seqs, lists:sort(Seqs)).
+
+t_docs_id_filter_over_limit({_, DbUrl}) ->
+    Params = "?filter=_doc_ids",
+    Body = #{<<"doc_ids">> => [<<"missingdoc">>, ?DOC3, <<"notthere">>, ?DOC1]},
+    meck:reset(couch_changes),
+    {_, _, Rows} = changes_post(DbUrl, Body, Params),
+    ?assertEqual(0, meck:num_calls(couch_changes, send_changes_doc_ids, 6)),
+    ?assertEqual(
+        [
+            {5, {?DOC1, <<"2-c">>}, ?LEAFREV},
+            {6, {?DOC3, <<"2-b">>}, ?DELETED}
+        ],
+        Rows
+    ).
+
+t_js_filter({_, DbUrl}) ->
+    DDocId = "_design/filters",
+    FilterFun = <<"function(doc, req) {return (doc._id == 'doc3')}">>,
+    DDoc = #{<<"filters">> => #{<<"f">> => FilterFun}},
+    DDocUrl = DbUrl ++ "/" ++ DDocId,
+    {_, #{<<"rev">> := Rev, <<"ok">> := true}} = req(put, DDocUrl, DDoc),
+    Params = "?filter=filters/f",
+    {Seq, Pending, Rows} = changes(DbUrl, Params),
+    ?assertEqual(8, Seq),
+    ?assertEqual(0, Pending),
+    ?assertEqual(
+        [
+            {6, {?DOC3, <<"2-b">>}, ?DELETED}
+        ],
+        Rows
+    ),
+    {200, #{}} = req(delete, DDocUrl ++ "?rev=" ++ binary_to_list(Rev)).
+
+t_js_filter_no_match({_, DbUrl}) ->
+    DDocId = "_design/filters",
+    FilterFun = <<"function(doc, req) {return false}">>,
+    DDoc = #{<<"filters">> => #{<<"f">> => FilterFun}},
+    DDocUrl = DbUrl ++ "/" ++ DDocId,
+    {_, #{<<"rev">> := Rev, <<"ok">> := true}} = req(put, DDocUrl, DDoc),
+    Params = "?filter=filters/f",
+    ?assertEqual({8, 0, []}, changes(DbUrl, Params)),
+    {200, #{}} = req(delete, DDocUrl ++ "?rev=" ++ binary_to_list(Rev)).
+
+t_js_filter_with_query_param({_, DbUrl}) ->
+    DDocId = "_design/filters",
+    FilterFun = <<"function(doc, req) {return (req.query.yup == 1)}">>,
+    DDoc = #{<<"filters">> => #{<<"f">> => FilterFun}},
+    DDocUrl = DbUrl ++ "/" ++ DDocId,
+    {_, #{<<"rev">> := Rev, <<"ok">> := true}} = req(put, DDocUrl, DDoc),
+    Params = "?filter=filters/f&yup=1",
+    {Seq, Pending, Rows} = changes(DbUrl, Params),
+    ?assertEqual(8, Seq),
+    ?assertEqual(0, Pending),
+    ?assertMatch(
+        [
+            {5, {?DOC1, <<"2-c">>}, ?LEAFREV},
+            {6, {?DOC3, <<"2-b">>}, ?DELETED},
+            {7, {?DDC2, <<"2-c">>}, ?LEAFREV},
+            {8, {<<"_design/filters">>, <<"1-", _/binary>>}, ?LEAFREV}
+        ],
+        Rows
+    ),
+    {200, #{}} = req(delete, DDocUrl ++ "?rev=" ++ binary_to_list(Rev)).
+
+t_view_filter({_, DbUrl}) ->
+    DDocId = "_design/views",
+    ViewFun = <<"function(doc) {if (doc._id == 'doc1') {emit(1, 1);}}">>,
+    DDoc = #{<<"views">> => #{<<"v">> => #{<<"map">> => ViewFun}}},
+    DDocUrl = DbUrl ++ "/" ++ DDocId,
+    {_, #{<<"rev">> := Rev, <<"ok">> := true}} = req(put, DDocUrl, DDoc),
+    Params = "?filter=_view&view=views/v",
+    {Seq, Pending, Rows} = changes(DbUrl, Params),
+    ?assertEqual(8, Seq),
+    ?assertEqual(0, Pending),
+    ?assertEqual(
+        [
+            {5, {?DOC1, <<"2-c">>}, ?LEAFREV}
+        ],
+        Rows
+    ),
+    {200, #{}} = req(delete, DDocUrl ++ "?rev=" ++ binary_to_list(Rev)).
+
+t_view_filter_no_match({_, DbUrl}) ->
+    DDocId = "_design/views",
+    ViewFun = <<"function(doc) {if (doc._id == 'docX') {emit(1, 1);}}">>,
+    DDoc = #{<<"views">> => #{<<"v">> => #{<<"map">> => ViewFun}}},
+    DDocUrl = DbUrl ++ "/" ++ DDocId,
+    {_, #{<<"rev">> := Rev, <<"ok">> := true}} = req(put, DDocUrl, DDoc),
+    Params = "?filter=_view&view=views/v",
+    ?assertEqual({8, 0, []}, changes(DbUrl, Params)),
+    {200, #{}} = req(delete, DDocUrl ++ "?rev=" ++ binary_to_list(Rev)).
+
+post_doc_ids(DbUrl, Body) ->

Review Comment:
   src/chttpd/test/eunit/chttpd_changes_test.erl:493:1: Warning: function post_doc_ids/2 is unused



##########
src/chttpd/test/eunit/chttpd_changes_test.erl:
##########
@@ -0,0 +1,654 @@
+% Licensed under the Apache License, Version 2.0 (the "License"); you may not
+% use this file except in compliance with the License. You may obtain a copy of
+% the License at
+%
+%   http://www.apache.org/licenses/LICENSE-2.0
+%
+% Unless required by applicable law or agreed to in writing, software
+% distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+% WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+% License for the specific language governing permissions and limitations under
+% the License.
+
+-module(chttpd_changes_test).
+
+-include_lib("couch/include/couch_db.hrl").
+-include_lib("couch/include/couch_eunit.hrl").
+
+-define(USER, "chttpd_changes_test_admin").
+-define(PASS, "pass").
+-define(AUTH, {basic_auth, {?USER, ?PASS}}).
+-define(JSON, {"Content-Type", "application/json"}).
+
+-define(DOC1, <<"doc1">>).
+-define(DDC2, <<"_design/doc2">>).
+-define(DOC3, <<"doc3">>).
+-define(REVA, <<"a">>).
+-define(REVB, <<"b">>).
+-define(REVC, <<"c">>).
+-define(DELETED, true).
+-define(LEAFREV, false).
+
+% doc1 starts as rev-a, then gets 2 conflicting revisions b and c
+% ddoc2 starts as deleted at rev-a, then gets re-created as rev-c

Review Comment:
   Here you call it ddoc2, which is another reason I was momentarily confused.



##########
src/chttpd/test/eunit/chttpd_changes_test.erl:
##########
@@ -0,0 +1,654 @@
+% Licensed under the Apache License, Version 2.0 (the "License"); you may not
+% use this file except in compliance with the License. You may obtain a copy of
+% the License at
+%
+%   http://www.apache.org/licenses/LICENSE-2.0
+%
+% Unless required by applicable law or agreed to in writing, software
+% distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+% WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+% License for the specific language governing permissions and limitations under
+% the License.
+
+-module(chttpd_changes_test).
+
+-include_lib("couch/include/couch_db.hrl").
+-include_lib("couch/include/couch_eunit.hrl").
+
+-define(USER, "chttpd_changes_test_admin").
+-define(PASS, "pass").
+-define(AUTH, {basic_auth, {?USER, ?PASS}}).
+-define(JSON, {"Content-Type", "application/json"}).
+
+-define(DOC1, <<"doc1">>).
+-define(DDC2, <<"_design/doc2">>).

Review Comment:
   I accidentally read this a `DOC2` and was confused. I think calling it `DDOC2` would make it clear it's not a regular doc?



##########
src/couch/src/couch_changes.erl:
##########
@@ -34,6 +34,9 @@
     keep_sending_changes/3
 ]).
 
+% Default max doc ids optimization limit.
+-define(MAX_DOC_IDS, 1000).

Review Comment:
   just a quibble but calling it `DEFAULT_MAX_DOC_IDS` would allow you to shorten the above comment ;)



##########
src/fabric/src/fabric_rpc.erl:
##########
@@ -116,36 +116,21 @@ changes(DbName, Options, StartVector, DbOptions) ->
             rexi:stream_last(Error)
     end.
 
-do_changes(Db, StartSeq, Enum, Acc0, Opts) ->
-    #fabric_changes_acc{
-        args = Args
-    } = Acc0,
-    #changes_args{
-        filter = Filter
-    } = Args,
+do_changes(Db, Seq, Enum, #fabric_changes_acc{args = Args} = Acc0, Opts) ->
+    #changes_args{filter_fun = Filter, dir = Dir} = Args,
+
     case Filter of
-        "_doc_ids" ->
-            % optimised code path, we’re looking up all doc_ids in the by-id instead of filtering
-            % the entire by-seq tree to find the doc_ids one by one
-            #changes_args{
-                filter_fun = {doc_ids, Style, DocIds},
-                dir = Dir
-            } = Args,
-            couch_changes:send_changes_doc_ids(
-                Db, StartSeq, Dir, Enum, Acc0, {doc_ids, Style, DocIds}
-            );
-        "_design_docs" ->
-            % optimised code path, we’re looking up all design_docs in the by-id instead of
-            % filtering the entire by-seq tree to find the design_docs one by one
-            #changes_args{
-                filter_fun = {design_docs, Style},
-                dir = Dir
-            } = Args,
-            couch_changes:send_changes_design_docs(
-                Db, StartSeq, Dir, Enum, Acc0, {design_docs, Style}
-            );
+        {doc_ids, _Style, DocIds} ->
+            case length(DocIds) =< couch_changes:doc_ids_limit() of
+                true ->
+                    couch_changes:send_changes_doc_ids(Db, Seq, Dir, Enum, Acc0, Filter);
+                false ->
+                    couch_db:fold_changes(Db, Seq, Enum, Acc0, Opts)
+            end;
+        {design_docs, _Style} ->
+            couch_changes:send_changes_design_docs(Db, Seq, Dir, Enum, Acc0, Filter);

Review Comment:
   this cleaned up nicely!



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: notifications-unsubscribe@couchdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


[GitHub] [couchdb] nickva commented on a diff in pull request #4401: Enforce docs ids _changes filter optimization limit

Posted by "nickva (via GitHub)" <gi...@apache.org>.
nickva commented on code in PR #4401:
URL: https://github.com/apache/couchdb/pull/4401#discussion_r1084728790


##########
src/chttpd/test/eunit/chttpd_changes_test.erl:
##########
@@ -0,0 +1,654 @@
+% Licensed under the Apache License, Version 2.0 (the "License"); you may not
+% use this file except in compliance with the License. You may obtain a copy of
+% the License at
+%
+%   http://www.apache.org/licenses/LICENSE-2.0
+%
+% Unless required by applicable law or agreed to in writing, software
+% distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+% WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+% License for the specific language governing permissions and limitations under
+% the License.
+
+-module(chttpd_changes_test).
+
+-include_lib("couch/include/couch_db.hrl").
+-include_lib("couch/include/couch_eunit.hrl").
+
+-define(USER, "chttpd_changes_test_admin").
+-define(PASS, "pass").
+-define(AUTH, {basic_auth, {?USER, ?PASS}}).
+-define(JSON, {"Content-Type", "application/json"}).
+
+-define(DOC1, <<"doc1">>).
+-define(DDC2, <<"_design/doc2">>).
+-define(DOC3, <<"doc3">>).
+-define(REVA, <<"a">>).
+-define(REVB, <<"b">>).
+-define(REVC, <<"c">>).
+-define(DELETED, true).
+-define(LEAFREV, false).
+
+% doc1 starts as rev-a, then gets 2 conflicting revisions b and c
+% ddoc2 starts as deleted at rev-a, then gets re-created as rev-c

Review Comment:
   Indeed, I had just made it more confusing. Good observation. Will update it.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: notifications-unsubscribe@couchdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


[GitHub] [couchdb] nickva commented on a diff in pull request #4401: Enforce docs ids _changes filter optimization limit

Posted by "nickva (via GitHub)" <gi...@apache.org>.
nickva commented on code in PR #4401:
URL: https://github.com/apache/couchdb/pull/4401#discussion_r1084728528


##########
src/chttpd/test/eunit/chttpd_changes_test.erl:
##########
@@ -0,0 +1,654 @@
+% Licensed under the Apache License, Version 2.0 (the "License"); you may not
+% use this file except in compliance with the License. You may obtain a copy of
+% the License at
+%
+%   http://www.apache.org/licenses/LICENSE-2.0
+%
+% Unless required by applicable law or agreed to in writing, software
+% distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+% WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+% License for the specific language governing permissions and limitations under
+% the License.
+
+-module(chttpd_changes_test).
+
+-include_lib("couch/include/couch_db.hrl").
+-include_lib("couch/include/couch_eunit.hrl").
+
+-define(USER, "chttpd_changes_test_admin").
+-define(PASS, "pass").
+-define(AUTH, {basic_auth, {?USER, ?PASS}}).
+-define(JSON, {"Content-Type", "application/json"}).
+
+-define(DOC1, <<"doc1">>).
+-define(DDC2, <<"_design/doc2">>).

Review Comment:
   I'll call it DDOC2. It was DDOC2 initially, but then it didn't line up visually into a nice table. It's even more confusing now, though, as the D looks too much like an O. Will update it.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: notifications-unsubscribe@couchdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


[GitHub] [couchdb] nickva commented on a diff in pull request #4401: Enforce docs ids _changes filter optimization limit

Posted by "nickva (via GitHub)" <gi...@apache.org>.
nickva commented on code in PR #4401:
URL: https://github.com/apache/couchdb/pull/4401#discussion_r1084728528


##########
src/chttpd/test/eunit/chttpd_changes_test.erl:
##########
@@ -0,0 +1,654 @@
+% Licensed under the Apache License, Version 2.0 (the "License"); you may not
+% use this file except in compliance with the License. You may obtain a copy of
+% the License at
+%
+%   http://www.apache.org/licenses/LICENSE-2.0
+%
+% Unless required by applicable law or agreed to in writing, software
+% distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+% WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+% License for the specific language governing permissions and limitations under
+% the License.
+
+-module(chttpd_changes_test).
+
+-include_lib("couch/include/couch_db.hrl").
+-include_lib("couch/include/couch_eunit.hrl").
+
+-define(USER, "chttpd_changes_test_admin").
+-define(PASS, "pass").
+-define(AUTH, {basic_auth, {?USER, ?PASS}}).
+-define(JSON, {"Content-Type", "application/json"}).
+
+-define(DOC1, <<"doc1">>).
+-define(DDC2, <<"_design/doc2">>).

Review Comment:
   Ha! I called it DDOC2 initially, but then it didn't line up visually into a nice table. It's even more confusing now, though, as the D looks too much like an O. Will update it.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: notifications-unsubscribe@couchdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


[GitHub] [couchdb] nickva commented on a diff in pull request #4401: Enforce docs ids _changes filter optimization limit

Posted by "nickva (via GitHub)" <gi...@apache.org>.
nickva commented on code in PR #4401:
URL: https://github.com/apache/couchdb/pull/4401#discussion_r1084728164


##########
src/couch/src/couch_changes.erl:
##########
@@ -34,6 +34,9 @@
     keep_sending_changes/3
 ]).
 
+% Default max doc ids optimization limit.
+-define(MAX_DOC_IDS, 1000).

Review Comment:
   Good call!
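   
   Roughly, it would end up looking like this (a sketch; assuming the setting stays under the `[couchdb]` config section):
   
   ```erlang
   % Default max doc ids optimization limit.
   -define(DEFAULT_MAX_DOC_IDS, 1000).
   
   doc_ids_limit() ->
       % assumes the setting lives under the [couchdb] section
       config:get_integer(
           "couchdb", "changes_doc_ids_optimization_threshold", ?DEFAULT_MAX_DOC_IDS
       ).
   ```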



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: notifications-unsubscribe@couchdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


[GitHub] [couchdb] nickva commented on pull request #4401: Enforce docs ids _changes filter optimization limit

Posted by "nickva (via GitHub)" <gi...@apache.org>.
nickva commented on PR #4401:
URL: https://github.com/apache/couchdb/pull/4401#issuecomment-1402256054

   Thanks for investigating, @big-r81. Does it happen on every run or is it random?
   
   I could see how `{expression,"Pending"}, {expected,2}` might happen: in the case of a Q=8 db we made too strong an assertion that Pending will always be 2, which doesn't have to hold unless Q=1. I think we can relax that so Pending just has to be an integer >= 0 and < 7.
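   
   Something like this, roughly:
   
   ```erlang
   % Relaxed Q=8 check: Pending just has to be a sane integer, not exactly 2.
   ?assert(is_integer(Pending) andalso Pending >= 0 andalso Pending < 7)
   ```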
   
   The other ones are a bit unexpected:
   
   `{expression,"meck : num_calls ( couch_changes , send_changes_doc_ids , 6 )"},`
   
   That indicates we somehow failed to set the config value or didn't pass the parameters through correctly in the test request. See if you keep seeing this error when you run it a few times in a row. I checked on other architectures; Macs, arm64, and various other OSes seem to pass this test consistently.
   
   > in function chttpd_db_test:'-should_return_409_for_put_att_nonexistent_rev/1-fun-2-'/1 (test/eunit/chttpd_db_test.erl, line 330)
   in call from eunit_test:run_testfun/1 (eunit_test.erl, line 71)
   
   This one looks rather generic and points to something being off with the network code, which sometimes fails unexpectedly. `{error,connection_closed}` comes from ibrowse and just indicates that the connection's Erlang process suddenly died. I wonder if we could add more debug logic to ibrowse, or use Wireshark and the like, to see what the connection state was and what the lower-level error was.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: notifications-unsubscribe@couchdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


[GitHub] [couchdb] nickva commented on a diff in pull request #4401: Enforce docs ids _changes filter optimization limit

Posted by "nickva (via GitHub)" <gi...@apache.org>.
nickva commented on code in PR #4401:
URL: https://github.com/apache/couchdb/pull/4401#discussion_r1084801828


##########
src/chttpd/test/eunit/chttpd_changes_test.erl:
##########
@@ -0,0 +1,663 @@
+% Licensed under the Apache License, Version 2.0 (the "License"); you may not
+% use this file except in compliance with the License. You may obtain a copy of
+% the License at
+%
+%   http://www.apache.org/licenses/LICENSE-2.0
+%
+% Unless required by applicable law or agreed to in writing, software
+% distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+% WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+% License for the specific language governing permissions and limitations under
+% the License.
+
+-module(chttpd_changes_test).
+
+-include_lib("couch/include/couch_db.hrl").
+-include_lib("couch/include/couch_eunit.hrl").
+
+-define(USER, "chttpd_changes_test_admin").
+-define(PASS, "pass").
+-define(AUTH, {basic_auth, {?USER, ?PASS}}).
+-define(JSON, {"Content-Type", "application/json"}).
+
+-define(DOC1, <<"doc1">>).
+-define(DDC2, <<"_design/doc2">>).
+-define(DOC3, <<"doc3">>).
+-define(REVA, <<"a">>).
+-define(REVB, <<"b">>).
+-define(REVC, <<"c">>).
+-define(DELETED, true).
+-define(LEAFREV, false).
+
+% doc1 starts as rev-a, then gets 2 conflicting revisions b and c
+% ddoc2 starts as deleted at rev-a, then gets re-created as rev-c
+% doc3 starts as rev-a, then gets deleted as rev-c
+%
+test_docs() ->
+    [
+        {?DOC1, [?REVA], ?LEAFREV},
+        {?DDC2, [?REVA], ?DELETED},
+        {?DOC3, [?REVA], ?LEAFREV},
+        {?DOC1, [?REVB, ?REVA], ?LEAFREV},
+        {?DOC1, [?REVC, ?REVA], ?LEAFREV},
+        {?DOC3, [?REVB, ?REVA], ?DELETED},
+        {?DDC2, [?REVC, ?REVA], ?LEAFREV}
+    ].
+
+% Thesa are run against a Q=1, N=1 db, so we can make
+% some stronger assumptions about the exact Seq prefixes
+% returned sequences will have
+%
+changes_test_() ->
+    {
+        setup,
+        fun setup_basic/0,
+        fun teardown_basic/1,
+        with([
+            ?TDEF(t_basic),
+            ?TDEF(t_basic_post),
+            ?TDEF(t_continuous),
+            ?TDEF(t_continuous_zero_timeout),
+            ?TDEF(t_longpoll),
+            ?TDEF(t_limit_zero),
+            ?TDEF(t_continuous_limit_zero),
+            ?TDEF(t_limit_one),
+            ?TDEF(t_since_now),
+            ?TDEF(t_continuous_since_now),
+            ?TDEF(t_longpoll_since_now),
+            ?TDEF(t_style_all_docs),
+            ?TDEF(t_reverse),
+            ?TDEF(t_continuous_reverse),
+            ?TDEF(t_reverse_limit_zero),
+            ?TDEF(t_reverse_limit_one),
+            ?TDEF(t_seq_interval),
+            ?TDEF(t_selector_filter),
+            ?TDEF(t_design_filter),
+            ?TDEF(t_docs_id_filter),
+            ?TDEF(t_docs_id_filter_over_limit)
+        ])
+    }.
+
+% For Q=8 sharded dbs, unlike Q=1, we cannot make strong
+% assumptions about the exact sequence IDs for each row
+% so we'll test all the changes return and that the sequences
+% are increasing.
+%
+changes_q8_test_() ->
+    {
+        setup,
+        fun setup_q8/0,
+        fun teardown_basic/1,
+        with([
+            ?TDEF(t_basic_q8),
+            ?TDEF(t_continuous_q8),
+            ?TDEF(t_limit_zero),
+            ?TDEF(t_limit_one_q8),
+            ?TDEF(t_since_now),
+            ?TDEF(t_longpoll_since_now),
+            ?TDEF(t_reverse_q8),
+            ?TDEF(t_reverse_limit_zero),
+            ?TDEF(t_reverse_limit_one_q8),
+            ?TDEF(t_selector_filter),
+            ?TDEF(t_design_filter),
+            ?TDEF(t_docs_id_filter_q8)
+        ])
+    }.
+
+% These tests are separate as they create aditional design docs
+% as they so technically would be order dependent as the sequence

Review Comment:
   Good catch, I didn't explain that very well; I'll update the comment. The idea is that the filter design docs are added to the database during test execution, and each time that happens the last update sequence gets bumped by 1. If one test checks last_seq being 8, for instance, the next test might need to assert 10, and so on. While the order is currently deterministic, that would look confusing and would also prevent inserting a test somewhere in the middle. To avoid that, these tests were moved to a separate suite where setup/teardown runs for each individual test case.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: notifications-unsubscribe@couchdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


[GitHub] [couchdb] big-r81 commented on pull request #4401: Enforce docs ids _changes filter optimization limit

Posted by "big-r81 (via GitHub)" <gi...@apache.org>.
big-r81 commented on PR #4401:
URL: https://github.com/apache/couchdb/pull/4401#issuecomment-1401511199

   Hi,
   
   seeing these errors from `build-report` under Windows:
   
   ```
   Errors
   ======
   
   chttpd_changes_test:97 with (t_design_filter)
   ---------------------------------------------
   
   ::in function chttpd_changes_test:t_design_filter/1 (test/eunit/chttpd_changes_test.erl, line 373)
   in call from eunit_test:run_testfun/1 (eunit_test.erl, line 71)
   in call from eunit_proc:run_test/1 (eunit_proc.erl, line 531)
   in call from eunit_proc:with_timeout/3 (eunit_proc.erl, line 356)
   in call from eunit_proc:handle_test/2 (eunit_proc.erl, line 514)
   in call from eunit_proc:tests_inorder/3 (eunit_proc.erl, line 456)
   in call from eunit_proc:with_timeout/3 (eunit_proc.erl, line 346)
   in call from eunit_proc:run_group/2 (eunit_proc.erl, line 570)
   **error:{assertEqual,[{module,chttpd_changes_test},
                 {line,373},
                 {expression,"Pending"},
                 {expected,2},
                 {value,0}]}
   
   chttpd_changes_test:97 with (t_docs_id_filter_over_limit)
   ---------------------------------------------------------
   
   ::in function chttpd_changes_test:t_docs_id_filter_over_limit/1 (test/eunit/chttpd_changes_test.erl, line 409)
   in call from eunit_test:run_testfun/1 (eunit_test.erl, line 71)
   in call from eunit_proc:run_test/1 (eunit_proc.erl, line 531)
   in call from eunit_proc:with_timeout/3 (eunit_proc.erl, line 356)
   in call from eunit_proc:handle_test/2 (eunit_proc.erl, line 514)
   in call from eunit_proc:tests_inorder/3 (eunit_proc.erl, line 456)
   in call from eunit_proc:with_timeout/3 (eunit_proc.erl, line 346)
   in call from eunit_proc:run_group/2 (eunit_proc.erl, line 570)
   **error:{assertEqual,[{module,chttpd_changes_test},
                 {line,409},
                 {expression,"meck : num_calls ( couch_changes , send_changes_doc_ids , 6 )"},
                 {expected,0},
                 {value,1}]}
   
   chttpd_changes_test:97 with (t_design_filter)
   ---------------------------------------------
   
   ::in function chttpd_changes_test:t_design_filter/1 (test/eunit/chttpd_changes_test.erl, line 373)
   in call from eunit_test:run_testfun/1 (eunit_test.erl, line 71)
   in call from eunit_proc:run_test/1 (eunit_proc.erl, line 531)
   in call from eunit_proc:with_timeout/3 (eunit_proc.erl, line 356)
   in call from eunit_proc:handle_test/2 (eunit_proc.erl, line 514)
   in call from eunit_proc:tests_inorder/3 (eunit_proc.erl, line 456)
   in call from eunit_proc:with_timeout/3 (eunit_proc.erl, line 346)
   in call from eunit_proc:run_group/2 (eunit_proc.erl, line 570)
   **error:{assertEqual,[{module,chttpd_changes_test},
                 {line,373},
                 {expression,"Pending"},
                 {expected,2},
                 {value,0}]}
   
   chttpd_db_test:329 should_return_409_for_put_att_nonexistent_rev
   ----------------------------------------------------------------
   
   ::in function chttpd_db_test:'-should_return_409_for_put_att_nonexistent_rev/1-fun-2-'/1 (test/eunit/chttpd_db_test.erl, line 330)
   in call from eunit_test:run_testfun/1 (eunit_test.erl, line 71)
   in call from eunit_proc:run_test/1 (eunit_proc.erl, line 531)
   in call from eunit_proc:with_timeout/3 (eunit_proc.erl, line 356)
   in call from eunit_proc:handle_test/2 (eunit_proc.erl, line 514)
   in call from eunit_proc:tests_inorder/3 (eunit_proc.erl, line 456)
   in call from eunit_proc:with_timeout/3 (eunit_proc.erl, line 346)
   in call from eunit_proc:run_group/2 (eunit_proc.erl, line 570)
   **error:{badmatch,{error,connection_closed}}
   
   ```


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: notifications-unsubscribe@couchdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


[GitHub] [couchdb] nickva commented on a diff in pull request #4401: Enforce docs ids _changes filter optimization limit

Posted by "nickva (via GitHub)" <gi...@apache.org>.
nickva commented on code in PR #4401:
URL: https://github.com/apache/couchdb/pull/4401#discussion_r1084728026


##########
src/chttpd/test/eunit/chttpd_changes_test.erl:
##########
@@ -0,0 +1,663 @@
+% Licensed under the Apache License, Version 2.0 (the "License"); you may not
+% use this file except in compliance with the License. You may obtain a copy of
+% the License at
+%
+%   http://www.apache.org/licenses/LICENSE-2.0
+%
+% Unless required by applicable law or agreed to in writing, software
+% distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+% WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+% License for the specific language governing permissions and limitations under
+% the License.
+
+-module(chttpd_changes_test).
+
+-include_lib("couch/include/couch_db.hrl").
+-include_lib("couch/include/couch_eunit.hrl").
+
+-define(USER, "chttpd_changes_test_admin").
+-define(PASS, "pass").
+-define(AUTH, {basic_auth, {?USER, ?PASS}}).
+-define(JSON, {"Content-Type", "application/json"}).
+
+-define(DOC1, <<"doc1">>).
+-define(DDC2, <<"_design/doc2">>).
+-define(DOC3, <<"doc3">>).
+-define(REVA, <<"a">>).
+-define(REVB, <<"b">>).
+-define(REVC, <<"c">>).
+-define(DELETED, true).
+-define(LEAFREV, false).
+
+% doc1 starts as rev-a, then gets 2 conflicting revisions b and c
+% ddoc2 starts as deleted at rev-a, then gets re-created as rev-c
+% doc3 starts as rev-a, then gets deleted as rev-c
+%
+test_docs() ->
+    [
+        {?DOC1, [?REVA], ?LEAFREV},
+        {?DDC2, [?REVA], ?DELETED},
+        {?DOC3, [?REVA], ?LEAFREV},
+        {?DOC1, [?REVB, ?REVA], ?LEAFREV},
+        {?DOC1, [?REVC, ?REVA], ?LEAFREV},
+        {?DOC3, [?REVB, ?REVA], ?DELETED},
+        {?DDC2, [?REVC, ?REVA], ?LEAFREV}
+    ].
+
+% Thesa are run against a Q=1, N=1 db, so we can make
+% some stronger assumptions about the exact Seq prefixes
+% returned sequences will have
+%
+changes_test_() ->
+    {
+        setup,
+        fun setup_basic/0,
+        fun teardown_basic/1,
+        with([
+            ?TDEF(t_basic),
+            ?TDEF(t_basic_post),
+            ?TDEF(t_continuous),
+            ?TDEF(t_continuous_zero_timeout),
+            ?TDEF(t_longpoll),
+            ?TDEF(t_limit_zero),
+            ?TDEF(t_continuous_limit_zero),
+            ?TDEF(t_limit_one),
+            ?TDEF(t_since_now),
+            ?TDEF(t_continuous_since_now),
+            ?TDEF(t_longpoll_since_now),
+            ?TDEF(t_style_all_docs),
+            ?TDEF(t_reverse),
+            ?TDEF(t_continuous_reverse),
+            ?TDEF(t_reverse_limit_zero),
+            ?TDEF(t_reverse_limit_one),
+            ?TDEF(t_seq_interval),
+            ?TDEF(t_selector_filter),
+            ?TDEF(t_design_filter),
+            ?TDEF(t_docs_id_filter),
+            ?TDEF(t_docs_id_filter_over_limit)
+        ])
+    }.
+
+% For Q=8 sharded dbs, unlike Q=1, we cannot make strong
+% assumptions about the exact sequence IDs for each row
+% so we'll test all the changes return and that the sequences
+% are increasing.
+%
+changes_q8_test_() ->
+    {
+        setup,
+        fun setup_q8/0,
+        fun teardown_basic/1,
+        with([
+            ?TDEF(t_basic_q8),
+            ?TDEF(t_continuous_q8),
+            ?TDEF(t_limit_zero),
+            ?TDEF(t_limit_one_q8),
+            ?TDEF(t_since_now),
+            ?TDEF(t_longpoll_since_now),
+            ?TDEF(t_reverse_q8),
+            ?TDEF(t_reverse_limit_zero),
+            ?TDEF(t_reverse_limit_one_q8),
+            ?TDEF(t_selector_filter),
+            ?TDEF(t_design_filter),
+            ?TDEF(t_docs_id_filter_q8)
+        ])
+    }.
+
+% These tests are separate as they create aditional design docs
+% as they so technically would be order dependent as the sequence
+% would keep climbing up from test to test. To avoid that run them
+% in a foreach context so setup/teardown happens for each test case.
+%
+changes_js_filters_test_() ->
+    {
+        foreach,
+        fun setup_basic/0,
+        fun teardown_basic/1,
+        [
+            ?TDEF_FE(t_js_filter),
+            ?TDEF_FE(t_js_filter_no_match),
+            ?TDEF_FE(t_js_filter_with_query_param),
+            ?TDEF_FE(t_view_filter),
+            ?TDEF_FE(t_view_filter_no_match)
+        ]
+    }.
+
+t_basic({_, DbUrl}) ->
+    Res = {Seq, Pending, Rows} = changes(DbUrl),
+    ?assertEqual(7, Seq),
+    ?assertEqual(0, Pending),
+    ?assertEqual(
+        [
+            {5, {?DOC1, <<"2-c">>}, ?LEAFREV},
+            {6, {?DOC3, <<"2-b">>}, ?DELETED},
+            {7, {?DDC2, <<"2-c">>}, ?LEAFREV}
+        ],
+        Rows
+    ),
+    % since=0 is the default, so it should look exactly the same
+    ?assertEqual(Res, changes(DbUrl, "?since=0")).
+
+t_basic_q8({_, DbUrl}) ->
+    {Seq, Pending, Rows} = changes(DbUrl),
+    ?assertEqual(7, Seq),
+    ?assertEqual(0, Pending),
+    {Seqs, Revs, _Deleted} = lists:unzip3(Rows),
+    ?assertEqual(
+        [
+            {?DDC2, <<"2-c">>},
+            {?DOC1, <<"2-c">>},
+            {?DOC3, <<"2-b">>}
+        ],
+        lists:sort(Revs)
+    ),
+    ?assertEqual(Seqs, lists:sort(Seqs)).
+
+t_basic_post({_, DbUrl}) ->
+    {Seq, Pending, Rows} = changes_post(DbUrl, #{}),
+    ?assertEqual(7, Seq),
+    ?assertEqual(0, Pending),
+    ?assertEqual(
+        [
+            {5, {?DOC1, <<"2-c">>}, ?LEAFREV},
+            {6, {?DOC3, <<"2-b">>}, ?DELETED},
+            {7, {?DDC2, <<"2-c">>}, ?LEAFREV}
+        ],
+        Rows
+    ).
+
+t_continuous({_, DbUrl}) ->
+    Params = "?feed=continuous&timeout=10",
+    {Seq, Pending, Rows} = changes(DbUrl, Params),
+    ?assertEqual(7, Seq),
+    ?assertEqual(0, Pending),
+    ?assertEqual(
+        [
+            {5, {?DOC1, <<"2-c">>}, ?LEAFREV},
+            {6, {?DOC3, <<"2-b">>}, ?DELETED},
+            {7, {?DDC2, <<"2-c">>}, ?LEAFREV}
+        ],
+        Rows
+    ).
+
+t_continuous_q8({_, DbUrl}) ->
+    Params = "?feed=continuous&timeout=10",
+    {Seq, Pending, Rows} = changes(DbUrl, Params),
+    ?assertEqual(7, Seq),
+    ?assertEqual(0, Pending),
+    {Seqs, Revs, _Deleted} = lists:unzip3(Rows),
+    ?assertEqual(
+        [
+            {?DDC2, <<"2-c">>},
+            {?DOC1, <<"2-c">>},
+            {?DOC3, <<"2-b">>}
+        ],
+        lists:sort(Revs)
+    ),
+    ?assertEqual(Seqs, lists:sort(Seqs)).
+
+t_continuous_zero_timeout({_, DbUrl}) ->
+    Params = "?feed=continuous&timeout=0",
+    {Seq, Pending, Rows} = changes(DbUrl, Params),
+    ?assertEqual(7, Seq),
+    ?assertEqual(0, Pending),
+    ?assertEqual(
+        [
+            {5, {?DOC1, <<"2-c">>}, ?LEAFREV},
+            {6, {?DOC3, <<"2-b">>}, ?DELETED},
+            {7, {?DDC2, <<"2-c">>}, ?LEAFREV}
+        ],
+        Rows
+    ).
+
+t_longpoll({_, DbUrl}) ->
+    Params = "?feed=longpoll",
+    {Seq, Pending, Rows} = changes(DbUrl, Params),
+    ?assertEqual(7, Seq),
+    ?assertEqual(0, Pending),
+    ?assertEqual(
+        [
+            {5, {?DOC1, <<"2-c">>}, ?LEAFREV},
+            {6, {?DOC3, <<"2-b">>}, ?DELETED},
+            {7, {?DDC2, <<"2-c">>}, ?LEAFREV}
+        ],
+        Rows
+    ).
+
+t_limit_zero({_, DbUrl}) ->
+    Params = "?limit=0",
+    ?assertEqual({0, 3, []}, changes(DbUrl, Params)).
+
+t_continuous_limit_zero({_, DbUrl}) ->
+    Params = "?feed=continuous&timeout=10&limit=0",
+    ?assertEqual({0, 3, []}, changes(DbUrl, Params)).
+
+t_limit_one({_, DbUrl}) ->
+    Params = "?limit=1",
+    ?assertEqual(
+        {5, 2, [
+            {5, {?DOC1, <<"2-c">>}, ?LEAFREV}
+        ]},
+        changes(DbUrl, Params)
+    ).
+
+t_limit_one_q8({_, DbUrl}) ->
+    Params = "?limit=1",
+    ?assertMatch(
+        {_, _, [
+            {_, {<<_/binary>>, <<_/binary>>}, _}
+        ]},
+        changes(DbUrl, Params)
+    ).
+
+t_style_all_docs({_, DbUrl}) ->
+    Params = "?style=all_docs",
+    {Seq, Pending, Rows} = changes(DbUrl, Params),
+    ?assertEqual(7, Seq),
+    ?assertEqual(0, Pending),
+    ?assertEqual(
+        [
+            {5, {?DOC1, [<<"2-c">>, <<"2-b">>]}, ?LEAFREV},
+            {6, {?DOC3, <<"2-b">>}, ?DELETED},
+            {7, {?DDC2, <<"2-c">>}, ?LEAFREV}
+        ],
+        Rows
+    ).
+
+t_since_now({_, DbUrl}) ->
+    Params = "?since=now",
+    ?assertEqual({7, 0, []}, changes(DbUrl, Params)).
+
+t_continuous_since_now({_, DbUrl}) ->
+    Params = "?feed=continuous&timeout=10&since=now",
+    ?assertEqual({7, 0, []}, changes(DbUrl, Params)).
+
+t_longpoll_since_now({_, DbUrl}) ->
+    Params = "?feed=longpoll&timeout=10&since=now",
+    ?assertEqual({7, 0, []}, changes(DbUrl, Params)).
+
+t_reverse({_, DbUrl}) ->
+    Params = "?descending=true",
+    {Seq, Pending, Rows} = changes(DbUrl, Params),
+    ?assertEqual(5, Seq),
+    ?assertEqual(-3, Pending),
+    ?assertEqual(
+        [
+            {7, {?DDC2, <<"2-c">>}, ?LEAFREV},
+            {6, {?DOC3, <<"2-b">>}, ?DELETED},
+            {5, {?DOC1, <<"2-c">>}, ?LEAFREV}
+        ],
+        Rows
+    ).
+
+t_continuous_reverse({_, DbUrl}) ->
+    Params = "?feed=continuous&timeout=10&descending=true",
+    {Seq, Pending, Rows} = changes(DbUrl, Params),
+    ?assertEqual(5, Seq),
+    ?assertEqual(-3, Pending),
+    ?assertEqual(
+        [
+            {7, {?DDC2, <<"2-c">>}, ?LEAFREV},
+            {6, {?DOC3, <<"2-b">>}, ?DELETED},
+            {5, {?DOC1, <<"2-c">>}, ?LEAFREV}
+        ],
+        Rows
+    ).
+
+t_reverse_q8({_, DbUrl}) ->
+    Params = "?descending=true",
+    {Seq, Pending, Rows} = changes(DbUrl, Params),
+    ?assertEqual(7, Seq),
+    ?assertEqual(-3, Pending),
+    {Seqs, Revs, _Deleted} = lists:unzip3(Rows),
+    ?assertEqual(
+        [
+            {?DDC2, <<"2-c">>},
+            {?DOC1, <<"2-c">>},
+            {?DOC3, <<"2-b">>}
+        ],
+        lists:sort(Revs)
+    ),
+    ?assertEqual(Seqs, lists:sort(Seqs)).
+
+t_reverse_limit_zero({_, DbUrl}) ->
+    Params = "?descending=true&limit=0",
+    ?assertEqual({7, 0, []}, changes(DbUrl, Params)).
+
+t_reverse_limit_one({_, DbUrl}) ->
+    Params = "?descending=true&limit=1",
+    ?assertEqual(
+        {7, -1, [
+            {7, {?DDC2, <<"2-c">>}, ?LEAFREV}
+        ]},
+        changes(DbUrl, Params)
+    ).
+
+t_reverse_limit_one_q8({_, DbUrl}) ->
+    Params = "?descending=true&limit=1",
+    ?assertMatch(
+        {7, -1, [
+            {_, {<<_/binary>>, <<_/binary>>}, _}
+        ]},
+        changes(DbUrl, Params)
+    ).
+
+t_seq_interval({_, DbUrl}) ->
+    Params = "?seq_interval=3",
+    {Seq, Pending, Rows} = changes(DbUrl, Params),
+    ?assertEqual(7, Seq),
+    ?assertEqual(0, Pending),
+    ?assertEqual(
+        [
+            {null, {?DOC1, <<"2-c">>}, ?LEAFREV},
+            {6, {?DOC3, <<"2-b">>}, ?DELETED},
+            {null, {?DDC2, <<"2-c">>}, ?LEAFREV}
+        ],
+        Rows
+    ).
+
+t_selector_filter({_, DbUrl}) ->
+    Params = "?filter=_selector",
+    Body = #{<<"selector">> => #{<<"_id">> => ?DOC1}},
+    {Seq, Pending, Rows} = changes_post(DbUrl, Body, Params),
+    ?assertEqual(7, Seq),
+    ?assertEqual(0, Pending),
+    ?assertMatch([{_, {?DOC1, <<"2-c">>}, ?LEAFREV}], Rows).
+
+t_design_filter({_, DbUrl}) ->
+    Params = "?filter=_design",
+    {Seq, Pending, Rows} = changes(DbUrl, Params),
+    ?assertEqual(7, Seq),
+    ?assertEqual(2, Pending),
+    ?assertMatch([{_, {?DDC2, <<"2-c">>}, ?LEAFREV}], Rows).
+
+t_docs_id_filter({_, DbUrl}) ->
+    Params = "?filter=_doc_ids",
+    Body = #{<<"doc_ids">> => [?DOC3, ?DOC1]},
+    meck:reset(couch_changes),
+    {_, _, Rows} = changes_post(DbUrl, Body, Params),
+    ?assertEqual(1, meck:num_calls(couch_changes, send_changes_doc_ids, 6)),
+    ?assertEqual(
+        [
+            {5, {?DOC1, <<"2-c">>}, ?LEAFREV},
+            {6, {?DOC3, <<"2-b">>}, ?DELETED}
+        ],
+        Rows
+    ).
+
+t_docs_id_filter_q8({_, DbUrl}) ->
+    Params = "?filter=_doc_ids",
+    Body = #{<<"doc_ids">> => [?DOC3, ?DOC1]},
+    {_, _, Rows} = changes_post(DbUrl, Body, Params),
+    {Seqs, Revs, _Deleted} = lists:unzip3(Rows),
+    ?assertEqual(
+        [
+            {?DOC1, <<"2-c">>},
+            {?DOC3, <<"2-b">>}
+        ],
+        lists:sort(Revs)
+    ),
+    ?assertEqual(Seqs, lists:sort(Seqs)).
+
+t_docs_id_filter_over_limit({_, DbUrl}) ->
+    Params = "?filter=_doc_ids",
+    Body = #{<<"doc_ids">> => [<<"missingdoc">>, ?DOC3, <<"notthere">>, ?DOC1]},
+    meck:reset(couch_changes),
+    {_, _, Rows} = changes_post(DbUrl, Body, Params),
+    ?assertEqual(0, meck:num_calls(couch_changes, send_changes_doc_ids, 6)),
+    ?assertEqual(
+        [
+            {5, {?DOC1, <<"2-c">>}, ?LEAFREV},
+            {6, {?DOC3, <<"2-b">>}, ?DELETED}
+        ],
+        Rows
+    ).
+
+t_js_filter({_, DbUrl}) ->
+    DDocId = "_design/filters",
+    FilterFun = <<"function(doc, req) {return (doc._id == 'doc3')}">>,
+    DDoc = #{<<"filters">> => #{<<"f">> => FilterFun}},
+    DDocUrl = DbUrl ++ "/" ++ DDocId,
+    {_, #{<<"rev">> := Rev, <<"ok">> := true}} = req(put, DDocUrl, DDoc),
+    Params = "?filter=filters/f",
+    {Seq, Pending, Rows} = changes(DbUrl, Params),
+    ?assertEqual(8, Seq),
+    ?assertEqual(0, Pending),
+    ?assertEqual(
+        [
+            {6, {?DOC3, <<"2-b">>}, ?DELETED}
+        ],
+        Rows
+    ),
+    {200, #{}} = req(delete, DDocUrl ++ "?rev=" ++ binary_to_list(Rev)).
+
+t_js_filter_no_match({_, DbUrl}) ->
+    DDocId = "_design/filters",
+    FilterFun = <<"function(doc, req) {return false}">>,
+    DDoc = #{<<"filters">> => #{<<"f">> => FilterFun}},
+    DDocUrl = DbUrl ++ "/" ++ DDocId,
+    {_, #{<<"rev">> := Rev, <<"ok">> := true}} = req(put, DDocUrl, DDoc),
+    Params = "?filter=filters/f",
+    ?assertEqual({8, 0, []}, changes(DbUrl, Params)),
+    {200, #{}} = req(delete, DDocUrl ++ "?rev=" ++ binary_to_list(Rev)).
+
+t_js_filter_with_query_param({_, DbUrl}) ->
+    DDocId = "_design/filters",
+    FilterFun = <<"function(doc, req) {return (req.query.yup == 1)}">>,
+    DDoc = #{<<"filters">> => #{<<"f">> => FilterFun}},
+    DDocUrl = DbUrl ++ "/" ++ DDocId,
+    {_, #{<<"rev">> := Rev, <<"ok">> := true}} = req(put, DDocUrl, DDoc),
+    Params = "?filter=filters/f&yup=1",
+    {Seq, Pending, Rows} = changes(DbUrl, Params),
+    ?assertEqual(8, Seq),
+    ?assertEqual(0, Pending),
+    ?assertMatch(
+        [
+            {5, {?DOC1, <<"2-c">>}, ?LEAFREV},
+            {6, {?DOC3, <<"2-b">>}, ?DELETED},
+            {7, {?DDC2, <<"2-c">>}, ?LEAFREV},
+            {8, {<<"_design/filters">>, <<"1-", _/binary>>}, ?LEAFREV}
+        ],
+        Rows
+    ),
+    {200, #{}} = req(delete, DDocUrl ++ "?rev=" ++ binary_to_list(Rev)).
+
+t_view_filter({_, DbUrl}) ->
+    DDocId = "_design/views",
+    ViewFun = <<"function(doc) {if (doc._id == 'doc1') {emit(1, 1);}}">>,
+    DDoc = #{<<"views">> => #{<<"v">> => #{<<"map">> => ViewFun}}},
+    DDocUrl = DbUrl ++ "/" ++ DDocId,
+    {_, #{<<"rev">> := Rev, <<"ok">> := true}} = req(put, DDocUrl, DDoc),
+    Params = "?filter=_view&view=views/v",
+    {Seq, Pending, Rows} = changes(DbUrl, Params),
+    ?assertEqual(8, Seq),
+    ?assertEqual(0, Pending),
+    ?assertEqual(
+        [
+            {5, {?DOC1, <<"2-c">>}, ?LEAFREV}
+        ],
+        Rows
+    ),
+    {200, #{}} = req(delete, DDocUrl ++ "?rev=" ++ binary_to_list(Rev)).
+
+t_view_filter_no_match({_, DbUrl}) ->
+    DDocId = "_design/views",
+    ViewFun = <<"function(doc) {if (doc._id == 'docX') {emit(1, 1);}}">>,
+    DDoc = #{<<"views">> => #{<<"v">> => #{<<"map">> => ViewFun}}},
+    DDocUrl = DbUrl ++ "/" ++ DDocId,
+    {_, #{<<"rev">> := Rev, <<"ok">> := true}} = req(put, DDocUrl, DDoc),
+    Params = "?filter=_view&view=views/v",
+    ?assertEqual({8, 0, []}, changes(DbUrl, Params)),
+    {200, #{}} = req(delete, DDocUrl ++ "?rev=" ++ binary_to_list(Rev)).
+
+post_doc_ids(DbUrl, Body) ->

Review Comment:
   Sorry. That was a left-over from the benchmark eunit test. Removed it. Thanks!



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: notifications-unsubscribe@couchdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


[GitHub] [couchdb] nickva merged pull request #4401: Enforce docs ids _changes filter optimization limit

Posted by "nickva (via GitHub)" <gi...@apache.org>.
nickva merged PR #4401:
URL: https://github.com/apache/couchdb/pull/4401


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: notifications-unsubscribe@couchdb.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org