Posted to commits@couchdb.apache.org by va...@apache.org on 2022/10/26 03:06:57 UTC

[couchdb] branch main updated: Optimize _bulk_get endpoint

This is an automated email from the ASF dual-hosted git repository.

vatamane pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/couchdb.git


The following commit(s) were added to refs/heads/main by this push:
     new 4e9c5588d Optimize _bulk_get endpoint
4e9c5588d is described below

commit 4e9c5588d765b84742784de5aafa146d14eed11f
Author: Nick Vatamaniuc <va...@gmail.com>
AuthorDate: Thu Oct 20 23:17:57 2022 -0400

    Optimize _bulk_get endpoint
    
    Use the new `fabric:open_revs/3` API implemented in #4201 to optimize the _bulk_get
    HTTP API. Since `open_revs/3` itself is new, allow reverting to individual doc
    fetches using the previous `open_revs/4` API via a config setting, mostly as a
    precautionary measure.
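
    As a usage note, the batched behavior can also be toggled at runtime, for
    example from a remsh; a small sketch (the new test suite uses the same
    `config:set/4` call):

    ```
    % Fall back to individual doc fetches via fabric:open_revs/4
    config:set("chttpd", "bulk_get_use_batches", "false", _Persist = false).
    ```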
    
    The implementation consists of three main parts:
      * Parse and validate args
      * Fetch the docs using `open_revs/3` or `open_revs/4`
      * Emit results as json or multipart, based on the `Accept` header value
    
    Parsing and validation checks for various errors and then returns a map of
    `#{Ref => {DocId, RevOrError, DocOptions}}` and a list of Refs in the original
    argument order. The middle tuple element, `RevOrError`, is notable in that it
    may hold either the revision ID (`[Rev]` or `all`) or an error tuple
    `{error, {Rev, ErrorTag, ErrorReason}}`.
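
    To make the shape concrete, a parsed request could end up looking roughly
    like the sketch below (the doc IDs, revisions, and options are illustrative
    values, not actual runtime data):

    ```
    Ref1 = make_ref(),
    Ref2 = make_ref(),
    Ref3 = make_ref(),
    ArgsRefs = [Ref1, Ref2, Ref3],
    ArgsMap = #{
        % a specific rev was requested, along with atts_since doc options
        Ref1 => {<<"doc1">>, [{2, <<"revb">>}], [{atts_since, [{1, <<"reva">>}]}, attachments]},
        % no rev was given, so all leaf revisions are fetched
        Ref2 => {<<"doc2">>, all, []},
        % the argument was not a JSON object, so an error is stored in its place
        Ref3 => {null, {error, {null, bad_request, <<"document must be a JSON object">>}}, []}
    }.
    ```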
    
    Fetching the docs is fairly straightforward. The slightly interesting aspect
    is that when an error is returned from `open_revs/3`, we have to pretend that
    all the batched docs failed with that error. That is done to preserve the
    "zip" property, where every input argument has its matching result at the
    same position in the results list. Another notable change here is a fix for a
    bug where the error returned from `fabric:open_revs/3,4` was not formatted in
    a way that could be emitted as json, resulting in a function_clause error.
    That is why we call `couch_util:to_binary/1` on it. This was detected by the
    integration tests outlined below and was missed by the previous mocked unit
    test.
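
    As a minimal sketch of why that formatting call is needed (the error term
    here is made up):

    ```
    % A raw fabric error term is not valid EJSON, so encoding it directly
    % would crash the JSON encoder; render it to a binary first.
    Reason = couch_util:to_binary(timeout),
    % Reason is now <<"timeout">> and can be embedded in the JSON error
    % object that is sent back to the client.
    ```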
    
    The last part emits the results as either json or multipart. Here most of the
    changes are cleanups and grouping into separate handler functions. The
    `Accept` header can be either `multipart/related` or `multipart/mixed`, and
    we try to emit the same content type as was passed in the `Accept` header.
    One notable change here: by DRY-ing the filtering of attachments into
    `non_stubbed_attachments/1` we fixed another bug where the multipart result
    returned nonsense when all attachments were stubs. The doc was returned as a
    multipart chunk with content type `multipart/...` instead of
    `application/json`. This was also detected in the integration tests described
    below.
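
    For reference, the json response is built as a single nested map before
    encoding, roughly of this shape (the values are illustrative):

    ```
    #{
        <<"results">> => [
            #{
                <<"id">> => <<"doc1">>,
                <<"docs">> => [
                    % one entry per returned revision or per error
                    #{<<"ok">> => #{<<"_id">> => <<"doc1">>, <<"_rev">> => <<"2-revb">>}},
                    #{<<"error">> => #{
                        <<"id">> => <<"doc1">>,
                        <<"rev">> => <<"1-x">>,
                        <<"error">> => <<"not_found">>,
                        <<"reason">> => <<"missing">>
                    }}
                ]
            }
        ]
    }
    ```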
    
    The largest changes are in the testing area. The previous multipart tests
    used mocks heavily, were quite fragile, and didn't have good coverage. Those
    tests were removed and replaced by new end-to-end tests in
    `chttpd_bulk_get_test.erl`. To make that happen, add a simple multipart
    parser utility function which knows how to parse multipart responses into
    maps. Those maps preserve chunk headers, so we can match them with
    `?assertMatch(...)` fairly easily. The tests try to get decent coverage of
    the `chttpd_db.erl` bulk_get implementation and its utility functions, but
    since they are also end-to-end tests they exercise everything below,
    including the fabric and couch layers as well.
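
    As an illustration, a single parsed chunk for a doc with one attachment
    comes back roughly as a `{Headers, Body}` tuple, with nested multipart parts
    parsed recursively (header names are lowercased; the boundary value below is
    made up):

    ```
    {
        #{
            <<"x-doc-id">> => <<"doc">>,
            <<"x-rev-id">> => <<"2-revb">>,
            <<"content-type">> => <<"multipart/related; boundary=\"abc123\"">>
        },
        [
            % the doc body, decoded from json into a map
            {#{<<"content-type">> => <<"application/json">>},
                #{<<"_id">> => <<"doc">>, <<"_rev">> => <<"2-revb">>}},
            % the attachment part, left as a raw binary
            {#{<<"content-disposition">> => <<"attachment; filename=\"att\"">>},
                <<"thedata">>}
        ]
    }
    ```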
    
    Quick single-node testing, using couchdyno to replicate 1 million docs, shows
    at least a 2x speedup in completing the replication with this PR.
    
    On main:
    
    ```
    r=rep.Rep(); r.replicate_1_to_n_and_compare(1, num=1000000, normal=True)
    330 sec
    ```
    
    With this PR:
    ```
    r=rep.Rep(); r.replicate_1_to_n_and_compare(1, num=1000000, normal=True)
    160 sec
    ```
    
    Individual `_bulk_get` response times show an even larger improvement, about
    an 8x speedup:
    
    On main:
    ```
    [notice] ... POST /cdyno-0000001/_bulk_get?latest=true&revs=true&attachments=false 200 ok 468
    [notice] ... POST /cdyno-0000001/_bulk_get?latest=true&revs=true&attachments=false 200 ok 479
    ```
    
    With this PR:
    ```
    [notice] ... POST /cdyno-0000001/_bulk_get?latest=true&revs=true&attachments=false 200 ok 54
    [notice] ... POST /cdyno-0000001/_bulk_get?latest=true&revs=true&attachments=false 200 ok 61
    ```
    
    Fixes: https://github.com/apache/couchdb/issues/4183
---
 rel/overlay/etc/default.ini                        |   6 +
 src/chttpd/src/chttpd_db.erl                       | 393 +++++----
 src/chttpd/test/eunit/chttpd_bulk_get_test.erl     | 887 ++++++++++++++++++++-
 .../eunit/chttpd_db_bulk_get_multipart_test.erl    | 366 ---------
 src/chttpd/test/eunit/chttpd_db_bulk_get_test.erl  | 372 ---------
 src/docs/src/config/http.rst                       |  12 +
 6 files changed, 1095 insertions(+), 941 deletions(-)

diff --git a/rel/overlay/etc/default.ini b/rel/overlay/etc/default.ini
index 1b1f6111d..6bb2ef475 100644
--- a/rel/overlay/etc/default.ini
+++ b/rel/overlay/etc/default.ini
@@ -182,6 +182,12 @@ bind_address = 127.0.0.1
 ; Set to true to decode + to space in db and doc_id parts.
 ; decode_plus_to_space = true
 
+; Set to false to revert to a previous _bulk_get implementation using single
+; doc fetches internally. Using batches should be faster, however there may be
+; bugs in the new implementation, so expose this option to allow reverting to
+; the old behavior.
+;bulk_get_use_batches = true
+
 ;[jwt_auth]
 ; List of claims to validate
 ; can be the name of a claim like "exp" or a tuple if the claim requires
diff --git a/src/chttpd/src/chttpd_db.erl b/src/chttpd/src/chttpd_db.erl
index 29a9a4157..748b356fd 100644
--- a/src/chttpd/src/chttpd_db.erl
+++ b/src/chttpd/src/chttpd_db.erl
@@ -667,14 +667,7 @@ db_req(#httpd{method = 'POST', path_parts = [_, <<"_bulk_docs">>], user_ctx = Ct
     end;
 db_req(#httpd{path_parts = [_, <<"_bulk_docs">>]} = Req, _Db) ->
     send_method_not_allowed(Req, "POST");
-db_req(
-    #httpd{
-        method = 'POST',
-        path_parts = [_, <<"_bulk_get">>],
-        mochi_req = MochiReq
-    } = Req,
-    Db
-) ->
+db_req(#httpd{method = 'POST', path_parts = [_, <<"_bulk_get">>]} = Req, Db) ->
     couch_stats:increment_counter([couchdb, httpd, bulk_requests]),
     couch_httpd:validate_ctype(Req, "application/json"),
     {JsonProps} = chttpd:json_body_obj(Req),
@@ -682,88 +675,13 @@ db_req(
         undefined ->
             throw({bad_request, <<"Missing JSON list of 'docs'.">>});
         Docs ->
-            #doc_query_args{
-                options = Options0
-            } = bulk_get_parse_doc_query(Req),
+            #doc_query_args{options = Options0} = bulk_get_parse_doc_query(Req),
             Options = [{user_ctx, Req#httpd.user_ctx} | Options0],
-
-            AcceptJson = MochiReq:accepts_content_type("application/json"),
-            AcceptMixedMp = MochiReq:accepts_content_type("multipart/mixed"),
-            AcceptRelatedMp = MochiReq:accepts_content_type("multipart/related"),
-            AcceptMp = not AcceptJson andalso (AcceptMixedMp orelse AcceptRelatedMp),
-            case AcceptMp of
-                false ->
-                    {ok, Resp} = start_json_response(Req, 200),
-                    send_chunk(Resp, <<"{\"results\": [">>),
-                    lists:foldl(
-                        fun(Doc, Sep) ->
-                            {DocId, Results, Options1} = bulk_get_open_doc_revs(
-                                Db,
-                                Doc,
-                                Options
-                            ),
-                            bulk_get_send_docs_json(Resp, DocId, Results, Options1, Sep),
-                            <<",">>
-                        end,
-                        <<"">>,
-                        Docs
-                    ),
-                    send_chunk(Resp, <<"]}">>),
-                    end_json_response(Resp);
-                true ->
-                    OuterBoundary = bulk_get_multipart_boundary(),
-                    MpType =
-                        case AcceptMixedMp of
-                            true ->
-                                "multipart/mixed";
-                            _ ->
-                                "multipart/related"
-                        end,
-                    CType =
-                        {"Content-Type",
-                            MpType ++ "; boundary=\"" ++
-                                ?b2l(OuterBoundary) ++ "\""},
-                    {ok, Resp} = start_chunked_response(Req, 200, [CType]),
-                    lists:foldl(
-                        fun(Doc, _Pre) ->
-                            case bulk_get_open_doc_revs(Db, Doc, Options) of
-                                {_, {ok, []}, _Options1} ->
-                                    ok;
-                                {_, {ok, Results}, Options1} ->
-                                    send_docs_multipart_bulk_get(
-                                        Results,
-                                        Options1,
-                                        OuterBoundary,
-                                        Resp
-                                    );
-                                {DocId, {error, {RevId, Error, Reason}}, _Options1} ->
-                                    Json = ?JSON_ENCODE(
-                                        {[
-                                            {<<"id">>, DocId},
-                                            {<<"rev">>, RevId},
-                                            {<<"error">>, Error},
-                                            {<<"reason">>, Reason}
-                                        ]}
-                                    ),
-                                    couch_httpd:send_chunk(Resp, [
-                                        <<"\r\n--", OuterBoundary/binary>>,
-                                        <<"\r\nContent-Type: application/json; error=\"true\"\r\n\r\n">>,
-                                        Json
-                                    ])
-                            end
-                        end,
-                        <<"">>,
-                        Docs
-                    ),
-                    case Docs of
-                        [] ->
-                            ok;
-                        _ ->
-                            couch_httpd:send_chunk(
-                                Resp, <<"\r\n", "--", OuterBoundary/binary, "--\r\n">>
-                            )
-                    end,
-                    couch_httpd:last_chunk(Resp)
+            {ArgsRefs, ArgsMap} = bulk_get_parse_args(Db, Docs),
+            ResultsMap = bulk_get_docs(Db, ArgsMap, Options),
+            case bulk_get_is_multipart(Req) of
+                false -> bulk_get_ret_json(Req, ArgsRefs, ResultsMap, Options);
+                true -> bulk_get_ret_multipart(Req, ArgsRefs, ResultsMap, Options)
             end
     end;
 db_req(#httpd{path_parts = [_, <<"_bulk_get">>]} = Req, _Db) ->
@@ -1363,7 +1281,7 @@ send_docs_multipart_bulk_get(Results, Options0, OuterBoundary, Resp) ->
                 try
                     JsonBytes = ?JSON_ENCODE(couch_doc:to_json_obj(Doc, Options)),
                     couch_httpd:send_chunk(Resp, <<"\r\n--", OuterBoundary/binary>>),
-                    case Atts of
+                    case non_stubbed_attachments(Atts) of
                         [] ->
                             couch_httpd:send_chunk(
                                 Resp, <<"\r\nContent-Type: application/json\r\n\r\n">>
@@ -2336,15 +2254,13 @@ monitor_attachments(Atts) when is_list(Atts) ->
             case couch_att:fetch(data, Att) of
                 {Fd, _} ->
                     [monitor(process, Fd) | Monitors];
-                stub ->
-                    Monitors;
                 Else ->
                     couch_log:error("~p from couch_att:fetch(data, ~p)", [Else, Att]),
                     Monitors
             end
         end,
         [],
-        Atts
+        non_stubbed_attachments(Atts)
     );
 monitor_attachments(Att) ->
     monitor_attachments([Att]).
@@ -2352,6 +2268,15 @@ monitor_attachments(Att) ->
 demonitor_refs(Refs) when is_list(Refs) ->
     [demonitor(Ref) || Ref <- Refs].
 
+% Return attachments which are not stubs
+non_stubbed_attachments(Atts) when is_list(Atts) ->
+    lists:filter(
+        fun(Att) ->
+            couch_att:fetch(data, Att) =/= stub
+        end,
+        Atts
+    ).
+
 set_namespace(<<"_all_docs">>, Args) ->
     set_namespace(undefined, Args);
 set_namespace(<<"_local_docs">>, Args) ->
@@ -2363,6 +2288,175 @@ set_namespace(NS, #mrargs{} = Args) ->
 
 %% /db/_bulk_get stuff
 
+bulk_get_is_multipart(#httpd{mochi_req = MochiReq}) ->
+    Json = MochiReq:accepts_content_type("application/json"),
+    Mixed = MochiReq:accepts_content_type("multipart/mixed"),
+    Related = MochiReq:accepts_content_type("multipart/related"),
+    not Json andalso (Mixed orelse Related).
+
+bulk_get_docs(Db, #{} = ArgsMap, Options) ->
+    % Sort args by doc ID to hopefully make querying B-trees a bit faster
+    KeyFun = fun({Ref, {DocId, _, _}}) -> {DocId, Ref} end,
+    CmpFun = fun(A, B) -> KeyFun(A) =< KeyFun(B) end,
+    ArgsList = lists:sort(CmpFun, maps:to_list(ArgsMap)),
+    % Split out known errors. Later, before returning, recombine them back into
+    % the final result map.
+    PartFun = fun({_Ref, {_DocId, RevsOrError, _DocOpts}}) ->
+        case RevsOrError of
+            L when is_list(L) -> true;
+            all -> true;
+            {error, _} -> false
+        end
+    end,
+    {ValidArgs, ErrorArgs} = lists:partition(PartFun, ArgsList),
+    UseBatches = config:get_boolean("chttpd", "bulk_get_use_batches", true),
+    Responses =
+        case UseBatches of
+            true -> bulk_get_docs_batched(Db, ValidArgs, Options);
+            false -> bulk_get_docs_individually(Db, ValidArgs, Options)
+        end,
+    MapFun = fun({Ref, {DocId, Response, _}} = RespTuple) ->
+        case Response of
+            [] ->
+                % Remap empty responses to errors. This is a peculiarity of the
+                % _bulk_get HTTP API. If a revision was not specified, `undefined`
+                % must be returned as the error revision ID.
+                #{Ref := {_, Revs, _}} = ArgsMap,
+                RevStr = bulk_get_rev_error(Revs),
+                Error = {RevStr, <<"not_found">>, <<"missing">>},
+                {Ref, {DocId, {error, Error}, []}};
+            [_ | _] = DocRevisions ->
+                chttpd_stats:incr_reads(length(DocRevisions)),
+                RespTuple;
+            _ ->
+                RespTuple
+        end
+    end,
+    % Recombine with the initial known errors and return as a map
+    maps:from_list(lists:map(MapFun, Responses) ++ ErrorArgs).
+
+bulk_get_docs_batched(Db, Args, Options) when is_list(Args) ->
+    % Args is [{Ref, {DocId, Revs, DocOpts}}, ...] but fabric:open_revs/3
+    % accepts [{{DocId, Revs}, DocOpts}, ...] so we need to transform them
+    ArgsFun = fun({_Ref, {DocId, Revs, DocOpts}}) ->
+        {{DocId, Revs}, DocOpts}
+    end,
+    OpenRevsArgs = lists:map(ArgsFun, Args),
+    case fabric:open_revs(Db, OpenRevsArgs, Options) of
+        {ok, Responses} ->
+            ZipFun = fun({Ref, {DocId, _Rev, DocOpts}}, Response) ->
+                {Ref, {DocId, Response, DocOpts ++ Options}}
+            end,
+            lists:zipwith(ZipFun, Args, Responses);
+        {error, Error} ->
+            % Distribute error to all request args, so it looks like they
+            % individually failed with that error
+            MapFun = fun({Ref, {DocId, Revs, _DocOpts}}) ->
+                RevStr = bulk_get_rev_error(Revs),
+                Tag = internal_fabric_error,
+                % This error will be emitted as json so make sure it's rendered
+                % to a string first.
+                Reason = couch_util:to_binary(Error),
+                {Ref, {DocId, {error, {RevStr, Tag, Reason}}, []}}
+            end,
+            lists:map(MapFun, Args)
+    end.
+
+bulk_get_docs_individually(Db, Args, Options) when is_list(Args) ->
+    MapFun = fun({Ref, {DocId, Revs, DocOpts}}) ->
+        case fabric:open_revs(Db, DocId, Revs, DocOpts ++ Options) of
+            {ok, Response} ->
+                {Ref, {DocId, Response, DocOpts}};
+            {error, Error} ->
+                RevStr = bulk_get_rev_error(Revs),
+                Tag = internal_fabric_error,
+                % This error will be emitted as json so make sure it's rendered
+                % to a string first.
+                Reason = couch_util:to_binary(Error),
+                {Ref, {DocId, {error, {RevStr, Tag, Reason}}, []}}
+        end
+    end,
+    lists:map(MapFun, Args).
+
+bulk_get_ret_json(#httpd{} = Req, ArgsRefs, ResultsMap, Options) ->
+    send_json(Req, 200, #{
+        <<"results">> => lists:map(
+            fun(Ref) ->
+                #{Ref := {DocId, Result, DocOpts}} = ResultsMap,
+                % We are about to encode the document into json and some of the
+                % provided general options might affect that so we make sure to
+                % combine all doc options and the general options together
+                AllOptions = DocOpts ++ Options,
+                #{
+                    <<"id">> => DocId,
+                    <<"docs">> => bulk_get_result(DocId, Result, AllOptions)
+                }
+            end,
+            ArgsRefs
+        )
+    }).
+
+bulk_get_result(DocId, {error, {Rev, Error, Reason}}, _Options) ->
+    [bulk_get_json_error_map(DocId, Rev, Error, Reason)];
+bulk_get_result(DocId, [_ | _] = DocRevs, Options) ->
+    MapFun = fun
+        ({ok, Doc}) ->
+            #{<<"ok">> => couch_doc:to_json_obj(Doc, Options)};
+        ({{Error, Reason}, RevId}) ->
+            Rev = couch_doc:rev_to_str(RevId),
+            bulk_get_json_error_map(DocId, Rev, Error, Reason)
+    end,
+    lists:map(MapFun, DocRevs).
+
+bulk_get_json_error_map(DocId, Rev, Error, Reason) ->
+    #{
+        <<"error">> => #{
+            <<"id">> => DocId,
+            <<"rev">> => Rev,
+            <<"error">> => Error,
+            <<"reason">> => Reason
+        }
+    }.
+
+bulk_get_ret_multipart(#httpd{} = Req, ArgsRefs, ResultsMap, Options) ->
+    MochiReq = Req#httpd.mochi_req,
+    Mixed = MochiReq:accepts_content_type("multipart/mixed"),
+    MpType =
+        case Mixed of
+            true -> "multipart/mixed";
+            false -> "multipart/related"
+        end,
+    Boundary = bulk_get_multipart_boundary(),
+    BoundaryCType = MpType ++ "; boundary=\"" ++ ?b2l(Boundary) ++ "\"",
+    CType = {"Content-Type", BoundaryCType},
+    {ok, Resp} = start_chunked_response(Req, 200, [CType]),
+    ForeachFun = fun(Ref) ->
+        #{Ref := {DocId, Result, DocOpts}} = ResultsMap,
+        case Result of
+            [_ | _] = DocRevs ->
+                AllOptions = DocOpts ++ Options,
+                send_docs_multipart_bulk_get(DocRevs, AllOptions, Boundary, Resp);
+            {error, {RevId, Error, Reason}} ->
+                EJson = bulk_get_json_error_map(DocId, RevId, Error, Reason),
+                Json = ?JSON_ENCODE(map_get(<<"error">>, EJson)),
+                ErrCType = <<"Content-Type: application/json">>,
+                Prefix = <<"\r\n", ErrCType/binary, "; error=\"true\"\r\n\r\n">>,
+                ErrorChunk = [<<"\r\n--", Boundary/binary>>, Prefix, Json],
+                couch_httpd:send_chunk(Resp, ErrorChunk)
+        end
+    end,
+    lists:foreach(ForeachFun, ArgsRefs),
+    case ArgsRefs of
+        [] ->
+            % Didn't send any docs, don't need to send a closing boundary
+            ok;
+        [_ | _] ->
+            % Sent at least one doc response, so also send the last boundary
+            EndBoundary = <<"\r\n", "--", Boundary/binary, "--\r\n">>,
+            couch_httpd:send_chunk(Resp, EndBoundary)
+    end,
+    couch_httpd:last_chunk(Resp).
+
 bulk_get_parse_doc_query(Req) ->
     lists:foldl(
         fun({Key, Value}, Args) ->
@@ -2392,60 +2486,62 @@ throw_bad_query_param(Key) when is_binary(Key) ->
     Msg = <<"\"", Key/binary, "\" query parameter is not acceptable">>,
     throw({bad_request, Msg}).
 
-bulk_get_open_doc_revs(Db, {Props}, Options) ->
-    bulk_get_open_doc_revs1(Db, Props, Options, {});
-bulk_get_open_doc_revs(_Db, _Invalid, Options) ->
+% Parse and tag bulk_get arguments. Return a list of argument tags in the same
+% order as they were provided and a map of #{tag => {DocId, RevOrError,
+% DocOpts}}. That list is used to return them in the response in the exact same
+% order.
+%
+bulk_get_parse_args(Db, Docs) ->
+    Fun = fun(Doc, Acc) ->
+        Ref = make_ref(),
+        Arg = {_DocId, _RevOrError, _DocOpts} = bulk_get_parse_arg(Db, Doc),
+        {Ref, Acc#{Ref => Arg}}
+    end,
+    lists:mapfoldr(Fun, #{}, Docs).
+
+bulk_get_parse_arg(Db, {[_ | _] = Props}) ->
+    bulk_get_parse_doc_id(Db, Props);
+bulk_get_parse_arg(_Db, _Invalid) ->
     Error = {null, bad_request, <<"document must be a JSON object">>},
-    {null, {error, Error}, Options}.
+    {null, {error, Error}, []}.
 
-bulk_get_open_doc_revs1(Db, Props, Options, {}) ->
+bulk_get_parse_doc_id(Db, [_ | _] = Props) ->
     case couch_util:get_value(<<"id">>, Props) of
         undefined ->
             Error = {null, bad_request, <<"document id missed">>},
-            {null, {error, Error}, Options};
+            {null, {error, Error}, []};
         DocId ->
             try
                 couch_db:validate_docid(Db, DocId),
-                bulk_get_open_doc_revs1(Db, Props, Options, {DocId})
+                bulk_get_parse_revs(Props, DocId)
             catch
                 throw:{Error, Reason} ->
-                    {DocId, {error, {null, Error, Reason}}, Options}
+                    {DocId, {error, {null, Error, Reason}}, []}
             end
-    end;
-bulk_get_open_doc_revs1(Db, Props, Options, {DocId}) ->
+    end.
+
+bulk_get_parse_revs(Props, DocId) ->
     RevStr = couch_util:get_value(<<"rev">>, Props),
 
     case parse_field(<<"rev">>, RevStr) of
         {error, {RevStr, Error, Reason}} ->
-            {DocId, {error, {RevStr, Error, Reason}}, Options};
+            {DocId, {error, {RevStr, Error, Reason}}, []};
         {ok, undefined} ->
-            bulk_get_open_doc_revs1(Db, Props, Options, {DocId, all});
+            bulk_get_parse_atts_since(Props, DocId, all);
         {ok, Rev} ->
-            bulk_get_open_doc_revs1(Db, Props, Options, {DocId, [Rev]})
-    end;
-bulk_get_open_doc_revs1(Db, Props, Options, {DocId, Revs}) ->
-    AttsSinceStr = couch_util:get_value(<<"atts_since">>, Props),
+            bulk_get_parse_atts_since(Props, DocId, [Rev])
+    end.
 
+bulk_get_parse_atts_since(Props, DocId, Revs) ->
+    AttsSinceStr = couch_util:get_value(<<"atts_since">>, Props),
     case parse_field(<<"atts_since">>, AttsSinceStr) of
         {error, {BadAttsSinceRev, Error, Reason}} ->
-            {DocId, {error, {BadAttsSinceRev, Error, Reason}}, Options};
+            {DocId, {error, {BadAttsSinceRev, Error, Reason}}, []};
         {ok, []} ->
-            bulk_get_open_doc_revs1(Db, Props, Options, {DocId, Revs, Options});
+            {DocId, Revs, []};
         {ok, RevList} ->
-            Options1 = [{atts_since, RevList}, attachments | Options],
-            bulk_get_open_doc_revs1(Db, Props, Options, {DocId, Revs, Options1})
-    end;
-bulk_get_open_doc_revs1(Db, Props, _, {DocId, Revs, Options}) ->
-    case fabric:open_revs(Db, DocId, Revs, Options) of
-        {ok, []} ->
-            RevStr = couch_util:get_value(<<"rev">>, Props),
-            Error = {RevStr, <<"not_found">>, <<"missing">>},
-            {DocId, {error, Error}, Options};
-        {ok, Resps} = Results ->
-            chttpd_stats:incr_reads(length(Resps)),
-            {DocId, Results, Options};
-        Else ->
-            {DocId, Else, Options}
+            Options = [{atts_since, RevList}, attachments],
+            {DocId, Revs, Options}
     end.
 
 parse_field(<<"rev">>, undefined) ->
@@ -2477,47 +2573,12 @@ parse_atts_since([RevStr | Rest], Acc) ->
             Error
     end.
 
-bulk_get_send_docs_json(Resp, DocId, Results, Options, Sep) ->
-    Id = ?JSON_ENCODE(DocId),
-    send_chunk(Resp, [Sep, <<"{\"id\": ">>, Id, <<", \"docs\": [">>]),
-    bulk_get_send_docs_json1(Resp, DocId, Results, Options),
-    send_chunk(Resp, <<"]}">>).
-
-bulk_get_send_docs_json1(Resp, DocId, {error, {Rev, Error, Reason}}, _) ->
-    send_chunk(Resp, [bulk_get_json_error(DocId, Rev, Error, Reason)]);
-bulk_get_send_docs_json1(_Resp, _DocId, {ok, []}, _) ->
-    ok;
-bulk_get_send_docs_json1(Resp, DocId, {ok, Docs}, Options) ->
-    lists:foldl(
-        fun(Result, AccSeparator) ->
-            case Result of
-                {ok, Doc} ->
-                    JsonDoc = couch_doc:to_json_obj(Doc, Options),
-                    Json = ?JSON_ENCODE({[{ok, JsonDoc}]}),
-                    send_chunk(Resp, [AccSeparator, Json]);
-                {{Error, Reason}, RevId} ->
-                    RevStr = couch_doc:rev_to_str(RevId),
-                    Json = bulk_get_json_error(DocId, RevStr, Error, Reason),
-                    send_chunk(Resp, [AccSeparator, Json])
-            end,
-            <<",">>
-        end,
-        <<"">>,
-        Docs
-    ).
-
-bulk_get_json_error(DocId, Rev, Error, Reason) ->
-    ?JSON_ENCODE(
-        {[
-            {error,
-                {[
-                    {<<"id">>, DocId},
-                    {<<"rev">>, Rev},
-                    {<<"error">>, Error},
-                    {<<"reason">>, Reason}
-                ]}}
-        ]}
-    ).
+bulk_get_rev_error(all) ->
+    % When the revision is not defined, respond with `undefined` in the error's
+    % revision field.
+    <<"undefined">>;
+bulk_get_rev_error([{Pos, RevId} = Rev]) when is_integer(Pos), is_binary(RevId) ->
+    couch_doc:rev_to_str(Rev).
 
 -ifdef(TEST).
 -include_lib("eunit/include/eunit.hrl").
diff --git a/src/chttpd/test/eunit/chttpd_bulk_get_test.erl b/src/chttpd/test/eunit/chttpd_bulk_get_test.erl
index 5b76fe756..b3a01bc30 100644
--- a/src/chttpd/test/eunit/chttpd_bulk_get_test.erl
+++ b/src/chttpd/test/eunit/chttpd_bulk_get_test.erl
@@ -19,6 +19,8 @@
 -define(PASS, "pass").
 -define(AUTH, {basic_auth, {?USER, ?PASS}}).
 -define(JSON, {"Content-Type", "application/json"}).
+-define(MP_MIXED, {"Accept", "multipart/mixed"}).
+-define(MP_RELATED, {"Accept", "multipart/related"}).
 
 -define(DOC, <<"doc">>).
 -define(REVA, <<"reva">>).
@@ -29,6 +31,52 @@
 
 -define(DOC_COUNT, 2000).
 
+-define(BULK_GET_TESTS, [
+    ?TDEF(t_invalid_method),
+    ?TDEF(t_empty_request),
+    ?TDEF(t_no_docs),
+    ?TDEF(t_invalid_query_params),
+    ?TDEF(t_invalid_doc),
+    ?TDEF(t_doc_no_id),
+    ?TDEF(t_invalid_doc_id),
+    ?TDEF(t_missing_doc),
+    ?TDEF(t_invalid_rev),
+    ?TDEF(t_missing_rev),
+    ?TDEF(t_doc_all_revs),
+    ?TDEF(t_specific_rev),
+    ?TDEF(t_specific_rev_latest),
+    ?TDEF(t_ancestor_rev_latest),
+    ?TDEF(t_revs_true),
+    ?TDEF(t_attachments_true),
+    ?TDEF(t_atts_since),
+    ?TDEF(t_invalid_atts_since),
+    ?TDEF(t_invalid_atts_since_invalid_rev),
+    ?TDEF(t_atts_since_returns_attachment),
+    ?TDEF(t_atts_since_overrides_attachments_true),
+    ?TDEF(t_atts_since_multiple),
+    ?TDEF(t_atts_since_multiple_attachments_true),
+    ?TDEF(t_missing_rev_latest),
+    ?TDEF(t_fabric_worker_error)
+]).
+
+-define(BULK_GET_MULTIPART_TESTS, [
+    ?TDEF(t_mp_empty_request),
+    ?TDEF(t_mp_no_docs),
+    ?TDEF(t_mp_invalid_doc),
+    ?TDEF(t_mp_doc_no_id),
+    ?TDEF(t_mp_invalid_doc_id),
+    ?TDEF(t_mp_missing_doc),
+    ?TDEF(t_mp_invalid_rev),
+    ?TDEF(t_mp_missing_rev),
+    ?TDEF(t_mp_doc_all_revs),
+    ?TDEF(t_mp_specific_rev),
+    ?TDEF(t_mp_specific_rev_multipart_related),
+    ?TDEF(t_mp_revs_true),
+    ?TDEF(t_mp_atts_since),
+    ?TDEF(t_mp_atts_since_returns_attachment),
+    ?TDEF(t_mp_atts_since_overrides_attachments_true)
+]).
+
 test_docs_revs() ->
     [
         {?DOC, [?REVA]},
@@ -41,27 +89,31 @@ bulk_get_test_() ->
         setup,
         fun setup_basic/0,
         fun teardown/1,
-        with([
-            ?TDEF(t_empty_request),
-            ?TDEF(t_no_docs),
-            ?TDEF(t_invalid_doc),
-            ?TDEF(t_doc_no_id),
-            ?TDEF(t_missing_doc),
-            ?TDEF(t_invalid_rev),
-            ?TDEF(t_missing_rev),
-            ?TDEF(t_doc_all_revs),
-            ?TDEF(t_specific_rev),
-            ?TDEF(t_specific_rev_latest),
-            ?TDEF(t_ancestor_rev_latest),
-            ?TDEF(t_revs_true),
-            ?TDEF(t_attachments_true),
-            ?TDEF(t_atts_since),
-            ?TDEF(t_atts_since_returns_attachment),
-            ?TDEF(t_atts_since_overrides_attachments_true),
-            ?TDEF(t_atts_since_multiple),
-            ?TDEF(t_atts_since_multiple_attachments_true),
-            ?TDEF(t_missing_rev_latest)
-        ])
+        with(?BULK_GET_TESTS)
+    }.
+
+bulk_get_multipart_test_() ->
+    {
+        setup,
+        fun setup_basic/0,
+        fun teardown/1,
+        with(?BULK_GET_MULTIPART_TESTS)
+    }.
+
+bulk_get_test_no_batches_test_() ->
+    {
+        setup,
+        fun setup_no_batches/0,
+        fun teardown_no_batches/1,
+        with(?BULK_GET_TESTS)
+    }.
+
+bulk_get_multipart_no_batches_test_() ->
+    {
+        setup,
+        fun setup_no_batches/0,
+        fun teardown_no_batches/1,
+        with(?BULK_GET_MULTIPART_TESTS)
     }.
 
 bulk_get_multiple_docs_test_() ->
@@ -70,18 +122,56 @@ bulk_get_multiple_docs_test_() ->
         fun setup_multiple/0,
         fun teardown/1,
         [
-            ?TDEF_FE(t_multiple_docs, 10)
+            ?TDEF_FE(t_multiple_docs, 10),
+            ?TDEF_FE(t_mp_multiple_docs, 10)
         ]
     }.
 
+bulk_get_multiple_docs_no_batches_test_() ->
+    {
+        foreach,
+        fun setup_multiple_no_batches/0,
+        fun teardown/1,
+        [
+            ?TDEF_FE(t_multiple_docs, 10),
+            ?TDEF_FE(t_mp_multiple_docs, 10)
+        ]
+    }.
+
+t_invalid_method({_, DbUrl}) ->
+    ?assertMatch({405, _}, req(put, DbUrl ++ "/_bulk_get", #{})).
+
 t_empty_request({_, DbUrl}) ->
     {Code, Res} = bulk_get(DbUrl, []),
     ?assertEqual(200, Code),
     ?assertEqual([], Res).
 
+t_mp_empty_request({_, DbUrl}) ->
+    {Code, Res} = bulk_get_mp(DbUrl, []),
+    ?assertEqual(200, Code),
+    ?assertEqual([], Res).
+
 t_no_docs({_, DbUrl}) ->
-    {Code, #{}} = req(post, DbUrl ++ "/_bulk_get", #{}),
-    ?assertEqual(400, Code).
+    {Code, Res} = req(post, DbUrl ++ "/_bulk_get", #{}),
+    ?assertEqual(400, Code),
+    ?assertMatch(#{<<"error">> := <<"bad_request">>}, Res).
+
+t_mp_no_docs({_, DbUrl}) ->
+    {Code, Res} = req_mp(post, DbUrl ++ "/_bulk_get", #{}, ?MP_MIXED),
+    ?assertEqual(400, Code),
+    ?assertMatch(#{<<"error">> := <<"bad_request">>}, Res).
+
+t_invalid_query_params({_, DbUrl}) ->
+    Docs = #{<<"docs">> => [#{<<"id">> => ?DOC, <<"rev">> => <<"2-revb">>}]},
+    Params = ["rev", "open_revs", "atts_since", "w", "new_edits"],
+    lists:foreach(
+        fun(Param) ->
+            {Code, Res} = req(post, DbUrl ++ "/_bulk_get?" ++ Param, Docs),
+            ?assertEqual(400, Code),
+            ?assertMatch(#{<<"error">> := <<"bad_request">>}, Res)
+        end,
+        Params
+    ).
 
 t_invalid_doc({_, DbUrl}) ->
     {Code, Res} = bulk_get(DbUrl, [<<"foo">>]),
@@ -104,6 +194,26 @@ t_invalid_doc({_, DbUrl}) ->
         Res
     ).
 
+t_mp_invalid_doc({_, DbUrl}) ->
+    {Code, Res} = bulk_get_mp(DbUrl, [<<"foo">>]),
+    ?assertEqual(200, Code),
+    ?assertMatch([{#{}, #{}}], Res),
+    [{ChunkHeaders, Error}] = Res,
+    ?assertMatch(
+        #{
+            <<"content-type">> := <<"application/json; error=\"true\"">>
+        },
+        ChunkHeaders
+    ),
+    ?assertMatch(
+        #{
+            <<"error">> := <<"bad_request">>,
+            <<"id">> := null,
+            <<"rev">> := null
+        },
+        Error
+    ).
+
 t_doc_no_id({_, DbUrl}) ->
     {Code, Res} = bulk_get(DbUrl, [#{<<"rev">> => <<"1-foo">>}]),
     ?assertEqual(200, Code),
@@ -125,6 +235,67 @@ t_doc_no_id({_, DbUrl}) ->
         Res
     ).
 
+t_mp_doc_no_id({_, DbUrl}) ->
+    {Code, Res} = bulk_get_mp(DbUrl, [#{<<"rev">> => <<"1-foo">>}]),
+    ?assertEqual(200, Code),
+    ?assertMatch([{#{}, #{}}], Res),
+    [{ChunkHeaders, Error}] = Res,
+    ?assertMatch(
+        #{
+            <<"content-type">> := <<"application/json; error=\"true\"">>
+        },
+        ChunkHeaders
+    ),
+    ?assertMatch(
+        #{
+            <<"error">> := <<"bad_request">>,
+            <<"id">> := null,
+            <<"rev">> := null
+        },
+        Error
+    ).
+
+t_invalid_doc_id({_, DbUrl}) ->
+    {Code, Res} = bulk_get(DbUrl, [#{<<"id">> => <<>>, <<"rev">> => <<"1-foo">>}]),
+    ?assertEqual(200, Code),
+    ?assertMatch(
+        [
+            #{
+                <<"docs">> := [
+                    #{
+                        <<"error">> := #{
+                            <<"id">> := <<>>,
+                            <<"rev">> := null,
+                            <<"error">> := <<"illegal_docid">>
+                        }
+                    }
+                ],
+                <<"id">> := <<>>
+            }
+        ],
+        Res
+    ).
+
+t_mp_invalid_doc_id({_, DbUrl}) ->
+    {Code, Res} = bulk_get_mp(DbUrl, [#{<<"id">> => <<>>, <<"rev">> => <<"1-foo">>}]),
+    ?assertEqual(200, Code),
+    ?assertMatch([{#{}, #{}}], Res),
+    [{ChunkHeaders, Error}] = Res,
+    ?assertMatch(
+        #{
+            <<"content-type">> := <<"application/json; error=\"true\"">>
+        },
+        ChunkHeaders
+    ),
+    ?assertMatch(
+        #{
+            <<"id">> := <<>>,
+            <<"rev">> := null,
+            <<"error">> := <<"illegal_docid">>
+        },
+        Error
+    ).
+
 t_missing_doc({_, DbUrl}) ->
     {Code, Res} = bulk_get(DbUrl, [#{<<"id">> => <<"missing">>}]),
     ?assertEqual(200, Code),
@@ -146,6 +317,26 @@ t_missing_doc({_, DbUrl}) ->
         Res
     ).
 
+t_mp_missing_doc({_, DbUrl}) ->
+    {Code, Res} = bulk_get_mp(DbUrl, [#{<<"id">> => <<"missing">>}]),
+    ?assertEqual(200, Code),
+    ?assertMatch([{#{}, #{}}], Res),
+    [{ChunkHeaders, Error}] = Res,
+    ?assertMatch(
+        #{
+            <<"content-type">> := <<"application/json; error=\"true\"">>
+        },
+        ChunkHeaders
+    ),
+    ?assertMatch(
+        #{
+            <<"error">> := <<"not_found">>,
+            <<"id">> := <<"missing">>,
+            <<"rev">> := <<"undefined">>
+        },
+        Error
+    ).
+
 t_invalid_rev({_, DbUrl}) ->
     Doc = #{<<"id">> => ?DOC, <<"rev">> => 42},
     {Code, Res} = bulk_get(DbUrl, [Doc]),
@@ -168,6 +359,27 @@ t_invalid_rev({_, DbUrl}) ->
         Res
     ).
 
+t_mp_invalid_rev({_, DbUrl}) ->
+    Doc = #{<<"id">> => ?DOC, <<"rev">> => 42},
+    {Code, Res} = bulk_get_mp(DbUrl, [Doc]),
+    ?assertEqual(200, Code),
+    ?assertMatch([{#{}, #{}}], Res),
+    [{ChunkHeaders, Error}] = Res,
+    ?assertMatch(
+        #{
+            <<"content-type">> := <<"application/json; error=\"true\"">>
+        },
+        ChunkHeaders
+    ),
+    ?assertMatch(
+        #{
+            <<"error">> := <<"bad_request">>,
+            <<"id">> := ?DOC,
+            <<"rev">> := 42
+        },
+        Error
+    ).
+
 t_missing_rev({_, DbUrl}) ->
     Doc = #{<<"id">> => ?DOC, <<"rev">> => <<"1-x">>},
     {Code, Res} = bulk_get(DbUrl, [Doc]),
@@ -190,6 +402,27 @@ t_missing_rev({_, DbUrl}) ->
         Res
     ).
 
+t_mp_missing_rev({_, DbUrl}) ->
+    Doc = #{<<"id">> => ?DOC, <<"rev">> => <<"1-x">>},
+    {Code, Res} = bulk_get_mp(DbUrl, [Doc]),
+    ?assertEqual(200, Code),
+    ?assertMatch([{#{}, #{}}], Res),
+    [{ChunkHeaders, Error}] = Res,
+    ?assertMatch(
+        #{
+            <<"content-type">> := <<"application/json; error=\"true\"">>
+        },
+        ChunkHeaders
+    ),
+    ?assertMatch(
+        #{
+            <<"error">> := <<"not_found">>,
+            <<"reason">> := <<"missing">>,
+            <<"rev">> := <<"1-x">>
+        },
+        Error
+    ).
+
 t_doc_all_revs({_, DbUrl}) ->
     {Code, Res} = bulk_get(DbUrl, [#{<<"id">> => ?DOC}]),
     ?assertEqual(200, Code),
@@ -222,6 +455,100 @@ t_doc_all_revs({_, DbUrl}) ->
         Res
     ).
 
+t_mp_doc_all_revs({_, DbUrl}) ->
+    {Code, Res0} = bulk_get_mp(DbUrl, [#{<<"id">> => ?DOC}]),
+    ?assertEqual(200, Code),
+    ?assertMatch([{#{}, [_, _]}, {#{}, [_, _]}], Res0),
+
+    % Sort to ensure we get a deterministic order for results
+    CmpFun = fun({#{<<"x-rev-id">> := A}, _}, {#{<<"x-rev-id">> := B}, _}) ->
+        A =< B
+    end,
+    Res = lists:sort(CmpFun, Res0),
+
+    [
+        {ChunkHeaders1, [Doc1, AttA]},
+        {ChunkHeaders2, [Doc2, AttB]}
+    ] = Res,
+
+    % Chunk headers
+    ?assertMatch(
+        #{
+            <<"x-doc-id">> := ?DOC,
+            <<"x-rev-id">> := <<"2-revb">>
+        },
+        ChunkHeaders1
+    ),
+
+    ?assertMatch(
+        #{
+            <<"x-doc-id">> := ?DOC,
+            <<"x-rev-id">> := <<"2-revc">>
+        },
+        ChunkHeaders2
+    ),
+
+    % Doc bodies
+    ?assertMatch({#{}, #{}}, Doc1),
+    {DocHeaders1, DocBody1} = Doc1,
+    ?assertMatch(
+        #{
+            <<"content-type">> := <<"application/json">>
+        },
+        DocHeaders1
+    ),
+    ?assertMatch(
+        #{
+            <<"_id">> := ?DOC,
+            <<"_rev">> := <<"2-revb">>,
+            <<"_attachments">> := #{
+                ?ATT := #{<<"follows">> := true}
+            }
+        },
+        DocBody1
+    ),
+
+    ?assertMatch({#{}, #{}}, Doc2),
+    {DocHeaders2, DocBody2} = Doc2,
+    ?assertMatch(
+        #{
+            <<"content-type">> := <<"application/json">>
+        },
+        DocHeaders2
+    ),
+    ?assertMatch(
+        #{
+            <<"_id">> := ?DOC,
+            <<"_rev">> := <<"2-revc">>,
+            <<"_attachments">> := #{
+                ?ATT := #{<<"follows">> := true}
+            }
+        },
+        DocBody2
+    ),
+
+    % 2-revb attachments
+    ?assertMatch({#{}, <<_/binary>>}, AttA),
+    {AttAHeaders, AttAData} = AttA,
+    ?assertMatch(
+        #{
+            <<"content-disposition">> := <<"attachment; filename=\"att\"">>
+        },
+        AttAHeaders
+    ),
+    ?assertEqual(<<"thedata">>, AttAData),
+
+    % 2-revc attachments
+    ?assertMatch({#{}, <<_/binary>>}, AttB),
+    {AttBHeaders, AttBData} = AttB,
+    ?assertMatch(
+        #{
+            <<"content-disposition">> := <<"attachment; filename=\"att\"">>
+        },
+        AttBHeaders
+    ),
+    ?assertEqual(<<"thedata">>, AttBData).
+
 t_specific_rev({_, DbUrl}) ->
     Doc = #{<<"id">> => ?DOC, <<"rev">> => <<"2-revb">>},
     {Code, Res} = bulk_get(DbUrl, [Doc]),
@@ -246,6 +573,102 @@ t_specific_rev({_, DbUrl}) ->
         Res
     ).
 
+t_mp_specific_rev({_, DbUrl}) ->
+    Doc = #{<<"id">> => ?DOC, <<"rev">> => <<"2-revb">>},
+    {Code, Res} = bulk_get_mp(DbUrl, [Doc]),
+    ?assertEqual(200, Code),
+    ?assertMatch([{#{}, [_, _]}], Res),
+
+    [{ChunkHeaders, [Doc1, Att]}] = Res,
+
+    % Whole doc + att chunk headers
+    ?assertMatch(
+        #{
+            <<"x-doc-id">> := ?DOC,
+            <<"x-rev-id">> := <<"2-revb">>
+        },
+        ChunkHeaders
+    ),
+
+    % Doc body
+    ?assertMatch({#{}, #{}}, Doc1),
+    {DocHeaders, DocBody} = Doc1,
+    ?assertMatch(
+        #{
+            <<"content-type">> := <<"application/json">>
+        },
+        DocHeaders
+    ),
+    ?assertMatch(
+        #{
+            <<"_id">> := ?DOC,
+            <<"_rev">> := <<"2-revb">>,
+            <<"_attachments">> := #{
+                ?ATT := #{<<"follows">> := true}
+            }
+        },
+        DocBody
+    ),
+
+    % Att
+    ?assertMatch({#{}, <<_/binary>>}, Att),
+    {AttHeaders, AttData} = Att,
+    ?assertMatch(
+        #{
+            <<"content-disposition">> := <<"attachment; filename=\"att\"">>
+        },
+        AttHeaders
+    ),
+    ?assertEqual(<<"thedata">>, AttData).
+
+t_mp_specific_rev_multipart_related({_, DbUrl}) ->
+    Doc = #{<<"id">> => ?DOC, <<"rev">> => <<"2-revb">>},
+    {Code, Res} = bulk_get_mp(DbUrl, [Doc], "", ?MP_RELATED),
+    ?assertEqual(200, Code),
+    ?assertMatch([{#{}, [_, _]}], Res),
+
+    [{ChunkHeaders, [Doc1, Att]}] = Res,
+
+    % Whole doc + att chunk headers
+    ?assertMatch(
+        #{
+            <<"x-doc-id">> := ?DOC,
+            <<"x-rev-id">> := <<"2-revb">>
+        },
+        ChunkHeaders
+    ),
+
+    % Doc body
+    ?assertMatch({#{}, #{}}, Doc1),
+    {DocHeaders, DocBody} = Doc1,
+    ?assertMatch(
+        #{
+            <<"content-type">> := <<"application/json">>
+        },
+        DocHeaders
+    ),
+    ?assertMatch(
+        #{
+            <<"_id">> := ?DOC,
+            <<"_rev">> := <<"2-revb">>,
+            <<"_attachments">> := #{
+                ?ATT := #{<<"follows">> := true}
+            }
+        },
+        DocBody
+    ),
+
+    % Att
+    ?assertMatch({#{}, <<_/binary>>}, Att),
+    {AttHeaders, AttData} = Att,
+    ?assertMatch(
+        #{
+            <<"content-disposition">> := <<"attachment; filename=\"att\"">>
+        },
+        AttHeaders
+    ),
+    ?assertEqual(<<"thedata">>, AttData).
+
 t_specific_rev_latest({_, DbUrl}) ->
     Doc = #{<<"id">> => ?DOC, <<"rev">> => <<"2-revb">>},
     {Code, Res} = bulk_get(DbUrl, [Doc], "?latest=true"),
@@ -332,6 +755,59 @@ t_revs_true({_, DbUrl}) ->
         Res
     ).
 
+t_mp_revs_true({_, DbUrl}) ->
+    Doc = #{
+        <<"id">> => ?DOC,
+        <<"rev">> => <<"1-reva">>
+    },
+    {Code, Res} = bulk_get_mp(DbUrl, [Doc], "?revs=true"),
+    ?assertEqual(200, Code),
+    ?assertMatch([{#{}, [_, _]}], Res),
+
+    [{ChunkHeaders, [Doc1, Att]}] = Res,
+
+    % Whole doc + att chunk headers
+    ?assertMatch(
+        #{
+            <<"x-doc-id">> := ?DOC,
+            <<"x-rev-id">> := <<"1-reva">>
+        },
+        ChunkHeaders
+    ),
+
+    % Doc body
+    ?assertMatch({#{}, #{}}, Doc1),
+    {DocHeaders, DocBody} = Doc1,
+    ?assertMatch(
+        #{
+            <<"content-type">> := <<"application/json">>
+        },
+        DocHeaders
+    ),
+    ?assertMatch(
+        #{
+            <<"_id">> := ?DOC,
+            <<"_rev">> := <<"1-reva">>,
+            <<"_revisions">> :=
+                #{<<"ids">> := [<<"reva">>], <<"start">> := 1},
+            <<"_attachments">> := #{
+                ?ATT := #{<<"follows">> := true}
+            }
+        },
+        DocBody
+    ),
+
+    % Att
+    ?assertMatch({#{}, <<_/binary>>}, Att),
+    {AttHeaders, AttData} = Att,
+    ?assertMatch(
+        #{
+            <<"content-disposition">> := <<"attachment; filename=\"att\"">>
+        },
+        AttHeaders
+    ),
+    ?assertEqual(<<"thedata">>, AttData).
+
 t_attachments_true({_, DbUrl}) ->
     Doc = #{
         <<"id">> => ?DOC,
@@ -389,6 +865,95 @@ t_atts_since({_, DbUrl}) ->
         Res
     ).
 
+t_invalid_atts_since({_, DbUrl}) ->
+    % atts_since is not even a list
+    Doc = #{
+        <<"id">> => ?DOC,
+        <<"rev">> => <<"2-revb">>,
+        <<"atts_since">> => <<"badsince">>
+    },
+    {Code, Res} = bulk_get(DbUrl, [Doc]),
+    ?assertEqual(200, Code),
+    ?assertMatch(
+        [
+            #{
+                <<"docs">> := [
+                    #{
+                        <<"error">> := #{
+                            <<"id">> := ?DOC,
+                            <<"error">> := <<"bad_request">>,
+                            <<"rev">> := <<"badsince">>
+                        }
+                    }
+                ],
+                <<"id">> := ?DOC
+            }
+        ],
+        Res
+    ).
+
+t_invalid_atts_since_invalid_rev({_, DbUrl}) ->
+    % atts_since is a list but the revision is bad
+    Doc = #{
+        <<"id">> => ?DOC,
+        <<"rev">> => <<"2-revb">>,
+        <<"atts_since">> => [<<"badsince">>]
+    },
+    {Code, Res} = bulk_get(DbUrl, [Doc]),
+    ?assertEqual(200, Code),
+    ?assertMatch(
+        [
+            #{
+                <<"docs">> := [
+                    #{
+                        <<"error">> := #{
+                            <<"id">> := ?DOC,
+                            <<"error">> := <<"bad_request">>,
+                            <<"rev">> := <<"badsince">>
+                        }
+                    }
+                ],
+                <<"id">> := ?DOC
+            }
+        ],
+        Res
+    ).
+
+t_mp_atts_since({_, DbUrl}) ->
+    % Attachments should not be returned since 2 from 2-revb is not strictly
+    % greater than 1 from our attachment's revpos. As far as multipart encoding
+    % goes, this is an odd corner case: when all attachments are stubs it seems
+    % the doc body is encoded directly as the multipart/* top level part
+    % instead of having another nested application/json doc.
+    Doc = #{
+        <<"id">> => ?DOC,
+        <<"rev">> => <<"2-revb">>,
+        <<"atts_since">> => [<<"2-revb">>]
+    },
+    {Code, Res} = bulk_get_mp(DbUrl, [Doc]),
+    ?assertEqual(200, Code),
+
+    ?assertMatch([{#{}, #{}}], Res),
+    [{ChunkHeaders, DocBody}] = Res,
+
+    ?assertMatch(
+        #{
+            <<"content-type">> := <<"application/json">>
+        },
+        ChunkHeaders
+    ),
+
+    ?assertMatch(
+        #{
+            <<"_id">> := ?DOC,
+            <<"_rev">> := <<"2-revb">>,
+            <<"_attachments">> := #{
+                ?ATT := #{<<"stub">> := true}
+            }
+        },
+        DocBody
+    ).
+
 t_atts_since_returns_attachment({_, DbUrl}) ->
     % 0-baz revpos 0 is less than revpos 1 of our attachment
     Doc = #{
@@ -418,6 +983,60 @@ t_atts_since_returns_attachment({_, DbUrl}) ->
         Res
     ).
 
+t_mp_atts_since_returns_attachment({_, DbUrl}) ->
+    % 0-baz revpos 0 is less than revpos 1 of our attachment
+    Doc = #{
+        <<"id">> => ?DOC,
+        <<"rev">> => <<"2-revb">>,
+        <<"atts_since">> => [<<"0-baz">>]
+    },
+    {Code, Res} = bulk_get_mp(DbUrl, [Doc]),
+    ?assertEqual(200, Code),
+
+    ?assertMatch([{#{}, [_, _]}], Res),
+
+    [{ChunkHeaders, [Doc1, Att]}] = Res,
+
+    % Whole doc + att chunk headers
+    ?assertMatch(
+        #{
+            <<"x-doc-id">> := ?DOC,
+            <<"x-rev-id">> := <<"2-revb">>
+        },
+        ChunkHeaders
+    ),
+
+    % Doc body
+    ?assertMatch({#{}, #{}}, Doc1),
+    {DocHeaders, DocBody} = Doc1,
+    ?assertMatch(
+        #{
+            <<"content-type">> := <<"application/json">>
+        },
+        DocHeaders
+    ),
+    ?assertMatch(
+        #{
+            <<"_id">> := ?DOC,
+            <<"_rev">> := <<"2-revb">>,
+            <<"_attachments">> := #{
+                ?ATT := #{<<"follows">> := true}
+            }
+        },
+        DocBody
+    ),
+
+    % Att
+    ?assertMatch({#{}, <<_/binary>>}, Att),
+    {AttHeaders, AttData} = Att,
+    ?assertMatch(
+        #{
+            <<"content-disposition">> := <<"attachment; filename=\"att\"">>
+        },
+        AttHeaders
+    ),
+    ?assertEqual(<<"thedata">>, AttData).
+
 t_atts_since_overrides_attachments_true({_, DbUrl}) ->
     Doc = #{
         <<"id">> => ?DOC,
@@ -435,9 +1054,7 @@ t_atts_since_overrides_attachments_true({_, DbUrl}) ->
                             <<"_id">> := ?DOC,
                             <<"_rev">> := <<"2-revb">>,
                             <<"_attachments">> := #{
-                                ?ATT := #{
-                                    <<"stub">> := true
-                                }
+                                ?ATT := #{<<"stub">> := true}
                             }
                         }
                     }
@@ -448,6 +1065,36 @@ t_atts_since_overrides_attachments_true({_, DbUrl}) ->
         Res
     ).
 
+t_mp_atts_since_overrides_attachments_true({_, DbUrl}) ->
+    Doc = #{
+        <<"id">> => ?DOC,
+        <<"rev">> => <<"2-revb">>,
+        <<"atts_since">> => [<<"2-revb">>]
+    },
+    {Code, Res} = bulk_get_mp(DbUrl, [Doc], "?attachments=true"),
+    ?assertEqual(200, Code),
+
+    ?assertMatch([{#{}, #{}}], Res),
+    [{ChunkHeaders, DocBody}] = Res,
+
+    ?assertMatch(
+        #{
+            <<"content-type">> := <<"application/json">>
+        },
+        ChunkHeaders
+    ),
+
+    ?assertMatch(
+        #{
+            <<"_id">> := ?DOC,
+            <<"_rev">> := <<"2-revb">>,
+            <<"_attachments">> := #{
+                ?ATT := #{<<"stub">> := true}
+            }
+        },
+        DocBody
+    ).
+
 t_atts_since_multiple({_, DbUrl}) ->
     % Attachment revpos is 1 so we do not expect this attachment body
     Docs = [
@@ -525,8 +1172,9 @@ t_atts_since_multiple({_, DbUrl}) ->
                 <<"docs">> := [
                     #{
                         <<"ok">> := #{
-                            <<"_attachments">> :=
-                                #{?ATT := #{<<"stub">> := true}},
+                            <<"_attachments">> := #{
+                                ?ATT := #{<<"stub">> := true}
+                            },
                             <<"_id">> := ?DOC,
                             <<"_rev">> := <<"2-revb">>
                         }
@@ -618,8 +1266,9 @@ t_atts_since_multiple_attachments_true({_, DbUrl}) ->
                 <<"docs">> := [
                     #{
                         <<"ok">> := #{
-                            <<"_attachments">> :=
-                                #{?ATT := #{<<"data">> := ?ATT_DATA}},
+                            <<"_attachments">> := #{
+                                ?ATT := #{<<"data">> := ?ATT_DATA}
+                            },
                             <<"_id">> := ?DOC,
                             <<"_rev">> := <<"2-revb">>
                         }
@@ -654,6 +1303,34 @@ t_missing_rev_latest({_, DbUrl}) ->
         Res
     ).
 
+t_fabric_worker_error({_, DbUrl}) ->
+    % Check the handling of errors returned by fabric:open_revs/3,4
+    Doc = #{<<"id">> => ?DOC, <<"rev">> => <<"1-reva">>},
+    meck:expect(fabric, open_revs, 3, meck:val({error, fabric_error_foo})),
+    meck:expect(fabric, open_revs, 4, meck:val({error, fabric_error_foo})),
+    {Code, Res} = bulk_get(DbUrl, [Doc], "?latest=true"),
+    meck:expect(fabric, open_revs, 3, meck:passthrough()),
+    meck:expect(fabric, open_revs, 4, meck:passthrough()),
+    ?assertEqual(200, Code),
+    ?assertMatch(
+        [
+            #{
+                <<"docs">> := [
+                    #{
+                        <<"error">> := #{
+                            <<"error">> := <<"internal_fabric_error">>,
+                            <<"id">> := ?DOC,
+                            <<"rev">> := <<"1-reva">>,
+                            <<"reason">> := <<"fabric_error_foo">>
+                        }
+                    }
+                ],
+                <<"id">> := ?DOC
+            }
+        ],
+        Res
+    ).
+
 t_multiple_docs({_, DbUrl}) ->
     Reqs = [#{<<"id">> => integer_to_binary(I)} || I <- lists:seq(1, ?DOC_COUNT)],
     {Code, Res} = bulk_get(DbUrl, Reqs),
@@ -681,6 +1358,25 @@ t_multiple_docs({_, DbUrl}) ->
         lists:zip(lists:seq(1, ?DOC_COUNT), Res)
     ).
 
+t_mp_multiple_docs({_, DbUrl}) ->
+    Reqs = [#{<<"id">> => integer_to_binary(I)} || I <- lists:seq(1, ?DOC_COUNT)],
+    {Code, Res} = bulk_get_mp(DbUrl, Reqs),
+    ?assertEqual(200, Code),
+    ?assertEqual(?DOC_COUNT, length(Res)),
+    lists:foreach(
+        fun({I, Docs}) ->
+            Id = integer_to_binary(I),
+            ?assertMatch(
+                {
+                    #{<<"content-type">> := <<"application/json">>},
+                    #{<<"_id">> := Id, <<"_rev">> := <<"1-reva">>}
+                },
+                Docs
+            )
+        end,
+        lists:zip(lists:seq(1, ?DOC_COUNT), Res)
+    ).
+
 % Utility functions
 
 setup_ctx() ->
@@ -695,21 +1391,42 @@ setup_ctx() ->
     {Ctx, Url, Db}.
 
 teardown({Ctx, DbUrl}) ->
+    meck:unload(),
     delete_db(DbUrl),
-    ok = config:delete("admins", ?USER, _Persist = false),
+    Persist = false,
+    ok = config:delete("admins", ?USER, Persist),
+    ok = config:delete("chttpd", "use_batches", Persist),
     test_util:stop_couch(Ctx).
 
+teardown_no_batches({Ctx, DbUrl}) ->
+    % Verify that the non-batched open_revs/4 was called
+    ?assert(meck:num_calls(fabric, open_revs, 4) >= 1),
+    % Verify that no calls to the batched open_revs/3 were made
+    ?assertEqual(0, meck:num_calls(fabric, open_revs, 3)),
+    teardown({Ctx, DbUrl}).
+
 setup_basic() ->
     {Ctx, Url, Db} = setup_ctx(),
     DbUrl = Url ++ Db,
     ok = create_docs(DbUrl, test_docs_revs()),
+    meck:new(fabric, [passthrough]),
+    {Ctx, DbUrl}.
+
+setup_no_batches() ->
+    {Ctx, DbUrl} = setup_basic(),
+    config:set("chttpd", "bulk_get_use_batches", "false", _Persist = false),
     {Ctx, DbUrl}.
 
 setup_multiple() ->
     {Ctx, Url, Db} = setup_ctx(),
     DbUrl = Url ++ Db,
     Docs = [{integer_to_binary(I), [?REVA]} || I <- lists:seq(1, ?DOC_COUNT)],
-    ok = create_docs(DbUrl, Docs),
+    ok = create_docs(DbUrl, Docs, _WithAtts = false),
+    {Ctx, DbUrl}.
+
+setup_multiple_no_batches() ->
+    {Ctx, DbUrl} = setup_multiple(),
+    config:set("chttpd", "bulk_get_use_batches", "false", _Persist = false),
     {Ctx, DbUrl}.
 
 create_db(Top, Db) ->
@@ -729,6 +1446,9 @@ delete_db(DbUrl) ->
     end.
 
 create_docs(DbUrl, DocRevs) ->
+    create_docs(DbUrl, DocRevs, true).
+
+create_docs(DbUrl, DocRevs, WithAtts) ->
     Docs = lists:map(
         fun({Id, Revs}) ->
             Doc = #{
@@ -738,7 +1458,10 @@ create_docs(DbUrl, DocRevs) ->
                     <<"start">> => length(Revs)
                 }
             },
-            add_att(Doc)
+            case WithAtts of
+                true -> add_att(Doc);
+                false -> Doc
+            end
         end,
         DocRevs
     ),
@@ -756,7 +1479,7 @@ add_att(#{} = Doc) ->
         <<"_attachments">> => #{
             ?ATT => #{
                 <<"revpos">> => 1,
-                <<"content_type">> => <<"text/plain">>,
+                <<"content_type">> => <<"application/octet-stream">>,
                 <<"data">> => ?ATT_DATA
             }
         }
@@ -774,11 +1497,101 @@ bulk_get(DbUrl, Docs, Params) ->
 req(Method, Url) ->
     Headers = [?JSON, ?AUTH],
     {ok, Code, _, Res} = test_request:request(Method, Url, Headers),
-    {Code, jiffy:decode(Res, [return_maps])}.
+    {Code, json_decode(Res)}.
 
 req(Method, Url, #{} = Body) ->
     req(Method, Url, jiffy:encode(Body));
 req(Method, Url, Body) ->
     Headers = [?JSON, ?AUTH],
     {ok, Code, _, Res} = test_request:request(Method, Url, Headers, Body),
-    {Code, jiffy:decode(Res, [return_maps])}.
+    {Code, json_decode(Res)}.
+
+% Handle multipart _bulk_get requests
+
+bulk_get_mp(DbUrl, Docs) ->
+    bulk_get_mp(DbUrl, Docs, "").
+
+bulk_get_mp(DbUrl, Doc, Params) ->
+    bulk_get_mp(DbUrl, Doc, Params, ?MP_MIXED).
+
+bulk_get_mp(DbUrl, Docs, Params, MpType) ->
+    Url = DbUrl ++ "/_bulk_get" ++ Params,
+    {Code, Res} = req_mp(post, Url, #{<<"docs">> => Docs}, MpType),
+    {Code, Res}.
+
+req_mp(Method, Url, #{} = Body, MpType) ->
+    req_mp(Method, Url, jiffy:encode(Body), MpType);
+req_mp(Method, Url, Body, MpType) ->
+    Headers = [?JSON, ?AUTH, MpType],
+    {ok, Code, ResHeaders, Res} = test_request:request(Method, Url, Headers, Body),
+    CType = header_value("Content-Type", ResHeaders),
+    case CType of
+        "application/json" ->
+            {Code, json_decode(Res)};
+        "multipart/" ++ _ ->
+            Chunks = split(Res, CType),
+            {Code, lists:map(fun chunk_parse_fun/1, Chunks)}
+    end.
+
+% In a multipart response, each chunk would have its own headers, content type,
+% and potentially nested parts with their own multipart encoding
+%
+chunk_parse_fun(Chunk) when is_binary(Chunk) ->
+    {Headers, Body} = parse_headers_and_body(Chunk),
+    #{<<"content-type">> := CType} = Headers,
+    case CType of
+        <<"application/json", _/binary>> ->
+            {Headers, json_decode(Body)};
+        <<"multipart/", _/binary>> ->
+            {Headers, lists:map(fun chunk_parse_fun/1, split(Body, CType))};
+        _ ->
+            {Headers, Body}
+    end.
+
+% Split the binary into parts based on the provided boundary. The splitting is
+% naive: after a basic binary:split/3 we trim the parts and drop the empty and
+% trailing bits left over at the start and end.
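+%
+% For instance, with a hypothetical boundary of <<"abc">>, a body like
+%   <<"--abc\r\npart1\r\n--abc\r\npart2\r\n--abc--">>
+% splits into [<<"part1">>, <<"part2">>].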
+%
+split(Chunk, CType) ->
+    Boundary = get_boundary(CType),
+    Parts = binary:split(Chunk, <<"--", Boundary/binary>>, [global]),
+    Parts1 = [string:trim(P) || P <- Parts],
+    [P || P <- Parts1, P =/= <<>> andalso P =/= <<"--">>].
+
+% Parse the headers and body from a binary chunk. This does just enough for the
+% tests and is not a full-featured multipart parser.
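+%
+% For example, a hypothetical header line <<"X-Doc-Id: doc1">> becomes the map
+% entry #{<<"x-doc-id">> => <<"doc1">>}.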
+%
+parse_headers_and_body(Bin) ->
+    [HeadersBin, BodyBin] = binary:split(Bin, <<"\r\n\r\n">>),
+    HeaderLines = binary:split(HeadersBin, <<"\r\n">>, [global, trim_all]),
+    MapFun = fun(Header) ->
+        [Name, Val] = binary:split(Header, <<":">>),
+        {string:lowercase(string:trim(Name)), string:trim(Val)}
+    end,
+    {maps:from_list(lists:map(MapFun, HeaderLines)), BodyBin}.
+
+header_value(Key, Headers) ->
+    header_value(Key, Headers, undefined).
+
+header_value(Key, Headers, Default) ->
+    Headers1 = [{string:to_lower(K), V} || {K, V} <- Headers],
+    case lists:keyfind(string:to_lower(Key), 1, Headers1) of
+        {_, Value} -> Value;
+        _ -> Default
+    end.
+
+get_boundary(CType) when is_binary(CType) ->
+    get_boundary(binary_to_list(CType));
+get_boundary(CType) when is_list(CType) ->
+    case mochiweb_util:parse_header(CType) of
+        {"multipart/" ++ _, HeaderOpts} ->
+            case couch_util:get_value("boundary", HeaderOpts) of
+                undefined -> undefined;
+                B when is_list(B) -> iolist_to_binary(B)
+            end;
+        _ ->
+            undefined
+    end.
+
+json_decode(Bin) when is_binary(Bin) ->
+    jiffy:decode(Bin, [return_maps]).
diff --git a/src/chttpd/test/eunit/chttpd_db_bulk_get_multipart_test.erl b/src/chttpd/test/eunit/chttpd_db_bulk_get_multipart_test.erl
deleted file mode 100644
index 91a3eaf19..000000000
--- a/src/chttpd/test/eunit/chttpd_db_bulk_get_multipart_test.erl
+++ /dev/null
@@ -1,366 +0,0 @@
-%% Licensed under the Apache License, Version 2.0 (the "License"); you may not
-%% use this file except in compliance with the License. You may obtain a copy of
-%% the License at
-%%
-%%   http://www.apache.org/licenses/LICENSE-2.0
-%%
-%% Unless required by applicable law or agreed to in writing, software
-%% distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-%% WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-%% License for the specific language governing permissions and limitations under
-%% the License.
-
--module(chttpd_db_bulk_get_multipart_test).
-
--include_lib("couch/include/couch_eunit.hrl").
--include_lib("couch/include/couch_db.hrl").
-
--define(TIMEOUT, 3000).
-
-setup_all() ->
-    mock(config),
-    mock(chttpd),
-    mock(couch_epi),
-    mock(couch_httpd),
-    mock(couch_stats),
-    mock(fabric),
-    mock(mochireq).
-
-teardown_all(_) ->
-    meck:unload().
-
-setup() ->
-    meck:reset([
-        config,
-        chttpd,
-        couch_epi,
-        couch_httpd,
-        couch_stats,
-        fabric,
-        mochireq
-    ]),
-    spawn_accumulator().
-
-teardown(Pid) ->
-    ok = stop_accumulator(Pid).
-
-bulk_get_test_() ->
-    {
-        "/db/_bulk_get tests",
-        {
-            setup,
-            fun setup_all/0,
-            fun teardown_all/1,
-            {
-                foreach,
-                fun setup/0,
-                fun teardown/1,
-                [
-                    fun should_require_docs_field/1,
-                    fun should_not_accept_specific_query_params/1,
-                    fun should_return_empty_results_on_no_docs/1,
-                    fun should_get_doc_with_all_revs/1,
-                    fun should_validate_doc_with_bad_id/1,
-                    fun should_validate_doc_with_bad_rev/1,
-                    fun should_validate_missing_doc/1,
-                    fun should_validate_bad_atts_since/1,
-                    fun should_include_attachments_when_atts_since_specified/1
-                ]
-            }
-        }
-    }.
-
-should_require_docs_field(_) ->
-    Req = fake_request({[{}]}),
-    Db = test_util:fake_db([{name, <<"foo">>}]),
-    ?_assertThrow({bad_request, _}, chttpd_db:db_req(Req, Db)).
-
-should_not_accept_specific_query_params(_) ->
-    Req = fake_request({[{<<"docs">>, []}]}),
-    Db = test_util:fake_db([{name, <<"foo">>}]),
-    lists:map(
-        fun(Param) ->
-            {Param,
-                ?_assertThrow({bad_request, _}, begin
-                    BadReq = Req#httpd{qs = [{Param, ""}]},
-                    chttpd_db:db_req(BadReq, Db)
-                end)}
-        end,
-        ["rev", "open_revs", "atts_since", "w", "new_edits"]
-    ).
-
-should_return_empty_results_on_no_docs(Pid) ->
-    Req = fake_request({[{<<"docs">>, []}]}),
-    Db = test_util:fake_db([{name, <<"foo">>}]),
-    chttpd_db:db_req(Req, Db),
-    Results = get_results_from_response(Pid),
-    ?_assertEqual([], Results).
-
-should_get_doc_with_all_revs(Pid) ->
-    DocId = <<"docudoc">>,
-    Req = fake_request(DocId),
-    Db = test_util:fake_db([{name, <<"foo">>}]),
-
-    DocRevA = #doc{id = DocId, body = {[{<<"_rev">>, <<"1-ABC">>}]}},
-    DocRevB = #doc{id = DocId, body = {[{<<"_rev">>, <<"1-CDE">>}]}},
-
-    mock_open_revs(all, {ok, [{ok, DocRevA}, {ok, DocRevB}]}),
-    chttpd_db:db_req(Req, Db),
-
-    Result = get_results_from_response(Pid),
-    ?_assertEqual(DocId, couch_util:get_value(<<"_id">>, Result)).
-
-should_validate_doc_with_bad_id(Pid) ->
-    DocId = <<"_docudoc">>,
-
-    Req = fake_request(DocId),
-    Db = test_util:fake_db([{name, <<"foo">>}]),
-    chttpd_db:db_req(Req, Db),
-
-    Result = get_results_from_response(Pid),
-    ?assertEqual(DocId, couch_util:get_value(<<"id">>, Result)),
-
-    ?_assertMatch(
-        [
-            {<<"id">>, DocId},
-            {<<"rev">>, null},
-            {<<"error">>, <<"illegal_docid">>},
-            {<<"reason">>, _}
-        ],
-        Result
-    ).
-
-should_validate_doc_with_bad_rev(Pid) ->
-    DocId = <<"docudoc">>,
-    Rev = <<"revorev">>,
-
-    Req = fake_request(DocId, Rev),
-    Db = test_util:fake_db([{name, <<"foo">>}]),
-    chttpd_db:db_req(Req, Db),
-
-    Result = get_results_from_response(Pid),
-    ?assertEqual(DocId, couch_util:get_value(<<"id">>, Result)),
-
-    ?_assertMatch(
-        [
-            {<<"id">>, DocId},
-            {<<"rev">>, Rev},
-            {<<"error">>, <<"bad_request">>},
-            {<<"reason">>, _}
-        ],
-        Result
-    ).
-
-should_validate_missing_doc(Pid) ->
-    DocId = <<"docudoc">>,
-    Rev = <<"1-revorev">>,
-
-    Req = fake_request(DocId, Rev),
-    Db = test_util:fake_db([{name, <<"foo">>}]),
-    mock_open_revs([{1, <<"revorev">>}], {ok, []}),
-    chttpd_db:db_req(Req, Db),
-
-    Result = get_results_from_response(Pid),
-    ?assertEqual(DocId, couch_util:get_value(<<"id">>, Result)),
-
-    ?_assertMatch(
-        [
-            {<<"id">>, DocId},
-            {<<"rev">>, Rev},
-            {<<"error">>, <<"not_found">>},
-            {<<"reason">>, _}
-        ],
-        Result
-    ).
-
-should_validate_bad_atts_since(Pid) ->
-    DocId = <<"docudoc">>,
-    Rev = <<"1-revorev">>,
-
-    Req = fake_request(DocId, Rev, <<"badattsince">>),
-    Db = test_util:fake_db([{name, <<"foo">>}]),
-    mock_open_revs([{1, <<"revorev">>}], {ok, []}),
-    chttpd_db:db_req(Req, Db),
-
-    Result = get_results_from_response(Pid),
-    ?assertEqual(DocId, couch_util:get_value(<<"id">>, Result)),
-
-    ?_assertMatch(
-        [
-            {<<"id">>, DocId},
-            {<<"rev">>, <<"badattsince">>},
-            {<<"error">>, <<"bad_request">>},
-            {<<"reason">>, _}
-        ],
-        Result
-    ).
-
-should_include_attachments_when_atts_since_specified(_) ->
-    DocId = <<"docudoc">>,
-    Rev = <<"1-revorev">>,
-
-    Req = fake_request(DocId, Rev, [<<"1-abc">>]),
-    Db = test_util:fake_db([{name, <<"foo">>}]),
-    mock_open_revs([{1, <<"revorev">>}], {ok, []}),
-    chttpd_db:db_req(Req, Db),
-
-    ?_assert(
-        meck:called(
-            fabric,
-            open_revs,
-            [
-                '_',
-                DocId,
-                [{1, <<"revorev">>}],
-                [
-                    {atts_since, [{1, <<"abc">>}]},
-                    attachments,
-                    {user_ctx, undefined}
-                ]
-            ]
-        )
-    ).
-
-%% helpers
-
-fake_request(Payload) when is_tuple(Payload) ->
-    #httpd{
-        method = 'POST',
-        path_parts = [<<"db">>, <<"_bulk_get">>],
-        mochi_req = mochireq,
-        req_body = Payload
-    };
-fake_request(DocId) when is_binary(DocId) ->
-    fake_request({[{<<"docs">>, [{[{<<"id">>, DocId}]}]}]}).
-
-fake_request(DocId, Rev) ->
-    fake_request({[{<<"docs">>, [{[{<<"id">>, DocId}, {<<"rev">>, Rev}]}]}]}).
-
-fake_request(DocId, Rev, AttsSince) ->
-    fake_request(
-        {[
-            {<<"docs">>, [
-                {[
-                    {<<"id">>, DocId},
-                    {<<"rev">>, Rev},
-                    {<<"atts_since">>, AttsSince}
-                ]}
-            ]}
-        ]}
-    ).
-
-mock_open_revs(RevsReq0, RevsResp) ->
-    ok = meck:expect(
-        fabric,
-        open_revs,
-        fun(_, _, RevsReq1, _) ->
-            ?assertEqual(RevsReq0, RevsReq1),
-            RevsResp
-        end
-    ).
-
-mock(mochireq) ->
-    ok = meck:new(mochireq, [non_strict]),
-    ok = meck:expect(mochireq, parse_qs, fun() -> [] end),
-    ok = meck:expect(mochireq, accepts_content_type, fun
-        ("multipart/mixed") -> true;
-        ("multipart/related") -> true;
-        (_) -> false
-    end),
-    ok;
-mock(couch_httpd) ->
-    ok = meck:new(couch_httpd, [passthrough]),
-    ok = meck:expect(couch_httpd, validate_ctype, fun(_, _) -> ok end),
-    ok = meck:expect(couch_httpd, last_chunk, fun(_) -> {ok, nil} end),
-    ok = meck:expect(couch_httpd, send_chunk, fun send_chunk/2),
-    ok;
-mock(chttpd) ->
-    ok = meck:new(chttpd, [passthrough]),
-    ok = meck:expect(chttpd, start_json_response, fun(_, _) -> {ok, nil} end),
-    ok = meck:expect(chttpd, start_chunked_response, fun(_, _, _) -> {ok, nil} end),
-    ok = meck:expect(chttpd, end_json_response, fun(_) -> ok end),
-    ok = meck:expect(chttpd, send_chunk, fun send_chunk/2),
-    ok = meck:expect(chttpd, json_body_obj, fun(#httpd{req_body = Body}) -> Body end),
-    ok;
-mock(couch_epi) ->
-    ok = meck:new(couch_epi, [passthrough]),
-    ok = meck:expect(couch_epi, any, fun(_, _, _, _, _) -> false end),
-    ok;
-mock(couch_stats) ->
-    ok = meck:new(couch_stats, [passthrough]),
-    ok = meck:expect(couch_stats, increment_counter, fun(_) -> ok end),
-    ok = meck:expect(couch_stats, increment_counter, fun(_, _) -> ok end),
-    ok = meck:expect(couch_stats, decrement_counter, fun(_) -> ok end),
-    ok = meck:expect(couch_stats, decrement_counter, fun(_, _) -> ok end),
-    ok = meck:expect(couch_stats, update_histogram, fun(_, _) -> ok end),
-    ok = meck:expect(couch_stats, update_gauge, fun(_, _) -> ok end),
-    ok;
-mock(fabric) ->
-    ok = meck:new(fabric, [passthrough]),
-    ok;
-mock(config) ->
-    ok = meck:new(config, [passthrough]),
-    ok = meck:expect(config, get, fun(_, _, Default) -> Default end),
-    ok.
-
-spawn_accumulator() ->
-    Parent = self(),
-    Pid = spawn(fun() -> accumulator_loop(Parent, []) end),
-    erlang:put(chunks_gather, Pid),
-    Pid.
-
-accumulator_loop(Parent, Acc) ->
-    receive
-        {stop, Ref} ->
-            Parent ! {ok, Ref};
-        {get, Ref} ->
-            Parent ! {ok, Ref, Acc},
-            accumulator_loop(Parent, Acc);
-        {put, Ref, Chunk} ->
-            Parent ! {ok, Ref},
-            accumulator_loop(Parent, [Chunk | Acc])
-    end.
-
-stop_accumulator(Pid) ->
-    Ref = make_ref(),
-    Pid ! {stop, Ref},
-    receive
-        {ok, Ref} ->
-            ok
-    after ?TIMEOUT ->
-        throw({timeout, <<"process stop timeout">>})
-    end.
-
-send_chunk(_, []) ->
-    {ok, nil};
-send_chunk(_Req, [H | T] = Chunk) when is_list(Chunk) ->
-    send_chunk(_Req, H),
-    send_chunk(_Req, T);
-send_chunk(_, Chunk) ->
-    Worker = erlang:get(chunks_gather),
-    Ref = make_ref(),
-    Worker ! {put, Ref, Chunk},
-    receive
-        {ok, Ref} -> {ok, nil}
-    after ?TIMEOUT ->
-        throw({timeout, <<"send chunk timeout">>})
-    end.
-
-get_response(Pid) ->
-    Ref = make_ref(),
-    Pid ! {get, Ref},
-    receive
-        {ok, Ref, Acc} ->
-            Acc
-    after ?TIMEOUT ->
-        throw({timeout, <<"get response timeout">>})
-    end.
-
-get_results_from_response(Pid) ->
-    case get_response(Pid) of
-        [] ->
-            [];
-        Result ->
-            {Result1} = ?JSON_DECODE(lists:nth(2, Result)),
-            Result1
-    end.
diff --git a/src/chttpd/test/eunit/chttpd_db_bulk_get_test.erl b/src/chttpd/test/eunit/chttpd_db_bulk_get_test.erl
deleted file mode 100644
index 81dfe098b..000000000
--- a/src/chttpd/test/eunit/chttpd_db_bulk_get_test.erl
+++ /dev/null
@@ -1,372 +0,0 @@
-%% Licensed under the Apache License, Version 2.0 (the "License"); you may not
-%% use this file except in compliance with the License. You may obtain a copy of
-%% the License at
-%%
-%%   http://www.apache.org/licenses/LICENSE-2.0
-%%
-%% Unless required by applicable law or agreed to in writing, software
-%% distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-%% WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-%% License for the specific language governing permissions and limitations under
-%% the License.
-
--module(chttpd_db_bulk_get_test).
-
--include_lib("couch/include/couch_eunit.hrl").
--include_lib("couch/include/couch_db.hrl").
-
--define(TIMEOUT, 3000).
-
-setup_all() ->
-    mock(config),
-    mock(chttpd),
-    mock(couch_epi),
-    mock(couch_httpd),
-    mock(couch_stats),
-    mock(fabric),
-    mock(mochireq).
-
-teardown_all(_) ->
-    meck:unload().
-
-setup() ->
-    spawn_accumulator().
-
-teardown(Pid) ->
-    ok = stop_accumulator(Pid).
-
-bulk_get_test_() ->
-    {
-        "/db/_bulk_get tests",
-        {
-            setup,
-            fun setup_all/0,
-            fun teardown_all/1,
-            {
-                foreach,
-                fun setup/0,
-                fun teardown/1,
-                [
-                    fun should_require_docs_field/1,
-                    fun should_not_accept_specific_query_params/1,
-                    fun should_return_empty_results_on_no_docs/1,
-                    fun should_get_doc_with_all_revs/1,
-                    fun should_validate_doc_with_bad_id/1,
-                    fun should_validate_doc_with_bad_rev/1,
-                    fun should_validate_missing_doc/1,
-                    fun should_validate_bad_atts_since/1,
-                    fun should_include_attachments_when_atts_since_specified/1
-                ]
-            }
-        }
-    }.
-
-should_require_docs_field(_) ->
-    Req = fake_request({[{}]}),
-    ?_assertThrow({bad_request, _}, chttpd_db:db_req(Req, nil)).
-
-should_not_accept_specific_query_params(_) ->
-    Req = fake_request({[{<<"docs">>, []}]}),
-    lists:map(
-        fun(Param) ->
-            {Param,
-                ?_assertThrow({bad_request, _}, begin
-                    BadReq = Req#httpd{qs = [{Param, ""}]},
-                    chttpd_db:db_req(BadReq, nil)
-                end)}
-        end,
-        ["rev", "open_revs", "atts_since", "w", "new_edits"]
-    ).
-
-should_return_empty_results_on_no_docs(Pid) ->
-    Req = fake_request({[{<<"docs">>, []}]}),
-    chttpd_db:db_req(Req, nil),
-    Results = get_results_from_response(Pid),
-    ?_assertEqual([], Results).
-
-should_get_doc_with_all_revs(Pid) ->
-    DocId = <<"docudoc">>,
-    Req = fake_request(DocId),
-
-    RevA = {[{<<"_id">>, DocId}, {<<"_rev">>, <<"1-ABC">>}]},
-    RevB = {[{<<"_id">>, DocId}, {<<"_rev">>, <<"1-CDE">>}]},
-    DocRevA = #doc{id = DocId, body = {[{<<"_rev">>, <<"1-ABC">>}]}},
-    DocRevB = #doc{id = DocId, body = {[{<<"_rev">>, <<"1-CDE">>}]}},
-
-    mock_open_revs(all, {ok, [{ok, DocRevA}, {ok, DocRevB}]}),
-    chttpd_db:db_req(Req, test_util:fake_db([{name, <<"foo">>}])),
-
-    [{Result}] = get_results_from_response(Pid),
-    ?assertEqual(DocId, couch_util:get_value(<<"id">>, Result)),
-
-    Docs = couch_util:get_value(<<"docs">>, Result),
-    ?assertEqual(2, length(Docs)),
-
-    [{DocA0}, {DocB0}] = Docs,
-
-    DocA = couch_util:get_value(<<"ok">>, DocA0),
-    DocB = couch_util:get_value(<<"ok">>, DocB0),
-
-    ?_assertEqual([RevA, RevB], [DocA, DocB]).
-
-should_validate_doc_with_bad_id(Pid) ->
-    DocId = <<"_docudoc">>,
-
-    Req = fake_request(DocId),
-    chttpd_db:db_req(Req, test_util:fake_db([{name, <<"foo">>}])),
-
-    [{Result}] = get_results_from_response(Pid),
-    ?assertEqual(DocId, couch_util:get_value(<<"id">>, Result)),
-
-    Docs = couch_util:get_value(<<"docs">>, Result),
-    ?assertEqual(1, length(Docs)),
-    [{DocResult}] = Docs,
-
-    Doc = couch_util:get_value(<<"error">>, DocResult),
-
-    ?_assertMatch(
-        {[
-            {<<"id">>, DocId},
-            {<<"rev">>, null},
-            {<<"error">>, <<"illegal_docid">>},
-            {<<"reason">>, _}
-        ]},
-        Doc
-    ).
-
-should_validate_doc_with_bad_rev(Pid) ->
-    DocId = <<"docudoc">>,
-    Rev = <<"revorev">>,
-
-    Req = fake_request(DocId, Rev),
-    chttpd_db:db_req(Req, test_util:fake_db([{name, <<"foo">>}])),
-
-    [{Result}] = get_results_from_response(Pid),
-    ?assertEqual(DocId, couch_util:get_value(<<"id">>, Result)),
-
-    Docs = couch_util:get_value(<<"docs">>, Result),
-    ?assertEqual(1, length(Docs)),
-    [{DocResult}] = Docs,
-
-    Doc = couch_util:get_value(<<"error">>, DocResult),
-
-    ?_assertMatch(
-        {[
-            {<<"id">>, DocId},
-            {<<"rev">>, Rev},
-            {<<"error">>, <<"bad_request">>},
-            {<<"reason">>, _}
-        ]},
-        Doc
-    ).
-
-should_validate_missing_doc(Pid) ->
-    DocId = <<"docudoc">>,
-    Rev = <<"1-revorev">>,
-
-    Req = fake_request(DocId, Rev),
-    mock_open_revs([{1, <<"revorev">>}], {ok, []}),
-    chttpd_db:db_req(Req, test_util:fake_db([{name, <<"foo">>}])),
-
-    [{Result}] = get_results_from_response(Pid),
-    ?assertEqual(DocId, couch_util:get_value(<<"id">>, Result)),
-
-    Docs = couch_util:get_value(<<"docs">>, Result),
-    ?assertEqual(1, length(Docs)),
-    [{DocResult}] = Docs,
-
-    Doc = couch_util:get_value(<<"error">>, DocResult),
-
-    ?_assertMatch(
-        {[
-            {<<"id">>, DocId},
-            {<<"rev">>, Rev},
-            {<<"error">>, <<"not_found">>},
-            {<<"reason">>, _}
-        ]},
-        Doc
-    ).
-
-should_validate_bad_atts_since(Pid) ->
-    DocId = <<"docudoc">>,
-    Rev = <<"1-revorev">>,
-
-    Req = fake_request(DocId, Rev, <<"badattsince">>),
-    mock_open_revs([{1, <<"revorev">>}], {ok, []}),
-    chttpd_db:db_req(Req, test_util:fake_db([{name, <<"foo">>}])),
-
-    [{Result}] = get_results_from_response(Pid),
-    ?assertEqual(DocId, couch_util:get_value(<<"id">>, Result)),
-
-    Docs = couch_util:get_value(<<"docs">>, Result),
-    ?assertEqual(1, length(Docs)),
-    [{DocResult}] = Docs,
-
-    Doc = couch_util:get_value(<<"error">>, DocResult),
-
-    ?_assertMatch(
-        {[
-            {<<"id">>, DocId},
-            {<<"rev">>, <<"badattsince">>},
-            {<<"error">>, <<"bad_request">>},
-            {<<"reason">>, _}
-        ]},
-        Doc
-    ).
-
-should_include_attachments_when_atts_since_specified(_) ->
-    DocId = <<"docudoc">>,
-    Rev = <<"1-revorev">>,
-
-    Req = fake_request(DocId, Rev, [<<"1-abc">>]),
-    mock_open_revs([{1, <<"revorev">>}], {ok, []}),
-    chttpd_db:db_req(Req, test_util:fake_db([{name, <<"foo">>}])),
-
-    ?_assert(
-        meck:called(
-            fabric,
-            open_revs,
-            [
-                '_',
-                DocId,
-                [{1, <<"revorev">>}],
-                [
-                    {atts_since, [{1, <<"abc">>}]},
-                    attachments,
-                    {user_ctx, undefined}
-                ]
-            ]
-        )
-    ).
-
-%% helpers
-
-fake_request(Payload) when is_tuple(Payload) ->
-    #httpd{
-        method = 'POST',
-        path_parts = [<<"db">>, <<"_bulk_get">>],
-        mochi_req = mochireq,
-        req_body = Payload
-    };
-fake_request(DocId) when is_binary(DocId) ->
-    fake_request({[{<<"docs">>, [{[{<<"id">>, DocId}]}]}]}).
-
-fake_request(DocId, Rev) ->
-    fake_request({[{<<"docs">>, [{[{<<"id">>, DocId}, {<<"rev">>, Rev}]}]}]}).
-
-fake_request(DocId, Rev, AttsSince) ->
-    fake_request(
-        {[
-            {<<"docs">>, [
-                {[
-                    {<<"id">>, DocId},
-                    {<<"rev">>, Rev},
-                    {<<"atts_since">>, AttsSince}
-                ]}
-            ]}
-        ]}
-    ).
-
-mock_open_revs(RevsReq0, RevsResp) ->
-    ok = meck:expect(
-        fabric,
-        open_revs,
-        fun(_, _, RevsReq1, _) ->
-            ?assertEqual(RevsReq0, RevsReq1),
-            RevsResp
-        end
-    ).
-
-mock(mochireq) ->
-    ok = meck:new(mochireq, [non_strict]),
-    ok = meck:expect(mochireq, parse_qs, fun() -> [] end),
-    ok = meck:expect(mochireq, accepts_content_type, fun(_) -> false end),
-    ok;
-mock(couch_httpd) ->
-    ok = meck:new(couch_httpd, [passthrough]),
-    ok = meck:expect(couch_httpd, validate_ctype, fun(_, _) -> ok end),
-    ok;
-mock(chttpd) ->
-    ok = meck:new(chttpd, [passthrough]),
-    ok = meck:expect(chttpd, start_json_response, fun(_, _) -> {ok, nil} end),
-    ok = meck:expect(chttpd, end_json_response, fun(_) -> ok end),
-    ok = meck:expect(chttpd, send_chunk, fun send_chunk/2),
-    ok = meck:expect(chttpd, json_body_obj, fun(#httpd{req_body = Body}) -> Body end),
-    ok;
-mock(couch_epi) ->
-    ok = meck:new(couch_epi, [passthrough]),
-    ok = meck:expect(couch_epi, any, fun(_, _, _, _, _) -> false end),
-    ok;
-mock(couch_stats) ->
-    ok = meck:new(couch_stats, [passthrough]),
-    ok = meck:expect(couch_stats, increment_counter, fun(_) -> ok end),
-    ok = meck:expect(couch_stats, increment_counter, fun(_, _) -> ok end),
-    ok = meck:expect(couch_stats, decrement_counter, fun(_) -> ok end),
-    ok = meck:expect(couch_stats, decrement_counter, fun(_, _) -> ok end),
-    ok = meck:expect(couch_stats, update_histogram, fun(_, _) -> ok end),
-    ok = meck:expect(couch_stats, update_gauge, fun(_, _) -> ok end),
-    ok;
-mock(fabric) ->
-    ok = meck:new(fabric, [passthrough]),
-    ok;
-mock(config) ->
-    ok = meck:new(config, [passthrough]),
-    ok = meck:expect(config, get, fun(_, _, Default) -> Default end),
-    ok.
-
-spawn_accumulator() ->
-    Parent = self(),
-    Pid = spawn(fun() -> accumulator_loop(Parent, []) end),
-    erlang:put(chunks_gather, Pid),
-    Pid.
-
-accumulator_loop(Parent, Acc) ->
-    receive
-        {stop, Ref} ->
-            Parent ! {ok, Ref};
-        {get, Ref} ->
-            Parent ! {ok, Ref, Acc},
-            accumulator_loop(Parent, Acc);
-        {put, Ref, Chunk} ->
-            Parent ! {ok, Ref},
-            accumulator_loop(Parent, [Chunk | Acc])
-    end.
-
-stop_accumulator(Pid) ->
-    Ref = make_ref(),
-    Pid ! {stop, Ref},
-    receive
-        {ok, Ref} ->
-            ok
-    after ?TIMEOUT ->
-        throw({timeout, <<"process stop timeout">>})
-    end.
-
-send_chunk(_, []) ->
-    {ok, nil};
-send_chunk(_Req, [H | T] = Chunk) when is_list(Chunk) ->
-    send_chunk(_Req, H),
-    send_chunk(_Req, T);
-send_chunk(_, Chunk) ->
-    Worker = erlang:get(chunks_gather),
-    Ref = make_ref(),
-    Worker ! {put, Ref, Chunk},
-    receive
-        {ok, Ref} -> {ok, nil}
-    after ?TIMEOUT ->
-        throw({timeout, <<"send chunk timeout">>})
-    end.
-
-get_response(Pid) ->
-    Ref = make_ref(),
-    Pid ! {get, Ref},
-    receive
-        {ok, Ref, Acc} ->
-            ?JSON_DECODE(iolist_to_binary(lists:reverse(Acc)))
-    after ?TIMEOUT ->
-        throw({timeout, <<"get response timeout">>})
-    end.
-
-get_results_from_response(Pid) ->
-    {Resp} = get_response(Pid),
-    couch_util:get_value(<<"results">>, Resp).
diff --git a/src/docs/src/config/http.rst b/src/docs/src/config/http.rst
index 19f434161..7be15a367 100644
--- a/src/docs/src/config/http.rst
+++ b/src/docs/src/config/http.rst
@@ -232,6 +232,18 @@ HTTP Server Options
            upgrade, it is advisable to review the usage of these configuration
            settings.
 
+    .. config:option:: bulk_get_use_batches :: Use the optimized bulk_get implementation
+
+        .. versionadded:: 3.3
+
+        Set to ``false`` to revert to the previous ``_bulk_get``
+        implementation, which fetches each document individually. Using
+        batches should be faster; however, this option is exposed as a
+        precaution, to allow reverting to the old behavior if bugs are found
+        in the new implementation. ::
+
+            [chttpd]
+            bulk_get_use_batches = true
+
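+        The setting can also be changed at runtime through the node
+        configuration API, for example (host, credentials, and node name below
+        are placeholders)::
+
+            curl -X PUT http://adm:pass@127.0.0.1:5984/_node/_local/_config/chttpd/bulk_get_use_batches \
+                 -d '"false"'
+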
 .. config:section:: httpd :: HTTP Server Options
 
     .. versionchanged:: 3.2 These options were moved to [chttpd] section: