Posted to commits@couchdb.apache.org by ch...@apache.org on 2018/06/22 22:15:29 UTC
[couchdb] branch elixir-suite updated (1c5ae3e -> 656934a)
This is an automated email from the ASF dual-hosted git repository.
chewbranca pushed a change to branch elixir-suite
in repository https://gitbox.apache.org/repos/asf/couchdb.git.
discard 1c5ae3e Port replication.js to replication_test.ex
discard 7351759 Add the user tag to create users declaratively
discard b95b212 Add helper functions for user sessions
discard d26f670 Allow tests to set config values dynamically
discard bd3587a Fix bug when canceling replications
omit 97a71b1 Allow tests to specify the request Content-Type
omit cb0ffe1 Replace header match with regexp
omit 871ff45 Add list of test ports status to README
omit 53dc90f Integrate Elixir suite with `make`
omit 3afa337 Move elixir_suite to test/elixir
omit 9ddbbe5 WIP: Port most of rewrite.js suite
omit f91b181 Port httpotion functionality until released
omit 5bce2d9 Prefer ?w=3 over hacky sleeps
omit 6895fdc Port reduce.js suite
omit 7aba50b Remove extraneous comment
omit 86708b5 Port view_collation.js to view_collation_test.exs
omit 7b8e6c1 Embrace the Elixir
omit 67f6fae Update the context in place for setup
omit 1e0cb25 DRY constant definition
omit b52f778 Ignore erln8.config
omit 7e10c70 Simple .gitignore for sanity
omit 44516be Add simple Makefile for muscle memory
omit 4898c08 Remove module attribute
omit 5c5b60b Port uuids.js to Elixir
omit a9123ae Add support for a config tag
omit 1edae17 Port config.js tests
omit eeaf8c7 Port all_docs.js tests
omit 418470e Misc updates
omit 736a779 Port remaining basics.js tests
omit 9155687 WIP: Elixir test suite
add 8b4e92a Improve Mango test suite performance (#995)
add 9b71781 (typo) fetchig -> fetching
add 44cca52 Move cluster_start_period and cluster_quiet_period to replicator section
add 7c78922 Update COMMITTERS.md
add bdaeaff Fix replicator cluster stability race condition
add ede5dd9 Fix index validation for nested $and (#1014)
add a406cc0 Test duplicate fields in Mango selector (#998)
add 27dcd6b Fix _explain for sort descending (#1025)
add 3e511b3 Allow replicator documents to include params for db creation - specify q in "create_target_params": {"q": "1", ...} issue-887
add b66e52a Add missing methods to fake index
add ccb657e Remove invalid meck unload
add 743bd88 warn instead of error when use_index not valid (#962)
add 9eb845b Fix eunit "suites" example
add fda4c67 Remove Spidermonkey as an "optional" depedency
add c5e48a8 Remove Bob's 2.0 TODO list
add 57b615d Remove references to etap
add 6fce0fe Fix replicator create target options test
add ac7a00c Make q configurable for peruser dbs issue 875
add 0493988 Allow to use index with or (#1038)
add 718f89d Multiple fixes and refactoring of couch tests. (#1062)
add f8e56e9 Add coverage reports to more applications
add beb8781 Add couch_stats tracking back to couch_log (#1064)
add 8e0e8b3 Fix haproxy stats (#1039)
add 7c37e58 Mango: change catch-all field range priority (#1069)
add 3b28b84 Fix mango native proc crash (#1067)
add dd7cb4e Refactor couch_log
add 3c90cc3 Merge pull request #1078 from cloudant/issue-832-couch_log-refactor
add b791106 Cleanup data dirs in eunit_plugin before test run
add f71daa2 Merge pull request #1090 from apache/cleanup-on-setup_eunit
add f3ecd13 Use uuid in tmp db names in unit tests
add bc192d1 Merge pull request #1092 from apache/use-uuid-in-eunit-dbnames
add 0414ef3 Make sure mango tests's recreate fun creates db
add 4ac9ab0 Merge pull request #1091 from apache/better-mango-test-recreate-function
add 6fb3577 Return friendly error message when creating user with invalid password (#1087)
add 65fbcd0 fallback to "selector" on empty "partial_filter_selector" (#1098)
add 730dcf7 Simplify couch_key_tree test setup
add 1768aea Remove warning on `couch_epi_codegen` compile
add 649b808 Allow override of `-args_file` and `-config` parameters (#1095)
add 567a16e Fix couch_peruser_test
add ba82c4e Return null for update_seq and offset if update_seq is true issue 969
add aa7821b Merge pull request #1085 from cloudant/issue-969-update_seq-true
add b43c401 Create all needed directories to build docs (#1115)
add 0fd9509 Add support for queries in /{db}/_all_docs POST Fixes #820
add 91f5985 Set eunit timeout on a whole test object
add b2e0e13 Merge pull request #1123 from apache/fix-create_delete_database_continuously-test
add d16f2db Make peruser database prefix configurable
add c3bc956 Remove outdated docker targets and docs (#1109)
add 4e35b36 Hide Auth information in replication document for reader - don't display credential information for user who just wants to check replication status. In basic authentication, the credential information is available in header field of doc
add 52e7cbe Decode destination header for doc copy
add 380ae69 Remove 'smartquote' from default.ini, broke the build
add 1ecf363 Make _design_docs to respect query parameters
add 92a280a Introduce new _dbs_info endpoint to get info of a list of databases
add 884db89 Merge pull request #1082 from cloudant/issue-822-all-dbs-info
add 960a6f9 Fix for issue #603 - Error 500 when creating a db below quorum
add 1c39e0c feat: add quorum tests to make check
add 7a296d2 Fix for issue #1134 clean up dev/lib before run mango tests (#1135)
add 1446e87 Remove queries for _all_docs issue 820
add 3b53c1c Merge pull request #1143 from cloudant/issue-820-remove-queries-for_all_docs
add d35f00a Simplify make dist approach
add d3a2871 feat: add ./configure --dev as alias for -c --disable-{docs,fauxton}
add c6f0208 Add config app to couch_replicator app dependencies
add 6cd62c2 Bump config to 1.0.2 for dialyzer related fixes
add 4976f49 Use callbacks for couch_event_listener behavior
add 302bd8b feat: update mochiweb to 2.17.0
add 6d959a7 Avoid unconditional retries in replicator's http client
add 0832393 Prevent chttpd multipart zombie processes
add 364ea20 Remove old rolling reboot upgrade code
add fb2b046 Add couch_db_engine module
add 8e34ce3 Add legacy storage engine implementation
add f6a7711 Implement pluggable storage engines
add d80f7df Add storage engine test suite
add 49d4194 Ensure deterministic revisions for attachments
add fdc7a26 Increase timeout for storage engine tests
add 51cb6ae Eliminate "default" Erlang platform from Jenkins CI
add 9b99722 Remove unused code for starting compactions
add 4a73d03 re-enable "flaky" test in quest to nail down #745
add 72b41c4 Implement pluggable authentication and session support for replicator
add 2c43e62 This fixes couch_bt_engine:fold_local_docs/4
add 5ef942a feat: introduce snooze_period to reduce compaction_daemon load.
add 0761b6a fix: simplify config integer get
add 58fef34 feat: bump the compaction daemon check_interval to one hour
add e5bf9d4 doc: add snooze_period doc to default.ini
add 42aba99 feat: demote view index opening/closing to log level debug
add 3bd033b Prevent access to Fauxton on node-local port (5986)
add 8898104 Fix dialyzer warning on `couch_key_tree:merge/2`
add 5218348 Fix validate_doc_update for Erlang functions
add 101f29b Use precise term comparison
add 0464c46 Switching `couch_stats_process_tracker:track/2` argument names
add e1fa0f5 Use `chttpd:send_error/2` in mango_httpd
add 087a1b2 Merge pull request #1193 from cloudant/fix-error-reporting-in-mango
add cd598d8 Add error tuple return type to replicator auth spec and callback
add ba624ea Revert "re-enable "flaky" test in quest to nail down #745"
add 817b2b6 Add bcrypt hashing option
add b58021e Introduce bin_opt_info erl_opts compilation option
add 3c26bc3 Fix dialyzer warning for couch_att:from_disk_term
add 1b21587 feat: allow eunit to run without setup or make all
add 6f294c1 fix: compaction daemon symlink resolution and log level #1097
add b05c177 Proper error handling for file:open() call
add 42c89ba fix whitespace
add f999071 style fixes as per @davisp
add 2b5cf23 Increase PSE test engine timeouts
add bb74b16 Implement format_status/2 for replication worker gen_server
add 18f8362 Bump config dependency to 1.0.3
add 36ecf92 Support queries for endpoints
add 0a477b5 Merge pull request #1222 from cloudant/issue-820-add-queries
add 95a78ce Remove _config call in couch_peruser_test
add a0c863d Merge pull request #1130 from cloudant/issue-876-remove-_config-call-in-eunit-test
add 89a727b Replace resource expensive bcrypt test with shorter version (#1231)
add 45da9f3 Validate password_scheme in user doc
add 3d702d8 Revert "Revert "re-enable "flaky" test in quest to nail down #745""
add e7c48b3 Improve 413 response handling
add f0887c1 Allow couch_os_daemons to live in directories with spaces
add 7bfdedb Fix DB-specific compaction configuration (#1059)
add 6f987ae Merge branch 'master' into daemon-spaces
add f28d896 make it so
add 3621725 add bootstrap
add 0f559a9 add ignore
add 58c4948 add http stub
add e8c4966 add basic action handling
add a5213f7 add Apache License stanza everywhere
add 404692f add the plan to readme
add ecf310a add note about skipping a step if the node is already setup
add 38eaa88 add delete_node API
add 9f1fa23 hack for storing erlang cookie value on new nodes
add 068bdf1 add action hints
add 94eab12 add license
add 3ad82e5 remove leftover
add 317e5a4 formatting
add 0145bae formatting & clarification
add bc41677 mroe formatting
add 277ca66 wip: implement setup handling
add 92da54e wip: full receive feature, setup now works yay
add fc39fab add simple test script
add 354647b add finish cluster routine
add 7c6c3bb add some more testing
add 4c423e6 s/_cassim/cassim/ for the time being
add 7528f5b add license header
add 0a676fc add testing instructions to readme
add 3304add hash admin passwords, more resilient port parsing
add 14e0374 handle GET cluster state
add 9c3eb0a show cluster finished state
add be52f7e R14 compatibility
add 9728b34 Remove error-handling clause
add cd7d0ec Fix LICENSE indention
add deeb073 Rename cassim db to _metadata
add 127e85a Use _nodes db
add 372dd8b fix tests
add ecb601b Create _global_changes database on cluster setup
add 616789b cluster_enable: add remote_node feature
add f4fd3fa whitespace fix
add aa17a55 use couch_log instead of io:format
add 5c0e927 Use dynamic handlers
add ff19be1 add catch-all clause for url_handler
add bdb8a0c configure the right http interface
add 647ffbc fix enable_cluster_http for admin-party clusters
add fb61c04 Update to new couch_epi API
add d0a9b72 Pass supervisor's children to couch_epi
add 747144e Return HTTP 200 on GET
add b9e1f3b Return HTTP 405 for unsupported request method
add e8d1e32 feat: cassim is off for now
add 75a7682 require nodecount on setup
add dd68945 use config:setineger/3
add b107042 fix wording
add 401d776 Merge remote-tracking branch 'robertkowalski/2594-2598-number-of-nodes'
add 2590fbc Fixed some minor errors in the documentation.
add d75693e add_node: Don't fail if node name != "couchdb" or "node1"
add b2b93c1 Merge remote-tracking branch 'adrienverge/COUCHDB-3119'
add 54623ce fix cluster setup: use same admin pq salt on all nodes
add c38d7aa Merge remote-tracking branch 'asf/salt-distribution'
add 18314a6 Add support for new ensure_dbs_exist option to GET, POST/finish_cluster
add 92dd9d1 Add new enable_single_node action for cluster_setup endpoint
add e153d48 address comments from rnewson
add d61381a fix typo/compilation error
add 942c665 chore: whitespace
add 4b90eca chore: better log output
add 4d9bd58 Merge branch '593-setup-single-node' of https://github.com/apache/couchdb-setup
add 68545af fix: make sure cluster setups do not exceed n=3 by default
add 9fd7f44 Merge branch 'fix/node-count' of https://github.com/apache/couchdb-setup
add 2f725d9 Import couchdb-setup
add e282d70 Update rebar.config for local src/setup
add 1a040a4 Merge pull request #1243 from apache/import-setup-again
add b163663 Merge branch 'master' into daemon-spaces
add c300673 Merge pull request #1242 from apache/daemon-spaces
add 266c56b Various top-level directory cleanups
add 25de7b5 Merge pull request #1240 from apache/cleanup
add 0e1cdef Fix couch peruser test suite (#1247)
add 0074b4f fix: more reliable password scheme tests
add f6fc285 add test covering loading admins from config
add fe1ce42 feat: add debug log output for shard open errors
add 99a64b2 Fix shard substitution in changes feeds
add 790783e Fix killing of OS processes
add 6ffe042 Make loginUser wait for successful authentication
add 455d634 Fix compaction resumption for the BT engine
add b52683c Test compaction resumption after error
add 8f38625 fix file_compression value description
add 948a131 Fix typo in node local _replicate handler
add 02c9429 Key tree property tests
add 0e92688 Kill fabric attachment receiver when middleman times out
add a0dd946 Do not drop updated httpdb record after auth headers are updated
add f9aa52f Switch to using a mirrored triq dependency
add 1ae2aae Minor documentation cleanup for couch_replicator
add 5b74e66 Set update_lru_on_read=false as default
add 33783c3 call commit_data where needed
add 3d1eecb Merge pull request #1281 from apache/commit_data_pse
add b0f673f In _scheduler/docs fix `crashing` state showing as `pending` sometimes
add 581bd05 Adopt fake_db to PSE changes
add 356069d Merge pull request #1273 from cloudant/adopt-fake_db-to-PSE
add 069c02b Document enable_database_recovery ini option
add 8de46c7 Fix mem3 tests (#1285)
add 894accb Fix length badarg error in mp parser
add 47a38d3 Force use of SMP enabled BEAM VM, fixes #1296
add ae29e65 Bump fauxton to fix CI builds
add f541e48 Add SSL session_lifetime limit for ibrowse pids
add 8e28fd2 Mango: _id and _rev index selection
add 5290a32 Update Jenkins build process:
add 6d44e17 Add _approx_count_distinct as a builtin reduce function (#1346)
add 8a46473 Add hyper app to dependencies
add 0392c51 Finalize in couch_mrview, but make it optional
add 5fa3c43 Use finalize operation to simplify _stats
add 398ac18 Ignore trailing characters in a builtin reduce
add 62f71c0 Fix container for package-building step
add 994b370 Jenkinsfile: typo
add 71c33b1 Update skip_deps for 3rd parties eunit (#1386)
add 2bf04a0 Revert "Introduce bin_opt_info erl_opts compilation option"
add 76790d5 Add compile's command line options
add 2fe402f Remove debug_info from compile options
add dfa8780 Make bin_opt_info optional based on env variable
add 41decfa Allow custom compile options with env variable
add c7d35c3 Merge pull request #1387 from cloudant/make-bin_opt_info-optional
add 000766c Fix active size calculations for views
add aebdbc4 Optimize couch_key_tree:stem/2
add 3c98385 Fix couch_key_tree_tests.erl
add f040d75 Add set_mqd_off_heap utility function
add a13efba Call `set_mqd_off_heap` for critical processes
add f3a0f42 refactor process_request to not drop req (#1375)
add fe53e43 Prepare to fabric attachment receiver from a fun closure to a tuple
add 5b5c8a1 Add constant index fields to sort based on the selector (#1376)
add 103a062 Update snappy dep to CouchDB-1.0.1 with 21.0 support
new d099375 WIP: Elixir test suite
new 35d5721 Port remaining basics.js tests
new 0fc3f02 Misc updates
new b3ca83c Port all_docs.js tests
new 3c4730e Port config.js tests
new 05fa713 Add support for a config tag
new fba7e88 Port uuids.js to Elixir
new a8e7a7c Remove module attribute
new 754202e Add simple Makefile for muscle memory
new cb1e1d1 Simple .gitignore for sanity
new 5f72bc7 Ignore erln8.config
new 005d48b DRY constant definition
new 42d1dca Update the context in place for setup
new 3667d8f Embrace the Elixir
new abdd7d6 Port view_collation.js to view_collation_test.exs
new d159ded Remove extraneous comment
new 859533a Port reduce.js suite
new 91687ee Prefer ?w=3 over hacky sleeps
new cae385a Port httpotion functionality until released
new 0097ac5 WIP: Port most of rewrite.js suite
new 0b48df6 Move elixir_suite to test/elixir
new 17f8de1 Integrate Elixir suite with `make`
new 9d392c7 Add list of test ports status to README
new dadbf13 Replace header match with regexp
new 1d97b35 Allow tests to specify the request Content-Type
new 35dcf85 Fix bug when canceling replications
new 534739f Allow tests to set config values dynamically
new 508cb82 Add helper functions for user sessions
new 6d01bb9 Add the user tag to create users declaratively
new 29a6561 Port replication.js to replication_test.ex
new 656934a Port the first half of security_validation_tests.js
This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version. This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:
 * -- * -- B -- O -- O -- O (1c5ae3e)
            \
             N -- N -- N refs/heads/elixir-suite (656934a)
You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.
Any revisions marked "omit" are not gone; other references still
refer to them. Any revisions marked "discard" are gone forever.
The 31 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails. The revisions
listed as "add" were already present in the repository and have only
been added to this reference.
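The B/O/N shape described above can be reproduced in a throwaway repository. The script below is a minimal sketch of how a `--force` push discards published commits and replaces them with rewritten ones; every path, branch, and commit message in it is hypothetical, and it assumes `git` is on the PATH:

```shell
#!/bin/sh
# Sketch: build B -- O, publish it, then force-push a rewritten N on top of B.
set -e
tmp=$(mktemp -d)
git init -q --bare "$tmp/server"
git clone -q "$tmp/server" "$tmp/work"
cd "$tmp/work"
git config user.email dev@example.com
git config user.name dev
echo base > file; git add file
git commit -qm "B: common base"
echo old > file
git commit -qam "O: old work"
git push -q origin HEAD                 # publishes B -- O
git reset -q --hard HEAD~1              # back to B, O is now unreferenced
echo new > file
git commit -qam "N: rewritten work"
git push -q --force origin HEAD         # non-fast-forward: discards O, publishes N
git log --oneline                       # shows only N and B
```

Whether a replaced commit ends up "omit" (still reachable) or "discard" (gone) can be checked with `git branch --all --contains <sha>`: if no reference still contains it, only `git fsck --lost-found` will find it until it is garbage-collected.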
Summary of changes:
.gitignore | 8 +-
COMMITTERS.md | 51 +-
Jenkinsfile | 209 +--
LICENSE | 167 ++-
Makefile | 99 +-
Makefile.win | 22 +-
NOTICE | 15 +-
README-DEV.rst | 28 +-
TODO | 10 -
Vagrantfile | 69 -
introspect => build-aux/introspect | 0
configure | 9 +
dev/run | 24 +-
license.skip | 216 ----
rebar.config.script | 28 +-
rel/haproxy.cfg | 4 +-
rel/overlay/bin/couchdb | 8 +-
rel/overlay/etc/default.ini | 70 +-
rel/overlay/etc/local.ini | 3 +
rel/overlay/etc/vm.args | 6 +
rel/plugins/eunit_plugin.erl | 20 +
rel/reltool.config | 4 +
src/{couch_log => chttpd}/rebar.config | 0
src/chttpd/src/chttpd.erl | 51 +-
src/chttpd/src/chttpd_auth_request.erl | 4 +
src/chttpd/src/chttpd_db.erl | 63 +-
src/chttpd/src/chttpd_httpd_handlers.erl | 1 +
src/chttpd/src/chttpd_misc.erl | 35 +
src/chttpd/src/chttpd_view.erl | 16 +
.../test/chttpd_csp_tests.erl} | 11 +-
src/chttpd/test/chttpd_db_test.erl | 218 +++-
src/chttpd/test/chttpd_dbs_info_test.erl | 169 +++
src/chttpd/test/chttpd_security_tests.erl | 22 +
src/chttpd/test/chttpd_view_test.erl | 123 ++
src/couch/.gitignore | 5 +
src/couch/include/couch_db.hrl | 5 +-
src/couch/include/couch_eunit.hrl | 11 +-
src/couch/include/couch_js_functions.hrl | 9 +
src/couch/src/couch.app.src | 4 +-
src/couch/src/couch.erl | 1 +
src/couch/src/couch_att.erl | 146 ++-
src/couch/src/couch_auth_cache.erl | 16 +-
src/couch/src/couch_bt_engine.erl | 963 ++++++++++++++
.../{couch_server_int.hrl => couch_bt_engine.hrl} | 21 +-
src/couch/src/couch_bt_engine_compactor.erl | 496 +++++++
...ch_db_header.erl => couch_bt_engine_header.erl} | 39 +-
src/couch/src/couch_bt_engine_stream.erl | 70 +
src/couch/src/couch_changes.erl | 21 +-
src/couch/src/couch_compaction_daemon.erl | 105 +-
src/couch/src/couch_db.erl | 726 +++++------
src/couch/src/couch_db_engine.erl | 893 +++++++++++++
src/couch/src/couch_db_int.hrl | 69 +-
src/couch/src/couch_db_updater.erl | 1350 +++++---------------
src/couch/src/couch_file.erl | 40 +-
src/couch/src/couch_httpd.erl | 13 +
src/couch/src/couch_httpd_auth.erl | 12 +-
src/couch/src/couch_httpd_db.erl | 11 +-
src/couch/src/couch_httpd_misc_handlers.erl | 38 +-
src/couch/src/couch_httpd_multipart.erl | 78 +-
src/couch/src/couch_key_tree.erl | 84 +-
src/couch/src/couch_os_daemons.erl | 2 +-
src/couch/src/couch_os_process.erl | 1 -
src/couch/src/couch_passwords.erl | 39 +-
src/couch/src/couch_query_servers.erl | 36 +-
src/couch/src/couch_server.erl | 240 +++-
src/couch/src/couch_stream.erl | 255 ++--
src/couch/src/couch_users_db.erl | 10 +-
src/couch/src/couch_util.erl | 57 +-
src/couch/src/test_engine_attachments.erl | 93 ++
src/couch/src/test_engine_compaction.erl | 185 +++
src/couch/src/test_engine_fold_changes.erl | 190 +++
src/couch/src/test_engine_fold_docs.erl | 390 ++++++
src/couch/src/test_engine_get_set_props.erl | 70 +
src/couch/src/test_engine_open_close_delete.erl | 81 ++
src/couch/src/test_engine_purge_docs.erl | 158 +++
src/couch/src/test_engine_read_write_docs.erl | 317 +++++
src/couch/src/test_engine_ref_counting.erl | 103 ++
src/couch/src/test_engine_util.erl | 607 +++++++++
src/couch/src/test_request.erl | 10 +
src/couch/src/test_util.erl | 26 +-
src/couch/test/chttpd_endpoints_tests.erl | 1 +
src/couch/test/couch_auth_cache_tests.erl | 16 +-
src/couch/test/couch_bt_engine_compactor_tests.erl | 130 ++
.../test/couch_bt_engine_tests.erl} | 11 +-
src/couch/test/couch_db_plugin_tests.erl | 2 +-
src/couch/test/couch_db_tests.erl | 221 ++--
src/couch/test/couch_key_tree_prop_tests.erl | 531 ++++++++
src/couch/test/couch_key_tree_tests.erl | 42 +-
src/couch/test/couch_passwords_tests.erl | 42 +-
src/couch/test/couch_server_tests.erl | 16 +
src/couch/test/couch_stream_tests.erl | 32 +-
src/couch/test/couchdb_compaction_daemon_tests.erl | 3 +-
src/couch/test/couchdb_cookie_domain_tests.erl | 78 +-
src/couch/test/couchdb_vhosts_tests.erl | 25 -
src/couch/test/couchdb_views_tests.erl | 42 +-
src/couch/test/global_changes_tests.erl | 2 +-
src/couch_epi/rebar.config | 4 +
src/couch_epi/src/couch_epi_codegen.erl | 19 +-
src/couch_event/src/couch_event_listener.erl | 28 +-
src/{couch_log => couch_index}/rebar.config | 0
src/couch_index/src/couch_index.erl | 6 +-
src/couch_index/src/couch_index_updater.erl | 6 +-
.../test/couch_index_compaction_tests.erl | 16 +-
.../test/couch_index_ddoc_updated_tests.erl | 5 +-
src/couch_log/src/couch_log.erl | 1 +
src/couch_log/src/couch_log_server.erl | 1 +
src/couch_log/test/couch_log_test_util.erl | 7 +-
src/{couch_log => couch_mrview}/rebar.config | 0
src/couch_mrview/src/couch_mrview.erl | 62 +-
src/couch_mrview/src/couch_mrview_index.erl | 3 +-
src/couch_mrview/src/couch_mrview_util.erl | 36 +-
.../test/couch_mrview_index_info_tests.erl | 96 +-
.../test/couch_mrview_local_docs_tests.erl | 6 +-
src/couch_peruser/src/couch_peruser.erl | 60 +-
src/couch_peruser/test/couch_peruser_test.erl | 307 +++--
src/couch_replicator/README.md | 43 +-
.../{src => include}/couch_replicator_api_wrap.hrl | 13 +-
src/couch_replicator/src/couch_replicator.app.src | 1 +
src/couch_replicator/src/couch_replicator.erl | 2 +-
.../src/couch_replicator_api_wrap.erl | 30 +-
src/couch_replicator/src/couch_replicator_auth.erl | 100 ++
.../src/couch_replicator_auth_noop.erl | 52 +
.../src/couch_replicator_auth_session.erl | 693 ++++++++++
.../src/couch_replicator_changes_reader.erl | 2 +-
.../src/couch_replicator_clustering.erl | 73 +-
src/couch_replicator/src/couch_replicator_docs.erl | 54 +-
.../src/couch_replicator_httpc.erl | 92 +-
.../src/couch_replicator_httpd.erl | 2 +-
src/couch_replicator/src/couch_replicator_ids.erl | 51 +-
.../src/couch_replicator_scheduler.erl | 84 +-
.../src/couch_replicator_scheduler_job.erl | 5 +-
.../src/couch_replicator_utils.erl | 90 +-
.../src/couch_replicator_worker.erl | 49 +-
.../test/couch_replicator_compact_tests.erl | 4 +-
...replicator_create_target_with_options_tests.erl | 143 +++
.../test/couch_replicator_filtered_tests.erl | 4 +-
.../test/couch_replicator_missing_stubs_tests.erl | 4 +-
.../test/couch_replicator_proxy_tests.erl | 2 +-
.../test/couch_replicator_selector_tests.erl | 4 +-
...ch_replicator_small_max_request_size_target.erl | 28 +-
.../test/couch_replicator_test_helper.erl | 4 +-
.../couch_replicator_use_checkpoints_tests.erl | 4 +-
src/couch_stats/src/couch_stats.app.src | 2 +-
src/couch_stats/src/couch_stats.erl | 2 +-
.../src/couch_stats_process_tracker.erl | 12 +-
src/ddoc_cache/src/ddoc_cache_lru.erl | 1 +
src/fabric/rebar.config | 5 +-
src/fabric/src/fabric_db_create.erl | 33 +-
src/fabric/src/fabric_doc_attachments.erl | 5 +-
...ric_doc_attachments.erl => fabric_doc_atts.erl} | 33 +-
src/fabric/src/fabric_rpc.erl | 16 +-
src/fabric/src/fabric_util.erl | 4 +-
src/fabric/src/fabric_view.erl | 3 +-
src/fabric/src/fabric_view_all_docs.erl | 10 +-
src/mango/src/mango_cursor.erl | 76 +-
src/mango/src/mango_cursor_special.erl | 7 +-
src/mango/src/mango_cursor_text.erl | 2 +-
src/mango/src/mango_cursor_view.erl | 4 +-
src/mango/src/mango_error.erl | 19 -
src/mango/src/mango_httpd.erl | 2 +-
src/mango/src/mango_idx.erl | 46 +-
src/mango/src/mango_idx_special.erl | 13 +-
src/mango/src/mango_idx_text.erl | 2 +-
src/mango/src/mango_idx_view.erl | 38 +-
src/mango/src/mango_native_proc.erl | 51 +-
src/mango/src/mango_selector.erl | 338 ++++-
src/mango/test/02-basic-find-test.py | 19 +
src/mango/test/03-operator-test.py | 9 +-
src/mango/test/05-index-selection-test.py | 114 +-
src/mango/test/12-use-correct-index-test.py | 13 +
src/mango/test/16-index-selectors-test.py | 10 +
src/mango/test/18-json-sort.py | 222 ++++
src/mango/test/mango.py | 47 +-
src/mem3/include/mem3.hrl | 6 +-
src/mem3/src/mem3.erl | 19 +-
src/mem3/src/mem3_nodes.erl | 3 +-
src/mem3/src/mem3_rep.erl | 17 +-
src/mem3/src/mem3_shards.erl | 103 +-
src/mem3/src/mem3_util.erl | 17 +-
src/mem3/test/01-config-default.ini | 14 -
src/mem3/test/mem3_sync_security_test.erl | 19 +-
src/mem3/test/mem3_util_test.erl | 71 +-
src/{couch_log => rexi}/rebar.config | 0
src/rexi/src/rexi_server.erl | 1 +
src/setup/.gitignore | 4 +
src/{global_changes => setup}/LICENSE | 0
src/setup/README.md | 193 +++
.../src/setup.app.src} | 15 +-
src/setup/src/setup.erl | 289 +++++
.../couch_epi_app.erl => setup/src/setup_app.erl} | 11 +-
.../src/mem3_epi.erl => setup/src/setup_epi.erl} | 7 +-
src/setup/src/setup_httpd.erl | 169 +++
.../src/setup_httpd_handlers.erl} | 3 +-
.../src/setup_sup.erl} | 19 +-
src/setup/test/t-frontend-setup.sh | 63 +
src/setup/test/t-single-node.sh | 46 +
src/setup/test/t.sh | 63 +
test/build/test-run-couch-for-mango.sh | 3 +
test/elixir/lib/couch.ex | 8 +-
test/elixir/test/security_validation_test.exs | 309 +++++
test/javascript/couch_test_runner.js | 4 +-
test/javascript/run | 16 +-
.../tests-cluster/with-quorum/db-creation.js | 26 +-
.../without-quorum/db-creation.js} | 28 +-
test/javascript/tests/design_docs_query.js | 154 +++
test/javascript/tests/reduce_builtin.js | 20 +
test/javascript/tests/users_db_security.js | 182 ++-
test/javascript/tests/view_errors.js | 2 +-
208 files changed, 12870 insertions(+), 3561 deletions(-)
delete mode 100644 TODO
delete mode 100644 Vagrantfile
rename introspect => build-aux/introspect (100%)
delete mode 100644 license.skip
copy src/{couch_log => chttpd}/rebar.config (100%)
rename src/{couch/test/couchdb_csp_tests.erl => chttpd/test/chttpd_csp_tests.erl} (90%)
create mode 100644 src/chttpd/test/chttpd_dbs_info_test.erl
create mode 100644 src/chttpd/test/chttpd_view_test.erl
create mode 100644 src/couch/src/couch_bt_engine.erl
copy src/couch/src/{couch_server_int.hrl => couch_bt_engine.hrl} (77%)
create mode 100644 src/couch/src/couch_bt_engine_compactor.erl
copy src/couch/src/{couch_db_header.erl => couch_bt_engine_header.erl} (92%)
create mode 100644 src/couch/src/couch_bt_engine_stream.erl
create mode 100644 src/couch/src/couch_db_engine.erl
create mode 100644 src/couch/src/test_engine_attachments.erl
create mode 100644 src/couch/src/test_engine_compaction.erl
create mode 100644 src/couch/src/test_engine_fold_changes.erl
create mode 100644 src/couch/src/test_engine_fold_docs.erl
create mode 100644 src/couch/src/test_engine_get_set_props.erl
create mode 100644 src/couch/src/test_engine_open_close_delete.erl
create mode 100644 src/couch/src/test_engine_purge_docs.erl
create mode 100644 src/couch/src/test_engine_read_write_docs.erl
create mode 100644 src/couch/src/test_engine_ref_counting.erl
create mode 100644 src/couch/src/test_engine_util.erl
create mode 100644 src/couch/test/couch_bt_engine_compactor_tests.erl
copy src/{ddoc_cache/test/ddoc_cache_ev.erl => couch/test/couch_bt_engine_tests.erl} (77%)
create mode 100644 src/couch/test/couch_key_tree_prop_tests.erl
copy src/{couch_log => couch_index}/rebar.config (100%)
copy src/{couch_log => couch_mrview}/rebar.config (100%)
rename src/couch_replicator/{src => include}/couch_replicator_api_wrap.hrl (86%)
create mode 100644 src/couch_replicator/src/couch_replicator_auth.erl
create mode 100644 src/couch_replicator/src/couch_replicator_auth_noop.erl
create mode 100644 src/couch_replicator/src/couch_replicator_auth_session.erl
create mode 100644 src/couch_replicator/test/couch_replicator_create_target_with_options_tests.erl
copy src/fabric/src/{fabric_doc_attachments.erl => fabric_doc_atts.erl} (91%)
create mode 100644 src/mango/test/18-json-sort.py
delete mode 100644 src/mem3/test/01-config-default.ini
copy src/{couch_log => rexi}/rebar.config (100%)
create mode 100644 src/setup/.gitignore
copy src/{global_changes => setup}/LICENSE (100%)
create mode 100644 src/setup/README.md
copy src/{couch_plugins/src/couch_plugins.app.src => setup/src/setup.app.src} (71%)
create mode 100644 src/setup/src/setup.erl
copy src/{couch_epi/src/couch_epi_app.erl => setup/src/setup_app.erl} (69%)
copy src/{mem3/src/mem3_epi.erl => setup/src/setup_epi.erl} (91%)
create mode 100644 src/setup/src/setup_httpd.erl
copy src/{couch/src/couch_httpd_handlers.erl => setup/src/setup_httpd_handlers.erl} (86%)
copy src/{couch_peruser/src/couch_peruser_sup.erl => setup/src/setup_sup.erl} (57%)
create mode 100755 src/setup/test/t-frontend-setup.sh
create mode 100755 src/setup/test/t-single-node.sh
create mode 100755 src/setup/test/t.sh
create mode 100644 test/elixir/test/security_validation_test.exs
copy share/server/validate.js => test/javascript/tests-cluster/with-quorum/db-creation.js (59%)
copy test/javascript/{tests/large_docs.js => tests-cluster/without-quorum/db-creation.js} (56%)
create mode 100644 test/javascript/tests/design_docs_query.js
[couchdb] 01/31: WIP: Elixir test suite
Posted by ch...@apache.org.
This is an automated email from the ASF dual-hosted git repository.
chewbranca pushed a commit to branch elixir-suite
in repository https://gitbox.apache.org/repos/asf/couchdb.git
commit d0993752a230ac29dc356fd0518d136a8a64c970
Author: Russell Branca <ch...@apache.org>
AuthorDate: Wed Nov 15 23:58:57 2017 +0000
WIP: Elixir test suite
---
elixir_suite/README.md | 12 ++++
elixir_suite/config/config.exs | 30 ++++++++++
elixir_suite/config/test.exs | 3 +
elixir_suite/lib/couch.ex | 33 +++++++++++
elixir_suite/mix.exs | 30 ++++++++++
elixir_suite/mix.lock | 3 +
elixir_suite/test/basics_test.exs | 118 ++++++++++++++++++++++++++++++++++++++
elixir_suite/test/test_helper.exs | 73 +++++++++++++++++++++++
8 files changed, 302 insertions(+)
diff --git a/elixir_suite/README.md b/elixir_suite/README.md
new file mode 100644
index 0000000..a7aedd3
--- /dev/null
+++ b/elixir_suite/README.md
@@ -0,0 +1,12 @@
+# Elixir Test Suite
+
+Proof of concept porting the JS test suite to Elixir.
+
+Currently the basics.js suite has been partially ported over.
+
+To run the suite:
+
+```
+mix deps.get
+mix test --trace
+```
diff --git a/elixir_suite/config/config.exs b/elixir_suite/config/config.exs
new file mode 100644
index 0000000..966ae83
--- /dev/null
+++ b/elixir_suite/config/config.exs
@@ -0,0 +1,30 @@
+# This file is responsible for configuring your application
+# and its dependencies with the aid of the Mix.Config module.
+use Mix.Config
+
+# This configuration is loaded before any dependency and is restricted
+# to this project. If another project depends on this project, this
+# file won't be loaded nor affect the parent project. For this reason,
+# if you want to provide default values for your application for
+# 3rd-party users, it should be done in your "mix.exs" file.
+
+# You can configure your application as:
+#
+# config :foo, key: :value
+#
+# and access this configuration in your application as:
+#
+# Application.get_env(:foo, :key)
+#
+# You can also configure a 3rd-party app:
+#
+# config :logger, level: :info
+#
+
+# It is also possible to import configuration files, relative to this
+# directory. For example, you can emulate configuration per environment
+# by uncommenting the line below and defining dev.exs, test.exs and such.
+# Configuration from the imported file will override the ones defined
+# here (which is why it is important to import them last).
+#
+# import_config "#{Mix.env}.exs"
diff --git a/elixir_suite/config/test.exs b/elixir_suite/config/test.exs
new file mode 100644
index 0000000..4b28ea9
--- /dev/null
+++ b/elixir_suite/config/test.exs
@@ -0,0 +1,3 @@
+config :logger,
+ backends: [:console],
+ compile_time_purge_level: :debug
diff --git a/elixir_suite/lib/couch.ex b/elixir_suite/lib/couch.ex
new file mode 100644
index 0000000..aafe829
--- /dev/null
+++ b/elixir_suite/lib/couch.ex
@@ -0,0 +1,33 @@
+defmodule Couch do
+ use HTTPotion.Base
+
+ @moduledoc """
+ CouchDB library to power test suite.
+ """
+
+ def process_url(url) do
+ "http://localhost:15984" <> url
+ end
+
+ def process_request_headers(headers) do
+ headers
+ |> Dict.put(:"User-Agent", "couch-potion")
+ |> Dict.put(:"Content-Type", "application/json")
+ end
+
+ def process_options(options) do
+ Dict.put options, :basic_auth, {"adm", "pass"}
+ end
+
+ def process_request_body(body) do
+ if is_map(body) do
+ :jiffy.encode(body)
+ else
+ body
+ end
+ end
+
+ def process_response_body(body) do
+ body |> IO.iodata_to_binary |> :jiffy.decode([:return_maps])
+ end
+end
diff --git a/elixir_suite/mix.exs b/elixir_suite/mix.exs
new file mode 100644
index 0000000..9b0f642
--- /dev/null
+++ b/elixir_suite/mix.exs
@@ -0,0 +1,30 @@
+defmodule Foo.Mixfile do
+ use Mix.Project
+
+ def project do
+ [
+ app: :foo,
+ version: "0.1.0",
+ elixir: "~> 1.5",
+ start_permanent: Mix.env == :prod,
+ deps: deps()
+ ]
+ end
+
+ # Run "mix help compile.app" to learn about applications.
+ def application do
+ [
+ extra_applications: [:logger]
+ ]
+ end
+
+ # Run "mix help deps" to learn about dependencies.
+ defp deps do
+ [
+ # {:dep_from_hexpm, "~> 0.3.0"},
+ {:httpotion, "~> 3.0"},
+ {:jiffy, "~> 0.14.11"}
+ # {:dep_from_git, git: "https://github.com/elixir-lang/my_dep.git", tag: "0.1.0"},
+ ]
+ end
+end
diff --git a/elixir_suite/mix.lock b/elixir_suite/mix.lock
new file mode 100644
index 0000000..0723e94
--- /dev/null
+++ b/elixir_suite/mix.lock
@@ -0,0 +1,3 @@
+%{"httpotion": {:hex, :httpotion, "3.0.3", "17096ea1a7c0b2df74509e9c15a82b670d66fc4d66e6ef584189f63a9759428d", [], [{:ibrowse, "~> 4.4", [hex: :ibrowse, repo: "hexpm", optional: false]}], "hexpm"},
+ "ibrowse": {:hex, :ibrowse, "4.4.0", "2d923325efe0d2cb09b9c6a047b2835a5eda69d8a47ed6ff8bc03628b764e991", [], [], "hexpm"},
+ "jiffy": {:hex, :jiffy, "0.14.11", "919a87d491c5a6b5e3bbc27fafedc3a0761ca0b4c405394f121f582fd4e3f0e5", [], [], "hexpm"}}
diff --git a/elixir_suite/test/basics_test.exs b/elixir_suite/test/basics_test.exs
new file mode 100644
index 0000000..87b4aff
--- /dev/null
+++ b/elixir_suite/test/basics_test.exs
@@ -0,0 +1,118 @@
+defmodule BasicsTest do
+ use CouchTestCase
+
+ @moduledoc """
+ Test CouchDB basics.
+ This is a port of the basics.js suite
+ """
+
+ test "Session contains adm context" do
+ userCtx = Couch.get("/_session").body["userCtx"]
+ assert userCtx["name"] == "adm", "Should have adm user context"
+ assert userCtx["roles"] == ["_admin"], "Should have _admin role"
+ end
+
+ test "Welcome endpoint" do
+ assert Couch.get("/").body["couchdb"] == "Welcome", "Should say welcome"
+ end
+
+ @tag :with_db
+ test "PUT on existing DB should return 412 instead of 500", context do
+ db_name = context[:db_name]
+ assert Couch.put("/#{db_name}").status_code == 412
+ end
+
+ @tag :with_db_name
+ test "Creating a new DB should return location header", context do
+ db_name = context[:db_name]
+ {:ok, resp} = create_db(db_name)
+ msg = "Should return Location header for new db"
+ assert String.ends_with?(resp.headers["location"], db_name), msg
+ {:ok, _} = delete_db(db_name)
+ end
+
+ @tag :with_db_name
+ test "Creating a new DB with slashes should return Location header (COUCHDB-411)", context do
+ db_name = context[:db_name] <> "%2Fwith_slashes"
+ {:ok, resp} = create_db(db_name)
+ msg = "Should return Location header for new db"
+ assert String.ends_with?(resp.headers["location"], db_name), msg
+ {:ok, _} = delete_db(db_name)
+ end
+
+ @tag :with_db
+ test "Created database has appropriate db info name", context do
+ db_name = context[:db_name]
+ assert Couch.get("/#{db_name}").body["db_name"] == db_name, "Get correct database name"
+ end
+
+ @tag :with_db
+ test "Database should be in _all_dbs", context do
+ assert context[:db_name] in Couch.get("/_all_dbs").body, "Db name in _all_dbs"
+ end
+
+ @tag :with_db
+ test "Empty database should have zero docs", context do
+ assert Couch.get("/#{context[:db_name]}").body["doc_count"] == 0, "Empty doc count in empty db"
+ end
+
+ @tag :with_db
+ test "Create a document and save it to the database", context do
+ resp = Couch.post("/#{context[:db_name]}", [body: %{:_id => "0", :a => 1, :b => 1}])
+ assert resp.status_code == 201, "Should be 201 created"
+ assert resp.body["id"], "Id should be present"
+ assert resp.body["rev"], "Rev should be present"
+
+ resp2 = Couch.get("/#{context[:db_name]}/#{resp.body["id"]}")
+ assert resp2.body["_id"] == resp.body["id"], "Ids should match"
+ assert resp2.body["_rev"] == resp.body["rev"], "Revs should match"
+ end
+
+ @tag :with_db
+ test "Revs info status is good", context do
+ db_name = context[:db_name]
+ {:ok, _} = create_doc(db_name, sample_doc_foo())
+ resp = Couch.get("/#{db_name}/foo", [query: %{:revs_info => true}])
+ assert hd(resp.body["_revs_info"])["status"] == "available", "Revs info is available"
+ end
+
+ @tag :with_db
+ test "Make sure you can do a seq=true option", context do
+ db_name = context[:db_name]
+ {:ok, _} = create_doc(db_name, sample_doc_foo())
+ resp = Couch.get("/#{db_name}/foo", [query: %{:local_seq => true}])
+ assert resp.body["_local_seq"] == 1, "Local seq value == 1"
+ end
+
+ @tag :with_db
+ test "Can create several documents", context do
+ db_name = context[:db_name]
+ assert Couch.post("/#{db_name}", [body: %{:_id => "1", :a => 2, :b => 4}]).body["ok"]
+ assert Couch.post("/#{db_name}", [body: %{:_id => "2", :a => 3, :b => 9}]).body["ok"]
+ assert Couch.post("/#{db_name}", [body: %{:_id => "3", :a => 4, :b => 16}]).body["ok"]
+ assert Couch.get("/#{db_name}").body["doc_count"] == 3
+ end
+
+ @tag :with_db
+ test "Regression test for COUCHDB-954", context do
+ db_name = context[:db_name]
+ doc = %{:_id => "COUCHDB-954", :a => 1}
+
+ resp1 = Couch.post("/#{db_name}", [body: doc])
+ assert resp1.body["ok"]
+ old_rev = resp1.body["rev"]
+
+ doc = Map.put(doc, :_rev, old_rev)
+ resp2 = Couch.post("/#{db_name}", [body: doc])
+ assert resp2.body["ok"]
+ new_rev = resp2.body["rev"]
+
+ # TODO: enable chunked encoding
+ #resp3 = Couch.get("/#{db_name}/COUCHDB-954", [query: %{:open_revs => "[#{old_rev}, #{new_rev}]"}])
+ #assert length(resp3.body) == 2, "Should get two revisions back"
+ #resp3 = Couch.get("/#{db_name}/COUCHDB-954", [query: %{:open_revs => "[#{old_rev}]", :latest => true}])
+ #assert resp3.body["_rev"] == new_rev
+ end
+
+
+end
diff --git a/elixir_suite/test/test_helper.exs b/elixir_suite/test/test_helper.exs
new file mode 100644
index 0000000..181642b
--- /dev/null
+++ b/elixir_suite/test/test_helper.exs
@@ -0,0 +1,73 @@
+ExUnit.start()
+
+# TODO
+#def random_db_name do
+# "asdf"
+#end
+
+defmodule CouchTestCase do
+ use ExUnit.Case
+
+ defmacro __using__(_opts) do
+ quote do
+ require Logger
+ use ExUnit.Case
+
+ setup context do
+ db_name = if context[:with_db] != nil or context[:with_db_name] != nil do
+ if context[:with_db] != nil and context[:with_db] != true do
+ context[:with_db]
+ else
+ case context[:with_db_name] do
+ nil -> random_db_name()
+ true -> random_db_name()
+ name -> name
+ end
+ end
+ end
+
+ if context[:with_db] != nil do
+ {:ok, _} = create_db(db_name)
+
+ on_exit(fn -> delete_db(db_name) end)
+ end
+
+ {:ok, db_name: db_name}
+ end
+
+ def random_db_name do
+ time = :erlang.monotonic_time()
+ umi = :erlang.unique_integer([:monotonic])
+ "random-test-db-#{time}-#{umi}"
+ end
+
+ def create_db(db_name) do
+ resp = Couch.put("/#{db_name}")
+ assert resp.status_code == 201
+ assert resp.body == %{"ok" => true}
+ {:ok, resp}
+ end
+
+ def delete_db(db_name) do
+ resp = Couch.delete("/#{db_name}")
+ assert resp.status_code == 200
+ assert resp.body == %{"ok" => true}
+ {:ok, resp}
+ end
+
+ def create_doc(db_name, body) do
+ resp = Couch.post("/#{db_name}", [body: body])
+ assert resp.status_code == 201
+ assert resp.body["ok"]
+ {:ok, resp}
+ end
+
+ def sample_doc_foo do
+ %{
+ "_id": "foo",
+ "bar": "baz"
+ }
+ end
+ end
+ end
+end
[couchdb] 17/31: Port reduce.js suite
Posted by ch...@apache.org.
This is an automated email from the ASF dual-hosted git repository.
chewbranca pushed a commit to branch elixir-suite
in repository https://gitbox.apache.org/repos/asf/couchdb.git
commit 859533a9db9794a69feeb024609cefec865876d2
Author: Russell Branca <ch...@apache.org>
AuthorDate: Fri Dec 8 20:56:24 2017 +0000
Port reduce.js suite
---
elixir_suite/test/reduce_test.exs | 430 ++++++++++++++++++++++++++++++++++++++
1 file changed, 430 insertions(+)
diff --git a/elixir_suite/test/reduce_test.exs b/elixir_suite/test/reduce_test.exs
new file mode 100644
index 0000000..a01c997
--- /dev/null
+++ b/elixir_suite/test/reduce_test.exs
@@ -0,0 +1,430 @@
+defmodule ReduceTest do
+ use CouchTestCase
+
+ @moduletag :views
+
+ @moduledoc """
+ Test CouchDB view reduces
+ This is a port of the reduce.js suite
+ """
+
+ def summate(n) do
+ (n + 1) * n / 2
+ end
+
+ def make_docs(id, count) do
+ for i <- id..count do
+ %{
+ :_id => Integer.to_string(i),
+ :integer => i,
+ :string => Integer.to_string(i)
+ }
+ end
+ end
+
+ @tag :with_db
+ test "Basic reduce functions", context do
+ db_name = context[:db_name]
+ view_url = "/#{db_name}/_design/foo/_view/bar"
+ num_docs = 500
+ map = ~s"""
+function (doc) {
+ emit(doc.integer, doc.integer);
+ emit(doc.integer, doc.integer);
+};
+ """
+ reduce = "function (keys, values) { return sum(values); };"
+ red_doc = %{:views => %{:bar => %{:map => map, :reduce => reduce}}}
+
+ assert Couch.put("/#{db_name}/_design/foo", [body: red_doc]).body["ok"]
+ docs = make_docs(1, num_docs)
+ assert Couch.post("/#{db_name}/_bulk_docs", [body: %{:docs => docs}]).status_code == 201
+ :timer.sleep(200) # *sigh*
+
+ rows = Couch.get(view_url).body["rows"]
+ assert hd(rows)["value"] == 2 * summate(num_docs)
+
+ query = %{:startkey => 4, :endkey => 4}
+ rows = Couch.get(view_url, query: query).body["rows"]
+ assert hd(rows)["value"] == 8
+
+ query = %{:startkey => 4, :endkey => 5}
+ rows = Couch.get(view_url, query: query).body["rows"]
+ assert hd(rows)["value"] == 18
+
+ query = %{:startkey => 4, :endkey => 6}
+ rows = Couch.get(view_url, query: query).body["rows"]
+ assert hd(rows)["value"] == 30
+
+ query = %{:group => true, :limit => 3}
+ rows = Couch.get(view_url, query: query).body["rows"]
+ assert Enum.at(rows, 0)["value"] == 2
+ assert Enum.at(rows, 1)["value"] == 4
+ assert Enum.at(rows, 2)["value"] == 6
+
+ half_num_docs = Integer.floor_div(num_docs, 2)
+ max = Integer.floor_div(num_docs, 30) + 1
+ for i <- 1..max, i * 30 + 1 < half_num_docs do
+ i = i * 30 + 1
+ query = %{:startkey => i, :endkey => num_docs - i}
+ rows = Couch.get(view_url, query: query).body["rows"]
+ assert hd(rows)["value"] == 2 * (summate(num_docs - i) - summate(i - 1))
+ end
+ end
+
+ @tag :with_db
+ test "More complex array key view row testing", context do
+ db_name = context[:db_name]
+ view_url = "/#{db_name}/_design/foo/_view/bar"
+ map = "function (doc) { emit(doc.keys, 1); };"
+ reduce = "function (keys, values) { return sum(values); };"
+ red_doc = %{:views => %{bar: %{map: map, reduce: reduce}}}
+
+ assert Couch.put("/#{db_name}/_design/foo", [body: red_doc]).body["ok"]
+ for i <- 1..5 do
+ for j <- 0..9 do
+ docs = [
+ %{keys: ["a"]},
+ %{keys: ["a"]},
+ %{keys: ["a", "b"]},
+ %{keys: ["a", "b"]},
+ %{keys: ["a", "b", "c"]},
+ %{keys: ["a", "b", "d"]},
+ %{keys: ["a", "c", "d"]},
+ %{keys: ["d"]},
+ %{keys: ["d", "a"]},
+ %{keys: ["d", "b"]},
+ %{keys: ["d", "c"]}
+ ]
+ assert Couch.post("/#{db_name}/_bulk_docs", [body: %{docs: docs}]).status_code == 201
+ :timer.sleep(20) # *sigh*
+ total_docs = 1 + ((i - 1) * 10 * 11) + ((j + 1) * 11)
+ assert Couch.get("/#{db_name}").body["doc_count"] == total_docs
+ end
+
+ # test group by exact key match
+ query = %{group: true}
+ rows = Couch.get(view_url, query: query).body["rows"]
+ assert Enum.at(rows, 0) == %{"key" => ["a"], "value" => 20 * i}
+ assert Enum.at(rows, 1) == %{"key" => ["a", "b"], "value" => 20 * i}
+ assert Enum.at(rows, 2) == %{"key" => ["a", "b", "c"], "value" => 10 * i}
+ assert Enum.at(rows, 3) == %{"key" => ["a", "b", "d"], "value" => 10 * i}
+
+ # test group reduce and limit params provide valid json
+ query = %{group: true, limit: 2}
+ rows = Couch.get(view_url, query: query).body["rows"]
+ assert Enum.at(rows, 0) == %{"key" => ["a"], "value" => 20 * i}
+ assert length(rows) == 2
+
+ # test group by the first element in the key array
+ query = %{group_level: 2}
+ rows = Couch.get(view_url, query: query).body["rows"]
+ assert Enum.at(rows, 0) == %{"key" => ["a"], "value" => 20*i}
+ assert Enum.at(rows, 1) == %{"key" => ["a","b"], "value" => 40*i}
+ assert Enum.at(rows, 2) == %{"key" => ["a","c"], "value" => 10*i}
+ assert Enum.at(rows, 3) == %{"key" => ["d"], "value" => 10*i}
+ assert Enum.at(rows, 4) == %{"key" => ["d","a"], "value" => 10*i}
+ assert Enum.at(rows, 5) == %{"key" => ["d","b"], "value" => 10*i}
+ assert Enum.at(rows, 6) == %{"key" => ["d","c"], "value" => 10*i}
+
+ # test endkey with inclusive_end=true
+ query = %{group_level: 2, endkey: ~s(["d"]), inclusive_end: true}
+ rows = Couch.get(view_url, query: query).body["rows"]
+ assert Enum.at(rows, 0) == %{"key" => ["a"], "value" => 20*i}
+ assert Enum.at(rows, 1) == %{"key" => ["a","b"], "value" => 40*i}
+ assert Enum.at(rows, 2) == %{"key" => ["a","c"], "value" => 10*i}
+ assert Enum.at(rows, 3) == %{"key" => ["d"], "value" => 10*i}
+ assert length(rows) == 4
+
+ # test endkey with inclusive_end=false
+ query = %{group_level: 2, endkey: ~s(["d"]), inclusive_end: false}
+ rows = Couch.get(view_url, query: query).body["rows"]
+ assert Enum.at(rows, 0) == %{"key" => ["a"], "value" => 20*i}
+ assert Enum.at(rows, 1) == %{"key" => ["a","b"], "value" => 40*i}
+ assert Enum.at(rows, 2) == %{"key" => ["a","c"], "value" => 10*i}
+ assert length(rows) == 3
+ end
+ end
+
+ @tag :with_db
+ test "More complex reductions that need to use the combine option", context do
+ db_name = context[:db_name]
+ view_url = "/#{db_name}/_design/foo/_view/bar"
+ map = "function (doc) { emit(doc.val, doc.val); };"
+ reduce = ~s"""
+function (keys, values, rereduce) {
+ // This computes the standard deviation of the mapped results
+ var stdDeviation=0.0;
+ var count=0;
+ var total=0.0;
+ var sqrTotal=0.0;
+
+ if (!rereduce) {
+ // This is the reduce phase, we are reducing over emitted values from
+ // the map functions.
+ for(var i in values) {
+ total = total + values[i];
+ sqrTotal = sqrTotal + (values[i] * values[i]);
+ }
+ count = values.length;
+ }
+ else {
+ // This is the rereduce phase, we are re-reducing previously
+ // reduced values.
+ for(var i in values) {
+ count = count + values[i].count;
+ total = total + values[i].total;
+ sqrTotal = sqrTotal + values[i].sqrTotal;
+ }
+ }
+
+ var variance = (sqrTotal - ((total * total)/count)) / count;
+ stdDeviation = Math.sqrt(variance);
+
+ // the reduce result. It contains enough information to be rereduced
+ // with other reduce results.
+ return {"stdDeviation":stdDeviation,"count":count,
+ "total":total,"sqrTotal":sqrTotal};
+}
+ """
+
+ red_doc = %{:views => %{:bar => %{:map => map, :reduce => reduce}}}
+ assert Couch.put("/#{db_name}/_design/foo", [body: red_doc]).body["ok"]
+
+ Enum.each(1..10, fn _ ->
+ docs = for i <- 1..10, do: %{val: i * 10}
+ assert Couch.post("/#{db_name}/_bulk_docs", [body: %{:docs => docs}]).status_code == 201
+ end)
+ :timer.sleep(200) # *sigh*
+
+ rows = Couch.get(view_url).body["rows"]
+ assert_in_delta hd(rows)["value"]["stdDeviation"], 28.722813232690143, 0.0000000001
+ end
+
+ @tag :with_db
+ test "Reduce pagination", context do
+ db_name = context[:db_name]
+ view_url = "/#{db_name}/_design/foo/_view/bar"
+ ddoc = %{
+ _id: "_design/foo",
+ language: "javascript",
+ views: %{
+ bar: %{
+ reduce: "_count",
+ map: ~s"""
+ function(doc) {
+ emit(doc.int, doc._id);
+ emit(doc.int + 1, doc._id);
+ emit(doc.int + 2, doc._id);
+ }
+ """
+ }
+ }
+ }
+
+ assert Couch.put("/#{db_name}/_design/foo", [body: ddoc]).body["ok"]
+ docs = for i <- 0..1122, do: %{_id: Integer.to_string(i), int: i}
+ assert Couch.post("/#{db_name}/_bulk_docs", [body: %{:docs => docs}]).status_code == 201
+ :timer.sleep(200) # *sigh*
+
+ rand_val = fn -> :rand.uniform(100000000) end
+
+ # ?group=false tests
+ query = %{startkey: 400, endkey: 402, foobar: rand_val.()}
+ rows = Couch.get(view_url, query: query).body["rows"]
+ assert hd(rows)["value"] == 9
+ query = %{startkey: 402, endkey: 400, foobar: rand_val.(), descending: true}
+ rows = Couch.get(view_url, query: query).body["rows"]
+ assert hd(rows)["value"] == 9
+
+ query = %{startkey: 400, endkey: 402, foobar: rand_val.(), inclusive_end: false}
+ rows = Couch.get(view_url, query: query).body["rows"]
+ assert hd(rows)["value"] == 6
+ query = %{startkey: 402, endkey: 400, foobar: rand_val.(), inclusive_end: false, descending: true}
+ rows = Couch.get(view_url, query: query).body["rows"]
+ assert hd(rows)["value"] == 6
+
+ query = %{startkey: 400, endkey: 402, foobar: rand_val.(), endkey_docid: "400"}
+ rows = Couch.get(view_url, query: query).body["rows"]
+ assert hd(rows)["value"] == 7
+ query = %{startkey: 400, endkey: 402, foobar: rand_val.(), endkey_docid: "400", inclusive_end: false}
+ rows = Couch.get(view_url, query: query).body["rows"]
+ assert hd(rows)["value"] == 6
+
+ query = %{startkey: 400, endkey: 402, foobar: rand_val.(), endkey_docid: "401"}
+ rows = Couch.get(view_url, query: query).body["rows"]
+ assert hd(rows)["value"] == 8
+ query = %{startkey: 400, endkey: 402, foobar: rand_val.(), endkey_docid: "401", inclusive_end: false}
+ rows = Couch.get(view_url, query: query).body["rows"]
+ assert hd(rows)["value"] == 7
+
+ query = %{startkey: 400, endkey: 402, foobar: rand_val.(), endkey_docid: "402"}
+ rows = Couch.get(view_url, query: query).body["rows"]
+ assert hd(rows)["value"] == 9
+ query = %{startkey: 400, endkey: 402, foobar: rand_val.(), endkey_docid: "402", inclusive_end: false}
+ rows = Couch.get(view_url, query: query).body["rows"]
+ assert hd(rows)["value"] == 8
+
+ query = %{startkey: 402, endkey: 400, foobar: rand_val.(), endkey_docid: "398", descending: true}
+ rows = Couch.get(view_url, query: query).body["rows"]
+ assert hd(rows)["value"] == 9
+ query = %{startkey: 402, endkey: 400, foobar: rand_val.(), endkey_docid: "398", descending: true, inclusive_end: false}
+ rows = Couch.get(view_url, query: query).body["rows"]
+ assert hd(rows)["value"] == 8
+
+ query = %{startkey: 402, endkey: 400, foobar: rand_val.(), endkey_docid: "399", descending: true}
+ rows = Couch.get(view_url, query: query).body["rows"]
+ assert hd(rows)["value"] == 8
+ query = %{startkey: 402, endkey: 400, foobar: rand_val.(), endkey_docid: "399", descending: true, inclusive_end: false}
+ rows = Couch.get(view_url, query: query).body["rows"]
+ assert hd(rows)["value"] == 7
+
+ query = %{startkey: 402, endkey: 400, foobar: rand_val.(), endkey_docid: "400", descending: true}
+ rows = Couch.get(view_url, query: query).body["rows"]
+ assert hd(rows)["value"] == 7
+ query = %{startkey: 402, endkey: 400, foobar: rand_val.(), endkey_docid: "400", descending: true, inclusive_end: false}
+ rows = Couch.get(view_url, query: query).body["rows"]
+ assert hd(rows)["value"] == 6
+
+ query = %{startkey: 402, endkey: 400, foobar: rand_val.(), startkey_docid: "400", descending: true}
+ rows = Couch.get(view_url, query: query).body["rows"]
+ assert hd(rows)["value"] == 7
+
+ query = %{startkey: 402, endkey: 400, foobar: rand_val.(), startkey_docid: "401", descending: true, inclusive_end: false}
+ rows = Couch.get(view_url, query: query).body["rows"]
+ assert hd(rows)["value"] == 5
+
+ # ?group=true tests
+ query = %{:group => true, startkey: 400, endkey: 402, foobar: rand_val.()}
+ rows = Couch.get(view_url, query: query).body["rows"]
+ assert length(rows) == 3
+ assert Enum.at(rows, 0)["key"] == 400
+ assert Enum.at(rows, 0)["value"] == 3
+ assert Enum.at(rows, 1)["key"] == 401
+ assert Enum.at(rows, 1)["value"] == 3
+ assert Enum.at(rows, 2)["key"] == 402
+ assert Enum.at(rows, 2)["value"] == 3
+
+ query = %{:group => true, startkey: 402, endkey: 400, foobar: rand_val.(), descending: true}
+ rows = Couch.get(view_url, query: query).body["rows"]
+ assert length(rows) == 3
+ assert Enum.at(rows, 0)["key"] == 402
+ assert Enum.at(rows, 0)["value"] == 3
+ assert Enum.at(rows, 1)["key"] == 401
+ assert Enum.at(rows, 1)["value"] == 3
+ assert Enum.at(rows, 2)["key"] == 400
+ assert Enum.at(rows, 2)["value"] == 3
+
+ query = %{:group => true, startkey: 400, endkey: 402, foobar: rand_val.(), inclusive_end: false}
+ rows = Couch.get(view_url, query: query).body["rows"]
+ assert length(rows) == 2
+ assert Enum.at(rows, 0)["key"] == 400
+ assert Enum.at(rows, 0)["value"] == 3
+ assert Enum.at(rows, 1)["key"] == 401
+ assert Enum.at(rows, 1)["value"] == 3
+
+ query = %{:group => true, startkey: 402, endkey: 400, foobar: rand_val.(), inclusive_end: false, descending: true}
+ rows = Couch.get(view_url, query: query).body["rows"]
+ assert length(rows) == 2
+ assert Enum.at(rows, 0)["key"] == 402
+ assert Enum.at(rows, 0)["value"] == 3
+ assert Enum.at(rows, 1)["key"] == 401
+ assert Enum.at(rows, 1)["value"] == 3
+
+ query = %{:group => true, startkey: 400, endkey: 402, foobar: rand_val.(), endkey_docid: "401"}
+ rows = Couch.get(view_url, query: query).body["rows"]
+ assert length(rows) == 3
+ assert Enum.at(rows, 0)["key"] == 400
+ assert Enum.at(rows, 0)["value"] == 3
+ assert Enum.at(rows, 1)["key"] == 401
+ assert Enum.at(rows, 1)["value"] == 3
+ assert Enum.at(rows, 2)["key"] == 402
+ assert Enum.at(rows, 2)["value"] == 2
+
+ query = %{:group => true, startkey: 400, endkey: 402, foobar: rand_val.(), endkey_docid: "400"}
+ rows = Couch.get(view_url, query: query).body["rows"]
+ assert length(rows) == 3
+ assert Enum.at(rows, 0)["key"] == 400
+ assert Enum.at(rows, 0)["value"] == 3
+ assert Enum.at(rows, 1)["key"] == 401
+ assert Enum.at(rows, 1)["value"] == 3
+ assert Enum.at(rows, 2)["key"] == 402
+ assert Enum.at(rows, 2)["value"] == 1
+
+ query = %{:group => true, startkey: 402, endkey: 400, foobar: rand_val.(), startkey_docid: "401", descending: true}
+ rows = Couch.get(view_url, query: query).body["rows"]
+ assert length(rows) == 3
+ assert Enum.at(rows, 0)["key"] == 402
+ assert Enum.at(rows, 0)["value"] == 2
+ assert Enum.at(rows, 1)["key"] == 401
+ assert Enum.at(rows, 1)["value"] == 3
+ assert Enum.at(rows, 2)["key"] == 400
+ assert Enum.at(rows, 2)["value"] == 3
+
+ query = %{:group => true, startkey: 402, endkey: 400, foobar: rand_val.(), startkey_docid: "400", descending: true}
+ rows = Couch.get(view_url, query: query).body["rows"]
+ assert length(rows) == 3
+ assert Enum.at(rows, 0)["key"] == 402
+ assert Enum.at(rows, 0)["value"] == 1
+ assert Enum.at(rows, 1)["key"] == 401
+ assert Enum.at(rows, 1)["value"] == 3
+ assert Enum.at(rows, 2)["key"] == 400
+ assert Enum.at(rows, 2)["value"] == 3
+
+ query = %{:group => true, startkey: 402, endkey: 400, foobar: rand_val.(), startkey_docid: "401", descending: true, inclusive_end: false}
+ rows = Couch.get(view_url, query: query).body["rows"]
+ assert length(rows) == 2
+ assert Enum.at(rows, 0)["key"] == 402
+ assert Enum.at(rows, 0)["value"] == 2
+ assert Enum.at(rows, 1)["key"] == 401
+ assert Enum.at(rows, 1)["value"] == 3
+
+ query = %{:group => true, startkey: 402, endkey: 400, foobar: rand_val.(), startkey_docid: "400", descending: true, inclusive_end: false}
+ rows = Couch.get(view_url, query: query).body["rows"]
+ assert length(rows) == 2
+ assert Enum.at(rows, 0)["key"] == 402
+ assert Enum.at(rows, 0)["value"] == 1
+ assert Enum.at(rows, 1)["key"] == 401
+ assert Enum.at(rows, 1)["value"] == 3
+
+ query = %{:group => true, startkey: 402, endkey: 400, foobar: rand_val.(), endkey_docid: "398", descending: true, inclusive_end: true}
+ rows = Couch.get(view_url, query: query).body["rows"]
+ assert length(rows) == 3
+ assert Enum.at(rows, 0)["key"] == 402
+ assert Enum.at(rows, 0)["value"] == 3
+ assert Enum.at(rows, 1)["key"] == 401
+ assert Enum.at(rows, 1)["value"] == 3
+ assert Enum.at(rows, 2)["key"] == 400
+ assert Enum.at(rows, 2)["value"] == 3
+
+ query = %{:group => true, startkey: 402, endkey: 400, foobar: rand_val.(), endkey_docid: "399", descending: true, inclusive_end: true}
+ rows = Couch.get(view_url, query: query).body["rows"]
+ assert length(rows) == 3
+ assert Enum.at(rows, 0)["key"] == 402
+ assert Enum.at(rows, 0)["value"] == 3
+ assert Enum.at(rows, 1)["key"] == 401
+ assert Enum.at(rows, 1)["value"] == 3
+ assert Enum.at(rows, 2)["key"] == 400
+ assert Enum.at(rows, 2)["value"] == 2
+
+ query = %{:group => true, startkey: 402, endkey: 400, foobar: rand_val.(), endkey_docid: "399", descending: true, inclusive_end: false}
+ rows = Couch.get(view_url, query: query).body["rows"]
+ assert length(rows) == 3
+ assert Enum.at(rows, 0)["key"] == 402
+ assert Enum.at(rows, 0)["value"] == 3
+ assert Enum.at(rows, 1)["key"] == 401
+ assert Enum.at(rows, 1)["value"] == 3
+ assert Enum.at(rows, 2)["key"] == 400
+ assert Enum.at(rows, 2)["value"] == 1
+
+ query = %{:group => true, startkey: 402, endkey: 400, foobar: rand_val.(), endkey_docid: "400", descending: true, inclusive_end: false}
+ rows = Couch.get(view_url, query: query).body["rows"]
+ assert length(rows) == 2
+ assert Enum.at(rows, 0)["key"] == 402
+ assert Enum.at(rows, 0)["value"] == 3
+ assert Enum.at(rows, 1)["key"] == 401
+ assert Enum.at(rows, 1)["value"] == 3
+ end
+end
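A note on the expected values in the basic-reduce assertions above: `summate/1` is the closed form for 1 + 2 + ... + n, and the first assertion doubles it because the map function emits each `doc.integer` twice. A quick sketch of the arithmetic (illustrative only, not part of the commit; `div/2` is used here to keep the result integral, where the port's `/` yields a float):

```
summate = fn n -> div((n + 1) * n, 2) end

summate.(500)      # 125250
2 * summate.(500)  # 250500, the expected first reduce row for num_docs = 500
```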
[couchdb] 14/31: Embrace the Elixir
Posted by ch...@apache.org.
commit 3667d8f26763d4f1ac804df39c0c708ca4624fb8
Author: Paul J. Davis <pa...@gmail.com>
AuthorDate: Thu Dec 7 11:39:43 2017 -0600
Embrace the Elixir
Turns out this idiom has a builtin reduce variant.
---
elixir_suite/test/uuids_test.exs | 9 ++-------
1 file changed, 2 insertions(+), 7 deletions(-)
diff --git a/elixir_suite/test/uuids_test.exs b/elixir_suite/test/uuids_test.exs
index 563f73b..3eda458 100644
--- a/elixir_suite/test/uuids_test.exs
+++ b/elixir_suite/test/uuids_test.exs
@@ -50,10 +50,7 @@ defmodule UUIDsTest do
test "sequential uuids are sequential" do
resp = Couch.get("/_uuids", query: %{:count => 1000})
assert resp.status_code == 200
- [uuid | rest_uuids] = resp.body["uuids"]
-
- assert String.length(uuid) == 32
- Enum.reduce(rest_uuids, uuid, fn curr, acc ->
+ Enum.reduce(resp.body["uuids"], fn curr, acc ->
assert String.length(curr) == 32
assert acc < curr
curr
@@ -87,9 +84,7 @@ defmodule UUIDsTest do
test "utc_id uuids are correct" do
resp = Couch.get("/_uuids", query: %{:count => 10})
assert resp.status_code == 200
- [uuid | rest_uuids] = resp.body["uuids"]
-
- Enum.reduce(rest_uuids, uuid, fn curr, acc ->
+ Enum.reduce(resp.body["uuids"], fn curr, acc ->
assert String.length(curr) == 14 + String.length(@utc_id_suffix)
assert String.slice(curr, 14..-1) == @utc_id_suffix
assert curr > acc
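The "builtin reduce variant" the commit message refers to is `Enum.reduce/2`, which seeds the accumulator with the first element of the enumerable, so the manual `[uuid | rest_uuids]` head split becomes unnecessary. A minimal sketch of the pairwise-ordering idiom, with illustrative values rather than real UUIDs:

```
# Enum.reduce/2 uses the first element as the initial accumulator, so
# each callback invocation compares an element with its predecessor.
sorted = ["0a1", "0a2", "0b0"]

Enum.reduce(sorted, fn curr, prev ->
  true = prev < curr  # raises MatchError if ordering is violated
  curr
end)
```

The only behavioral difference from the three-arity form is that the first element is never passed through the assertions itself, which is why the explicit `String.length(uuid) == 32` check on the head was dropped alongside it.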
[couchdb] 19/31: Port httpotion functionality until released
Posted by ch...@apache.org.
commit cae385afa74126c9a4dd1ca1dd9dfc1fecef4aed
Author: Russell Branca <ch...@apache.org>
AuthorDate: Thu Dec 14 20:26:22 2017 +0000
Port httpotion functionality until released
---
elixir_suite/lib/couch.ex | 38 ++++++++++++++++++++++++++++++++++++--
1 file changed, 36 insertions(+), 2 deletions(-)
diff --git a/elixir_suite/lib/couch.ex b/elixir_suite/lib/couch.ex
index d879ecf..5119011 100644
--- a/elixir_suite/lib/couch.ex
+++ b/elixir_suite/lib/couch.ex
@@ -27,8 +27,13 @@ defmodule Couch do
end
end
- def process_response_body(body) do
- body |> IO.iodata_to_binary |> :jiffy.decode([:return_maps])
+ def process_response_body(headers, body) do
+ case headers[:'content-type'] do
+ "application/json" ->
+ body |> IO.iodata_to_binary |> :jiffy.decode([:return_maps])
+ _ ->
+ process_response_body(body)
+ end
end
def login(user, pass) do
@@ -36,4 +41,33 @@ defmodule Couch do
true = resp.body["ok"]
resp.body
end
+
+ # HACK: this is here until this commit lands in a release
+ # https://github.com/myfreeweb/httpotion/commit/f3fa2f0bc3b9b400573942b3ba4628b48bc3c614
+ def handle_response(response) do
+ case response do
+ { :ok, status_code, headers, body, _ } ->
+ processed_headers = process_response_headers(headers)
+ %HTTPotion.Response{
+ status_code: process_status_code(status_code),
+ headers: processed_headers,
+ body: process_response_body(processed_headers, body)
+ }
+ { :ok, status_code, headers, body } ->
+ processed_headers = process_response_headers(headers)
+ %HTTPotion.Response{
+ status_code: process_status_code(status_code),
+ headers: processed_headers,
+ body: process_response_body(processed_headers, body)
+ }
+ { :ibrowse_req_id, id } ->
+ %HTTPotion.AsyncResponse{ id: id }
+ { :error, { :conn_failed, { :error, reason }}} ->
+ %HTTPotion.ErrorResponse{ message: error_to_string(reason)}
+ { :error, :conn_failed } ->
+ %HTTPotion.ErrorResponse{ message: "conn_failed"}
+ { :error, reason } ->
+ %HTTPotion.ErrorResponse{ message: error_to_string(reason)}
+ end
+ end
end
[couchdb] 20/31: WIP: Port most of rewrite.js suite
Posted by ch...@apache.org.
commit 0097ac565517c460fa25eb40f1ad4e3750266eca
Author: Russell Branca <ch...@apache.org>
AuthorDate: Thu Dec 14 20:26:45 2017 +0000
WIP: Port most of rewrite.js suite
---
elixir_suite/test/reduce_test.exs | 10 --
elixir_suite/test/rewrite_test.exs | 339 +++++++++++++++++++++++++++++++++++++
elixir_suite/test/test_helper.exs | 20 ++-
3 files changed, 358 insertions(+), 11 deletions(-)
diff --git a/elixir_suite/test/reduce_test.exs b/elixir_suite/test/reduce_test.exs
index 9a49bfa..5c56165 100644
--- a/elixir_suite/test/reduce_test.exs
+++ b/elixir_suite/test/reduce_test.exs
@@ -12,16 +12,6 @@ defmodule ReduceTest do
(n + 1) * n / 2
end
- def make_docs(id, count) do
- for i <- id..count do
- %{
- :_id => Integer.to_string(i),
- :integer => i,
- :string => Integer.to_string(i)
- }
- end
- end
-
@tag :with_db
test "Basic reduce functions", context do
db_name = context[:db_name]
diff --git a/elixir_suite/test/rewrite_test.exs b/elixir_suite/test/rewrite_test.exs
new file mode 100644
index 0000000..54ff80a
--- /dev/null
+++ b/elixir_suite/test/rewrite_test.exs
@@ -0,0 +1,339 @@
+defmodule RewriteTest do
+ use CouchTestCase
+
+ @moduletag :js_engine
+
+ @moduledoc """
+ Test CouchDB rewrites
+ This is a port of the rewrite.js suite
+ """
+
+ Enum.each(["test_rewrite_suite_db", "test_rewrite_suite_db%2Fwith_slashes"], fn db_name ->
+ @tag with_random_db: db_name
+ @tag config: [
+ {"httpd", "authentication_handlers", "{couch_httpd_auth, special_test_authentication_handler}"},
+ {"httpd", "WWW-Authenticate", "X-Couch-Test-Auth"}
+ ]
+ test "Test basic rewrites on #{db_name}", context do
+ db_name = context[:db_name]
+ ddoc = ~S"""
+{
+ "_id": "_design/test",
+ "language": "javascript",
+ "_attachments": {
+ "foo.txt": {
+ "content_type":"text/plain",
+ "data": "VGhpcyBpcyBhIGJhc2U2NCBlbmNvZGVkIHRleHQ="
+ }
+ },
+ "rewrites": [
+ {
+ "from": "foo",
+ "to": "foo.txt"
+ },
+ {
+ "from": "foo2",
+ "to": "foo.txt",
+ "method": "GET"
+ },
+ {
+ "from": "hello/:id",
+ "to": "_update/hello/:id",
+ "method": "PUT"
+ },
+ {
+ "from": "/welcome",
+ "to": "_show/welcome"
+ },
+ {
+ "from": "/welcome/:name",
+ "to": "_show/welcome",
+ "query": {
+ "name": ":name"
+ }
+ },
+ {
+ "from": "/welcome2",
+ "to": "_show/welcome",
+ "query": {
+ "name": "user"
+ }
+ },
+ {
+ "from": "/welcome3/:name",
+ "to": "_update/welcome2/:name",
+ "method": "PUT"
+ },
+ {
+ "from": "/welcome3/:name",
+ "to": "_show/welcome2/:name",
+ "method": "GET"
+ },
+ {
+ "from": "/welcome4/*",
+ "to" : "_show/welcome3",
+ "query": {
+ "name": "*"
+ }
+ },
+ {
+ "from": "/welcome5/*",
+ "to" : "_show/*",
+ "query": {
+ "name": "*"
+ }
+ },
+ {
+ "from": "basicView",
+ "to": "_view/basicView"
+ },
+ {
+ "from": "simpleForm/basicView",
+ "to": "_list/simpleForm/basicView"
+ },
+ {
+ "from": "simpleForm/basicViewFixed",
+ "to": "_list/simpleForm/basicView",
+ "query": {
+ "startkey": 3,
+ "endkey": 8
+ }
+ },
+ {
+ "from": "simpleForm/basicViewPath/:start/:end",
+ "to": "_list/simpleForm/basicView",
+ "query": {
+ "startkey": ":start",
+ "endkey": ":end"
+ },
+ "formats": {
+ "start": "int",
+ "end": "int"
+ }
+ },
+ {
+ "from": "simpleForm/complexView",
+ "to": "_list/simpleForm/complexView",
+ "query": {
+ "key": [1, 2]
+ }
+ },
+ {
+ "from": "simpleForm/complexView2",
+ "to": "_list/simpleForm/complexView",
+ "query": {
+ "key": ["test", {}]
+ }
+ },
+ {
+ "from": "simpleForm/complexView3",
+ "to": "_list/simpleForm/complexView",
+ "query": {
+ "key": ["test", ["test", "essai"]]
+ }
+ },
+ {
+ "from": "simpleForm/complexView4",
+ "to": "_list/simpleForm/complexView2",
+ "query": {
+ "key": {"c": 1}
+ }
+ },
+ {
+ "from": "simpleForm/complexView5/:a/:b",
+ "to": "_list/simpleForm/complexView3",
+ "query": {
+ "key": [":a", ":b"]
+ }
+ },
+ {
+ "from": "simpleForm/complexView6",
+ "to": "_list/simpleForm/complexView3",
+ "query": {
+ "key": [":a", ":b"]
+ }
+ },
+ {
+ "from": "simpleForm/complexView7/:a/:b",
+ "to": "_view/complexView3",
+ "query": {
+ "key": [":a", ":b"],
+ "include_docs": ":doc"
+ },
+ "format": {
+ "doc": "bool"
+ }
+
+ },
+ {
+ "from": "/",
+ "to": "_view/basicView"
+ },
+ {
+ "from": "/db/*",
+ "to": "../../*"
+ }
+ ],
+ "lists": {
+ "simpleForm": "function(head, req) {
+ log(\"simpleForm\");
+ send(\"<ul>\");
+ var row, row_number = 0, prevKey, firstKey = null;
+ while (row = getRow()) {
+ row_number += 1;
+ if (!firstKey) firstKey = row.key;
+ prevKey = row.key;
+ send(\"\\n<li>Key: \"+row.key
+ +\" Value: \"+row.value
+ +\" LineNo: \"+row_number+\"</li>\");
+ }
+ return \"</ul><p>FirstKey: \"+ firstKey + \" LastKey: \"+ prevKey+\"</p>\";
+ }"
+ },
+ "shows": {
+ "welcome": "(function(doc,req) {
+ return \"Welcome \" + req.query[\"name\"];
+ })",
+ "welcome2": "(function(doc, req) {
+ return \"Welcome \" + doc.name;
+ })",
+ "welcome3": "(function(doc,req) {
+ return \"Welcome \" + req.query[\"name\"];
+ })"
+ },
+ "updates": {
+ "hello" : "(function(doc, req) {
+ if (!doc) {
+ if (req.id) {
+ return [{
+ _id : req.id
+ }, \"New World\"]
+ }
+ return [null, \"Empty World\"];
+ }
+ doc.world = \"hello\";
+ doc.edited_by = req.userCtx;
+ return [doc, \"hello doc\"];
+ })",
+ "welcome2": "(function(doc, req) {
+ if (!doc) {
+ if (req.id) {
+ return [{
+ _id: req.id,
+ name: req.id
+ }, \"New World\"]
+ }
+ return [null, \"Empty World\"];
+ }
+ return [doc, \"hello doc\"];
+ })"
+ },
+ "views" : {
+ "basicView" : {
+ "map" : "(function(doc) {
+ if (doc.integer) {
+ emit(doc.integer, doc.string);
+ }
+
+ })"
+ },
+ "complexView": {
+ "map": "(function(doc) {
+ if (doc.type == \"complex\") {
+ emit([doc.a, doc.b], doc.string);
+ }
+ })"
+ },
+ "complexView2": {
+ "map": "(function(doc) {
+ if (doc.type == \"complex\") {
+ emit(doc.a, doc.string);
+ }
+ })"
+ },
+ "complexView3": {
+ "map": "(function(doc) {
+ if (doc.type == \"complex\") {
+ emit(doc.b, doc.string);
+ }
+ })"
+ }
+ }
+}
+ """
+ ddoc = String.replace(ddoc, ~r/[\r\n]+/, "")
+
+ docs1 = make_docs(0, 9)
+ docs2 = [
+ %{"a" => 1, "b" => 1, "string" => "doc 1", "type" => "complex"},
+ %{"a" => 1, "b" => 2, "string" => "doc 2", "type" => "complex"},
+ %{"a" => "test", "b" => %{}, "string" => "doc 3", "type" => "complex"},
+ %{"a" => "test", "b" => ["test", "essai"], "string" => "doc 4", "type" => "complex"},
+ %{"a" => %{"c" => 1}, "b" => "", "string" => "doc 5", "type" => "complex"}
+ ]
+
+ assert Couch.put("/#{db_name}/_design/test", [body: ddoc]).body["ok"]
+ assert Couch.post("/#{db_name}/_bulk_docs", [body: %{:docs => docs1}, query: %{w: 3}]).status_code == 201
+ assert Couch.post("/#{db_name}/_bulk_docs", [body: %{:docs => docs2}, query: %{w: 3}]).status_code == 201
+
+ # Test simple rewriting
+ resp = Couch.get("/#{db_name}/_design/test/_rewrite/foo")
+ assert resp.body == "This is a base64 encoded text"
+ assert resp.headers["Content-Type"] == "text/plain"
+
+ resp = Couch.get("/#{db_name}/_design/test/_rewrite/foo2")
+ assert resp.body == "This is a base64 encoded text"
+ assert resp.headers["Content-Type"] == "text/plain"
+
+ # Test POST, hello update world
+ resp = Couch.post("/#{db_name}", [body: %{"word" => "plankton", "name" => "Rusty"}]).body
+ assert resp["ok"]
+ doc_id = resp["id"]
+ assert doc_id
+
+ resp = Couch.put("/#{db_name}/_design/test/_rewrite/hello/#{doc_id}")
+ assert resp.status_code == 201
+ assert resp.body == "hello doc"
+ assert String.match?(resp.headers["Content-Type"], ~r/charset=utf-8/)
+
+ assert Couch.get("/#{db_name}/#{doc_id}").body["world"] == "hello"
+
+ resp = Couch.get("/#{db_name}/_design/test/_rewrite/welcome?name=user")
+ assert resp.body == "Welcome user"
+
+ resp = Couch.get("/#{db_name}/_design/test/_rewrite/welcome/user")
+ assert resp.body == "Welcome user"
+
+ resp = Couch.get("/#{db_name}/_design/test/_rewrite/welcome2")
+ assert resp.body == "Welcome user"
+
+ resp = Couch.put("/#{db_name}/_design/test/_rewrite/welcome3/test")
+ assert resp.status_code == 201
+ assert resp.body == "New World"
+ assert String.match?(resp.headers["Content-Type"], ~r/charset=utf-8/)
+
+ resp = Couch.get("/#{db_name}/_design/test/_rewrite/welcome3/test")
+ assert resp.body == "Welcome test"
+
+ # TODO: port the two "bugged" tests from rewrite.js
+
+ resp = Couch.get("/#{db_name}/_design/test/_rewrite/basicView")
+ assert resp.status_code == 200
+ assert resp.body["total_rows"] == 9
+
+ resp = Couch.get("/#{db_name}/_design/test/_rewrite")
+ assert resp.status_code == 200
+ assert resp.body["total_rows"] == 9
+
+ # TODO: port _list function tests and everything below in rewrite.js
+ # This is currently broken because _list functions default to application/json
+ # response bodies and my attempts to change the content-type from within the
+ # _list function have not yet succeeded.
+ #
+ # Test GET with query params
+ # resp = Couch.get("/#{db_name}/_design/test/_rewrite/simpleForm/basicView", query: %{startkey: 3, endkey: 8})
+ # Logger.error("GOT RESP: #{inspect resp.body}")
+ # assert resp.status_code == 200
+ end
+ end)
+end
diff --git a/elixir_suite/test/test_helper.exs b/elixir_suite/test/test_helper.exs
index 1c61e4a..cb01fc2 100644
--- a/elixir_suite/test/test_helper.exs
+++ b/elixir_suite/test/test_helper.exs
@@ -26,6 +26,10 @@ defmodule CouchTestCase do
Map.put(context, :db_name, random_db_name())
%{:with_db_name => db_name} when is_binary(db_name) ->
Map.put(context, :db_name, db_name)
+ %{:with_random_db => db_name} when is_binary(db_name) ->
+ context
+ |> Map.put(:db_name, random_db_name(db_name))
+ |> Map.put(:with_db, true)
%{:with_db => true} ->
Map.put(context, :db_name, random_db_name())
%{:with_db => db_name} when is_binary(db_name) ->
@@ -52,9 +56,13 @@ defmodule CouchTestCase do
end
def random_db_name do
+ random_db_name("random-test-db")
+ end
+
+ def random_db_name(prefix) do
time = :erlang.monotonic_time()
umi = :erlang.unique_integer([:monotonic])
- "random-test-db-#{time}-#{umi}"
+ "#{prefix}-#{time}-#{umi}"
end
def set_config({section, key, value}) do
@@ -112,6 +120,16 @@ defmodule CouchTestCase do
"bar": "baz"
}
end
+
+ def make_docs(id, count) do
+ for i <- id..count do
+ %{
+ :_id => Integer.to_string(i),
+ :integer => i,
+ :string => Integer.to_string(i)
+ }
+ end
+ end
end
end
end
[couchdb] 25/31: Allow tests to specify the request Content-Type
commit 1d97b3586354423362f8005704c883acfe2077c8
Author: Paul J. Davis <pa...@gmail.com>
AuthorDate: Fri Dec 15 12:31:12 2017 -0600
Allow tests to specify the request Content-Type
---
test/elixir/lib/couch.ex | 9 ++++++---
1 file changed, 6 insertions(+), 3 deletions(-)
diff --git a/test/elixir/lib/couch.ex b/test/elixir/lib/couch.ex
index 8f0aca9..8ad4821 100644
--- a/test/elixir/lib/couch.ex
+++ b/test/elixir/lib/couch.ex
@@ -10,9 +10,12 @@ defmodule Couch do
end
def process_request_headers(headers) do
- headers
- |> Dict.put(:"User-Agent", "couch-potion")
- |> Dict.put(:"Content-Type", "application/json")
+ headers = Keyword.put(headers, :"User-Agent", "couch-potion")
+ if headers[:"Content-Type"] do
+ headers
+ else
+ Keyword.put(headers, :"Content-Type", "application/json")
+ end
end
def process_options(options) do
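With this change a test can force a different request Content-Type simply by passing one in the headers option; the default is only injected when no header is present. A hedged sketch of the intended usage (the endpoint and form payload are illustrative, not from this commit):

```elixir
# Override the default "application/json" request header.
# Assumes a running CouchDB reachable through this suite's Couch module.
resp = Couch.post("/_session",
  headers: ["Content-Type": "application/x-www-form-urlencoded"],
  body: "name=adm&password=pass"
)

# Without an explicit header, process_request_headers/1 still adds
# "Content-Type: application/json" as before.
resp = Couch.get("/_all_dbs")
```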
[couchdb] 27/31: Allow tests to set config values dynamically
commit 534739fe56422245b3f48e9f686f70168efe3b11
Author: Paul J. Davis <pa...@gmail.com>
AuthorDate: Fri Jan 26 14:31:47 2018 -0600
Allow tests to set config values dynamically
---
test/elixir/test/test_helper.exs | 22 +++++++++++++---------
1 file changed, 13 insertions(+), 9 deletions(-)
diff --git a/test/elixir/test/test_helper.exs b/test/elixir/test/test_helper.exs
index cb01fc2..9baf204 100644
--- a/test/elixir/test/test_helper.exs
+++ b/test/elixir/test/test_helper.exs
@@ -66,15 +66,7 @@ defmodule CouchTestCase do
end
def set_config({section, key, value}) do
- resp = Couch.get("/_membership")
- existing = Enum.map(resp.body["all_nodes"], fn node ->
- url = "/_node/#{node}/_config/#{section}/#{key}"
- headers = ["X-Couch-Persist": "false"]
- body = :jiffy.encode(value)
- resp = Couch.put(url, headers: headers, body: body)
- assert resp.status_code == 200
- {node, resp.body}
- end)
+ existing = set_config_raw(section, key, value)
on_exit(fn ->
Enum.each(existing, fn {node, prev_value} ->
if prev_value != "" do
@@ -93,6 +85,18 @@ defmodule CouchTestCase do
end)
end
+ def set_config_raw(section, key, value) do
+ resp = Couch.get("/_membership")
+ Enum.map(resp.body["all_nodes"], fn node ->
+ url = "/_node/#{node}/_config/#{section}/#{key}"
+ headers = ["X-Couch-Persist": "false"]
+ body = :jiffy.encode(value)
+ resp = Couch.put(url, headers: headers, body: body)
+ assert resp.status_code == 200
+ {node, resp.body}
+ end)
+ end
+
def create_db(db_name) do
resp = Couch.put("/#{db_name}")
assert resp.status_code == 201
[couchdb] 28/31: Add helper functions for user sessions
commit 508cb824273654a53663cc5ef2d2e242e219b998
Author: Paul J. Davis <pa...@gmail.com>
AuthorDate: Fri Jan 26 14:30:31 2018 -0600
Add helper functions for user sessions
---
test/elixir/lib/couch.ex | 98 ++++++++++++++++++++++++++++++++++++++++++++++--
1 file changed, 94 insertions(+), 4 deletions(-)
diff --git a/test/elixir/lib/couch.ex b/test/elixir/lib/couch.ex
index 8ad4821..3a491e8 100644
--- a/test/elixir/lib/couch.ex
+++ b/test/elixir/lib/couch.ex
@@ -1,3 +1,44 @@
+defmodule Couch.Session do
+ @enforce_keys [:cookie]
+ defstruct [:cookie]
+
+ def new(cookie) do
+ %Couch.Session{cookie: cookie}
+ end
+
+ def logout(sess) do
+ headers = [
+ "Content-Type": "application/x-www-form-urlencoded",
+ "X-CouchDB-WWW-Authenticate": "Cookie",
+ "Cookie": sess.cookie
+ ]
+ Couch.delete!("/_session", headers: headers)
+ end
+
+ def get(sess, url, opts \\ []), do: go(sess, :get, url, opts)
+ def get!(sess, url, opts \\ []), do: go!(sess, :get, url, opts)
+ def put(sess, url, opts \\ []), do: go(sess, :put, url, opts)
+ def put!(sess, url, opts \\ []), do: go!(sess, :put, url, opts)
+ def post(sess, url, opts \\ []), do: go(sess, :post, url, opts)
+ def post!(sess, url, opts \\ []), do: go!(sess, :post, url, opts)
+ def delete(sess, url, opts \\ []), do: go(sess, :delete, url, opts)
+ def delete!(sess, url, opts \\ []), do: go!(sess, :delete, url, opts)
+
+ # Skipping head/patch/options for YAGNI. Feel free to add
+ # if the need arises.
+
+ def go(%Couch.Session{} = sess, method, url, opts) do
+ opts = Keyword.merge(opts, [cookie: sess.cookie])
+ Couch.request(method, url, opts)
+ end
+
+ def go!(%Couch.Session{} = sess, method, url, opts) do
+ opts = Keyword.merge(opts, [cookie: sess.cookie])
+ Couch.request!(method, url, opts)
+ end
+end
+
+
defmodule Couch do
use HTTPotion.Base
@@ -9,17 +50,28 @@ defmodule Couch do
"http://localhost:15984" <> url
end
- def process_request_headers(headers) do
+ def process_request_headers(headers, options) do
headers = Keyword.put(headers, :"User-Agent", "couch-potion")
- if headers[:"Content-Type"] do
+ headers = if headers[:"Content-Type"] do
headers
else
Keyword.put(headers, :"Content-Type", "application/json")
end
+ case Keyword.get options, :cookie do
+ nil ->
+ headers
+ cookie ->
+ Keyword.put headers, :"Cookie", cookie
+ end
end
+
def process_options(options) do
- Dict.put options, :basic_auth, {"adm", "pass"}
+ if Keyword.get(options, :cookie) == nil do
+ Keyword.put(options, :basic_auth, {"adm", "pass"})
+ else
+ options
+ end
end
def process_request_body(body) do
@@ -38,10 +90,17 @@ defmodule Couch do
end
end
+ def login(userinfo) do
+ [user, pass] = String.split(userinfo, ":", [parts: 2])
+ login(user, pass)
+ end
+
def login(user, pass) do
resp = Couch.post("/_session", body: %{:username => user, :password => pass})
true = resp.body["ok"]
- resp.body
+ cookie = resp.headers[:'set-cookie']
+ [token | _] = String.split(cookie, ";")
+ %Couch.Session{cookie: token}
end
# HACK: this is here until this commit lands in a release
@@ -72,4 +131,35 @@ defmodule Couch do
%HTTPotion.ErrorResponse{ message: error_to_string(reason)}
end
end
+
+ # Another HACK: Until we can get process_request_headers/2 merged
+ # upstream.
+ @spec process_arguments(atom, String.t, [{atom(), any()}]) :: %{}
+ defp process_arguments(method, url, options) do
+ options = process_options(options)
+
+ body = Keyword.get(options, :body, "")
+ headers = Keyword.merge Application.get_env(:httpotion, :default_headers, []), Keyword.get(options, :headers, [])
+ timeout = Keyword.get(options, :timeout, Application.get_env(:httpotion, :default_timeout, 5000))
+ ib_options = Keyword.merge Application.get_env(:httpotion, :default_ibrowse, []), Keyword.get(options, :ibrowse, [])
+ follow_redirects = Keyword.get(options, :follow_redirects, Application.get_env(:httpotion, :default_follow_redirects, false))
+
+ ib_options = if stream_to = Keyword.get(options, :stream_to), do: Keyword.put(ib_options, :stream_to, spawn(__MODULE__, :transformer, [stream_to, method, url, options])), else: ib_options
+ ib_options = if user_password = Keyword.get(options, :basic_auth) do
+ {user, password} = user_password
+ Keyword.put(ib_options, :basic_auth, { to_charlist(user), to_charlist(password) })
+ else
+ ib_options
+ end
+
+ %{
+ method: method,
+ url: url |> to_string |> process_url(options) |> to_charlist,
+ body: body |> process_request_body,
+ headers: headers |> process_request_headers(options) |> Enum.map(fn ({k, v}) -> { to_charlist(k), to_charlist(v) } end),
+ timeout: timeout,
+ ib_options: ib_options,
+ follow_redirects: follow_redirects
+ }
+ end
end
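Taken together, the helpers in this commit allow a login/request/logout flow along these lines (a sketch; the `jan:apple` credentials are hypothetical and assume a user created elsewhere, plus a running CouchDB):

```elixir
# Couch.login/1 POSTs to /_session and captures the AuthSession
# cookie token in a %Couch.Session{} struct.
session = Couch.login("jan:apple")

# Session requests reuse the cookie instead of basic auth.
resp = Couch.Session.get(session, "/_all_dbs")
assert resp.status_code == 200

# DELETE /_session with the stored cookie ends the session.
Couch.Session.logout(session)
```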
[couchdb] 06/31: Add support for a config tag
commit 05fa713f32329bd7b04d23259109efca30563da7
Author: Paul J. Davis <pa...@gmail.com>
AuthorDate: Wed Dec 6 18:28:16 2017 -0600
Add support for a config tag
This allows tests to use a config tag to mimic the JavaScript
"run_on_modified_server" function. The value of a config tag should be
set to a list of 3-tuples that contain the section/key/value to set. The
previous value is restored automatically after the test is finished.
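Concretely, a test tagged this way might look like the following sketch, using the same section/key/value that the rewrite suite sets (the test body is illustrative):

```elixir
@tag config: [
  {"httpd", "authentication_handlers",
   "{couch_httpd_auth, special_test_authentication_handler}"}
]
test "runs against a modified server", context do
  # By the time the test body runs, set_config/1 has applied the
  # tuple above to every node via /_node/{node}/_config/... and
  # registered an on_exit callback that restores (or deletes) the
  # previous value once the test finishes.
  assert context[:db_name] != nil
end
```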
---
elixir_suite/test/test_helper.exs | 43 +++++++++++++++++++++++++++++++++++++++
1 file changed, 43 insertions(+)
diff --git a/elixir_suite/test/test_helper.exs b/elixir_suite/test/test_helper.exs
index b8adb52..ecd88e5 100644
--- a/elixir_suite/test/test_helper.exs
+++ b/elixir_suite/test/test_helper.exs
@@ -15,6 +15,12 @@ defmodule CouchTestCase do
use ExUnit.Case
setup context do
+ {:ok, db_context} = set_db_context(context)
+ {:ok, cfg_context} = set_config_context(context)
+ {:ok, db_context ++ cfg_context}
+ end
+
+ def set_db_context(context) do
db_name = if context[:with_db] != nil or context[:with_db_name] != nil do
if context[:with_db] != nil and context[:with_db] != true do
context[:with_db]
@@ -36,12 +42,49 @@ defmodule CouchTestCase do
{:ok, db_name: db_name}
end
+ def set_config_context(context) do
+ if is_list(context[:config]) do
+ Enum.each(context[:config], fn cfg ->
+ set_config(cfg)
+ end)
+ end
+ {:ok, []}
+ end
+
def random_db_name do
time = :erlang.monotonic_time()
umi = :erlang.unique_integer([:monotonic])
"random-test-db-#{time}-#{umi}"
end
+ def set_config({section, key, value}) do
+ resp = Couch.get("/_membership")
+ existing = Enum.map(resp.body["all_nodes"], fn node ->
+ url = "/_node/#{node}/_config/#{section}/#{key}"
+ headers = ["X-Couch-Persist": "false"]
+ body = :jiffy.encode(value)
+ resp = Couch.put(url, headers: headers, body: body)
+ assert resp.status_code == 200
+ {node, resp.body}
+ end)
+ on_exit(fn ->
+ Enum.each(existing, fn {node, prev_value} ->
+ if prev_value != "" do
+ url = "/_node/#{node}/_config/#{section}/#{key}"
+ headers = ["X-Couch-Persist": "false"]
+ body = :jiffy.encode(prev_value)
+ resp = Couch.put(url, headers: headers, body: body)
+ assert resp.status_code == 200
+ else
+ url = "/_node/#{node}/_config/#{section}/#{key}"
+ headers = ["X-Couch-Persist": "false"]
+ resp = Couch.delete(url, headers: headers)
+ assert resp.status_code == 200
+ end
+ end)
+ end)
+ end
+
def create_db(db_name) do
resp = Couch.put("/#{db_name}")
assert resp.status_code == 201
[couchdb] 30/31: Port replication.js to replication_test.ex
commit 29a656128f6896349ee10c45c1d8906f2c5b874f
Author: Paul J. Davis <pa...@gmail.com>
AuthorDate: Fri Jan 26 14:37:06 2018 -0600
Port replication.js to replication_test.ex
---
test/elixir/test/data/lorem.txt | 103 ++
test/elixir/test/data/lorem_b64.txt | 1 +
test/elixir/test/replication_test.exs | 1711 +++++++++++++++++++++++++++++++++
3 files changed, 1815 insertions(+)
diff --git a/test/elixir/test/data/lorem.txt b/test/elixir/test/data/lorem.txt
new file mode 100644
index 0000000..0ef85ba
--- /dev/null
+++ b/test/elixir/test/data/lorem.txt
@@ -0,0 +1,103 @@
+Lorem ipsum dolor sit amet, consectetur adipiscing elit. Phasellus nunc sapien, porta id pellentesque at, elementum et felis. Curabitur condimentum ante in metus iaculis quis congue diam commodo. Donec eleifend ante sed nulla dapibus convallis. Ut cursus aliquam neque, vel porttitor tellus interdum ut. Sed pharetra lacinia adipiscing. In tristique tristique felis non tincidunt. Nulla auctor mauris a velit cursus ultricies. In at libero quis justo consectetur laoreet. Nullam id ultrices n [...]
+
+Nulla in convallis tellus. Proin tincidunt suscipit vulputate. Suspendisse potenti. Nullam tristique justo mi, a tristique ligula. Duis convallis aliquam iaculis. Nulla dictum fringilla congue. Suspendisse ac leo lectus, ac aliquam justo. Ut porttitor commodo mi sed luctus. Nulla at enim lorem. Nunc eu justo sapien, a blandit odio. Curabitur faucibus sollicitudin dolor, id lacinia sem auctor in. Donec varius nunc at lectus sagittis nec luctus arcu pharetra. Nunc sed metus justo. Cras vel [...]
+
+In et dolor vitae orci adipiscing congue. Aliquam gravida nibh at nisl gravida molestie. Curabitur a bibendum sapien. Aliquam tincidunt, nulla nec pretium lobortis, odio augue tincidunt arcu, a lobortis odio sem ut purus. Donec accumsan mattis nunc vitae lacinia. Suspendisse potenti. Integer commodo nisl quis nibh interdum non fringilla dui sodales. Class aptent taciti sociosqu ad litora torquent per conubia nostra, per inceptos himenaeos. In hac habitasse platea dictumst. Etiam ullamcor [...]
+
+In a magna nisi, a ultricies massa. Donec elit neque, viverra non tempor quis, fringilla in metus. Integer odio odio, euismod vitae mollis sed, sodales eget libero. Donec nec massa in felis ornare pharetra at nec tellus. Nunc lorem dolor, pretium vel auctor in, volutpat vitae felis. Maecenas rhoncus, orci vel blandit euismod, turpis erat tincidunt ante, elementum adipiscing nisl urna in nisi. Phasellus sagittis, enim sed accumsan consequat, urna augue lobortis erat, non malesuada quam me [...]
+
+Pellentesque sed risus a ante vulputate lobortis sit amet eu nisl. Suspendisse ut eros mi, a rhoncus lacus. Curabitur fermentum vehicula tellus, a ornare mi condimentum vel. Integer molestie volutpat viverra. Integer posuere euismod venenatis. Proin ac mauris sed nulla pharetra porttitor. Duis vel dui in risus sodales auctor sit amet non enim. Maecenas mollis lacus at ligula faucibus sodales. Cras vel neque arcu. Sed tincidunt tortor pretium nisi interdum quis dictum arcu laoreet. Morbi [...]
+
+Donec nec nulla urna, ac sagittis lectus. Suspendisse non elit sed mi auctor facilisis vitae et lectus. Fusce ac vulputate mauris. Morbi condimentum ultrices metus, et accumsan purus malesuada at. Maecenas lobortis ante sed massa dictum vitae venenatis elit commodo. Proin tellus eros, adipiscing sed dignissim vitae, tempor eget ante. Aenean id tellus nec magna cursus pharetra vitae vel enim. Morbi vestibulum pharetra est in vulputate. Aliquam vitae metus arcu, id aliquet nulla. Phasellus [...]
+
+Donec mi enim, laoreet pulvinar mollis eu, malesuada viverra nunc. In vitae metus vitae neque tempor dapibus. Maecenas tincidunt purus a felis aliquam placerat. Nulla facilisi. Suspendisse placerat pharetra mattis. Integer tempor malesuada justo at tempus. Maecenas vehicula lorem a sapien bibendum vel iaculis risus feugiat. Pellentesque diam erat, dapibus et pellentesque quis, molestie ut massa. Vivamus iaculis interdum massa id bibendum. Quisque ut mauris dui, sit amet varius elit. Vest [...]
+
+Sed in metus nulla. Praesent nec adipiscing sapien. Donec laoreet, velit non rutrum vestibulum, ligula neque adipiscing turpis, at auctor sapien elit ut massa. Nullam aliquam, enim vel posuere rutrum, justo erat laoreet est, vel fringilla lacus nisi non lectus. Etiam lectus nunc, laoreet et placerat at, venenatis quis libero. Praesent in placerat elit. Class aptent taciti sociosqu ad litora torquent per conubia nostra, per inceptos himenaeos. Pellentesque fringilla augue eu nibh placerat [...]
+
+Nulla nec felis elit. Nullam in ipsum in ipsum consequat fringilla quis vel tortor. Phasellus non massa nisi, sit amet aliquam urna. Sed fermentum nibh vitae lacus tincidunt nec tincidunt massa bibendum. Etiam elit dui, facilisis sit amet vehicula nec, iaculis at sapien. Ut at massa id dui ultrices volutpat ut ac libero. Fusce ipsum mi, bibendum a lacinia et, pulvinar eget mauris. Proin faucibus urna ut lorem elementum vulputate. Duis quam leo, malesuada non euismod ut, blandit facilisis [...]
+
+Nulla a turpis quis sapien commodo dignissim eu quis justo. Maecenas eu lorem odio, ut hendrerit velit. Cum sociis natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Proin facilisis porttitor ullamcorper. Praesent mollis dignissim massa, laoreet aliquet velit pellentesque non. Nunc facilisis convallis tristique. Mauris porttitor ante at tellus convallis placerat. Morbi aliquet nisi ac nisl pulvinar id dictum nisl mollis. Sed ornare sem et risus placerat lobortis i [...]
+
+Duis pretium ultrices mattis. Nam euismod risus a erat lacinia bibendum. Morbi massa tortor, consectetur id eleifend id, pellentesque vel tortor. Praesent urna lorem, porttitor at condimentum vitae, luctus eget elit. Maecenas fringilla quam convallis est hendrerit viverra. Etiam vehicula, sapien non pulvinar adipiscing, nisi massa vestibulum est, id interdum mauris velit eu est. Vestibulum est arcu, facilisis at ultricies non, vulputate id sapien. Vestibulum ipsum metus, pharetra nec pel [...]
+
+Nam dignissim, nisl eget consequat euismod, sem lectus auctor orci, ut porttitor lacus dui ac neque. In hac habitasse platea dictumst. Fusce egestas porta facilisis. In hac habitasse platea dictumst. Mauris cursus rhoncus risus ac euismod. Quisque vitae risus a tellus venenatis convallis. Curabitur laoreet sapien eu quam luctus lobortis. Vivamus sollicitudin sodales dolor vitae sodales. Suspendisse pharetra laoreet aliquet. Maecenas ullamcorper orci vel tortor luctus iaculis ut vitae met [...]
+
+In sed feugiat eros. Donec bibendum ullamcorper diam, eu faucibus mauris dictum sed. Duis tincidunt justo in neque accumsan dictum. Maecenas in rutrum sapien. Ut id feugiat lacus. Nulla facilisi. Nunc ac lorem id quam varius cursus a et elit. Aenean posuere libero eu tortor vehicula ut ullamcorper odio consequat. Sed in dignissim dui. Curabitur iaculis tempor quam nec placerat. Aliquam venenatis nibh et justo iaculis lacinia. Pellentesque habitant morbi tristique senectus et netus et mal [...]
+
+Integer sem sem, semper in vestibulum vitae, lobortis quis erat. Duis ante lectus, fermentum sed tempor sit amet, placerat sit amet sem. Mauris congue tincidunt ipsum. Ut viverra, lacus vel varius pharetra, purus enim pulvinar ipsum, non pellentesque enim justo non erat. Fusce ipsum orci, ultrices sed pellentesque at, hendrerit laoreet enim. Nunc blandit mollis pretium. Ut mollis, nulla aliquam sodales vestibulum, libero lorem tempus tortor, a pellentesque nibh elit a ipsum. Phasellus fe [...]
+
+Nunc vel ullamcorper mi. Suspendisse potenti. Nunc et urna a augue scelerisque ultrices non quis mi. In quis porttitor elit. Aenean quis erat nulla, a venenatis tellus. Fusce vestibulum nisi sed leo adipiscing dignissim. Nunc interdum, lorem et lacinia vestibulum, quam est mattis magna, sit amet volutpat elit augue at libero. Cras gravida dui quis velit lobortis condimentum et eleifend ligula. Phasellus ac metus quam, id venenatis mi. Aliquam ut turpis ac tellus dapibus dapibus eu in mi. [...]
+
+Vestibulum semper egestas mauris. Morbi vestibulum sem sem. Aliquam venenatis, felis sed eleifend porta, mauris diam semper arcu, sit amet ultricies est sapien sit amet libero. Vestibulum dui orci, ornare condimentum mollis nec, molestie ac eros. Proin vitae mollis velit. Praesent eget felis mi. Maecenas eu vulputate nisi. Vestibulum varius, arcu in ultricies vestibulum, nibh leo sagittis odio, ut bibendum nisl mi nec diam. Integer at enim feugiat nulla semper bibendum ut a velit. Proin [...]
+
+Sed aliquam mattis quam, in vulputate sapien ultrices in. Pellentesque quis velit sed dui hendrerit cursus. Pellentesque non nunc lacus, a semper metus. Fusce euismod velit quis diam suscipit consequat. Praesent commodo accumsan neque. Proin viverra, ipsum non tristique ultrices, velit velit facilisis lorem, vel rutrum neque eros ac nisi. Suspendisse felis massa, faucibus in volutpat ac, dapibus et odio. Pellentesque id tellus sit amet risus ultricies ullamcorper non nec sapien. Nam plac [...]
+
+Aliquam lorem eros, pharetra nec egestas vitae, mattis nec risus. Mauris arcu massa, sodales eget gravida sed, viverra vitae turpis. Ut ligula urna, euismod ac tincidunt eu, faucibus sed felis. Praesent mollis, ipsum quis rhoncus dignissim, odio sem venenatis nulla, at consequat felis augue vel erat. Nam fermentum feugiat volutpat. Class aptent taciti sociosqu ad litora torquent per conubia nostra, per inceptos himenaeos. Etiam vitae dui in nisi adipiscing ultricies non eu justo. Donec t [...]
+
+Etiam sit amet nibh justo, posuere volutpat nunc. Morbi pellentesque neque in orci volutpat eu scelerisque lorem dictum. Mauris mollis iaculis est, nec sagittis sapien consequat id. Nunc nec malesuada odio. Duis quis suscipit odio. Mauris purus dui, sodales id mattis sit amet, posuere in arcu. Phasellus porta elementum convallis. Maecenas at orci et mi vulputate sollicitudin in in turpis. Pellentesque cursus adipiscing neque sit amet commodo. Fusce ut mi eu lectus porttitor volutpat et n [...]
+
+Curabitur scelerisque eros quis nisl viverra vel ultrices velit vestibulum. Sed lobortis pulvinar sapien ac venenatis. Sed ante nibh, rhoncus eget dictum in, mollis ut nisi. Phasellus facilisis mi non lorem tristique non eleifend sem fringilla. Integer ut augue est. In venenatis tincidunt scelerisque. Etiam ante dui, posuere quis malesuada vitae, malesuada a arcu. Aenean faucibus venenatis sapien, ut facilisis nisi blandit vel. Aenean ac lorem eu sem fermentum placerat. Proin neque purus [...]
+
+Fusce hendrerit porttitor euismod. Donec malesuada egestas turpis, et ultricies felis elementum vitae. Nullam in sem nibh. Nullam ultricies hendrerit justo sit amet lobortis. Sed tincidunt, mauris at ornare laoreet, sapien purus elementum elit, nec porttitor nisl purus et erat. Donec felis nisi, rutrum ullamcorper gravida ac, tincidunt sit amet urna. Proin vel justo vitae eros sagittis bibendum a ut nibh. Phasellus sodales laoreet tincidunt. Maecenas odio massa, condimentum id aliquet ut [...]
+
+Praesent venenatis magna id sem dictum eu vehicula ipsum vulputate. Sed a convallis sapien. Sed justo dolor, rhoncus vel rutrum mattis, sollicitudin ut risus. Nullam sit amet convallis est. Etiam non tincidunt ligula. Fusce suscipit pretium elit at ullamcorper. Quisque sollicitudin, diam id interdum porta, metus ipsum volutpat libero, id venenatis felis orci non velit. Suspendisse potenti. Mauris rutrum, tortor sit amet pellentesque tincidunt, erat quam ultricies odio, id aliquam elit le [...]
+
+Praesent euismod, turpis quis laoreet consequat, neque ante imperdiet quam, ac semper tortor nibh in nulla. Integer scelerisque eros vehicula urna lacinia ac facilisis mauris accumsan. Phasellus at mauris nibh. Curabitur enim ante, rutrum sed adipiscing hendrerit, pellentesque non augue. In hac habitasse platea dictumst. Nam tempus euismod massa a dictum. Donec sit amet justo ac diam ultricies ultricies. Sed tincidunt erat quis quam tempus vel interdum erat rhoncus. In hac habitasse plat [...]
+
+Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vestibulum nisl metus, hendrerit ut laoreet sed, consectetur at purus. Duis interdum congue lobortis. Nullam sed massa porta felis eleifend consequat sit amet nec metus. Aliquam placerat dictum erat at eleifend. Vestibulum libero ante, ullamcorper a porttitor suscipit, accumsan vel nisi. Donec et magna neque. Nam elementum ultrices justo, eget sollicitudin sapien imperdiet eget. Nullam auctor dictum nunc, at feugiat odio vestibulum [...]
+
+In sed eros augue, non rutrum odio. Etiam vitae dui neque, in tristique massa. Vestibulum ante ipsum primis in faucibus orci luctus et ultrices posuere cubilia Curae; Maecenas dictum elit at lectus tempor non pharetra nisl hendrerit. Sed sed quam eu lectus ultrices malesuada tincidunt a est. Nam vel eros risus. Maecenas eros elit, blandit fermentum tempor eget, lobortis id diam. Vestibulum lacinia lacus vitae magna volutpat eu dignissim eros convallis. Vivamus ac velit tellus, a congue n [...]
+
+Aliquam imperdiet tellus posuere justo vehicula sed vestibulum ante tristique. Fusce feugiat faucibus purus nec molestie. Nulla tempor neque id magna iaculis quis sollicitudin eros semper. Praesent viverra sagittis luctus. Morbi sit amet magna sed odio gravida varius. Ut nisi libero, vulputate feugiat pretium tempus, egestas sit amet justo. Pellentesque consequat tempor nisi in lobortis. Sed fermentum convallis dui ac sollicitudin. Integer auctor augue eget tellus tempus fringilla. Proin [...]
+
+Pellentesque habitant morbi tristique senectus et netus et malesuada fames ac turpis egestas. Aliquam ultrices erat non turpis auctor id ornare mauris sagittis. Quisque porttitor, tellus ut convallis sagittis, mi libero feugiat tellus, rhoncus placerat ipsum tortor id risus. Donec tincidunt feugiat leo. Cras id mi neque, eu malesuada eros. Ut molestie magna quis libero placerat malesuada. Aliquam erat volutpat. Aliquam non mauris lorem, in adipiscing metus. Donec eget ipsum in elit commo [...]
+
+Phasellus suscipit, tortor eu varius fringilla, sapien magna egestas risus, ut suscipit dui mauris quis velit. Cras a sapien quis sapien hendrerit tristique a sit amet elit. Pellentesque dui arcu, malesuada et sodales sit amet, dapibus vel quam. Sed non adipiscing ligula. Ut vulputate purus at nisl posuere sodales. Maecenas diam velit, tincidunt id mattis eu, aliquam ac nisi. Maecenas pretium, augue a sagittis suscipit, leo ligula eleifend dolor, mollis feugiat odio augue non eros. Pelle [...]
+
+Duis suscipit pellentesque pellentesque. Praesent porta lobortis cursus. Quisque sagittis velit non tellus bibendum at sollicitudin lacus aliquet. Sed nibh risus, blandit a aliquet eget, vehicula et est. Suspendisse facilisis bibendum aliquam. Fusce consectetur convallis erat, eget mollis diam fermentum sollicitudin. Quisque tincidunt porttitor pretium. Nullam id nisl et urna vulputate dapibus. Donec quis lorem urna. Quisque id justo nec nunc blandit convallis. Nunc volutpat, massa solli [...]
+
+Morbi ultricies diam eget massa posuere lobortis. Aliquam volutpat pellentesque enim eu porttitor. Donec lacus felis, consectetur a pretium vitae, bibendum non enim. Pellentesque habitant morbi tristique senectus et netus et malesuada fames ac turpis egestas. Etiam ut nibh a quam pellentesque auctor ut id velit. Duis lacinia justo eget mi placerat bibendum. Cum sociis natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Donec velit tortor, tempus nec tristique id, a [...]
+
+Quisque interdum tellus ac ante posuere ut cursus lorem egestas. Nulla facilisi. Aenean sed massa nec nisi scelerisque vulputate. Etiam convallis consectetur iaculis. Maecenas ac purus ut ante dignissim auctor ac quis lorem. Pellentesque suscipit tincidunt orci. Fusce aliquam dapibus orci, at bibendum ipsum adipiscing eget. Morbi pellentesque hendrerit quam, nec placerat urna vulputate sed. Quisque vel diam lorem. Praesent id diam quis enim elementum rhoncus sagittis eget purus. Quisque [...]
+
+Ut id augue id dolor luctus euismod et quis velit. Maecenas enim dolor, tempus sit amet hendrerit eu, faucibus vitae neque. Proin sit amet varius elit. Proin varius felis ullamcorper purus dignissim consequat. Cras cursus tempus eros. Nunc ultrices venenatis ullamcorper. Aliquam et feugiat tellus. Phasellus sit amet vestibulum elit. Phasellus ac purus lacus, et accumsan eros. Morbi ultrices, purus a porta sodales, odio metus posuere neque, nec elementum risus turpis sit amet magna. Sed e [...]
+
+Phasellus viverra iaculis placerat. Nulla consequat dolor sit amet erat dignissim posuere. Nulla lacinia augue vitae mi tempor gravida. Phasellus non tempor tellus. Quisque non enim semper tortor sagittis facilisis. Aliquam urna felis, egestas at posuere nec, aliquet eu nibh. Praesent sed vestibulum enim. Mauris iaculis velit dui, et fringilla enim. Nulla nec nisi orci. Sed volutpat, justo eget fringilla adipiscing, nisl nulla condimentum libero, sed sodales est est et odio. Cras ipsum d [...]
+
+Ut malesuada molestie eleifend. Curabitur id enim dui, eu tincidunt nibh. Mauris sit amet ante leo. Duis turpis ipsum, bibendum sed mattis sit amet, accumsan quis dolor. Vestibulum ante ipsum primis in faucibus orci luctus et ultrices posuere cubilia Curae; Aenean a imperdiet metus. Quisque sollicitudin felis id neque tempor scelerisque. Donec at orci felis. Vivamus tempus convallis auctor. Donec interdum euismod lobortis. Sed at lacus nec odio dignissim mollis. Sed sapien orci, porttito [...]
+
+Suspendisse egestas, sapien sit amet blandit scelerisque, nulla arcu tristique dui, a porta justo quam vitae arcu. In metus libero, bibendum non volutpat ut, laoreet vel turpis. Nunc faucibus velit eu ipsum commodo nec iaculis eros volutpat. Vivamus congue auctor elit sed suscipit. Duis commodo, libero eu vestibulum feugiat, leo mi dapibus tellus, in placerat nisl dui at est. Vestibulum viverra tristique lorem, ornare egestas erat rutrum a. Nullam at augue massa, ut consectetur ipsum. Pe [...]
+
+Vivamus in odio a nisi dignissim rhoncus in in lacus. Donec et nisl tortor. Donec sagittis consequat mi, vel placerat tellus convallis id. Aliquam facilisis rutrum nisl sed pretium. Donec et lacinia nisl. Aliquam erat volutpat. Curabitur ac pulvinar tellus. Nullam varius lobortis porta. Cras dapibus, ligula ut porta ultricies, leo lacus viverra purus, quis mollis urna risus eu leo. Nunc malesuada consectetur purus, vel auctor lectus scelerisque posuere. Maecenas dui massa, vestibulum bib [...]
+
+Praesent sed ipsum urna. Praesent sagittis varius magna, id commodo dolor malesuada ac. Pellentesque habitant morbi tristique senectus et netus et malesuada fames ac turpis egestas. Quisque sit amet nunc eu sem ornare tempor. Mauris id dolor nec erat convallis porta in lobortis nisi. Curabitur hendrerit rhoncus tortor eu hendrerit. Pellentesque eu ante vel elit luctus eleifend quis viverra nulla. Suspendisse odio diam, euismod eu porttitor molestie, sollicitudin sit amet nulla. Sed ante [...]
+
+Etiam quis augue in tellus consequat eleifend. Aenean dignissim congue felis id elementum. Duis fringilla varius ipsum, nec suscipit leo semper vel. Ut sollicitudin, orci a tincidunt accumsan, diam lectus laoreet lacus, vel fermentum quam est vel eros. Aliquam fringilla sapien ac sapien faucibus convallis. Aliquam id nunc eu justo consequat tincidunt. Quisque nec nisl dui. Phasellus augue lectus, varius vitae auctor vel, rutrum at risus. Vivamus lacinia leo quis neque ultrices nec elemen [...]
+
+Curabitur sapien lorem, mollis ut accumsan non, ultricies et metus. Curabitur vel lorem quis sapien fringilla laoreet. Morbi id urna ac orci elementum blandit eget volutpat neque. Pellentesque sem odio, iaculis eu pharetra vitae, cursus in quam. Nulla molestie ligula id massa luctus et pulvinar nisi pulvinar. Nunc fermentum augue a lacus fringilla rhoncus porttitor erat dictum. Nunc sit amet tellus et dui viverra auctor euismod at nisl. In sed congue magna. Proin et tortor ut augue place [...]
+
+Etiam in auctor urna. Fusce ultricies molestie convallis. In hac habitasse platea dictumst. Vestibulum ante ipsum primis in faucibus orci luctus et ultrices posuere cubilia Curae; Mauris iaculis lorem faucibus purus gravida at convallis turpis sollicitudin. Suspendisse at velit lorem, a fermentum ipsum. Etiam condimentum, dui vel condimentum elementum, sapien sem blandit sapien, et pharetra leo neque et lectus. Nunc viverra urna iaculis augue ultrices ac porttitor lacus dignissim. Aliqua [...]
+
+Mauris aliquet urna eget lectus adipiscing at congue turpis consequat. Vivamus tincidunt fermentum risus et feugiat. Nulla molestie ullamcorper nibh sed facilisis. Phasellus et cursus purus. Nam cursus, dui dictum ultrices viverra, erat risus varius elit, eu molestie dui eros quis quam. Aliquam et ante neque, ac consectetur dui. Donec condimentum erat id elit dictum sed accumsan leo sagittis. Proin consequat congue risus, vel tincidunt leo imperdiet eu. Vestibulum malesuada turpis eu met [...]
+
+Pellentesque id molestie nisl. Maecenas et lectus at justo molestie viverra sit amet sit amet ligula. Nullam non porttitor magna. Quisque elementum arcu cursus tortor rutrum lobortis. Morbi sit amet lectus vitae enim euismod dignissim eget at neque. Vivamus consequat vehicula dui, vitae auctor augue dignissim in. In tempus sem quis justo tincidunt sit amet auctor turpis lobortis. Pellentesque non est nunc. Vestibulum mollis fringilla interdum. Maecenas ipsum dolor, pharetra id tristique [...]
+
+Donec vitae pretium nibh. Maecenas bibendum bibendum diam in placerat. Ut accumsan, mi vitae vestibulum euismod, nunc justo vulputate nisi, non placerat mi urna et diam. Maecenas malesuada lorem ut arcu mattis mollis. Nulla facilisi. Donec est leo, bibendum eu pulvinar in, cursus vel metus. Aliquam erat volutpat. Nullam feugiat porttitor neque in vulputate. Quisque nec mi eu magna consequat cursus non at arcu. Etiam risus metus, sollicitudin et ultrices at, tincidunt sed nunc. Sed eget s [...]
+
+Curabitur ac fermentum quam. Morbi eu eros sapien, vitae tempus dolor. Mauris vestibulum blandit enim ut venenatis. Aliquam egestas, eros at consectetur tincidunt, lorem augue iaculis est, nec mollis felis arcu in nunc. Sed in odio sed libero pellentesque volutpat vitae a ante. Morbi commodo volutpat tellus, ut viverra purus placerat fermentum. Integer iaculis facilisis arcu, at gravida lorem bibendum at. Aenean id eros eget est sagittis convallis sed et dui. Donec eu pulvinar tellus. Nu [...]
+
+Nulla commodo odio justo. Pellentesque non ornare diam. In consectetur sapien ac nunc sagittis malesuada. Morbi ullamcorper tempor erat nec rutrum. Duis ut commodo justo. Cras est orci, consectetur sed interdum sed, scelerisque sit amet nulla. Vestibulum justo nulla, pellentesque a tempus et, dapibus et arcu. Lorem ipsum dolor sit amet, consectetur adipiscing elit. Morbi tristique, eros nec congue adipiscing, ligula sem rhoncus felis, at ornare tellus mauris ac risus. Vestibulum ante ips [...]
+
+In nec tempor risus. In faucibus nisi eget diam dignissim consequat. Donec pulvinar ante nec enim mattis rutrum. Vestibulum leo augue, molestie nec dapibus in, dictum at enim. Integer aliquam, lorem eu vulputate lacinia, mi orci tempor enim, eget mattis ligula magna a magna. Praesent sed erat ut tortor interdum viverra. Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nulla facilisi. Maecenas sit amet lectus lacus. Nunc vitae purus id ligula laoreet condimentum. Duis auctor torto [...]
+
+Curabitur velit arcu, pretium porta placerat quis, varius ut metus. Vestibulum vulputate tincidunt justo, vitae porttitor lectus imperdiet sit amet. Vivamus enim dolor, sollicitudin ut semper non, ornare ornare dui. Aliquam tempor fermentum sapien eget condimentum. Curabitur laoreet bibendum ante, in euismod lacus lacinia eu. Pellentesque habitant morbi tristique senectus et netus et malesuada fames ac turpis egestas. Suspendisse potenti. Sed at libero eu tortor tempus scelerisque. Nulla [...]
+
+Nulla varius, nisi eget condimentum semper, metus est dictum odio, vel mattis risus est sed velit. Cum sociis natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Nunc non est nec tellus ultricies mattis ut eget velit. Integer condimentum ante id lorem blandit lacinia. Donec vel tortor augue, in condimentum nisi. Pellentesque pellentesque nulla ut nulla porttitor quis sodales enim rutrum. Sed augue risus, euismod a aliquet at, vulputate non libero. Nullam nibh odio, [...]
+
+Vivamus at fringilla eros. Vivamus at nisl id massa commodo feugiat quis non massa. Morbi tellus urna, auctor sit amet elementum sed, rutrum non lectus. Nulla feugiat dui in sapien ornare et imperdiet est ornare. Pellentesque habitant morbi tristique senectus et netus et malesuada fames ac turpis egestas. Vestibulum semper rutrum tempor. Sed in felis nibh, sed aliquam enim. Curabitur ut quam scelerisque velit placerat dictum. Donec eleifend vehicula purus, eu vestibulum sapien rutrum eu. [...]
+
+Maecenas ipsum neque, auctor quis lacinia vitae, euismod ac orci. Donec molestie massa consequat est porta ac porta purus tincidunt. Nam bibendum leo nec lacus mollis non condimentum dolor rhoncus. Nulla ac volutpat lorem. Nullam erat purus, convallis eget commodo id, varius quis augue. Nullam aliquam egestas mi, vel suscipit nisl mattis consequat. Quisque vel egestas sapien. Nunc lorem velit, convallis nec laoreet et, aliquet eget massa. Nam et nibh ac dui vehicula aliquam quis eu augue [...]
+
+Cras consectetur ante eu turpis placerat sollicitudin. Mauris et lacus tortor, eget pharetra velit. Donec accumsan ultrices tempor. Donec at nibh a elit condimentum dapibus. Integer sit amet vulputate ante. Suspendisse potenti. In sodales laoreet massa vitae lacinia. Morbi vel lacus feugiat arcu vulputate molestie. Aliquam massa magna, ullamcorper accumsan gravida quis, rhoncus pulvinar nulla. Praesent sit amet ipsum diam, sit amet lacinia neque. In et sapien augue. Etiam enim elit, ultr [...]
+
+Proin et egestas neque. Praesent et ipsum dolor. Nunc non varius nisl. Fusce in tortor nisi. Maecenas convallis neque in ligula blandit quis vehicula leo mollis. Pellentesque sagittis blandit leo, dapibus pellentesque leo ultrices ac. Curabitur ac egestas libero. Donec pretium pharetra pretium. Fusce imperdiet, turpis eu aliquam porta, ante elit eleifend risus, luctus auctor arcu ante ut nunc. Vivamus in leo felis, vitae eleifend lacus. Donec tempus aliquam purus porttitor tristique. Sus [...]
diff --git a/test/elixir/test/data/lorem_b64.txt b/test/elixir/test/data/lorem_b64.txt
new file mode 100644
index 0000000..8a21d79
--- /dev/null
+++ b/test/elixir/test/data/lorem_b64.txt
@@ -0,0 +1 @@
+TG9yZW0gaXBzdW0gZG9sb3Igc2l0IGFtZXQsIGNvbnNlY3RldHVyIGFkaXBpc2NpbmcgZWxpdC4gUGhhc2VsbHVzIG51bmMgc2FwaWVuLCBwb3J0YSBpZCBwZWxsZW50ZXNxdWUgYXQsIGVsZW1lbnR1bSBldCBmZWxpcy4gQ3VyYWJpdHVyIGNvbmRpbWVudHVtIGFudGUgaW4gbWV0dXMgaWFjdWxpcyBxdWlzIGNvbmd1ZSBkaWFtIGNvbW1vZG8uIERvbmVjIGVsZWlmZW5kIGFudGUgc2VkIG51bGxhIGRhcGlidXMgY29udmFsbGlzLiBVdCBjdXJzdXMgYWxpcXVhbSBuZXF1ZSwgdmVsIHBvcnR0aXRvciB0ZWxsdXMgaW50ZXJkdW0gdXQuIFNlZCBwaGFyZXRyYSBsYWNpbmlhIGFkaXBpc2NpbmcuIEluIHRyaXN0aXF1ZSB0cmlzdGlxdWUgZmVsaXMgbm9u [...]
\ No newline at end of file
diff --git a/test/elixir/test/replication_test.exs b/test/elixir/test/replication_test.exs
new file mode 100644
index 0000000..a9e11e5
--- /dev/null
+++ b/test/elixir/test/replication_test.exs
@@ -0,0 +1,1711 @@
+defmodule ReplicationTest do
+ use CouchTestCase
+
+ @moduledoc """
+  Test CouchDB Replication Behavior
+  This is a port of the replication.js suite
+ """
+
+ # TODO: Parameterize these
+ @admin_account "adm:pass"
+ @db_pairs_prefixes [
+ {"local-to-local", "", ""},
+ {"remote-to-local", "http://localhost:15984/", ""},
+ {"local-to-remote", "", "http://localhost:15984/"},
+ {"remote-to-remote", "http://localhost:15984/", "http://localhost:15984/"}
+ ]
+
+ # This should probably go into `make elixir` like what
+ # happens for JavaScript tests.
+ @moduletag config: [{"replicator", "startup_jitter", "0"}]
+
+ test "source database does not exist" do
+ name = random_db_name()
+ check_not_found(name <> "_src", name <> "_tgt")
+ end
+
+ test "source database not found with path - COUCHDB-317" do
+ name = random_db_name()
+ check_not_found(name <> "_src", name <> "_tgt")
+ end
+
+ test "source database not found with host" do
+ name = random_db_name()
+ url = "http://localhost:15984/" <> name <> "_src"
+ check_not_found(url, name <> "_tgt")
+ end
+
+ def check_not_found(src, tgt) do
+ body = %{:source => src, :target => tgt}
+ resp = Couch.post("/_replicate", body: body)
+ assert resp.body["error"] == "db_not_found"
+ end
+
+ test "replicating attachment without conflict - COUCHDB-885" do
+ name = random_db_name()
+ src_db_name = name <> "_src"
+ tgt_db_name = name <> "_tgt"
+
+ create_db(src_db_name)
+ create_db(tgt_db_name)
+
+ doc = %{"_id" => "doc1"}
+ [doc] = save_docs(src_db_name, [doc])
+
+ result = replicate(src_db_name, "http://localhost:15984/" <> tgt_db_name)
+ assert result["ok"]
+ assert is_list(result["history"])
+ history = Enum.at(result["history"], 0)
+ assert history["docs_written"] == 1
+ assert history["docs_read"] == 1
+ assert history["doc_write_failures"] == 0
+
+ doc = Map.put(doc, "_attachments", %{
+ "hello.txt" => %{
+ "content_type" => "text/plain",
+ "data" => "aGVsbG8gd29ybGQ=" # base64:encode("hello world")
+ },
+ "foo.dat" => %{
+ "content_type" => "not/compressible",
+ "data" => "aSBhbSBub3QgZ3ppcGVk" # base64:encode("i am not gziped")
+ }
+ })
+ [doc] = save_docs(src_db_name, [doc])
+
+ result = replicate(src_db_name, "http://localhost:15984/" <> tgt_db_name)
+
+ assert result["ok"]
+ assert is_list(result["history"])
+ assert length(result["history"]) == 2
+ history = Enum.at(result["history"], 0)
+ assert history["docs_written"] == 1
+ assert history["docs_read"] == 1
+ assert history["doc_write_failures"] == 0
+
+ query = %{
+ :conflicts => true,
+ :deleted_conflicts => true,
+ :attachments => true,
+ :att_encoding_info => true
+ }
+ opts = [headers: ["Accept": "application/json"], query: query]
+ resp = Couch.get("/#{tgt_db_name}/#{doc["_id"]}", opts)
+    assert HTTPotion.Response.success?(resp)
+    assert is_map(resp.body)
+    refute Map.has_key?(resp.body, "_conflicts")
+    refute Map.has_key?(resp.body, "_deleted_conflicts")
+
+ atts = resp.body["_attachments"]
+
+ assert atts["hello.txt"]["content_type"] == "text/plain"
+ assert atts["hello.txt"]["data"] == "aGVsbG8gd29ybGQ="
+ assert atts["hello.txt"]["encoding"] == "gzip"
+
+ assert atts["foo.dat"]["content_type"] == "not/compressible"
+ assert atts["foo.dat"]["data"] == "aSBhbSBub3QgZ3ppcGVk"
+    refute Map.has_key?(atts["foo.dat"], "encoding")
+ end
+
+ test "replication cancellation" do
+ name = random_db_name()
+ src_db_name = name <> "_src"
+ tgt_db_name = name <> "_tgt"
+
+ create_db(src_db_name)
+ create_db(tgt_db_name)
+
+ save_docs(src_db_name, make_docs(1..6))
+
+ repl_body = %{:continuous => true, :create_target => true}
+ repl_src = "http://127.0.0.1:15984/" <> src_db_name
+ result = replicate(repl_src, tgt_db_name, body: repl_body)
+
+ assert result["ok"]
+ assert is_binary(result["_local_id"])
+ repl_id = result["_local_id"]
+
+ task = get_task(repl_id, 3_000)
+ assert is_map(task)
+
+ assert task["replication_id"] == repl_id
+    repl_body = %{
+      "replication_id" => repl_id,
+      "cancel" => true
+    }
+ result = Couch.post("/_replicate", body: repl_body)
+ assert result.status_code == 200
+
+ wait_for_repl_stop(repl_id)
+
+    assert get_task(repl_id, 0) == nil
+
+ result = Couch.post("/_replicate", body: repl_body)
+ assert result.status_code == 404
+ end
+
+ @tag user: [name: "joe", password: "erly", roles: ["erlanger"]]
+ test "unauthorized replication cancellation", ctx do
+ name = random_db_name()
+ src_db_name = name <> "_src"
+ tgt_db_name = name <> "_tgt"
+
+ create_db(src_db_name)
+ create_db(tgt_db_name)
+
+ save_docs(src_db_name, make_docs(1..6))
+
+ repl_src = "http://localhost:15984/" <> src_db_name
+ repl_body = %{"continuous" => true}
+ result = replicate(repl_src, tgt_db_name, body: repl_body)
+
+ assert result["ok"]
+ assert is_binary(result["_local_id"])
+ repl_id = result["_local_id"]
+
+ task = get_task(repl_id, 5_000)
+ assert is_map(task)
+
+ sess = Couch.login(ctx[:userinfo])
+ resp = Couch.Session.get(sess, "/_session")
+ assert resp.body["ok"]
+ assert resp.body["userCtx"]["name"] == "joe"
+
+    repl_body = %{
+      "replication_id" => repl_id,
+      "cancel" => true
+    }
+ resp = Couch.Session.post(sess, "/_replicate", body: repl_body)
+ assert resp.status_code == 401
+ assert resp.body["error"] == "unauthorized"
+
+ assert Couch.Session.logout(sess).body["ok"]
+
+ resp = Couch.post("/_replicate", body: repl_body)
+ assert resp.status_code == 200
+ end
+
+ Enum.each(@db_pairs_prefixes, fn {name, src_prefix, tgt_prefix} ->
+ @src_prefix src_prefix
+ @tgt_prefix tgt_prefix
+
+    test "simple replication - #{name}" do
+ run_simple_repl(@src_prefix, @tgt_prefix)
+ end
+
+ test "replicate with since_seq - #{name}" do
+ run_since_seq_repl(@src_prefix, @tgt_prefix)
+ end
+
+ test "validate_doc_update failure replications - #{name}" do
+ run_vdu_repl(@src_prefix, @tgt_prefix)
+ end
+
+ test "create_target filter option - #{name}" do
+ run_create_target_repl(@src_prefix, @tgt_prefix)
+ end
+
+ test "filtered replications - #{name}" do
+ run_filtered_repl(@src_prefix, @tgt_prefix)
+ end
+
+ test "replication restarts after filter change - COUCHDB-892 - #{name}" do
+ run_filter_changed_repl(@src_prefix, @tgt_prefix)
+ end
+
+ test "replication by doc ids - #{name}" do
+ run_by_id_repl(@src_prefix, @tgt_prefix)
+ end
+
+ test "continuous replication - #{name}" do
+ run_continuous_repl(@src_prefix, @tgt_prefix)
+ end
+
+ @tag config: [
+ {"attachments", "compression_level", "8"},
+ {"attachments", "compressible_types", "text/*"}
+ ]
+ test "compressed attachment replication - #{name}" do
+ run_compressed_att_repl(@src_prefix, @tgt_prefix)
+ end
+
+ @tag user: [name: "joe", password: "erly", roles: ["erlanger"]]
+ test "non-admin user on target - #{name}", ctx do
+ run_non_admin_target_user_repl(@src_prefix, @tgt_prefix, ctx)
+ end
+
+ @tag user: [name: "joe", password: "erly", roles: ["erlanger"]]
+ test "non-admin or reader user on source - #{name}", ctx do
+ run_non_admin_or_reader_source_user_repl(@src_prefix, @tgt_prefix, ctx)
+ end
+ end)
+
+ def run_simple_repl(src_prefix, tgt_prefix) do
+ base_db_name = random_db_name()
+ src_db_name = base_db_name <> "_src"
+ tgt_db_name = base_db_name <> "_tgt"
+
+ create_db(src_db_name)
+ create_db(tgt_db_name)
+
+ on_exit(fn -> delete_db(src_db_name) end)
+ on_exit(fn -> delete_db(tgt_db_name) end)
+
+ att1_data = get_att1_data()
+ att2_data = get_att2_data()
+
+ ddoc = %{
+ "_id" => "_design/foo",
+ "language" => "javascript",
+ "value" => "ddoc"
+ }
+ docs = make_docs(1..20) ++ [ddoc]
+ docs = save_docs(src_db_name, docs)
+
+ docs = for doc <- docs do
+ if doc["integer"] >= 10 and doc["integer"] < 15 do
+ add_attachment(src_db_name, doc, body: att1_data)
+ else
+ doc
+ end
+ end
+
+ result = replicate(src_prefix <> src_db_name, tgt_prefix <> tgt_db_name)
+ assert result["ok"]
+
+ src_info = get_db_info(src_db_name)
+ tgt_info = get_db_info(tgt_db_name)
+
+ assert src_info["doc_count"] == tgt_info["doc_count"]
+
+ assert is_binary(result["session_id"])
+ assert is_list(result["history"])
+ assert length(result["history"]) == 1
+ history = Enum.at(result["history"], 0)
+ assert is_binary(history["start_time"])
+ assert is_binary(history["end_time"])
+ assert history["start_last_seq"] == 0
+ assert history["missing_checked"] == src_info["doc_count"]
+ assert history["missing_found"] == src_info["doc_count"]
+ assert history["docs_read"] == src_info["doc_count"]
+ assert history["docs_written"] == src_info["doc_count"]
+ assert history["doc_write_failures"] == 0
+
+ for doc <- docs do
+ copy = Couch.get!("/#{tgt_db_name}/#{doc["_id"]}").body
+ assert cmp_json(doc, copy)
+
+ if doc["integer"] >= 10 and doc["integer"] < 15 do
+ atts = copy["_attachments"]
+ assert is_map(atts)
+ att = atts["readme.txt"]
+ assert is_map(att)
+ assert att["revpos"] == 2
+ assert String.match?(att["content_type"], ~r/text\/plain/)
+ assert att["stub"]
+
+ resp = Couch.get!("/#{tgt_db_name}/#{copy["_id"]}/readme.txt")
+ assert String.length(resp.body) == String.length(att1_data)
+ assert resp.body == att1_data
+ end
+ end
+
+ # Add one more doc to source and more attachments to existing docs
+ new_doc = %{"_id" => "foo666", "value" => "d"}
+ [new_doc] = save_docs(src_db_name, [new_doc])
+
+ docs = for doc <- docs do
+ if doc["integer"] >= 10 and doc["integer"] < 15 do
+ ctype = "application/binary"
+ opts = [name: "data.dat", body: att2_data, content_type: ctype]
+ add_attachment(src_db_name, doc, opts)
+ else
+ doc
+ end
+ end
+
+ result = replicate(src_prefix <> src_db_name, tgt_prefix <> tgt_db_name)
+ assert result["ok"]
+
+ src_info = get_db_info(src_db_name)
+ tgt_info = get_db_info(tgt_db_name)
+
+ assert tgt_info["doc_count"] == src_info["doc_count"]
+
+ assert is_binary(result["session_id"])
+ assert is_list(result["history"])
+ assert length(result["history"]) == 2
+ history = Enum.at(result["history"], 0)
+ assert history["session_id"] == result["session_id"]
+ assert is_binary(history["start_time"])
+ assert is_binary(history["end_time"])
+ assert history["missing_checked"] == 6
+ assert history["missing_found"] == 6
+ assert history["docs_read"] == 6
+ assert history["docs_written"] == 6
+ assert history["doc_write_failures"] == 0
+
+ copy = Couch.get!("/#{tgt_db_name}/#{new_doc["_id"]}").body
+ assert copy["_id"] == new_doc["_id"]
+ assert copy["value"] == new_doc["value"]
+
+ for i <- 10..14 do
+ doc = Enum.at(docs, i - 1)
+ copy = Couch.get!("/#{tgt_db_name}/#{i}").body
+ assert cmp_json(doc, copy)
+
+ atts = copy["_attachments"]
+ assert is_map(atts)
+ att = atts["readme.txt"]
+      assert is_map(att)
+ assert att["revpos"] == 2
+ assert String.match?(att["content_type"], ~r/text\/plain/)
+ assert att["stub"]
+
+ resp = Couch.get!("/#{tgt_db_name}/#{i}/readme.txt")
+ assert String.length(resp.body) == String.length(att1_data)
+ assert resp.body == att1_data
+
+ att = atts["data.dat"]
+ assert is_map(att)
+ assert att["revpos"] == 3
+ assert String.match?(att["content_type"], ~r/application\/binary/)
+ assert att["stub"]
+
+ resp = Couch.get!("/#{tgt_db_name}/#{i}/data.dat")
+ assert String.length(resp.body) == String.length(att2_data)
+ assert resp.body == att2_data
+ end
+
+ # Test deletion is replicated
+ del_doc = %{
+ "_id" => "1",
+ "_rev" => Enum.at(docs, 0)["_rev"],
+ "_deleted" => true
+ }
+ [del_doc] = save_docs(src_db_name, [del_doc])
+
+ result = replicate(src_prefix <> src_db_name, tgt_prefix <> tgt_db_name)
+ assert result["ok"]
+
+ src_info = get_db_info(src_db_name)
+ tgt_info = get_db_info(tgt_db_name)
+
+ assert tgt_info["doc_count"] == src_info["doc_count"]
+ assert tgt_info["doc_del_count"] == src_info["doc_del_count"]
+ assert tgt_info["doc_del_count"] == 1
+
+ assert is_list(result["history"])
+ assert length(result["history"]) == 3
+ history = Enum.at(result["history"], 0)
+ assert history["missing_checked"] == 1
+ assert history["missing_found"] == 1
+ assert history["docs_read"] == 1
+ assert history["docs_written"] == 1
+ assert history["doc_write_failures"] == 0
+
+ resp = Couch.get("/#{tgt_db_name}/#{del_doc["_id"]}")
+ assert resp.status_code == 404
+
+ resp = Couch.get!("/#{tgt_db_name}/_changes")
+ [change] = Enum.filter(resp.body["results"], &(&1["id"] == del_doc["_id"]))
+ assert change["id"] == del_doc["_id"]
+ assert change["deleted"]
+
+ # Test replicating a conflict
+ doc = Couch.get!("/#{src_db_name}/2").body
+ [doc] = save_docs(src_db_name, [Map.put(doc, :value, "white")])
+
+ copy = Couch.get!("/#{tgt_db_name}/2").body
+ save_docs(tgt_db_name, [Map.put(copy, :value, "black")])
+
+ result = replicate(src_prefix <> src_db_name, tgt_prefix <> tgt_db_name)
+ assert result["ok"]
+
+ src_info = get_db_info(src_db_name)
+ tgt_info = get_db_info(tgt_db_name)
+
+ assert tgt_info["doc_count"] == src_info["doc_count"]
+
+ assert is_list(result["history"])
+ assert length(result["history"]) == 4
+ history = Enum.at(result["history"], 0)
+ assert history["missing_checked"] == 1
+ assert history["missing_found"] == 1
+ assert history["docs_read"] == 1
+ assert history["docs_written"] == 1
+ assert history["doc_write_failures"] == 0
+
+ copy = Couch.get!("/#{tgt_db_name}/2", query: %{:conflicts => true}).body
+ assert String.match?(copy["_rev"], ~r/^2-/)
+ assert is_list(copy["_conflicts"])
+ assert length(copy["_conflicts"]) == 1
+ conflict = Enum.at(copy["_conflicts"], 0)
+ assert String.match?(conflict, ~r/^2-/)
+
+ # Re-replicate updated conflict
+ [doc] = save_docs(src_db_name, [Map.put(doc, :value, "yellow")])
+
+ result = replicate(src_prefix <> src_db_name, tgt_prefix <> tgt_db_name)
+ assert result["ok"]
+
+ src_info = get_db_info(src_db_name)
+ tgt_info = get_db_info(tgt_db_name)
+
+ assert tgt_info["doc_count"] == src_info["doc_count"]
+
+ assert is_list(result["history"])
+ assert length(result["history"]) == 5
+ history = Enum.at(result["history"], 0)
+ assert history["missing_checked"] == 1
+ assert history["missing_found"] == 1
+ assert history["docs_read"] == 1
+ assert history["docs_written"] == 1
+ assert history["doc_write_failures"] == 0
+
+ copy = Couch.get!("/#{tgt_db_name}/2", query: %{:conflicts => true}).body
+ assert String.match?(copy["_rev"], ~r/^3-/)
+ assert is_list(copy["_conflicts"])
+ assert length(copy["_conflicts"]) == 1
+ conflict = Enum.at(copy["_conflicts"], 0)
+ assert String.match?(conflict, ~r/^2-/)
+
+ # Resolve the conflict and re-replicate new revision
+ resolve_doc = %{"_id" => "2", "_rev" => conflict, "_deleted" => true}
+ save_docs(tgt_db_name, [resolve_doc])
+ save_docs(src_db_name, [Map.put(doc, :value, "rainbow")])
+
+ result = replicate(src_prefix <> src_db_name, tgt_prefix <> tgt_db_name)
+ assert result["ok"]
+
+ src_info = get_db_info(src_db_name)
+ tgt_info = get_db_info(tgt_db_name)
+
+ assert tgt_info["doc_count"] == src_info["doc_count"]
+
+ assert is_list(result["history"])
+ assert length(result["history"]) == 6
+ history = Enum.at(result["history"], 0)
+ assert history["missing_checked"] == 1
+ assert history["missing_found"] == 1
+ assert history["docs_read"] == 1
+ assert history["docs_written"] == 1
+ assert history["doc_write_failures"] == 0
+
+ copy = Couch.get!("/#{tgt_db_name}/2", query: %{:conflicts => true}).body
+
+ assert String.match?(copy["_rev"], ~r/^4-/)
+ assert not Map.has_key?(copy, "_conflicts")
+
+ # Test that existing revisions are not replicated
+ src_docs = [
+ %{"_id" => "foo1", "value" => 111},
+ %{"_id" => "foo2", "value" => 222},
+ %{"_id" => "foo3", "value" => 333}
+ ]
+ save_docs(src_db_name, src_docs)
+ save_docs(tgt_db_name, Enum.filter(src_docs, &(&1["_id"] != "foo2")))
+
+ result = replicate(src_prefix <> src_db_name, tgt_prefix <> tgt_db_name)
+ assert result["ok"]
+
+ src_info = get_db_info(src_db_name)
+ tgt_info = get_db_info(tgt_db_name)
+
+ assert tgt_info["doc_count"] == src_info["doc_count"]
+
+ assert is_list(result["history"])
+ assert length(result["history"]) == 7
+ history = Enum.at(result["history"], 0)
+ assert history["missing_checked"] == 3
+ assert history["missing_found"] == 1
+ assert history["docs_read"] == 1
+ assert history["docs_written"] == 1
+ assert history["doc_write_failures"] == 0
+
+ docs = [
+ %{"_id" => "foo4", "value" => 444},
+ %{"_id" => "foo5", "value" => 555}
+ ]
+ save_docs(src_db_name, docs)
+ save_docs(tgt_db_name, docs)
+
+ result = replicate(src_prefix <> src_db_name, tgt_prefix <> tgt_db_name)
+ assert result["ok"]
+
+ src_info = get_db_info(src_db_name)
+ tgt_info = get_db_info(tgt_db_name)
+
+ assert tgt_info["doc_count"] == src_info["doc_count"]
+
+ assert is_list(result["history"])
+ assert length(result["history"]) == 8
+ history = Enum.at(result["history"], 0)
+ assert history["missing_checked"] == 2
+ assert history["missing_found"] == 0
+ assert history["docs_read"] == 0
+ assert history["docs_written"] == 0
+ assert history["doc_write_failures"] == 0
+
+ # Test nothing to replicate
+ result = replicate(src_prefix <> src_db_name, tgt_prefix <> tgt_db_name)
+ assert result["ok"]
+ assert result["no_changes"]
+ end
+
+ def run_since_seq_repl(src_prefix, tgt_prefix) do
+ base_db_name = random_db_name()
+ src_db_name = base_db_name <> "_src"
+ tgt_db_name = base_db_name <> "_tgt"
+ repl_src = src_prefix <> src_db_name
+ repl_tgt = tgt_prefix <> tgt_db_name
+
+ create_db(src_db_name)
+ create_db(tgt_db_name)
+
+ on_exit(fn -> delete_db(src_db_name) end)
+ on_exit(fn -> delete_db(tgt_db_name) end)
+
+ docs = make_docs(1..5)
+ docs = save_docs(src_db_name, docs)
+
+ changes = get_db_changes(src_db_name)["results"]
+ since_seq = Enum.at(changes, 2)["seq"]
+
+ # TODO: In JS we re-fetch _changes with since_seq, is that
+ # really necessary?
+ expected_ids = for change <- Enum.drop(changes, 3) do
+ change["id"]
+ end
+ assert length(expected_ids) == 2
+
+ cancel_replication(repl_src, repl_tgt)
+ result = replicate(repl_src, repl_tgt, body: %{:since_seq => since_seq})
+ cancel_replication(repl_src, repl_tgt)
+
+ assert result["ok"]
+ assert is_list(result["history"])
+ history = Enum.at(result["history"], 0)
+ assert history["missing_checked"] == 2
+ assert history["missing_found"] == 2
+ assert history["docs_read"] == 2
+ assert history["docs_written"] == 2
+ assert history["doc_write_failures"] == 0
+
+ Enum.each(docs, fn doc ->
+ result = Couch.get("/#{tgt_db_name}/#{doc["_id"]}")
+ if Enum.member?(expected_ids, doc["_id"]) do
+ assert result.status_code < 300
+ assert cmp_json(doc, result.body)
+ else
+ assert result.status_code == 404
+ end
+ end)
+ end
+
+ def run_vdu_repl(src_prefix, tgt_prefix) do
+ base_db_name = random_db_name()
+ src_db_name = base_db_name <> "_src"
+ tgt_db_name = base_db_name <> "_tgt"
+ repl_src = src_prefix <> src_db_name
+ repl_tgt = tgt_prefix <> tgt_db_name
+
+ create_db(src_db_name)
+ create_db(tgt_db_name)
+
+ on_exit(fn -> delete_db(src_db_name) end)
+ on_exit(fn -> delete_db(tgt_db_name) end)
+
+ docs = make_docs(1..7)
+ docs = for doc <- docs do
+ if doc["integer"] == 2 do
+ Map.put(doc, "_attachments", %{
+ "hello.txt" => %{
+ :content_type => "text/plain",
+ :data => "aGVsbG8gd29ybGQ=" # base64:encode("hello world")
+ }
+ })
+ else
+ doc
+ end
+ end
+ docs = save_docs(src_db_name, docs)
+
+ ddoc = %{
+ "_id" => "_design/test",
+ "language" => "javascript",
+ "validate_doc_update" => """
+ function(newDoc, oldDoc, userCtx, secObj) {
+ if((newDoc.integer % 2) !== 0) {
+ throw {forbidden: "I only like multiples of 2."};
+ }
+ }
+ """
+ }
+ [_] = save_docs(tgt_db_name, [ddoc])
+
+ result = replicate(repl_src, repl_tgt)
+ assert result["ok"]
+
+ assert is_list(result["history"])
+ history = Enum.at(result["history"], 0)
+ assert history["missing_checked"] == 7
+ assert history["missing_found"] == 7
+ assert history["docs_read"] == 7
+ assert history["docs_written"] == 3
+ assert history["doc_write_failures"] == 4
+
+ for doc <- docs do
+ result = Couch.get("/#{tgt_db_name}/#{doc["_id"]}")
+ if rem(doc["integer"], 2) == 0 do
+ assert result.status_code < 300
+ assert result.body["integer"] == doc["integer"]
+ else
+ assert result.status_code == 404
+ end
+ end
+ end
+
+ def run_create_target_repl(src_prefix, tgt_prefix) do
+ base_db_name = random_db_name()
+ src_db_name = base_db_name <> "_src"
+ tgt_db_name = base_db_name <> "_tgt"
+ repl_src = src_prefix <> src_db_name
+ repl_tgt = tgt_prefix <> tgt_db_name
+
+ create_db(src_db_name)
+
+ on_exit(fn -> delete_db(src_db_name) end)
+ # The target db is created by the replication itself, but still clean it up
+ on_exit(fn -> delete_db(tgt_db_name) end)
+
+ docs = make_docs(1..2)
+ save_docs(src_db_name, docs)
+
+ replicate(repl_src, repl_tgt, body: %{:create_target => true})
+
+ src_info = get_db_info(src_db_name)
+ tgt_info = get_db_info(tgt_db_name)
+
+ assert tgt_info["doc_count"] == src_info["doc_count"]
+
+ src_shards = seq_to_shards(src_info["update_seq"])
+ tgt_shards = seq_to_shards(tgt_info["update_seq"])
+ assert tgt_shards == src_shards
+ end
+
+ def run_filtered_repl(src_prefix, tgt_prefix) do
+ base_db_name = random_db_name()
+ src_db_name = base_db_name <> "_src"
+ tgt_db_name = base_db_name <> "_tgt"
+ repl_src = src_prefix <> src_db_name
+ repl_tgt = tgt_prefix <> tgt_db_name
+
+ create_db(src_db_name)
+ create_db(tgt_db_name)
+
+ on_exit(fn -> delete_db(src_db_name) end)
+ on_exit(fn -> delete_db(tgt_db_name) end)
+
+ docs = make_docs(1..30)
+ ddoc = %{
+ "_id" => "_design/mydesign",
+ "language" => "javascript",
+ "filters" => %{
+ "myfilter" => """
+ function(doc, req) {
+ var modulus = Number(req.query.modulus);
+ var special = req.query.special;
+ return (doc.integer % modulus === 0) || (doc.string === special);
+ }
+ """
+ }
+ }
+
+ [_ | docs] = save_docs(src_db_name, [ddoc | docs])
+
+ repl_body = %{
+ "filter" => "mydesign/myfilter",
+ "query_params" => %{
+ "modulus" => "2",
+ "special" => "7"
+ }
+ }
+
+ result = replicate(repl_src, repl_tgt, body: repl_body)
+ assert result["ok"]
+
+ Enum.each(docs, fn doc ->
+ resp = Couch.get!("/#{tgt_db_name}/#{doc["_id"]}")
+ if(rem(doc["integer"], 2) == 0 || doc["string"] == "7") do
+ assert resp.status_code < 300
+ assert cmp_json(doc, resp.body)
+ else
+ assert resp.status_code == 404
+ end
+ end)
+
+ assert is_list(result["history"])
+ assert length(result["history"]) == 1
+ history = Enum.at(result["history"], 0)
+
+ # We (incorrectly) don't record update sequences for documents
+ # that don't pass the changes feed filter. Historically the last
+ # document to pass was the second-to-last doc, which had an
+ # update sequence of 30. Work applied to avoid conflicts from
+ # duplicate IDs breaking _bulk_docs updates added a sort to that
+ # logic, so now the last document to pass has doc id "8" at
+ # update_seq 29 (only "9" and the design doc come after it).
+ #
+ # The eventual fix is to record the update sequence of the
+ # database itself. BigCouch has some existing work on this for
+ # the clustered case, because with very few documents passing
+ # the filter you otherwise end up rescanning a large portion of
+ # the database.
+ #
+ # In a cluster we can't rely on sequences at all: the same number
+ # can appear more than once (at least for n > 1), and sequences
+ # now embed hashes, so comparing seq == 29 would be a lottery
+ # (and cutting off the hashes is nonsense). Instead of comparing
+ # sequences, the checks above verify that excluded documents did
+ # not make it to the target in any form.
+
+ # 16 => 15 docs with even integer field + 1 doc with string field "7"
+ assert history["missing_checked"] == 16
+ assert history["missing_found"] == 16
+ assert history["docs_read"] == 16
+ assert history["docs_written"] == 16
+ assert history["doc_write_failures"] == 0
+
+ new_docs = make_docs(50..55)
+ new_docs = save_docs(src_db_name, new_docs)
+
+ result = replicate(repl_src, repl_tgt, body: repl_body)
+ assert result["ok"]
+
+ Enum.each(new_docs, fn doc ->
+ resp = Couch.get!("/#{tgt_db_name}/#{doc["_id"]}")
+ if(rem(doc["integer"], 2) == 0) do
+ assert resp.status_code < 300
+ assert cmp_json(doc, resp.body)
+ else
+ assert resp.status_code == 404
+ end
+ end)
+
+ assert is_list(result["history"])
+ assert length(result["history"]) == 2
+ history = Enum.at(result["history"], 0)
+
+ assert history["missing_checked"] == 3
+ assert history["missing_found"] == 3
+ assert history["docs_read"] == 3
+ assert history["docs_written"] == 3
+ assert history["doc_write_failures"] == 0
+ end
+
+ def run_filter_changed_repl(src_prefix, tgt_prefix) do
+ base_db_name = random_db_name()
+ src_db_name = base_db_name <> "_src"
+ tgt_db_name = base_db_name <> "_tgt"
+ repl_src = src_prefix <> src_db_name
+ repl_tgt = tgt_prefix <> tgt_db_name
+
+ create_db(src_db_name)
+ create_db(tgt_db_name)
+
+ on_exit(fn -> delete_db(src_db_name) end)
+ on_exit(fn -> delete_db(tgt_db_name) end)
+
+ filter_fun_1 = """
+ function(doc, req) {
+ if(doc.value < Number(req.query.maxvalue)) {
+ return true;
+ } else {
+ return false;
+ }
+ }
+ """
+
+ filter_fun_2 = """
+ function(doc, req) {
+ return true;
+ }
+ """
+
+ docs = [
+ %{"_id" => "foo1", "value" => 1},
+ %{"_id" => "foo2", "value" => 2},
+ %{"_id" => "foo3", :value => 3},
+ %{"_id" => "foo4", :value => 4}
+ ]
+ ddoc = %{
+ "_id" => "_design/mydesign",
+ :language => "javascript",
+ :filters => %{
+ :myfilter => filter_fun_1
+ }
+ }
+
+ [ddoc | _] = save_docs(src_db_name, [ddoc | docs])
+
+ repl_body = %{
+ :filter => "mydesign/myfilter",
+ :query_params => %{
+ :maxvalue => "3"
+ }
+ }
+ result = replicate(repl_src, repl_tgt, body: repl_body)
+ assert result["ok"]
+
+ assert is_list(result["history"])
+ assert length(result["history"]) == 1
+ history = Enum.at(result["history"], 0)
+ assert history["docs_read"] == 2
+ assert history["docs_written"] == 2
+ assert history["doc_write_failures"] == 0
+
+ resp = Couch.get!("/#{tgt_db_name}/foo1")
+ assert HTTPotion.Response.success?(resp)
+ assert resp.body["value"] == 1
+
+ resp = Couch.get!("/#{tgt_db_name}/foo2")
+ assert HTTPotion.Response.success?(resp)
+ assert resp.body["value"] == 2
+
+ resp = Couch.get!("/#{tgt_db_name}/foo3")
+ assert resp.status_code == 404
+
+ resp = Couch.get!("/#{tgt_db_name}/foo4")
+ assert resp.status_code == 404
+
+ # Replication should start from scratch after the filter's code changed
+ ddoc = Map.put(ddoc, :filters, %{:myfilter => filter_fun_2})
+ [_] = save_docs(src_db_name, [ddoc])
+
+ result = replicate(repl_src, repl_tgt, body: repl_body)
+ assert result["ok"]
+
+ assert is_list(result["history"])
+ assert length(result["history"]) == 1
+ history = Enum.at(result["history"], 0)
+ assert history["docs_read"] == 3
+ assert history["docs_written"] == 3
+ assert history["doc_write_failures"] == 0
+
+ resp = Couch.get!("/#{tgt_db_name}/foo1")
+ assert HTTPotion.Response.success?(resp)
+ assert resp.body["value"] == 1
+
+ resp = Couch.get!("/#{tgt_db_name}/foo2")
+ assert HTTPotion.Response.success?(resp)
+ assert resp.body["value"] == 2
+
+ resp = Couch.get!("/#{tgt_db_name}/foo3")
+ assert HTTPotion.Response.success?(resp)
+ assert resp.body["value"] == 3
+
+ resp = Couch.get!("/#{tgt_db_name}/foo4")
+ assert HTTPotion.Response.success?(resp)
+ assert resp.body["value"] == 4
+
+ resp = Couch.get!("/#{tgt_db_name}/_design/mydesign")
+ assert HTTPotion.Response.success?(resp)
+ end
+
+ def run_by_id_repl(src_prefix, tgt_prefix) do
+ target_doc_ids = [
+ %{
+ :initial => ["1", "2", "10"],
+ :after => [],
+ :conflict_id => "2"
+ },
+ %{
+ :initial => ["1", "2"],
+ :after => ["7"],
+ :conflict_id => "1"
+ },
+ %{
+ :initial => ["1", "foo_666", "10"],
+ :after => ["7"],
+ :conflict_id => "10"
+ },
+ %{
+ :initial => ["_design/foo", "8"],
+ :after => ["foo_5"],
+ :conflict_id => "8"
+ },
+ %{
+ :initial => ["_design%2Ffoo", "8"],
+ :after => ["foo_5"],
+ :conflict_id => "8"
+ },
+ %{
+ :initial => [],
+ :after => ["foo_1000", "_design/foo", "1"],
+ :conflict_id => "1"
+ }
+ ]
+
+ Enum.each(target_doc_ids, fn test_data ->
+ run_by_id_repl_impl(src_prefix, tgt_prefix, test_data)
+ end)
+ end
+
+ def run_by_id_repl_impl(src_prefix, tgt_prefix, test_data) do
+ base_db_name = random_db_name()
+ src_db_name = base_db_name <> "_src"
+ tgt_db_name = base_db_name <> "_tgt"
+ repl_src = src_prefix <> src_db_name
+ repl_tgt = tgt_prefix <> tgt_db_name
+
+ create_db(src_db_name)
+ create_db(tgt_db_name)
+
+ on_exit(fn -> delete_db(src_db_name) end)
+ on_exit(fn -> delete_db(tgt_db_name) end)
+
+ docs = make_docs(1..10)
+ ddoc = %{
+ "_id" => "_design/foo",
+ :language => "javascript",
+ "integer" => 1
+ }
+
+ doc_ids = test_data[:initial]
+ num_missing = Enum.count(doc_ids, fn doc_id ->
+ String.starts_with?(doc_id, "foo_")
+ end)
+ total_replicated = length(doc_ids) - num_missing
+
+ [_ | docs] = save_docs(src_db_name, [ddoc | docs])
+
+ repl_body = %{:doc_ids => doc_ids}
+ result = replicate(repl_src, repl_tgt, body: repl_body)
+ assert result["ok"]
+
+ if(total_replicated == 0) do
+ assert result["no_changes"]
+ else
+ assert is_binary(result["start_time"])
+ assert is_binary(result["end_time"])
+ assert result["docs_read"] == total_replicated
+ assert result["docs_written"] == total_replicated
+ assert result["doc_write_failures"] == 0
+ end
+
+ Enum.each(doc_ids, fn doc_id ->
+ doc_id = URI.decode(doc_id)
+ orig = Couch.get!("/#{src_db_name}/#{doc_id}")
+ copy = Couch.get!("/#{tgt_db_name}/#{doc_id}")
+
+ if(String.starts_with?(doc_id, "foo_")) do
+ assert orig.status_code == 404
+ assert copy.status_code == 404
+ else
+ assert HTTPotion.Response.success?(orig)
+ assert HTTPotion.Response.success?(copy)
+ assert cmp_json(orig.body, copy.body)
+ end
+ end)
+
+ # Be absolutely sure that other docs were not replicated
+ Enum.each(docs, fn doc ->
+ encoded_id = URI.encode_www_form(doc["_id"])
+ copy = Couch.get!("/#{tgt_db_name}/#{doc["_id"]}")
+ is_doc_id = &(Enum.member?(doc_ids, &1))
+ if(is_doc_id.(doc["_id"]) or is_doc_id.(encoded_id)) do
+ assert HTTPotion.Response.success?(copy)
+ else
+ assert copy.status_code == 404
+ end
+ end)
+
+ tgt_info = get_db_info(tgt_db_name)
+ assert tgt_info["doc_count"] == total_replicated
+
+ doc_ids_after = test_data[:after]
+ num_missing_after = Enum.count(doc_ids_after, fn doc_id ->
+ String.starts_with?(doc_id, "foo_")
+ end)
+
+ repl_body = %{:doc_ids => doc_ids_after}
+ result = replicate(repl_src, repl_tgt, body: repl_body)
+ assert result["ok"]
+
+ total_replicated_after = length(doc_ids_after) - num_missing_after
+ if(total_replicated_after == 0) do
+ assert result["no_changes"]
+ else
+ assert is_binary(result["start_time"])
+ assert is_binary(result["end_time"])
+ assert result["docs_read"] == total_replicated_after
+ assert result["docs_written"] == total_replicated_after
+ assert result["doc_write_failures"] == 0
+ end
+
+ Enum.each(doc_ids_after, fn doc_id ->
+ orig = Couch.get!("/#{src_db_name}/#{doc_id}")
+ copy = Couch.get!("/#{tgt_db_name}/#{doc_id}")
+
+ if(String.starts_with?(doc_id, "foo_")) do
+ assert orig.status_code == 404
+ assert copy.status_code == 404
+ else
+ assert HTTPotion.Response.success?(orig)
+ assert HTTPotion.Response.success?(copy)
+ assert cmp_json(orig.body, copy.body)
+ end
+ end)
+
+ # Be absolutely sure that other docs were not replicated
+ all_doc_ids = doc_ids ++ doc_ids_after
+ Enum.each(docs, fn doc ->
+ encoded_id = URI.encode_www_form(doc["_id"])
+ copy = Couch.get!("/#{tgt_db_name}/#{doc["_id"]}")
+ is_doc_id = &(Enum.member?(all_doc_ids, &1))
+ if(is_doc_id.(doc["_id"]) or is_doc_id.(encoded_id)) do
+ assert HTTPotion.Response.success?(copy)
+ else
+ assert copy.status_code == 404
+ end
+ end)
+
+ tgt_info = get_db_info(tgt_db_name)
+ assert tgt_info["doc_count"] == total_replicated + total_replicated_after,
+ "#{inspect test_data}"
+
+ # Update a source document and re-replicate (no conflict introduced)
+ conflict_id = test_data[:conflict_id]
+ doc = Couch.get!("/#{src_db_name}/#{conflict_id}").body
+ assert is_map(doc)
+ doc = Map.put(doc, "integer", 666)
+ [doc] = save_docs(src_db_name, [doc])
+
+ att1 = [
+ name: "readme.txt",
+ body: get_att1_data(),
+ content_type: "text/plain"
+ ]
+ att2 = [
+ name: "data.dat",
+ body: get_att2_data(),
+ content_type: "application/binary"
+ ]
+ doc = add_attachment(src_db_name, doc, att1)
+ doc = add_attachment(src_db_name, doc, att2)
+
+ repl_body = %{:doc_ids => [conflict_id]}
+ result = replicate(repl_src, repl_tgt, body: repl_body)
+ assert result["ok"]
+
+ assert result["docs_read"] == 1
+ assert result["docs_written"] == 1
+ assert result["doc_write_failures"] == 0
+
+ query = %{"conflicts" => "true"}
+ copy = Couch.get!("/#{tgt_db_name}/#{conflict_id}", query: query)
+ assert HTTPotion.Response.success?(copy)
+ assert copy.body["integer"] == 666
+ assert String.starts_with?(copy.body["_rev"], "4-")
+ assert not Map.has_key?(doc, "_conflicts")
+
+ atts = copy.body["_attachments"]
+ assert is_map(atts)
+ assert is_map(atts["readme.txt"])
+ assert atts["readme.txt"]["revpos"] == 3
+ assert String.match?(atts["readme.txt"]["content_type"], ~r/text\/plain/)
+ assert atts["readme.txt"]["stub"]
+
+ att1_data = Couch.get!("/#{tgt_db_name}/#{conflict_id}/readme.txt").body
+ assert String.length(att1_data) == String.length(att1[:body])
+ assert att1_data == att1[:body]
+
+ assert is_map(atts["data.dat"])
+ assert atts["data.dat"]["revpos"] == 4
+ ct_re = ~r/application\/binary/
+ assert String.match?(atts["data.dat"]["content_type"], ct_re)
+ assert atts["data.dat"]["stub"]
+
+ att2_data = Couch.get!("/#{tgt_db_name}/#{conflict_id}/data.dat").body
+ assert String.length(att2_data) == String.length(att2[:body])
+ assert att2_data == att2[:body]
+
+ # Generate a conflict using replication by doc ids
+ orig = Couch.get!("/#{src_db_name}/#{conflict_id}").body
+ orig = Map.update!(orig, "integer", &(&1 + 100))
+ [_] = save_docs(src_db_name, [orig])
+
+ copy = Couch.get!("/#{tgt_db_name}/#{conflict_id}").body
+ copy = Map.update!(copy, "integer", &(&1 + 1))
+ [_] = save_docs(tgt_db_name, [copy])
+
+ result = replicate(repl_src, repl_tgt, body: repl_body)
+ assert result["ok"]
+ assert result["docs_read"] == 1
+ assert result["docs_written"] == 1
+ assert result["doc_write_failures"] == 0
+
+ copy = Couch.get!("/#{tgt_db_name}/#{conflict_id}", query: query).body
+ assert String.match?(copy["_rev"], ~r/^5-/)
+ assert is_list(copy["_conflicts"])
+ assert length(copy["_conflicts"]) == 1
+ conflict_rev = Enum.at(copy["_conflicts"], 0)
+ assert String.match?(conflict_rev, ~r/^5-/)
+ end
+
+ def run_continuous_repl(src_prefix, tgt_prefix) do
+ base_db_name = random_db_name()
+ src_db_name = base_db_name <> "_src"
+ tgt_db_name = base_db_name <> "_tgt"
+ repl_src = src_prefix <> src_db_name
+ repl_tgt = tgt_prefix <> tgt_db_name
+
+ create_db(src_db_name)
+ create_db(tgt_db_name)
+
+ ddoc = %{
+ "_id" => "_design/mydesign",
+ "language" => "javascript",
+ "filters" => %{
+ "myfilter" => "function(doc, req) { return true; }"
+ }
+ }
+ docs = make_docs(1..25)
+ docs = save_docs(src_db_name, docs ++ [ddoc])
+
+ att1_data = get_att1_data()
+
+ docs = for doc <- docs do
+ if doc["integer"] >= 10 and doc["integer"] < 15 do
+ add_attachment(src_db_name, doc)
+ else
+ doc
+ end
+ end
+
+ repl_body = %{:continuous => true}
+ result = replicate(repl_src, repl_tgt, body: repl_body)
+
+ assert result["ok"]
+ assert is_binary(result["_local_id"])
+
+ repl_id = result["_local_id"]
+ task = get_task(repl_id, 30000)
+ assert is_map(task), "Error waiting for replication to start"
+
+ wait_for_repl(src_db_name, repl_id, 26)
+
+ Enum.each(docs, fn doc ->
+ resp = Couch.get!("/#{tgt_db_name}/#{doc["_id"]}")
+ assert resp.status_code < 300
+ assert cmp_json(doc, resp.body)
+
+ if doc["integer"] >= 10 and doc["integer"] < 15 do
+ atts = resp.body["_attachments"]
+ assert is_map(atts)
+ att = atts["readme.txt"]
+ assert is_map(att)
+ assert att["revpos"] == 2
+ assert String.match?(att["content_type"], ~r/text\/plain/)
+ assert att["stub"]
+
+ resp = Couch.get!("/#{tgt_db_name}/#{doc["_id"]}/readme.txt")
+ assert String.length(resp.body) == String.length("some text")
+ assert resp.body == "some text"
+ end
+ end)
+
+ src_info = get_db_info(src_db_name)
+ tgt_info = get_db_info(tgt_db_name)
+
+ assert tgt_info["doc_count"] == src_info["doc_count"]
+
+ # Add attachments to more source docs
+ docs = for doc <- docs do
+ is_ddoc = String.starts_with?(doc["_id"], "_design/")
+ case doc["integer"] do
+ n when n >= 10 and n < 15 ->
+ ctype = "application/binary"
+ opts = [name: "data.dat", body: att1_data, content_type: ctype]
+ add_attachment(src_db_name, doc, opts)
+ _ when is_ddoc ->
+ add_attachment(src_db_name, doc)
+ _ ->
+ doc
+ end
+ end
+
+ wait_for_repl(src_db_name, repl_id, 32)
+
+ Enum.each(docs, fn doc ->
+ is_ddoc = String.starts_with?(doc["_id"], "_design/")
+ case doc["integer"] do
+ n when (n >= 10 and n < 15) or is_ddoc ->
+ resp = Couch.get!("/#{tgt_db_name}/#{doc["_id"]}")
+ atts = resp.body["_attachments"]
+ assert is_map(atts)
+ att = atts["readme.txt"]
+ assert is_map(att)
+ assert att["revpos"] == 2
+ assert String.match?(att["content_type"], ~r/text\/plain/)
+ assert att["stub"]
+
+ resp = Couch.get!("/#{tgt_db_name}/#{doc["_id"]}/readme.txt")
+ assert String.length(resp.body) == String.length("some text")
+ assert resp.body == "some text"
+
+ if not is_ddoc do
+ att = atts["data.dat"]
+ assert is_map(att)
+ assert att["revpos"] == 3
+ assert String.match?(att["content_type"], ~r/application\/binary/)
+ assert att["stub"]
+
+ resp = Couch.get!("/#{tgt_db_name}/#{doc["_id"]}/data.dat")
+ assert String.length(resp.body) == String.length(att1_data)
+ assert resp.body == att1_data
+ end
+ _ ->
+ :ok
+ end
+ end)
+
+ src_info = get_db_info(src_db_name)
+ tgt_info = get_db_info(tgt_db_name)
+
+ assert tgt_info["doc_count"] == src_info["doc_count"]
+
+ ddoc = List.last(docs)
+ ctype = "application/binary"
+ opts = [name: "data.dat", body: att1_data, content_type: ctype]
+ add_attachment(src_db_name, ddoc, opts)
+
+ wait_for_repl(src_db_name, repl_id, 33)
+
+ resp = Couch.get("/#{tgt_db_name}/#{ddoc["_id"]}")
+ atts = resp.body["_attachments"]
+ assert is_map(atts)
+ att = atts["readme.txt"]
+ assert is_map(att)
+ assert att["revpos"] == 2
+ assert String.match?(att["content_type"], ~r/text\/plain/)
+ assert att["stub"]
+
+ resp = Couch.get!("/#{tgt_db_name}/#{ddoc["_id"]}/readme.txt")
+ assert String.length(resp.body) == String.length("some text")
+ assert resp.body == "some text"
+
+ att = atts["data.dat"]
+ assert is_map(att)
+ assert att["revpos"] == 3
+ assert String.match?(att["content_type"], ~r/application\/binary/)
+ assert att["stub"]
+
+ resp = Couch.get!("/#{tgt_db_name}/#{ddoc["_id"]}/data.dat")
+ assert String.length(resp.body) == String.length(att1_data)
+ assert resp.body == att1_data
+
+ src_info = get_db_info(src_db_name)
+ tgt_info = get_db_info(tgt_db_name)
+
+ assert tgt_info["doc_count"] == src_info["doc_count"]
+
+ # Check creating new normal documents
+ new_docs = make_docs(26..35)
+ new_docs = save_docs(src_db_name, new_docs)
+
+ wait_for_repl(src_db_name, repl_id, 43)
+
+ Enum.each(new_docs, fn doc ->
+ resp = Couch.get!("/#{tgt_db_name}/#{doc["_id"]}")
+ assert resp.status_code < 300
+ assert cmp_json(doc, resp.body)
+ end)
+
+ src_info = get_db_info(src_db_name)
+ tgt_info = get_db_info(tgt_db_name)
+
+ assert tgt_info["doc_count"] == src_info["doc_count"]
+
+ # Delete docs from the source
+
+ doc1 = Enum.at(new_docs, 0)
+ query = %{:rev => doc1["_rev"]}
+ Couch.delete!("/#{src_db_name}/#{doc1["_id"]}", query: query)
+
+ doc2 = Enum.at(new_docs, 6)
+ query = %{:rev => doc2["_rev"]}
+ Couch.delete!("/#{src_db_name}/#{doc2["_id"]}", query: query)
+
+ wait_for_repl(src_db_name, repl_id, 45)
+
+ resp = Couch.get("/#{tgt_db_name}/#{doc1["_id"]}")
+ assert resp.status_code == 404
+ resp = Couch.get("/#{tgt_db_name}/#{doc2["_id"]}")
+ assert resp.status_code == 404
+
+ changes = get_db_changes(tgt_db_name, %{:since => tgt_info["update_seq"]})
+ # Unfortunately we can't rely on ordering in a cluster,
+ # but we can expect exactly two deletions
+ changes = for change <- changes["results"] do
+ {change["id"], change["deleted"]}
+ end
+ assert Enum.sort(changes) == [{doc1["_id"], true}, {doc2["_id"], true}]
+
+ # Cancel the replication
+ repl_body = %{:continuous => true, :cancel => true}
+ resp = replicate(repl_src, repl_tgt, body: repl_body)
+ assert resp["ok"]
+ assert resp["_local_id"] == repl_id
+
+ doc = %{"_id" => "foobar", "value": 666}
+ [doc] = save_docs(src_db_name, [doc])
+
+ wait_for_repl_stop(repl_id, 30000)
+
+ resp = Couch.get("/#{tgt_db_name}/#{doc["_id"]}")
+ assert resp.status_code == 404
+ end
+
+ def run_compressed_att_repl(src_prefix, tgt_prefix) do
+ base_db_name = random_db_name()
+ src_db_name = base_db_name <> "_src"
+ tgt_db_name = base_db_name <> "_tgt"
+ repl_src = src_prefix <> src_db_name
+ repl_tgt = tgt_prefix <> tgt_db_name
+
+ create_db(src_db_name)
+ create_db(tgt_db_name)
+
+ doc = %{"_id" => "foobar"}
+ [doc] = save_docs(src_db_name, [doc])
+
+ att1_data = get_att1_data()
+ num_copies = 1 + round(128 * 1024 / String.length(att1_data))
+ big_att = List.foldl(Enum.to_list(1..num_copies), "", fn _, acc ->
+ acc <> att1_data
+ end)
+
+ doc = add_attachment(src_db_name, doc, [body: big_att])
+
+ # Disable attachment compression
+ set_config_raw("attachments", "compression_level", "0")
+
+ result = replicate(repl_src, repl_tgt)
+ assert result["ok"]
+ assert is_list(result["history"])
+ assert length(result["history"]) == 1
+ history = Enum.at(result["history"], 0)
+ assert history["missing_checked"] == 1
+ assert history["missing_found"] == 1
+ assert history["docs_read"] == 1
+ assert history["docs_written"] == 1
+ assert history["doc_write_failures"] == 0
+
+ token = Enum.random(1..1_000_000)
+ query = %{"att_encoding_info": "true", "bypass_cache": token}
+ resp = Couch.get("/#{tgt_db_name}/#{doc["_id"]}", query: query)
+ assert resp.status_code < 300
+ assert is_map(resp.body["_attachments"])
+ att = resp.body["_attachments"]["readme.txt"]
+ assert att["encoding"] == "gzip"
+ assert is_integer(att["length"])
+ assert is_integer(att["encoded_length"])
+ assert att["encoded_length"] < att["length"]
+ end
+
+ def run_non_admin_target_user_repl(src_prefix, tgt_prefix, ctx) do
+ base_db_name = random_db_name()
+ src_db_name = base_db_name <> "_src"
+ tgt_db_name = base_db_name <> "_tgt"
+ repl_src = src_prefix <> src_db_name
+ repl_tgt = tgt_prefix <> tgt_db_name
+
+ create_db(src_db_name)
+ create_db(tgt_db_name)
+
+ set_security(tgt_db_name, %{
+ :admins => %{
+ :names => ["superman"],
+ :roles => ["god"]
+ }})
+
+ docs = make_docs(1..6)
+ ddoc = %{"_id" => "_design/foo", "language" => "javascript"}
+ docs = save_docs(src_db_name, [ddoc | docs])
+
+ sess = Couch.login(ctx[:userinfo])
+ resp = Couch.Session.get(sess, "/_session")
+ assert resp.body["ok"]
+ assert resp.body["userCtx"]["name"] == "joe"
+
+ opts = [
+ userinfo: ctx[:userinfo],
+ headers: [cookie: sess.cookie]
+ ]
+ result = replicate(repl_src, repl_tgt, opts)
+
+ assert Couch.Session.logout(sess).body["ok"]
+
+ assert result["ok"]
+ history = Enum.at(result["history"], 0)
+ assert history["docs_read"] == length(docs)
+ assert history["docs_written"] == length(docs) - 1 # ddoc write failed
+ assert history["doc_write_failures"] == 1 # ddoc write failed
+
+ Enum.each(docs, fn doc ->
+ resp = Couch.get("/#{tgt_db_name}/#{doc["_id"]}")
+ if String.starts_with?(doc["_id"], "_design/") do
+ assert resp.status_code == 404
+ else
+ assert HTTPotion.Response.success?(resp)
+ assert cmp_json(doc, resp.body)
+ end
+ end)
+ end
+
+ def run_non_admin_or_reader_source_user_repl(src_prefix, tgt_prefix, ctx) do
+ base_db_name = random_db_name()
+ src_db_name = base_db_name <> "_src"
+ tgt_db_name = base_db_name <> "_tgt"
+ repl_src = src_prefix <> src_db_name
+ repl_tgt = tgt_prefix <> tgt_db_name
+
+ create_db(src_db_name)
+ create_db(tgt_db_name)
+
+ set_security(tgt_db_name, %{
+ :admins => %{
+ :names => ["superman"],
+ :roles => ["god"]
+ },
+ :readers => %{
+ :names => ["john"],
+ :roles => ["secret"]
+ }
+ })
+
+ docs = make_docs(1..6)
+ ddoc = %{"_id" => "_design/foo", "language" => "javascript"}
+ docs = save_docs(src_db_name, [ddoc | docs])
+
+ sess = Couch.login(ctx[:userinfo])
+ resp = Couch.Session.get(sess, "/_session")
+ assert resp.body["ok"]
+ assert resp.body["userCtx"]["name"] == "joe"
+
+ opts = [
+ userinfo: ctx[:userinfo],
+ headers: [cookie: sess.cookie]
+ ]
+ assert_raise(ExUnit.AssertionError, fn() ->
+ replicate(repl_src, repl_tgt, opts)
+ end)
+
+ assert Couch.Session.logout(sess).body["ok"]
+
+ Enum.each(docs, fn doc ->
+ resp = Couch.get("/#{tgt_db_name}/#{doc["_id"]}")
+ assert resp.status_code == 404
+ end)
+ end
+
+ def get_db_info(db_name) do
+ resp = Couch.get("/#{db_name}")
+ assert HTTPotion.Response.success?(resp)
+ resp.body
+ end
+
+ def replicate(src, tgt, options \\ []) do
+ {userinfo, options} = Keyword.pop(options, :userinfo)
+ userinfo = if userinfo == nil do
+ @admin_account
+ else
+ userinfo
+ end
+
+ src = set_user(src, userinfo)
+ tgt = set_user(tgt, userinfo)
+
+ defaults = [headers: [], body: %{}, timeout: 30_000]
+ options = Keyword.merge(defaults, options) |> Enum.into(%{})
+
+ %{body: body} = options
+ body = [source: src, target: tgt] |> Enum.into(body)
+ options = Map.put(options, :body, body)
+
+ resp = Couch.post("/_replicate", Enum.to_list options)
+ assert HTTPotion.Response.success?(resp), "#{inspect resp}"
+ resp.body
+ end
+
+ def cancel_replication(src, tgt) do
+ body = %{:cancel => true}
+ try do
+ replicate(src, tgt, body: body)
+ rescue
+ ExUnit.AssertionError -> :ok
+ end
+ end
+
+ def get_db_changes(db_name, query \\ %{}) do
+ resp = Couch.get("/#{db_name}/_changes", query: query)
+ assert HTTPotion.Response.success?(resp), "#{inspect resp}"
+ resp.body
+ end
+
+ def save_docs(db_name, docs) do
+ query = %{w: 3}
+ body = %{docs: docs}
+ resp = Couch.post("/#{db_name}/_bulk_docs", query: query, body: body)
+ assert HTTPotion.Response.success?(resp)
+ for {doc, resp} <- Enum.zip(docs, resp.body) do
+ assert resp["ok"], "Error saving doc: #{doc["_id"]}"
+ Map.put(doc, "_rev", resp["rev"])
+ end
+ end
+
+ def set_security(db_name, sec_props) do
+ resp = Couch.put("/#{db_name}/_security", body: :jiffy.encode(sec_props))
+ assert HTTPotion.Response.success?(resp)
+ assert resp.body["ok"]
+ end
+
+ def add_attachment(db_name, doc, att \\ []) do
+ defaults = [
+ name: "readme.txt",
+ body: "some text",
+ content_type: "text/plain"
+ ]
+ att = Keyword.merge(defaults, att) |> Enum.into(%{})
+ uri = "/#{db_name}/#{URI.encode(doc["_id"])}/#{att[:name]}"
+ headers = ["Content-Type": att[:content_type]]
+ params = if doc["_rev"] do
+ %{:w => 3, :rev => doc["_rev"]}
+ else
+ %{:w => 3}
+ end
+ resp = Couch.put(uri, headers: headers, query: params, body: att[:body])
+ assert HTTPotion.Response.success?(resp)
+ Map.put(doc, "_rev", resp.body["rev"])
+ end
+
+ def wait_for_repl(src_db_name, repl_id, expect_revs_checked) do
+ wait_for_repl(src_db_name, repl_id, expect_revs_checked, 30000)
+ end
+
+ def wait_for_repl(_, _, _, wait_left) when wait_left <= 0 do
+ assert false, "Timeout waiting for replication"
+ end
+
+ def wait_for_repl(src_db_name, repl_id, expect_revs_checked, wait_left) do
+ task = get_task(repl_id, 0)
+ through_seq = task["through_seq"]
+ revs_checked = task["revisions_checked"]
+ changes = get_db_changes(src_db_name, %{:since => through_seq})
+ if length(changes["results"]) > 0 or revs_checked < expect_revs_checked do
+ :timer.sleep(500)
+ wait_for_repl(src_db_name, repl_id, expect_revs_checked, wait_left - 500)
+ end
+ task
+ end
+
+ def wait_for_repl_stop(repl_id) do
+ wait_for_repl_stop(repl_id, 30000)
+ end
+
+ def wait_for_repl_stop(repl_id, wait_left) when wait_left <= 0 do
+ assert false, "Timeout waiting for replication task to stop: #{repl_id}"
+ end
+
+ def wait_for_repl_stop(repl_id, wait_left) do
+ task = get_task(repl_id, 0)
+ if is_map(task) do
+ :timer.sleep(500)
+ wait_for_repl_stop(repl_id, wait_left - 500)
+ end
+ end
+
+ def get_last_seq(db_name) do
+ body = get_db_changes(db_name, %{:since => "now"})
+ body["last_seq"]
+ end
+
+ def get_task(repl_id, delay) when delay <= 0 do
+ try_get_task(repl_id)
+ end
+
+ def get_task(repl_id, delay) do
+ case try_get_task(repl_id) do
+ result when is_map(result) ->
+ result
+ _ ->
+ :timer.sleep(500)
+ get_task(repl_id, delay - 500)
+ end
+ end
+
+ def try_get_task(repl_id) do
+ resp = Couch.get("/_active_tasks")
+ assert HTTPotion.Response.success?(resp)
+ assert is_list(resp.body)
+ Enum.find(resp.body, nil, fn task ->
+ task["replication_id"] == repl_id
+ end)
+ end
+
+ def make_docs(ids) do
+ for id <- ids, str_id = Integer.to_string(id) do
+ %{"_id" => str_id, "integer" => id, "string" => str_id}
+ end
+ end
+
+ def set_user(uri, userinfo) do
+ case URI.parse(uri) do
+ %{scheme: nil} ->
+ uri
+ %{userinfo: nil} = uri ->
+ URI.to_string(Map.put(uri, :userinfo, userinfo))
+ _ ->
+ uri
+ end
+ end
+
+ def get_att1_data do
+ File.read!("test/data/lorem.txt")
+ end
+
+ def get_att2_data do
+ File.read!("test/data/lorem_b64.txt")
+ end
+
+ def cmp_json(lhs, rhs) when is_map(lhs) and is_map(rhs) do
+ Enum.reduce_while(lhs, true, fn {k, v}, true ->
+ if Map.has_key?(rhs, k) do
+ if cmp_json(v, rhs[k]) do
+ {:cont, true}
+ else
+ Logger.error "#{inspect lhs} != #{inspect rhs}"
+ {:halt, false}
+ end
+ else
+ Logger.error "#{inspect lhs} != #{inspect rhs}"
+ {:halt, false}
+ end
+ end)
+ end
+
+ def cmp_json(lhs, rhs), do: lhs == rhs
+
+ def seq_to_shards(seq) do
+ for {_node, range, update_seq} <- decode_seq(seq) do
+ {range, update_seq}
+ end
+ end
+
+ def decode_seq(seq) do
+ seq = String.replace(seq, ~r/\d+-/, "", global: false)
+ :erlang.binary_to_term(Base.url_decode64!(seq, padding: false))
+ end
+end
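For context on `decode_seq/1` above: a clustered update seq is an opaque string of the form `<number>-<base64url-encoded Erlang term>`, and the `global: false` option strips only the first numeric prefix before decoding. The prefix and padding handling can be illustrated in Python (the seq value below is hypothetical; real payloads are `term_to_binary` output that only Erlang/Elixir can decode):

```python
import re
import base64

seq = "42-abcdefgh"  # hypothetical stand-in for a real update seq
opaque = re.sub(r"\d+-", "", seq, count=1)  # count=1 is the global: false above
# Base.url_decode64!(padding: false) tolerates missing '='; Python's
# urlsafe_b64decode needs the padding restored first:
padded = opaque + "=" * (-len(opaque) % 4)
print(opaque, len(base64.urlsafe_b64decode(padded)))  # prints abcdefgh 6
```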
[couchdb] 03/31: Misc updates
Posted by ch...@apache.org.
This is an automated email from the ASF dual-hosted git repository.
chewbranca pushed a commit to branch elixir-suite
in repository https://gitbox.apache.org/repos/asf/couchdb.git
commit 0fc3f02ff1d89b462afa498dea861dac1d584dbd
Author: Russell Branca <ch...@apache.org>
AuthorDate: Mon Dec 4 23:25:52 2017 +0000
Misc updates
---
elixir_suite/test/basics_test.exs | 5 ++++-
elixir_suite/test/test_helper.exs | 1 +
2 files changed, 5 insertions(+), 1 deletion(-)
diff --git a/elixir_suite/test/basics_test.exs b/elixir_suite/test/basics_test.exs
index 1a1adad..40ad4bd 100644
--- a/elixir_suite/test/basics_test.exs
+++ b/elixir_suite/test/basics_test.exs
@@ -92,11 +92,12 @@ defmodule BasicsTest do
test "Can create several documents", context do
db_name = context[:db_name]
assert Couch.post("/#{db_name}", [body: %{:_id => "1", :a => 2, :b => 4}]).body["ok"]
- assert Couch.post("/#{db_name}", [body: %{:_id => "2", :a => 3, :b => 9}])
+ assert Couch.post("/#{db_name}", [body: %{:_id => "2", :a => 3, :b => 9}]).body["ok"]
assert Couch.post("/#{db_name}", [body: %{:_id => "3", :a => 4, :b => 16}]).body["ok"]
assert Couch.get("/#{db_name}").body["doc_count"] == 3
end
+ @tag :pending
@tag :with_db
test "Regression test for COUCHDB-954", context do
db_name = context[:db_name]
@@ -189,6 +190,7 @@ defmodule BasicsTest do
assert Couch.get("/#{db_name}/oppossum").body["yar"] == "matey"
end
+ @tag :pending
@tag :with_db
test "PUT doc has a Location header", context do
db_name = context[:db_name]
@@ -266,6 +268,7 @@ defmodule BasicsTest do
assert resp.body["reason"] == "You tried to DELETE a database with a ?=rev parameter. Did you mean to DELETE a document instead?"
end
+ @tag :pending
@tag :with_db
test "On restart, a request for creating an already existing db can not override", context do
# TODO
diff --git a/elixir_suite/test/test_helper.exs b/elixir_suite/test/test_helper.exs
index 181642b..b8adb52 100644
--- a/elixir_suite/test/test_helper.exs
+++ b/elixir_suite/test/test_helper.exs
@@ -1,3 +1,4 @@
+ExUnit.configure(exclude: [pending: true])
ExUnit.start()
# TODO
[couchdb] 31/31: Port the first half of security_validation_tests.js
commit 656934a9be65ca678d3dfb56cc72cae35259b3a4
Author: Russell Branca <ch...@apache.org>
AuthorDate: Fri Jun 22 22:12:28 2018 +0000
Port the first half of security_validation_tests.js
---
test/elixir/lib/couch.ex | 8 +-
test/elixir/test/security_validation_test.exs | 309 ++++++++++++++++++++++++++
2 files changed, 316 insertions(+), 1 deletion(-)
diff --git a/test/elixir/lib/couch.ex b/test/elixir/lib/couch.ex
index 3a491e8..6ae702e 100644
--- a/test/elixir/lib/couch.ex
+++ b/test/elixir/lib/couch.ex
@@ -67,8 +67,14 @@ defmodule Couch do
def process_options(options) do
+
if Keyword.get(options, :cookie) == nil do
- Keyword.put(options, :basic_auth, {"adm", "pass"})
+ headers = Keyword.get(options, :headers, [])
+ if headers[:basic_auth] != nil or headers[:authorization] != nil do
+ options
+ else
+ Keyword.put(options, :basic_auth, {"adm", "pass"})
+ end
else
options
end
diff --git a/test/elixir/test/security_validation_test.exs b/test/elixir/test/security_validation_test.exs
new file mode 100644
index 0000000..32117e2
--- /dev/null
+++ b/test/elixir/test/security_validation_test.exs
@@ -0,0 +1,309 @@
+defmodule SecurityValidationTest do
+ use CouchTestCase
+
+ @moduletag :security
+
+ @moduledoc """
+ Test CouchDB Security Validations
+ This is a port of the security_validation.js suite
+ """
+
+ @auth_headers %{
+ jerry: [
+ authorization: "Basic amVycnk6bW91c2U=" # jerry:mouse
+ ],
+ tom: [
+ authorization: "Basic dG9tOmNhdA==" # tom:cat
+ ],
+ spike_cat: [
+ authorization: "Basic c3Bpa2U6Y2F0" # spike:cat - which is wrong
+ ]
+ }
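The tokens in `@auth_headers` above are plain `Basic` credentials, i.e. base64 of `name:password`; they can be reproduced with any base64 tool, for example in Python:

```python
import base64

# Reproduce the Basic auth tokens used in @auth_headers.
for creds in ("jerry:mouse", "tom:cat", "spike:cat"):
    print(creds, "->", base64.b64encode(creds.encode()).decode())
```

This is also why the `spike_cat` entry is "wrong": it encodes `spike:cat`, but spike's password is `dog`, so the server rejects it.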
+
+ @ddoc %{
+ _id: "_design/test",
+ language: "javascript",
+ validate_doc_update: ~s"""
+ (function (newDoc, oldDoc, userCtx, secObj) {
+ if (secObj.admin_override) {
+ if (userCtx.roles.indexOf('_admin') != -1) {
+ // user is admin, they can do anything
+ return true;
+ }
+ }
+ // docs should have an author field.
+ if (!newDoc._deleted && !newDoc.author) {
+ throw {forbidden:
+ \"Documents must have an author field\"};
+ }
+ if (oldDoc && oldDoc.author != userCtx.name) {
+ throw {unauthorized:
+ \"You are '\" + userCtx.name + \"', not the author '\" + oldDoc.author + \"' of this document. You jerk.\"};
+ }
+ })
+ """
+ }
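The JavaScript `validate_doc_update` above enforces three rules: admins bypass everything when `admin_override` is set, non-deleted docs must carry an `author` field, and only the stored author may update an existing doc. A Python re-statement of that logic (illustration only, not CouchDB's validation engine; `ValueError` stands in for the JS `forbidden`/`unauthorized` throw objects):

```python
def validate_doc_update(new_doc, old_doc, user_ctx, sec_obj):
    """Mirror of the design doc's validation rules, for illustration."""
    if sec_obj.get("admin_override") and "_admin" in user_ctx.get("roles", []):
        return True  # admins may do anything
    if not new_doc.get("_deleted") and not new_doc.get("author"):
        raise ValueError("forbidden: Documents must have an author field")
    if old_doc and old_doc.get("author") != user_ctx.get("name"):
        raise ValueError("unauthorized: only the original author may update")
    return True

# jerry updating jerry's doc passes validation.
print(validate_doc_update({"author": "jerry"}, {"author": "jerry"},
                          {"name": "jerry", "roles": []}, {}))  # prints True
```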
+
+ setup_all do
+ auth_db_name = random_db_name()
+ {:ok, _} = create_db(auth_db_name)
+ on_exit(fn -> delete_db(auth_db_name) end)
+
+ configs = [
+ {"httpd", "authentication_handlers", "{couch_httpd_auth, cookie_authentication_handler}, {couch_httpd_auth, default_authentication_handler}"},
+ {"couch_httpd_auth", "authentication_db", auth_db_name},
+ {"chttpd_auth", "authentication_db", auth_db_name}
+ ]
+ Enum.each(configs, &set_config/1)
+
+ # port of comment from security_validation.js
+ # the special case handler does not exist (any longer) in clusters, so we have
+ # to replicate the behavior using a "normal" DB even though tests might no more
+ # run universally (why the "X-Couch-Test-Auth" header was introduced).
+ # btw: this needs to be INSIDE configured server to propagate correctly ;-)
+ # At least they'd run in the build, though
+ users = [{"tom", "cat"}, {"jerry", "mouse"}, {"spike", "dog"}]
+ Enum.each(users, fn {name, pass} ->
+ doc = %{
+ :_id => "org.couchdb.user:#{name}",
+ :name => name,
+ :roles => [],
+ :password => pass
+ }
+ assert Couch.post("/#{auth_db_name}", body: doc).body["ok"]
+ end)
+
+ {:ok, [auth_db_name: auth_db_name]}
+ end
+
+ @tag :with_db_name
+ test "Saving document using the wrong credentials", context do
+ headers = @auth_headers[:spike_cat] # spike:cat - which is wrong
+ resp = Couch.post("/#{context[:db_name]}", [body: %{foo: 1}, headers: headers])
+ assert resp.body["error"] == "unauthorized"
+ assert resp.status_code == 401
+ end
+
+ test "Force basic login" do
+ headers = @auth_headers[:spike_cat] # spike:cat - which is wrong
+ resp = Couch.get("/_session", [query: %{basic: true}, headers: headers])
+ assert resp.status_code == 401
+ assert resp.body["error"] == "unauthorized"
+ end
+
+ @tag :with_db
+ test "Jerry can save a document normally", context do
+ headers = @auth_headers[:jerry]
+ assert Couch.get("/_session", headers: headers).body["userCtx"]["name"] == "jerry"
+
+ doc = %{_id: "testdoc", foo: 1, author: "jerry"}
+ assert Couch.post("/#{context[:db_name]}", body: doc).body["ok"]
+ end
+
+ @tag :with_db
+ test "Non-admin user cannot save a ddoc", context do
+ headers = @auth_headers[:jerry]
+ resp = Couch.post("/#{context[:db_name]}", [body: @ddoc, headers: headers])
+ assert resp.status_code == 403
+ assert resp.body["error"] == "forbidden"
+ end
+
+ @tag :with_db
+ test "Ddoc writes with admin and replication contexts", context do
+ db_name = context[:db_name]
+ sec_obj = %{admins: %{names: ["jerry"]}}
+
+ assert Couch.put("/#{db_name}/_security", body: sec_obj).body["ok"]
+ assert Couch.post("/#{db_name}", body: @ddoc).body["ok"]
+
+ new_rev = "2-642e20f96624a0aae6025b4dba0c6fb2"
+ ddoc = Map.put(@ddoc, :_rev, new_rev) |> Map.put(:foo, "bar")
+ headers = @auth_headers[:tom]
+ # attempt to save doc in replication context, eg ?new_edits=false
+ resp = Couch.put("/#{db_name}/#{ddoc[:_id]}", [body: ddoc, headers: headers, query: %{new_edits: false}])
+ assert resp.status_code == 403
+ assert resp.body["error"] == "forbidden"
+ end
+
+ test "_session API" do
+ headers = @auth_headers[:jerry]
+ resp = Couch.get("/_session", headers: headers)
+ assert resp.body["userCtx"]["name"] == "jerry"
+ assert resp.body["userCtx"]["roles"] == []
+ end
+
+ @tag :with_db
+ test "Author presence and user security", context do
+ db_name = context[:db_name]
+ sec_obj = %{admin_override: false, admins: %{names: ["jerry"]}}
+
+ jerry = @auth_headers[:jerry]
+ tom = @auth_headers[:tom]
+
+ assert Couch.put("/#{db_name}/_security", body: sec_obj).body["ok"]
+ assert Couch.post("/#{db_name}", body: @ddoc).body["ok"]
+
+ resp = Couch.put("/#{db_name}/test_doc", [body: %{foo: 1}, headers: jerry])
+ assert resp.status_code == 403
+ assert resp.body["error"] == "forbidden"
+ assert resp.body["reason"] == "Documents must have an author field"
+
+ # Jerry can write the document
+ assert Couch.put("/#{db_name}/test_doc", [body: %{foo: 1, author: "jerry"}, headers: jerry]).body["ok"]
+
+ test_doc = Couch.get("/#{db_name}/test_doc").body
+
+ # Tom cannot write the document
+ resp = Couch.post("/#{db_name}", [body: %{foo: 1}, headers: tom])
+ assert resp.status_code == 403
+ assert resp.body["error"] == "forbidden"
+
+ # Enable admin override for changing author values
+ assert Couch.put("/#{db_name}/_security", body: %{sec_obj | admin_override: true}).body["ok"]
+
+ # Change owner to Tom (Map.put returns a new map, so rebind test_doc)
+ test_doc = Map.put(test_doc, "author", "tom")
+ resp = Couch.put("/#{db_name}/test_doc", body: test_doc)
+ assert resp.body["ok"]
+ test_doc = Map.put(test_doc, "_rev", resp.body["rev"])
+
+ # Now Tom can update the document
+ test_doc = Map.put(test_doc, "foo", "asdf")
+ resp = Couch.put("/#{db_name}/test_doc", [body: test_doc, headers: tom])
+ assert resp.body["ok"]
+ test_doc = Map.put(test_doc, "_rev", resp.body["rev"])
+
+ # Jerry can't delete it
+ resp = Couch.delete("/#{db_name}/test_doc?rev=#{test_doc["_rev"]}", headers: jerry)
+ assert resp.status_code == 401
+ assert resp.body["error"] == "unauthorized"
+ end
+end
+
+# TODO: port remainder of security_validation.js suite
+# remaining bits reproduced below:
+#
+# // try to do something lame
+# try {
+# db.setDbProperty("_security", ["foo"]);
+# T(false && "can't do this");
+# } catch(e) {}
+#
+# // go back to normal
+# T(db.setDbProperty("_security", {admin_override : false}).ok);
+#
+# // Now delete document
+# T(user2Db.deleteDoc(doc).ok);
+#
+# // now test bulk docs
+# var docs = [{_id:"bahbah",author:"jerry",foo:"bar"},{_id:"fahfah",foo:"baz"}];
+#
+# // Create the docs
+# var results = db.bulkSave(docs);
+#
+# T(results[0].rev)
+# T(results[0].error == undefined)
+# T(results[1].rev === undefined)
+# T(results[1].error == "forbidden")
+#
+# T(db.open("bahbah"));
+# T(db.open("fahfah") == null);
+#
+#
+# // now all or nothing with a failure - no more available on cluster
+#/* var docs = [{_id:"booboo",author:"Damien Katz",foo:"bar"},{_id:"foofoo",foo:"baz"}];
+#
+# // Create the docs
+# var results = db.bulkSave(docs, {all_or_nothing:true});
+#
+# T(results.errors.length == 1);
+# T(results.errors[0].error == "forbidden");
+# T(db.open("booboo") == null);
+# T(db.open("foofoo") == null);
+#*/
+#
+# // Now test replication
+# var AuthHeaders = {"Authorization": "Basic c3Bpa2U6ZG9n"}; // spike
+# adminDbA = new CouchDB("" + db_name + "_a", {"X-Couch-Full-Commit":"false"});
+# adminDbB = new CouchDB("" + db_name + "_b", {"X-Couch-Full-Commit":"false"});
+# var dbA = new CouchDB("" + db_name + "_a", AuthHeaders);
+# var dbB = new CouchDB("" + db_name + "_b", AuthHeaders);
+# // looping does not really add value as the scenario is the same anyway (there's nothing 2 be gained from it)
+# var A = CouchDB.protocol + CouchDB.host + "/" + db_name + "_a";
+# var B = CouchDB.protocol + CouchDB.host + "/" + db_name + "_b";
+#
+# // (the databases never exist b4 - and we made sure they're deleted below)
+# //adminDbA.deleteDb();
+# adminDbA.createDb();
+# //adminDbB.deleteDb();
+# adminDbB.createDb();
+#
+# // save and replicate a documents that will and will not pass our design
+# // doc validation function.
+# T(dbA.save({_id:"foo1",value:"a",author:"tom"}).ok);
+# T(dbA.save({_id:"foo2",value:"a",author:"spike"}).ok);
+# T(dbA.save({_id:"bad1",value:"a"}).ok);
+#
+# T(CouchDB.replicate(A, B, {headers:AuthHeaders}).ok);
+# T(CouchDB.replicate(B, A, {headers:AuthHeaders}).ok);
+#
+# T(dbA.open("foo1"));
+# T(dbB.open("foo1"));
+# T(dbA.open("foo2"));
+# T(dbB.open("foo2"));
+#
+# // save the design doc to dbA
+# delete designDoc._rev; // clear rev from previous saves
+# T(adminDbA.save(designDoc).ok);
+#
+# // no affect on already saved docs
+# T(dbA.open("bad1"));
+#
+# // Update some docs on dbB. Since the design hasn't replicated, anything
+# // is allowed.
+#
+# // this edit will fail validation on replication to dbA (no author)
+# T(dbB.save({_id:"bad2",value:"a"}).ok);
+#
+# // this edit will fail security on replication to dbA (wrong author
+# // replicating the change)
+# var foo1 = dbB.open("foo1");
+# foo1.value = "b";
+# T(dbB.save(foo1).ok);
+#
+# // this is a legal edit
+# var foo2 = dbB.open("foo2");
+# foo2.value = "b";
+# T(dbB.save(foo2).ok);
+#
+# var results = CouchDB.replicate({"url": B, "headers": AuthHeaders}, {"url": A, "headers": AuthHeaders}, {headers:AuthHeaders});
+# T(results.ok);
+# TEquals(1, results.history[0].docs_written);
+# TEquals(2, results.history[0].doc_write_failures);
+#
+# // bad2 should not be on dbA
+# T(dbA.open("bad2") == null);
+#
+# // The edit to foo1 should not have replicated.
+# T(dbA.open("foo1").value == "a");
+#
+# // The edit to foo2 should have replicated.
+# T(dbA.open("foo2").value == "b");
+# });
+#
+# // cleanup
+# db.deleteDb();
+# if(adminDbA){
+# adminDbA.deleteDb();
+# }
+# if(adminDbB){
+# adminDbB.deleteDb();
+# }
+# authDb.deleteDb();
+# // have to clean up authDb on the backside :(
+# var req = CouchDB.newXhr();
+# req.open("DELETE", "http://127.0.0.1:15986/" + authDb_name, false);
+# req.send("");
+# CouchDB.maybeThrowError(req);
+#};
[couchdb] 11/31: Ignore erln8.config
commit 5f72bc7ff49a163bb64bb01dacb7e38db6109c19
Author: Paul J. Davis <pa...@gmail.com>
AuthorDate: Thu Dec 7 10:39:35 2017 -0600
Ignore erln8.config
---
.gitignore | 1 +
1 file changed, 1 insertion(+)
diff --git a/.gitignore b/.gitignore
index a1cba1e..ed15450 100644
--- a/.gitignore
+++ b/.gitignore
@@ -18,6 +18,7 @@ dev/lib/
dev/logs/
ebin/
erl_crash.dump
+erln8.config
install.mk
rel/*.config
rel/couchdb
[couchdb] 02/31: Port remaining basics.js tests
commit 35d57216966a7827cc2e823948d63d2d7b823547
Author: Russell Branca <ch...@apache.org>
AuthorDate: Fri Dec 1 21:36:12 2017 +0000
Port remaining basics.js tests
---
elixir_suite/test/basics_test.exs | 158 +++++++++++++++++++++++++++++++++++++-
1 file changed, 157 insertions(+), 1 deletion(-)
diff --git a/elixir_suite/test/basics_test.exs b/elixir_suite/test/basics_test.exs
index 87b4aff..1a1adad 100644
--- a/elixir_suite/test/basics_test.exs
+++ b/elixir_suite/test/basics_test.exs
@@ -1,6 +1,8 @@
defmodule BasicsTest do
use CouchTestCase
+ @moduletag :basics
+
@moduledoc """
Test CouchDB basics.
This is a port of the basics.js suite
@@ -19,7 +21,9 @@ defmodule BasicsTest do
@tag :with_db
test "PUT on existing DB should return 412 instead of 500", context do
db_name = context[:db_name]
- assert Couch.put("/#{db_name}").status_code == 412
+ resp = Couch.put("/#{db_name}")
+ assert resp.status_code == 412
+ refute resp.body["ok"]
end
@tag :with_db_name
@@ -114,5 +118,157 @@ defmodule BasicsTest do
#assert resp3.body["_rev"] == new_rev
end
+ @tag :with_db
+ test "Simple map functions", context do
+ db_name = context[:db_name]
+ map_fun = "function(doc) { if (doc.a==4) { emit(null, doc.b); } }"
+ red_fun = "function(keys, values) { return sum(values); }"
+ map_doc = %{:views => %{:baz => %{:map => map_fun}}}
+ red_doc = %{:views => %{:baz => %{:map => map_fun, :reduce => red_fun}}}
+
+ # Bootstrap database and ddoc
+ assert Couch.post("/#{db_name}", [body: %{:_id => "0", :a => 1, :b => 1}]).body["ok"]
+ assert Couch.post("/#{db_name}", [body: %{:_id => "1", :a => 2, :b => 4}]).body["ok"]
+ assert Couch.post("/#{db_name}", [body: %{:_id => "2", :a => 3, :b => 9}]).body["ok"]
+ assert Couch.post("/#{db_name}", [body: %{:_id => "3", :a => 4, :b => 16}]).body["ok"]
+ assert Couch.put("/#{db_name}/_design/foo", [body: map_doc]).body["ok"]
+ assert Couch.put("/#{db_name}/_design/bar", [body: red_doc]).body["ok"]
+ assert Couch.get("/#{db_name}").body["doc_count"] == 6
+
+ # Initial view query test
+ resp = Couch.get("/#{db_name}/_design/foo/_view/baz")
+ assert resp.body["total_rows"] == 1
+ assert hd(resp.body["rows"])["value"] == 16
+ # Modified doc and test for updated view results
+ doc0 = Couch.get("/#{db_name}/0").body
+ doc0 = Map.put(doc0, :a, 4)
+ assert Couch.put("/#{db_name}/0", [body: doc0]).body["ok"]
+ resp = Couch.get("/#{db_name}/_design/foo/_view/baz")
+ assert resp.body["total_rows"] == 2
+
+ # Write 2 more docs and test for updated view results
+ assert Couch.post("/#{db_name}", [body: %{:a => 3, :b => 9}]).body["ok"]
+ assert Couch.post("/#{db_name}", [body: %{:a => 4, :b => 16}]).body["ok"]
+ resp = Couch.get("/#{db_name}/_design/foo/_view/baz")
+ assert resp.body["total_rows"] == 3
+ assert Couch.get("/#{db_name}").body["doc_count"] == 8
+
+ # Test reduce function
+ resp = Couch.get("/#{db_name}/_design/bar/_view/baz")
+ assert hd(resp.body["rows"])["value"] == 33
+
+ # Delete doc and test for updated view results
+ doc0 = Couch.get("/#{db_name}/0").body
+ assert Couch.delete("/#{db_name}/0?rev=#{doc0["_rev"]}").body["ok"]
+ resp = Couch.get("/#{db_name}/_design/foo/_view/baz")
+ assert resp.body["total_rows"] == 2
+ assert Couch.get("/#{db_name}").body["doc_count"] == 7
+ assert Couch.get("/#{db_name}/0").status_code == 404
+ refute Couch.get("/#{db_name}/0?rev=#{doc0["_rev"]}").status_code == 404
+ end
+
+ @tag :with_db
+ test "POST doc response has a Location header", context do
+ db_name = context[:db_name]
+ resp = Couch.post("/#{db_name}", [body: %{:foo => :bar}])
+ assert resp.body["ok"]
+ loc = resp.headers["Location"]
+ assert loc, "should have a Location header"
+ locs = Enum.reverse(String.split(loc, "/"))
+ assert hd(locs) == resp.body["id"]
+ assert hd(tl(locs)) == db_name
+ end
+
+ @tag :with_db
+ test "POST doc with an _id field isn't overwritten by uuid", context do
+ db_name = context[:db_name]
+ resp = Couch.post("/#{db_name}", [body: %{:_id => "oppossum", :yar => "matey"}])
+ assert resp.body["ok"]
+ assert resp.body["id"] == "oppossum"
+ assert Couch.get("/#{db_name}/oppossum").body["yar"] == "matey"
+ end
+
+ @tag :with_db
+ test "PUT doc has a Location header", context do
+ db_name = context[:db_name]
+ resp = Couch.put("/#{db_name}/newdoc", [body: %{:a => 1}])
+ assert String.ends_with?(resp.headers["location"], "/#{db_name}/newdoc")
+ # TODO: make protocol check use defined protocol value
+ assert String.starts_with?(resp.headers["location"], "http")
+ end
+
+ @tag :with_db
+ test "DELETE'ing a non-existent doc should 404", context do
+ db_name = context[:db_name]
+ assert Couch.delete("/#{db_name}/doc-does-not-exist").status_code == 404
+ end
+
+ @tag :with_db
+ test "Check for invalid document members", context do
+ db_name = context[:db_name]
+ bad_docs = [
+ {:goldfish, %{:_zing => 4}},
+ {:zebrafish, %{:_zoom => "hello"}},
+ {:mudfish, %{:zane => "goldfish", :_fan => "something smells delicious"}},
+ {:tastyfish, %{:_bing => %{"wha?" => "soda can"}}}
+ ]
+
+ Enum.each(bad_docs, fn {id, doc} ->
+ resp = Couch.put("/#{db_name}/#{id}", [body: doc])
+ assert resp.status_code == 400
+ assert resp.body["error"] == "doc_validation"
+
+ resp = Couch.post("/#{db_name}", [body: doc])
+ assert resp.status_code == 400
+ assert resp.body["error"] == "doc_validation"
+ end)
+ end
+
+ @tag :with_db
+ test "PUT error when body not an object", context do
+ db_name = context[:db_name]
+ resp = Couch.put("/#{db_name}/bar", [body: "[]"])
+ assert resp.status_code == 400
+ assert resp.body["error"] == "bad_request"
+ assert resp.body["reason"] == "Document must be a JSON object"
+ end
+
+ @tag :with_db
+ test "_bulk_docs POST error when body not an object", context do
+ db_name = context[:db_name]
+ resp = Couch.post("/#{db_name}/_bulk_docs", [body: "[]"])
+ assert resp.status_code == 400
+ assert resp.body["error"] == "bad_request"
+ assert resp.body["reason"] == "Request body must be a JSON object"
+ end
+
+ @tag :with_db
+ test "_all_docs POST error when multi-get is not a {'key': [...]} structure", context do
+ db_name = context[:db_name]
+ resp = Couch.post("/#{db_name}/_all_docs", [body: "[]"])
+ assert resp.status_code == 400
+ assert resp.body["error"] == "bad_request"
+ assert resp.body["reason"] == "Request body must be a JSON object"
+
+ resp = Couch.post("/#{db_name}/_all_docs", [body: %{:keys => 1}])
+ assert resp.status_code == 400
+ assert resp.body["error"] == "bad_request"
+ assert resp.body["reason"] == "`keys` body member must be an array."
+ end
+
+ @tag :with_db
+ test "oops, the doc id got lost in code nirwana", context do
+ db_name = context[:db_name]
+ resp = Couch.delete("/#{db_name}/?rev=foobarbaz")
+ assert resp.status_code == 400, "should return a bad request"
+ assert resp.body["error"] == "bad_request"
+ assert resp.body["reason"] == "You tried to DELETE a database with a ?=rev parameter. Did you mean to DELETE a document instead?"
+ end
+
+ @tag :with_db
+ test "On restart, a request for creating an already existing db can not override", context do
+ # TODO
+ assert true
+ end
end
[couchdb] 05/31: Port config.js tests
commit 3c4730e8f0e2d12e379267efaedd6069784a7a57
Author: Russell Branca <ch...@apache.org>
AuthorDate: Wed Dec 6 21:51:55 2017 +0000
Port config.js tests
---
elixir_suite/lib/couch.ex | 6 ++
elixir_suite/test/config_test.exs | 149 ++++++++++++++++++++++++++++++++++++++
2 files changed, 155 insertions(+)
diff --git a/elixir_suite/lib/couch.ex b/elixir_suite/lib/couch.ex
index aafe829..d879ecf 100644
--- a/elixir_suite/lib/couch.ex
+++ b/elixir_suite/lib/couch.ex
@@ -30,4 +30,10 @@ defmodule Couch do
def process_response_body(body) do
body |> IO.iodata_to_binary |> :jiffy.decode([:return_maps])
end
+
+ def login(user, pass) do
+ resp = Couch.post("/_session", body: %{:username => user, :password => pass})
+ true = resp.body["ok"]
+ resp.body
+ end
end
diff --git a/elixir_suite/test/config_test.exs b/elixir_suite/test/config_test.exs
new file mode 100644
index 0000000..f83f5a6
--- /dev/null
+++ b/elixir_suite/test/config_test.exs
@@ -0,0 +1,149 @@
+defmodule ConfigTest do
+ use CouchTestCase
+
+ @moduletag :config
+
+ @moduledoc """
+ Test CouchDB config API
+ This is a port of the config.js suite
+ """
+
+ setup do
+ # TODO: switch this to _local when that's landed
+ config_url = "/_node/node1@127.0.0.1/_config"
+ resp = Couch.get(config_url)
+ assert resp.status_code == 200
+ {:ok, config: resp.body, config_url: config_url}
+ end
+
+ def set_config(context, section, key, val) do
+ set_config(context, section, key, val, 200)
+ end
+
+ def set_config(context, section, key, val, status_assert) do
+ url = "#{context[:config_url]}/#{section}/#{key}"
+ headers = ["X-Couch-Persist": "false"]
+ resp = Couch.put(url, headers: headers, body: :jiffy.encode(val))
+ if status_assert do
+ assert resp.status_code == status_assert
+ end
+ resp.body
+ end
+
+ def get_config(context, section) do
+ get_config(context, section, nil, 200)
+ end
+
+ def get_config(context, section, key) do
+ get_config(context, section, key, 200)
+ end
+
+ def get_config(context, section, key, status_assert) do
+ url = if key do
+ "#{context[:config_url]}/#{section}/#{key}"
+ else
+ "#{context[:config_url]}/#{section}"
+ end
+ resp = Couch.get(url)
+ if status_assert do
+ assert resp.status_code == status_assert
+ end
+ resp.body
+ end
+
+ def delete_config(context, section, key) do
+ delete_config(context, section, key, 200)
+ end
+
+ def delete_config(context, section, key, status_assert) do
+ url = "#{context[:config_url]}/#{section}/#{key}"
+ resp = Couch.delete(url)
+ if status_assert do
+ assert resp.status_code == status_assert
+ end
+ end
+
+ # TODO: port server_port tests from config.js
+ @tag :pending
+ test "CouchDB respects configured protocols"
+
+ test "Standard config options are present", context do
+ assert context[:config]["couchdb"]["database_dir"]
+ assert context[:config]["daemons"]["httpd"]
+ assert context[:config]["httpd_global_handlers"]["_config"]
+ assert context[:config]["log"]["level"]
+ assert context[:config]["query_servers"]["javascript"]
+ end
+
+ test "Settings can be altered with undefined whitelist allowing any change", context do
+ refute context["config"]["httpd"]["config_whitelist"], "Default whitelist is empty"
+ set_config(context, "test", "foo", "bar")
+ assert get_config(context, "test")["foo"] == "bar"
+ assert get_config(context, "test", "foo") == "bar"
+ end
+
+ test "Server-side password hashing, and raw updates disabling that", context do
+ plain_pass = "s3cret"
+ set_config(context, "admins", "administrator", plain_pass)
+ assert Couch.login("administrator", plain_pass)["ok"]
+ hash_pass = get_config(context, "admins", "administrator")
+ assert Regex.match?(~r/^-pbkdf2-/, hash_pass) or Regex.match?(~r/^-hashed-/, hash_pass)
+ delete_config(context, "admins", "administrator")
+ assert Couch.delete("/_session").body["ok"]
+ end
+
+ @tag :pending
+ test "PORT `BUGGED` ?raw tests from config.js"
+
+ test "Non-term whitelist values allow further modification of the whitelist", context do
+ val = "!This is an invalid Erlang term!"
+ set_config(context, "httpd", "config_whitelist", val)
+ assert val == get_config(context, "httpd", "config_whitelist")
+ delete_config(context, "httpd", "config_whitelist")
+ end
+
+ test "Non-list whitelist values allow further modification of the whitelist", context do
+ val = "{[yes, a_valid_erlang_term, but_unfortunately, not_a_list]}"
+ set_config(context, "httpd", "config_whitelist", val)
+ assert val == get_config(context, "httpd", "config_whitelist")
+ delete_config(context, "httpd", "config_whitelist")
+ end
+
+ test "Keys not in the whitelist may not be modified", context do
+ val = "[{httpd,config_whitelist}, {test,foo}]"
+ set_config(context, "httpd", "config_whitelist", val)
+ assert val == get_config(context, "httpd", "config_whitelist")
+ set_config(context, "test", "foo", "PUT to whitelisted config variable")
+ delete_config(context, "test", "foo")
+ end
+
+ test "Non-2-tuples in the whitelist are ignored", context do
+ val = "[{httpd,config_whitelist}, these, {are}, {nOt, 2, tuples}, [so], [they, will], [all, become, noops], {test,foo}]"
+ set_config(context, "httpd", "config_whitelist", val)
+ assert val == get_config(context, "httpd", "config_whitelist")
+ set_config(context, "test", "foo", "PUT to whitelisted config variable")
+ delete_config(context, "test", "foo")
+ end
+
+ test "Atoms, binaries, and strings suffice as whitelist sections and keys.", context do
+ vals = ["{test,foo}", "{\"test\",\"foo\"}", "{<<\"test\">>,<<\"foo\">>}"]
+ Enum.each(vals, fn pair ->
+ set_config(context, "httpd", "config_whitelist", "[{httpd,config_whitelist}, #{pair}]")
+ pair_format = case String.at(pair, 1) do
+ "t" -> "tuple"
+ "\"" -> "string"
+ "<" -> "binary"
+ end
+ set_config(context, "test", "foo", "PUT with #{pair_format}")
+ delete_config(context, "test", "foo")
+ end)
+ delete_config(context, "httpd", "config_whitelist")
+ end
+
+ test "Blacklist is functional", context do
+ sections = ["daemons", "external", "httpd_design_handlers", "httpd_db_handlers", "native_query_servers", "os_daemons", "query_servers"]
+ Enum.each(sections, fn section ->
+ set_config(context, section, "wohali", "rules", 403)
+ end)
+ end
+end
[couchdb] 16/31: Remove extraneous comment
commit d159dedbd790462c0c180cb8f21a8582c8a67c40
Author: Russell Branca <ch...@apache.org>
AuthorDate: Thu Dec 7 18:40:19 2017 +0000
Remove extraneous comment
---
elixir_suite/test/test_helper.exs | 5 -----
1 file changed, 5 deletions(-)
diff --git a/elixir_suite/test/test_helper.exs b/elixir_suite/test/test_helper.exs
index cef7d13..1c61e4a 100644
--- a/elixir_suite/test/test_helper.exs
+++ b/elixir_suite/test/test_helper.exs
@@ -1,11 +1,6 @@
ExUnit.configure(exclude: [pending: true])
ExUnit.start()
-# TODO
-#def random_db_name do
-# "asdf"
-#end
-
defmodule CouchTestCase do
use ExUnit.Case
[couchdb] 29/31: Add the user tag to create users declaratively
commit 6d01bb9d901a753e9811a7c23212c082a5543f33
Author: Paul J. Davis <pa...@gmail.com>
AuthorDate: Fri Jan 26 14:32:31 2018 -0600
Add the user tag to create users declaratively
The `user` tag allows tests to declaratively request user creation. It can be used like so:
@tag user: [name: "username", password: "secret", roles: ["a_role"]]
test "this is a user test", context do
sess = Couch.login(context[:userinfo])
resp = Couch.Session.get(sess, "/_session")
assert resp.body["ok"]
assert resp.body["userCtx"]["name"] == "username"
assert Couch.Session.logout(sess).body["ok"]
end
This also demonstrates how to use the recently added Couch.Session support
for handling user sessions.
Tests that specify a `user` tag will have a `:user` key in the context
that is the current user doc as a map as well as a `:userinfo` key that
is the `username:password` that can be passed directly to
`Couch.login/1`.
---
test/elixir/test/test_helper.exs | 57 +++++++++++++++++++++++++++++++++++++++-
1 file changed, 56 insertions(+), 1 deletion(-)
diff --git a/test/elixir/test/test_helper.exs b/test/elixir/test/test_helper.exs
index 9baf204..f84e1a0 100644
--- a/test/elixir/test/test_helper.exs
+++ b/test/elixir/test/test_helper.exs
@@ -12,7 +12,8 @@ defmodule CouchTestCase do
setup context do
setup_funs = [
&set_db_context/1,
- &set_config_context/1
+ &set_config_context/1,
+ &set_user_context/1
]
context = Enum.reduce(setup_funs, context, fn setup_fun, acc ->
setup_fun.(acc)
@@ -55,6 +56,23 @@ defmodule CouchTestCase do
context
end
+ def set_user_context(context) do
+ case Map.get(context, :user) do
+ nil ->
+ context
+ user when is_list(user) ->
+ user = create_user(user)
+ on_exit(fn ->
+ query = %{:rev => user["_rev"]}
+ resp = Couch.delete("/_users/#{user["_id"]}", query: query)
+ assert HTTPotion.Response.success? resp
+ end)
+ context = Map.put(context, :user, user)
+ userinfo = user["name"] <> ":" <> user["password"]
+ Map.put(context, :userinfo, userinfo)
+ end
+ end
+
def random_db_name do
random_db_name("random-test-db")
end
@@ -97,6 +115,43 @@ defmodule CouchTestCase do
end)
end
+ def create_user(user) do
+ required = [:name, :password, :roles]
+ Enum.each(required, fn key ->
+ assert Keyword.has_key?(user, key), "User missing key: #{key}"
+ end)
+
+ name = Keyword.get(user, :name)
+ password = Keyword.get(user, :password)
+ roles = Keyword.get(user, :roles)
+
+ assert is_binary(name), "User name must be a string"
+ assert is_binary(password), "User password must be a string"
+ assert is_list(roles), "Roles must be a list of strings"
+ Enum.each(roles, fn role ->
+ assert is_binary(role), "Roles must be a list of strings"
+ end)
+
+ user_doc = %{
+ "_id" => "org.couchdb.user:" <> name,
+ "type" => "user",
+ "name" => name,
+ "roles" => roles,
+ "password" => password
+ }
+ resp = Couch.get("/_users/#{user_doc["_id"]}")
+ user_doc = case resp.status_code do
+ 404 ->
+ user_doc
+ sc when sc >= 200 and sc < 300 ->
+ Map.put(user_doc, "_rev", resp.body["_rev"])
+ end
+ resp = Couch.post("/_users", body: user_doc)
+ assert HTTPotion.Response.success? resp
+ assert resp.body["ok"]
+ Map.put(user_doc, "_rev", resp.body["rev"])
+ end
+
def create_db(db_name) do
resp = Couch.put("/#{db_name}")
assert resp.status_code == 201
[couchdb] 23/31: Add list of test ports status to README
Posted by ch...@apache.org.
This is an automated email from the ASF dual-hosted git repository.
chewbranca pushed a commit to branch elixir-suite
in repository https://gitbox.apache.org/repos/asf/couchdb.git
commit 9d392c7a101caee74977085229e7a28095858382
Author: Russell Branca <ch...@apache.org>
AuthorDate: Fri Dec 15 18:10:46 2017 +0000
Add list of test ports status to README
---
test/elixir/README.md | 92 +++++++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 92 insertions(+)
diff --git a/test/elixir/README.md b/test/elixir/README.md
index a7aedd3..b1b745a 100644
--- a/test/elixir/README.md
+++ b/test/elixir/README.md
@@ -10,3 +10,95 @@ To run the suite:
mix deps.get
mix test --trace
```
+
+# Tests to port
+
+X means done, - means partially done
+
+ - [X] Port all_docs.js
+ - [ ] Port attachment_names.js
+ - [ ] Port attachment_paths.js
+ - [ ] Port attachment_ranges.js
+ - [ ] Port attachments.js
+ - [ ] Port attachments_multipart.js
+ - [ ] Port attachment_views.js
+ - [ ] Port auth_cache.js
+ - [X] Port basics.js
+ - [ ] Port batch_save.js
+ - [ ] Port bulk_docs.js
+ - [X] Port changes.js
+ - [ ] Port coffee.js
+ - [ ] Port compact.js
+ - [X] Port config.js
+ - [ ] Port conflicts.js
+ - [ ] Port cookie_auth.js
+ - [ ] Port copy_doc.js
+ - [ ] Port delayed_commits.js
+ - [ ] Port design_docs.js
+ - [ ] Port design_options.js
+ - [ ] Port design_paths.js
+ - [ ] Port erlang_views.js
+ - [ ] Port etags_head.js
+ - [ ] Port etags_views.js
+ - [ ] Port form_submit.js
+ - [ ] Port http.js
+ - [ ] Port invalid_docids.js
+ - [ ] Port jsonp.js
+ - [ ] Port large_docs.js
+ - [ ] Port list_views.js
+ - [ ] Port lorem_b64.txt
+ - [ ] Port lorem.txt
+ - [ ] Port lots_of_docs.js
+ - [ ] Port method_override.js
+ - [ ] Port multiple_rows.js
+ - [ ] Port proxyauth.js
+ - [ ] Port purge.js
+ - [ ] Port reader_acl.js
+ - [ ] Port recreate_doc.js
+ - [ ] Port reduce_builtin.js
+ - [ ] Port reduce_false.js
+ - [ ] Port reduce_false_temp.js
+ - [X] Port reduce.js
+ - [-] Port replication.js
+ - [ ] Port replicator_db_bad_rep_id.js
+ - [ ] Port replicator_db_by_doc_id.js
+ - [ ] Port replicator_db_compact_rep_db.js
+ - [ ] Port replicator_db_continuous.js
+ - [ ] Port replicator_db_credential_delegation.js
+ - [ ] Port replicator_db_field_validation.js
+ - [ ] Port replicator_db_filtered.js
+ - [ ] Port replicator_db_identical_continuous.js
+ - [ ] Port replicator_db_identical.js
+ - [ ] Port replicator_db_invalid_filter.js
+ - [ ] Port replicator_db_security.js
+ - [ ] Port replicator_db_simple.js
+ - [ ] Port replicator_db_successive.js
+ - [ ] Port replicator_db_survives.js
+ - [ ] Port replicator_db_swap_rep_db.js
+ - [ ] Port replicator_db_update_security.js
+ - [ ] Port replicator_db_user_ctx.js
+ - [ ] Port replicator_db_write_auth.js
+ - [ ] Port rev_stemming.js
+ - [X] Port rewrite.js
+ - [ ] Port rewrite_js.js
+ - [ ] Port security_validation.js
+ - [ ] Port show_documents.js
+ - [ ] Port stats.js
+ - [ ] Port update_documents.js
+ - [ ] Port users_db.js
+ - [ ] Port users_db_security.js
+ - [ ] Port utf8.js
+ - [X] Port uuids.js
+ - [X] Port view_collation.js
+ - [ ] Port view_collation_raw.js
+ - [ ] Port view_compaction.js
+ - [ ] Port view_conflicts.js
+ - [ ] Port view_errors.js
+ - [ ] Port view_include_docs.js
+ - [ ] Port view_multi_key_all_docs.js
+ - [ ] Port view_multi_key_design.js
+ - [ ] Port view_multi_key_temp.js
+ - [ ] Port view_offsets.js
+ - [ ] Port view_pagination.js
+ - [ ] Port view_sandboxing.js
+ - [ ] Port view_update_seq.js
[couchdb] 26/31: Fix bug when canceling replications
Posted by ch...@apache.org.
commit 35dcf854a8faf4ecbbbfaab36ce6b91c0c6b74f3
Author: Paul J. Davis <pa...@gmail.com>
AuthorDate: Thu Jan 25 16:49:10 2018 -0600
Fix bug when canceling replications
---
src/chttpd/src/chttpd_misc.erl | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/src/chttpd/src/chttpd_misc.erl b/src/chttpd/src/chttpd_misc.erl
index 253da23..e3128c3 100644
--- a/src/chttpd/src/chttpd_misc.erl
+++ b/src/chttpd/src/chttpd_misc.erl
@@ -240,7 +240,9 @@ cancel_replication(PostBody, Ctx) ->
{error, badrpc};
Else ->
% Unclear what to do here -- pick the first error?
- hd(Else)
+ % Except try ignoring any {error, not_found} responses
+ % because we'll always get two of those
+ hd(Else -- [{error, not_found}])
end
end.
[couchdb] 07/31: Port uuids.js to Elixir
Posted by ch...@apache.org.
commit fba7e88ab929c53e921d77c76a62de417bf92389
Author: Paul J. Davis <pa...@gmail.com>
AuthorDate: Wed Dec 6 18:29:21 2017 -0600
Port uuids.js to Elixir
---
elixir_suite/test/uuids_test.exs | 100 +++++++++++++++++++++++++++++++++++++++
1 file changed, 100 insertions(+)
diff --git a/elixir_suite/test/uuids_test.exs b/elixir_suite/test/uuids_test.exs
new file mode 100644
index 0000000..3e18e4a
--- /dev/null
+++ b/elixir_suite/test/uuids_test.exs
@@ -0,0 +1,100 @@
+defmodule UUIDsTest do
+ use CouchTestCase
+
+ @moduletag :config
+
+ @moduledoc """
+ Test CouchDB UUIDs API
+ This is a port of the uuids.js suite
+ """
+
+ test "cache busting headers are set" do
+ resp = Couch.get("/_uuids")
+ assert resp.status_code == 200
+ assert Regex.match?(~r/no-cache/, resp.headers["Cache-Control"])
+ assert resp.headers["Pragma"] == "no-cache"
+ assert String.length(resp.headers["ETag"]) > 0
+ end
+
+ test "can return single uuid" do
+ resp = Couch.get("/_uuids")
+ assert resp.status_code == 200
+ [uuid1] = resp.body["uuids"]
+
+ resp = Couch.get("/_uuids", query: %{:count => 1})
+ assert resp.status_code == 200
+ [uuid2] = resp.body["uuids"]
+
+ assert uuid1 != uuid2
+ end
+
+ test "no duplicates in 1,000 UUIDs" do
+ resp = Couch.get("/_uuids", query: %{:count => 1000})
+ assert resp.status_code == 200
+ uuids = resp.body["uuids"]
+
+ assert length(Enum.uniq(uuids)) == length(uuids)
+ end
+
+ test "Method Not Allowed error on POST" do
+ resp = Couch.post("/_uuids", query: %{:count => 1000})
+ assert resp.status_code == 405
+ end
+
+ test "Bad Request error when exceeding max UUID count" do
+ resp = Couch.get("/_uuids", query: %{:count => 1001})
+ assert resp.status_code == 400
+ end
+
+ @tag config: [
+ {"uuids", "algorithm", "sequential"}
+ ]
+ test "sequential uuids are sequential" do
+ resp = Couch.get("/_uuids", query: %{:count => 1000})
+ assert resp.status_code == 200
+ [uuid | rest_uuids] = resp.body["uuids"]
+
+ assert String.length(uuid) == 32
+ Enum.reduce(rest_uuids, uuid, fn curr, acc ->
+ assert String.length(curr) == 32
+ assert acc < curr
+ curr
+ end)
+ end
+
+ @tag config: [
+ {"uuids", "algorithm", "utc_random"}
+ ]
+ test "utc_random uuids are roughly random" do
+ resp = Couch.get("/_uuids", query: %{:count => 1000})
+ assert resp.status_code == 200
+ uuids = resp.body["uuids"]
+
+ assert String.length(Enum.at(uuids, 1)) == 32
+
+ # Assert no collisions
+ assert length(Enum.uniq(uuids)) == length(uuids)
+
+ # Assert rough ordering of UUIDs
+ u1 = String.slice(Enum.at(uuids, 1), 0..13)
+ u2 = String.slice(Enum.at(uuids, -1), 0..13)
+ assert u1 < u2
+ end
+
+ @tag config: [
+ {"uuids", "algorithm", "utc_id"},
+ {"uuids", "utc_id_suffix", "frog"}
+ ]
+ test "utc_id uuids are correct" do
+ resp = Couch.get("/_uuids", query: %{:count => 10})
+ assert resp.status_code == 200
+ [uuid | rest_uuids] = resp.body["uuids"]
+
+ Enum.reduce(rest_uuids, uuid, fn curr, acc ->
+ assert String.length(curr) == 14 + String.length("frog")
+ assert String.slice(curr, 14..-1) == "frog"
+ assert curr > acc
+ curr
+ end)
+ end
+end
[couchdb] 09/31: Add simple Makefile for muscle memory
Posted by ch...@apache.org.
commit 754202ea8377d986797220b2ada9442564a45734
Author: Paul J. Davis <pa...@gmail.com>
AuthorDate: Thu Dec 7 10:38:40 2017 -0600
Add simple Makefile for muscle memory
---
elixir_suite/Makefile | 2 ++
1 file changed, 2 insertions(+)
diff --git a/elixir_suite/Makefile b/elixir_suite/Makefile
new file mode 100644
index 0000000..bfcf017
--- /dev/null
+++ b/elixir_suite/Makefile
@@ -0,0 +1,2 @@
+all:
+ mix test --trace
[couchdb] 21/31: Move elixir_suite to test/elixir
Posted by ch...@apache.org.
commit 0b48df67520a023c95d6f229534378664601c241
Author: Paul J. Davis <pa...@gmail.com>
AuthorDate: Fri Dec 15 11:52:36 2017 -0600
Move elixir_suite to test/elixir
---
{elixir_suite => test/elixir}/.gitignore | 0
{elixir_suite => test/elixir}/Makefile | 0
{elixir_suite => test/elixir}/README.md | 0
{elixir_suite => test/elixir}/config/config.exs | 0
{elixir_suite => test/elixir}/config/test.exs | 0
{elixir_suite => test/elixir}/lib/couch.ex | 0
{elixir_suite => test/elixir}/mix.exs | 0
{elixir_suite => test/elixir}/mix.lock | 0
{elixir_suite => test/elixir}/test/all_docs_test.exs | 0
{elixir_suite => test/elixir}/test/basics_test.exs | 0
{elixir_suite => test/elixir}/test/config_test.exs | 0
{elixir_suite => test/elixir}/test/reduce_test.exs | 0
{elixir_suite => test/elixir}/test/rewrite_test.exs | 0
{elixir_suite => test/elixir}/test/test_helper.exs | 0
{elixir_suite => test/elixir}/test/uuids_test.exs | 0
{elixir_suite => test/elixir}/test/view_collation_test.exs | 0
16 files changed, 0 insertions(+), 0 deletions(-)
diff --git a/elixir_suite/.gitignore b/test/elixir/.gitignore
similarity index 100%
rename from elixir_suite/.gitignore
rename to test/elixir/.gitignore
diff --git a/elixir_suite/Makefile b/test/elixir/Makefile
similarity index 100%
rename from elixir_suite/Makefile
rename to test/elixir/Makefile
diff --git a/elixir_suite/README.md b/test/elixir/README.md
similarity index 100%
rename from elixir_suite/README.md
rename to test/elixir/README.md
diff --git a/elixir_suite/config/config.exs b/test/elixir/config/config.exs
similarity index 100%
rename from elixir_suite/config/config.exs
rename to test/elixir/config/config.exs
diff --git a/elixir_suite/config/test.exs b/test/elixir/config/test.exs
similarity index 100%
rename from elixir_suite/config/test.exs
rename to test/elixir/config/test.exs
diff --git a/elixir_suite/lib/couch.ex b/test/elixir/lib/couch.ex
similarity index 100%
rename from elixir_suite/lib/couch.ex
rename to test/elixir/lib/couch.ex
diff --git a/elixir_suite/mix.exs b/test/elixir/mix.exs
similarity index 100%
rename from elixir_suite/mix.exs
rename to test/elixir/mix.exs
diff --git a/elixir_suite/mix.lock b/test/elixir/mix.lock
similarity index 100%
rename from elixir_suite/mix.lock
rename to test/elixir/mix.lock
diff --git a/elixir_suite/test/all_docs_test.exs b/test/elixir/test/all_docs_test.exs
similarity index 100%
rename from elixir_suite/test/all_docs_test.exs
rename to test/elixir/test/all_docs_test.exs
diff --git a/elixir_suite/test/basics_test.exs b/test/elixir/test/basics_test.exs
similarity index 100%
rename from elixir_suite/test/basics_test.exs
rename to test/elixir/test/basics_test.exs
diff --git a/elixir_suite/test/config_test.exs b/test/elixir/test/config_test.exs
similarity index 100%
rename from elixir_suite/test/config_test.exs
rename to test/elixir/test/config_test.exs
diff --git a/elixir_suite/test/reduce_test.exs b/test/elixir/test/reduce_test.exs
similarity index 100%
rename from elixir_suite/test/reduce_test.exs
rename to test/elixir/test/reduce_test.exs
diff --git a/elixir_suite/test/rewrite_test.exs b/test/elixir/test/rewrite_test.exs
similarity index 100%
rename from elixir_suite/test/rewrite_test.exs
rename to test/elixir/test/rewrite_test.exs
diff --git a/elixir_suite/test/test_helper.exs b/test/elixir/test/test_helper.exs
similarity index 100%
rename from elixir_suite/test/test_helper.exs
rename to test/elixir/test/test_helper.exs
diff --git a/elixir_suite/test/uuids_test.exs b/test/elixir/test/uuids_test.exs
similarity index 100%
rename from elixir_suite/test/uuids_test.exs
rename to test/elixir/test/uuids_test.exs
diff --git a/elixir_suite/test/view_collation_test.exs b/test/elixir/test/view_collation_test.exs
similarity index 100%
rename from elixir_suite/test/view_collation_test.exs
rename to test/elixir/test/view_collation_test.exs
[couchdb] 18/31: Prefer ?w=3 over hacky sleeps
Posted by ch...@apache.org.
commit 91687ee18f70ed86735f8b7cc62076f3b5f980cd
Author: Russell Branca <ch...@apache.org>
AuthorDate: Fri Dec 8 21:41:23 2017 +0000
Prefer ?w=3 over hacky sleeps
---
elixir_suite/test/reduce_test.exs | 12 ++++--------
1 file changed, 4 insertions(+), 8 deletions(-)
diff --git a/elixir_suite/test/reduce_test.exs b/elixir_suite/test/reduce_test.exs
index a01c997..9a49bfa 100644
--- a/elixir_suite/test/reduce_test.exs
+++ b/elixir_suite/test/reduce_test.exs
@@ -38,8 +38,7 @@ function (doc) {
assert Couch.put("/#{db_name}/_design/foo", [body: red_doc]).body["ok"]
docs = make_docs(1, num_docs)
- assert Couch.post("/#{db_name}/_bulk_docs", [body: %{:docs => docs}]).status_code == 201
- :timer.sleep(200) # *sigh*
+ assert Couch.post("/#{db_name}/_bulk_docs", [body: %{:docs => docs}, query: %{w: 3}]).status_code == 201
rows = Couch.get(view_url).body["rows"]
assert hd(rows)["value"] == 2 * summate(num_docs)
@@ -96,8 +95,7 @@ function (doc) {
%{keys: ["d", "b"]},
%{keys: ["d", "c"]}
]
- assert Couch.post("/#{db_name}/_bulk_docs", [body: %{docs: docs}]).status_code == 201
- :timer.sleep(20) # *sigh*
+ assert Couch.post("/#{db_name}/_bulk_docs", [body: %{docs: docs}, query: %{w: 3}]).status_code == 201
total_docs = 1 + ((i - 1) * 10 * 11) + ((j + 1) * 11);
assert Couch.get("/#{db_name}").body["doc_count"] == total_docs
end
@@ -195,9 +193,8 @@ function (keys, values, rereduce) {
Enum.each(1..10, fn _ ->
docs = for i <- 1..10, do: %{val: i * 10}
- assert Couch.post("/#{db_name}/_bulk_docs", [body: %{:docs => docs}]).status_code == 201
+ assert Couch.post("/#{db_name}/_bulk_docs", [body: %{:docs => docs}, query: %{w: 3}]).status_code == 201
end)
- :timer.sleep(200) # *sigh*
rows = Couch.get(view_url).body["rows"]
assert_in_delta hd(rows)["value"]["stdDeviation"], 28.722813232690143, 0.0000000001
@@ -226,8 +223,7 @@ function (keys, values, rereduce) {
assert Couch.put("/#{db_name}/_design/foo", [body: ddoc]).body["ok"]
docs = for i <- 0..1122, do: %{_id: Integer.to_string(i), int: i}
- assert Couch.post("/#{db_name}/_bulk_docs", [body: %{:docs => docs}]).status_code == 201
- :timer.sleep(200) # *sigh*
+ assert Couch.post("/#{db_name}/_bulk_docs", [body: %{:docs => docs}, query: %{w: 3}]).status_code == 201
rand_val = fn -> :rand.uniform(100000000) end
[couchdb] 22/31: Integrate Elixir suite with `make`
Posted by ch...@apache.org.
commit 17f8de1fd3154cc0005f47ad929edcf2350b99ff
Author: Paul J. Davis <pa...@gmail.com>
AuthorDate: Fri Dec 15 12:08:33 2017 -0600
Integrate Elixir suite with `make`
---
Makefile | 5 +++++
dev/run | 25 ++++++++++++++++---------
test/elixir/run | 4 ++++
3 files changed, 25 insertions(+), 9 deletions(-)
diff --git a/Makefile b/Makefile
index 1e0ea82..db54294 100644
--- a/Makefile
+++ b/Makefile
@@ -137,6 +137,11 @@ soak-eunit: couch
@$(REBAR) setup_eunit 2> /dev/null
while [ $$? -eq 0 ] ; do $(REBAR) -r eunit $(EUNIT_OPTS) ; done
+.PHONY: elixir
+elixir:
+ @rm -rf dev/lib
+ @dev/run -a adm:pass --no-eval test/elixir/run
+
.PHONY: javascript
# target: javascript - Run JavaScript test suites or specific ones defined by suites option
javascript: devclean
diff --git a/dev/run b/dev/run
index a5d8fde..5ab895e 100755
--- a/dev/run
+++ b/dev/run
@@ -132,6 +132,8 @@ def setup_argparse():
help='Optional key=val config overrides. Can be repeated')
parser.add_option('--degrade-cluster', dest="degrade_cluster",type=int, default=0,
help='The number of nodes that should be stopped after cluster config')
+ parser.add_option('--no-eval', action='store_true', default=False,
+ help='Do not eval subcommand output')
return parser.parse_args()
@@ -153,6 +155,7 @@ def setup_context(opts, args):
'haproxy': opts.haproxy,
'haproxy_port': opts.haproxy_port,
'config_overrides': opts.config_overrides,
+ 'no_eval': opts.no_eval,
'reset_logs': True,
'procs': []}
@@ -569,15 +572,19 @@ def join(ctx, lead_port, user, password):
@log('Exec command {cmd}')
def run_command(ctx, cmd):
- p = sp.Popen(cmd, shell=True, stdout=sp.PIPE, stderr=sys.stderr)
- while True:
- line = p.stdout.readline()
- if not line:
- break
- eval(line)
- p.wait()
- exit(p.returncode)
-
+ if ctx['no_eval']:
+ p = sp.Popen(cmd, shell=True)
+ p.wait()
+ exit(p.returncode)
+ else:
+ p = sp.Popen(cmd, shell=True, stdout=sp.PIPE, stderr=sys.stderr)
+ while True:
+ line = p.stdout.readline()
+ if not line:
+ break
+ eval(line)
+ p.wait()
+ exit(p.returncode)
@log('Restart all nodes')
def reboot_nodes(ctx):
diff --git a/test/elixir/run b/test/elixir/run
new file mode 100755
index 0000000..66a5947
--- /dev/null
+++ b/test/elixir/run
@@ -0,0 +1,4 @@
+#!/bin/bash -e
+cd "$(dirname "$0")"
+mix deps.get
+mix test --trace
[couchdb] 10/31: Simple .gitignore for sanity
Posted by ch...@apache.org.
commit cb1e1d1ca15e29165d044abadbd429ed919a550d
Author: Paul J. Davis <pa...@gmail.com>
AuthorDate: Thu Dec 7 10:39:04 2017 -0600
Simple .gitignore for sanity
---
elixir_suite/.gitignore | 2 ++
1 file changed, 2 insertions(+)
diff --git a/elixir_suite/.gitignore b/elixir_suite/.gitignore
new file mode 100644
index 0000000..2e39def
--- /dev/null
+++ b/elixir_suite/.gitignore
@@ -0,0 +1,2 @@
+_build/
+deps/
[couchdb] 15/31: Port view_collation.js to view_collation_test.exs
Posted by ch...@apache.org.
commit abdd7d608aca6268f3a6ac1d795d311cc4a1975e
Author: Paul J. Davis <pa...@gmail.com>
AuthorDate: Thu Dec 7 12:06:59 2017 -0600
Port view_collation.js to view_collation_test.exs
---
elixir_suite/test/view_collation_test.exs | 133 ++++++++++++++++++++++++++++++
1 file changed, 133 insertions(+)
diff --git a/elixir_suite/test/view_collation_test.exs b/elixir_suite/test/view_collation_test.exs
new file mode 100644
index 0000000..10aec2f
--- /dev/null
+++ b/elixir_suite/test/view_collation_test.exs
@@ -0,0 +1,133 @@
+defmodule ViewCollationTest do
+ use CouchTestCase
+
+ @moduledoc """
+ Test CouchDB View Collation Behavior
+ This is a port of the view_collation.js suite
+ """
+
+ @values [
+ # Special values sort before all other types
+ :null,
+ :false,
+ :true,
+
+ # Then numbers
+ 1,
+ 2,
+ 3.0,
+ 4,
+
+ # Then text, case sensitive
+ "a",
+ "A",
+ "aa",
+ "b",
+ "B",
+ "ba",
+ "bb",
+
+ # Then arrays, compared element by element until different.
+ # Longer arrays sort after their prefixes
+ ["a"],
+ ["b"],
+ ["b", "c"],
+ ["b", "c", "a"],
+ ["b", "d"],
+ ["b", "d", "e"],
+
+ # Then objects, compared each key value in the list until different.
+ # Larger objects sort after their subset objects
+ {[a: 1]},
+ {[a: 2]},
+ {[b: 1]},
+ {[b: 2]},
+ # Member order does matter for collation
+ {[b: 2, a: 1]},
+ {[b: 2, c: 2]}
+ ]
+
+ setup_all do
+ db_name = random_db_name()
+ {:ok, _} = create_db(db_name)
+ on_exit(fn -> delete_db(db_name) end)
+
+ {docs, _} = Enum.flat_map_reduce(@values, 1, fn value, idx ->
+ doc = %{:_id => Integer.to_string(idx), :foo => value}
+ {[doc], idx + 1}
+ end)
+ resp = Couch.post("/#{db_name}/_bulk_docs", body: %{:docs => docs})
+ Enum.each(resp.body, &(assert &1["ok"]))
+
+ map_fun = "function(doc) { emit(doc.foo, null); }"
+ map_doc = %{:views => %{:foo => %{:map => map_fun}}}
+ resp = Couch.put("/#{db_name}/_design/foo", body: map_doc)
+ assert resp.body["ok"]
+
+ {:ok, [db_name: db_name]}
+ end
+
+ test "ascending collation order", context do
+ resp = Couch.get(url(context))
+ pairs = Enum.zip(resp.body["rows"], @values)
+ Enum.each(pairs, fn {row, value} ->
+ assert row["key"] == convert(value)
+ end)
+ end
+
+ test "descending collation order", context do
+ resp = Couch.get(url(context), query: %{"descending" => "true"})
+ pairs = Enum.zip(resp.body["rows"], Enum.reverse(@values))
+ Enum.each(pairs, fn {row, value} ->
+ assert row["key"] == convert(value)
+ end)
+ end
+
+ test "key query option", context do
+ Enum.each(@values, fn value ->
+ resp = Couch.get(url(context), query: %{:key => :jiffy.encode(value)})
+ assert length(resp.body["rows"]) == 1
+ assert Enum.at(resp.body["rows"], 0)["key"] == convert(value)
+ end)
+ end
+
+ test "inclusive_end=true", context do
+ query = %{:endkey => :jiffy.encode("b"), :inclusive_end => true}
+ resp = Couch.get(url(context), query: query)
+ assert Enum.at(resp.body["rows"], -1)["key"] == "b"
+
+ query = Map.put(query, :descending, true)
+ resp = Couch.get(url(context), query: query)
+ assert Enum.at(resp.body["rows"], -1)["key"] == "b"
+ end
+
+ test "inclusive_end=false", context do
+ query = %{:endkey => :jiffy.encode("b"), :inclusive_end => false}
+ resp = Couch.get(url(context), query: query)
+ assert Enum.at(resp.body["rows"], -1)["key"] == "aa"
+
+ query = Map.put(query, :descending, true)
+ resp = Couch.get(url(context), query: query)
+ assert Enum.at(resp.body["rows"], -1)["key"] == "B"
+
+ query = %{
+ :endkey => :jiffy.encode("b"),
+ :endkey_docid => 11,
+ :inclusive_end => false
+ }
+ resp = Couch.get(url(context), query: query)
+ assert Enum.at(resp.body["rows"], -1)["key"] == "aa"
+
+ query = Map.put(query, :endkey_docid, 12)
+ resp = Couch.get(url(context), query: query)
+ assert Enum.at(resp.body["rows"], -1)["key"] == "b"
+ end
+
+ def url(context) do
+ "/#{context[:db_name]}/_design/foo/_view/foo"
+ end
+
+ def convert(value) do
+ :jiffy.decode(:jiffy.encode(value), [:return_maps])
+ end
+end
\ No newline at end of file
[couchdb] 08/31: Remove module attribute
Posted by ch...@apache.org.
commit a8e7a7ccfd711a5eec72245ffc69ce337b08daa9
Author: Paul J. Davis <pa...@gmail.com>
AuthorDate: Thu Dec 7 10:38:26 2017 -0600
Remove module attribute
---
elixir_suite/test/uuids_test.exs | 2 --
1 file changed, 2 deletions(-)
diff --git a/elixir_suite/test/uuids_test.exs b/elixir_suite/test/uuids_test.exs
index 3e18e4a..8a9d7f4 100644
--- a/elixir_suite/test/uuids_test.exs
+++ b/elixir_suite/test/uuids_test.exs
@@ -1,8 +1,6 @@
defmodule UUIDsTest do
use CouchTestCase
- @moduletag :config
-
@moduledoc """
Test CouchDB UUIDs API
This is a port of the uuids.js suite
[couchdb] 04/31: Port all_docs.js tests
Posted by ch...@apache.org.
commit b3ca83c439af31399f8c0a39c7f22fbb9203000c
Author: Russell Branca <ch...@apache.org>
AuthorDate: Mon Dec 4 23:26:05 2017 +0000
Port all_docs.js tests
---
elixir_suite/test/all_docs_test.exs | 132 ++++++++++++++++++++++++++++++++++++
1 file changed, 132 insertions(+)
diff --git a/elixir_suite/test/all_docs_test.exs b/elixir_suite/test/all_docs_test.exs
new file mode 100644
index 0000000..2fd608d
--- /dev/null
+++ b/elixir_suite/test/all_docs_test.exs
@@ -0,0 +1,132 @@
+defmodule AllDocsTest do
+ use CouchTestCase
+
+ @moduletag :all_docs
+
+ @moduledoc """
+ Test CouchDB _all_docs
+ This is a port of the all_docs.js suite
+ """
+
+ # TODO: do we need to bring this in?
+ # var db = new CouchDB(db_name, {"X-Couch-Full-Commit":"false"}, {w: 3});
+
+ @tag :with_db
+ test "All Docs tests", context do
+ db_name = context[:db_name]
+ resp1 = Couch.post("/#{db_name}", [body: %{:_id => "0", :a => 1, :b => 1}]).body
+ resp2 = Couch.post("/#{db_name}", [body: %{:_id => "3", :a => 4, :b => 16}]).body
+ resp3 = Couch.post("/#{db_name}", [body: %{:_id => "1", :a => 2, :b => 4}]).body
+ resp4 = Couch.post("/#{db_name}", [body: %{:_id => "2", :a => 3, :b => 9}]).body
+
+ assert resp1["ok"]
+ assert resp2["ok"]
+ assert resp3["ok"]
+ assert resp4["ok"]
+
+ revs = [resp1["rev"], resp2["rev"], resp3["rev"], resp4["rev"]]
+
+ # Check _all_docs
+ resp = Couch.get("/#{db_name}/_all_docs").body
+ rows = resp["rows"]
+ assert resp["total_rows"] == length(rows)
+ Enum.each(rows, fn row ->
+ assert row["id"] >= "0" && row["id"] <= "4"
+ end)
+
+ # Check _all_docs with descending=true
+ resp = Couch.get("/#{db_name}/_all_docs", query: %{:descending => true}).body
+ rows = resp["rows"]
+ assert resp["total_rows"] == length(rows)
+
+ # Check _all_docs offset
+ resp = Couch.get("/#{db_name}/_all_docs", query: %{:startkey => "\"2\""}).body
+ assert resp["offset"] == 2
+
+ # Confirm that queries may assume raw collation
+ resp = Couch.get("/#{db_name}/_all_docs", query: %{
+ :startkey => "\"org.couchdb.user:\"",
+ :endkey => "\"org.couchdb.user;\""
+ })
+ assert length(resp.body["rows"]) == 0
+
+ # Check that all docs show up in the changes feed; order can vary
+ resp = Couch.get("/#{db_name}/_changes").body
+ Enum.each(resp["results"], fn row ->
+ assert Enum.member?(revs, hd(row["changes"])["rev"]), "doc #{row["id"]} should be in changes"
+ end)
+
+ # Check that deletions also show up right
+ doc1 = Couch.get("/#{db_name}/1").body
+ assert Couch.delete("/#{db_name}/1", query: %{:rev => doc1["_rev"]}).body["ok"]
+ changes = Couch.get("/#{db_name}/_changes").body["results"]
+ assert length(changes) == 4
+ deleted = Enum.filter(changes, fn row -> row["deleted"] end)
+ assert length(deleted) == 1
+ assert hd(deleted)["id"] == "1"
+
+ # (remember old seq)
+ orig_doc = Enum.find(changes, fn row -> row["id"] == "3" end)
+ # Perform an update
+ doc3 = Couch.get("/#{db_name}/3").body
+ doc3 = Map.put(doc3, :updated, "totally")
+ assert Couch.put("/#{db_name}/3", body: doc3).body["ok"]
+
+ # The update should make doc id 3 have another seq num (not nec. higher or the last though)
+ changes = Couch.get("/#{db_name}/_changes").body["results"]
+ assert length(changes) == 4
+ updated_doc = Enum.find(changes, fn row -> row["id"] == "3" end)
+ assert orig_doc["seq"] != updated_doc["seq"], "seq num should be different"
+
+ # Ok, now let's see what happens with include docs
+ changes = Couch.get("/#{db_name}/_changes", query: %{:include_docs => true}).body["results"]
+ assert length(changes) == 4
+ updated_doc = Enum.find(changes, fn row -> row["id"] == doc3["_id"] end)
+ assert updated_doc["doc"]["updated"] == "totally"
+
+ deleted_doc = Enum.find(changes, fn row -> row["deleted"] end)
+ assert deleted_doc["doc"]["_deleted"]
+
+ # Test _all_docs with keys
+ rows = Couch.post("/#{db_name}/_all_docs", query: %{:include_docs => true}, body: %{:keys => ["1"]}).body["rows"]
+ row = hd(rows)
+ assert length(rows) == 1
+ assert row["key"] == "1"
+ assert row["id"] == "1"
+ assert row["value"]["deleted"]
+ assert row["doc"] == :null
+
+ # Add conflicts
+ conflicted_doc1 = %{:_id => "3", :_rev => "2-aa01552213fafa022e6167113ed01087", :value => "X"}
+ conflicted_doc2 = %{:_id => "3", :_rev => "2-ff01552213fafa022e6167113ed01087", :value => "Z"}
+ assert Couch.put("/#{db_name}/3", query: %{:new_edits => false}, body: conflicted_doc1).body["ok"]
+ assert Couch.put("/#{db_name}/3", query: %{:new_edits => false}, body: conflicted_doc2).body["ok"]
+
+ win_rev = Couch.get("/#{db_name}/3").body
+ changes = Couch.get("/#{db_name}/_changes", query: %{:include_docs => true, :conflicts => true, :style => "all_docs"}).body["results"]
+
+ doc3 = Enum.find(changes, fn row -> row["id"] == "3" end)
+ assert doc3["id"] == "3"
+ assert length(doc3["changes"]) == 3
+ assert win_rev["_rev"] == hd(doc3["changes"])["rev"]
+ assert is_list(doc3["doc"]["_conflicts"])
+ assert length(doc3["doc"]["_conflicts"]) == 2
+
+ rows = Couch.get("/#{db_name}/_all_docs", query: %{:include_docs => true, :conflicts => true}).body["rows"]
+ assert length(rows) == 3
+ change = hd(tl(tl(rows)))
+ assert change["key"] == "3"
+ assert change["id"] == "3"
+ assert change["value"]["rev"] == win_rev["_rev"]
+ assert change["doc"]["_rev"] == win_rev["_rev"]
+ assert change["doc"]["_id"] == "3"
+ assert is_list(change["doc"]["_conflicts"])
+ assert length(change["doc"]["_conflicts"]) == 2
+
+ # Test that _all_docs collates sanely
+ assert Couch.post("/#{db_name}", body: %{:_id => "Z", :foo => "Z"}).body["ok"]
+ assert Couch.post("/#{db_name}", body: %{:_id => "a", :foo => "a"}).body["ok"]
+ rows = Couch.get("/#{db_name}/_all_docs", query: %{:startkey => "\"Z\"", :endkey => "\"Z\""}).body["rows"]
+ assert length(rows) == 1
+ end
+end
[couchdb] 13/31: Update the context in place for setup
Posted by ch...@apache.org.
commit 42d1dcaad1570c6f5101fa8ddac399f499d55e7c
Author: Paul J. Davis <pa...@gmail.com>
AuthorDate: Thu Dec 7 11:38:38 2017 -0600
Update the context in place for setup
It turns out that with multiple setup functions, the same context is piped
through each definition. Thus we always want to update the context in place
rather than create fresh contexts, which could discard settings made in
previous setup or setup_all invocations.
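To illustrate the pitfall (a standalone sketch, not from the commit): the setup functions are folded over one shared context, so each must return the accumulated map it received:

```elixir
# Sketch: setup functions are reduced in sequence over a shared context.
setup_funs = [
  fn ctx -> Map.put(ctx, :db_name, "db1") end,
  # Returning a fresh map here instead of updating ctx would drop
  # :db_name set by the previous function; Map.put preserves it.
  fn ctx -> Map.put(ctx, :config, [{"uuids", "algorithm", "sequential"}]) end
]
context = Enum.reduce(setup_funs, %{}, fn setup_fun, acc -> setup_fun.(acc) end)
# context now holds both :db_name and :config
```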
---
elixir_suite/test/test_helper.exs | 43 ++++++++++++++++++++++-----------------
1 file changed, 24 insertions(+), 19 deletions(-)
diff --git a/elixir_suite/test/test_helper.exs b/elixir_suite/test/test_helper.exs
index ecd88e5..cef7d13 100644
--- a/elixir_suite/test/test_helper.exs
+++ b/elixir_suite/test/test_helper.exs
@@ -15,31 +15,36 @@ defmodule CouchTestCase do
use ExUnit.Case
setup context do
- {:ok, db_context} = set_db_context(context)
- {:ok, cfg_context} = set_config_context(context)
- {:ok, db_context ++ cfg_context}
+ setup_funs = [
+ &set_db_context/1,
+ &set_config_context/1
+ ]
+ context = Enum.reduce(setup_funs, context, fn setup_fun, acc ->
+ setup_fun.(acc)
+ end)
+ {:ok, context}
end
def set_db_context(context) do
- db_name = if context[:with_db] != nil or context[:with_db_name] != nil do
- if context[:with_db] != nil and context[:with_db] != true do
- context[:with_db]
- else
- case context[:with_db_name] do
- nil -> random_db_name()
- true -> random_db_name()
- name -> name
- end
- end
+ context = case context do
+ %{:with_db_name => true} ->
+ Map.put(context, :db_name, random_db_name())
+ %{:with_db_name => db_name} when is_binary(db_name) ->
+ Map.put(context, :db_name, db_name)
+ %{:with_db => true} ->
+ Map.put(context, :db_name, random_db_name())
+ %{:with_db => db_name} when is_binary(db_name) ->
+ Map.put(context, :db_name, db_name)
+ _ ->
+ context
end
- if context[:with_db] != nil do
- {:ok, _} = create_db(db_name)
-
- on_exit(fn -> delete_db(db_name) end)
+ if Map.has_key? context, :with_db do
+ {:ok, _} = create_db(context[:db_name])
+ on_exit(fn -> delete_db(context[:db_name]) end)
end
- {:ok, db_name: db_name}
+ context
end
def set_config_context(context) do
@@ -48,7 +53,7 @@ defmodule CouchTestCase do
set_config(cfg)
end)
end
- {:ok, []}
+ context
end
def random_db_name do
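The pattern this commit adopts can be sketched outside of ExUnit. A minimal standalone example (the `SetupPipeline` module and the `add_name`/`add_cfg` functions are hypothetical, invented here for illustration): each setup function receives the accumulated context map and returns an updated copy, so keys added by earlier functions survive into later ones.

```elixir
# Sketch of reducing a context map through a list of setup functions,
# mirroring the Enum.reduce call introduced in the diff above.
defmodule SetupPipeline do
  def run(context, setup_funs) do
    Enum.reduce(setup_funs, context, fn setup_fun, acc -> setup_fun.(acc) end)
  end
end

# Each function updates the map in place rather than returning a new, empty one.
add_name = fn ctx -> Map.put(ctx, :db_name, "db-123") end
add_cfg  = fn ctx -> Map.put(ctx, :config, []) end

SetupPipeline.run(%{user: "admin"}, [add_name, add_cfg])
# => %{user: "admin", db_name: "db-123", config: []}
```

Because each function threads the whole map through, `:user` set before the pipeline is still present at the end, which is exactly the property the old `{:ok, db_context ++ cfg_context}` approach could lose.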
[couchdb] 24/31: Replace header match with regexp
commit dadbf13578a487349815f2fdc101c2fbb53d2685
Author: Paul J. Davis <pa...@gmail.com>
AuthorDate: Fri Dec 15 12:30:22 2017 -0600
Replace header match with regexp
This just makes the Content-Type check slightly more robust.
---
test/elixir/lib/couch.ex | 9 ++++-----
1 file changed, 4 insertions(+), 5 deletions(-)
diff --git a/test/elixir/lib/couch.ex b/test/elixir/lib/couch.ex
index 5119011..8f0aca9 100644
--- a/test/elixir/lib/couch.ex
+++ b/test/elixir/lib/couch.ex
@@ -28,11 +28,10 @@ defmodule Couch do
end
def process_response_body(headers, body) do
- case headers[:'content-type'] do
- "application/json" ->
- body |> IO.iodata_to_binary |> :jiffy.decode([:return_maps])
- _ ->
- process_response_body(body)
+ if String.match?(headers[:"Content-Type"], ~r/application\/json/) do
+ body |> IO.iodata_to_binary |> :jiffy.decode([:return_maps])
+ else
+ process_response_body(body)
end
end
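The motivation for the regex can be seen with a short sketch: real HTTP responses frequently include a charset parameter in the `Content-Type` header, which an exact string comparison rejects while a substring regex accepts.

```elixir
# A Content-Type header as commonly returned by servers.
content_type = "application/json; charset=utf-8"

# Exact match fails because of the charset parameter.
content_type == "application/json"
# => false

# The regex used in the diff matches regardless of trailing parameters.
String.match?(content_type, ~r/application\/json/)
# => true
```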
[couchdb] 12/31: DRY constant definition
commit 005d48b9dd06407a0906ba2d85ad1d9431aae591
Author: Paul J. Davis <pa...@gmail.com>
AuthorDate: Thu Dec 7 10:40:40 2017 -0600
DRY constant definition
---
elixir_suite/test/uuids_test.exs | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/elixir_suite/test/uuids_test.exs b/elixir_suite/test/uuids_test.exs
index 8a9d7f4..563f73b 100644
--- a/elixir_suite/test/uuids_test.exs
+++ b/elixir_suite/test/uuids_test.exs
@@ -79,9 +79,10 @@ defmodule UUIDsTest do
assert u1 < u2
end
+ @utc_id_suffix "frog"
@tag config: [
{"uuids", "algorithm", "utc_id"},
- {"uuids", "utc_id_suffix", "frog"}
+ {"uuids", "utc_id_suffix", @utc_id_suffix}
]
test "utc_id uuids are correct" do
resp = Couch.get("/_uuids", query: %{:count => 10})
@@ -89,8 +90,8 @@ defmodule UUIDsTest do
[uuid | rest_uuids] = resp.body["uuids"]
Enum.reduce(rest_uuids, uuid, fn curr, acc ->
- assert String.length(curr) == 14 + String.length("frog")
- assert String.slice(curr, 14..-1) == "frog"
+ assert String.length(curr) == 14 + String.length(@utc_id_suffix)
+ assert String.slice(curr, 14..-1) == @utc_id_suffix
assert curr > acc
curr
end)
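The DRY change above relies on module attributes, which Elixir expands at compile time. A minimal sketch (the `SuffixExample` module is hypothetical, written here for illustration) shows how defining the suffix once keeps the length and slice checks in sync if the value ever changes:

```elixir
defmodule SuffixExample do
  # Single source of truth for the suffix; both checks below reference it.
  @utc_id_suffix "frog"

  # A utc_id UUID is a 14-character timestamp prefix followed by the suffix.
  def valid?(uuid) do
    String.length(uuid) == 14 + String.length(@utc_id_suffix) and
      String.slice(uuid, 14..-1) == @utc_id_suffix
  end
end

SuffixExample.valid?("1234567890abcdfrog")
# => true
```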