Posted to dev@couchdb.apache.org by Ilya Khlopotov <ii...@apache.org> on 2019/05/22 18:42:03 UTC

Use ExUnit to write unit tests.

Hi everyone,

With the upgrade of the supported Erlang version and the introduction of Elixir into our integration test suite, we have an opportunity to replace the currently used eunit (for new tests only) with the Elixir-based ExUnit.
The eunit testing framework is very hard to maintain. In particular, it has the following problems:
- the process structure is designed in such a way that a failure in the setup or teardown of one test affects the execution environment of subsequent tests, which makes it really hard to locate where a problem comes from
- inline tests in the same module as the functions they test might be skipped
- incorrect usage of ?assert vs ?_assert is not detectable, since it makes tests pass silently
- there is a weird (and hard to debug) interaction when eunit is used in combination with meck
   - https://github.com/eproxus/meck/issues/133#issuecomment-113189678
   - https://github.com/eproxus/meck/issues/61
   - meck:unload() must be used instead of meck:unload(Module)
- teardown is not always run, which affects all subsequent tests
- grouping of tests is tricky
- it is hard to group tests so that individual tests have meaningful descriptions

We believe that with ExUnit we wouldn't have these problems:
- the on_exit callback is reliable in ExUnit
- it is easy to group tests using the `describe` directive
- code generation is trivial, which makes it possible to generate tests from a formal spec (if/when we have one)
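To illustrate the grouping and cleanup points, here is a minimal, self-contained sketch (the module and database names are made up for illustration and are not part of the proposal):

```elixir
# Standalone sketch: save as grouping_sketch.exs and run with `elixir grouping_sketch.exs`.
ExUnit.start(autorun: false)

defmodule GroupingSketchTest do
  use ExUnit.Case

  # `describe` groups related tests and prefixes their names,
  # so a failure reads as "Database CRUD creates a database".
  describe "Database CRUD" do
    setup do
      # on_exit callbacks run reliably, even if the test fails or raises.
      on_exit(fn -> :ok end)
      {:ok, db_name: "db-#{System.unique_integer([:positive])}"}
    end

    test "creates a database", %{db_name: db_name} do
      assert String.starts_with?(db_name, "db-")
    end
  end
end

ExUnit.run()
```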

Here are a few examples:

# Test adapters to test different interfaces using the same test suite

CouchDB has four different interfaces which we need to test. These are:
- chttpd
- couch_httpd
- fabric
- couch_db

There are a bunch of operations which are very similar across these interfaces. The only differences between them are:
- setup/teardown needs a different set of applications
- we need to use different modules to exercise the operations

This problem is solved by using a testing adapter. We would define a common protocol, which we would use for testing.
Then we implement this protocol for every interface we want to cover.
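One way to express such a common protocol is with `defprotocol`/`defimpl`, dispatching on an adapter struct. This is a hypothetical sketch only; the function names and struct fields are illustrative, and the real shape would be settled in the RFC:

```elixir
# Hypothetical sketch of the common testing protocol. The struct fields,
# return shapes, and the stubbed implementation are assumptions for
# illustration, not a settled API.
defprotocol Couch.Test.Adapter do
  @doc "Create a database through the interface under test."
  def create_db(adapter, db_name)

  @doc "Delete a database through the interface under test."
  def delete_db(adapter, db_name)
end

defmodule Couch.Test.Adapter.Clustered do
  # e.g. holds the base URL of the clustered (chttpd) interface
  defstruct base_url: "http://127.0.0.1:5984"
end

defimpl Couch.Test.Adapter, for: Couch.Test.Adapter.Clustered do
  # A real implementation would issue HTTP requests against base_url;
  # this stub only shows where that code would live.
  def create_db(%{base_url: _url}, db_name) do
    {:ok, %{body: %{"ok" => true}, db: db_name}}
  end

  def delete_db(%{base_url: _url}, _db_name), do: :ok
end
```

A test then calls `Couch.Test.Adapter.create_db(adapter, db_name)` and the protocol dispatches to whichever implementation the setup provided.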

```
defmodule Couch.Test.CRUD do
  use ExUnit.Case
  alias Couch.Test.Adapter
  alias Couch.Test.Utils, as: Utils

  alias Couch.Test.Setup

  require Record

  test_groups = [
    "using Clustered API": Adapter.Clustered,
    "using Backdoor API": Adapter.Backdoor,
    "using Fabric API": Adapter.Fabric,
  ]

  for {describe, adapter} <- test_groups do
    describe "Database CRUD #{describe}" do
      @describetag setup: %Setup{}
        |> Setup.Start.new([:chttpd])
        |> Setup.Adapter.new(adapter)
        |> Setup.Admin.new(user: "adm", password: "pass")
        |> Setup.Login.new(user: "adm", password: "pass")
      test "Create", %{setup: setup} do
        db_name = Utils.random_name("db")
        setup_ctx = setup |> Setup.run()
        assert {:ok, resp} = Adapter.create_db(Setup.get(setup_ctx, :adapter), db_name)
        assert resp.body["ok"]
      end
    end
  end
end
```

# Using the same test suite to compare a new implementation of an interface with the old one

Imagine that we are doing a major rewrite of a module which implements the same interface.
How do we verify that both implementations return the same results for the same input?
It is easy in Elixir; here is a sketch:
```
defmodule Couch.Test.Fabric.Rewrite do
  use ExUnit.Case
  alias Couch.Test.Utils, as: Utils

  # we cannot use defrecord here because we need to construct
  # record at compile time
  admin_ctx = {:user_ctx, Utils.erlang_record(
    :user_ctx, "couch/include/couch_db.hrl", roles: ["_admin"])}

  test_cases = [
    "create database": {:create_db, [:db_name, []]},
    "create database as admin": {:create_db, [:db_name, [admin_ctx]]}
  ]
  module_a = :fabric
  module_b = :fabric3

  describe "Test compatibility of '#{module_a}' with '#{module_b}'" do
    for {description, {function, args}} <- test_cases do
      test "#{description}" do
        result_a = unquote(module_a).unquote(function)(unquote_splicing(args))
        result_b = unquote(module_b).unquote(function)(unquote_splicing(args))
        assert result_a == result_b
      end
    end
  end

end
```
As a result we would get the following tests:
```
Couch.Test.Fabric.Rewrite
  * test Test compatibility of 'fabric' with 'fabric3' create database (0.01ms)
  * test Test compatibility of 'fabric' with 'fabric3' create database as admin (0.01ms)
```

The prototype of the integration is in this draft PR: https://github.com/apache/couchdb/pull/2036. I am planning to write a formal RFC after the first round of discussions on the ML.

Best regards,
iilyak

Re: Use ExUnit to write unit tests.

Posted by Paul Davis <pa...@gmail.com>.
On Thu, May 23, 2019 at 11:04 AM Joan Touzet <wo...@apache.org> wrote:
>
> On 2019-05-23 11:15, Paul Davis wrote:
> > I'm pretty happy with the ExUnit we've got going for the HTTP
> > interface and would be an enthusiastic +1 on starting to use it for
> > internals as well.
>
> Wait, where's the full ExUnit implementation for the HTTP interface? Is
> that Ilya's PR, or something that Cloudant runs internally?
>
> If you mean the slow conversion of the JS tests over to Elixir, I wasn't
> aware that these were implemented in ExUnit already. Learn something new
> every day!
>

Just the slow conversion is all I meant. There's no magical HTTP test
suite hiding anywhere. :P

> > The only thing I'd say is that the adapter concept while interesting
> > doesn't feel like it would be that interesting for our particular
> > situation. I could see it being useful for the 5984/5986 distinction
> > since its the same code underneath and we'd only be munging a few
> > differences for testing. However, as Garren points out 5986 is going
> > to disappear one way or another so long term not a huge deal.
>
> +1, the intent was to deprecate 5986 for CouchDB 3.0, and obviously it's
> gone for 4.0.
>
> -Joan
>

Re: Use ExUnit to write unit tests.

Posted by Joan Touzet <wo...@apache.org>.
On 2019-05-23 11:15, Paul Davis wrote:
> I'm pretty happy with the ExUnit we've got going for the HTTP
> interface and would be an enthusiastic +1 on starting to use it for
> internals as well.

Wait, where's the full ExUnit implementation for the HTTP interface? Is
that Ilya's PR, or something that Cloudant runs internally?

If you mean the slow conversion of the JS tests over to Elixir, I wasn't
aware that these were implemented in ExUnit already. Learn something new
every day!

> The only thing I'd say is that the adapter concept while interesting
> doesn't feel like it would be that interesting for our particular
> situation. I could see it being useful for the 5984/5986 distinction
> since its the same code underneath and we'd only be munging a few
> differences for testing. However, as Garren points out 5986 is going
> to disappear one way or another so long term not a huge deal.

+1, the intent was to deprecate 5986 for CouchDB 3.0, and obviously it's
gone for 4.0.

-Joan


Re: Use ExUnit to write unit tests.

Posted by Paul Davis <pa...@gmail.com>.
I'm pretty happy with the ExUnit we've got going for the HTTP
interface and would be an enthusiastic +1 on starting to use it for
internals as well.

The only thing I'd say is that the adapter concept while interesting
doesn't feel like it would be that interesting for our particular
situation. I could see it being useful for the 5984/5986 distinction
since its the same code underneath and we'd only be munging a few
differences for testing. However, as Garren points out 5986 is going
to disappear one way or another so long term not a huge deal.

For testing the HTTP vs the Fabric layer, I'd wager that it'll turn out to
be not very useful. There's a *lot* of code in chttpd that mutates
results returned from fabric functions. Attempting to do meaningful
tests using logic based on adapters seems to me like it means that
you'd have to either reimplement chttpd bug-complete in the fabric
adapter, or do some sort of weird inverted-chttpd in the HTTP adapter
so that your tests can make the same assertions. Neither of those
sounds like a good idea to me.

Currently my ideal approach would be to start by implementing a test
suite that covers fabric thoroughly (based on at least code coverage
and possibly unnamed other tooling), and then move to chttpd and do
the same. That way each test suite is focused on a particular layer of
concern rather than trying to formulate a single suite that tests two
different layers.

On Thu, May 23, 2019 at 6:21 AM Ilya Khlopotov <ii...@apache.org> wrote:
>
> Hi Joan,
>
> My answers inline
>
> On 2019/05/22 20:16:18, Joan Touzet <wo...@apache.org> wrote:
> > Hi Ilya, thanks for starting this thread. Comments inline.
> >
> > On 2019-05-22 14:42, Ilya Khlopotov wrote:
> > > The eunit testing framework is very hard to maintain. In particular, it has the following problems:
> > > - the process structure is designed in such a way that failure in setup or teardown of one test affects the execution environment of subsequent tests. Which makes it really hard to locate the place where the problem is coming from.
> >
> > I've personally experienced this a lot when reviewing failed logfiles,
> > trying to find the *first* failure where things go wrong. It's a huge
> > problem.
> >
> > > - inline test in the same module as the functions it tests might be skipped
> > > - incorrect usage of ?assert vs ?_assert is not detectable since it makes tests pass
> > > - there is a weird (and hard to debug) interaction when used in combination with meck
> > >    - https://github.com/eproxus/meck/issues/133#issuecomment-113189678
> > >    - https://github.com/eproxus/meck/issues/61
> > >    - meck:unload() must be used instead of meck:unload(Module)
> >
> > Eep! I wasn't aware of this one. That's ugly.
> >
> > > - teardown is not always run, which affects all subsequent tests
> >
> > Have first-hand experienced this one too.
> >
> > > - grouping of tests is tricky
> > > - it is hard to group tests so individual tests have meaningful descriptions
> > >
> > > We believe that with ExUnit we wouldn't have these problems:
> >
> > Who's "we"?
> Wrong pronoun read it as I.
>
> >
> > > - on_exit function is reliable in ExUnit
> > > - it is easy to group tests using `describe` directive
> > > - code-generation is trivial, which makes it is possible to generate tests from formal spec (if/when we have one)
> >
> > Can you address the timeout question w.r.t. EUnit that I raised
> > elsewhere for cross-platform compatibility testing? I know that
> > Peng ran into the same issues I did here and was looking into extending
> > timeouts.
> >
> > Many of our tests suffer from failures where CI resources are slow and
> > simply fail due to taking longer than expected. Does ExUnit have any
> > additional support here?
> >
> > A suggestion was made (by Jay Doane, I believe, on IRC) that perhaps we
> > simply remove all timeout==failure logic (somehow?) and consider a
> > timeout a hung test run, which would eventually fail the entire suite.
> > This would ultimately lead to better deterministic testing, but we'd
> > probably uncover quite a few bugs in the process (esp. against CouchDB
> > <= 4.0).
>
> There is one easy workaround. We could set trace: true in the config
> because one of the side effects of it is timeout = infinity (see here https://github.com/elixir-lang/elixir/blob/master/lib/ex_unit/lib/ex_unit/runner.ex#L410). However this approach has an important caveat:
> - all tests would be run sequentially which means that we wouldn't be able to parallelize them latter.
>
> > >
> > > Here are a few examples:
> > >
> > > # Test adapters to test different interfaces using same test suite
> >
> > This is neat. I'd like someone else to comment whether this the approach
> > you define will handle the polymorphic interfaces gracefully, or if the
> > effort to parametrise/DRY out the tests will be more difficulty than
> > simply maintaining 4 sets of tests.
> >
> >
> > > # Using same test suite to compare new implementation of the same interface with the old one
> > >
> > > Imagine that we are doing a major rewrite of a module which would implement the same interface.
> >
> > *tries to imagine such a 'hypothetical' rewrite* :)
> > > How do we compare both implementations return the same results for the same input?
> > > It is easy in Elixir, here is a sketch:
> >
> > Sounds interesting. I'd again like an analysis (from someone else) as to
> > how straightforward this would be to implement.
> >
> > -Joan
> >
> >

Re: Use ExUnit to write unit tests.

Posted by Ilya Khlopotov <ii...@apache.org>.
Hi Joan,

My answers inline

On 2019/05/22 20:16:18, Joan Touzet <wo...@apache.org> wrote: 
> Hi Ilya, thanks for starting this thread. Comments inline.
> 
> On 2019-05-22 14:42, Ilya Khlopotov wrote:
> > The eunit testing framework is very hard to maintain. In particular, it has the following problems:
> > - the process structure is designed in such a way that failure in setup or teardown of one test affects the execution environment of subsequent tests. Which makes it really hard to locate the place where the problem is coming from.
> 
> I've personally experienced this a lot when reviewing failed logfiles,
> trying to find the *first* failure where things go wrong. It's a huge
> problem.
> 
> > - inline test in the same module as the functions it tests might be skipped
> > - incorrect usage of ?assert vs ?_assert is not detectable since it makes tests pass 
> > - there is a weird (and hard to debug) interaction when used in combination with meck 
> >    - https://github.com/eproxus/meck/issues/133#issuecomment-113189678
> >    - https://github.com/eproxus/meck/issues/61
> >    - meck:unload() must be used instead of meck:unload(Module)
> 
> Eep! I wasn't aware of this one. That's ugly.
> 
> > - teardown is not always run, which affects all subsequent tests
> 
> Have first-hand experienced this one too.
> 
> > - grouping of tests is tricky
> > - it is hard to group tests so individual tests have meaningful descriptions
> > 
> > We believe that with ExUnit we wouldn't have these problems:
> 
> Who's "we"?
Wrong pronoun; read it as "I".

> 
> > - on_exit function is reliable in ExUnit
> > - it is easy to group tests using `describe` directive
> > - code-generation is trivial, which makes it is possible to generate tests from formal spec (if/when we have one)
> 
> Can you address the timeout question w.r.t. EUnit that I raised
> elsewhere for cross-platform compatibility testing? I know that
> Peng ran into the same issues I did here and was looking into extending
> timeouts.
> 
> Many of our tests suffer from failures where CI resources are slow and
> simply fail due to taking longer than expected. Does ExUnit have any
> additional support here?
> 
> A suggestion was made (by Jay Doane, I believe, on IRC) that perhaps we
> simply remove all timeout==failure logic (somehow?) and consider a
> timeout a hung test run, which would eventually fail the entire suite.
> This would ultimately lead to better deterministic testing, but we'd
> probably uncover quite a few bugs in the process (esp. against CouchDB
> <= 4.0).

There is one easy workaround. We could set trace: true in the config,
because one of its side effects is timeout = infinity (see here https://github.com/elixir-lang/elixir/blob/master/lib/ex_unit/lib/ex_unit/runner.ex#L410). However, this approach has an important caveat:
- all tests would run sequentially, which means that we wouldn't be able to parallelize them later.
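Concretely, the workaround is a one-line change in the ExUnit configuration (a sketch; the file path assumes the conventional test/test_helper.exs location, and any other options we pass today would be kept alongside it):

```elixir
# test/test_helper.exs (sketch)
# trace: true forces sequential execution and, as a side effect,
# disables per-test timeouts (the effective timeout becomes :infinity).
ExUnit.start(trace: true)
```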

> > 
> > Here are a few examples:
> > 
> > # Test adapters to test different interfaces using same test suite
> 
> This is neat. I'd like someone else to comment whether this the approach
> you define will handle the polymorphic interfaces gracefully, or if the
> effort to parametrise/DRY out the tests will be more difficulty than
> simply maintaining 4 sets of tests.
> 
> 
> > # Using same test suite to compare new implementation of the same interface with the old one
> > 
> > Imagine that we are doing a major rewrite of a module which would implement the same interface.
> 
> *tries to imagine such a 'hypothetical' rewrite* :)
> > How do we compare both implementations return the same results for the same input?
> > It is easy in Elixir, here is a sketch:
> 
> Sounds interesting. I'd again like an analysis (from someone else) as to
> how straightforward this would be to implement.
> 
> -Joan
> 
> 

Re: Use ExUnit to write unit tests.

Posted by Garren Smith <ga...@apache.org>.
Hi iilya,

This is really great. I've found writing tests in Elixir to be quite
pleasant, whereas when I use eunit I spend a lot of time shouting at my
screen. How would the current Elixir tests be integrated in? Would we
rewrite them to use your new adapter setup so we can test against the
various interfaces?

I took a quick look at the code and it looks really good; my only concern
is maintaining the three different adapters for testing. That could be a
lot of work. From what I remember, we are looking at removing support for
the backdoor ports (*5986), so maybe we can remove the need to maintain
that one.

Otherwise I'm +1 on this.

Cheers
Garren


On Wed, May 22, 2019 at 10:16 PM Joan Touzet <wo...@apache.org> wrote:

> Hi Ilya, thanks for starting this thread. Comments inline.
>
> On 2019-05-22 14:42, Ilya Khlopotov wrote:
> > The eunit testing framework is very hard to maintain. In particular, it
> has the following problems:
> > - the process structure is designed in such a way that failure in setup
> or teardown of one test affects the execution environment of subsequent
> tests. Which makes it really hard to locate the place where the problem is
> coming from.
>
> I've personally experienced this a lot when reviewing failed logfiles,
> trying to find the *first* failure where things go wrong. It's a huge
> problem.
>
> > - inline test in the same module as the functions it tests might be
> skipped
> > - incorrect usage of ?assert vs ?_assert is not detectable since it
> makes tests pass
> > - there is a weird (and hard to debug) interaction when used in
> combination with meck
> >    - https://github.com/eproxus/meck/issues/133#issuecomment-113189678
> >    - https://github.com/eproxus/meck/issues/61
> >    - meck:unload() must be used instead of meck:unload(Module)
>
> Eep! I wasn't aware of this one. That's ugly.
>
> > - teardown is not always run, which affects all subsequent tests
>
> Have first-hand experienced this one too.
>
> > - grouping of tests is tricky
> > - it is hard to group tests so individual tests have meaningful
> descriptions
> >
> > We believe that with ExUnit we wouldn't have these problems:
>
> Who's "we"?
>
I think I am one of the "we"; I've chatted with iilyak a few times about
finding a better way to write and run tests in CouchDB.


>
> > - on_exit function is reliable in ExUnit
> > - it is easy to group tests using `describe` directive
> > - code-generation is trivial, which makes it is possible to generate
> tests from formal spec (if/when we have one)
>
> Can you address the timeout question w.r.t. EUnit that I raised
> elsewhere for cross-platform compatibility testing? I know that
> Peng ran into the same issues I did here and was looking into extending
> timeouts.
>
> Many of our tests suffer from failures where CI resources are slow and
> simply fail due to taking longer than expected. Does ExUnit have any
> additional support here?
>
> A suggestion was made (by Jay Doane, I believe, on IRC) that perhaps we
> simply remove all timeout==failure logic (somehow?) and consider a
> timeout a hung test run, which would eventually fail the entire suite.
> This would ultimately lead to better deterministic testing, but we'd
> probably uncover quite a few bugs in the process (esp. against CouchDB
> <= 4.0).
>
> >
> > Here are a few examples:
> >
> > # Test adapters to test different interfaces using same test suite
>
> This is neat. I'd like someone else to comment whether this the approach
> you define will handle the polymorphic interfaces gracefully, or if the
> effort to parametrise/DRY out the tests will be more difficulty than
> simply maintaining 4 sets of tests.
>
>
> > # Using same test suite to compare new implementation of the same
> interface with the old one
> >
> > Imagine that we are doing a major rewrite of a module which would
> implement the same interface.
>
> *tries to imagine such a 'hypothetical' rewrite* :)
>
> > How do we compare both implementations return the same results for the
> same input?
> > It is easy in Elixir, here is a sketch:
>
> Sounds interesting. I'd again like an analysis (from someone else) as to
> how straightforward this would be to implement.
>
> -Joan
>
>

Re: Use ExUnit to write unit tests.

Posted by Joan Touzet <wo...@apache.org>.
Hi Ilya, thanks for starting this thread. Comments inline.

On 2019-05-22 14:42, Ilya Khlopotov wrote:
> The eunit testing framework is very hard to maintain. In particular, it has the following problems:
> - the process structure is designed in such a way that failure in setup or teardown of one test affects the execution environment of subsequent tests. Which makes it really hard to locate the place where the problem is coming from.

I've personally experienced this a lot when reviewing failed logfiles,
trying to find the *first* failure where things go wrong. It's a huge
problem.

> - inline test in the same module as the functions it tests might be skipped
> - incorrect usage of ?assert vs ?_assert is not detectable since it makes tests pass 
> - there is a weird (and hard to debug) interaction when used in combination with meck 
>    - https://github.com/eproxus/meck/issues/133#issuecomment-113189678
>    - https://github.com/eproxus/meck/issues/61
>    - meck:unload() must be used instead of meck:unload(Module)

Eep! I wasn't aware of this one. That's ugly.

> - teardown is not always run, which affects all subsequent tests

Have first-hand experienced this one too.

> - grouping of tests is tricky
> - it is hard to group tests so individual tests have meaningful descriptions
> 
> We believe that with ExUnit we wouldn't have these problems:

Who's "we"?

> - on_exit function is reliable in ExUnit
> - it is easy to group tests using `describe` directive
> - code-generation is trivial, which makes it is possible to generate tests from formal spec (if/when we have one)

Can you address the timeout question w.r.t. EUnit that I raised
elsewhere for cross-platform compatibility testing? I know that
Peng ran into the same issues I did here and was looking into extending
timeouts.

Many of our tests suffer from failures where CI resources are slow and
simply fail due to taking longer than expected. Does ExUnit have any
additional support here?

A suggestion was made (by Jay Doane, I believe, on IRC) that perhaps we
simply remove all timeout==failure logic (somehow?) and consider a
timeout a hung test run, which would eventually fail the entire suite.
This would ultimately lead to better deterministic testing, but we'd
probably uncover quite a few bugs in the process (esp. against CouchDB
<= 4.0).

> 
> Here are a few examples:
> 
> # Test adapters to test different interfaces using same test suite

This is neat. I'd like someone else to comment whether this the approach
you define will handle the polymorphic interfaces gracefully, or if the
effort to parametrise/DRY out the tests will be more difficulty than
simply maintaining 4 sets of tests.


> # Using same test suite to compare new implementation of the same interface with the old one
> 
> Imagine that we are doing a major rewrite of a module which would implement the same interface.

*tries to imagine such a 'hypothetical' rewrite* :)

> How do we compare both implementations return the same results for the same input?
> It is easy in Elixir, here is a sketch:

Sounds interesting. I'd again like an analysis (from someone else) as to
how straightforward this would be to implement.

-Joan


Use ExUnit to write unit tests.

Posted by Ilya Khlopotov <ii...@apache.org>.
Hi everyone,

I am not exactly sure how to proceed with an RFC (https://github.com/apache/couchdb-documentation/pull/415) about using ExUnit to write unit tests. I am using information from https://couchdb.apache.org/bylaws.html#rfc:
- introduction of a new testing framework is a technical decision and doesn't need an RFC
- on the other hand, ExUnit makes the Elixir dependency mandatory, which is something that needs to be agreed upon
- also, it seems that this thread is not the correct one to discuss an RFC, since it doesn't include the [DISCUSSION] prefix

Please advise how to classify the introduction of an Elixir-based testing framework into unit testing.

Best regards,
iilyak

On 2019/05/22 18:42:03, Ilya Khlopotov <ii...@apache.org> wrote: 
> Hi everyone,
> 
> With the upgrade of supported Erlang version and introduction of Elixir into our integration test suite we have an opportunity to replace currently used eunit (for new tests only) with Elixir based ExUnit. 
> The eunit testing framework is very hard to maintain. In particular, it has the following problems:
> - the process structure is designed in such a way that failure in setup or teardown of one test affects the execution environment of subsequent tests. Which makes it really hard to locate the place where the problem is coming from.
> - inline test in the same module as the functions it tests might be skipped
> - incorrect usage of ?assert vs ?_assert is not detectable since it makes tests pass 
> - there is a weird (and hard to debug) interaction when used in combination with meck 
>    - https://github.com/eproxus/meck/issues/133#issuecomment-113189678
>    - https://github.com/eproxus/meck/issues/61
>    - meck:unload() must be used instead of meck:unload(Module)
> - teardown is not always run, which affects all subsequent tests
> - grouping of tests is tricky
> - it is hard to group tests so individual tests have meaningful descriptions
> 
> We believe that with ExUnit we wouldn't have these problems:
> - on_exit function is reliable in ExUnit
> - it is easy to group tests using `describe` directive
> - code-generation is trivial, which makes it is possible to generate tests from formal spec (if/when we have one)
> 
> Here are a few examples:
> 
> # Test adapters to test different interfaces using same test suite
> 
> CouchDB has four different interfaces which we need to test. These are:
> - chttpd
> - couch_httpd
> - fabric
> - couch_db
> 
> There is a bunch of operations which are very similar. The only differences between them are:
> - setup/teardown needs different set of applications
> - we need to use different modules to test the operations
> 
> This problem is solved by using testing adapter. We would define a common protocol, which we would use for testing.
> Then we implement this protocol for every interface we want to use.
> 
> ```
> defmodule Couch.Test.CRUD do
>   use ExUnit.Case
>   alias Couch.Test.Adapter
>   alias Couch.Test.Utils, as: Utils
> 
>   alias Couch.Test.Setup
> 
>   require Record
> 
>   test_groups = [
>     "using Clustered API": Adapter.Clustered,
>     "using Backdoor API": Adapter.Backdoor,
>     "using Fabric API": Adapter.Fabric,
>   ]
> 
>   for {describe, adapter} <- test_groups do
>     describe "Database CRUD #{describe}" do
>       @describetag setup: %Setup{}
>         |> Setup.Start.new([:chttpd])
>         |> Setup.Adapter.new(adapter)
>         |> Setup.Admin.new(user: "adm", password: "pass")
>         |> Setup.Login.new(user: "adm", password: "pass")
>       test "Create", %{setup: setup} do
>         db_name = Utils.random_name("db")
>         setup_ctx = setup |> Setup.run()
>         assert {:ok, resp} = Adapter.create_db(Setup.get(setup_ctx, :adapter), db_name)
>         assert resp.body["ok"]
>       end
>     end
>   end
> end
> ```
> 
> # Using same test suite to compare new implementation of the same interface with the old one
> 
> Imagine that we are doing a major rewrite of a module which would implement the same interface.
> How do we compare both implementations return the same results for the same input?
> It is easy in Elixir, here is a sketch:
> ```
> defmodule Couch.Test.Fabric.Rewrite do
>   use ExUnit.Case
>   alias Couch.Test.Utils, as: Utils
> 
>   # we cannot use defrecord here because we need to construct
>   # record at compile time
>   admin_ctx = {:user_ctx, Utils.erlang_record(
>     :user_ctx, "couch/include/couch_db.hrl", roles: ["_admin"])}
> 
>   test_cases = [
>     {"create database": {create_db, [:db_name, []]}},
>     {"create database as admin": {create_db, [:db_name, [admin_ctx]]}}
>   ]
>   module_a = :fabric
>   module_b = :fabric3
> 
>   describe "Test compatibility of '#{module_a}' with '#{module_b}'" do
>     for {description, {function, args}} <- test_cases do
>       test "#{description}" do
>         result_a = unquote(module_a).unquote(function)(unquote_splicing(args))
>         result_b = unquote(module_b).unquote(function)(unquote_splicing(args))
>         assert result_a == result_b
>       end
>     end
>   end
> 
> end
> ```
> As a result we would get following tests
> ```
> Couch.Test.Fabric.Rewrite
>   * test Test compatibility of 'fabric' with 'fabric3' create database (0.01ms)
>   * test Test compatibility of 'fabric' with 'fabric3' create database as admin (0.01ms)
> ```
> 
> The prototype of integration is in this draft PR https://github.com/apache/couchdb/pull/2036. I am planing to write formal RFC after first round of discussions on ML.
> 
> Best regards,
> iilyak
> 

[VOTE] Use ExUnit to write unit tests.

Posted by Ilya Khlopotov <ii...@apache.org>.
Hi,

Starting a formal vote on the RFC according to the bylaws. RFC: https://github.com/apache/couchdb-documentation/pull/415

Best regards,
iilyak

On 2019/05/22 18:42:03, Ilya Khlopotov <ii...@apache.org> wrote: 
> Hi everyone,
> 
> With the upgrade of supported Erlang version and introduction of Elixir into our integration test suite we have an opportunity to replace currently used eunit (for new tests only) with Elixir based ExUnit. 
> The eunit testing framework is very hard to maintain. In particular, it has the following problems:
> - the process structure is designed in such a way that failure in setup or teardown of one test affects the execution environment of subsequent tests. Which makes it really hard to locate the place where the problem is coming from.
> - inline test in the same module as the functions it tests might be skipped
> - incorrect usage of ?assert vs ?_assert is not detectable since it makes tests pass 
> - there is a weird (and hard to debug) interaction when used in combination with meck 
>    - https://github.com/eproxus/meck/issues/133#issuecomment-113189678
>    - https://github.com/eproxus/meck/issues/61
>    - meck:unload() must be used instead of meck:unload(Module)
> - teardown is not always run, which affects all subsequent tests
> - grouping of tests is tricky
> - it is hard to group tests so individual tests have meaningful descriptions
> 
> We believe that with ExUnit we wouldn't have these problems:
> - on_exit function is reliable in ExUnit
> - it is easy to group tests using `describe` directive
> - code-generation is trivial, which makes it is possible to generate tests from formal spec (if/when we have one)
> 
> Here are a few examples:
> 
> # Test adapters to test different interfaces using same test suite
> 
> CouchDB has four different interfaces which we need to test. These are:
> - chttpd
> - couch_httpd
> - fabric
> - couch_db
> 
> There is a bunch of operations which are very similar. The only differences between them are:
> - setup/teardown needs different set of applications
> - we need to use different modules to test the operations
> 
> This problem is solved by using testing adapter. We would define a common protocol, which we would use for testing.
> Then we implement this protocol for every interface we want to use.
> 
> ```
> defmodule Couch.Test.CRUD do
>   use ExUnit.Case
>   alias Couch.Test.Adapter
>   alias Couch.Test.Utils, as: Utils
> 
>   alias Couch.Test.Setup
> 
>   require Record
> 
>   test_groups = [
>     "using Clustered API": Adapter.Clustered,
>     "using Backdoor API": Adapter.Backdoor,
>     "using Fabric API": Adapter.Fabric,
>   ]
> 
>   for {describe, adapter} <- test_groups do
>     describe "Database CRUD #{describe}" do
>       @describetag setup: %Setup{}
>         |> Setup.Start.new([:chttpd])
>         |> Setup.Adapter.new(adapter)
>         |> Setup.Admin.new(user: "adm", password: "pass")
>         |> Setup.Login.new(user: "adm", password: "pass")
>       test "Create", %{setup: setup} do
>         db_name = Utils.random_name("db")
>         setup_ctx = setup |> Setup.run()
>         assert {:ok, resp} = Adapter.create_db(Setup.get(setup_ctx, :adapter), db_name)
>         assert resp.body["ok"]
>       end
>     end
>   end
> end
> ```
> 
> # Using same test suite to compare new implementation of the same interface with the old one
> 
> Imagine that we are doing a major rewrite of a module which would implement the same interface.
> How do we compare both implementations return the same results for the same input?
> It is easy in Elixir, here is a sketch:
> ```
> defmodule Couch.Test.Fabric.Rewrite do
>   use ExUnit.Case
>   alias Couch.Test.Utils, as: Utils
> 
>   # we cannot use defrecord here because we need to construct
>   # record at compile time
>   admin_ctx = {:user_ctx, Utils.erlang_record(
>     :user_ctx, "couch/include/couch_db.hrl", roles: ["_admin"])}
> 
>   test_cases = [
>     {"create database": {create_db, [:db_name, []]}},
>     {"create database as admin": {create_db, [:db_name, [admin_ctx]]}}
>   ]
>   module_a = :fabric
>   module_b = :fabric3
> 
>   describe "Test compatibility of '#{module_a}' with '#{module_b}'" do
>     for {description, {function, args}} <- test_cases do
>       test "#{description}" do
>         result_a = unquote(module_a).unquote(function)(unquote_splicing(args))
>         result_b = unquote(module_b).unquote(function)(unquote_splicing(args))
>         assert result_a == result_b
>       end
>     end
>   end
> 
> end
> ```
> As a result we would get following tests
> ```
> Couch.Test.Fabric.Rewrite
>   * test Test compatibility of 'fabric' with 'fabric3' create database (0.01ms)
>   * test Test compatibility of 'fabric' with 'fabric3' create database as admin (0.01ms)
> ```
> 
> The prototype of integration is in this draft PR https://github.com/apache/couchdb/pull/2036. I am planing to write formal RFC after first round of discussions on ML.
> 
> Best regards,
> iilyak
>