Posted to commits@bloodhound.apache.org by as...@apache.org on 2013/05/22 17:47:48 UTC

svn commit: r1485255 - /bloodhound/trunk/bloodhound_multiproduct/tests/env.py

Author: astaric
Date: Wed May 22 15:47:48 2013
New Revision: 1485255

URL: http://svn.apache.org/r1485255
Log:
Force trac to use separate db connections for different environments when running tests.

Unit tests in Python 2.7 run in the same process and share the same connection
pool. This basically means that all tests share the same in-memory database,
and calling reset_db in tearDown methods destroys data prepared in __init__
of other tests. Trac tests do not prepare data in __init__, but when run
with a product env, which stores config values in the database, this behavior
causes multiple tests to fail.

We work around this issue by monkey patching Trac's ConnectionPool, forcing
it to establish a separate connection for each DatabaseManager.

Modified:
    bloodhound/trunk/bloodhound_multiproduct/tests/env.py

Modified: bloodhound/trunk/bloodhound_multiproduct/tests/env.py
URL: http://svn.apache.org/viewvc/bloodhound/trunk/bloodhound_multiproduct/tests/env.py?rev=1485255&r1=1485254&r2=1485255&view=diff
==============================================================================
--- bloodhound/trunk/bloodhound_multiproduct/tests/env.py (original)
+++ bloodhound/trunk/bloodhound_multiproduct/tests/env.py Wed May 22 15:47:48 2013
@@ -47,6 +47,23 @@ from multiproduct.api import MultiProduc
 from multiproduct.env import ProductEnvironment
 from multiproduct.model import Product
 
+# unittests in python2.7 run in the same process and share the same connection
+# pool. This basically means that all tests share the same in-memory database,
+# and calling reset_db in teardown methods destroys data prepared in __init__
+# of other tests. Trac tests do not prepare data in __init__, but when run
+# with a product env, which stores config values in the database, this behavior
+# causes multiple tests to fail.
+#
+# We work around this issue by monkey patching trac's ConnectionPool. We force
+# it to establish a separate connection for each DatabaseManager.
+from trac.db.pool import ConnectionPool, ConnectionPoolBackend
+def ConnectionPool_get_cnx(self, timeout=None):
+    if not hasattr(self, '_backend'):
+        self._backend = ConnectionPoolBackend(1)
+    return self._backend.get_cnx(self._connector, self._kwargs, timeout)
+ConnectionPool.get_cnx = ConnectionPool_get_cnx
+
+
 class ProductEnvironmentStub(ProductEnvironment):
     r"""A product environment slightly tweaked for testing purposes
     """



Re: svn commit: r1485255 - /bloodhound/trunk/bloodhound_multiproduct/tests/env.py

Posted by Olemis Lang <ol...@gmail.com>.
On 5/22/13, Anze Staric <an...@gmail.com> wrote:
>> the question I'd rather ask is why is it compulsory to monkey patch
>> ConnectionPool methods ?
>
> It is not, but as I see it it solves more problems than it causes.
> Besides passing the tests that set config values in __init__, it also
> enables us to replace
> try:
>     self.mpsystem.upgrade_environment(env.db_transaction)
> except OperationalError:
>     # table remains but database version is deleted
>     pass
>

I'm just concerned about two facts

  - in production changes introduced by monkey patching that class
    will not be activated so , in a way , we might be cheating ourselves
    if this has further implications somewhere else ...
  - ... mainly because those changes are not bound to the specific
    test cases that might require such modifications ,
    but the whole test suite

>> In general , the more we monkey patch real classes to make test cases
>> pass , we'll be less confident of test results because such
>> modifications will not be available on production and might have
>> non-trivial side-effects at testing time obscuring behaviors that
>> might happen in practice . I'd rather prefer to (always | sometimes)
>> replace them with mock objects (e.g. EnvironmentStub , Mock ...)
>
> If we want to test multiproduct functionality, we need to upgrade the
> database schema (add product column, ...) When should this be done?

There are multiple valid answers to this question :

  - unit tests are lightweight enough (e.g. in-memory & in-process DB)
    that EnvironmentStub(s) are set up and torn down once per test case .
    The upgrade goes there
  - functional tests are heavy so the approach sketched above is not
    recommended in practice .
    Functional environment is always initialized once
    (including MP upgrade ;) . In practice , I've not bothered too much
    about collecting all garbage generated by previous test cases .
    Instead I'd suggest to write test cases in a way that ignores changes
    made elsewhere . Beyond that, there are two options to access
    global test environment :
    * Global var (like in rpctestenv in RPC plugin test suite)
    * Trac built-in fixtures

> Currently, each call to setUp tries to upgrade schema using the
> following code:
>         try:
>             self.mpsystem.upgrade_environment(env.db_transaction)
>         except OperationalError:
>             # table remains but database version is deleted
>             pass
> which fails if the database is shared between tests and has been
> upgraded already.

For unit tests , if it's reset in the tearDown method then the next setUp
will start from scratch , i.e. no reason to fail unless something is wrong
... but this is just theory , reality supported by facts may point to
other directions ;) ... the truth is in the details .

If sub-classing MultiproductTestcase there are helper methods for that .

> A side effect of this code is that the first time
> it is called, the default product is inserted into the database, but on
> subsequent calls it is not.
>

subsequent calls of ... ?

> IMO, each test should be run in the same environment if it is executed
> alone or as a part of a test suite.

That depends on how you write the tests . What I advocate is :

  - Test case environment should satisfy the same pre-conditions
    (e.g. multi-product upgraded with default product) .
    * ... and should be as close to the real system as possible ,
      otherwise the test suite can not be trusted since it's testing
      something else , not the system
  - What is really important is that processing taking place in test case
    will not have side-effects influencing the results of subsequent TCs ;
    otherwise the test suite starts to be useless (that's what reset_db
    in tearDown is for, to start next TC from scratch)
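The discipline described above can be sketched with a toy, runnable example.
The shared list below stands in for the shared in-memory database; none of
these names are Trac's actual API:

```python
import unittest

# Stand-in for the in-memory database shared by all tests in the process.
SHARED_DB = []

class IsolatedTestCase(unittest.TestCase):
    def setUp(self):
        # Pre-condition: the shared database is empty when a test starts.
        assert SHARED_DB == []

    def tearDown(self):
        # The reset_db equivalent: undo this test's side effects so the
        # next test case starts from scratch.
        del SHARED_DB[:]

    def test_insert(self):
        SHARED_DB.append('row')
        self.assertEqual(len(SHARED_DB), 1)
```

If tearDown did not clear the shared state, a second test asserting the same
pre-condition would fail depending on execution order.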

> If we need to choose between a 3-line
> monkey patch for ConnectionPool, to allow each test to run in a
> separate in-memory database, and manually reproducing the side effects
> of multiproduct upgrade, product environment configuration, and who
> knows what else we have missed, in test setUp, I vote for the monkey patch.
>

FWIW , it's worthless to adopt a shortcut if the test environment is
different enough that test results can not be trusted . The goal
resides in being able to make judgments on the correct behavior of the
system by reviewing test results .

Q:

  - How will this change impact upon other test cases ?
  - Is this closer to the code run in production ?
  - ... if not , is there a chance for this to obscure failure modes
    in production ?
  - ... or introduce false negatives in test reports ?

>> In general , the more we monkey patch real classes to make test cases
>> pass , we'll be less confident of test results because such
>> modifications will not be available on production and might have
>> non-trivial side-effects at testing time obscuring behaviors that
>> might happen in practice. I'd rather prefer to (always | sometimes)
>> replace them with mock objects (e.g. EnvironmentStub , Mock ...)
>
> I could also replace ConnectionPool with a custom class that overrides
> this function, but I would still need to inject it into trac.db.pool as
> DatabaseManager has no way of replacing the implementation of
> ConnectionPool. When faced with these two options, I prefer monkey
> patches.
>

If really needed I'd rather advocate to monkey patch that class in
setUp method of test cases really in need of this modification , and
revert changes in tearDown method .
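A minimal, runnable sketch of that suggestion, using a stand-in class rather
than the real trac.db.pool.ConnectionPool (all names below are illustrative):

```python
import unittest

class FakePool(object):
    """Stand-in for trac.db.pool.ConnectionPool (illustrative only)."""
    def get_cnx(self, timeout=None):
        return 'shared-connection'

def patched_get_cnx(self, timeout=None):
    return 'per-test-connection'

class PatchedPoolTestCase(unittest.TestCase):
    def setUp(self):
        # Save the original method so the patch can be reverted.
        self._orig_get_cnx = FakePool.get_cnx
        FakePool.get_cnx = patched_get_cnx

    def tearDown(self):
        # Revert, so test cases outside this class see the real behavior.
        FakePool.get_cnx = self._orig_get_cnx

    def test_patched(self):
        self.assertEqual(FakePool().get_cnx(), 'per-test-connection')
```

This way the patch is scoped to the test cases that actually need it, rather
than to the whole test suite at import time.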

-- 
Regards,

Olemis.

Apache™ Bloodhound contributor
http://issues.apache.org/bloodhound

Blog ES: http://simelo-es.blogspot.com/
Blog EN: http://simelo-en.blogspot.com/


Re: svn commit: r1485255 - /bloodhound/trunk/bloodhound_multiproduct/tests/env.py

Posted by Anze Staric <an...@gmail.com>.
> the question I'd rather ask is why is it compulsory to monkey patch
> ConnectionPool methods ?

It is not, but as I see it it solves more problems than it causes.
Besides passing the tests that set config values in __init__, it also
enables us to replace
try:
    self.mpsystem.upgrade_environment(env.db_transaction)
except OperationalError:
    # table remains but database version is deleted
    pass



> In general , the more we monkey patch real classes to make test cases
> pass , we'll be less confident of test results because such
> modifications will not be available on production and might have
> non-trivial side-effects at testing time obscuring behaviors that
> might happen in practice . I'd rather prefer to (always | sometimes)
> replace them with mock objects (e.g. EnvironmentStub , Mock ...)

If we want to test multiproduct functionality, we need to upgrade the
database schema (add product column, ...). When should this be done?
Currently, each call to setUp tries to upgrade schema using the
following code:
        try:
            self.mpsystem.upgrade_environment(env.db_transaction)
        except OperationalError:
            # table remains but database version is deleted
            pass
which fails if the database is shared between tests and has been
upgraded already. A side effect of this code is that the first time
it is called, the default product is inserted into the database, but on
subsequent calls it is not.

IMO, each test should be run in the same environment if it is executed
alone or as a part of a test suite. If we need to choose between a
3-line monkey patch for ConnectionPool, to allow each test to run in a
separate in-memory database, and manually reproducing the side effects
of multiproduct upgrade, product environment configuration, and who
knows what else we have missed, in test setUp, I vote for the monkey patch.

> In general , the more we monkey patch real classes to make test cases
> pass , we'll be less confident of test results because such
> modifications will not be available on production and might have
> non-trivial side-effects at testing time obscuring behaviors that
> might happen in practice. I'd rather prefer to (always | sometimes)
> replace them with mock objects (e.g. EnvironmentStub , Mock ...)

I could also replace ConnectionPool with a custom class that overrides
this function, but I would still need to inject it into trac.db.pool as
DatabaseManager has no way of replacing the implementation of
ConnectionPool. When faced with these two options, I prefer monkey
patches.

Re: svn commit: r1485255 - /bloodhound/trunk/bloodhound_multiproduct/tests/env.py

Posted by Olemis Lang <ol...@gmail.com>.
On 5/22/13, Gary Martin <ga...@wandisco.com> wrote:
> On 22/05/13 18:23, Olemis Lang wrote:
>> On 5/22/13, astaric@apache.org <as...@apache.org> wrote:
>>> Author: astaric
>>> Date: Wed May 22 15:47:48 2013
>>> New Revision: 1485255
>>>
>>> URL: http://svn.apache.org/r1485255
>>> Log:
>>> Force trac to use separate db connections for different environments
>>> when
>>> running tests.
>>>
>>> Unittests in python2.7 run in the same process and share the same
>>> connection
>>> pool. This basically means that all tests share the same in-memory
>>> database,
>> yes but , considering the fact that tests are executed in sequential
>> order , there's no harm done for well-written test cases
>>
>>> and calling reset_db in teardown methods destroys data prepared in
>>> __init__
>>> of other tests.
>> Test data must not be prepared on __init__ . Disposing test data in
>> tearDown is quite convenient to isolate test case execution . If test
>> data is created by test case #1 and it's not disposed afterwards then
>> it might be messing around until the end of the test run thus
>> polluting test reports by introducing false negative results . This is
>> a terrible headache wrt test code , which btw , I'm suffering now with
>> RPC plugin test suite ... so please, do not alter that behavior (i.e.
>> clean in-memory DB expected as TC pre/post-condition ) .
>>
>>
>>> Trac tests do not do prepare data in __init__,
>> indeed , for a good reason ...
>>
>>> but when run
>>> with a product env, which stores config values to database, this
>>> behavior
>>> causes multiple tests to fail.
>>>
>> then why not ensure that such values are configured at the
>> beginning of each test case inside the setUp method .
>>
>>> We workaround this issue by monkey patching trac's ConnectionPool. We
>>> force
>>> it to establish a separate connection for each DatabaseManager.
>>>
>> ... this starts to make sense to me ... but, isn't it the very same DB
>> in the end ?
>>
>> in any case , please beware of the facts above in spite of emitting
>> test reports we can trust .
>>
>> [...]
>>
>
> It doesn't make sense to me yet. What data is set up in __init__s at the
> moment

none that I'm aware of ...

> and why is this not done in the setUp method?
>

the question I'd rather ask is why is it compulsory to monkey patch
ConnectionPool methods ?

In general , the more we monkey patch real classes to make test cases
pass , we'll be less confident of test results because such
modifications will not be available on production and might have
non-trivial side-effects at testing time obscuring behaviors that
might happen in practice . I'd rather prefer to (always | sometimes)
replace them with mock objects (e.g. EnvironmentStub , Mock ...)

> I'd best get searching!
>

;)

-- 
Regards,

Olemis.

Re: svn commit: r1485255 - /bloodhound/trunk/bloodhound_multiproduct/tests/env.py

Posted by Olemis Lang <ol...@gmail.com>.
On 5/24/13, Andrej Golcov <an...@digiverse.si> wrote:
> What if on the first test start, we create an initial DB in memory and
> copy (restore) it to the testdb before each test?
> That can eliminate the complex logic required by reset and speed up
> unit-test execution.
>

For Bloodhound-specific test cases written from scratch I see no
problem in doing this kind of things . This is similar to the approach
taken by the functional test suite , but with test fixtures added .
We'd only have to ensure that subsets of the test suite will be set up
as expected .

Nevertheless , notice that most test cases will run inherited logic in
Trac test suite . We should not be messing around with those .

-- 
Regards,

Olemis.

Re: svn commit: r1485255 - /bloodhound/trunk/bloodhound_multiproduct/tests/env.py

Posted by Andrej Golcov <an...@digiverse.si>.
What if on the first test start, we create an initial DB in memory and
copy (restore) it to the testdb before each test?
That can eliminate the complex logic required by reset and speed up
unit-test execution.
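A hedged sketch of this snapshot-and-restore idea using the standard sqlite3
module; the schema and data below are made up purely for illustration:

```python
import sqlite3

def make_template():
    # Run the expensive one-time setup (schema upgrade, default data)
    # against a template database. Table/column names are illustrative.
    tmpl = sqlite3.connect(':memory:')
    tmpl.execute('CREATE TABLE ticket (id INTEGER, product TEXT)')
    tmpl.execute("INSERT INTO ticket VALUES (1, '@')")
    tmpl.commit()
    return tmpl

def fresh_copy(template):
    # Replay the template's SQL dump into a brand-new in-memory database,
    # giving each test an identical starting state without re-upgrading.
    db = sqlite3.connect(':memory:')
    db.executescript('\n'.join(template.iterdump()))
    return db
```

Each test would call fresh_copy() in setUp instead of running the upgrade,
so changes made by one test can never leak into the next.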

Cheers, Andrej

Re: svn commit: r1485255 - /bloodhound/trunk/bloodhound_multiproduct/tests/env.py

Posted by Olemis Lang <ol...@gmail.com>.
On 5/23/13, Branko Čibej <br...@wandisco.com> wrote:
> On 23.05.2013 23:52, Olemis Lang wrote:
>> On 5/23/13, Olemis Lang <ol...@gmail.com> wrote:
>>> On 5/23/13, Anze Staric <an...@gmail.com> wrote:
>>> [...]
>>>
>>>> When upgrade is run again, it figures,
>>>> that it needs to perform all upgrades and tries to add field product
>>>> to table ticket, but this fails, because the field is still there, it
>>>> was not deleted in reset_db).
>>>>
>>> ... and ensure that MP upgrade procedure will not be triggered if DB
>>> schema is up to date , which represents a bug indeed .
>>>
>> I'll clarify what I meant to say with the statement above because it
>> may be misunderstood .
>>
>> Ensure that MP upgrade actions will not be performed due to the fact
>> that once MP upgrade method is executed it will notice everything is
>> up to date . Not doing so represents an error in MP system which has
>> to be asserted and reported as an (error | failure) .
>
> Not sure I quite understand what you're getting at here. In real life,
> I'd expect an upgrade to multi-product to be a manual step,

yes

> performed
> exactly once per BH/Trac installation.

yes ... and no ... needs_upgrade method may be invoked on demand to
detect everything is ok with target env (... please read below ...) .
That's exactly why Trac will automatically stop working if a new
plugin requiring an upgrade is installed .

In test code , if you decide not to run the upgrade quite often (e.g.
setUpClass) then this patch I've been using to test Bloodhound RPC
plugin might be helpful . It transforms MultiproductTestCase helpers
into class methods whenever possible .

https://issues.apache.org/bloodhound/attachment/ticket/509/t509_r1480735_mp_class_test_setup.diff

> In other words, there is no need
> to (a) detect that this has already happened,

that's built-in behavior , so ... yes , this happens under the hood
considering that needs_upgrade method is always invoked on demand and
might trigger upgrade procedure or not . Notice that both are
implemented by MP system so ... yes , it's the responsibility of the
plugin to determine whether an upgrade is required or not ; and those
checks are the ones I'm talking about . If the environment is upgraded
and an upgrade is attempted immediately after , then noticing that
needs_upgrade returns True is an immediate confirmation of a bug ...
so, afaict we agreed since the beginning ; unless I misunderstood your
previous comment .

> or (b) be able to undo the
> upgrade -- that is, downgrade the database.
>

downgrade is not possible but , especially in test code , it's
possible to dispose the whole database (schema + data + ...) and start
all over from scratch (that's what I meant in previous messages) .
This is a MUST especially when testing upgrade scenarios . I do not
know if that's the case .

> This is in fact true for /any/ Trac plugin that adds its own tables to
> the database; IIUC reset_db will not remove those tables.
>

exactly what I'm saying is , if it does not remove those tables, then
MP version should not be cleared in system table either .

> An important attribute of any unit test environment is that test cases
> must be isolated --

+
... but the most important attribute of the test suite is that test
case success could be interpreted as an irrefutable indicator of
correct system behavior ... and there's no better way to achieve that
than not to inject a patch in a class of the SUT that will not be used
in production . If that can be avoided then much better .

> in other words, it should not matter in which order
> individual test cases are run.

+

> This implies that each test case must see
> the same initial database

yes

> -- which either means that it has to be a new
> database every time,
> or that reset_db has to roll back any changes to
> the schema.
>

not necessarily ... what is needed is setUp and tearDown preserving
pre-conditions needed for testing . If MP upgrade is one such
pre-condition (agreed) and the fact is that reset_db is jeopardizing
such expectations by deleting MP version in system table , then what
shall be done is to preserve system value after invoking that method
(either by overriding it , or by adding an extra step restoring system
value in tearDown after reset_db , or by encapsulating all this in
helper methods like those in MultiproductTestCase class , or ...) ;
rather than hijacking system behavior by monkey patching a class of
the SUT , thus potentially leading to unforeseeable side-effects .
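One of the options listed above, overriding reset_db so the system value
survives the reset, might look roughly like this toy model. The dict stands
in for Trac's system table; all names here are illustrative, not the real
Trac/Bloodhound classes:

```python
class StubEnv(object):
    """Toy model of an environment stub; the system table is a dict."""
    def __init__(self):
        self.system = {'database_version': 29, 'multiproduct_version': 4}
        self.tickets = [{'id': 1}]

    def reset_db(self):
        # Models the stock reset: all data, including plugin versions
        # stored in the system table, is wiped.
        self.system.clear()
        self.tickets = []

class MultiproductStubEnv(StubEnv):
    def reset_db(self):
        # Preserve the MP schema version across the reset, so the next
        # setUp does not re-attempt (and fail) the schema upgrade.
        mp_version = self.system.get('multiproduct_version')
        super(MultiproductStubEnv, self).reset_db()
        if mp_version is not None:
            self.system['multiproduct_version'] = mp_version
```

The point of the sketch is only the shape of the override: wrap the base
reset, then restore the one value the upgraded schema depends on.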

I'm of the opinion that ConnectionPool must not be patched this way .

PS: If default data needs to be restored there's one such helper
method in MultiproductTestCase class .

-- 
Regards,

Olemis.

Re: svn commit: r1485255 - /bloodhound/trunk/bloodhound_multiproduct/tests/env.py

Posted by Branko Čibej <br...@wandisco.com>.
On 23.05.2013 23:52, Olemis Lang wrote:
> On 5/23/13, Olemis Lang <ol...@gmail.com> wrote:
>> On 5/23/13, Anze Staric <an...@gmail.com> wrote:
>> [...]
>>
>>> When upgrade is run again, it figures,
>>> that it needs to perform all upgrades and tries to add field product
>>> to table ticket, but this fails, because the field is still there, it
>>> was not deleted in reset_db).
>>>
>> ... and ensure that MP upgrade procedure will not be triggered if DB
>> schema is up to date , which represents a bug indeed .
>>
> I'll clarify what I meant to say with the statement above because it
> may be misunderstood .
>
> Ensure that MP upgrade actions will not be performed due to the fact
> that once MP upgrade method is executed it will notice everything is
> up to date . Not doing so represents an error in MP system which has
> to be asserted and reported as an (error | failure) .

Not sure I quite understand what you're getting at here. In real life,
I'd expect an upgrade to multi-product to be a manual step, performed
exactly once per BH/Trac installation. In other words, there is no need
to (a) detect that this has already happened, or (b) be able to undo the
upgrade -- that is, downgrade the database.

This is in fact true for /any/ Trac plugin that adds its own tables to
the database; IIUC reset_db will not remove those tables.

An important attribute of any unit test environment is that test cases
must be isolated -- in other words, it should not matter in which order
individual test cases are run. This implies that each test case must see
the same initial database -- which either means that it has to be a new
database every time, or that reset_db has to roll back any changes to
the schema.

-- Brane

-- 
Branko Čibej
Director of Subversion | WANdisco | www.wandisco.com


Re: svn commit: r1485255 - /bloodhound/trunk/bloodhound_multiproduct/tests/env.py

Posted by Olemis Lang <ol...@gmail.com>.
On 5/23/13, Olemis Lang <ol...@gmail.com> wrote:
> On 5/23/13, Anze Staric <an...@gmail.com> wrote:
> [...]
>>>  - Is this closer to the code run in production ?
>> In production, each environment is associated with a separate
>> database.
>
> Each global environment ... but what about product environments ? They
> all should share the same database .
>
> [...]
>
>>> that's exactly why MP upgrade should be performed once for each unit
>>> test . After doing so , execute assertions on the SUT and get rid of
>>> it afterwards ... the next TC will start from scratch and recreate
>>> everything once again
>>
>> Please note, that reset_db does not downgrade the database, it just
>> removes the data from the tables. If env.upgrade is run again, it
>> fails (multiproduct plugin version is deleted from the system table,
>> but the tables are modified.
>
> If this is the point then I'd rather prefer to keep MP version in
> system table after invoking env stub `reset_db` at testing time ...
>
>> When upgrade is run again, it figures,
>> that it needs to perform all upgrades and tries to add field product
>> to table ticket, but this fails, because the field is still there, it
>> was not deleted in reset_db).
>>
>
> ... and ensure that MP upgrade procedure will not be triggered if DB
> schema is up to date , which represents a bug indeed .
>

I'll clarify what I meant to say with the statement above because it
may be misunderstood .

Ensure that MP upgrade actions will not be performed due to the fact
that once MP upgrade method is executed it will notice everything is
up to date . Not doing so represents an error in MP system which has
to be asserted and reported as an (error | failure) .

>> To minimize the impact of the patch,
>
> I really think this should be solved by modifying test code (e.g.
> EnvironmentStub class ) rather than monkey patching a class in the SUT
> . More important than making tests pass is the fact that test success
> really implies that the SUT is working as expected .
>
> [...]
>

-- 
Regards,

Olemis.

Re: svn commit: r1485255 - /bloodhound/trunk/bloodhound_multiproduct/tests/env.py

Posted by Olemis Lang <ol...@gmail.com>.
On 5/23/13, Anze Staric <an...@gmail.com> wrote:
[...]
>>  - Is this closer to the code run in production ?
> In production, each environment is associated with a separate
> database.

Each global environment ... but what about product environments ? They
all should share the same database .

[...]

>> that's exactly why MP upgrade should be performed once for each unit
>> test . After doing so , execute assertions on the SUT and get rid of
>> it afterwards ... the next TC will start from scratch and recreate
>> everything once again
>
> Please note, that reset_db does not downgrade the database, it just
> removes the data from the tables. If env.upgrade is run again, it
> fails (multiproduct plugin version is deleted from the system table,
> but the tables are modified.

If this is the point then I'd rather prefer to keep MP version in
system table after invoking env stub `reset_db` at testing time ...

> When upgrade is run again, it figures,
> that it needs to perform all upgrades and tries to add field product
> to table ticket, but this fails, because the field is still there, it
> was not deleted in reset_db).
>

... and ensure that MP upgrade procedure will not be triggered if DB
schema is up to date , which represents a bug indeed .

> To minimize the impact of the patch,

I really think this should be solved by modifying test code (e.g.
EnvironmentStub class ) rather than monkey patching a class in the SUT
. More important than making tests pass is the fact that test success
really implies that the SUT is working as expected .

[...]

-- 
Regards,

Olemis.

Re: svn commit: r1485255 - /bloodhound/trunk/bloodhound_multiproduct/tests/env.py

Posted by Anze Staric <an...@gmail.com>.
 > - How will this change impact upon other test cases ?
Every test case that constructs env in __init__ / setUpClass /
globally will work exactly as before. Test cases that construct env in
setUp will have a separate in-memory database for each test run and
will start with an empty database that will be filled with tables /
data in setUp. Currently, the first test in the suite starts with an
empty database and deletes data from it in tearDown. If it modified the
schema (added columns, tables), these changes are visible to the
following test.

>  - Is this closer to the code run in production ?
In production, each environment is associated with a separate
database. As this patch runs every test case in a fresh database, it
is closer to production: cases where tables are upgraded but the
plugin version has been deleted from the system table are avoided.

>  - ... or introduce false negatives in test reports ?
If a hypothetical test assumes to get different connections from the
pool (which means it is not using an in-memory database), the patch
breaks the assumption.

> that's exactly why MP upgrade should be performed once for each unit
> test . After doing so , execute assertions on the SUT and get rid of
> it afterwards ... the next TC will start from scratch and recreate
> everything once again
Please note that reset_db does not downgrade the database, it just
removes the data from the tables. If env.upgrade is run again, it
fails (the multiproduct plugin version is deleted from the system table,
but the tables are modified: when upgrade is run again, it figures
that it needs to perform all upgrades and tries to add the product field
to the ticket table, but this fails because the field is still there, it
was not deleted in reset_db).

To minimize the impact of the patch, I am thinking of adding another
(optional) parameter fresh_database (or something similar) to the
(multiproduct.)EnvironmentStub constructor. When set to true, the
patch would be applied before the parent constructor is called and
unapplied afterwards. As the DatabaseManager is created in the parent
constructor, and it keeps a reference to the connection pool, this
should be sufficient. It would require us to manually enable this
option for the tests that need it, but it has far fewer side effects
than the current monkey patch.
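A rough, illustrative model of that fresh_database idea. The classes below
are stand-ins, not the actual Trac/Bloodhound code, and the constructor
shape is an assumption:

```python
class FakeConnectionPool(object):
    """Stand-in for trac's ConnectionPool; names are illustrative."""
    def get_cnx(self):
        return 'shared'

def _private_get_cnx(self):
    return 'private'

class BaseEnvironmentStub(object):
    def __init__(self):
        # The parent constructor is where the DatabaseManager is created;
        # it obtains (and keeps a reference to) its connection here.
        self.connection = FakeConnectionPool().get_cnx()

class ProductEnvironmentStub(BaseEnvironmentStub):
    def __init__(self, fresh_database=False):
        if not fresh_database:
            super(ProductEnvironmentStub, self).__init__()
            return
        # Apply the patch only around the parent constructor, then
        # unapply it, so other environments keep the stock behavior.
        orig = FakeConnectionPool.get_cnx
        FakeConnectionPool.get_cnx = _private_get_cnx
        try:
            super(ProductEnvironmentStub, self).__init__()
        finally:
            FakeConnectionPool.get_cnx = orig
```

Because the patch is reverted in the finally block, only environments that
opted in via fresh_database=True keep the private connection they obtained.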


On Thu, May 23, 2013 at 5:01 PM, Olemis Lang <ol...@gmail.com> wrote:
> On 5/23/13, Anze Staric <an...@gmail.com> wrote:
>> On Thu, May 23, 2013 at 2:25 PM, Gary Martin <ga...@wandisco.com>
>> wrote:
>>> Apart from this kind of case, I was also thinking that it might be better
>>> to
>>> get the reset_db method overridden so that it resets the database back to
>>> the clean form that we want.
>>
>> I like the idea,
>
> it's nice , even if I'd prefer to stick to patch the connection pool
> in TC setUp and revert in tearDown
>
>> but how to we get the database to a clean state after
>> it has been migrated to multiproduct in setUp?
>
> that's exactly why MP upgrade should be performed once for each unit
> test . After doing so , execute assertions on the SUT and get rid of
> it afterwards ... the next TC will start from scratch and recreate
> everything once again
>
>> Do we hard code which
>> fields and tables need to be removed and which keys do need to be
>> modified?
>>
>
> Your point is valid . This is a real difficulty . In recent functional
> TCs I've written for RPC plugin what I do is to ensure that the names
> of the products created for testing will never be the same . Then in
> tearDown I just clear bloodhound_product table (except entry for
> default product) and isolation across product environments (combined
> with GUID for products) will ensure there will be no side-effects
> propagated beyond test case lifetime .
>
> There's no need to do so for unit tests .
>
>> I still think that forcing the trac to use a separate database for
>> separate environments is a cleaner solution. An alternative to monkey
>> patching ConnectionPool is to modify EnvironmentStub to use
>> DatabaseManager that creates one connection to database per product
>> (if we are using in memory database).
>>
>
> I'd be ok with that *if*
>
>   - the patch is really needed
>   - it will only affect a limited (and relevant) subset of the test suite
>
> --
> Regards,
>
> Olemis.

Re: svn commit: r1485255 - /bloodhound/trunk/bloodhound_multiproduct/tests/env.py

Posted by Olemis Lang <ol...@gmail.com>.
On 5/23/13, Anze Staric <an...@gmail.com> wrote:
> On Thu, May 23, 2013 at 2:25 PM, Gary Martin <ga...@wandisco.com>
> wrote:
>> Apart from this kind of case, I was also thinking that it might be better
>> to
>> get the reset_db method overridden so that it resets the database back to
>> the clean form that we want.
>
> I like the idea,

it's nice , even if I'd prefer to stick to patch the connection pool
in TC setUp and revert in tearDown

> but how do we get the database to a clean state after
> it has been migrated to multiproduct in setUp?

that's exactly why MP upgrade should be performed once for each unit
test . After doing so , execute assertions on the SUT and get rid of
it afterwards ... the next TC will start from scratch and recreate
everything once again

> Do we hard code which
> fields and tables need to be removed and which keys need to be
> modified?
>

Your point is valid . This is a real difficulty . In recent functional
TCs I've written for RPC plugin what I do is to ensure that the names
of the products created for testing will never be the same . Then in
tearDown I just clear bloodhound_product table (except entry for
default product) and isolation across product environments (combined
with GUID for products) will ensure there will be no side-effects
propagated beyond test case lifetime .

There's no need to do so for unit tests .

> I still think that forcing Trac to use a separate database for each
> environment is a cleaner solution. An alternative to monkey patching
> ConnectionPool is to modify EnvironmentStub to use a DatabaseManager
> that creates one database connection per product (if we are using an
> in-memory database).
>

I'd be ok with that *if*

  - the patch is really needed
  - it will only affect a limited (and relevant) subset of the test suite

-- 
Regards,

Olemis.

Re: svn commit: r1485255 - /bloodhound/trunk/bloodhound_multiproduct/tests/env.py

Posted by Gary Martin <ga...@wandisco.com>.
On 23/05/13 14:18, Anze Staric wrote:
> On Thu, May 23, 2013 at 2:25 PM, Gary Martin <ga...@wandisco.com> wrote:
>> Apart from this kind of case, I was also thinking that it might be better to
>> get the reset_db method overridden so that it resets the database back to
>> the clean form that we want.
> I like the idea, but how do we get the database to a clean state after
> it has been migrated to multiproduct in setUp? Do we hard-code which
> fields and tables need to be removed and which keys need to be
> modified?

Well, you might just be able to run the original reset_db and then do 
the upgrade. I prefer the idea of rolling back to the state from before 
the test rather than an explicit reset, though. I mean, I know we want 
it in the initial state, but we have already done the work of figuring 
out what that state is in the first upgrade. This relies on all relevant 
tearDowns taking part, but there may be other advantages.
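
A hedged sketch of the "run the original reset_db, then replay the upgrade" idea. The EnvironmentStub here is a toy stand-in for trac.test.EnvironmentStub, and the real reset_db/upgrade signatures differ:

```python
class EnvironmentStub(object):
    """Toy stand-in for trac.test.EnvironmentStub (assumed API)."""
    def __init__(self):
        self.tables = {"ticket", "wiki"}

    def reset_db(self):
        # Trac's stub restores the pristine single-environment schema.
        self.tables = {"ticket", "wiki"}

class MultiProductEnvironmentStub(EnvironmentStub):
    """Override reset_db so the 'clean' state is the migrated one."""
    def upgrade(self):
        # Stand-in for the multiproduct schema upgrade done in setUp.
        self.tables |= {"bloodhound_product", "bloodhound_productconfig"}

    def reset_db(self):
        # Run the original reset, then replay the upgrade, so every test
        # returns to the same multiproduct schema rather than the
        # pre-migration one.
        super(MultiProductEnvironmentStub, self).reset_db()
        self.upgrade()
```

After any test's tearDown calls reset_db, the next test finds the multiproduct tables already in place, which avoids hard-coding which fields and keys to strip.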

> I still think that forcing Trac to use a separate database for each
> environment is a cleaner solution. An alternative to monkey patching
> ConnectionPool is to modify EnvironmentStub to use a DatabaseManager
> that creates one database connection per product (if we are using an
> in-memory database).

I'm not sure that I agree yet but I think we are closer to understanding 
what is going on at least! I'm not proposing that we revert your change 
when there is other more important stuff to do but we might want to 
consider this again later.

Cheers,
     Gary

Re: svn commit: r1485255 - /bloodhound/trunk/bloodhound_multiproduct/tests/env.py

Posted by Anze Staric <an...@gmail.com>.
On Thu, May 23, 2013 at 2:25 PM, Gary Martin <ga...@wandisco.com> wrote:
> Apart from this kind of case, I was also thinking that it might be better to
> get the reset_db method overridden so that it resets the database back to
> the clean form that we want.

I like the idea, but how do we get the database to a clean state after
it has been migrated to multiproduct in setUp? Do we hard-code which
fields and tables need to be removed and which keys need to be
modified?

I still think that forcing Trac to use a separate database for each
environment is a cleaner solution. An alternative to monkey patching
ConnectionPool is to modify EnvironmentStub to use a DatabaseManager
that creates one database connection per product (if we are using an
in-memory database).
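
The difference between the two shapes can be sketched like this (class and key names are illustrative, not Trac's actual API). A pool that hands out one connection per environment key gives each product an independent in-memory database:

```python
import sqlite3

class SharedPool(object):
    """Roughly how one process-wide pool behaves: every caller, whatever
    environment it belongs to, gets the same in-memory database."""
    def __init__(self):
        self._cnx = sqlite3.connect(":memory:")

    def get_cnx(self, env_key=None):
        return self._cnx

class PerEnvironmentPool(object):
    """Sketch of the alternative: one connection per environment key, so
    each (product) environment sees its own in-memory database."""
    def __init__(self):
        self._by_env = {}

    def get_cnx(self, env_key):
        if env_key not in self._by_env:
            self._by_env[env_key] = sqlite3.connect(":memory:")
        return self._by_env[env_key]
```

With PerEnvironmentPool, a reset_db issued through one environment's connection cannot wipe tables another environment depends on, which is the isolation the commit is after.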

Re: svn commit: r1485255 - /bloodhound/trunk/bloodhound_multiproduct/tests/env.py

Posted by Gary Martin <ga...@wandisco.com>.
On 22/05/13 21:39, Anze Staric wrote:
>> It doesn't make sense to me yet. What data is set up in __init__s at the
>> moment and why is this not done in the setUp method?
> trac.wiki.tests.formatter.WikiTestCase uses self.env.config.set.
>
> ProductWikiTestCase in (bloodhound_multiproduct) tests.wiki.formatter
> replaces self.env with ProductEnvironment. ProductEnvironments use
> Configurations defined in multiproduct.config that basically save
> settings to the database.
>
> TestSuite defined in tests.wiki.formatter first instantiates TestCases
> for each of the available wiki syntax/resulting html pairs. When these
> tests were run sequentially, tearDown of the first test removed the
> intertrac configuration of all the tests, thus causing them to fail.

OK, that looks like trac doing too much in object initialisation, though 
I am not sure of their justification yet. Are there any special features 
of the env object that are being tested? If not, it strikes me that it 
might be better to mock the environment and just make sure that it is 
being used in the right way.

Apart from this kind of case, I was also thinking that it might be 
better to get the reset_db method overridden so that it resets the 
database back to the clean form that we want.

Cheers,
     Gary

Re: svn commit: r1485255 - /bloodhound/trunk/bloodhound_multiproduct/tests/env.py

Posted by Anze Staric <an...@gmail.com>.
> It doesn't make sense to me yet. What data is set up in __init__s at the
> moment and why is this not done in the setUp method?

trac.wiki.tests.formatter.WikiTestCase uses self.env.config.set.

ProductWikiTestCase in (bloodhound_multiproduct) tests.wiki.formatter
replaces self.env with ProductEnvironment. ProductEnvironments use
Configurations defined in multiproduct.config that basically save
settings to the database.

TestSuite defined in tests.wiki.formatter first instantiates TestCases
for each of the available wiki syntax/resulting html pairs. When these
tests were run sequentially, tearDown of the first test removed the
intertrac configuration of all the tests, thus causing them to fail.
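
The failure mode described above can be reproduced in miniature with a single shared in-memory database. Here sqlite3 stands in for the shared connection pool, and the table and value names are made up:

```python
import sqlite3

# One shared in-memory database, as with a process-wide connection pool.
shared_db = sqlite3.connect(":memory:")
shared_db.execute("CREATE TABLE config (name TEXT, value TEXT)")

def build_test_case(name):
    # Like the wiki formatter suite: configuration is written once, when
    # the TestCase is *instantiated*, not in setUp.
    shared_db.execute("INSERT INTO config VALUES (?, 'intertrac')", (name,))

def tear_down():
    # reset_db in tearDown wipes the whole shared database.
    shared_db.execute("DELETE FROM config")

build_test_case("test_1")
build_test_case("test_2")
tear_down()  # test_1 finishes and also destroys test_2's configuration
remaining = shared_db.execute("SELECT * FROM config").fetchall()
# remaining == [] -- test_2 now runs against an empty config and fails
```

Because all test cases were built up front but share one database, the first tearDown erases state the later tests were constructed with.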

Re: svn commit: r1485255 - /bloodhound/trunk/bloodhound_multiproduct/tests/env.py

Posted by Gary Martin <ga...@wandisco.com>.
On 22/05/13 18:23, Olemis Lang wrote:
> On 5/22/13, astaric@apache.org <as...@apache.org> wrote:
>> Author: astaric
>> Date: Wed May 22 15:47:48 2013
>> New Revision: 1485255
>>
>> URL: http://svn.apache.org/r1485255
>> Log:
>> Force trac to use separate db connections for different environments when
>> running tests.
>>
>> Unittests in python2.7 run in the same process and share the same
>> connection
>> pool. This basically means that all tests share the same in-memory
>> database,
> Yes, but considering that tests are executed in sequential order,
> there's no harm done for well-written test cases
>
>> and calling reset_db in teardown methods destroys data prepared in __init__
>> of other tests.
> Test data must not be prepared in __init__. Disposing of test data in
> tearDown is quite convenient for isolating test case execution. If test
> data created by test case #1 is not disposed of afterwards, it may
> linger until the end of the test run, polluting test reports with false
> negative results. This is a terrible headache wrt test code, which,
> btw, I'm suffering now with the RPC plugin test suite... so please, do
> not alter that behavior (i.e. a clean in-memory DB is expected as a TC
> pre/post-condition).
>
>
>> Trac tests do not do prepare data in __init__,
> Indeed, for a good reason...
>
>> but when run
>> with a product env, which stores config values to database, this behavior
>> causes multiple tests to fail.
>>
> Then why not ensure that such values are configured at the beginning
> of each test case, inside the setUp method?
>
>> We workaround this issue by monkey patching trac's ConnectionPool. We force
>> it to establish a separate connection for each DatabaseManager.
>>
> ... this starts to make sense to me... but isn't it the very same DB
> in the end?
>
> In any case, please keep the facts above in mind so that we can emit
> test reports we can trust.
>
> [...]
>

It doesn't make sense to me yet. What data is set up in __init__s at the 
moment and why is this not done in the setUp method?

I'd best get searching!

Cheers,
     Gary

Re: svn commit: r1485255 - /bloodhound/trunk/bloodhound_multiproduct/tests/env.py

Posted by Olemis Lang <ol...@gmail.com>.
On 5/22/13, astaric@apache.org <as...@apache.org> wrote:
> Author: astaric
> Date: Wed May 22 15:47:48 2013
> New Revision: 1485255
>
> URL: http://svn.apache.org/r1485255
> Log:
> Force trac to use separate db connections for different environments when
> running tests.
>
> Unittests in python2.7 run in the same process and share the same
> connection
> pool. This basically means that all tests share the same in-memory
> database,

Yes, but considering that tests are executed in sequential order,
there's no harm done for well-written test cases

> and calling reset_db in teardown methods destroys data prepared in __init__
> of other tests.

Test data must not be prepared in __init__. Disposing of test data in
tearDown is quite convenient for isolating test case execution. If test
data created by test case #1 is not disposed of afterwards, it may
linger until the end of the test run, polluting test reports with false
negative results. This is a terrible headache wrt test code, which,
btw, I'm suffering now with the RPC plugin test suite... so please, do
not alter that behavior (i.e. a clean in-memory DB is expected as a TC
pre/post-condition).


> Trac tests do not do prepare data in __init__,

Indeed, for a good reason...

> but when run
> with a product env, which stores config values to database, this behavior
> causes multiple tests to fail.
>

Then why not ensure that such values are configured at the beginning
of each test case, inside the setUp method?

> We workaround this issue by monkey patching trac's ConnectionPool. We force
> it to establish a separate connection for each DatabaseManager.
>

... this starts to make sense to me... but isn't it the very same DB
in the end?
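
For SQLite at least, the answer is no: each connection to ":memory:" opens an independent database, so a separate connection per DatabaseManager really does mean a separate database:

```python
import sqlite3

# Two connections to ":memory:" are two *different* databases,
# not two handles on the same one.
a = sqlite3.connect(":memory:")
b = sqlite3.connect(":memory:")
a.execute("CREATE TABLE t (x INTEGER)")
tables_in_b = b.execute(
    "SELECT name FROM sqlite_master WHERE type='table'").fetchall()
# tables_in_b == [] -- b cannot see the table created through a
```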

In any case, please keep the facts above in mind so that we can emit
test reports we can trust.

[...]

-- 
Regards,

Olemis.