Posted to users@qpid.apache.org by Jiri Danek <jd...@redhat.com> on 2017/04/17 20:29:06 UTC

End-to-end WebDriver test for Qpid Dispatch Router console

Hello folks,

for a while I've been working on WebDriver (Selenium 2.0) tests for the
Dispatch web console. The idea is to have an automatic check that the
console is working and usable. I'd like to share it now in order to get
feedback and possibly even adoption.

This started as a learning project to get more familiar with pytest and
webdriver. I would be glad for any suggestions and recommendations
regarding what I've done wrong and what should be improved.

Currently there are 10 tests, essentially all of them about connecting the
console to a router.
Source on GitHub
https://github.com/jdanekrh/dispatch-console-tests/tree/update_to_9

See it on Travis CI (running on Chrome and Firefox)
https://travis-ci.org/jdanekrh/dispatch-console-tests/builds/222912530

The way it runs on Travis CI is that it first downloads and runs two docker
images which I've created, one for the console and the other for the
router. The Dockerfiles are in the docker/ directory.
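For reference, the two images can be started locally with something like
the commands below. The image names are the ones I push to Docker Hub; the
container names and the port mappings here are illustrative assumptions,
not necessarily exactly what the Dockerfiles expose.

$ docker run -d --name router -p 5673:5673 jdanekrh/dispatch-router
$ docker run -d --name console -p 8080:8080 jdanekrh/dispatch-console

Port 8080 matches the --base-url used in the py.test commands below; the
port the console uses for its websocket connection to the router may
differ in your setup.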

When getting up to speed on UI tests, I tried to follow the idea of the
test pyramid [0] and chose to structure the tests around Page Objects
[1][2], because it seems to be considered a good idea. This means a test
might look like this:

@pytest.mark.nondestructive
@pytest.mark.parametrize("when_correct_details", [
    lambda self, page: self.when_correct_details(page),
    lambda self, page: page.connect_to(self.console_ip)])
def test_correct_details(self, when_correct_details):
    self.test_name = 'test_correct_details'
    page = self.given_connect_page()
    when_correct_details(self, page)
    page.connect_button.click()
    self.then_login_succeeds()
    self.then_no_js_error()

If you are familiar with pytest and pytest-selenium, you'll know that by
default only tests marked as nondestructive are executed. That is the
meaning of the first decorator/annotation. The second annotation causes
the test to run twice, each time with a different function as the
argument; the first function fills in both the IP and the port, the
second fills in only the IP on the initial connect screen.
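For illustration, the page object for the connect page behind this test
could look roughly like the sketch below. The element locators and the
exact method names are assumptions of mine, not necessarily what the
repository uses.

from selenium.webdriver.common.by import By

class ConnectPage:
    # wraps the console's initial connect screen (sketch)
    def __init__(self, driver):
        self.driver = driver

    @property
    def connect_button(self):
        # the locator is an assumption; the real page may differ
        return self.driver.find_element(By.ID, 'connect-button')

    def connect_to(self, ip, port=None):
        # fill in the address, and optionally the port
        self.driver.find_element(By.ID, 'address').send_keys(ip)
        if port is not None:
            self.driver.find_element(By.ID, 'port').send_keys(str(port))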

Here is a screencast of a complete test run in a Chrome browser. All
software is running locally (meaning the test, the Chrome browser, Tomcat
with the console and Dispatch Router).

https://www.youtube.com/watch?v=A7XFCXPcIeE (3 minutes)

To run the same thing from the CLI, in the top-level directory, run

$ py.test --base-url http://127.0.0.1:8080/stand-alone --console stand-alone --local-chrome

To use Firefox, run

$ py.test --base-url http://127.0.0.1:8080/stand-alone --console stand-alone --capability marionette true --driver Firefox --driver-path /unless/in/PATH/then/path/to/geckodriver

Regarding the tests that fail in the video,

   - TestConnectPage::test_wrongip,port is not reported yet; I'd expect
   to see an error message almost immediately, the way it used to work
   about 5 months ago in the hawtio version (when I tried it last)
   - TestConnectPage::test_correct_details(when_correct_details1) is
   reported as https://issues.apache.org/jira/browse/DISPATCH-746
   - TestHawtioLogsPage::test_open_hawtio_logs_page should not be tested
   on the standalone console (and it passes because of the
   @pytest.mark.reproduces mark, as explained below)
   - TestOverviewPage::test_expanding_tree should not be tested on the
   standalone console

There was an idea that the tests should never be failing. If there is a
test that fails, the test could be modified to succeed whenever the issue
is present. I marked such tests with @pytest.mark.reproduces; passing
tests are marked with @pytest.mark.verifies. This is probably not a good
idea, because it is a chore to maintain. It is better to fix the issue in
the first place.
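For completeness, custom markers like these are usually registered in
conftest.py, roughly as follows. This is a sketch of the registration
mechanism only; the marker descriptions are my wording, not necessarily
what the repository uses.

# conftest.py (sketch)
def pytest_configure(config):
    # registering the markers avoids pytest warnings about unknown marks
    config.addinivalue_line(
        'markers', 'reproduces: test passes while a known issue is present')
    config.addinivalue_line(
        'markers', 'verifies: test passes when the feature works correctly')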

Regarding CI, there is a Travis CI job linked to the test repository
itself, and another Travis job that builds the Docker images. In the
future, I'd like to run the image-building job daily and have it trigger
a job which will run the tests with the fresh images. This way it will be
immediately clear if a new test fails.

If you have any suggestions regarding either the tests themselves or
ideas about what should be tested in general, I would be glad to hear
them.

[0]
https://testing.googleblog.com/2015/04/just-say-no-to-more-end-to-end-tests.html
[1]
https://gojko.net/2010/04/13/how-to-implement-ui-testing-without-shooting-yourself-in-the-foot-2/
[2] https://youtu.be/7tzA2nsg1jQ?t=14m

Thanks for your help,

 --
Jiri

Re: End-to-end WebDriver test for Qpid Dispatch Router console

Posted by Jiri Danek <jd...@redhat.com>.
Hello, I would like to post an update on the end-to-end WebDriver
testing, since the tests have achieved a milestone: they are running on
Travis CI.

   - Reported DISPATCH-745
   <https://issues.apache.org/jira/browse/DISPATCH-745> about the Hawtio
   console being completely broken

Following the resolution of that issue, I committed the updated WebDriver
tests to master.

   - Test sources: https://github.com/jdanekrh/dispatch-console-tests
   - Travis CI build (executed on every commit to the tests):
   https://travis-ci.org/jdanekrh/dispatch-console-tests

The build runs Chrome and Firefox against both the hawtio and the
stand-alone versions of the console, meaning 4 jobs in total. There are
still the same 10 tests as before, nothing new there.

   - Reported the failure in TestConnectPage::test_wrong_ip,port as
   DISPATCH-746 <https://issues.apache.org/jira/browse/DISPATCH-746>
   - Reported a minor issue regarding the treeview in hawtio as
   DISPATCH-748 <https://issues.apache.org/jira/browse/DISPATCH-748>;
   the single test for the treeview in hawtio
   (TestOverviewPage::test_expanding_tree) is still broken (it used to
   fail only in IE, now it fails in Firefox and Chrome; I did not test IE)

Regarding nightly tests,

   - Travis CI job which builds the nightly docker images:
   https://github.com/msgqe/travisci/tree/qpid-dispatch-nightly-build
   - The images: https://hub.docker.com/r/jdanekrh/dispatch-console/tags/,
   https://hub.docker.com/r/jdanekrh/dispatch-router/tags/
   - Travis CI job which uses the images to run the tests:
   https://github.com/msgqe/travisci/tree/qpid-dispatch-console-nightly-test

This nightly build/test setup works thanks to the new Travis CI feature
that allows periodic daily/weekly/monthly auto-triggering of a job.

The idea is that the nightly build/test will give early warning about
incompatibilities, changes and so on. This way, the current version of
Dispatch is always tested. Furthermore, the tests can be maintained the
moment a problem appears, instead of having to tackle multiple problems
at once every time the tests are revisited; for example, upgrading
Tomcat, or responding to some UI change in the dispatch plugin.

Future plans

   - wait to see what the priority is on resolving the reported issues.
   If they are not on the short-term radar, eventually place an xfail
   annotation on the related tests, to make checking results easier; or
   just skip them using the @pytest.mark mechanism. Not sure which yet;
   probably the latter (see the sketch after this list)
   - think about which repo should hold the Dockerfiles, the WebDriver
   tests, and the testing scripts. Currently, there is some duplication
   between the nightly test repo and the main repo
   - write a new test for the last subissue on DISPATCH-745
   <https://issues.apache.org/jira/browse/DISPATCH-745>, adding charts to
   the dashboard; it would cover a lot of new ground
   - continue posting updates publicly, hoping to get collaboration going
      - use https://github.com/jdanekrh/dispatch-console-tests/issues for
      todos, so that I have autonomy for myself and visibility for
      everybody else
      - fix the three things reported there
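To illustrate the two options from the first point above, the xfail and
skip variants would look roughly like this. The issue numbers are the
real ones; the class and test names are only illustrative, and the test
bodies are elided.

import pytest

class TestConnectPage:
    # xfail: the test still runs, and an unexpected pass shows up as XPASS
    @pytest.mark.xfail(reason='DISPATCH-746: no error shown for wrong ip/port')
    def test_wrong_ip_port(self):
        ...

class TestOverviewPage:
    # skip: the test is not run at all
    @pytest.mark.skip(reason='DISPATCH-748: treeview is broken in hawtio')
    def test_expanding_tree(self):
        ...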

Answering why I did not use Sauce Labs: when I looked at it the last
time, it could not connect a websocket through that proxy of theirs, and
Dispatch needs that. I am not aware of any free service with websocket
support.

Cheers,
-- 
Jiří Daněk
Messaging QA

Re: End-to-end WebDriver test for Qpid Dispatch Router console

Posted by Matej Lesko <ml...@redhat.com>.
Great job!!

Best regards,
Matej Leško
Middleware Messaging Quality Assurance Engineer

Red Hat Czech s.r.o., Purkynova 647/111, 612 00  Brno, Czech Republic

E-mail: lesko.matej.pu@gmail.com
phone: +421 949 478 066
IRC:   mlesko at #brno, #messaging, #messaging-qe, #brno-TPB
