Posted to users@cocoon.apache.org by Lars Huttar <la...@sil.org> on 2006/05/26 22:37:26 UTC

webapp testing

Hello,

Looking for some recommendations from those with experience...

I have been trying to set up a good method of automated testing for our 
webapps. One use case is regression testing: we converted a Cocoon 
webapp to use SQL Server on the back end instead of Oracle, and we want 
to find the places where the output has changed.
What can you recommend for that purpose?

I've looked at several web testing tools, mostly open-source, such as 
Anteater, WebInject, Morebot, GetLeft, and Grab-a-Site.
OK, the last two are just site grabbers, but those can potentially make 
good automated regression test tools.

I'd settled on Anteater for a while. One of its strengths is the ability 
to fetch two URLs and compare the responses:
    <httpRequest
        href="http://localhost/mount/ethnologue-oracle/book/country-index?cocoon-view=raw">
      <match>
        <contentEquals
            href="http://localhost/mount/ethnologue-last/book/country-index?cocoon-view=raw"/>
      </match>
    </httpRequest>

This compares the output of the Oracle and SQL Server versions of the 
webapp for a particular page.

Another strength of Anteater is the ability to present test results as 
nicely organized and readable HTML reports, with detail appropriately 
hidden until you ask for it. See 
http://aft.sourceforge.net/example_output/frames/index.html

One feature Anteater doesn't seem to provide is automated discovery of 
the URLs to test, e.g. by crawling the webapp. But I've been filling 
that hole with an XML list of URLs and a stylesheet that generates an 
Anteater project file from it, creating URL pairs from the input URLs.
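In case it's useful to anyone, the generation step can be sketched roughly as below. This is a simplified Python stand-in for the stylesheet (the base URLs, function name, and sample path are just illustrative, echoing the example above, not our actual setup):

```python
# Sketch: generate Anteater httpRequest/contentEquals pairs from a URL list.
# OLD_BASE/NEW_BASE and anteater_tests are hypothetical names for illustration.
from xml.sax.saxutils import quoteattr

OLD_BASE = "http://localhost/mount/ethnologue-oracle"  # assumed base URLs
NEW_BASE = "http://localhost/mount/ethnologue-last"

def anteater_tests(paths):
    """Emit one httpRequest/contentEquals pair per URL path."""
    tests = []
    for p in paths:
        tests.append(
            '  <httpRequest href=%s>\n'
            '    <match>\n'
            '      <contentEquals href=%s/>\n'
            '    </match>\n'
            '  </httpRequest>'
            % (quoteattr(OLD_BASE + p), quoteattr(NEW_BASE + p)))
    return "\n".join(tests)

print(anteater_tests(["/book/country-index?cocoon-view=raw"]))
```

quoteattr takes care of escaping ampersands in query strings, which a naive string-paste would get wrong.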

The place where I'm having trouble with Anteater is in the comparison of 
non-ASCII characters. It keeps telling me that the output of the two 
versions differs, even though when I look at the raw XML they seem to 
be exactly the same. This was driving me crazy, until I discovered that 
Anteater also reports (every time) that the output of the Oracle version 
differs from the output of the Oracle version. In other words, 1 is not 
equal to 1. (Yes, this is a deterministic webapp.) So much for Anteater. 
I may report this bug, but Anteater hasn't been updated in three years, 
so who knows whether it will ever be fixed. Perhaps this is an example 
of what Joel Spolsky complains about at 
http://www.joelonsoftware.com/articles/Unicode.html
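The sanity check that makes this kind of failure legible could be sketched like this (a toy Python helper I made up, not anything Anteater provides): compare responses both as raw bytes and as decoded text. If the decoded text matches but the bytes don't, the two servers are sending different encodings of the same characters; if a page "differs" from itself, the tool's own decoding is suspect.

```python
# Sketch: distinguish "same bytes", "same characters in different encodings",
# and a real content difference. `compare` is a hypothetical helper name.
def compare(a_bytes, a_enc, b_bytes, b_enc):
    """Classify two response bodies fetched with known encodings."""
    if a_bytes == b_bytes:
        return "same bytes"
    if a_bytes.decode(a_enc) == b_bytes.decode(b_enc):
        return "same text"       # encoding differs, characters don't
    return "different"

# The same characters, serialized two ways:
s = "Español"
print(compare(s.encode("utf-8"), "utf-8", s.encode("latin-1"), "latin-1"))
```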


On to the site grabbers. These have the advantage of automatically 
finding the URLs you want to test, starting from your index page.
After grabbing the two webapps into two folder trees, you can then, 
theoretically, run a diff (using e.g. WinMerge) between them and easily 
spot the differences.
However, our webapp is data-driven, and it has tens of thousands of 
possible URLs that a crawler would find. We don't want to test every 
one... it would take days.
So we'd like to exclude a certain set of URLs matching a wildcard 
pattern, but the site grabbers don't seem to support that feature.
Maybe this is an argument for "design the app to fit the testing tools", 
which may be a worthwhile principle in this imperfect world. But it's a 
pain when you're trying to add testing after development is done.
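If we end up rolling our own crawl, the exclusion itself is the easy part: filter discovered URLs against wildcard patterns before fetching. A rough sketch (the patterns and URLs are invented examples, not our real site's structure):

```python
# Sketch: wildcard-based URL exclusion for a hand-rolled crawl.
# EXCLUDE and should_test are hypothetical names for illustration.
from fnmatch import fnmatch

# Note: in fnmatch patterns "?" is itself a single-character wildcard,
# so match query strings by their parameter text instead.
EXCLUDE = ["*/show_language/*", "*cocoon-view=raw*"]

def should_test(url, exclude=EXCLUDE):
    """True if the URL matches no exclusion pattern."""
    return not any(fnmatch(url, pat) for pat in exclude)

urls = ["/book/country-index",
        "/show_language/1234",   # one of tens of thousands of data pages
        "/book/introduction"]
to_test = [u for u in urls if should_test(u)]
```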

So... what have you found to be successful in this area? How do you address
- coming up with a list of URLs to test (has anyone tried generating a 
draft list directly from a Cocoon sitemap?)
  - without testing every possible datum in your huge database
- comparing the output of two URLs and pinpointing where they differ, 
without having to do a manual search?
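For that last point, the closest I have to a fallback is a plain line-oriented diff of two fetched bodies, something like the following sketch (fetching omitted; the sample pages and the `first_differences` helper are made up):

```python
# Sketch: pinpoint where two page bodies diverge with a unified diff.
import difflib

def first_differences(old_body, new_body, context=1, limit=20):
    """Return up to `limit` unified-diff lines for two page bodies."""
    diff = difflib.unified_diff(
        old_body.splitlines(), new_body.splitlines(),
        fromfile="oracle", tofile="sqlserver", lineterm="", n=context)
    return list(diff)[:limit]

old = "<country>Chad</country>\n<pop>9,253,000</pop>"
new = "<country>Chad</country>\n<pop>9253000</pop>"  # formatting regression
for line in first_differences(old, new):
    print(line)
```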

Thanks for your help,
Lars


---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@cocoon.apache.org
For additional commands, e-mail: users-help@cocoon.apache.org


Re: webapp testing

Posted by Simone Gianni <s....@thebug.it>.
Hi Lars,
forgot to mention that it's free and open source. Also, since tests are
executed in a target browser, it can really test whether your site works
in browser X, while other test suites usually execute tests in a
simulated environment, which is commonly not representative of any real
browser at all.

Simone

Simone Gianni wrote:

>Hi Lars,
>we successfully used Selenium. "writing" the test is as simple as
>recording it with Selenium-IDE, which is a firefox plugin. Then adding
>"wise" testing (like check that this thing is here, or changing a
>recorded click to a data-driven one etc..) is quite easy.
>
>I'm not sure it's the best tool for your needs, but since it uses xpath
>both for clicks and checks, it can easily avoid the problem of
>data-driven applications.
>
>Hope this helps,
>Simone
-- 
Simone Gianni



Re: webapp testing

Posted by Simone Gianni <s....@thebug.it>.
Hi Lars,
we successfully used Selenium. "Writing" a test is as simple as
recording it with Selenium IDE, which is a Firefox plugin. Then adding
"wise" testing (like checking that a given element is present, or
changing a recorded click to a data-driven one, etc.) is quite easy.

I'm not sure it's the best tool for your needs, but since it uses XPath
both for clicks and checks, it can easily avoid the problems of
data-driven applications.
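The idea is to assert structure rather than literal content, so the tests survive changes in the underlying data. Outside the browser, the same style of check can be sketched with ElementTree's limited XPath subset (the sample XML below is invented, just to show the shape):

```python
# Sketch: structural, data-independent checks via path expressions,
# standing in for Selenium's fuller XPath support. Sample page is invented.
import xml.etree.ElementTree as ET

page = ET.fromstring(
    "<page><nav/><results>"
    "<row id='1'/><row id='2'/>"
    "</results></page>")

# These assertions hold no matter what data the rows contain:
assert page.find("nav") is not None
assert len(page.findall("results/row")) >= 1
print("structural checks passed")
```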

Hope this helps,
Simone

-- 
Simone Gianni



Re: webapp testing

Posted by Christofer Dutz <du...@c-ware.de>.
Hi Lars,

We use Jstudio Sitewalker: http://www.jstudio.de/German/DownloadsDE.htm
It is a very powerful tool for performing automated functional tests of
complex websites. All you have to do is record what you want to do
using the integrated Internet Explorer and then customise the test. It
works fine with Flows and CForms. They offer a trial version, so I'd
just give it a try.

Chris
