Posted to user@jmeter.apache.org by Peter Lin <wo...@gmail.com> on 2005/10/31 05:36:43 UTC

Survey

I would like to ask the users of JMeter whether anyone uses JMeter to run
automated tests. If you do, how do you structure your directories and files?
How would people imagine using a reporting tool? If you do use JMeter for
automated tests, can you take a minute to answer these questions?

1. how often do you run the automated tests?
2. how do you structure the files?
3. do you use a naming convention for the directories and files?
4. how many test plans do you run?
5. which listeners do you use to save the results
6. what kinds of charts and graphs do you want?

thanks in advance.


peter lin

Re: Survey

Posted by Rinke Heida <ri...@aris.nl>.
>I would like to ask the users of JMeter whether anyone uses JMeter to run
>automated tests. If you do, how do you structure your directories and files?
>How would people imagine using a reporting tool? If you do use JMeter for
>automated tests, can you take a minute to answer these questions?
>
>1. how often do you run the automated tests?
>  
>
About 4 times a year

>2. how do you structure the files?
>  
>
<project>\
   data\
      <version>_<testdate>\
         input\
            userNR.dat
            weekNR.dat
            etc.
         log\
            err.log
            run.log
            A1_1.html (response with errors)
         report\
            resp_u<NrOfThreads>.xls (responses)
            stat_u<NrOfThreads>.xls (min, max, mean etc per request of 
10 runs)
            diff_u<NrOfThreads>.xls (difference with older version)
            totl.xls (total report with stats and diffs on testplan level)
         output\
            <testplancode>_u<NrOfThreads>.jtl (csv output)
            A_u10.jtl
            etc.
   log\ (temporary log directory, copied afterwards to the <version> log directory)
   prg\
      bat\ (startup scripts for JMeter and setting variables)
      ini\ (jmeter ini file)
      jmx\
         <testplancode>.jmx
         A.jmx
         B.jmx
         etc.
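A layout like this lends itself to scripted non-GUI runs. A minimal sketch (the project name, version, and date below are illustrative; only the -n/-t/-l flags are real JMeter command-line options) that derives the file names from the convention above:

```python
# Build the file names for one non-GUI run, following the layout above:
# <project>/data/<version>_<testdate>/output/<testplancode>_u<threads>.jtl
def build_run(project, version, testdate, plancode, threads):
    rundir = f"{project}/data/{version}_{testdate}"
    jtl = f"{rundir}/output/{plancode}_u{threads}.jtl"
    jmx = f"{project}/prg/jmx/{plancode}.jmx"
    # -n = no GUI, -t = test plan, -l = result log
    cmd = ["jmeter", "-n", "-t", jmx, "-l", jtl]
    return jtl, cmd

jtl, cmd = build_run("myproject", "1.2", "20051031", "A", 10)
print(jtl)   # myproject/data/1.2_20051031/output/A_u10.jtl
```

A startup script in prg\bat\ could then loop over the test plan codes A to O and build one command line per plan.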

>3. do you use a naming convention for the directories and files?
>  
>
The version of the webapp and the testdate are the name of a 
subdirectory in which all input and output is stored (in subdirectories).
The testplans are named A to O.

>4. how many test plans do you run?
>  
>
15 test plans, of which 4 are run against 3 configurations of the 
web application (23 different tests in total).

>5. which listeners do you use to save the results
>  
>
Simple Data Writer to log errors only.
Aggregate Report, View Results Tree and Assertion Results in GUI mode 
(building the test plan, checking errors, etc.).
-l <resultlog> on the command line.

Not a listener, but very useful: "save failed responses only" (a pity 
that it saves into the JMeter bin directory).

>6. what kinds of charts and graphs do you want?
>  
>
None at the moment; we only determine differences between two versions 
of the web application with a custom-built VB/Excel app.

>thanks in advance.
>
>
>peter lin
>  
>
Rinke Heida

---------------------------------------------------------------------
To unsubscribe, e-mail: jmeter-user-unsubscribe@jakarta.apache.org
For additional commands, e-mail: jmeter-user-help@jakarta.apache.org


Re: Survey

Posted by Bronagh McElduff <Br...@mobilecohesion.com>.
Hi Peter,

Comments inline..

Kind regards,
Bronagh

Peter Lin wrote:

>I would like to ask the users of JMeter whether anyone uses JMeter to run
>automated tests. If you do, how do you structure your directories and files?
>How would people imagine using a reporting tool? If you do use JMeter for
>automated tests, can you take a minute to answer these questions?
>
>1. how often do you run the automated tests?
>  
>
We run our automated tests daily against the nightly build.  A subset of 
these tests forms our integration test suite and is used in acceptance 
of an official release (fortnightly).

>2. how do you structure the files?
>  
>
Each test plan essentially represents a functional area.  This plan is 
driven by a data file, and each line in the datafile represents an 
individual test case.  The test plans are run using the following Ant 
target (the opening <target> tag was missing here; its name is illustrative):

  <target name="run-jmeter-tests">
    <jmeter jmeterhome="${home.jmeter}"
            resultlog="${resultlog}">
      <testplans dir="${test.dir}">
        <include name="**/*.jmx"/>
      </testplans>
    </jmeter>
    <xslt in="${resultlog}"
          out="${htmlreport}"
          style="${stylesheet}"/>
  </target>

N.B. the stylesheet used is: jmeter-results-detail-report.xsl

Executing via this task means that the test directory structure is 
essentially lost.  However, it would be great if the directory structure 
in which the test plans reside (see attached diagram) could be replicated 
and used to store the result files produced.  Perhaps, in addition to the 
single results file, if the user specifies a results file as part of the 
test plan, the results for that individual plan could also be written 
there when executed automatically (currently this only happens when the 
test plan is executed via the JMeter GUI),
i.e. <stringProp 
name="filename">${JMETER_HOME}/E2E/Results/Integration/MM7PassthruVaspOrig.jtl</stringProp>

Ideally, I would like to group a number of plans into a functional area 
that would represent a test suite.  This could then be identified by the 
XSLT so that the results could be displayed in suites rather than as one 
block of tests.

I think a big improvement in terms of reporting would be to provide 
historical results.  If the same set of tests is executed nightly, it is 
very useful to be able to observe trends, particularly for performance 
tests.
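Historical trending like that could be sketched as follows, assuming the elapsed times have already been parsed out of each night's JTL file (the dates and numbers below are invented):

```python
from statistics import mean

# `nightly` maps a run date to the elapsed times (ms) collected from
# that night's result log; the output is one average per night, sorted
# by date, so rising response times stand out across builds.
def trend(nightly):
    return {date: round(mean(times), 1)
            for date, times in sorted(nightly.items())}

history = {
    "2005-10-29": [120, 130, 125],
    "2005-10-30": [140, 150, 145],
    "2005-10-31": [160, 170, 165],
}
print(trend(history))  # averages rise night over night
```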

>3. do you use a naming convention for the directories and files?
>  
>
See attached diagram

>4. how many test plans do you run?
>  
>
Currently 10 test plans, with a view to increasing this to reflect 600 
test cases in the next 4 weeks.

>5. which listeners do you use to save the results
>  
>
Assertion results
Graph results
Results tree
Aggregate report

>6. what kinds of charts and graphs do you want?
>  
>
TPS would be a more meaningful measurement for me.

>thanks in advance.
>
>
>peter lin
>
>  
>



RE: Survey

Posted by Bruno Charloup <bc...@jouve.fr>.
Peter,
Please see my answers inline:

-----Original Message-----
From: Peter Lin [mailto:woolfel@gmail.com] 
Sent: Monday, 31 October 2005 05:37
To: jmeter-user
Subject: Survey

I would like to ask the users of JMeter whether anyone uses JMeter to run
automated tests. If you do, how do you structure your directories and files?
How would people imagine using a reporting tool? If you do use JMeter for
automated tests, can you take a minute to answer these questions?

1. how often do you run the automated tests?
At present we are at the stage of creating test plans. As our aim is
non-regression testing, we plan to run these tests before each new release
or bug fix.

2. how do you structure the files?
Each file is structured to match our Word test plan (a document from our
quality system). Often there is only one thread group per test plan, and we
put several Simple Controllers in the test plan in order to group URLs after
recording the scenario. The name given to each Simple Controller is the
name of a checkpoint in our written test plan (see the enclosed picture file
testplan_structure.png). Because we met difficulties (JMeter crashes) when
recording test plans, we have limited the size of each JMeter test plan to
1 MB (response assertions included). It is also simpler for us to have
small JMeter test plans: if part of our application evolves, only a few
JMeter test plans must be updated.

3. do you use a naming convention for the directories and files?
The names of the directories and files match the level 1, 2 and 3 headings
in our Word test plan (see the second screenshot in the attached file).

4. how many test plans do you run?
For our first use of JMeter, our Word test plan contains 680 checkpoints.
So far, we have written more than 68 test plans.

5. which listeners do you use to save the results 
To check the results we use the View Results Tree and Assertion Results
listeners.  We plan to use the sample Ant task given in the extras folder
to get the list of test plans with errors, but this kind of report doesn't
give enough detail to identify, in case of failure, which assertion failed.

6. what kinds of charts and graphs do you want?
It would be interesting to have a listener which could say, for an HTTP
sampler with a response assertion in error, which response assertion
failed.
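Failing that, such a report could in principle be scraped from an XML result log. The sketch below assumes the JMeter 2.x XML format (httpSample elements containing assertionResult children); the embedded sample document is invented:

```python
import xml.etree.ElementTree as ET

# Invented sample in the JMeter 2.x XML result-log style: one failed
# sampler with a failed assertion, and one successful sampler.
SAMPLE = """<testResults>
  <httpSample lb="Login page" s="false">
    <assertionResult>
      <name>Check title</name>
      <failure>true</failure>
      <failureMessage>Expected 'Welcome'</failureMessage>
    </assertionResult>
  </httpSample>
  <httpSample lb="Home page" s="true"/>
</testResults>"""

# Return (sampler label, assertion name) for each failed assertion.
def failed_assertions(xml_text):
    failures = []
    for sample in ET.fromstring(xml_text).iter("httpSample"):
        for ar in sample.iter("assertionResult"):
            if ar.findtext("failure") == "true":
                failures.append((sample.get("lb"), ar.findtext("name")))
    return failures

print(failed_assertions(SAMPLE))  # [('Login page', 'Check title')]
```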


thanks in advance.


peter lin


Re: Survey

Posted by m mat <pe...@yahoo.com>.
Answers Inline

Peter Lin <wo...@gmail.com> wrote:
I would like to ask the users of JMeter whether anyone uses JMeter to run
automated tests. If you do, how do you structure your directories and files?
How would people imagine using a reporting tool? If you do use JMeter for
automated tests, can you take a minute to answer these questions?

1. how often do you run the automated tests?

Matt: Right now we are running them a few times every day as we tune the system; going forward we will put them in the build script and run them with builds.

2. how do you structure the files?

Test Folder

---Scripts (has all jmx files)

---Data (has all data files)

---Results (results get stored here)

---Logs (perfmon and JMeter logs get stored here)
3. do you use a naming convention for the directories and files?

---No, I just use the directory structure to make sense of the tests
4. how many test plans do you run?

--- About 10
5. which listeners do you use to save the results

--- Aggregate Report

--- My ant script converts the jtl file to a html report
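A conversion along those lines might look like the sketch below (hypothetical; the column names assume a CSV JTL with a label,elapsed,success header, which depends on your jmeter.properties settings):

```python
import csv, io
from statistics import mean

# Invented CSV JTL fragment for illustration.
JTL = """label,elapsed,success
Login,120,true
Login,140,true
Search,300,false
"""

# Group samples by label and emit one HTML table row per label with
# the sample count and average elapsed time.
def jtl_to_html(text):
    rows = {}
    for r in csv.DictReader(io.StringIO(text)):
        rows.setdefault(r["label"], []).append(int(r["elapsed"]))
    cells = "".join(
        f"<tr><td>{lbl}</td><td>{len(ts)}</td><td>{mean(ts):.0f}</td></tr>"
        for lbl, ts in rows.items()
    )
    return ("<table><tr><th>Label</th><th>Samples</th><th>Avg ms</th></tr>"
            f"{cells}</table>")

print(jtl_to_html(JTL))
```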
6. what kinds of charts and graphs do you want?

Aaah. 

a. Number of threads with average transaction performance: the X axis should show elapsed time, the Y axis time taken, and the chart should show the number of threads and the average readings of all transactions (samplers)

b. Would love to compare one result to another, later result

c. Transaction performance: put users on the X axis and time taken on the Y axis, and plot each sampler

 

Thanks

 

Matt




Re: Survey

Posted by Peter Bernier <jm...@binarytwo.com>.
NOTE: As part of a build, I use JMeter more for functional testing than
for load-testing. (I got to a point where I figured 'hey, I've got all
these load-testing scripts that already do a bunch of assertions, why
not run them before we release a build and see if they pick up
anything?'..)

>1. how often do you run the automated tests?

Before a new build is released to QC. Regression testing with JMeter is
not yet part of our process, but I've been 'unofficially' building up
and running an automation suite over the past six months. Occasionally I
do manage to pick up some stuff so it's already proved its worth.

>2. how do you structure the files?

Individual scripts based on high-priority feature/defect functionality. I
also have a few more broad-based scripts to target areas that we've had
trouble with in the past.

>3. do you use a naming convention for the directories and files?

<Major Project Number>_Tests\<feature_or_Defect_number>.jmx
For each jmx that I create, I also accompany it with a brief text file
outlining the required data for the script, an explanation of any
variables to be defined, as well as quick description of each action
that script performs. (ie, go here, do this, record this variable etc).

>4. how many test plans do you run?

Currently about 35, spread out over three code streams. (More scripts on
the streams currently in development obviously).

>5. which listeners do you use to save the results

Assertion Results, View Results Tree and Aggregate Report.

>6. what kinds of charts and graphs do you want?

Nothing currently. The only graphs that I package in my reports are
CPU/memory/network usage. It'd be nice to have a measure of how
response times change over the course of a test (i.e., the average
response time only applies to the entire test; it'd be nice to get a bit
more visibility into how that figure breaks down). (Disclaimer: it may be
possible to do that with another listener or something, I just haven't
had time to update/enhance my suite in quite a while.)
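One way to get that visibility is to bucket samples into fixed time windows; a rough sketch (the sample data is invented, and timestamps and elapsed times are assumed to be in milliseconds):

```python
from statistics import mean

# Bucket (timestamp_ms, elapsed_ms) samples into fixed windows so the
# average response time per window shows how it drifts during the test.
def windowed_averages(samples, window_ms=60000):
    buckets = {}
    start = min(ts for ts, _ in samples)
    for ts, elapsed in samples:
        buckets.setdefault((ts - start) // window_ms, []).append(elapsed)
    return {w: round(mean(v), 1) for w, v in sorted(buckets.items())}

samples = [(0, 100), (30000, 110), (60000, 200), (90000, 220)]
print(windowed_averages(samples))  # {0: 105.0, 1: 210.0}
```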

- Peter Bernier



RE: Survey

Posted by je...@bnf.fr.
1. how often do you run the automated tests?
For every new version, if there is a risk of performance loss (a change
of EJB version, of database, ...)

2. how do you structure the files?
 -----> appli1 -----> test    -> myTest.jmx
                  |
                  |-> data    -> myData1.csv
                  |           -> myData2.csv
                  |
                  |-> results -> ATemporayFile.jtl
                              -> testsForVersion05.1
                                    |
                                    |-> myTestoOctober31.jtl
                                    |-> analyseTestForVersion05.1.xls

-----> appli2 -----> test ...

myTest.jmx contains the scenario.
myData1.csv contains data for the scenarios, for example a list of user/password pairs.
results contains the results: in most cases, I export listener results to a
temporary file, which I then copy to a directory dedicated to a version.
Afterwards, I insert and analyze my data in an Excel file.

3. do you use a naming convention for the directories and files?
see above

4. how many test plans do you run?
Now I have 4 test plans, but the number is growing.

5. which listeners do you use to save the results
Aggregate

6. what kinds of charts and graphs do you want?
Response Time by servlets
Statistics by servlets



