Posted to jmeter-dev@jakarta.apache.org by Jordi Salvat i Alabart <js...@atg.com> on 2003/08/11 02:35:16 UTC

JMeter 2.0 [Re: Source dist build]


mstover1@apache.org wrote:
> I'm pretty committed to the idea that JMeter 2.0 will be drastically different from JMeter 1.9.  
> So, feel free to make big changes.

Hey, Mike -- we'd like to know what you're thinking about.

-- 
Salut,

Jordi.


Re: JMeter 2.0 [Re: Source dist build]

Posted by Jordi Salvat i Alabart <js...@atg.com>.

Jeremy Arnold wrote:
>  [...] But since most threads are sleeping most 
> of the time, perhaps we can come up with some sort of thread pool, so 
> that a large number of JMeter "threads" (perhaps better to call them 
> "users" in this case) could be handled by a smaller number of JVM 
> threads.  It could be a bit tricky to ensure that we have the right 
> number of JVM threads to handle the JMeter users, and that samples are 
> executed when they are supposed to.  But it seems like there could be 
> potential.

I usually prefer to leave the low-level stuff to the low-level layer: 
the JVM makers should take care of threading efficiency (and it actually 
looks like they do; see 
http://developer.java.sun.com/developer/technicalArticles/JavaTechandLinux/RedHat/ 
).

> When I read Jordi's message, I thought he was referring to having a system 
> dedicated to performance regression tests, so that we can see the 
> effects of changes to JMeter on its performance.  For example, if we 
> start messing with a thread pool, we would need to be certain that we 
> weren't impacting the results (at least not negatively -- but even if we 
> made an improvement it would be good to document that).
> 

Yes, that's exactly what I meant. Thanks for the clarification.

> Seems like we've got some high hopes for JMeter 2.0...even in just a 
> short discussion -- I'm looking forward to getting started on it.
> 
> Jeremy
> http://xirr.com/~jeremy_a

-- 
Salut,

Jordi.


Re: JMeter 2.0 [Re: Source dist build]

Posted by Jeremy Arnold <je...@bigfoot.com>.
It's always nice to see other people thinking in the same general 
directions that I am.

I think Jordi is on the right track about having a separate analysis 
component.  I would like to keep the Visualizers out of the Test Plan -- 
leave the Test Plan with the job of describing the test.  Make some 
basic statistics available at runtime during the test -- how many 
samples passed/failed, an estimate of throughput and response time, and 
perhaps some other data which can be calculated or estimated cheaply at 
runtime (including with remote engines).  Have each engine track more 
detailed data which can be aggregated at the end of the run, and then 
more detailed analysis can be done on this data.

Obviously there are some cases that have to be treated specially -- some 
data is expensive to collect, so you wouldn't necessarily want to store 
it unless the user specifically requested it.

Another extension I would like to see is a pluggable module to provide 
extra data which can be correlated with the data that JMeter collects. 
One such module could get the CPU utilization on the remote server 
system.  Another could get performance statistics from a Tomcat server. 
Or a WebSphere server.  Or whatever else somebody felt was useful enough 
to write a module for.  JMeter wouldn't need to know the details about 
what is being stored...we just have to develop some kind of generic way 
to store it.
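
The contract could be as thin as this (a sketch only -- the interface and 
method names are invented for illustration, not an existing JMeter API):

// Hypothetical plug-in contract for external performance data.
// A CPU module, a Tomcat module, a WebSphere module, etc. would each
// implement this; JMeter just stores timestamped readings against it.
public interface StatisticSource {
    // Label used to tag the stored data, e.g. "server CPU %".
    String getLabel();

    // Called periodically during the test; returns the current reading.
    double getCurrentValue();
}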


Regarding single-threaded operation:  I think going single-threaded would 
probably not be a good idea.  But since most threads are sleeping most 
of the time, perhaps we can come up with some sort of thread pool, so 
that a large number of JMeter "threads" (perhaps better to call them 
"users" in this case) could be handled by a smaller number of JVM 
threads.  It could be a bit tricky to ensure that we have the right 
number of JVM threads to handle the JMeter users, and that samples are 
executed when they are supposed to.  But it seems like there could be 
potential.
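
To illustrate what I mean (a sketch only -- it uses the java.util.concurrent 
scheduling API for brevity and dodges the hard part, which is keeping samples 
on schedule when all pool threads are busy):

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class UserScheduler {
    public static void main(String[] args) {
        // 1000 simulated "users" multiplexed onto only 10 JVM threads.
        ScheduledExecutorService pool = Executors.newScheduledThreadPool(10);
        for (int i = 0; i < 1000; i++) {
            final int user = i;
            pool.scheduleAtFixedRate(new Runnable() {
                public void run() {
                    // Stand-in for the real Sampler.sample() call.
                    System.out.println("user " + user + " sampling");
                }
            }, 0, 5, TimeUnit.SECONDS);  // each user samples every 5 seconds
        }
    }
}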

>> Some performance and accuracy tests would also be great. I'm thinking 
>> about how to do those. An important bit would be unused hardware available 
>> for the long term for this purpose only (or almost)... I think I can provide this.
>
> I've used various techniques to ensure the accuracy of my numbers - primarily 
> running an extra test client with a very low load and comparing its numbers to 
> those of the high-load clients.  I think the best way to handle it is through 
> documentation explaining these techniques and other ways of analyzing data. 
> Another way to help might be a visualizer that shows samples as a line that 
> shows its beginning time and end time, making it easy to see overlapping 
> samples, and thus see potential timing conflicts.
When I read Jordi's message, I thought he was referring to having a system 
dedicated to performance regression tests, so that we can see the 
effects of changes to JMeter on its performance.  For example, if we 
start messing with a thread pool, we would need to be certain that we 
weren't impacting the results (at least not negatively -- but even if we 
made an improvement it would be good to document that).


Seems like we've got some high hopes for JMeter 2.0...even in just a 
short discussion -- I'm looking forward to getting started on it.

Jeremy
http://xirr.com/~jeremy_a



Re: JMeter 2.0 [Re: Source dist build]

Posted by Jordi Salvat i Alabart <js...@atg.com>.

mstover1@apache.org wrote:
> Lots of little things like the drag and drop need polishing - I'd prefer to be able to drag and drop 
> multiple files at once, for instance.  I'm not sure exactly what you are referring to with Eclipse (I 
> don't find myself dragging files around in Eclipse), but I imagine you are thinking of a system 
> whereby visual cues are provided to indicate whether you're about to drop an element into, 
> above, or below a tree node.  I wouldn't think that would be too hard.

Sorry -- my mistake: Eclipse does _not_ do that. But you understood what 
I meant. For an example, see Mozilla 1.4's bookmark management (I checked 
this one this time). You're right: drag'n'drop of multiple files is a must.

> Yes, and maybe automatic adding of Cookie Managers to plans that include HTTPSamplers?
> 

Or a wizard that sets things up for HTTP work?
- Ask for server & port (default 80).
- Ask whether you want the script to get images, applets, etc. or not
- Set up Thread Group, Recording Controller, one listener of choice, 
Cookie Manager.
- Set up Proxy with appropriate stuff depending on inputs above.

>>How about leaving listeners for real-time test result visualization & 
>>test result gathering/saving and having a separate application (or 
>>module) for more complex data analysis. Maybe there's something in the 
>>open-source world we can use straight away?
> 
> 
> Sounds great.
> 

I'll start a search.

>>Instead, I would focus on accuracy by raising the priority of threads 
>>during actual sampling. It would not improve total performance in terms of 
>>max throughput, but it would improve measurement accuracy at mid and high 
>>loads.
> 
> 
> I've thought about this but I don't think it scales up very high.  The majority of any JMeter 
> thread's time is spent sleeping, either in a timer delay or waiting for IO.  Giving all your IO-
> waiting threads a higher priority doesn't help much.  I also think it might worsen things to make 
> a bunch of threads sitting on IO calls the highest priority!
> 

It should not worsen things much: a sleeping thread is a sleeping 
thread, no matter what its priority. The difference is that once it wakes 
up, it runs with a minimum of obstacles to completing the sample.

Of course it would not improve throughput at all -- if anything it would 
reduce it slightly, because switching priorities has a cost, even if 
small. But accuracy at high loads could improve significantly.

> You mention socket factories - is it possible for JMeter to control all sockets created within the 
> JVM?

I have no idea. It was just a shot in the dark, but I'll research it.

-- 
Salut,

Jordi.


Re: JMeter 2.0 [Re: Source dist build]

Posted by ms...@apache.org.
Great feedback, Jordi - responses below.

On 11 Aug 2003 at 10:15, Jordi Salvat i Alabart wrote:

> mstover1@apache.org wrote:
> > I've been using JMeter as a user quite a bit the past few weeks, and I've learned some things 
> > about it.  One is that it's very tedious to use, and so a lot of my thoughts have to do with 
> > creating more powerful tools to manipulate test scripts.  I think I'd like to introduce the idea of 
> > alternate ways to view a test plan, a la Eclipse, so that different aspects of test plan editing can 
> > be brought to the forefront.
> > 
> 
> It's true that test editing is tedious, but I don't really see different 
> "aspects" in such a heavy way as Eclipse -- maybe visualization options?
> 
> Control vs. non-control elements: you had commented in the past about 
> control elements (controllers & samplers) vs. non-control elements 
> (where order essentially doesn't matter). Would be great to have an 
> option to show/hide those non-control elements when viewing the tree. 
> Also to see them in a separate panel showing all those applying to the 
> current control element -- with 'inherited' ones greyed out. Most 
> importantly because it would provide new (and not-so-new) users a 
> clearer view of which non-control elements apply to which control elements.
[reordering]
> Bulk editing: A find/replace feature is the most obvious. Another nice one 
> could be to be able to select multiple test elements of the same type 
> and see the editor in the right panel show white fields for values that 
> are equal in all of them -- you could edit these straight-away -- and 
> fields with different values in grey -- possibly non-editable.

A perfect example - a view that shows you a slice of the test plan, by component type, and 
provides an easy way to edit all at once.  I would think that you'd want such code to not get 
mixed up with the existing GUI code, and thus it would be a separate module that provided you 
a different view of things.  Right now, too many elements are closely coupled in order to show 
the one particular view of things - JMeterTreeModel, JMeterTreeListener, GuiPackage, for 
instance.  The tree model should probably be a dumber data model that actors manipulate, 
and that would provide a good start toward implementing other views and editing options.
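
Something along these lines, perhaps (a sketch of the separation only -- the 
names are invented, not the current classes):

import java.util.ArrayList;
import java.util.List;

// The model knows nothing about Swing or trees; it only stores elements
// and notifies listeners.  The tree view, a by-component-type view, and a
// bulk editor would all be listeners rendering the same model their own way.
public class TestPlanModel {
    public interface ModelListener { void modelChanged(); }

    private final List elements = new ArrayList();
    private final List listeners = new ArrayList();

    public void add(Object element)    { elements.add(element); fire(); }
    public void remove(Object element) { elements.remove(element); fire(); }
    public List getElements()          { return elements; }

    public void addModelListener(ModelListener l) { listeners.add(l); }

    private void fire() {
        for (int i = 0; i < listeners.size(); i++) {
            ((ModelListener) listeners.get(i)).modelChanged();
        }
    }
}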

> 
> Tree editing: Eclipse trees have a nice way of indicating whether to 
> insert before, insert after, or add as child which would be very handy 
> -- our current way is a pain. I don't know if that's doable in Swing, 
> though.

Lots of little things like the drag and drop need polishing - I'd prefer to be able to drag and drop 
multiple files at once, for instance.  I'm not sure exactly what you are referring to with Eclipse (I 
don't find myself dragging files around in Eclipse), but I imagine you are thinking of a system 
whereby visual cues are provided to indicate whether you're about to drop an element into, 
above, or below a tree node.  I wouldn't think that would be too hard.

> Protocol pre-selection: by having options on which protocols we want to 
> use in the test we could avoid cluttering the menus with samplers & 
> config elements not applicable to those protocols.

Yes, and maybe automatic adding of Cookie Managers to plans that include HTTPSamplers?

> 
> Screen real-estate usage: reducing font size, getting rid of useless 
> spacing, etc... so that more space is left for panels such as the HTTP 
> request parameters.

Absolutely - I figured people would complain if I changed the font size though.

> 
> Another usability issue: it would be really nice to have certain test 
> elements provide a "dynamically-generated" default name (used in case 
> you leave the Name field blank). E.g. "Timer: 1.5 sec.", "Timer: 
> 10.0±5.0 sec.", "/home/index.jsp",...
> 
> > Remote testing needs to be revamped because it's pointless to have 10 remote machines all 
> > trying to stuff responses down the I/O throat of a single controlling machine - better to have the 
> > remote machines keep the responses till the end and not risk the accuracy of throughput 
> > measurements.  Perhaps a simpler format can be created for remote testing whereby during 
> > the test only success/failure plus response time is sent to the controlling machine, and 
> > everything else waits to the end of the test.
> 
> I agree, but note that this means significant rewrite of all listeners, 
> so that they can handle this two-phase input and still show meaningful 
> results.

Or the SampleListener interface could be given an extra method: 
summarySampleOccurred(long time, boolean success);

Really, all we need to know is that the test is running and samples are happening.  And at the 
end of the test, an easy way to retrieve the entire, fully recorded results.  Which could be 
handled by your new analysis module.
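
In sketch form (the first method is the one proposed above; the second is just 
a guess at what retrieving the full results at the end might look like):

import java.util.Collection;

public interface SummarySampleListener {
    // Sent per sample during the run: elapsed time plus pass/fail.
    // Cheap enough to stream live from remote engines.
    void summarySampleOccurred(long time, boolean success);

    // Called once after the run with the fully recorded results
    // (an opaque collection here, just for illustration).
    void testEnded(Collection fullResults);
}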

> 
> > I want test results categorized by test run, and not just as a list of SampleResults.  A set of 
> > sample results has a metadata set that describes the test run, and JMeter should be able to 
> > use such metadata to potentially combine test run results and also display statistics 
> > comparing two test runs (i.e., graphing # users vs. throughput).
> 
> How about leaving listeners for real-time test result visualization & 
> test result gathering/saving and having a separate application (or 
> module) for more complex data analysis. Maybe there's something in the 
> open-source world we can use straight away?

Sounds great.

> 
> > Result files need to be abstract datasources with an interface that visualizers talk to without 
> > knowing whether the backing data is an XML file, a CSV file, a database, etc.  Right now, 
> > JMeter knows how to write CSV files, but can't read them!
> 
> Note this would make sense if we had the separate analysis application I 
> was talking about.
> 
> > A defined interface will help us 
> > modularize this code whereas currently it's mixed up with the code for reading and writing test 
> > plan files.
> > 
> > Visualizers should be able to output useful file types for distribution of results to non-JMeter 
> > users.  HTML and PNG files, for instance.  Some way of exporting the data to a format that 
> > can be easily posted.
> 
> Again, a separate analysis tool could take care of this.
> 
> > I wanted to make JMeter single threaded with the new non-blocking IO packages, but I don't 
> > think this is feasible.
> 
> Definitely not doable for the Java samplers. Extremely difficult for 
> JDBC, difficult and probably not worth it for the rest (just my view -- 
> seems to match yours though).
> 
> Instead, I would focus on accuracy by raising the priority of threads 
> during actual sampling. It would not improve total performance in terms of 
> max throughput, but it would improve measurement accuracy at mid and high 
> loads.

I've thought about this but I don't think it scales up very high.  The majority of any JMeter 
thread's time is spent sleeping, either in a timer delay or waiting for IO.  Giving all your IO-
waiting threads a higher priority doesn't help much.  I also think it might worsen things to make 
a bunch of threads sitting on IO calls the highest priority!

> 
> Some performance and accuracy tests would also be great. I'm thinking 
> about how to do those. An important bit would be unused hardware available 
> for the long term for this purpose only (or almost)... I think I can provide this.

I've used various techniques to ensure the accuracy of my numbers - primarily 
running an extra test client with a very low load and comparing its numbers to 
those of the high-load clients.  I think the best way to handle it is through 
documentation explaining these techniques and other ways of analyzing data. 
Another way to help might be a visualizer that shows samples as a line that 
shows its beginning time and end time, making it easy to see overlapping 
samples, and thus see potential timing conflicts.

> 
> > It's possible to do if you can get access to the very sockets that do the 
> > communicating, but how will you get that for JDBC drivers?  Even for HTTP, we'd have to write 
> > our own HTTP Client from which we could gain access to the socket being used and control 
> > the IO for it (or take the commons client and modify it so).  Because to put it all in a single-
> > threaded model, we'd have to take control of the IO part, and force the samplers to hand their 
> > sockets to some central code that would take the socket, take the bytes the sampler wants to 
> > send, and it would hand back the return bytes plus timing info.  It'd be nice, but I don't think it's 
> > feasible for most protocols.
> > 
> > JMeter needs to collect more data.  Size of responses should be explicitly collected to help 
> > throughput calculations of the form bytes/second.  Timing data should include a latency 
> > measurement in addition to the whole response time.
> 
> Totally agree. The complete split would be:
> 1- DNS resolution time
> 2- Connection set-up time (SYN to SYN ACK)
> 3- Request transmission time (SYN ACK to ACK of last request packet)
> 4- Latency (ACK of last request packet to 1st response data packet)
> 5- Response reception time
> I'm not sure JMeter is the tool to separate 1, 2 and 3 (this is more of an 
> infrastructure-level thing than an application-level one), but 1+2+3+4 
> separate from 5 is a must. Top commercial tools separate them all.

You mention socket factories - is it possible for JMeter to control all sockets created within the 
JVM? And, if so, couldn't JMeter by that means take control of the low-level input and output?  
The question then becomes: how do we match up this data from the low-level socket control with 
the Sampler responsible for the data?

> 
> More accurate simulation of browser behaviour in terms of # of 
> concurrent connections, keep-alives, etc. would also be great. Even in 
> terms of available bandwidth: simulating modem/ISDN/ADSL users. Again, 
> this may not be JMeter's job -- application-level testing is more 
> important, IMO.
> 
> The problem is the same as above: this requires access to the internals of 
> the client code. How to do this for JDBC? Maybe changing socket 
> factories? But it's a must, so we need to think about it.
> 
> >  Multiple SampleResponses need to be 
> > dealt with better - I'm thinking that instead of an API that looks like:
> > 
> > Sampler{
> >    SampleResult sample();
> > }
> > 
> > We need one that's more based on a callback situation:
> > Sampler {
> >    void sample(SendResultsHereService callback);
> > }
>  >
> > so that Samplers can send multiple results to the collector service.  This would make 
> > samplers more flexible for when scripting in Python is allowed - to allow the ad hoc scripter to 
> > push out sample results at any time during their script.
> > 
> I feel pushing out multiple separate samples belongs more to controller 
> land rather than sampler land...

Good point - I'm all in favor of controllers sending out SampleResult events.

> 
> > Given this, components like assertions and post-processors need a way to know which 
> > result to apply themselves to.  We already have this problem wherein redirected samples 
> > confuse these components.  We need a way to either mark a particular response as "the main 
> > one" or define a response set all of which need to be tested by the applicable post-processors.
> 
> Isn't the current "sample-tree" structure correct for this? Wouldn't it 
> be enough to have post-processors, listeners,etc. know about such 
> "structured" sample results?

You're probably right.
> 
> > I'd also like to replace the Avalon Configuration stuff with something that can load files more 
> > stream-like and piecemeal, instead of creating a DOM and then handing it over to JMeter.  It 
> > goes too long without any feedback for the user, and it uses a ton of memory.
> 
> Maybe java.beans.XMLEncoder/XMLDecoder can help? (I've never used it, just 
> adding it to the long list).
> 
> > Sun's HTTP Client should be replaced.  As the cornerstone of JMeter, we ought to have one 
> > that is highly flexible to our needs, provides the most accurate timing it can, the best 
> > performance possible, the lowest resource usage possible, and the most transparency to 
> > JMeter's controlling code.  I think the commons HTTP Client is probably a good place to start; 
> > being open-source, we can craft it to our needs.
> 
> Totally agree that it needs to be replaced and that the HTTP Client is 
> our best bet.

Seems like we all think that.

-Mike

> 
> > Well, that's a start :-)
> > 
> -- 
> Salut,
> 
> Jordi.




--
Michael Stover
mstover1@apache.org
Yahoo IM: mstover_ya
ICQ: 152975688
AIM: mstover777

Re: JMeter 2.0 [Re: Source dist build]

Posted by Jordi Salvat i Alabart <js...@atg.com>.
mstover1@apache.org wrote:
> I've been using JMeter as a user quite a bit the past few weeks, and I've learned some things 
> about it.  One is that it's very tedious to use, and so a lot of my thoughts have to do with 
> creating more powerful tools to manipulate test scripts.  I think I'd like to introduce the idea of 
> alternate ways to view a test plan, a la Eclipse, so that different aspects of test plan editing can 
> be brought to the forefront.
> 

It's true that test editing is tedious, but I don't really see different 
"aspects" in such a heavy way as Eclipse -- maybe visualization options?

Control vs. non-control elements: you had commented in the past about 
control elements (controllers & samplers) vs. non-control elements 
(where order essentially doesn't matter). It would be great to have an 
option to show/hide those non-control elements when viewing the tree. 
Also to see them in a separate panel showing all those applying to the 
current control element -- with 'inherited' ones greyed out. Most 
importantly because it would provide new (and not-so-new) users a 
clearer view of which non-control elements apply to which control elements.

Tree editing: Eclipse trees have a nice way of indicating whether to 
insert before, insert after, or add as child which would be very handy 
-- our current way is a pain. I don't know if that's doable in Swing, 
though.

Bulk editing: A find/replace feature is the most obvious. Another nice one 
could be to be able to select multiple test elements of the same type 
and see the editor in the right panel show white fields for values that 
are equal in all of them -- you could edit these straight-away -- and 
fields with different values in grey -- possibly non-editable.

Protocol pre-selection: by having options on which protocols we want to 
use in the test we could avoid cluttering the menus with samplers & 
config elements not applicable to those protocols.

Screen real-estate usage: reducing font size, getting rid of useless 
spacing, etc... so that more space is left for panels such as the HTTP 
request parameters.

Another usability issue: it would be really nice to have certain test 
elements provide a "dynamically-generated" default name (used in case 
you leave the Name field blank). E.g. "Timer: 1.5 sec.", "Timer: 
10.0±5.0 sec.", "/home/index.jsp",...
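
E.g. each element type could supply its own fallback -- a sketch, with 
invented names:

public class TimerElement {
    private String name = "";          // what the user typed, possibly empty
    private double delaySeconds = 1.5;

    public String getDisplayName() {
        if (name != null && name.length() > 0) {
            return name;                           // an explicit name wins
        }
        return "Timer: " + delaySeconds + " sec."; // dynamic default
    }
}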

> Remote testing needs to be revamped because it's pointless to have 10 remote machines all 
> trying to stuff responses down the I/O throat of a single controlling machine - better to have the 
> remote machines keep the responses till the end and not risk the accuracy of throughput 
> measurements.  Perhaps a simpler format can be created for remote testing whereby during 
> the test only success/failure plus response time is sent to the controlling machine, and 
> everything else waits to the end of the test.

I agree, but note that this means a significant rewrite of all listeners, 
so that they can handle this two-phase input and still show meaningful 
results.

> I want test results categorized by test run, and not just as a list of sampleResults.  A set of 
> sample results has a metadata set that describes the test run, and JMeter should be able to 
> use such metadata to potentially combine test run results and also display statistics 
> comparing two test runs (ie, graphing # users vs throughput).  

How about leaving listeners for real-time test result visualization & 
test result gathering/saving and having a separate application (or 
module) for more complex data analysis. Maybe there's something in the 
open-source world we can use straight away?

> Result files need to be abstract datasources with an interface that visualizers talk to without 
> knowing whether the backing data is an XML file, a CSV file, a database, etc.  Right now, 
> JMeter knows how to write CSV files, but can't read them!

Note this would make sense if we had the separate analysis application I 
was talking about.

> A defined interface will help us 
> modularize this code whereas currently it's mixed up with the code for reading and writing test 
> plan files.
> 
> Visualizers should be able to output useful file types for distribution of results to non-jmeter 
> users.  HTML and PNG files, for instance.  Some way of exporting the data to a format that 
> can be easily posted.

Again, a separate analysis tool could take care of this.

> I wanted to make JMeter single threaded with the new non-blocking IO packages, but I don't 
> think this is feasible.

Definitely not doable for the Java samplers. Extremely difficult for 
JDBC, difficult and probably not worth it for the rest (just my view -- 
seems to match yours though).

Instead, I would focus on accuracy by raising the priority of threads 
during actual sampling. It would not improve total performance in terms of 
max throughput, but it would improve measurement accuracy at mid and high 
loads.
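
The mechanics would be simple enough -- something like this sketch (the 
Sampler interface here is just a stand-in for the real one):

public class PrioritySampling {
    interface Sampler { long sample(); }  // stand-in returning elapsed millis

    static long sampleAtHighPriority(Sampler sampler) {
        Thread current = Thread.currentThread();
        int normal = current.getPriority();
        current.setPriority(Thread.MAX_PRIORITY); // boost only for the measurement
        try {
            return sampler.sample();              // the actual timed sampling
        } finally {
            current.setPriority(normal);          // never leave the thread boosted
        }
    }
}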

Some performance and accuracy tests would also be great. I'm thinking 
about how to do those. An important bit would be unused hardware available 
for the long term for this purpose only (or almost)... I think I can provide this.

>  It's possible to do if you can get access to the very sockets that do the 
> communicating, but how will you get that for JDBC drivers?  Even for HTTP, we'd have to write 
> our own HTTP Client from which we could gain access to the socket being used and control 
> the IO for it (or take the commons client and modify it so).  Because to put it all in a single 
> threaded model, we'd have to take control of the IO part, and force the samplers to hand their 
> sockets to some central code that would take the socket, take the bytes the sampler wants to 
> send, and it would hand back the return bytes plus timing info.  It'd be nice, but I don't think it's 
> feasible for most protocols.
> 
> JMeter needs to collect more data.  Size of responses should be explicitly collected to help 
> throughput calculations of the form bytes/second.  Timing data should include a latency 
> measurement in addition to the whole response time.

Totally agree. The complete split would be:
1- DNS resolution time
2- Connection set-up time (SYN to SYN ACK)
3- Request transmission time (SYN ACK to ACK of last request packet)
4- Latency (ACK of last request packet to 1st response data packet)
5- Response reception time
I'm not sure JMeter is the tool to separate 1, 2 and 3 (this is more of an 
infrastructure-level thing than an application-level one), but 1+2+3+4 
separate from 5 is a must. Top commercial tools separate them all.
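
At the application level we can only approximate this split (the JVM hides the 
SYN/ACK handshake, so 2 and 3 collapse into "connect" and "send"), but here is 
a sketch of the measurable part, with example.org as a placeholder host:

import java.io.InputStream;
import java.io.OutputStream;
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.Socket;

public class TimingSplit {
    public static void main(String[] args) throws Exception {
        long t0 = System.currentTimeMillis();
        InetAddress addr = InetAddress.getByName("example.org");  // 1: DNS
        long t1 = System.currentTimeMillis();

        Socket s = new Socket();
        s.connect(new InetSocketAddress(addr, 80));               // 2: connect
        long t2 = System.currentTimeMillis();

        OutputStream out = s.getOutputStream();
        out.write("GET / HTTP/1.0\r\nHost: example.org\r\n\r\n".getBytes());
        out.flush();                                              // 3: request sent
        long t3 = System.currentTimeMillis();

        InputStream in = s.getInputStream();
        in.read();                                                // 4: first response byte
        long t4 = System.currentTimeMillis();

        byte[] buf = new byte[8192];
        while (in.read(buf) >= 0) { /* drain */ }                 // 5: rest of response
        long t5 = System.currentTimeMillis();
        s.close();

        System.out.println("dns=" + (t1 - t0) + " connect=" + (t2 - t1)
                + " send=" + (t3 - t2) + " latency=" + (t4 - t3)
                + " receive=" + (t5 - t4) + " ms");
    }
}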

More accurate simulation of browser behaviour in terms of # of 
concurrent connections, keep-alives, etc. would also be great. Even in 
terms of available bandwidth: simulating modem/ISDN/ADSL users. Again, 
this may not be JMeter's job -- application-level testing is more 
important, IMO.

The problem is the same as above: this requires access to the internals of 
the client code. How to do this for JDBC? Maybe changing socket 
factories? But it's a must, so we need to think about it.

>  Multiple SampleResponses need to be 
> dealt with better - I'm thinking that instead of an API that looks like:
> 
> Sampler{
>    SampleResult sample();
> }
> 
> We need one that's more based on a callback situation:
> Sampler {
>    void sample(SendResultsHereService callback);
> }
> so that Samplers can send multiple results to the collector service.  This would make 
> samplers more flexible for when scripting in Python is allowed - to allow the ad hoc scripter to 
> push out sample results at any time during their script.
> 
I feel pushing out multiple separate samples belongs more to controller 
land rather than sampler land...

> Given this, components like assertions and post-processors need a way to know which 
> result to apply themselves to.  We already have this problem wherein redirected samples 
> confuse these components.  We need a way to either mark a particular response as "the main 
> one" or define a response set all of which need to be tested by the applicable post-processors.

Isn't the current "sample-tree" structure correct for this? Wouldn't it 
be enough to have post-processors, listeners, etc. know about such 
"structured" sample results?

> I'd also like to replace the Avalon Configuration stuff with something that can load files more 
> stream-like and piecemeal, instead of creating a DOM and then handing it over to JMeter.  It 
> goes too long without any feedback for the user, and it uses a ton of memory.

Maybe java.beans.XMLEncoder/XMLDecoder can help? (I've never used it, just 
adding it to the long list).
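
For the record it lives in java.beans, and XMLDecoder does hand objects back 
one readObject() call at a time, which fits the piecemeal loading Mike wants. 
A minimal sketch (assuming whatever we feed it follows JavaBeans conventions):

import java.beans.XMLDecoder;
import java.beans.XMLEncoder;
import java.io.BufferedInputStream;
import java.io.BufferedOutputStream;
import java.io.FileInputStream;
import java.io.FileOutputStream;

public class XmlBeanIo {
    public static void save(Object plan) throws Exception {
        XMLEncoder enc = new XMLEncoder(
                new BufferedOutputStream(new FileOutputStream("plan.xml")));
        enc.writeObject(plan);  // writes the bean graph as XML
        enc.close();
    }

    public static Object load() throws Exception {
        XMLDecoder dec = new XMLDecoder(
                new BufferedInputStream(new FileInputStream("plan.xml")));
        Object plan = dec.readObject();  // one readObject() per stored object
        dec.close();
        return plan;
    }
}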

> Sun's HTTP Client should be replaced.  As the cornerstone of JMeter, we ought to have one 
> that is highly flexible to our needs, provides the most accurate timing it can, the best 
> performance possible, the lowest resource usage possible, and the most transparency to 
> JMeter's controlling code.  I think the commons HTTP Client is probably a good place to start; 
> being open-source, we can craft it to our needs.

Totally agree that it needs to be replaced and that the HTTP Client is 
our best bet.

> Well, that's a start :-)
> 
-- 
Salut,

Jordi.




Re: JMeter 2.0 [Re: Source dist build]

Posted by ms...@apache.org.
I've been using JMeter as a user quite a bit the past few weeks, and I've learned some things 
about it.  One is that it's very tedious to use, and so a lot of my thoughts have to do with 
creating more powerful tools to manipulate test scripts.  I think I'd like to introduce the idea of 
alternate ways to view a test plan, a la Eclipse, so that different aspects of test plan editing can 
be brought to the forefront.

Remote testing needs to be revamped because it's pointless to have 10 remote machines all 
trying to stuff responses down the I/O throat of a single controlling machine - better to have the 
remote machines keep the responses till the end and not risk the accuracy of throughput 
measurements.  Perhaps a simpler format can be created for remote testing whereby during 
the test only success/failure plus response time is sent to the controlling machine, and 
everything else waits to the end of the test.

I want test results categorized by test run, and not just as a list of SampleResults.  A set of 
sample results has a metadata set that describes the test run, and JMeter should be able to 
use such metadata to potentially combine test run results and also display statistics 
comparing two test runs (i.e., graphing # users vs. throughput).

Result files need to be abstract datasources with an interface that visualizers talk to without 
knowing whether the backing data is an XML file, a CSV file, a database, etc.  Right now, 
JMeter knows how to write CSV files, but can't read them!  A defined interface will help us 
modularize this code whereas currently it's mixed up with the code for reading and writing test 
plan files.
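
A sketch of the kind of interface I mean (names invented for illustration):

import java.util.Iterator;

// Visualizers iterate over recorded samples through this interface and
// never touch the backing store directly.  An XML file, a CSV file, and
// a database table would each get their own implementation.
public interface ResultDataSource {
    Iterator samples();          // each element is one recorded sample
    void append(Object sample);  // called while a test is writing results
    void close();
}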

Visualizers should be able to output useful file types for distribution of results to non-JMeter 
users.  HTML and PNG files, for instance.  Some way of exporting the data to a format that 
can be easily posted.

I wanted to make JMeter single-threaded with the new non-blocking IO packages, but I don't 
think this is feasible.  It's possible to do if you can get access to the very sockets that do the 
communicating, but how will you get that for JDBC drivers?  Even for HTTP, we'd have to write 
our own HTTP Client from which we could gain access to the socket being used and control 
the IO for it (or take the commons client and modify it so).  Because to put it all in a single-
threaded model, we'd have to take control of the IO part, and force the samplers to hand their 
sockets to some central code that would take the socket, take the bytes the sampler wants to 
send, and it would hand back the return bytes plus timing info.  It'd be nice, but I don't think it's 
feasible for most protocols.

JMeter needs to collect more data.  Size of responses should be explicitly collected to help 
throughput calculations of the form bytes/second.  Timing data should include a latency 
measurement in addition to the whole response time.  Multiple SampleResponses need to be 
dealt with better - I'm thinking that instead of an API that looks like:

Sampler {
   SampleResult sample();
}

We need one that's more based on a callback situation:
Sampler {
   void sample(SendResultsHereService callback);
}

so that Samplers can send multiple results to the collector service.  This would make 
samplers more flexible for when scripting in Python is allowed - to allow the ad hoc scripter to 
push out sample results at any time during their script.
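
Fleshed out slightly so it compiles (SampleResult and SendResultsHereService 
are minimal stand-ins here for the real types):

public class CallbackSamplerSketch {
    static class SampleResult {
        final String label; final long elapsed; final boolean success;
        SampleResult(String label, long elapsed, boolean success) {
            this.label = label; this.elapsed = elapsed; this.success = success;
        }
    }

    interface SendResultsHereService {
        void sampleOccurred(SampleResult result);  // the collector end-point
    }

    interface Sampler {
        void sample(SendResultsHereService callback);
    }

    // A sampler is now free to report several results from one run --
    // e.g. a page plus each of its images, or every step of a script.
    static class PageWithImagesSampler implements Sampler {
        public void sample(SendResultsHereService callback) {
            callback.sampleOccurred(new SampleResult("/index.html", 120, true));
            callback.sampleOccurred(new SampleResult("/logo.png", 30, true));
        }
    }
}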

Given this, components like assertions and post-processors need a way to know which 
result to apply themselves to.  We already have this problem wherein redirected samples 
confuse these components.  We need a way to either mark a particular response as "the main 
one" or define a response set all of which need to be tested by the applicable post-processors.

I'd also like to replace the Avalon Configuration stuff with something that can load files more 
stream-like and piecemeal, instead of creating a DOM and then handing it over to JMeter.  It 
goes too long without any feedback for the user, and it uses a ton of memory.

Sun's HTTP Client should be replaced.  As the cornerstone of JMeter, we ought to have one 
that is highly flexible to our needs, provides the most accurate timing it can, the best 
performance possible, the lowest resource usage possible, and the most transparency to 
JMeter's controlling code.  I think the commons HTTP Client is probably a good place to start; 
being open-source, we can craft it to our needs.

Well, that's a start :-)

-Mike

On 11 Aug 2003 at 2:35, Jordi Salvat i Alabart wrote:

> 
> 
> mstover1@apache.org wrote:
> > I'm pretty committed to the idea that JMeter 2.0 will be drastically different from JMeter 1.9. 
> > So, feel free to make big changes.
> 
> Hey, Mike -- we'd like to know what you're thinking about.
> 
> -- 
> Salut,
> 
> Jordi.




--
Michael Stover
mstover1@apache.org
Yahoo IM: mstover_ya
ICQ: 152975688
AIM: mstover777
