Posted to user@jmeter.apache.org by Shilpa Kulkarni <sy...@payasonline.com> on 2014/08/28 01:30:05 UTC

API distribution in test plan

Hi Group

I have a few newbie questions on building an effective load test plan. I am
working on a load test plan for a server that backs mobile clients
over REST APIs. We currently have one mobile client in production, and we are
launching a redesigned client that is backed by a different set of APIs. The
goal of load testing is to assess whether the production servers can take the
load generated by the new clients. For some time, both the new and old types
of clients will hit the production servers.

I am reading users from a CSV file and, in addition, using the thread counts
to generate load.
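For reference, such a users file might look like this (the column names are
made up for illustration; JMeter's CSV Data Set Config exposes each column as
a variable, e.g. ${username}, that samplers can reference):

```
username,password
user001,pass001
user002,pass002
```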

I have created a basic test plan that contains all the APIs. The APIs can
be broadly grouped as follows:
G1 - Old Client: the set of APIs the old mobile app will call every time it
foregrounds. How many times this will happen is not known, but the set of
APIs that will get called is known.
G2 - Old Client: the set of APIs the old mobile app will call based on user
action, so this depends on user behavior.
Similarly, G3 and G4 for the new client:
G3 - New Client: the set of APIs the new mobile app will call every time it
foregrounds. Again, how many times this will happen is not known, but the
set of APIs that will get called is known.
G4 - New Client: the set of APIs the new mobile app will call based on user
action, which again depends on user behavior.

When I build the load test plan, should the different API calls be grouped
per use case, or should they just be run at some hypothetical high numbers?
Should I even be paying attention to the fact that they are called by
clients for different reasons?

Example:
G1: A, B, C, P
G2: P, Q, A, B
G3: A, B, X, Y, Z
G4: A, B, X
From here it is clear that A and B are called most frequently: four times as
often as Y and Z. How should this be reflected in the test plan? Should I
have four times as many samplers for A and B as for Y and Z?
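One simple way to turn that mix into sampler weights is sketched below. It
assumes each group generates equal traffic, which is only a placeholder until
real production call counts are available:

```python
from collections import Counter

# Call mix per group, taken from the example above.
groups = {
    "G1": ["A", "B", "C", "P"],
    "G2": ["P", "Q", "A", "B"],
    "G3": ["A", "B", "X", "Y", "Z"],
    "G4": ["A", "B", "X"],
}

# Count how many groups call each API (equal traffic per group assumed).
calls = Counter()
for apis in groups.values():
    calls.update(apis)

total = sum(calls.values())
# Percent share per API, usable e.g. in JMeter Throughput Controllers
# set to "Percent Executions" mode.
weights = {api: round(100 * n / total, 1) for api, n in calls.items()}
print(weights)  # A and B each get a 25.0% share; Y and Z get 6.2%
```

A and B end up with four times the share of Y and Z, matching the intuition
above; with real production numbers you would replace the equal-traffic
assumption with measured per-API call rates.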

Can someone clarify what an ideal, or at least acceptable, test plan looks
like?

Can someone share sample test plans that will answer my questions?

I can explain further if something is not clear.

Thank you for reading this far, appreciate your time!

Shilpa

Re: API distribution in test plan

Posted by Shilpa Kulkarni <sy...@payasonline.com>.
Hi Jeff

Thanks a lot for a detailed reply! You have many good suggestions here.

Since the old clients are in production, we have numbers on API calls. I have
API call counts per 30 minutes on a peak weekday from Operations. This
includes all APIs from G1 and G2. I can take the total of the peak numbers
for each API and model the test plan after that. This will give some
information on G2.
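As a sketch of how those Operations numbers could drive the plan (the counts
below are invented): JMeter's Constant Throughput Timer takes its target in
samples per minute, so a peak 30-minute count divides down directly:

```python
# Hypothetical peak API call counts per 30-minute window from Operations.
peak_calls_per_30min = {"A": 90000, "B": 84000, "P": 12000}

# Constant Throughput Timer targets are expressed in samples per MINUTE.
targets_per_min = {api: n / 30 for api, n in peak_calls_per_30min.items()}
for api, tpm in sorted(targets_per_min.items()):
    print(f"{api}: {tpm:.0f}/min ({tpm / 60:.1f}/sec)")
```

One timer per sampler (or per Throughput Controller) with these targets would
hold the test close to the measured peak rates.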

Where we have the least information is G4: I think this is where I can
engage more with product management.

The Light/Heavy thread group split is also a good idea.

Thanks again!

Shilpa




Re: API distribution in test plan

Posted by Jeff Ohrstrom <jo...@hotmail.com>.
Shilpa, 

There is no easy answer to your question, and I'm not sure there is an
'ideal test plan', but there are some fundamentals. One thing you should
always keep in mind is that you're trying to emulate what's happening
(or will happen) in production, and to model your test cases after that as
closely as you can.  It seems you need information from your business
intelligence group and/or operations.  They should be able to tell you
things like the size of your active client base (how many people
interact with the system) and maybe even REST calls per day/hour/second.
At the very least they can tell you how many times the application has
been downloaded, and you can start from there.
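For instance, Little's Law gives a rough way to turn such numbers into a
thread count; every figure below is a hypothetical placeholder for whatever
BI or operations can actually provide:

```python
# Little's Law: concurrent users ~= arrival rate * time in system.
requests_per_sec = 200.0       # hypothetical peak REST calls/sec
calls_per_session = 10         # hypothetical avg API calls per client visit
session_duration_sec = 120.0   # hypothetical avg active-session length

sessions_per_sec = requests_per_sec / calls_per_session
concurrent_users = sessions_per_sec * session_duration_sec
print(concurrent_users)  # 2400.0 -> a ballpark JMeter thread count
```

The result is only as good as the inputs, which is exactly why the data
from operations matters.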

However, it seems you have little to no information about user behavior,
and the question arises: how can you create a model out of nothing? It
seems you'll have to take some major liberties with client modeling and
report the results along with the assumptions you've made (or are going
to have to make).

Another tip is to start simple, then get more complex as the testing
effort goes on.  That way, the simple cases may show something wrong, and
you'll have less to troubleshoot because there are fewer API calls
happening.

This bit is just my opinion and in no way the be-all and end-all of how it
should be done; surely others on the list will have differing opinions and
perhaps better ideas.  It's just a starting point for you, a way to
start thinking about client modeling. If this were given to me, I'd
probably start with two thread groups grouped logically by client type
(old and new), then build on that (your G1 and G3 below).  Perhaps,
building on that, you could have thread groups that are heavy and light;
that is, light thread groups log in, do only one or two things, then log
off (i.e., emulating users who use the system in a 'light' fashion).  This
might contain G1 plus two other calls (and similarly for G3).  Heavy users
may be G1 plus G2 repeated two or three times (with timers spaced through
it if applicable).
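As a rough outline (the group names, percentages, and loop counts are
placeholders, not a recommendation), that split might look like:

```
Test Plan
  Thread Group: Old Client - Light   (e.g. most old-client threads)
    G1 samplers (foreground calls)
    1-2 G2 samplers (one or two user actions)
    Timers (think time)
  Thread Group: Old Client - Heavy
    G1 samplers
    Loop Controller x2-3: G2 samplers
    Timers
  Thread Group: New Client - Light
    G3 samplers + 1-2 G4 samplers
  Thread Group: New Client - Heavy
    G3 samplers + Loop Controller: G4 samplers
```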

Essentially, G1 and G3 are required, while for G2 and G4 you're going to
have to guess at how to weight them.  Also, you should discuss all of
this with the body you're reporting the results to, because if you don't
have the proper information to create a load model, you're going to
have to make assumptions about client behavior that everyone involved
should agree to.


On Wed, 2014-08-27 at 16:30 -0700, Shilpa Kulkarni wrote:
> When I build the load test plan, should the different api calls be grouped
> per use case, or should they just be at some hypothetical high numbers?
> Should I even be paying attention to the fact that they are called by
> clients for different reasons?

They should be grouped by use case, and you should absolutely pay
attention to the fact that they're called by different clients for
different reasons.  Remember, you're creating a model, and models are
based on those 'reasons'.



---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscribe@jmeter.apache.org
For additional commands, e-mail: user-help@jmeter.apache.org