Posted to dev@deltacloud.apache.org by "dave johnson (JIRA)" <ji...@apache.org> on 2011/07/14 00:05:00 UTC

[jira] [Created] (DTACLOUD-58) api/instance query performance with 8 vms in rhevm

api/instance query performance with 8 vms in rhevm
--------------------------------------------------

                 Key: DTACLOUD-58
                 URL: https://issues.apache.org/jira/browse/DTACLOUD-58
             Project: DeltaCloud
          Issue Type: Bug
         Environment: Aeolus-conductor configured for rhevm provider

RHEV-M running on a Windows 2008 R2 Server VM with 1.5 GB of memory (on a separate blade server)

[root@hp-dl2x170g6-01 ~]# rpm -qa | egrep 'aeolus|deltacloud' | sort
aeolus-all-0.3.0-0.el6.20110711131044git5bc7abf.noarch
aeolus-conductor-0.3.0-0.el6.20110711131044git5bc7abf.noarch
aeolus-conductor-daemons-0.3.0-0.el6.20110711131044git5bc7abf.noarch
aeolus-conductor-doc-0.3.0-0.el6.20110711131044git5bc7abf.noarch
aeolus-configure-2.0.1-0.el6.20110708134115gitab1e6dc.noarch
condor-deltacloud-gahp-7.6.0-5dcloud.el6.x86_64
deltacloud-core-0.3.9999-1308927004.el6.noarch
libdeltacloud-0.9-1.el6.x86_64
rubygem-aeolus-cli-0.0.1-1.el6.20110711131044git5bc7abf.noarch
rubygem-deltacloud-client-0.3.1-1.el6.noarch
[root@hp-dl2x170g6-01 ~]# 
            Reporter: dave johnson
            Assignee: David Lutterkort


I noticed that when I had more than 7 RHEV-H VMs running within RHEV-M, CPU usage was above 90% and the API console was scrolling practically continuously.  The deltacloud-core log for rhevm showed /api/instances queries taking more than 45 seconds to complete.  I started over, adding a single VM every five minutes, and noticed a "heartbeat" query hitting deltacloud-core every 30 seconds; once an API query takes longer than the heartbeat interval, the heartbeat polling causes performance issues with the RHEV-M API.

With 8 VMs running, deltacloud-core /api/instances queries were timing out and throwing timeout exceptions.


# instances deployed through conductor
# instance starts in deltacloud-core log
127.0.0.1 - - [13/Jul/2011 16:13:15] "POST /api/instances/7b33e17e-3563-4717-b8e4-e5d5f454a240/start HTTP/1.1" 204 - 2.3925
127.0.0.1 - - [13/Jul/2011 16:20:50] "POST /api/instances/b24424ae-ab1e-4974-ba97-c0e4a8a2a779/start HTTP/1.1" 204 - 2.0416
127.0.0.1 - - [13/Jul/2011 16:26:01] "POST /api/instances/20a5dcad-b053-4267-9e5b-46f40f9d42c7/start HTTP/1.1" 204 - 3.9677
127.0.0.1 - - [13/Jul/2011 16:31:48] "POST /api/instances/e79f7b0e-7cc8-4426-88d7-3a3e1728fbb6/start HTTP/1.1" 204 - 13.0901
127.0.0.1 - - [13/Jul/2011 16:35:55] "POST /api/instances/5327ddc2-8a95-43c9-8b00-80e28fa1a16b/start HTTP/1.1" 204 - 4.2185
127.0.0.1 - - [13/Jul/2011 16:41:23] "POST /api/instances/2d8624a7-ecd5-4e7a-a479-f2ae45f96a70/start HTTP/1.1" 204 - 7.6413
127.0.0.1 - - [13/Jul/2011 16:46:33] "POST /api/instances/0b67d735-cd54-4a12-ba43-d708b018e6d0/start HTTP/1.1" 204 - 6.6535
127.0.0.1 - - [13/Jul/2011 16:52:26] "POST /api/instances/d16a2597-0916-419f-83cd-86e41dc08ca6/start HTTP/1.1" 204 - 20.1581

# api/instance load times climbing
127.0.0.1 - - [13/Jul/2011 16:15:22] "GET /api/instances HTTP/1.1" 200 1223 3.2926
127.0.0.1 - - [13/Jul/2011 16:20:12] "GET /api/instances HTTP/1.1" 200 1223 3.4131
127.0.0.1 - - [13/Jul/2011 16:25:17] "GET /api/instances HTTP/1.1" 200 2381 6.8360
127.0.0.1 - - [13/Jul/2011 16:30:54] "GET /api/instances HTTP/1.1" 200 3539 9.8815
127.0.0.1 - - [13/Jul/2011 16:35:17] "GET /api/instances HTTP/1.1" 200 4697 8.3200
127.0.0.1 - - [13/Jul/2011 16:40:51] "GET /api/instances HTTP/1.1" 200 5855 21.5512
127.0.0.1 - - [13/Jul/2011 16:45:00] "GET /api/instances HTTP/1.1" 200 7013 24.4277
127.0.0.1 - - [13/Jul/2011 16:50:05] "GET /api/instances HTTP/1.1" 200 8171 35.1951
127.0.0.1 - - [13/Jul/2011 16:50:07] "GET /api/instances HTTP/1.1" 200 8171 34.3853
127.0.0.1 - - [13/Jul/2011 16:50:14] "GET /api/instances HTTP/1.1" 200 8171 34.7330
127.0.0.1 - - [13/Jul/2011 16:50:19] "GET /api/instances HTTP/1.1" 200 8171 34.0917
127.0.0.1 - - [13/Jul/2011 16:50:32] "GET /api/instances HTTP/1.1" 200 8171 34.0356
127.0.0.1 - - [13/Jul/2011 16:50:57] "GET /api/instances HTTP/1.1" 200 8171 47.4070
127.0.0.1 - - [13/Jul/2011 16:51:05] "GET /api/instances HTTP/1.1" 200 8171 50.6648
127.0.0.1 - - [13/Jul/2011 16:51:08] "GET /api/instances HTTP/1.1" 200 8171 52.6800
127.0.0.1 - - [13/Jul/2011 16:51:09] "GET /api/instances HTTP/1.1" 200 8171 53.5951
127.0.0.1 - - [13/Jul/2011 16:51:29] "GET /api/instances HTTP/1.1" 200 8171 53.6821
127.0.0.1 - - [13/Jul/2011 16:51:34] "GET /api/instances HTTP/1.1" 200 8171 57.5881
127.0.0.1 - - [13/Jul/2011 16:51:35] "GET /api/instances HTTP/1.1" 200 8171 51.4359
127.0.0.1 - - [13/Jul/2011 16:51:38] "GET /api/instances HTTP/1.1" 200 8171 48.9456
127.0.0.1 - - [13/Jul/2011 16:51:42] "GET /api/instances HTTP/1.1" 200 8171 39.6616
127.0.0.1 - - [13/Jul/2011 16:52:01] "GET /api/instances HTTP/1.1" 200 9454 33.8931
127.0.0.1 - - [13/Jul/2011 16:52:13] "GET /api/instances HTTP/1.1" 200 9454 37.3036
127.0.0.1 - - [13/Jul/2011 16:52:20] "GET /api/instances HTTP/1.1" 200 9454 41.3128
127.0.0.1 - - [13/Jul/2011 16:52:58] "GET /api/instances HTTP/1.1" 200 9454 59.2965
127.0.0.1 - - [13/Jul/2011 16:53:04] "GET /api/instances HTTP/1.1" 500 190 60.0303
127.0.0.1 - - [13/Jul/2011 16:53:05] "GET /api/instances HTTP/1.1" 500 190 60.0688
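For reference, the point where polling starts to overlap can be checked mechanically from log lines like the ones above. This is a minimal sketch, not part of the original report; the 30-second figure comes from the condor batch-update interval discussed later in the thread:

```python
# Sketch: parse WEBrick-style access-log lines (same format as above) and
# flag /api/instances requests whose response time exceeds the 30 s poll
# interval -- once that happens, polls overlap and load compounds.
import re

LOG = """\
127.0.0.1 - - [13/Jul/2011 16:40:51] "GET /api/instances HTTP/1.1" 200 5855 21.5512
127.0.0.1 - - [13/Jul/2011 16:50:57] "GET /api/instances HTTP/1.1" 200 8171 47.4070
127.0.0.1 - - [13/Jul/2011 16:53:04] "GET /api/instances HTTP/1.1" 500 190 60.0303
"""

PATTERN = re.compile(r'"GET /api/instances HTTP/1\.1" (\d+) \S+ ([\d.]+)')
POLL_INTERVAL = 30.0  # condor's batch-update interval (seconds)

slow = [(int(status), float(elapsed))
        for status, elapsed in PATTERN.findall(LOG)
        if float(elapsed) > POLL_INTERVAL]
for status, elapsed in slow:
    print(f"overlapping poll: {elapsed:.1f}s (status {status})")
```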


--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira

[jira] [Resolved] (DTACLOUD-58) api/instance query performance with 8 vms in rhevm

Posted by "David Lutterkort (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/DTACLOUD-58?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

David Lutterkort resolved DTACLOUD-58.
--------------------------------------

    Resolution: Won't Fix

This isn't a Deltacloud issue; it's a combination of Aeolus polling all instances every 30s and RHEV-M 2.2 being slow to respond.

See
http://mail-archives.apache.org/mod_mbox/incubator-deltacloud-dev/201107.mbox/%3C20110714124958.GB2576@localhost.localdomain%3E

> api/instance query performance with 8 vms in rhevm
> --------------------------------------------------
>
>                 Key: DTACLOUD-58
>                 URL: https://issues.apache.org/jira/browse/DTACLOUD-58
>         Attachments: rhevm.perf.log


RE: [rhevm-api] [jira] [Commented] (DTACLOUD-58) api/instance query performance with 8 vms in rhevm

Posted by Einav Cohen <ec...@redhat.com>.
> -----Original Message-----
> From: Itamar Heim [mailto:iheim@redhat.com]
> Sent: Thursday, July 14, 2011 10:21 PM
> >
> > On Thu, 2011-07-14 at 10:52 -0400, Itamar Heim wrote:
> > > Your use case is closer to backend --> vdsm, where we actually poll
> > > every 2 seconds.
> > > But the polling every 2 seconds is a very light poll - only a list
> > > of VMs and their status.
> > > The heavier polling for all details/stats is not as frequent.
> > >
> > > So you should probably define the minimal data you need to poll for
> > > more frequently, and run the lighter query (the api has a 'detail'
> > > level, though probably not one that light yet).
> >
> > Do you have any data on what details they are and how often Powershell
> > can support listing all instances at that detail level?
>
> I didn't say we have that detail level yet in REST, and Powershell for
> sure doesn't have that optimization.
> But we should plan ahead for such a lightweight query on the status of
> VMs, so the question is: do you need anything but vm_id and vm_status
> for such a query?
> Einav - would we need more things for a lightweight user portal query?

No - status is probably enough; the rest of the properties displayed in
the user-portal are unlikely to change (name, description, OS).

RE: [rhevm-api] [jira] [Commented] (DTACLOUD-58) api/instance query performance with 8 vms in rhevm

Posted by Itamar Heim <ih...@redhat.com>.

> -----Original Message-----
> From: rhevm-api-bounces@lists.fedorahosted.org
> [mailto:rhevm-api-bounces@lists.fedorahosted.org] On Behalf Of David Lutterkort
> Sent: Thursday, July 14, 2011 20:46 PM
> To: deltacloud-dev@incubator.apache.org
> Cc: rhevm-api@lists.fedorahosted.org; 'Mark McLoughlin'; 'Chris Lalancette'
> Subject: Re: [rhevm-api] [jira] [Commented] (DTACLOUD-58) api/instance query performance with 8 vms in rhevm
> 
> On Thu, 2011-07-14 at 10:52 -0400, Itamar Heim wrote:
> > Your use case is closer to backend --> vdsm, where we actually poll
> > every 2 seconds.
> > But the polling every 2 seconds is a very light poll - only a list of
> > VMs and their status.
> > The heavier polling for all details/stats is not as frequent.
> >
> > So you should probably define the minimal data you need to poll for
> > more frequently, and run the lighter query (the api has a 'detail'
> > level, though probably not one that light yet).
>
> Do you have any data on what details they are and how often Powershell
> can support listing all instances at that detail level?

I didn't say we have that detail level yet in REST, and Powershell for
sure doesn't have that optimization.
But we should plan ahead for such a lightweight query on the status of
VMs, so the question is: do you need anything but vm_id and vm_status
for such a query?
Einav - would we need more things for a lightweight user portal query?

RE: [rhevm-api] [jira] [Commented] (DTACLOUD-58) api/instance query performance with 8 vms in rhevm

Posted by David Lutterkort <lu...@redhat.com>.
On Thu, 2011-07-14 at 10:52 -0400, Itamar Heim wrote:
> Your use case is closer to backend --> vdsm, where we actually poll
> every 2 seconds.
> But the polling every 2 seconds is a very light poll - only a list of
> VMs and their status.
> The heavier polling for all details/stats is not as frequent.
>
> So you should probably define the minimal data you need to poll for
> more frequently, and run the lighter query (the api has a 'detail'
> level, though probably not one that light yet).

Do you have any data on what details they are and how often Powershell
can support listing all instances at that detail level?

David





RE: [rhevm-api] [jira] [Commented] (DTACLOUD-58) api/instance query performance with 8 vms in rhevm

Posted by Itamar Heim <ih...@redhat.com>.
...
> > In aeolus, condor is what's monitoring running instances and I thought
> > it did that by GETing each instance individually every 90 seconds
>
> No, condor does a batch update of every provider every 30 seconds.  We
> can certainly tune that down, but the problem is that it is a tradeoff;
> the longer the timeout, the worse the user experience becomes in terms
> of updating the UI with the current state.  Even with the 30 second
> timeout, we are getting complaints that we take "way longer" to show an
> instance going to running than the EC2 front-end.

We use a 'sliding window' approach in rhev-m in some places from the
user's perspective, i.e., if the user performed an action (or clicked
refresh), we'll refresh more frequently for a few cycles.
But that's UI --> Backend.
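The 'sliding window' idea can be sketched roughly as follows; the interval values and window size are illustrative assumptions, not RHEV-M's actual numbers:

```python
# Sketch of a 'sliding window' refresh policy: poll fast for a few cycles
# after a user action, then fall back to the base interval.
# fast/base/window values are illustrative, not RHEV-M's real settings.
def next_interval(cycles_since_action, fast=2.0, base=30.0, window=5):
    """Poll interval in seconds, given how many cycles ago the user acted."""
    return fast if cycles_since_action < window else base
```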

Your use case is closer to backend --> vdsm, where we actually poll
every 2 seconds.
But the polling every 2 seconds is a very light poll - only a list of
VMs and their status.
The heavier polling for all details/stats is not as frequent.

So you should probably define the minimal data you need to poll for
more frequently, and run the lighter query (the api has a 'detail'
level, though probably not one that light yet).
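The light/heavy polling split suggested here could look roughly like this. `fetch_status` and `fetch_details` are hypothetical stand-ins for the two query levels, not a real RHEV-M or Deltacloud API:

```python
# Sketch: frequent light poll of just (vm_id, vm_status), with the heavier
# full-detail poll run much less often. fetch_status/fetch_details are
# hypothetical placeholders supplied by the caller.
def run_cycle(cycle, cache, fetch_status, fetch_details, heavy_every=15):
    if cycle % heavy_every == 0:
        for vm_id, details in fetch_details():      # full details/stats
            cache[vm_id] = dict(details)
    else:
        for vm_id, status in fetch_status():        # id + status only
            cache.setdefault(vm_id, {})["status"] = status
    return cache

# Illustrative run: one heavy cycle, then one light cycle.
cache = {}
run_cycle(0, cache, lambda: [], lambda: [("vm1", {"status": "up", "memory": 512})])
run_cycle(1, cache, lambda: [("vm1", "down"), ("vm2", "up")], lambda: [])
```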


Re: [jira] [Commented] (DTACLOUD-58) api/instance query performance with 8 vms in rhevm

Posted by Chris Lalancette <cl...@redhat.com>.
On 07/14/11 - 10:17:03AM, Mark McLoughlin wrote:
> On Thu, 2011-07-14 at 09:53 +0200, Michal Fojtik wrote:
> > On Jul 14, 2011, at 12:16 AM, David Lutterkort (JIRA) wrote:
> > 
> > > 
> > >    [ https://issues.apache.org/jira/browse/DTACLOUD-58?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13064913#comment-13064913 ] 
> > > 
> > > David Lutterkort commented on DTACLOUD-58:
> > > ------------------------------------------
> > > 
> > > The 'heartbeat' isn't coming from deltacloud, but from aeolus; 
> > > reloading the list of all VM's every 30s is way too often.
> 
> This isn't what I expected actually
> 
> In aeolus, condor is what's monitoring running instances and I thought
> it did that by GETing each instance individually every 90 seconds

No, condor does a batch update of every provider every 30 seconds.  We can
certainly tune that down, but the problem is that it is a tradeoff; the longer
the timeout, the worse the user experience becomes in terms of updating the
UI with the current state.  Even with the 30 second timeout, we are getting
complaints that we take "way longer" to show an instance going to running
than the EC2 front-end.

-- 
Chris Lalancette

Re: [jira] [Commented] (DTACLOUD-58) api/instance query performance with 8 vms in rhevm

Posted by Mark McLoughlin <ma...@redhat.com>.
On Thu, 2011-07-14 at 09:53 +0200, Michal Fojtik wrote:
> On Jul 14, 2011, at 12:16 AM, David Lutterkort (JIRA) wrote:
> 
> > 
> >    [ https://issues.apache.org/jira/browse/DTACLOUD-58?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13064913#comment-13064913 ] 
> > 
> > David Lutterkort commented on DTACLOUD-58:
> > ------------------------------------------
> > 
> > The 'heartbeat' isn't coming from deltacloud, but from aeolus; 
> > reloading the list of all VM's every 30s is way too often.

This isn't what I expected, actually.

In aeolus, condor is what's monitoring running instances and I thought
it did that by GETing each instance individually every 90 seconds.

> > At the same time, I don't understand why it takes RHEV-M this long to list 8 VM's
> > 
> > I am afraid deltacloud is just the messenger in this issue
> 
> Right, we're just forwarding requests to RHEV-M Rest API written on top of PowerShell interface
> to RHEV-M.
> You can see how much it takes for backend to process your request by looking at response headers.
> Search for 'X-Backend-Runtime:' header.

There's a known performance regression with the powershell
implementation of the API. Apparently this branch:

  https://github.com/markmc/rhevm-api/tree/0.9-milestone9.1-pmg

helps a lot, but we haven't been able to diagnose it further.

Cheers,
Mark.


Re: [jira] [Commented] (DTACLOUD-58) api/instance query performance with 8 vms in rhevm

Posted by Francesco Vollero <ra...@gmail.com>.
On Thu, Jul 14, 2011 at 9:53 AM, Michal Fojtik <mf...@redhat.com> wrote:
>
> On Jul 14, 2011, at 12:16 AM, David Lutterkort (JIRA) wrote:
>
>>

[snip]

>> The 'heartbeat' isn't coming from deltacloud, but from aeolus; reloading the list of all VM's every 30s is way too often.
>>
>> At the same time, I don't understand why it takes RHEV-M this long to list 8 VM's
>>
>> I am afraid deltacloud is just the messenger in this issue
>
> Right, we're just forwarding requests to RHEV-M Rest API written on top of PowerShell interface
> to RHEV-M.
> You can see how much it takes for backend to process your request by looking at response headers.
> Search for 'X-Backend-Runtime:' header.
>
>  -- MIchal
>

As Michal said, it's a backend problem.  Ever since we started using
RHEV-M we have had slowness issues, because the "joint venture" of
Windows (PowerShell) and Java does not always work as expected and
always takes a long time to collect that kind of information (to say
nothing of Java exceptions and/or PowerShell issues).  We hope that
RHEV-M 3.0 will not have those issues anymore, since it is
PowerShell-free.

Francesco

Re: [jira] [Commented] (DTACLOUD-58) api/instance query performance with 8 vms in rhevm

Posted by Michal Fojtik <mf...@redhat.com>.
On Jul 14, 2011, at 12:16 AM, David Lutterkort (JIRA) wrote:

> 
>    [ https://issues.apache.org/jira/browse/DTACLOUD-58?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13064913#comment-13064913 ] 
> 
> David Lutterkort commented on DTACLOUD-58:
> ------------------------------------------
> 
> The 'heartbeat' isn't coming from deltacloud, but from aeolus; reloading the list of all VM's every 30s is way too often.
> 
> At the same time, I don't understand why it takes RHEV-M this long to list 8 VM's
> 
> I am afraid deltacloud is just the messenger in this issue

Right, we're just forwarding requests to the RHEV-M REST API, which is
written on top of the PowerShell interface to RHEV-M.
You can see how much time the backend takes to process your request by
looking at the response headers; search for the 'X-Backend-Runtime:'
header.

  -- Michal
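To apportion the time, that header can be compared with the total request time. A minimal sketch with illustrative numbers; only the header name comes from Michal's note above:

```python
# Sketch: attribute Deltacloud's response time to the RHEV-M backend via
# the X-Backend-Runtime response header. Values here are illustrative,
# not captured from a real run.
def backend_share(headers, total_seconds):
    """Fraction of the total request time spent in the backend."""
    backend = float(headers.get("X-Backend-Runtime", 0.0))
    return backend / total_seconds

# e.g. headers parsed from `curl -i http://localhost:3001/api/instances`
share = backend_share({"X-Backend-Runtime": "33.1"}, 35.2)
```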

>
>> api/instance query performance with 8 vms in rhevm
>> --------------------------------------------------

----------------------------------------------------------------------
Michal Fojtik, Software Engineer, Red Hat Czech
mfojtik@redhat.com
Deltacloud API: http://deltacloud.org


[jira] [Commented] (DTACLOUD-58) api/instance query performance with 8 vms in rhevm

Posted by "David Lutterkort (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/DTACLOUD-58?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13064913#comment-13064913 ] 

David Lutterkort commented on DTACLOUD-58:
------------------------------------------

The 'heartbeat' isn't coming from deltacloud, but from aeolus; reloading the list of all VM's every 30s is way too often.

At the same time, I don't understand why it takes RHEV-M this long to list 8 VM's

I am afraid deltacloud is just the messenger in this issue

> api/instance query performance with 8 vms in rhevm
> --------------------------------------------------
>
>                 Key: DTACLOUD-58
>                 URL: https://issues.apache.org/jira/browse/DTACLOUD-58
>             Project: DeltaCloud
>          Issue Type: Bug
>         Environment: Aeolus-conductor configured for rhevm provider
> Rhevm running in Windows 2008 R2 Server with 1.5GB memory VM (running on a separate blade server)
> [root@hp-dl2x170g6-01 ~]# rpm -qa | egrep 'aeolus|deltacloud' | sort
> aeolus-all-0.3.0-0.el6.20110711131044git5bc7abf.noarch
> aeolus-conductor-0.3.0-0.el6.20110711131044git5bc7abf.noarch
> aeolus-conductor-daemons-0.3.0-0.el6.20110711131044git5bc7abf.noarch
> aeolus-conductor-doc-0.3.0-0.el6.20110711131044git5bc7abf.noarch
> aeolus-configure-2.0.1-0.el6.20110708134115gitab1e6dc.noarch
> condor-deltacloud-gahp-7.6.0-5dcloud.el6.x86_64
> deltacloud-core-0.3.9999-1308927004.el6.noarch
> libdeltacloud-0.9-1.el6.x86_64
> rubygem-aeolus-cli-0.0.1-1.el6.20110711131044git5bc7abf.noarch
> rubygem-deltacloud-client-0.3.1-1.el6.noarch
> [root@hp-dl2x170g6-01 ~]# 
>            Reporter: dave johnson
>            Assignee: David Lutterkort
>         Attachments: rhevm.perf.log
>
>
> I noticed that when I had over 7 rhev-h vm's running within rhevm, the CPU was running > 90% and the api console was practically scrolling continuously.  I looked at the deltacloud-core log file for rhevm and noticed that the query time of /api/instances were taking > 45 seconds to complete.  I started over adding a single vm every five minutes and noticed that there is a "heartbeat" from deltacloud-core every 30 seconds, if querying the API is taking longer than the heartbeat interval, deltacloud-core's heartbeart is causing performance issues with the rhevm api.  
> With 8 vm's running, deltacloud-core api/instabces query were timing out and throwing timeout exceptions.  
> # instances deployed through conductor
> # instance starts in deltacloud-core log
> 127.0.0.1 - - [13/Jul/2011 16:13:15] "POST /api/instances/7b33e17e-3563-4717-b8e4-e5d5f454a240/start HTTP/1.1" 204 - 2.3925
> 127.0.0.1 - - [13/Jul/2011 16:20:50] "POST /api/instances/b24424ae-ab1e-4974-ba97-c0e4a8a2a779/start HTTP/1.1" 204 - 2.0416
> 127.0.0.1 - - [13/Jul/2011 16:26:01] "POST /api/instances/20a5dcad-b053-4267-9e5b-46f40f9d42c7/start HTTP/1.1" 204 - 3.9677
> 127.0.0.1 - - [13/Jul/2011 16:31:48] "POST /api/instances/e79f7b0e-7cc8-4426-88d7-3a3e1728fbb6/start HTTP/1.1" 204 - 13.0901
> 127.0.0.1 - - [13/Jul/2011 16:35:55] "POST /api/instances/5327ddc2-8a95-43c9-8b00-80e28fa1a16b/start HTTP/1.1" 204 - 4.2185
> 127.0.0.1 - - [13/Jul/2011 16:41:23] "POST /api/instances/2d8624a7-ecd5-4e7a-a479-f2ae45f96a70/start HTTP/1.1" 204 - 7.6413
> 127.0.0.1 - - [13/Jul/2011 16:46:33] "POST /api/instances/0b67d735-cd54-4a12-ba43-d708b018e6d0/start HTTP/1.1" 204 - 6.6535
> 127.0.0.1 - - [13/Jul/2011 16:52:26] "POST /api/instances/d16a2597-0916-419f-83cd-86e41dc08ca6/start HTTP/1.1" 204 - 20.1581
> # api/instance load times climbing
> 127.0.0.1 - - [13/Jul/2011 16:15:22] "GET /api/instances HTTP/1.1" 200 1223 3.2926
> 127.0.0.1 - - [13/Jul/2011 16:20:12] "GET /api/instances HTTP/1.1" 200 1223 3.4131
> 127.0.0.1 - - [13/Jul/2011 16:25:17] "GET /api/instances HTTP/1.1" 200 2381 6.8360
> 127.0.0.1 - - [13/Jul/2011 16:30:54] "GET /api/instances HTTP/1.1" 200 3539 9.8815
> 127.0.0.1 - - [13/Jul/2011 16:35:17] "GET /api/instances HTTP/1.1" 200 4697 8.3200
> 127.0.0.1 - - [13/Jul/2011 16:40:51] "GET /api/instances HTTP/1.1" 200 5855 21.5512
> 127.0.0.1 - - [13/Jul/2011 16:45:00] "GET /api/instances HTTP/1.1" 200 7013 24.4277
> 127.0.0.1 - - [13/Jul/2011 16:50:05] "GET /api/instances HTTP/1.1" 200 8171 35.1951
> 127.0.0.1 - - [13/Jul/2011 16:50:07] "GET /api/instances HTTP/1.1" 200 8171 34.3853
> 127.0.0.1 - - [13/Jul/2011 16:50:14] "GET /api/instances HTTP/1.1" 200 8171 34.7330
> 127.0.0.1 - - [13/Jul/2011 16:50:19] "GET /api/instances HTTP/1.1" 200 8171 34.0917
> 127.0.0.1 - - [13/Jul/2011 16:50:32] "GET /api/instances HTTP/1.1" 200 8171 34.0356
> 127.0.0.1 - - [13/Jul/2011 16:50:57] "GET /api/instances HTTP/1.1" 200 8171 47.4070
> 127.0.0.1 - - [13/Jul/2011 16:51:05] "GET /api/instances HTTP/1.1" 200 8171 50.6648
> 127.0.0.1 - - [13/Jul/2011 16:51:08] "GET /api/instances HTTP/1.1" 200 8171 52.6800
> 127.0.0.1 - - [13/Jul/2011 16:51:09] "GET /api/instances HTTP/1.1" 200 8171 53.5951
> 127.0.0.1 - - [13/Jul/2011 16:51:29] "GET /api/instances HTTP/1.1" 200 8171 53.6821
> 127.0.0.1 - - [13/Jul/2011 16:51:34] "GET /api/instances HTTP/1.1" 200 8171 57.5881
> 127.0.0.1 - - [13/Jul/2011 16:51:35] "GET /api/instances HTTP/1.1" 200 8171 51.4359
> 127.0.0.1 - - [13/Jul/2011 16:51:38] "GET /api/instances HTTP/1.1" 200 8171 48.9456
> 127.0.0.1 - - [13/Jul/2011 16:51:42] "GET /api/instances HTTP/1.1" 200 8171 39.6616
> 127.0.0.1 - - [13/Jul/2011 16:52:01] "GET /api/instances HTTP/1.1" 200 9454 33.8931
> 127.0.0.1 - - [13/Jul/2011 16:52:13] "GET /api/instances HTTP/1.1" 200 9454 37.3036
> 127.0.0.1 - - [13/Jul/2011 16:52:20] "GET /api/instances HTTP/1.1" 200 9454 41.3128
> 127.0.0.1 - - [13/Jul/2011 16:52:58] "GET /api/instances HTTP/1.1" 200 9454 59.2965
> 127.0.0.1 - - [13/Jul/2011 16:53:04] "GET /api/instances HTTP/1.1" 500 190 60.0303
> 127.0.0.1 - - [13/Jul/2011 16:53:05] "GET /api/instances HTTP/1.1" 500 190 60.0688
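The pile-up in the log above follows from simple arithmetic: if the heartbeat fires every 30 seconds regardless of whether the previous request has finished, then once a single /api/instances round-trip exceeds the interval, polls overlap and each overlapping poll makes the backend slower still. A minimal sketch of the steady-state overlap (hypothetical helper, not actual Deltacloud code):

```ruby
# Hypothetical sketch, not Deltacloud code: estimate how many
# heartbeat-driven /api/instances queries are in flight at once,
# assuming a new poll fires every `interval_seconds` regardless of
# whether earlier requests completed.
def overlapping_polls(query_seconds, interval_seconds)
  (query_seconds / interval_seconds.to_f).ceil
end

overlapping_polls(3, 30)   # fast query: polls never overlap
overlapping_polls(45, 30)  # the >45s queries seen in the log: 2 in flight
overlapping_polls(90, 30)  # 3 concurrent polls hammering the rhevm API
```

Under this model, any fix needs either a longer interval, a faster query (e.g. caching), or a guard that skips a heartbeat while the previous request is still in flight.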

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira

[jira] [Updated] (DTACLOUD-58) api/instance query performance with 8 vms in rhevm

Posted by "dave johnson (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/DTACLOUD-58?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

dave johnson updated DTACLOUD-58:
---------------------------------

    Attachment: rhevm.perf.log

deltacloud-core rhevm.log

> api/instance query performance with 8 vms in rhevm
> --------------------------------------------------
>
>                 Key: DTACLOUD-58
>                 URL: https://issues.apache.org/jira/browse/DTACLOUD-58
>             Project: DeltaCloud
>          Issue Type: Bug
>         Environment: Aeolus-conductor configured for rhevm provider
> Rhevm running in Windows 2008 R2 Server with 1.5GB memory VM (running on a separate blade server)
> [root@hp-dl2x170g6-01 ~]# rpm -qa | egrep 'aeolus|deltacloud' | sort
> aeolus-all-0.3.0-0.el6.20110711131044git5bc7abf.noarch
> aeolus-conductor-0.3.0-0.el6.20110711131044git5bc7abf.noarch
> aeolus-conductor-daemons-0.3.0-0.el6.20110711131044git5bc7abf.noarch
> aeolus-conductor-doc-0.3.0-0.el6.20110711131044git5bc7abf.noarch
> aeolus-configure-2.0.1-0.el6.20110708134115gitab1e6dc.noarch
> condor-deltacloud-gahp-7.6.0-5dcloud.el6.x86_64
> deltacloud-core-0.3.9999-1308927004.el6.noarch
> libdeltacloud-0.9-1.el6.x86_64
> rubygem-aeolus-cli-0.0.1-1.el6.20110711131044git5bc7abf.noarch
> rubygem-deltacloud-client-0.3.1-1.el6.noarch
> [root@hp-dl2x170g6-01 ~]# 
>            Reporter: dave johnson
>            Assignee: David Lutterkort
>         Attachments: rhevm.perf.log
>
>
> I noticed that when I had more than 7 rhev-h VMs running within rhevm, the CPU was running > 90% and the API console was scrolling almost continuously.  I looked at the deltacloud-core log file for rhevm and noticed that queries to /api/instances were taking > 45 seconds to complete.  I started over, adding a single VM every five minutes, and noticed that deltacloud-core sends a "heartbeat" query every 30 seconds; when an API query takes longer than the heartbeat interval, deltacloud-core's heartbeat causes performance issues with the rhevm API.  
> With 8 VMs running, deltacloud-core /api/instances queries were timing out and throwing timeout exceptions.  
> # instances deployed through conductor
> # instance starts in deltacloud-core log
> 127.0.0.1 - - [13/Jul/2011 16:13:15] "POST /api/instances/7b33e17e-3563-4717-b8e4-e5d5f454a240/start HTTP/1.1" 204 - 2.3925
> 127.0.0.1 - - [13/Jul/2011 16:20:50] "POST /api/instances/b24424ae-ab1e-4974-ba97-c0e4a8a2a779/start HTTP/1.1" 204 - 2.0416
> 127.0.0.1 - - [13/Jul/2011 16:26:01] "POST /api/instances/20a5dcad-b053-4267-9e5b-46f40f9d42c7/start HTTP/1.1" 204 - 3.9677
> 127.0.0.1 - - [13/Jul/2011 16:31:48] "POST /api/instances/e79f7b0e-7cc8-4426-88d7-3a3e1728fbb6/start HTTP/1.1" 204 - 13.0901
> 127.0.0.1 - - [13/Jul/2011 16:35:55] "POST /api/instances/5327ddc2-8a95-43c9-8b00-80e28fa1a16b/start HTTP/1.1" 204 - 4.2185
> 127.0.0.1 - - [13/Jul/2011 16:41:23] "POST /api/instances/2d8624a7-ecd5-4e7a-a479-f2ae45f96a70/start HTTP/1.1" 204 - 7.6413
> 127.0.0.1 - - [13/Jul/2011 16:46:33] "POST /api/instances/0b67d735-cd54-4a12-ba43-d708b018e6d0/start HTTP/1.1" 204 - 6.6535
> 127.0.0.1 - - [13/Jul/2011 16:52:26] "POST /api/instances/d16a2597-0916-419f-83cd-86e41dc08ca6/start HTTP/1.1" 204 - 20.1581
> # api/instance load times climbing
> 127.0.0.1 - - [13/Jul/2011 16:15:22] "GET /api/instances HTTP/1.1" 200 1223 3.2926
> 127.0.0.1 - - [13/Jul/2011 16:20:12] "GET /api/instances HTTP/1.1" 200 1223 3.4131
> 127.0.0.1 - - [13/Jul/2011 16:25:17] "GET /api/instances HTTP/1.1" 200 2381 6.8360
> 127.0.0.1 - - [13/Jul/2011 16:30:54] "GET /api/instances HTTP/1.1" 200 3539 9.8815
> 127.0.0.1 - - [13/Jul/2011 16:35:17] "GET /api/instances HTTP/1.1" 200 4697 8.3200
> 127.0.0.1 - - [13/Jul/2011 16:40:51] "GET /api/instances HTTP/1.1" 200 5855 21.5512
> 127.0.0.1 - - [13/Jul/2011 16:45:00] "GET /api/instances HTTP/1.1" 200 7013 24.4277
> 127.0.0.1 - - [13/Jul/2011 16:50:05] "GET /api/instances HTTP/1.1" 200 8171 35.1951
> 127.0.0.1 - - [13/Jul/2011 16:50:07] "GET /api/instances HTTP/1.1" 200 8171 34.3853
> 127.0.0.1 - - [13/Jul/2011 16:50:14] "GET /api/instances HTTP/1.1" 200 8171 34.7330
> 127.0.0.1 - - [13/Jul/2011 16:50:19] "GET /api/instances HTTP/1.1" 200 8171 34.0917
> 127.0.0.1 - - [13/Jul/2011 16:50:32] "GET /api/instances HTTP/1.1" 200 8171 34.0356
> 127.0.0.1 - - [13/Jul/2011 16:50:57] "GET /api/instances HTTP/1.1" 200 8171 47.4070
> 127.0.0.1 - - [13/Jul/2011 16:51:05] "GET /api/instances HTTP/1.1" 200 8171 50.6648
> 127.0.0.1 - - [13/Jul/2011 16:51:08] "GET /api/instances HTTP/1.1" 200 8171 52.6800
> 127.0.0.1 - - [13/Jul/2011 16:51:09] "GET /api/instances HTTP/1.1" 200 8171 53.5951
> 127.0.0.1 - - [13/Jul/2011 16:51:29] "GET /api/instances HTTP/1.1" 200 8171 53.6821
> 127.0.0.1 - - [13/Jul/2011 16:51:34] "GET /api/instances HTTP/1.1" 200 8171 57.5881
> 127.0.0.1 - - [13/Jul/2011 16:51:35] "GET /api/instances HTTP/1.1" 200 8171 51.4359
> 127.0.0.1 - - [13/Jul/2011 16:51:38] "GET /api/instances HTTP/1.1" 200 8171 48.9456
> 127.0.0.1 - - [13/Jul/2011 16:51:42] "GET /api/instances HTTP/1.1" 200 8171 39.6616
> 127.0.0.1 - - [13/Jul/2011 16:52:01] "GET /api/instances HTTP/1.1" 200 9454 33.8931
> 127.0.0.1 - - [13/Jul/2011 16:52:13] "GET /api/instances HTTP/1.1" 200 9454 37.3036
> 127.0.0.1 - - [13/Jul/2011 16:52:20] "GET /api/instances HTTP/1.1" 200 9454 41.3128
> 127.0.0.1 - - [13/Jul/2011 16:52:58] "GET /api/instances HTTP/1.1" 200 9454 59.2965
> 127.0.0.1 - - [13/Jul/2011 16:53:04] "GET /api/instances HTTP/1.1" 500 190 60.0303
> 127.0.0.1 - - [13/Jul/2011 16:53:05] "GET /api/instances HTTP/1.1" 500 190 60.0688
