Posted to users@archiva.apache.org by "Stallard,David" <st...@oclc.org> on 2011/11/01 15:42:49 UTC

RE: 100% CPU in Archiva 1.3.5

We do have a fairly large number of Continuous Integration builds that
can trigger many times per day, each build uploading new snapshots.  It
sounds like that could be a problem based on your second point below.
However, that structure has been in place for well over a year and we've
only had this CPU problem for 2 weeks.  Maybe we just happened to cross
some threshold that has made it more problematic?

I'll look into reducing the number of scans and keeping them to off-peak
hours.
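(Editorial aside for anyone tuning the same knobs: in 1.3-era Archiva the scan schedule and snapshot purge are per-repository settings in archiva.xml, with the schedule given as a Quartz cron expression. A minimal sketch -- element names are assumed from that config format and should be verified against your own archiva.xml:)

```xml
<!-- Sketch only; element names assumed from Archiva 1.3's archiva.xml. -->
<managedRepository>
  <id>snapshots</id>
  <!-- Scan once per day at 2am instead of hourly (Quartz cron syntax) -->
  <refreshCronExpression>0 0 2 * * ?</refreshCronExpression>
  <!-- Let repository-purge drop old snapshots so the file count stops growing -->
  <daysOlder>30</daysOlder>
  <retentionCount>2</retentionCount>
  <deleteReleasedSnapshots>true</deleteReleasedSnapshots>
</managedRepository>
```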

-----Original Message-----
From: Brett Porter [mailto:brett@porterclan.net] On Behalf Of Brett
Porter
Sent: Monday, October 31, 2011 6:34 PM
To: users@archiva.apache.org
Subject: Re: 100% CPU in Archiva 1.3.5

Top replying with a few points:

- artifact upload limits are only via the web UI, maven deployments can
push artifacts as large as needed.
- individual artifacts of that size shouldn't be a big problem (it's a
once off hit), but regularly updating snapshots of that size will cause
it to build
- the scan time below is quite long, particularly for the purge. You
might want to push the scanning schedule out to an "off peak" time - the
purge doesn't need to run that often, and most operations are done
on-demand with the scan just filling in any gaps.

HTH,
Brett

On 01/11/2011, at 2:15 AM, Stallard,David wrote:

> I'm not sure if this is useful, but here are the summaries of the most
> recent hourly scans...from archiva.log:
> 
> 
> .\ Scan of internal \.__________________________________________
>  Repository Dir    : <path removed>/internal
>  Repository Name   : Archiva Managed Internal Repository
>  Repository Layout : default
>  Known Consumers   : (7 configured)
>                      repository-purge (Total: 58857ms; Avg.: 1; Count: 58702)
>                      metadata-updater (Total: 419ms; Avg.: 209; Count: 2)
>                      auto-remove
>                      auto-rename
>                      update-db-artifact (Total: 98ms; Avg.: 49; Count: 2)
>                      create-missing-checksums (Total: 120ms; Avg.: 60; Count: 2)
>                      index-content (Total: 0ms; Avg.: 0; Count: 7)
>  Invalid Consumers : <none>
>  Duration          : 2 Minutes 56 Seconds 896 Milliseconds
>  When Gathered     : 10/31/11 11:02 AM
>  Total File Count  : 268305
>  Avg Time Per File :
> ______________________________________________________________
> 
> 
> .\ Scan of snapshots \.__________________________________________
>  Repository Dir    : <path removed>/snapshots
>  Repository Name   : Archiva Managed Snapshot Repository
>  Repository Layout : default
>  Known Consumers   : (7 configured)
>                      repository-purge (Total: 325200ms; Avg.: 8; Count: 39805)
>                      metadata-updater (Total: 5915ms; Avg.: 50; Count: 116)
>                      auto-remove
>                      auto-rename
>                      update-db-artifact (Total: 17211ms; Avg.: 148; Count: 116)
>                      create-missing-checksums (Total: 15559ms; Avg.: 134; Count: 116)
>                      index-content (Total: 34ms; Avg.: 0; Count: 475)
>  Invalid Consumers : <none>
>  Duration          : 7 Minutes 17 Seconds 416 Milliseconds
>  When Gathered     : 10/31/11 11:10 AM
>  Total File Count  : 166275
>  Avg Time Per File : 2 Milliseconds
> ______________________________________________________________
> 
> 
> 
> -----Original Message-----
> From: Stallard,David
> Sent: Monday, October 31, 2011 9:57 AM
> To: 'users@archiva.apache.org'
> Subject: RE: 100% CPU in Archiva 1.3.5
> 
> I need to correct my previous message...it turns out we do have
> artifacts larger than 40M even though that is the defined maximum; I'm
> not sure at this point how that is happening.
> 
> In our internal repository we have 40 artifacts which are over 100M in
> size, with the largest one being 366M.  In snapshots, we have 61
> artifacts that are >100M, where the largest is 342M.  I'm not sure how
> significant these sizes are in terms of the indexer, but wanted to
> accurately reflect what we're dealing with.
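(Editorial aside: counts like these can be gathered with find. The repository path below is a stand-in, since the real paths were removed above; the demo files exist only so the commands run as-is.)

```shell
# Stand-in for the real <path removed>/internal repository directory:
REPO=/tmp/archiva-demo/internal
mkdir -p "$REPO"
# Two sparse demo files (no real disk cost) to make this runnable:
truncate -s 366M "$REPO/big-artifact.jar"
truncate -s 10M  "$REPO/small-artifact.jar"

# How many artifacts exceed 100M?  (find's -size tests file length)
find "$REPO" -type f -size +100M | wc -l    # -> 1

# Largest artifact, as "<bytes> <path>" (GNU find's -printf):
find "$REPO" -type f -size +100M -printf '%s %p\n' | sort -n | tail -1
```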
> 
> -----Original Message-----
> From: Stallard,David
> Sent: Monday, October 31, 2011 9:43 AM
> To: 'users@archiva.apache.org'
> Subject: RE: 100% CPU in Archiva 1.3.5
> 
> Brett Porter said:
>> It's not unexpected that indexing drives it to 100% CPU momentarily,
>> but causing it to become unavailable is unusual.
>> How big are the artifacts it is scanning?
> 
> The CPU was still at 100% on Monday morning, so having the weekend to
> index didn't seem to improve anything; the indexing queue was up to
> about 3500.  We got a report that downloads from Archiva are extremely
> slow, so I just bounced it.  CPU was immediately at 100% after the
> bounce, and the indexing queue is at 6.  I expect that queue to
> continually rise, based on what I've seen after previous bounces.
> 
> Our upload maximum size was 10M for the longest time, but we had to 
> raise it to 20M a while back and then recently we raised it to 40M.  
> But I would think that the overwhelming majority of our artifacts are 
> 10M or less.
> 
> Is there a way to increase the logging level?  Currently, the logs
> don't show any indication of what it is grinding away on.  After the
> startup stuff, there really isn't anything in archiva.log except for
> some Authorization Denied messages -- but these have been occurring
> for months and months; I don't think they are related to the 100% CPU
> issue that just started about a week ago.
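(Editorial aside: one way that level could be raised, assuming a log4j-based setup as shipped with the 1.3 standalone bundle -- the file location and logger name are assumptions, so check your own install:)

```xml
<!-- Sketch only: raise Archiva's own packages to DEBUG in the webapp's
     log4j configuration (logger name assumed for the 1.3 bundle). -->
<logger name="org.apache.maven.archiva">
  <level value="DEBUG"/>
</logger>
```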
> 

--
Brett Porter
brett@apache.org
http://brettporter.wordpress.com/
http://au.linkedin.com/in/brettporter

RE: 100% CPU in Archiva 1.3.5

Posted by "Stallard,David" <st...@oclc.org>.
Sorry for the delayed response, here is an update.  We began to see unusual database-related errors in the log, such as foreign key constraint errors.  We ended up deleting the database and allowing Archiva to rebuild it.  This fixed all of our database issues; we were hopeful that it might fix our excessive CPU usage issue as well, but Archiva continues to use 100% CPU at all times and the indexing queue seems to perpetually grow.  This doesn't seem to be affecting performance, probably because we have moved the scheduled scans to only happen once per day during off hours -- originally they ran every hour and caused CPU to jump from 100% to 300%.

We haven't tested with 1.4 m1, we are still running 1.3.5.  We likely won't try 1.4 m1 as it's not a production release, and this is not our top priority at the moment since Archiva is still functioning normally.

Here is a full thread dump from today:

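(Editorial aside for anyone triaging similarly: with dozens of threads in a dump like the one below, tallying states first helps pick out the busy ones. A sketch, assuming the dump is saved to a file -- jstack is the standard JDK tool for this, and the inline sample exists only so the commands run as-is:)

```shell
# In practice: jstack <pid> > /tmp/dump.txt   (pid = the Archiva/Tomcat JVM)
# Tiny inline sample standing in for a real dump:
cat > /tmp/dump.txt <<'EOF'
"http-10000-31" daemon prio=10
   java.lang.Thread.State: WAITING (on object monitor)
"http-10000-30" daemon prio=10
   java.lang.Thread.State: RUNNABLE
"http-10000-29" daemon prio=10
   java.lang.Thread.State: WAITING (on object monitor)
EOF

# Tally threads by state; RUNNABLE threads are the CPU suspects:
grep 'java.lang.Thread.State' /tmp/dump.txt | sort | uniq -c | sort -rn
```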
2011-11-14 10:27:39
Full thread dump Java HotSpot(TM) 64-Bit Server VM (10.0-b22 mixed mode):

"Keep-Alive-Timer" daemon prio=10 tid=0x00002aab1352f800 nid=0x40f5 waiting on condition [0x0000000045078000..0x0000000045078d20]
   java.lang.Thread.State: TIMED_WAITING (sleeping)
        at java.lang.Thread.sleep(Native Method)
        at sun.net.www.http.KeepAliveCache.run(KeepAliveCache.java:149)
        at java.lang.Thread.run(Thread.java:619)

"http-10000-31" daemon prio=10 tid=0x00002aab11fde800 nid=0x26b4 in Object.wait() [0x000000004476f000..0x000000004476fda0]
   java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        at java.lang.Object.wait(Object.java:485)
        at org.apache.tomcat.util.net.JIoEndpoint$Worker.await(JIoEndpoint.java:416)
        - locked <0x00002aaad1a47350> (a org.apache.tomcat.util.net.JIoEndpoint$Worker)
        at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:442)
        at java.lang.Thread.run(Thread.java:619)

"http-10000-30" daemon prio=10 tid=0x00002aab121a4000 nid=0x26b3 runnable [0x0000000044d74000..0x0000000044d75d20]
   java.lang.Thread.State: RUNNABLE
        at java.net.PlainSocketImpl.socketConnect(Native Method)
        at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:333)
        - locked <0x00002aaafc06fea0> (a java.net.PlainSocketImpl)
        at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:195)
        at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:182)
        at java.net.Socket.connect(Socket.java:519)
        at java.net.Socket.connect(Socket.java:469)
        at sun.net.NetworkClient.doConnect(NetworkClient.java:157)
        at sun.net.www.http.HttpClient.openServer(HttpClient.java:394)
        at sun.net.www.http.HttpClient.openServer(HttpClient.java:529)
        - locked <0x00002aaafc06fdd0> (a sun.net.www.http.HttpClient)
        at sun.net.www.http.HttpClient.<init>(HttpClient.java:233)
        at sun.net.www.http.HttpClient.New(HttpClient.java:306)
        at sun.net.www.http.HttpClient.New(HttpClient.java:323)
        at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:788)
        at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:729)
        at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:654)
        at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:977)
        - locked <0x00002aaafc06f138> (a sun.net.www.protocol.http.HttpURLConnection)
        at java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:373)
        at org.apache.maven.wagon.providers.http.LightweightHttpWagon.fillInputData(LightweightHttpWagon.java:115)
        at org.apache.maven.wagon.StreamWagon.getInputStream(StreamWagon.java:116)
        at org.apache.maven.wagon.StreamWagon.getIfNewer(StreamWagon.java:88)
        at org.apache.maven.wagon.StreamWagon.get(StreamWagon.java:61)
        at org.apache.maven.archiva.proxy.DefaultRepositoryProxyConnectors.transferSimpleFile(DefaultRepositoryProxyConnectors.java:696)
        at org.apache.maven.archiva.proxy.DefaultRepositoryProxyConnectors.transferFile(DefaultRepositoryProxyConnectors.java:502)
        at org.apache.maven.archiva.proxy.DefaultRepositoryProxyConnectors.fetchMetatadaFromProxies(DefaultRepositoryProxyConnectors.java:290)
        at org.apache.maven.archiva.webdav.ArchivaDavResourceFactory.fetchContentFromProxies(ArchivaDavResourceFactory.java:610)
        at org.apache.maven.archiva.webdav.ArchivaDavResourceFactory.processRepository(ArchivaDavResourceFactory.java:456)
        at org.apache.maven.archiva.webdav.ArchivaDavResourceFactory.createResource(ArchivaDavResourceFactory.java:246)
        at org.apache.maven.archiva.webdav.RepositoryServlet.service(RepositoryServlet.java:117)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:803)
        at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
        at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
        at org.apache.struts2.dispatcher.FilterDispatcher.doFilter(FilterDispatcher.java:416)
        at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
        at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
        at com.opensymphony.module.sitemesh.filter.PageFilter.doFilter(PageFilter.java:39)
        at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
        at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
        at org.apache.struts2.dispatcher.ActionContextCleanUp.doFilter(ActionContextCleanUp.java:99)
        at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
        at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
        at org.apache.struts2.dispatcher.FilterDispatcher.doFilter(FilterDispatcher.java:416)
        at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
        at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
        at com.opensymphony.module.sitemesh.filter.PageFilter.doFilter(PageFilter.java:39)
        at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
        at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
        at org.apache.struts2.dispatcher.ActionContextCleanUp.doFilter(ActionContextCleanUp.java:99)
        at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
        at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
        at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:96)
        at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:76)
        at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
        at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
        at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
        at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:175)
        at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:128)
        at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
        at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
        at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:286)
        at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:844)
        at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:583)
        at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:447)
        at java.lang.Thread.run(Thread.java:619)

"http-10000-29" daemon prio=10 tid=0x00002aab121a3000 nid=0x26b2 in Object.wait() [0x0000000044c74000..0x0000000044c74ca0]
   java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        at java.lang.Object.wait(Object.java:485)
        at org.apache.tomcat.util.net.JIoEndpoint$Worker.await(JIoEndpoint.java:416)
        - locked <0x00002aaad1a5b170> (a org.apache.tomcat.util.net.JIoEndpoint$Worker)
        at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:442)
        at java.lang.Thread.run(Thread.java:619)

"http-10000-28" daemon prio=10 tid=0x00002aab10c1c000 nid=0x26b1 in Object.wait() [0x0000000044b73000..0x0000000044b73c20]
   java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        at java.lang.Object.wait(Object.java:485)
        at org.apache.tomcat.util.net.JIoEndpoint$Worker.await(JIoEndpoint.java:416)
        - locked <0x00002aaad1a5b7e0> (a org.apache.tomcat.util.net.JIoEndpoint$Worker)
        at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:442)
        at java.lang.Thread.run(Thread.java:619)

"http-10000-27" daemon prio=10 tid=0x00002aab10c1b400 nid=0x26b0 in Object.wait() [0x0000000044a72000..0x0000000044a72ba0]
   java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        at java.lang.Object.wait(Object.java:485)
        at org.apache.tomcat.util.net.JIoEndpoint$Worker.await(JIoEndpoint.java:416)
        - locked <0x00002aaad1a5be50> (a org.apache.tomcat.util.net.JIoEndpoint$Worker)
        at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:442)
        at java.lang.Thread.run(Thread.java:619)

"http-10000-26" daemon prio=10 tid=0x00002aab10c1ac00 nid=0x26af in Object.wait() [0x0000000044971000..0x0000000044971b20]
   java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        at java.lang.Object.wait(Object.java:485)
        at org.apache.tomcat.util.net.JIoEndpoint$Worker.await(JIoEndpoint.java:416)
        - locked <0x00002aaad1a5c4c0> (a org.apache.tomcat.util.net.JIoEndpoint$Worker)
        at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:442)
        at java.lang.Thread.run(Thread.java:619)

   java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        at java.lang.Object.wait(Object.java:485)
        at org.apache.tomcat.util.net.JIoEndpoint$Worker.await(JIoEndpoint.java:416)
        - locked <0x00002aaad1a5cb30> (a org.apache.tomcat.util.net.JIoEndpoint$Worker)
        at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:442)
        at java.lang.Thread.run(Thread.java:619)

"http-10000-24" daemon prio=10 tid=0x00002aab0efadc00 nid=0x26ad in Object.wait() [0x0000000044870000..0x0000000044870e20]
   java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        at java.lang.Object.wait(Object.java:485)
        at org.apache.tomcat.util.net.JIoEndpoint$Worker.await(JIoEndpoint.java:416)
        - locked <0x00002aaad1a5d1a0> (a org.apache.tomcat.util.net.JIoEndpoint$Worker)
        at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:442)
        at java.lang.Thread.run(Thread.java:619)

"pool-1-thread-1" prio=10 tid=0x00002aab138e1800 nid=0x30e3 in Object.wait() [0x000000004466e000..0x000000004466ed20]
   java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        at java.lang.Object.wait(Object.java:485)
        at edu.emory.mathcs.backport.java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:316)
        - locked <0x00002aaace72a068> (a edu.emory.mathcs.backport.java.util.concurrent.LinkedBlockingQueue$SerializableLock)
        at edu.emory.mathcs.backport.java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:921)
        at edu.emory.mathcs.backport.java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:980)
        at edu.emory.mathcs.backport.java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:528)
        at java.lang.Thread.run(Thread.java:619)

"http-10000-23" daemon prio=10 tid=0x00002aab119f1c00 nid=0x7bc5 in Object.wait() [0x0000000043f67000..0x0000000043f67d20]
   java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        at java.lang.Object.wait(Object.java:485)
        at org.apache.tomcat.util.net.JIoEndpoint$Worker.await(JIoEndpoint.java:416)
        - locked <0x00002aaad0015b30> (a org.apache.tomcat.util.net.JIoEndpoint$Worker)
        at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:442)
        at java.lang.Thread.run(Thread.java:619)

"http-10000-22" daemon prio=10 tid=0x00002aab13afa000 nid=0x7201 runnable [0x000000004456d000..0x000000004456dca0]
   java.lang.Thread.State: RUNNABLE
        at java.net.SocketInputStream.socketRead0(Native Method)
        at java.net.SocketInputStream.read(SocketInputStream.java:129)
        at org.apache.coyote.http11.InternalInputBuffer.fill(InternalInputBuffer.java:700)
        at org.apache.coyote.http11.InternalInputBuffer.parseRequestLine(InternalInputBuffer.java:366)
        at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:805)
        at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:583)
        at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:447)
        at java.lang.Thread.run(Thread.java:619)

"http-10000-21" daemon prio=10 tid=0x00002aab113bb800 nid=0x6447 in Object.wait() [0x0000000044169000..0x0000000044169d20]
   java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        at java.lang.Object.wait(Object.java:485)
        at org.apache.tomcat.util.net.JIoEndpoint$Worker.await(JIoEndpoint.java:416)
        - locked <0x00002aaacff9eec0> (a org.apache.tomcat.util.net.JIoEndpoint$Worker)
        at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:442)
        at java.lang.Thread.run(Thread.java:619)

"http-10000-20" daemon prio=10 tid=0x00002aab10587c00 nid=0x43cd in Object.wait() [0x000000004436b000..0x000000004436bca0]
   java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        at java.lang.Object.wait(Object.java:485)
        at org.apache.tomcat.util.net.JIoEndpoint$Worker.await(JIoEndpoint.java:416)
        - locked <0x00002aaacfe64408> (a org.apache.tomcat.util.net.JIoEndpoint$Worker)
        at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:442)
        at java.lang.Thread.run(Thread.java:619)

"http-10000-19" daemon prio=10 tid=0x00002aab10587800 nid=0x43cc in Object.wait() [0x0000000043a62000..0x0000000043a62c20]
   java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        at java.lang.Object.wait(Object.java:485)
        at org.apache.tomcat.util.net.JIoEndpoint$Worker.await(JIoEndpoint.java:416)
        - locked <0x00002aaacfe641a0> (a org.apache.tomcat.util.net.JIoEndpoint$Worker)
        at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:442)
        at java.lang.Thread.run(Thread.java:619)

"http-10000-18" daemon prio=10 tid=0x00002aab0f66f000 nid=0x43ca in Object.wait() [0x000000004426a000..0x000000004426ab20]
   java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        at java.lang.Object.wait(Object.java:485)
        at org.apache.tomcat.util.net.JIoEndpoint$Worker.await(JIoEndpoint.java:416)
        - locked <0x00002aaacfe63dc8> (a org.apache.tomcat.util.net.JIoEndpoint$Worker)
        at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:442)
        at java.lang.Thread.run(Thread.java:619)

"http-10000-17" daemon prio=10 tid=0x00002aab1054e400 nid=0x4264 in Object.wait() [0x0000000044068000..0x0000000044068c20]
   java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        at java.lang.Object.wait(Object.java:485)
        at org.apache.tomcat.util.net.JIoEndpoint$Worker.await(JIoEndpoint.java:416)
        - locked <0x00002aaacfe21ad8> (a org.apache.tomcat.util.net.JIoEndpoint$Worker)
        at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:442)
        at java.lang.Thread.run(Thread.java:619)

"http-10000-16" daemon prio=10 tid=0x00002aab1142a000 nid=0x425e in Object.wait() [0x0000000043b63000..0x0000000043b63d20]
   java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        at java.lang.Object.wait(Object.java:485)
        at org.apache.tomcat.util.net.JIoEndpoint$Worker.await(JIoEndpoint.java:416)
        - locked <0x00002aaacfdf8000> (a org.apache.tomcat.util.net.JIoEndpoint$Worker)
        at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:442)
        at java.lang.Thread.run(Thread.java:619)

"http-10000-15" daemon prio=10 tid=0x00002aab113c9800 nid=0x1912 in Object.wait() [0x0000000043e66000..0x0000000043e66c20]
   java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        at java.lang.Object.wait(Object.java:485)
        at org.apache.tomcat.util.net.JIoEndpoint$Worker.await(JIoEndpoint.java:416)
        - locked <0x00002aaacfa96fc0> (a org.apache.tomcat.util.net.JIoEndpoint$Worker)
        at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:442)
        at java.lang.Thread.run(Thread.java:619)

"http-10000-14" daemon prio=10 tid=0x00002aab0f38b000 nid=0x1911 in Object.wait() [0x0000000043d65000..0x0000000043d65ba0]
   java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        at java.lang.Object.wait(Object.java:485)
        at org.apache.tomcat.util.net.JIoEndpoint$Worker.await(JIoEndpoint.java:416)
        - locked <0x00002aaacfaab488> (a org.apache.tomcat.util.net.JIoEndpoint$Worker)
        at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:442)
        at java.lang.Thread.run(Thread.java:619)

   java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        at java.lang.Object.wait(Object.java:485)
        at org.apache.tomcat.util.net.JIoEndpoint$Worker.await(JIoEndpoint.java:416)
        - locked <0x00002aaacfabf228> (a org.apache.tomcat.util.net.JIoEndpoint$Worker)
        at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:442)
        at java.lang.Thread.run(Thread.java:619)

"TP-Monitor" daemon prio=10 tid=0x00002aab10fa7800 nid=0x17db in Object.wait() [0x0000000043961000..0x0000000043961ba0]
   java.lang.Thread.State: TIMED_WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        at org.apache.tomcat.util.threads.ThreadPool$MonitorRunnable.run(ThreadPool.java:565)
        - locked <0x00002aaacefb9dc8> (a org.apache.tomcat.util.threads.ThreadPool$MonitorRunnable)
        at java.lang.Thread.run(Thread.java:619)

"TP-Processor4" daemon prio=10 tid=0x00002aab10fa6c00 nid=0x17da runnable [0x0000000043860000..0x0000000043860b20]
   java.lang.Thread.State: RUNNABLE
        at java.net.PlainSocketImpl.socketAccept(Native Method)
        at java.net.PlainSocketImpl.accept(PlainSocketImpl.java:384)
        - locked <0x00002aaacefd31a0> (a java.net.SocksSocketImpl)
        at java.net.ServerSocket.implAccept(ServerSocket.java:453)
        at java.net.ServerSocket.accept(ServerSocket.java:421)
        at org.apache.jk.common.ChannelSocket.accept(ChannelSocket.java:306)
        at org.apache.jk.common.ChannelSocket.acceptConnections(ChannelSocket.java:660)
        at org.apache.jk.common.ChannelSocket$SocketAcceptor.runIt(ChannelSocket.java:870)
        at org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(ThreadPool.java:690)
        at java.lang.Thread.run(Thread.java:619)

"TP-Processor3" daemon prio=10 tid=0x00002aab11027400 nid=0x17d9 in Object.wait() [0x000000004375f000..0x000000004375faa0]
   java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        - waiting on <0x00002aaacefb91f8> (a org.apache.tomcat.util.threads.ThreadPool$ControlRunnable)
        at java.lang.Object.wait(Object.java:485)
        at org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(ThreadPool.java:662)
        - locked <0x00002aaacefb91f8> (a org.apache.tomcat.util.threads.ThreadPool$ControlRunnable)
        at java.lang.Thread.run(Thread.java:619)

"TP-Processor2" daemon prio=10 tid=0x00002aab11026400 nid=0x17d8 in Object.wait() [0x000000004365e000..0x000000004365ee20]
   java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        - waiting on <0x00002aaacefb95e8> (a org.apache.tomcat.util.threads.ThreadPool$ControlRunnable)
        at java.lang.Object.wait(Object.java:485)
        at org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(ThreadPool.java:662)
        - locked <0x00002aaacefb95e8> (a org.apache.tomcat.util.threads.ThreadPool$ControlRunnable)
        at java.lang.Thread.run(Thread.java:619)

"TP-Processor1" daemon prio=10 tid=0x00002aab0f143400 nid=0x17d7 in Object.wait() [0x000000004355d000..0x000000004355dda0]
   java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        - waiting on <0x00002aaacefb99d8> (a org.apache.tomcat.util.threads.ThreadPool$ControlRunnable)
        at java.lang.Object.wait(Object.java:485)
        at org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(ThreadPool.java:662)
        - locked <0x00002aaacefb99d8> (a org.apache.tomcat.util.threads.ThreadPool$ControlRunnable)
        at java.lang.Thread.run(Thread.java:619)

"http-10000-12" daemon prio=10 tid=0x00002aab0ef66400 nid=0x17d6 in Object.wait() [0x000000004345c000..0x000000004345cd20]
   java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        at java.lang.Object.wait(Object.java:485)
        at org.apache.tomcat.util.net.JIoEndpoint$Worker.await(JIoEndpoint.java:416)
        - locked <0x00002aaacf3e01d8> (a org.apache.tomcat.util.net.JIoEndpoint$Worker)
        at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:442)
        at java.lang.Thread.run(Thread.java:619)

"http-10000-11" daemon prio=10 tid=0x00002aab0ef65000 nid=0x17d5 in Object.wait() [0x000000004335b000..0x000000004335bca0]
   java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        at java.lang.Object.wait(Object.java:485)
        at org.apache.tomcat.util.net.JIoEndpoint$Worker.await(JIoEndpoint.java:416)
        - locked <0x00002aaacf3e05d8> (a org.apache.tomcat.util.net.JIoEndpoint$Worker)
        at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:442)
        at java.lang.Thread.run(Thread.java:619)

"http-10000-10" daemon prio=10 tid=0x00002aab0ef63400 nid=0x17d4 in Object.wait() [0x000000004325a000..0x000000004325ac20]
   java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        at java.lang.Object.wait(Object.java:485)
        at org.apache.tomcat.util.net.JIoEndpoint$Worker.await(JIoEndpoint.java:416)
        - locked <0x00002aaacf3e07e8> (a org.apache.tomcat.util.net.JIoEndpoint$Worker)
        at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:442)
        at java.lang.Thread.run(Thread.java:619)

"http-10000-9" daemon prio=10 tid=0x00002aab0ef61c00 nid=0x17d3 in Object.wait() [0x0000000043159000..0x0000000043159ba0]
   java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        at java.lang.Object.wait(Object.java:485)
        at org.apache.tomcat.util.net.JIoEndpoint$Worker.await(JIoEndpoint.java:416)
        - locked <0x00002aaacf3e0bb8> (a org.apache.tomcat.util.net.JIoEndpoint$Worker)
        at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:442)
        at java.lang.Thread.run(Thread.java:619)

"http-10000-8" daemon prio=10 tid=0x00002aab0f61ec00 nid=0x17d2 in Object.wait() [0x0000000043058000..0x0000000043058b20]
   java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        at java.lang.Object.wait(Object.java:485)
        at org.apache.tomcat.util.net.JIoEndpoint$Worker.await(JIoEndpoint.java:416)
        - locked <0x00002aaacf3e0f50> (a org.apache.tomcat.util.net.JIoEndpoint$Worker)
        at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:442)
        at java.lang.Thread.run(Thread.java:619)

"http-10000-7" daemon prio=10 tid=0x00002aab0f61d400 nid=0x17d1 in Object.wait() [0x0000000042f57000..0x0000000042f57aa0]
   java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        at java.lang.Object.wait(Object.java:485)
        at org.apache.tomcat.util.net.JIoEndpoint$Worker.await(JIoEndpoint.java:416)
        - locked <0x00002aaacf3e1158> (a org.apache.tomcat.util.net.JIoEndpoint$Worker)
        at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:442)
        at java.lang.Thread.run(Thread.java:619)

"http-10000-6" daemon prio=10 tid=0x00002aab0f61bc00 nid=0x17d0 in Object.wait() [0x0000000042e56000..0x0000000042e56e20]
   java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        at java.lang.Object.wait(Object.java:485)
        at org.apache.tomcat.util.net.JIoEndpoint$Worker.await(JIoEndpoint.java:416)
        - locked <0x00002aaacf3e14f0> (a org.apache.tomcat.util.net.JIoEndpoint$Worker)
        at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:442)
        at java.lang.Thread.run(Thread.java:619)

"http-10000-5" daemon prio=10 tid=0x00002aab119f9800 nid=0x17cf in Object.wait() [0x0000000042d55000..0x0000000042d55da0]
   java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        at java.lang.Object.wait(Object.java:485)
        at org.apache.tomcat.util.net.JIoEndpoint$Worker.await(JIoEndpoint.java:416)
        - locked <0x00002aaacf3f82f0> (a org.apache.tomcat.util.net.JIoEndpoint$Worker)
        at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:442)
        at java.lang.Thread.run(Thread.java:619)

   java.lang.Thread.State: RUNNABLE
        at java.net.SocketInputStream.socketRead0(Native Method)
        at java.net.SocketInputStream.read(SocketInputStream.java:129)
        at org.apache.coyote.http11.InternalInputBuffer.fill(InternalInputBuffer.java:700)
        at org.apache.coyote.http11.InternalInputBuffer.parseRequestLine(InternalInputBuffer.java:366)
        at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:805)
        at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:583)
        at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:447)
        at java.lang.Thread.run(Thread.java:619)

"http-10000-3" daemon prio=10 tid=0x00002aab119f7000 nid=0x17cd in Object.wait() [0x0000000042b53000..0x0000000042b53ca0]
   java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        at java.lang.Object.wait(Object.java:485)
        at org.apache.tomcat.util.net.JIoEndpoint$Worker.await(JIoEndpoint.java:416)
        - locked <0x00002aaacf3f93e0> (a org.apache.tomcat.util.net.JIoEndpoint$Worker)
        at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:442)
        at java.lang.Thread.run(Thread.java:619)

"http-10000-2" daemon prio=10 tid=0x00002aab131c0000 nid=0x17cc runnable [0x0000000042a52000..0x0000000042a52c20]
   java.lang.Thread.State: RUNNABLE
        at java.net.SocketInputStream.socketRead0(Native Method)
        at java.net.SocketInputStream.read(SocketInputStream.java:129)
        at org.apache.coyote.http11.InternalInputBuffer.fill(InternalInputBuffer.java:700)
        at org.apache.coyote.http11.InternalInputBuffer.parseRequestLine(InternalInputBuffer.java:366)
        at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:805)
        at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:583)
        at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:447)
        at java.lang.Thread.run(Thread.java:619)

"http-10000-1" daemon prio=10 tid=0x00002aab131c1800 nid=0x17cb in Object.wait() [0x0000000042951000..0x0000000042951ba0]
   java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        at java.lang.Object.wait(Object.java:485)
        at org.apache.tomcat.util.net.JIoEndpoint$Worker.await(JIoEndpoint.java:416)
        - locked <0x00002aaacf3fea38> (a org.apache.tomcat.util.net.JIoEndpoint$Worker)
        at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:442)
        at java.lang.Thread.run(Thread.java:619)

"http-10000-Acceptor-0" daemon prio=10 tid=0x00002aab0f144c00 nid=0x17ca runnable [0x0000000042850000..0x0000000042850b20]
   java.lang.Thread.State: RUNNABLE
        at java.net.PlainSocketImpl.socketAccept(Native Method)
        at java.net.PlainSocketImpl.accept(PlainSocketImpl.java:384)
        - locked <0x00002aaacea5e348> (a java.net.SocksSocketImpl)
        at java.net.ServerSocket.implAccept(ServerSocket.java:453)
        at java.net.ServerSocket.accept(ServerSocket.java:421)
        at org.apache.tomcat.util.net.DefaultServerSocketFactory.acceptSocket(DefaultServerSocketFactory.java:61)
        at org.apache.tomcat.util.net.JIoEndpoint$Acceptor.run(JIoEndpoint.java:310)
        at java.lang.Thread.run(Thread.java:619)

"ContainerBackgroundProcessor[StandardEngine[Catalina]]" daemon prio=10 tid=0x00002aab0f144400 nid=0x17c9 waiting on condition [0x000000004274f000..0x000000004274faa0]
   java.lang.Thread.State: TIMED_WAITING (sleeping)
        at java.lang.Thread.sleep(Native Method)
        at org.apache.catalina.core.ContainerBase$ContainerBackgroundProcessor.run(ContainerBase.java:1579)
        at java.lang.Thread.run(Thread.java:619)

"pool-3-thread-1" prio=10 tid=0x00002aab11910400 nid=0x17c8 runnable [0x000000004264e000..0x000000004264ee20]
   java.lang.Thread.State: RUNNABLE
        at java.util.zip.Deflater.deflateBytes(Native Method)
        at java.util.zip.Deflater.deflate(Deflater.java:290)
        - locked <0x00002aaafbaf04f0> (a java.util.zip.Deflater)
        at java.util.zip.DeflaterOutputStream.deflate(DeflaterOutputStream.java:159)
        at java.util.zip.DeflaterOutputStream.write(DeflaterOutputStream.java:118)
        at java.util.zip.ZipOutputStream.write(ZipOutputStream.java:272)
        - locked <0x00002aaafbaf0480> (a java.util.zip.ZipOutputStream)
        at org.sonatype.nexus.index.packer.DefaultIndexPacker.writeFile(DefaultIndexPacker.java:387)
        at org.sonatype.nexus.index.packer.DefaultIndexPacker.packDirectory(DefaultIndexPacker.java:345)
        at org.sonatype.nexus.index.packer.DefaultIndexPacker.packIndexArchive(DefaultIndexPacker.java:267)
        at org.sonatype.nexus.index.packer.DefaultIndexPacker.writeIndexArchive(DefaultIndexPacker.java:238)
        at org.sonatype.nexus.index.packer.DefaultIndexPacker.packIndex(DefaultIndexPacker.java:155)
        at org.apache.maven.archiva.scheduled.executors.ArchivaIndexingTaskExecutor.finishIndexingTask(ArchivaIndexingTaskExecutor.java:179)
        at org.apache.maven.archiva.scheduled.executors.ArchivaIndexingTaskExecutor.executeTask(ArchivaIndexingTaskExecutor.java:147)
        - locked <0x00002aaacebcde98> (a org.sonatype.nexus.index.DefaultIndexerEngine)
        at org.codehaus.plexus.taskqueue.execution.ThreadedTaskQueueExecutor$ExecutorRunnable$1.run(ThreadedTaskQueueExecutor.java:116)
        at edu.emory.mathcs.backport.java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:442)
        at edu.emory.mathcs.backport.java.util.concurrent.FutureTask.run(FutureTask.java:176)
        at edu.emory.mathcs.backport.java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:987)
        at edu.emory.mathcs.backport.java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:528)
        at java.lang.Thread.run(Thread.java:619)

"Thread-3" daemon prio=10 tid=0x00002aab0eef2000 nid=0x17c7 in Object.wait() [0x000000004254d000..0x000000004254dda0]
   java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        at java.lang.Object.wait(Object.java:485)
        at edu.emory.mathcs.backport.java.util.concurrent.FutureTask.waitFor(FutureTask.java:267)
        at edu.emory.mathcs.backport.java.util.concurrent.FutureTask.get(FutureTask.java:117)
        - locked <0x00002aaad7a52068> (a edu.emory.mathcs.backport.java.util.concurrent.FutureTask)
        at org.codehaus.plexus.taskqueue.execution.ThreadedTaskQueueExecutor$ExecutorRunnable.waitForTask(ThreadedTaskQueueExecutor.java:159)
        at org.codehaus.plexus.taskqueue.execution.ThreadedTaskQueueExecutor$ExecutorRunnable.run(ThreadedTaskQueueExecutor.java:127)

"pool-2-thread-1" prio=10 tid=0x00002aab13d29000 nid=0x17c6 in Object.wait() [0x000000004244c000..0x000000004244cd20]
   java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        at java.lang.Object.wait(Object.java:485)
        at edu.emory.mathcs.backport.java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:316)
        - locked <0x00002aaacf2a1208> (a edu.emory.mathcs.backport.java.util.concurrent.LinkedBlockingQueue$SerializableLock)
        at edu.emory.mathcs.backport.java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:921)
        at edu.emory.mathcs.backport.java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:980)
        at edu.emory.mathcs.backport.java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:528)
        at java.lang.Thread.run(Thread.java:619)

"Thread-2" daemon prio=10 tid=0x00002aab0eef1800 nid=0x17c5 in Object.wait() [0x000000004234b000..0x000000004234bca0]
   java.lang.Thread.State: TIMED_WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        at java.lang.Object.wait(Object.java:443)
        at edu.emory.mathcs.backport.java.util.concurrent.TimeUnit.timedWait(TimeUnit.java:364)
        at edu.emory.mathcs.backport.java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:349)
        - locked <0x00002aaacf175858> (a edu.emory.mathcs.backport.java.util.concurrent.LinkedBlockingQueue$SerializableLock)
        at org.codehaus.plexus.taskqueue.DefaultTaskQueue.poll(DefaultTaskQueue.java:228)
        at org.codehaus.plexus.taskqueue.execution.ThreadedTaskQueueExecutor$ExecutorRunnable.run(ThreadedTaskQueueExecutor.java:94)

"Thread-1" daemon prio=10 tid=0x00002aab13982800 nid=0x17c4 in Object.wait() [0x000000004224a000..0x000000004224ac20]
   java.lang.Thread.State: TIMED_WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        at java.lang.Object.wait(Object.java:443)
        at edu.emory.mathcs.backport.java.util.concurrent.TimeUnit.timedWait(TimeUnit.java:364)
        at edu.emory.mathcs.backport.java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:349)
        - locked <0x00002aaacf175578> (a edu.emory.mathcs.backport.java.util.concurrent.LinkedBlockingQueue$SerializableLock)
        at org.codehaus.plexus.taskqueue.DefaultTaskQueue.poll(DefaultTaskQueue.java:228)
        at org.codehaus.plexus.taskqueue.execution.ThreadedTaskQueueExecutor$ExecutorRunnable.run(ThreadedTaskQueueExecutor.java:94)

"derby.rawStoreDaemon" daemon prio=10 tid=0x00002aab0fced400 nid=0x17c3 in Object.wait() [0x0000000042149000..0x0000000042149ba0]
   java.lang.Thread.State: TIMED_WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        at org.apache.derby.impl.services.daemon.BasicDaemon.rest(Unknown Source)
        - locked <0x00002aaacee2cbd0> (a org.apache.derby.impl.services.daemon.BasicDaemon)
        at org.apache.derby.impl.services.daemon.BasicDaemon.run(Unknown Source)
        at java.lang.Thread.run(Thread.java:619)

"defaultScheduler_QuartzSchedulerThread" prio=10 tid=0x00002aab0fcec800 nid=0x17c2 sleeping[0x0000000042048000..0x0000000042048b20]
   java.lang.Thread.State: TIMED_WAITING (sleeping)
        at java.lang.Thread.sleep(Native Method)
        at org.quartz.core.QuartzSchedulerThread.run(QuartzSchedulerThread.java:399)

"defaultScheduler_Worker-14" prio=10 tid=0x00002aab11995c00 nid=0x17c1 in Object.wait() [0x0000000041f47000..0x0000000041f47aa0]
   java.lang.Thread.State: TIMED_WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        at org.quartz.simpl.SimpleThreadPool.getNextRunnable(SimpleThreadPool.java:423)
        - locked <0x00002aaacf40e248> (a java.lang.Object)
        at org.quartz.simpl.SimpleThreadPool.access$000(SimpleThreadPool.java:53)
        at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:514)

"defaultScheduler_Worker-13" prio=10 tid=0x00002aab0f6e8800 nid=0x17c0 in Object.wait() [0x0000000041e46000..0x0000000041e46e20]
   java.lang.Thread.State: TIMED_WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        at org.quartz.simpl.SimpleThreadPool.getNextRunnable(SimpleThreadPool.java:423)
        - locked <0x00002aaacf40e248> (a java.lang.Object)
        at org.quartz.simpl.SimpleThreadPool.access$000(SimpleThreadPool.java:53)
        at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:514)

"defaultScheduler_Worker-12" prio=10 tid=0x00002aab0f6e7000 nid=0x17bf in Object.wait() [0x0000000041d45000..0x0000000041d45da0]
   java.lang.Thread.State: TIMED_WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        at org.quartz.simpl.SimpleThreadPool.getNextRunnable(SimpleThreadPool.java:423)
        - locked <0x00002aaacf40e248> (a java.lang.Object)
        at org.quartz.simpl.SimpleThreadPool.access$000(SimpleThreadPool.java:53)
        at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:514)

"defaultScheduler_Worker-11" prio=10 tid=0x00002aab0f6e5800 nid=0x17be in Object.wait() [0x0000000041c44000..0x0000000041c44d20]
   java.lang.Thread.State: TIMED_WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        at org.quartz.simpl.SimpleThreadPool.getNextRunnable(SimpleThreadPool.java:423)
        - locked <0x00002aaacf40e248> (a java.lang.Object)
        at org.quartz.simpl.SimpleThreadPool.access$000(SimpleThreadPool.java:53)
        at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:514)

"defaultScheduler_Worker-10" prio=10 tid=0x00002aab0f6e4000 nid=0x17bd in Object.wait() [0x0000000041b43000..0x0000000041b43ca0]
   java.lang.Thread.State: TIMED_WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        at org.quartz.simpl.SimpleThreadPool.getNextRunnable(SimpleThreadPool.java:423)
        - locked <0x00002aaacf40e248> (a java.lang.Object)
        at org.quartz.simpl.SimpleThreadPool.access$000(SimpleThreadPool.java:53)
        at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:514)

"defaultScheduler_Worker-9" prio=10 tid=0x00002aab0f6e2c00 nid=0x17bc in Object.wait() [0x0000000041a42000..0x0000000041a42c20]
   java.lang.Thread.State: TIMED_WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        at org.quartz.simpl.SimpleThreadPool.getNextRunnable(SimpleThreadPool.java:423)
        - locked <0x00002aaacf40e248> (a java.lang.Object)
        at org.quartz.simpl.SimpleThreadPool.access$000(SimpleThreadPool.java:53)
        at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:514)

"defaultScheduler_Worker-8" prio=10 tid=0x00002aab13a46c00 nid=0x17bb in Object.wait() [0x0000000041941000..0x0000000041941ba0]
   java.lang.Thread.State: TIMED_WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        at org.quartz.simpl.SimpleThreadPool.getNextRunnable(SimpleThreadPool.java:423)
        - locked <0x00002aaacf40e248> (a java.lang.Object)
        at org.quartz.simpl.SimpleThreadPool.access$000(SimpleThreadPool.java:53)
        at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:514)

"defaultScheduler_Worker-7" prio=10 tid=0x00002aab13a45800 nid=0x17ba in Object.wait() [0x0000000041840000..0x0000000041840b20]
   java.lang.Thread.State: TIMED_WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        at org.quartz.simpl.SimpleThreadPool.getNextRunnable(SimpleThreadPool.java:423)
        - locked <0x00002aaacf40e248> (a java.lang.Object)
        at org.quartz.simpl.SimpleThreadPool.access$000(SimpleThreadPool.java:53)
        at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:514)

"defaultScheduler_Worker-6" prio=10 tid=0x00002aab0f4c1000 nid=0x17b9 in Object.wait() [0x000000004173f000..0x000000004173faa0]
   java.lang.Thread.State: TIMED_WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        at org.quartz.simpl.SimpleThreadPool.getNextRunnable(SimpleThreadPool.java:423)
        - locked <0x00002aaacf40e248> (a java.lang.Object)
        at org.quartz.simpl.SimpleThreadPool.access$000(SimpleThreadPool.java:53)
        at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:514)

"defaultScheduler_Worker-5" prio=10 tid=0x00002aab1198d400 nid=0x17b8 in Object.wait() [0x000000004163e000..0x000000004163ee20]
   java.lang.Thread.State: TIMED_WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        at org.quartz.simpl.SimpleThreadPool.getNextRunnable(SimpleThreadPool.java:423)
        - locked <0x00002aaacf40e248> (a java.lang.Object)
        at org.quartz.simpl.SimpleThreadPool.access$000(SimpleThreadPool.java:53)
        at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:514)

"defaultScheduler_Worker-4" prio=10 tid=0x00002aab10fc2400 nid=0x17b7 in Object.wait() [0x000000004153d000..0x000000004153dda0]
   java.lang.Thread.State: TIMED_WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        at org.quartz.simpl.SimpleThreadPool.getNextRunnable(SimpleThreadPool.java:423)
        - locked <0x00002aaacf40e248> (a java.lang.Object)
        at org.quartz.simpl.SimpleThreadPool.access$000(SimpleThreadPool.java:53)
        at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:514)

"defaultScheduler_Worker-3" prio=10 tid=0x00002aab10fc1400 nid=0x17b6 in Object.wait() [0x000000004143c000..0x000000004143cd20]
   java.lang.Thread.State: TIMED_WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        at org.quartz.simpl.SimpleThreadPool.getNextRunnable(SimpleThreadPool.java:423)
        - locked <0x00002aaacf40e248> (a java.lang.Object)
        at org.quartz.simpl.SimpleThreadPool.access$000(SimpleThreadPool.java:53)
        at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:514)

"defaultScheduler_Worker-2" prio=10 tid=0x00002aab12ed3c00 nid=0x17b5 in Object.wait() [0x000000004133b000..0x000000004133bca0]
   java.lang.Thread.State: TIMED_WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        at org.quartz.simpl.SimpleThreadPool.getNextRunnable(SimpleThreadPool.java:423)
        - locked <0x00002aaacf40e248> (a java.lang.Object)
        at org.quartz.simpl.SimpleThreadPool.access$000(SimpleThreadPool.java:53)
        at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:514)

"defaultScheduler_Worker-1" prio=10 tid=0x00002aab10faa400 nid=0x17b4 in Object.wait() [0x000000004123a000..0x000000004123ac20]
   java.lang.Thread.State: TIMED_WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        at org.quartz.simpl.SimpleThreadPool.getNextRunnable(SimpleThreadPool.java:423)
        - locked <0x00002aaacf40e248> (a java.lang.Object)
        at org.quartz.simpl.SimpleThreadPool.access$000(SimpleThreadPool.java:53)
        at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:514)

"defaultScheduler_Worker-0" prio=10 tid=0x00002aab110edc00 nid=0x17b3 in Object.wait() [0x0000000041139000..0x0000000041139ba0]
   java.lang.Thread.State: TIMED_WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        at org.quartz.simpl.SimpleThreadPool.getNextRunnable(SimpleThreadPool.java:423)
        - locked <0x00002aaacf40e248> (a java.lang.Object)
        at org.quartz.simpl.SimpleThreadPool.access$000(SimpleThreadPool.java:53)
        at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:514)

"derby.rawStoreDaemon" daemon prio=10 tid=0x00002aab108a7400 nid=0x17b0 in Object.wait() [0x0000000041038000..0x0000000041038b20]
   java.lang.Thread.State: TIMED_WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        at org.apache.derby.impl.services.daemon.BasicDaemon.rest(Unknown Source)
        - locked <0x00002aaaceb21b50> (a org.apache.derby.impl.services.daemon.BasicDaemon)
        at org.apache.derby.impl.services.daemon.BasicDaemon.run(Unknown Source)
        at java.lang.Thread.run(Thread.java:619)

"derby.antiGC" daemon prio=10 tid=0x00002aab1124b000 nid=0x17af in Object.wait() [0x0000000040f37000..0x0000000040f37aa0]
   java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        - waiting on <0x00002aaaceb18638> (a org.apache.derby.impl.services.monitor.AntiGC)
        at java.lang.Object.wait(Object.java:485)
        at org.apache.derby.impl.services.monitor.AntiGC.run(Unknown Source)
        - locked <0x00002aaaceb18638> (a org.apache.derby.impl.services.monitor.AntiGC)
        at java.lang.Thread.run(Thread.java:619)

"Low Memory Detector" daemon prio=10 tid=0x00002aab0eaae800 nid=0x17a8 runnable [0x0000000000000000..0x0000000000000000]
   java.lang.Thread.State: RUNNABLE

"CompilerThread1" daemon prio=10 tid=0x00002aab0eaac000 nid=0x17a7 waiting on condition [0x0000000000000000..0x0000000040c335c0]
   java.lang.Thread.State: RUNNABLE

"CompilerThread0" daemon prio=10 tid=0x00002aab0eaaa000 nid=0x17a6 waiting on condition [0x0000000000000000..0x0000000040b32570]
   java.lang.Thread.State: RUNNABLE

"Signal Dispatcher" daemon prio=10 tid=0x00002aab0eaa8400 nid=0x17a5 runnable [0x0000000000000000..0x0000000000000000]
   java.lang.Thread.State: RUNNABLE

"Finalizer" daemon prio=10 tid=0x00002aab0ea82000 nid=0x17a4 in Object.wait() [0x0000000040931000..0x0000000040931ba0]
   java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:116)
        - locked <0x00002aaaceae5448> (a java.lang.ref.ReferenceQueue$Lock)
        at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:132)
        at java.lang.ref.Finalizer$FinalizerThread.run(Finalizer.java:159)

"Reference Handler" daemon prio=10 tid=0x00002aab0ea7b400 nid=0x17a3 in Object.wait() [0x0000000040830000..0x0000000040830b20]
   java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        at java.lang.Object.wait(Object.java:485)
        at java.lang.ref.Reference$ReferenceHandler.run(Reference.java:116)
        - locked <0x00002aaaceae95d8> (a java.lang.ref.Reference$Lock)

"main" prio=10 tid=0x0000000040111c00 nid=0x179c runnable [0x0000000040229000..0x000000004022af60]
   java.lang.Thread.State: RUNNABLE
        at java.net.PlainSocketImpl.socketAccept(Native Method)
        at java.net.PlainSocketImpl.accept(PlainSocketImpl.java:384)
        - locked <0x00002aaacf5e2f18> (a java.net.SocksSocketImpl)
        at java.net.ServerSocket.implAccept(ServerSocket.java:453)
        at java.net.ServerSocket.accept(ServerSocket.java:421)
        at org.apache.catalina.core.StandardServer.await(StandardServer.java:389)
        at org.apache.catalina.startup.Catalina.await(Catalina.java:642)
        at org.apache.catalina.startup.Catalina.start(Catalina.java:602)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:288)
        at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:413)

"VM Thread" prio=10 tid=0x00002aab0e8ca000 nid=0x17a1 runnable

"GC task thread#0 (ParallelGC)" prio=10 tid=0x000000004011c800 nid=0x179d runnable

"GC task thread#1 (ParallelGC)" prio=10 tid=0x000000004011dc00 nid=0x179e runnable

"GC task thread#2 (ParallelGC)" prio=10 tid=0x000000004011f000 nid=0x179f runnable

"GC task thread#3 (ParallelGC)" prio=10 tid=0x0000000040120400 nid=0x17a0 runnable

"VM Periodic Task Thread" prio=10 tid=0x00002aab0eab0800 nid=0x17a9 waiting on condition

JNI global references: 767

Heap
 PSYoungGen      total 338112K, used 61456K [0x00002aaaf8d00000, 0x00002aab0e250000, 0x00002aab0e250000)
  eden space 328448K, 16% used [0x00002aaaf8d00000,0x00002aaafc0727a8,0x00002aab0cdc0000)
  from space 9664K, 90% used [0x00002aab0d800000,0x00002aab0e093c28,0x00002aab0e170000)
  to   space 10496K, 0% used [0x00002aab0cdc0000,0x00002aab0cdc0000,0x00002aab0d800000)
 PSOldGen        total 691456K, used 171560K [0x00002aaace250000, 0x00002aaaf8590000, 0x00002aaaf8d00000)
  object space 691456K, 24% used [0x00002aaace250000,0x00002aaad89da008,0x00002aaaf8590000)
 PSPermGen       total 71552K, used 70735K [0x00002aaaae250000, 0x00002aaab2830000, 0x00002aaace250000)
  object space 71552K, 98% used [0x00002aaaae250000,0x00002aaab2763ec0,0x00002aaab2830000)
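A standard JVM triage step (not from this thread, but useful against the dump above) is to match the hot thread reported by `top -H -p <pid>` with the dump: `top -H` prints per-thread CPU usage with decimal thread IDs, and the `nid=` values in the dump are those same IDs in hex. A minimal sketch, assuming `top` had shown thread 6088 at ~100% CPU:

```shell
# Convert the decimal thread ID from `top -H -p <archiva-pid>` to hex
# and look for the matching nid= in the thread dump:
printf '0x%x\n' 6088
# -> 0x17c8, which is "pool-3-thread-1" above: the RUNNABLE thread
# deflating the Nexus index archive, consistent with indexing being
# the hot spot.
```
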





-----Original Message-----
From: Olivier Lamy [mailto:olamy@apache.org] 
Sent: Wednesday, November 02, 2011 3:52 AM
To: users@archiva.apache.org
Subject: RE: 100% CPU in Archiva 1.3.5

Maybe a thread dump could help us debug.
Any chance you could provide one?
Did you have time to test with 1.4-M1?
--
Olivier
On 1 Nov 2011 15:43, "Stallard,David" <st...@oclc.org> wrote:

> We do have a fairly large number of Continuous Integration builds that 
> can trigger many times per day, each build uploading new snapshots.  
> It sounds like that could be a problem based on your second point below.
> However, that structure has been in place for well over a year and 
> we've only had this CPU problem for 2 weeks.  Maybe we just happened 
> to cross some threshold that has made it more problematic?
>
> I'll look into reducing the number of scans and keeping them to 
> off-peak hours.
>
> -----Original Message-----
> From: Brett Porter [mailto:brett@porterclan.net] On Behalf Of Brett 
> Porter
> Sent: Monday, October 31, 2011 6:34 PM
> To: users@archiva.apache.org
> Subject: Re: 100% CPU in Archiva 1.3.5
>
> Top replying with a few points:
>
> - artifact upload limits are only via the web UI, maven deployments 
> can push artifacts as large as needed.
> - individual artifacts of that size shouldn't be a big problem (it's a 
> once-off hit), but regularly updating snapshots of that size will 
> cause it to build up
> - the scan time below is quite long, particularly for the purge. You 
> might want to push the scanning schedule out to an "off peak" time - 
> the purge doesn't need to run that often, and most operations are done 
> on-demand with the scan just filling in any gaps.
>
> HTH,
> Brett
>
> On 01/11/2011, at 2:15 AM, Stallard,David wrote:
>
> > I'm not sure if this is useful, but here are the summaries of the 
> > most
>
> > recent hourly scans...from archiva.log:
> >
> >
> > .\ Scan of internal \.__________________________________________
> >  Repository Dir    : <path removed>/internal
> >  Repository Name   : Archiva Managed Internal Repository
> >  Repository Layout : default
> >  Known Consumers   : (7 configured)
> >                      repository-purge (Total: 58857ms; Avg.: 1; Count:
> > 58702)
> >                      metadata-updater (Total: 419ms; Avg.: 209; Count:
> > 2)
> >                      auto-remove
> >                      auto-rename
> >                      update-db-artifact (Total: 98ms; Avg.: 49; Count:
> > 2)
> >                      create-missing-checksums (Total: 120ms; Avg.: 
> > 60;
> > Count: 2)
> >                      index-content (Total: 0ms; Avg.: 0; Count: 7) 
> > Invalid Consumers : <none>
> >  Duration          : 2 Minutes 56 Seconds 896 Milliseconds
> >  When Gathered     : 10/31/11 11:02 AM
> >  Total File Count  : 268305
> >  Avg Time Per File :
> > ______________________________________________________________
> >
> >
> > .\ Scan of snapshots \.__________________________________________
> >  Repository Dir    : <path removed>/snapshots
> >  Repository Name   : Archiva Managed Snapshot Repository
> >  Repository Layout : default
> >  Known Consumers   : (7 configured)
> >                      repository-purge (Total: 325200ms; Avg.: 8;
> Count:
> > 39805)
> >                      metadata-updater (Total: 5915ms; Avg.: 50; Count:
> > 116)
> >                      auto-remove
> >                      auto-rename
> >                      update-db-artifact (Total: 17211ms; Avg.: 148;
> > Count: 116)
> >                      create-missing-checksums (Total: 15559ms; Avg.:
> > 134; Count: 116)
> >                      index-content (Total: 34ms; Avg.: 0; Count: 
> > 475)
>
> > Invalid Consumers : <none>
> >  Duration          : 7 Minutes 17 Seconds 416 Milliseconds
> >  When Gathered     : 10/31/11 11:10 AM
> >  Total File Count  : 166275
> >  Avg Time Per File : 2 Milliseconds
> > ______________________________________________________________
> >
> >
> >
> > -----Original Message-----
> > From: Stallard,David
> > Sent: Monday, October 31, 2011 9:57 AM
> > To: 'users@archiva.apache.org'
> > Subject: RE: 100% CPU in Archiva 1.3.5
> >
> > I need to correct my previous message...it turns out we do have 
> > artifacts larger than 40M even though that is the defined maximum, 
> > I'm
>
> > not sure at this point how that is happening.
> >
> > In our internal repository we have 40 artifacts which are over 100M 
> > in
>
> > size, with the largest one being 366M.  In snapshots, we have 61 
> > artifacts that are >100M, where the largest is 342M.  I'm not sure 
> > how
>
> > significant these sizes are in terms of the indexer, but wanted to 
> > accurately reflect what we're dealing with.
> >
> > -----Original Message-----
> > From: Stallard,David
> > Sent: Monday, October 31, 2011 9:43 AM
> > To: 'users@archiva.apache.org'
> > Subject: RE: 100% CPU in Archiva 1.3.5
> >
> > Brett Porter said:
> >>> It's not unexpected that indexing drives it to 100% CPU 
> >>> momentarily,
> > but causing it to become unavailable is unusual.
> > How big are the artifacts it is scanning?<<
> >
> > The CPU was still at 100% on Monday morning, so having the weekend 
> > to index didn't seem to improve anything; the indexing queue was up 
> > to about 3500.  We got a report that downloads from Archiva are 
> > extremely
>
> > slow, so I just bounced it.  CPU was immedately at 100% after the 
> > bounce, and the indexing queue is at 6.  I expect that queue to 
> > continually rise, based on what I've seen after previous bounces.
> >
> > Our upload maximum size was 10M for the longest time, but we had to 
> > raise it to 20M a while back and then recently we raised it to 40M.
> > But I would think that the overwhelming majority of our artifacts 
> > are 10M or less.
> >
> > Is there a way to increase the logging level?  Currently, the logs 
> > don't show any indication of what it is grinding away on.  After the 
> > startup stuff, there really isn't anything in archiva.log except for 
> > some Authorization Denied messages -- but these have been occurring 
> > for months and months, I don't think they are related to the 100% 
> > CPU issue that just started up about a week ago.
> >
> >
> >
> >
> >
>
> --
> Brett Porter
> brett@apache.org
> http://brettporter.wordpress.com/
> http://au.linkedin.com/in/brettporter
>
>
>
>
>
>
>
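For triaging a dump as long as the one posted above, a quick tally of thread states separates the few runnable threads from the parked pool workers. A self-contained sketch (the file name and two-thread sample are illustrative, standing in for a saved copy of the real dump):

```shell
# Build a tiny two-thread sample in place of a real saved dump:
cat > /tmp/sample-dump.txt <<'EOF'
"worker-1" prio=10 runnable
   java.lang.Thread.State: RUNNABLE
"worker-2" prio=10 in Object.wait()
   java.lang.Thread.State: WAITING (on object monitor)
EOF
# Tally states. On the real dump this shows most threads parked
# (WAITING/TIMED_WAITING pool workers) with only a few RUNNABLE:
# the HTTP readers, the acceptor, "main", and the index packer.
grep -o 'java.lang.Thread.State: [A-Z_]*' /tmp/sample-dump.txt | sort | uniq -c
# -> 1 RUNNABLE, 1 WAITING for the sample
```
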


RE: 100% CPU in Archiva 1.3.5

Posted by Olivier Lamy <ol...@apache.org>.
Maybe a thread dump could help us to debug.
Any chance you provide one ?
Did you have time to test with 1.4 m1 ?
--
Olivier
Le 1 nov. 2011 15:43, "Stallard,David" <st...@oclc.org> a écrit :

> We do have a fairly large number of Continuous Integration builds that
> can trigger many times per day, each build uploading new snapshots.  It
> sounds like that could be a problem based on your second point below.
> However, that structure has been in place for well over a year and we've
> only had this CPU problem for 2 weeks.  Maybe we just happened to cross
> some threshold that has made it more problematic?
>
> I'll look into reducing the number of scans and keeping them to off-peak
> hours.
>
> -----Original Message-----
> From: Brett Porter [mailto:brett@porterclan.net] On Behalf Of Brett
> Porter
> Sent: Monday, October 31, 2011 6:34 PM
> To: users@archiva.apache.org
> Subject: Re: 100% CPU in Archiva 1.3.5
>
> Top replying with a few points:
>
> - artifact upload limits are only via the web UI, maven deployments can
> push artifacts as large as needed.
> - individual artifacts of that size shouldn't be a big problem (it's a
> one-off hit), but regularly updating snapshots of that size will cause
> the load to build up
> - the scan time below is quite long, particularly for the purge. You
> might want to push the scanning schedule out to an "off peak" time - the
> purge doesn't need to run that often, and most operations are done
> on-demand with the scan just filling in any gaps.
>
> HTH,
> Brett
>
> On 01/11/2011, at 2:15 AM, Stallard,David wrote:
>
> > I'm not sure if this is useful, but here are the summaries of the most
> > recent hourly scans...from archiva.log:
> >
> >
> > .\ Scan of internal \.__________________________________________
> >  Repository Dir    : <path removed>/internal
> >  Repository Name   : Archiva Managed Internal Repository
> >  Repository Layout : default
> >  Known Consumers   : (7 configured)
> >                      repository-purge (Total: 58857ms; Avg.: 1; Count:
> > 58702)
> >                      metadata-updater (Total: 419ms; Avg.: 209; Count:
> > 2)
> >                      auto-remove
> >                      auto-rename
> >                      update-db-artifact (Total: 98ms; Avg.: 49; Count:
> > 2)
> >                      create-missing-checksums (Total: 120ms; Avg.: 60;
> > Count: 2)
> >                      index-content (Total: 0ms; Avg.: 0; Count: 7)
> > Invalid Consumers : <none>
> >  Duration          : 2 Minutes 56 Seconds 896 Milliseconds
> >  When Gathered     : 10/31/11 11:02 AM
> >  Total File Count  : 268305
> >  Avg Time Per File :
> > ______________________________________________________________
> >
> >
> > .\ Scan of snapshots \.__________________________________________
> >  Repository Dir    : <path removed>/snapshots
> >  Repository Name   : Archiva Managed Snapshot Repository
> >  Repository Layout : default
> >  Known Consumers   : (7 configured)
> >                      repository-purge (Total: 325200ms; Avg.: 8; Count:
> > 39805)
> >                      metadata-updater (Total: 5915ms; Avg.: 50; Count:
> > 116)
> >                      auto-remove
> >                      auto-rename
> >                      update-db-artifact (Total: 17211ms; Avg.: 148;
> > Count: 116)
> >                      create-missing-checksums (Total: 15559ms; Avg.:
> > 134; Count: 116)
> >                      index-content (Total: 34ms; Avg.: 0; Count: 475)
> > Invalid Consumers : <none>
> >  Duration          : 7 Minutes 17 Seconds 416 Milliseconds
> >  When Gathered     : 10/31/11 11:10 AM
> >  Total File Count  : 166275
> >  Avg Time Per File : 2 Milliseconds
> > ______________________________________________________________
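The "Avg." column in these summaries appears to be simply Total (ms) divided
by Count, truncated to whole milliseconds; a quick check against the figures
reported above:

```python
# Sanity-check the per-consumer averages in the scan summaries:
# Avg. == Total (ms) // Count, i.e. integer milliseconds per invocation.
consumers = {
    "internal/repository-purge":  (58857, 58702),   # (total_ms, count) -> Avg. 1
    "snapshots/repository-purge": (325200, 39805),  # -> Avg. 8
    "snapshots/metadata-updater": (5915, 116),      # -> Avg. 50
}
for name, (total_ms, count) in consumers.items():
    print(f"{name}: avg {total_ms // count} ms over {count} invocations")
```

The purge consumer dominates both scans, which matches Brett's point that the
purge schedule is the first thing worth pushing out.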
> >
> >
> >
> > -----Original Message-----
> > From: Stallard,David
> > Sent: Monday, October 31, 2011 9:57 AM
> > To: 'users@archiva.apache.org'
> > Subject: RE: 100% CPU in Archiva 1.3.5
> >
> > I need to correct my previous message...it turns out we do have
> > artifacts larger than 40M even though that is the defined maximum; I'm
> > not sure at this point how that is happening.
> >
> > In our internal repository we have 40 artifacts which are over 100M in
> > size, with the largest one being 366M.  In snapshots, we have 61
> > artifacts that are >100M, where the largest is 342M.  I'm not sure how
> > significant these sizes are in terms of the indexer, but wanted to
> > accurately reflect what we're dealing with.
> >
> > -----Original Message-----
> > From: Stallard,David
> > Sent: Monday, October 31, 2011 9:43 AM
> > To: 'users@archiva.apache.org'
> > Subject: RE: 100% CPU in Archiva 1.3.5
> >
> > Brett Porter said:
> > >> It's not unexpected that indexing drives it to 100% CPU momentarily,
> > >> but causing it to become unavailable is unusual.
> > >> How big are the artifacts it is scanning?<<
> >
> > The CPU was still at 100% on Monday morning, so having the weekend to
> > index didn't seem to improve anything; the indexing queue was up to
> > about 3500.  We got a report that downloads from Archiva are extremely
> > slow, so I just bounced it.  CPU was immediately at 100% after the
> > bounce, and the indexing queue is at 6.  I expect that queue to
> > continually rise, based on what I've seen after previous bounces.
> >
> > Our upload maximum size was 10M for the longest time, but we had to
> > raise it to 20M a while back and then recently we raised it to 40M.
> > But I would think that the overwhelming majority of our artifacts are
> > 10M or less.
> >
> > Is there a way to increase the logging level?  Currently, the logs
> > don't show any indication of what it is grinding away on.  After the
> > startup stuff, there really isn't anything in archiva.log except for
> > some Authorization Denied messages -- but these have been occurring
> > for months and months, so I don't think they are related to the 100% CPU
> > issue that started about a week ago.
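On the logging question: Archiva 1.3 uses log4j, so raising the level is a
matter of editing its log4j.xml. The path and category name below are
assumptions to verify against your install, not guaranteed values:

```xml
<!-- apps/archiva/webapp/WEB-INF/classes/log4j.xml (path is a guess for a
     standalone install). DEBUG on the org.apache.archiva category makes
     the scanners/consumers log what they are working on. -->
<category name="org.apache.archiva">
  <priority value="DEBUG"/>
</category>
```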
> >
> --
> Brett Porter
> brett@apache.org
> http://brettporter.wordpress.com/
> http://au.linkedin.com/in/brettporter
>