Posted to dev@tomcat.apache.org by Rainer Jung <ra...@kippdata.de> on 2013/09/20 12:02:18 UTC

Some trunk test suite observations

I ran some sporadically failing tests in a loop under each of the three
connectors on Solaris 10 Sparc using Java 1.7.0_40. The code is trunk
r1524838 (current as of this morning and codewise identical to 8.0.0 RC3).

The system is relatively slow but not *that* slow. OTOH it had some NFS
server duties while I was running the tests.


Overview
========


TestWebappClassLoaderExecutorMemoryLeak
---------------------------------------

bio: fails 2/10
nio: hangs 1/10
apr: fails 3/10

The hang for nio in the leak detection test looks suspect (see below);
the other failures might be non-critical timing issues:

Testcase: testTimerThreadLeak took 5.737 sec
        FAILED
null
junit.framework.AssertionFailedError
        at org.apache.catalina.loader.TestWebappClassLoaderExecutorMemoryLeak.testTimerThreadLeak(TestWebappClassLoaderExecutorMemoryLeak.java:72)

The failed assertion is executorServlet.tpe.isTerminated().
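
For reference, isTerminated() only returns true once every worker has
finished unwinding after shutdownNow(), so checking it immediately after
stopping the context is inherently racy on a slow box. A minimal,
self-contained sketch of the race and a timing-tolerant variant (the
executor setup here is illustrative, not the actual servlet from the test):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class TerminationRace {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService tpe = Executors.newFixedThreadPool(2);
        tpe.submit(new Runnable() {
            public void run() { /* short-lived task */ }
        });
        tpe.shutdownNow();
        // Racy: workers may still be inside processWorkerExit() here,
        // so this can print false even though shutdown is under way.
        System.out.println("immediately: " + tpe.isTerminated());
        // Tolerant: wait for the workers to finish unwinding.
        System.out.println("after await: "
                + tpe.awaitTermination(10, TimeUnit.SECONDS));
    }
}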


TestWsSubprotocols
------------------

bio: fails 3/25
nio: fails 1/25
apr: fails 3/25

Testcase: testWsSubprotocols took 5.834 sec
        Caused an ERROR
null
java.lang.NullPointerException
        at org.apache.tomcat.websocket.TestWsSubprotocols.testWsSubprotocols(TestWsSubprotocols.java:89)

NPEs get logged for some of the failures, see below.


TestWsWebSocketContainer
------------------------

bio: fails 2/15
nio: fails 0/15
apr: fails 0/15

The crash for APR seems fixed (or very rare) :)
Details of the bio failures are below.


TestCoyoteAdapter
-----------------

bio: 0/10
nio: 0/10
apr: 0/10

So yesterday's single failure here was a rare case.



Details
=======

TestWebappClassLoaderExecutorMemoryLeak
---------------------------------------

Concerning the single hang under nio:

- pool threads 2 and 5 still exist but are idle:

"pool-1-thread-5" prio=3 tid=0x008c4000 nid=0x19 waiting on condition
[0xb397f000]

   java.lang.Thread.State: WAITING (parking)
        at sun.misc.Unsafe.park(Native Method)
        - parking to wait for  <0xe6f0aed0> (a
java.util.concurrent.locks.ReentrantLock$NonfairSync)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
        at
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:834)
        at
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:867)
        at
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1197)
        at
java.util.concurrent.locks.ReentrantLock$NonfairSync.lock(ReentrantLock.java:214)
        at
java.util.concurrent.locks.ReentrantLock.lock(ReentrantLock.java:290)
        at
java.util.concurrent.ThreadPoolExecutor.processWorkerExit(ThreadPoolExecutor.java:998)
        at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1163)
        at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:724)

"pool-1-thread-2" prio=3 tid=0x00864400 nid=0x16 waiting on condition
[0xb3c7f000]
   java.lang.Thread.State: WAITING (parking)
        at sun.misc.Unsafe.park(Native Method)
        - parking to wait for  <0xe6f0aed0> (a
java.util.concurrent.locks.ReentrantLock$NonfairSync)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
        at
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:834)
        at
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:867)
        at
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1197)
        at
java.util.concurrent.locks.ReentrantLock$NonfairSync.lock(ReentrantLock.java:214)
        at
java.util.concurrent.locks.ReentrantLock.lock(ReentrantLock.java:290)
        at
java.util.concurrent.ThreadPoolExecutor.processWorkerExit(ThreadPoolExecutor.java:998)
        at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1163)
        at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:724)


- the main thread is waiting for the same lock as the pool threads (same
address):

"main" prio=3 tid=0x00029800 nid=0x2 waiting on condition [0xfdf7d000]
   java.lang.Thread.State: WAITING (parking)
        at sun.misc.Unsafe.park(Native Method)
        - parking to wait for  <0xe6f0aed0> (a
java.util.concurrent.locks.ReentrantLock$NonfairSync)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
        at
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:834)
        at
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:867)
        at
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1197)
        at
java.util.concurrent.locks.ReentrantLock$NonfairSync.lock(ReentrantLock.java:214)
        at
java.util.concurrent.locks.ReentrantLock.lock(ReentrantLock.java:290)
        at
java.util.concurrent.ThreadPoolExecutor.shutdownNow(ThreadPoolExecutor.java:1420)
        at
org.apache.catalina.loader.WebappClassLoader.clearReferencesThreads(WebappClassLoader.java:2046)
        at
org.apache.catalina.loader.WebappClassLoader.clearReferences(WebappClassLoader.java:1722)
        at
org.apache.catalina.loader.WebappClassLoader.stop(WebappClassLoader.java:1637)
        at
org.apache.catalina.loader.WebappLoader.stopInternal(WebappLoader.java:491)
        at
org.apache.catalina.util.LifecycleBase.stop(LifecycleBase.java:232)
        - locked <0xe63786e8> (a org.apache.catalina.loader.WebappLoader)
        at
org.apache.catalina.core.StandardContext.stopInternal(StandardContext.java:5541)
        - locked <0xe7201150> (a org.apache.catalina.core.StandardContext)
        at
org.apache.catalina.util.LifecycleBase.stop(LifecycleBase.java:232)
        - locked <0xe7201150> (a org.apache.catalina.core.StandardContext)
        at
org.apache.catalina.loader.TestWebappClassLoaderExecutorMemoryLeak.testTimerThreadLeak(TestWebappClassLoaderExecutorMemoryLeak.java:62)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
        at
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
        at
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
        at
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
        at
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
        at
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
        at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
        at
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
        at
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)

...


TestWsSubprotocols
------------------

Sometimes nothing interesting in the output, sometimes NPEs:

...

    [junit] 20-Sep-2013 09:46:01.051 SEVERE [http-apr-127.0.0.1-auto-1-exec-2] org.apache.tomcat.websocket.server.WsHttpUpgradeHandler.destroy Failed to close WebConnection while destroying the WebSocket HttpUpgradeHandler
    [junit]  java.lang.NullPointerException
    [junit]     at org.apache.tomcat.websocket.server.WsHttpUpgradeHandler.destroy(WsHttpUpgradeHandler.java:143)
    [junit]     at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:708)
    [junit]     at org.apache.coyote.http11.Http11AprProtocol$Http11ConnectionHandler.process(Http11AprProtocol.java:282)
    [junit]     at org.apache.tomcat.util.net.AprEndpoint$SocketProcessor.doRun(AprEndpoint.java:2289)
    [junit]     at org.apache.tomcat.util.net.AprEndpoint$SocketProcessor.run(AprEndpoint.java:2278)
    [junit]     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    [junit]     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    [junit]     at java.lang.Thread.run(Thread.java:724)
    [junit]
    [junit] 20-Sep-2013 09:46:01.055 SEVERE [http-apr-127.0.0.1-auto-1-exec-4] org.apache.tomcat.websocket.server.WsHttpUpgradeHandler.destroy Failed to close WebConnection while destroying the WebSocket HttpUpgradeHandler
    [junit]  java.lang.NullPointerException
    [junit]     at org.apache.tomcat.websocket.server.WsHttpUpgradeHandler.destroy(WsHttpUpgradeHandler.java:143)
    [junit]     at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:708)
    [junit]     at org.apache.coyote.http11.Http11AprProtocol$Http11ConnectionHandler.process(Http11AprProtocol.java:282)
    [junit]     at org.apache.tomcat.util.net.AprEndpoint$SocketProcessor.doRun(AprEndpoint.java:2289)
    [junit]     at org.apache.tomcat.util.net.AprEndpoint$SocketProcessor.run(AprEndpoint.java:2278)
    [junit]     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    [junit]     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    [junit]     at java.lang.Thread.run(Thread.java:724)
    [junit]
    [junit] 20-Sep-2013 09:46:01.200 INFO [main] org.apache.catalina.core.StandardService.stopInternal Stopping service Tomcat

...



TestWsWebSocketContainer
------------------------

...
Testcase: testSmallTextBufferClientTextMessage took 0.237 sec
        Caused an ERROR
java.util.concurrent.ExecutionException: java.io.IOException: Unable to write the complete message as the WebSocket connection has been closed
java.io.IOException: java.util.concurrent.ExecutionException: java.io.IOException: Unable to write the complete message as the WebSocket connection has been closed
        at org.apache.tomcat.websocket.WsRemoteEndpointImplBase.sendPartialString(WsRemoteEndpointImplBase.java:205)
        at org.apache.tomcat.websocket.WsRemoteEndpointImplBase.sendString(WsRemoteEndpointImplBase.java:154)
        at org.apache.tomcat.websocket.WsRemoteEndpointBasic.sendText(WsRemoteEndpointBasic.java:37)
        at org.apache.tomcat.websocket.TestWsWebSocketContainer.doBufferTest(TestWsWebSocketContainer.java:256)
        at org.apache.tomcat.websocket.TestWsWebSocketContainer.testSmallTextBufferClientTextMessage(TestWsWebSocketContainer.java:155)
Caused by: java.util.concurrent.ExecutionException: java.io.IOException: Unable to write the complete message as the WebSocket connection has been closed
        at org.apache.tomcat.websocket.FutureToSendHandler.get(FutureToSendHandler.java:102)
        at org.apache.tomcat.websocket.WsRemoteEndpointImplBase.sendPartialString(WsRemoteEndpointImplBase.java:201)
Caused by: java.io.IOException: Unable to write the complete message as the WebSocket connection has been closed
        at org.apache.tomcat.websocket.WsSession.doClose(WsSession.java:421)
        at org.apache.tomcat.websocket.WsSession.close(WsSession.java:392)
        at org.apache.tomcat.websocket.WsFrameClient.close(WsFrameClient.java:82)
        at org.apache.tomcat.websocket.WsFrameClient.access$300(WsFrameClient.java:26)
        at org.apache.tomcat.websocket.WsFrameClient$WsFrameClientCompletionHandler.completed(WsFrameClient.java:105)
        at org.apache.tomcat.websocket.WsFrameClient$WsFrameClientCompletionHandler.completed(WsFrameClient.java:96)
        at sun.nio.ch.Invoker.invokeUnchecked(Invoker.java:126)
        at sun.nio.ch.Invoker$2.run(Invoker.java:206)
        at sun.nio.ch.AsynchronousChannelGroupImpl$1.run(AsynchronousChannelGroupImpl.java:112)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:724)

Testcase: testConnectToServerEndpointInvalidScheme took 0.111 sec
...


Regards,

Rainer



Re: Some trunk test suite observations

Posted by Rainer Jung <ra...@kippdata.de>.
On 20.09.2013 14:46, Mark Thomas wrote:
> On 20/09/2013 11:02, Rainer Jung wrote:
>> I ran some sporadically failing tests in a loop under each of the three
>> connectors on Solaris 10 Sparc using Java 1.7.0_40. The code is trunk
>> r1524838 (current as of this morning and codewise identical to 8.0.0 RC3).
> 
> Great. This is really helpful. If you could repeat the tests with the
> various fixes in place to see if things have got any better, that would
> be great.

I ran the tests listed in the original mail again for bio, nio and apr,
50 times each. This time the system was idle during the test runs.

Only two of the tests I mentioned still fail occasionally:

TestWebappClassLoaderExecutorMemoryLeak: About 6-10 out of 50 runs
failed. In addition I again had one test run hanging during stop, this
time for the apr connector, not the nio one.

TestCoyoteAdapter: only failures with apr, two runs out of 50. Details
below.

I will investigate and report back. See below for some detail on the
hang.

Thanks for fixing the other ones and improving this one!

>> The hang for nio in the leak detection test looks suspect (see below);
>> the other failures might be non-critical timing issues:
> 
> Looking at the code, timing issues in the tests look very likely. I've
> committed a fix for the timing issues.
> 
> Regarding the loop, any ideas on the root cause?

I'll investigate, but the hang is not a loop. The main thread is waiting
for the main ThreadPoolExecutor lock of the timer pool while trying to
shut it down to prevent the memory leak:

"main" prio=3 tid=0x00029800 nid=0x2 waiting on condition [0xfdf7d000]
   java.lang.Thread.State: WAITING (parking)
        at sun.misc.Unsafe.park(Native Method)
        - parking to wait for  <0xe6db92b8> (a
java.util.concurrent.locks.ReentrantLock$NonfairSync)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
        at
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:834)
        at
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:867)
        at
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1197)
        at
java.util.concurrent.locks.ReentrantLock$NonfairSync.lock(ReentrantLock.java:214)
        at
java.util.concurrent.locks.ReentrantLock.lock(ReentrantLock.java:290)
        at
java.util.concurrent.ThreadPoolExecutor.shutdownNow(ThreadPoolExecutor.java:1420)
        at
org.apache.catalina.loader.WebappClassLoader.clearReferencesThreads(WebappClassLoader.java:2046)
        at
org.apache.catalina.loader.WebappClassLoader.clearReferences(WebappClassLoader.java:1722)
        at
org.apache.catalina.loader.WebappClassLoader.stop(WebappClassLoader.java:1637)
        at
org.apache.catalina.loader.WebappLoader.stopInternal(WebappLoader.java:491)
        at
org.apache.catalina.util.LifecycleBase.stop(LifecycleBase.java:232)

...

and three of the timer threads wait for it as well:

"pool-1-thread-1" prio=3 tid=0x00944800 nid=0x16 waiting on condition
[0xb3a7f000]
   java.lang.Thread.State: WAITING (parking)
        at sun.misc.Unsafe.park(Native Method)
        - parking to wait for  <0xe6db92b8> (a
java.util.concurrent.locks.ReentrantLock$NonfairSync)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
        at
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:834)
        at
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:867)
        at
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1197)
        at
java.util.concurrent.locks.ReentrantLock$NonfairSync.lock(ReentrantLock.java:214)
        at
java.util.concurrent.locks.ReentrantLock.lock(ReentrantLock.java:290)
        at
java.util.concurrent.ThreadPoolExecutor.processWorkerExit(ThreadPoolExecutor.java:998)
        at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1163)
        at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:724)

...

There is no indication of what holds the lock or why it wasn't freed
after use. The source code of ThreadPoolExecutor in JDK 1.7.0_40 shows
no path that leaves a method without releasing the lock; it is always
locked and then unlocked in a finally clause. Yet no thread in the dump
sits in a block that should hold the lock. I'll do some hunting,
probably by also instrumenting ThreadPoolExecutor, ReentrantLock and/or
AbstractQueuedSynchronizer.
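
Before patching java.util.concurrent, it might be cheaper to ask the JVM
who owns the synchronizer: ReentrantLock extends
AbstractOwnableSynchronizer, so a dump taken with locked-synchronizer
reporting enabled ("jstack -l", or programmatically via ThreadMXBean)
lists the owning thread next to each lock address. A minimal sketch of
the programmatic variant, which a watchdog thread could run once the
test looks hung:

import java.lang.management.LockInfo;
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class LockOwnerDump {
    public static void dumpOwnedSynchronizers() {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        if (!mx.isSynchronizerUsageSupported()) {
            System.err.println("JVM cannot report ownable synchronizers");
            return;
        }
        // lockedMonitors=true, lockedSynchronizers=true: the second flag
        // makes ReentrantLock ownership show up per thread.
        for (ThreadInfo ti : mx.dumpAllThreads(true, true)) {
            for (LockInfo li : ti.getLockedSynchronizers()) {
                System.out.println(ti.getThreadName() + " owns " + li);
            }
        }
    }
}

If the hang reproduces and no thread shows up as the owner, that would
point at a lost unlock rather than a thread stuck while holding the lock.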

Concerning TestCoyoteAdapter:

...
21-Sep-2013 19:43:06.801 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["http-apr-127.0.0.1-auto-13-33900"]
21-Sep-2013 19:43:16.890 SEVERE [http-apr-127.0.0.1-auto-13-Poller] org.apache.tomcat.util.net.AprEndpoint$Poller.run Poller failed with error [81] : [File descriptor in bad state]
21-Sep-2013 19:43:26.900 INFO [main] org.apache.coyote.AbstractProtocol.pause Pausing ProtocolHandler ["http-apr-127.0.0.1-auto-13-33900"]
...

...
Testcase: testPathParamsRedirect took 21.482 sec
        Caused an ERROR
Unexpected end of file from server
java.net.SocketException: Unexpected end of file from server
        at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:772)
        at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:633)
        at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:769)
        at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:633)
        at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1323)
        at java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:468)
        at org.apache.catalina.startup.TomcatBaseTest.methodUrl(TomcatBaseTest.java:247)
        at org.apache.catalina.startup.TomcatBaseTest.getUrl(TomcatBaseTest.java:219)
        at org.apache.catalina.startup.TomcatBaseTest.getUrl(TomcatBaseTest.java:213)
        at org.apache.catalina.startup.TomcatBaseTest.getUrl(TomcatBaseTest.java:202)
        at org.apache.catalina.startup.TomcatBaseTest.getUrl(TomcatBaseTest.java:196)
        at org.apache.catalina.connector.TestCoyoteAdapter.testPath(TestCoyoteAdapter.java:137)
        at org.apache.catalina.connector.TestCoyoteAdapter.testPathParamsRedirect(TestCoyoteAdapter.java:116)

Testcase: testPathParmsFooSessionDummyValue took 1.32 sec
...

The SEVERE message is specific to the failing runs, as is the duration
above 20 seconds. The distribution of durations (in seconds) over the
150 test runs was:
Count Duration
  74 0
  23 1
   3 2
  50 3
   1 20
   1 21

So all test runs took less than 4 seconds except for the two failing
ones. I'll let that test run longer and recheck whether the failures
only happen for apr.

Regards,

Rainer



Re: Some trunk test suite observations

Posted by Mark Thomas <ma...@apache.org>.
On 20/09/2013 11:02, Rainer Jung wrote:
> I ran some sporadically failing tests in a loop under each of the three
> connectors on Solaris 10 Sparc using Java 1.7.0_40. The code is trunk
> r1524838 (current as of this morning and codewise identical to 8.0.0 RC3).

Great. This is really helpful. If you could repeat the tests with the
various fixes in place to see if things have got any better, that would
be great.

> The system is relatively slow but not *that* slow. OTOH it had some NFS
> server duties while I was running the tests.
> 
> 
> Overview
> ========
> 
> 
> TestWebappClassLoaderExecutorMemoryLeak
> ---------------------------------------
> 
> bio: fails 2/10
> nio: hangs 1/10
> apr: fails 3/10
> 
> The hang for nio in the leak detection test looks suspect (see below);
> the other failures might be non-critical timing issues:

Looking at the code, timing issues in the tests look very likely. I've
committed a fix for the timing issues.

Regarding the loop, any ideas on the root cause?

> TestWsSubprotocols
> ------------------
> 
> bio: fails 3/25
> nio: fails 1/25
> apr: fails 3/25
> 
> Testcase: testWsSubprotocols took 5.834 sec
>         Caused an ERROR
> null
> java.lang.NullPointerException
>         at org.apache.tomcat.websocket.TestWsSubprotocols.testWsSubprotocols(TestWsSubprotocols.java:89)
> 
> NPEs get logged for some of the failures, see below.

I think this is another timing error that occurs if the client moves
faster than the server. I can replicate this failure with a breakpoint
on the line that sets the static subprotocols field. I've committed a
fix for this as well.
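
For illustration, one way to close that window is to have the server
endpoint signal when it has recorded the negotiated subprotocols instead
of letting the test read a static field that may not have been set yet.
A sketch under that assumption (field and method names are illustrative,
not the committed fix):

import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class SubprotocolSync {
    private static volatile List<String> subprotocols;
    private static final CountDownLatch SET = new CountDownLatch(1);

    // Called from the server endpoint once negotiation has happened.
    static void record(List<String> negotiated) {
        subprotocols = negotiated;
        SET.countDown();
    }

    // Called by the test: blocks until the server side has caught up.
    static List<String> awaitSubprotocols() throws InterruptedException {
        if (!SET.await(5, TimeUnit.SECONDS)) {
            throw new IllegalStateException("server never recorded subprotocols");
        }
        return subprotocols;
    }
}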

> TestWsWebSocketContainer
> ------------------------
> 
> bio: fails 2/15
> nio: fails 0/15
> apr: fails 0/15
> 
> The crash for APR seems fixed (or very rare) :)
> Details of the bio failures are below.

I'm going to assume that the crash is fixed for now.

I can see how the BIO errors occur. If the session is closed before the
message is fully written to the buffers, there will be an IOE and the
test doesn't handle that. I've fixed this in svn too.
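
A sketch of what tolerating that race could look like in the test: treat
an IOException from the send as an acceptable outcome whenever the server
is expected to close the connection mid-message (the helper shown is
illustrative, not the committed fix):

import java.io.IOException;
import javax.websocket.RemoteEndpoint;

public class CloseDuringSend {
    // Returns true if the connection was closed before the client finished
    // writing, which is an expected outcome for the BIO small-buffer tests.
    static boolean sendExpectingPossibleClose(RemoteEndpoint.Basic remote,
            String msg) {
        try {
            remote.sendText(msg);
            return false; // the whole message went out before the close
        } catch (IOException ioe) {
            return true;  // the close raced the send; not a test failure
        }
    }
}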


> TestCoyoteAdapter
> -----------------
> 
> bio: 0/10
> nio: 0/10
> apr: 0/10
> 
> So yesterday's single failure here was a rare case.

OK. Personally, I'm prepared to accept the odd rare case.

Mark
