Posted to dev@tomcat.apache.org by Vicenç Beltran <vb...@ac.upc.edu> on 2005/05/20 11:51:13 UTC

Hybrid (NIO+Multithread, SSL enabled) architecture for Coyote

Hi, 

Attached you'll find a patch that changes the Coyote multithreading
model to a "hybrid" threading model (NIO+Multithread). It is fully
compatible with the existing Catalina code and is SSL enabled.

The Hybrid model breaks the limitation of one thread per connection,
so a higher number of concurrent users can be served with a lower
number of threads.
NIO selectors are used to detect when a user connection becomes
active (i.e., an HTTP request is available to be read); one thread
then processes the connection as usual, but without blocking on the
read() operation, because we know that a request is available.
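In rough outline, the selector side of the model works like the sketch
below (a minimal, standalone illustration, not the patch code itself;
handOffToWorker() is a hypothetical stand-in for the thread-pool
hand-off):

import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class SelectorSketch {

    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.socket().bind(new InetSocketAddress(8080));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        while (true) {
            selector.select();                // block until a connection is active
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();
                if (key.isAcceptable()) {
                    // New connection: watch it for readable requests
                    SocketChannel client = server.accept();
                    if (client != null) {
                        client.configureBlocking(false);
                        client.register(selector, SelectionKey.OP_READ);
                    }
                } else if (key.isReadable()) {
                    // A request is known to be available: stop selecting on
                    // this connection and let a pool thread run the usual
                    // processing, which will not block on read().
                    key.cancel();
                    handOffToWorker((SocketChannel) key.channel());
                }
            }
        }
    }

    // Hypothetical stand-in: in the patch, a ThreadPool thread processes the
    // connection and re-registers it with the selector afterwards, so idle
    // keep-alive connections cost no thread.
    static void handOffToWorker(SocketChannel client) {
    }
}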


The Hybrid model eliminates the need to close inactive connections
(especially important under high load or SSL load) and reduces the
number of necessary threads.


The patch will also be downloadable shortly from
http://www.bsc.es/edragon/. Next week I will make available a
performance comparison between Tomcat 5.5.9 and the modified Tomcat
(static content, dynamic content, secure dynamic content, and scalability
on SMP machines). I'm testing it with RUBiS, Surge, and httperf.


I am now working on improving the admission control mechanism: the
number of threads no longer limits the number of concurrent connections,
so the connection count needs to be limited in some other way.
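For illustration only, the simplest such gate would look something like
the sketch below (my own assumption of how it could be shaped; the patch
instead bounds how many pre-accepted sockets each select round may admit,
based on the maxActiveRequests estimate):

import java.util.concurrent.Semaphore;

// Minimal sketch of an admission-control gate: cap the number of
// admitted connections independently of the number of worker threads.
public class AdmissionGate {

    private final Semaphore permits;

    public AdmissionGate(int maxActiveConnections) {
        permits = new Semaphore(maxActiveConnections);
    }

    // Called before accepting a new connection; false means defer or reject.
    public boolean tryAdmit() {
        return permits.tryAcquire();
    }

    // Called when an admitted connection is finally closed.
    public void release() {
        permits.release();
    }
}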



Best Regards, 

Vicenç Beltran 

eDragon Research Group
Barcelona Supercomputing Center (BSC)
http://www.bsc.es/edragon



===================================================================
diff -uprN jakarta-tomcat-5.5.9-src/jakarta-tomcat-catalina/catalina/src/conf/server.xml jakarta-tomcat-5.5.9-src+STWS/jakarta-tomcat-catalina/catalina/src/conf/server.xml
--- jakarta-tomcat-5.5.9-src/jakarta-tomcat-catalina/catalina/src/conf/server.xml	Sat Mar 26 20:23:58 2005
+++ jakarta-tomcat-5.5.9-src+STWS/jakarta-tomcat-catalina/catalina/src/conf/server.xml	Thu May 19 18:58:52 2005
@@ -73,10 +73,10 @@
     -->
 
     <!-- Define a non-SSL HTTP/1.1 Connector on port 8080 -->
-    <Connector port="8080" maxHttpHeaderSize="8192"
-               maxThreads="150" minSpareThreads="25"
maxSpareThreads="75"
+    <Connector port="8080" maxHttpHeaderSize="8192"
maxActiveRequest="100"
+               maxThreads="40" minSpareThreads="10"
maxSpareThreads="10"
                enableLookups="false" redirectPort="8443"
acceptCount="100"
-               connectionTimeout="20000" disableUploadTimeout="true" />
+               connectionTimeout="0" disableUploadTimeout="true" />
     <!-- Note : To disable connection timeouts, set connectionTimeout
value
      to 0 -->
 	
@@ -90,11 +90,11 @@
 
     <!-- Define a SSL HTTP/1.1 Connector on port 8443 -->
     <!--
-    <Connector port="8443" maxHttpHeaderSize="8192"
-               maxThreads="150" minSpareThreads="25"
maxSpareThreads="75"
+    <Connector port="8443" maxHttpHeaderSize="8192"
maxActiveRequest="100"
+               maxThreads="40" minSpareThreads="10"
maxSpareThreads="10"
                enableLookups="false" disableUploadTimeout="true"
                acceptCount="100" scheme="https" secure="true"
-               clientAuth="false" sslProtocol="TLS" />
+               connectionTimeout="0" clientAuth="false"
sslProtocol="TLS" />
     -->
 
     <!-- Define an AJP 1.3 Connector on port 8009 -->
diff -uprN jakarta-tomcat-5.5.9-src/jakarta-tomcat-catalina/catalina/src/share/org/apache/catalina/connector/mbeans-descriptors.xml jakarta-tomcat-5.5.9-src+STWS/jakarta-tomcat-catalina/catalina/src/share/org/apache/catalina/connector/mbeans-descriptors.xml
--- jakarta-tomcat-5.5.9-src/jakarta-tomcat-catalina/catalina/src/share/org/apache/catalina/connector/mbeans-descriptors.xml	Sat Mar 26 20:23:59 2005
+++ jakarta-tomcat-5.5.9-src+STWS/jakarta-tomcat-catalina/catalina/src/share/org/apache/catalina/connector/mbeans-descriptors.xml	Thu May 19 12:29:07 2005
@@ -8,6 +8,12 @@
                 group="Connector"
                  type="org.apache.catalina.connector.Connector">
 
+    <attribute   name="maxActiveRequests"
+          description="Maximum number of active requests"
+                 type="int"
+	     readable="true"
+	    writeable="true"/>
+
     <attribute   name="acceptCount"
           description="The accept count for this Connector"
                  type="int"/>
diff -uprN jakarta-tomcat-5.5.9-src/jakarta-tomcat-catalina/webapps/docs/config/http.xml jakarta-tomcat-5.5.9-src+STWS/jakarta-tomcat-catalina/webapps/docs/config/http.xml
--- jakarta-tomcat-5.5.9-src/jakarta-tomcat-catalina/webapps/docs/config/http.xml	Sat Mar 26 20:24:09 2005
+++ jakarta-tomcat-5.5.9-src+STWS/jakarta-tomcat-catalina/webapps/docs/config/http.xml	Thu May 19 12:28:17 2005
@@ -57,6 +57,11 @@
 
   <attributes>
 
+    <attribute name="maxActiveRequests" required="false">
+      <p>A integer value which can be used to adjust Tomcat response
time
+	 under high load.</p>
+    </attribute>
+
     <attribute name="allowTrace" required="false">
       <p>A boolean value which can be used to enable or disable the
TRACE
       HTTP method. If not specified, this attribute is set to
false.</p>
diff -uprN jakarta-tomcat-5.5.9-src/jakarta-tomcat-connectors/http11/src/java/org/apache/coyote/http11/Http11Processor.java jakarta-tomcat-5.5.9-src+STWS/jakarta-tomcat-connectors/http11/src/java/org/apache/coyote/http11/Http11Processor.java
--- jakarta-tomcat-5.5.9-src/jakarta-tomcat-connectors/http11/src/java/org/apache/coyote/http11/Http11Processor.java	Sat Mar 26 20:24:10 2005
+++ jakarta-tomcat-5.5.9-src+STWS/jakarta-tomcat-connectors/http11/src/java/org/apache/coyote/http11/Http11Processor.java	Thu May 19 19:03:10 2005
@@ -17,6 +17,7 @@
 package org.apache.coyote.http11;
 
 import java.io.IOException;
+import java.net.SocketTimeoutException;
 import java.io.InputStream;
 import java.io.InterruptedIOException;
 import java.io.OutputStream;
@@ -774,7 +775,7 @@ public class Http11Processor implements 
         // Error flag
         error = false;
         keepAlive = true;
-
+/*
         int keepAliveLeft = maxKeepAliveRequests;
         int soTimeout = socket.getSoTimeout();
         int oldSoTimeout = soTimeout;
@@ -805,25 +806,33 @@ public class Http11Processor implements 
                 error = true;
             }
         }
+*/
 
         boolean keptAlive = false;
 
-        while (started && !error && keepAlive) {
+	boolean available = true;
+
+        while (started && available && !error && keepAlive) {
 
-            // Parsing the request header
+  	    // Parsing the request header
             try {
-                if( !disableUploadTimeout && keptAlive && soTimeout > 0 ) {
+/*                if( !disableUploadTimeout && keptAlive && soTimeout > 0 ) {
                     socket.setSoTimeout(soTimeout);
                 }
+*/		socket.setSoTimeout(10);
                 inputBuffer.parseRequestLine();
+
                 request.setStartTime(System.currentTimeMillis());
                 thrA.setParam( threadPool, request.requestURI() );
                 keptAlive = true;
                 if (!disableUploadTimeout) {
                     socket.setSoTimeout(timeout);
-                }
+                } else socket.setSoTimeout(0);
+
                 inputBuffer.parseHeaders();
-            } catch (IOException e) {
+	    } catch(SocketTimeoutException ste){
+                break;
+	    } catch (IOException e) {
                 error = true;
                 break;
             } catch (Throwable t) {
@@ -845,8 +854,8 @@ public class Http11Processor implements 
                 error = true;
             }
 
-            if (maxKeepAliveRequests > 0 && --keepAliveLeft == 0)
-                keepAlive = false;
+//            if (maxKeepAliveRequests > 0 && --keepAliveLeft == 0)
+//                keepAlive = false;
 
             // Process the request in the adapter
             if (!error) {
@@ -914,9 +923,22 @@ public class Http11Processor implements 
             // Next request
             inputBuffer.nextRequest();
             outputBuffer.nextRequest();
-
+		
+	    available = inputBuffer.available();
         }
 
+// eDragon
+
+
+	if(error || !keepAlive){
+		try{
+			socket.close();
+		} catch (Exception e) { 
+			e.printStackTrace(); 
+		}
+	} 
+
+
         rp.setStage(org.apache.coyote.Constants.STAGE_ENDED);
 
         // Recycle
diff -uprN jakarta-tomcat-5.5.9-src/jakarta-tomcat-connectors/http11/src/java/org/apache/coyote/http11/Http11Protocol.java jakarta-tomcat-5.5.9-src+STWS/jakarta-tomcat-connectors/http11/src/java/org/apache/coyote/http11/Http11Protocol.java
--- jakarta-tomcat-5.5.9-src/jakarta-tomcat-connectors/http11/src/java/org/apache/coyote/http11/Http11Protocol.java	Sat Mar 26 20:24:10 2005
+++ jakarta-tomcat-5.5.9-src+STWS/jakarta-tomcat-connectors/http11/src/java/org/apache/coyote/http11/Http11Protocol.java	Thu May 19 12:23:03 2005
@@ -239,7 +239,7 @@ public class Http11Protocol implements P
     private String reportedname;
     private int socketCloseDelay=-1;
     private boolean disableUploadTimeout = true;
-    private int socketBuffer = 9000;
+    private int socketBuffer = 8192; // The famous bug: NIO+SSL+(buff>8192) => BUG!!!
     private Adapter adapter;
     private Http11ConnectionHandler cHandler;
 
@@ -283,6 +283,15 @@ public class Http11Protocol implements P
         setAttribute("minSpareThreads", "" + minSpareThreads);
     }
 
+/* JMX */
+    public void setMaxActiveRequests(int MaxActiveRequests) {
+	ep.setMaxActiveRequests(MaxActiveRequests);
+    }
+
+    public int getMaxActiveRequests(){	
+	return ep.getMaxActiveRequests();
+    }
+
     public void setThreadPriority(int threadPriority) {
       ep.setThreadPriority(threadPriority);
       setAttribute("threadPriority", "" + threadPriority);
@@ -749,13 +758,15 @@ public class Http11Protocol implements P
                // type of error, provide a configurable delay to give the
                // unread input time to arrive so it can be successfully read
                 // and discarded by shutdownInput().
+/* eDragon
                 if( proto.socketCloseDelay >= 0 ) {
                     try {
                         Thread.sleep(proto.socketCloseDelay);
-                    } catch (InterruptedException ie) { /* ignore */ }
+                    } catch (InterruptedException ie) {  }
                 }
 
                 TcpConnection.shutdownInput( socket );
+*/
             } catch(java.net.SocketException e) {
                 // SocketExceptions are normal
                 Http11Protocol.log.debug
@@ -783,9 +794,11 @@ public class Http11Protocol implements P
                 if (processor instanceof ActionHook) {
                     ((ActionHook) processor).action(ActionCode.ACTION_STOP, null);
                 }
+/* eDragon
                 // recycle kernel sockets ASAP
                 try { if (socket != null) socket.close (); }
-                catch (IOException e) { /* ignore */ }
+                catch (IOException e) {  }
+*/
             }
         }
     }
diff -uprN jakarta-tomcat-5.5.9-src/jakarta-tomcat-connectors/http11/src/java/org/apache/coyote/http11/InternalInputBuffer.java jakarta-tomcat-5.5.9-src+STWS/jakarta-tomcat-connectors/http11/src/java/org/apache/coyote/http11/InternalInputBuffer.java
--- jakarta-tomcat-5.5.9-src/jakarta-tomcat-connectors/http11/src/java/org/apache/coyote/http11/InternalInputBuffer.java	Sat Mar 26 20:24:10 2005
+++ jakarta-tomcat-5.5.9-src+STWS/jakarta-tomcat-connectors/http11/src/java/org/apache/coyote/http11/InternalInputBuffer.java	Wed May 18 10:47:47 2005
@@ -371,6 +371,9 @@ public class InternalInputBuffer impleme
 
     }
 
+    public boolean available(){
+	return pos < lastValid;
+    }
 
     /**
      * Read the request line. This function is meant to be used during the 
diff -uprN jakarta-tomcat-5.5.9-src/jakarta-tomcat-connectors/util/java/org/apache/tomcat/util/net/DefaultServerSocketFactory.java jakarta-tomcat-5.5.9-src+STWS/jakarta-tomcat-connectors/util/java/org/apache/tomcat/util/net/DefaultServerSocketFactory.java
--- jakarta-tomcat-5.5.9-src/jakarta-tomcat-connectors/util/java/org/apache/tomcat/util/net/DefaultServerSocketFactory.java	Sat Mar 26 20:24:17 2005
+++ jakarta-tomcat-5.5.9-src+STWS/jakarta-tomcat-connectors/util/java/org/apache/tomcat/util/net/DefaultServerSocketFactory.java	Thu May 19 12:37:29 2005
@@ -18,6 +18,9 @@ package org.apache.tomcat.util.net;
 
 import java.io.*;
 import java.net.*;
+import java.nio.channels.ServerSocketChannel;
+import java.nio.channels.SocketChannel;
+import java.net.InetSocketAddress;
 
 /**
  * Default server socket factory. Doesn't do much except give us
@@ -25,8 +28,10 @@ import java.net.*;
  *
  * @author db@eng.sun.com
  * @author Harish Prabandham
+ * @author Vicenç Beltran
  */
 
+
 // Default implementation of server sockets.
 
 //
@@ -41,23 +46,33 @@ class DefaultServerSocketFactory extends
 
     public ServerSocket createSocket (int port)
     throws IOException {
-        return  new ServerSocket (port);
+	ServerSocketChannel ssc = ServerSocketChannel.open();
+        return ssc.socket();
     }
 
     public ServerSocket createSocket (int port, int backlog)
     throws IOException {
-        return new ServerSocket (port, backlog);
+
+	InetSocketAddress isa = new InetSocketAddress(port);
+        ServerSocketChannel ssc = ServerSocketChannel.open();
+        ssc.socket().bind(isa, backlog);
+        return ssc.socket();
     }
 
     public ServerSocket createSocket (int port, int backlog,
         InetAddress ifAddress)
     throws IOException {
-        return new ServerSocket (port, backlog, ifAddress);
+	InetSocketAddress isa = new InetSocketAddress(ifAddress, port);
+        ServerSocketChannel ssc = ServerSocketChannel.open();
+        ssc.socket().bind(isa, backlog);
+        return ssc.socket();
     }
  
     public Socket acceptSocket(ServerSocket socket)
  	throws IOException {
- 	return socket.accept();
+	SocketChannel channel = socket.getChannel().accept();
+	if (channel != null) return channel.socket();
+	else return null;
     }
  
     public void handshake(Socket sock)
diff -uprN jakarta-tomcat-5.5.9-src/jakarta-tomcat-connectors/util/java/org/apache/tomcat/util/net/LeaderFollowerWorkerThread.java jakarta-tomcat-5.5.9-src+STWS/jakarta-tomcat-connectors/util/java/org/apache/tomcat/util/net/LeaderFollowerWorkerThread.java
--- jakarta-tomcat-5.5.9-src/jakarta-tomcat-connectors/util/java/org/apache/tomcat/util/net/LeaderFollowerWorkerThread.java	Sat Mar 26 20:24:17 2005
+++ jakarta-tomcat-5.5.9-src+STWS/jakarta-tomcat-connectors/util/java/org/apache/tomcat/util/net/LeaderFollowerWorkerThread.java	Thu May 19 19:06:29 2005
@@ -16,71 +16,182 @@
 
 package org.apache.tomcat.util.net;
 
-import java.net.Socket;
 import org.apache.tomcat.util.threads.ThreadPoolRunnable;
+import java.util.LinkedList;
+import java.util.Iterator;
+import java.util.Set;
+import java.nio.channels.ServerSocketChannel;
+import java.nio.channels.SelectionKey;
+import java.nio.channels.Selector;
+import java.io.IOException;
+import java.net.Socket;
 
-/*
- * I switched the threading model here.
- *
- * We used to have a "listener" thread and a "connection"
- * thread, this results in code simplicity but also a needless
- * thread switch.
- *
- * Instead I am now using a pool of threads, all the threads are
- * simmetric in their execution and no thread switch is needed.
+/**
+ * @author Vicenç Beltran
  */
+
 class LeaderFollowerWorkerThread implements ThreadPoolRunnable {
-    /* This is not a normal Runnable - it gets attached to an existing
-       thread, runs and when run() ends - the thread keeps running.
 
-       It's better to keep the name ThreadPoolRunnable - avoid confusion.
-       We also want to use per/thread data and avoid sync wherever possible.
-    */
-    PoolTcpEndpoint endpoint;
+/* JMX */	
+    private static int maxActiveRequests=100;
+
+    static public void setMaxActiveRequests(int  MaxActiveRequests){
+	maxActiveRequests = MaxActiveRequests;
+	System.out.println("MaxActiveRequests:	"+maxActiveRequests);
+    }
+
+    static public int getMaxActiveRequests(){
+        return maxActiveRequests;
+    }
+/* JMX */	
+
+    private PoolTcpEndpoint endpoint;
+
+    private LinkedList<Socket> registeredSocketList;
+    private LinkedList<Socket> preAcceptedSockets;
+
+    private Iterator<SelectionKey> iterator;
+    private SelectionKey slkAccept;
+    private Selector selector;
+
+    private int filteredActiveRequests;
+    private int numSelectedKeys; 
+    private int count;
     
-    public LeaderFollowerWorkerThread(PoolTcpEndpoint endpoint) {
-        this.endpoint = endpoint;
+    public LeaderFollowerWorkerThread (PoolTcpEndpoint Endpoint) {
+
+        endpoint = Endpoint;
+	preAcceptedSockets = new LinkedList<Socket>();
+	registeredSocketList = new LinkedList<Socket>();
+
+        try{
+                selector = Selector.open();
+        } catch(IOException e) { e.printStackTrace(); }
+
+	iterator = selector.selectedKeys().iterator();
+
+	ServerSocketChannel ssc = endpoint.getServerSocket().getChannel();
+        slkAccept = null;
+        try{
+                ssc.configureBlocking(false);
+                slkAccept = ssc.register(selector, SelectionKey.OP_ACCEPT);
+        }catch (Exception e) { e.printStackTrace(); }
+
+	filteredActiveRequests = 0;
+	numSelectedKeys = 0;
+	count = 0;
     }
 
     public Object[] getInitData() {
-        // no synchronization overhead, but 2 array access 
         Object obj[]=new Object[2];
         obj[1]= endpoint.getConnectionHandler().init();
         obj[0]=new TcpConnection();
         return obj;
     }
-    
+ 
     public void runIt(Object perThrData[]) {
 
-        // Create per-thread cache
-        if (endpoint.isRunning()) {
-
-            // Loop if endpoint is paused
-            while (endpoint.isPaused()) {
-                try {
-                    Thread.sleep(1000);
-                } catch (InterruptedException e) {
-                    // Ignore
-                }
-            }
-
-            // Accept a new connection
-            Socket s = null;
+	// Loop if endpoint is paused
+        while (endpoint.isPaused()) {
             try {
-                s = endpoint.acceptSocket();
-            } finally {
-                // Continue accepting on another thread...
-                if (endpoint.isRunning()) {
-                    endpoint.tp.runIt(this);
-                }
+                Thread.sleep(1000);
+            } catch (InterruptedException e) {
+                // Ignore
             }
+        }
 
-            // Process the connection
-            if (null != s) {
-                endpoint.processSocket(s, (TcpConnection) perThrData[0], (Object[]) perThrData[1]);
-            }
+	Socket socket = null;
+	long localTime = System.nanoTime();
 
+	try{
+	    while(!iterator.hasNext() && preAcceptedSockets.isEmpty()) {
+		
+		if(registeredSocketList.isEmpty()) doSelect();
+		else doSelectNow();
+
+		filteredActiveRequests = (filteredActiveRequests+numSelectedKeys)/2;
+		doAcceptNow(maxActiveRequests-filteredActiveRequests);
+	    }
+
+
+	    if(preAcceptedSockets.size()*count >= numSelectedKeys){
+		count = 0;
+		socket = preAcceptedSockets.poll();
+	    } else {
+		count++;
+	        SelectionKey key = iterator.next(); 
+		socket = (Socket)key.attachment();
+	        key.cancel();
+	    	socket.getChannel().configureBlocking(true);
+		numSelectedKeys--;
+	    } 
+        } catch (IOException e){  
+	    e.printStackTrace(); 
+        } finally {
+	    endpoint.tp.runIt(this);
         }
+	
+	try{
+	        endpoint.processSocket(socket, (TcpConnection) perThrData[0], (Object[]) perThrData[1]);
+
+		synchronized(registeredSocketList){
+	                registeredSocketList.add(socket);
+		}
+		selector.wakeup();
+
+	} catch(Exception e) { 
+		e.printStackTrace();
+	}
     }
-    
+
+
+
+    private void doAcceptNow(int max) {
+
+	Socket s;
+        for(int i=0; i<max; i++){
+            if((s = endpoint.acceptSocket()) == null) return;
+            preAcceptedSockets.add(s);
+        }
+    }
+
+
+    private void doSelect() throws IOException {
+
+        selector.select();
+
+	registerSockets();		
+	selector.selectedKeys().remove(slkAccept);
+	numSelectedKeys = selector.selectedKeys().size();
+        iterator = selector.selectedKeys().iterator();
+    }
+
+
+    private void doSelectNow() throws IOException{
+	
+	selector.selectNow();
+	registerSockets();		
+	selector.selectNow();
+	selector.selectedKeys().remove(slkAccept);
+	numSelectedKeys = selector.selectedKeys().size();
+	iterator = selector.selectedKeys().iterator();
+    }
+
+
+    private void registerSockets(){
+        Socket socket;
+        synchronized(registeredSocketList){
+            while((socket = registeredSocketList.poll()) !=null){
+                try{
+                        if(!socket.isClosed()){
+                                socket.getChannel().configureBlocking(false);
+                                socket.getChannel().register(selector, SelectionKey.OP_READ, socket);
+                        }
+                } catch (Exception e) {
+                        e.printStackTrace();
+                }
+            }
+        }
+    }
+
 }
diff -uprN jakarta-tomcat-5.5.9-src/jakarta-tomcat-connectors/util/java/org/apache/tomcat/util/net/PoolTcpEndpoint.java jakarta-tomcat-5.5.9-src+STWS/jakarta-tomcat-connectors/util/java/org/apache/tomcat/util/net/PoolTcpEndpoint.java
--- jakarta-tomcat-5.5.9-src/jakarta-tomcat-connectors/util/java/org/apache/tomcat/util/net/PoolTcpEndpoint.java	Sat Mar 26 20:24:17 2005
+++ jakarta-tomcat-5.5.9-src+STWS/jakarta-tomcat-connectors/util/java/org/apache/tomcat/util/net/PoolTcpEndpoint.java	Thu May 19 12:20:53 2005
@@ -88,7 +88,7 @@ public class PoolTcpEndpoint implements 
     protected int socketTimeout=-1;
     private boolean lf = true;
 
-    
+
     // ------ Leader follower fields
 
     
@@ -96,7 +96,7 @@ public class PoolTcpEndpoint implements 
     ThreadPoolRunnable listener;
     ThreadPool tp;
 
-    
+
     // ------ Master slave fields
 
     /* The background thread. */
@@ -119,6 +119,15 @@ public class PoolTcpEndpoint implements 
 
     // -------------------- Configuration --------------------
 
+    public void setMaxActiveRequests(int  MaxActiveRequests){
+	LeaderFollowerWorkerThread.setMaxActiveRequests(MaxActiveRequests);
+    }
+
+    public int getMaxActiveRequests(){
+	return LeaderFollowerWorkerThread.getMaxActiveRequests();
+    }
+
+
     public void setMaxThreads(int maxThreads) {
 	if( maxThreads > 0)
 	    tp.setMaxThreads(maxThreads);
@@ -174,6 +183,10 @@ public class PoolTcpEndpoint implements 
 	    serverSocket = ss;
     }
 
+    public ServerSocket getServerSocket(){
+	return serverSocket;
+    }
+
     public void setServerSocketFactory(  ServerSocketFactory factory ) {
 	    this.factory=factory;
     }
@@ -275,11 +288,11 @@ public class PoolTcpEndpoint implements 
     public int getCurrentThreadCount() {
         return curThreads;
     }
-    
-    public int getCurrentThreadsBusy() {
-        return curThreads - workerThreads.size();
+
+    public int getCurrentThreadsBusy(){
+	return tp.getCurrentThreadsBusy();
     }
-    
+
     // -------------------- Public methods --------------------
 
     public void initEndpoint() throws IOException, InstantiationException {
@@ -304,25 +317,31 @@ public class PoolTcpEndpoint implements 
         } catch( InstantiationException ex1 ) {
             throw ex1;
         }
+
+
         initialized = true;
     }
     
     public void startEndpoint() throws IOException, InstantiationException {
-        if (!initialized) {
+     if (!initialized) {
             initEndpoint();
         }
         if (lf) {
             tp.start();
-        }
+        } 
         running = true;
         paused = false;
         if (lf) {
-            listener = new LeaderFollowerWorkerThread(this);
-            tp.runIt(listener);
+	    for(int i=0; i<4; i++){
+	        LeaderFollowerWorkerThread listener = new LeaderFollowerWorkerThread(this);
+            	tp.runIt(listener);
+	    }
         } else {
             maxThreads = getMaxThreads();
             threadStart();
-        }
+        } 
+
+
     }
 
     public void pauseEndpoint() {
@@ -407,7 +426,7 @@ public class PoolTcpEndpoint implements 
                 accepted = factory.acceptSocket(serverSocket);
             }
             if (null == accepted) {
-                log.warn(sm.getString("endpoint.warn.nullSocket"));
+               // log.warn(sm.getString("endpoint.warn.nullSocket"));
             } else {
                 if (!running) {
                     accepted.close();  // rude, but unlikely!
@@ -513,13 +532,13 @@ public class PoolTcpEndpoint implements 
             
             // 1: Set socket options: timeout, linger, etc
             setSocketOptions(s);
-            
+
             // 2: SSL handshake
             step = 2;
             if (getServerSocketFactory() != null) {
                 getServerSocketFactory().handshake(s);
             }
-            
+ 
             // 3: Process the connection
             step = 3;
             con.setEndpoint(this);
diff -uprN jakarta-tomcat-5.5.9-src/jakarta-tomcat-connectors/util/java/org/apache/tomcat/util/net/jsse/JSSE13SocketFactory.java jakarta-tomcat-5.5.9-src+STWS/jakarta-tomcat-connectors/util/java/org/apache/tomcat/util/net/jsse/JSSE13SocketFactory.java
--- jakarta-tomcat-5.5.9-src/jakarta-tomcat-connectors/util/java/org/apache/tomcat/util/net/jsse/JSSE13SocketFactory.java	Sat Mar 26 20:24:17 2005
+++ jakarta-tomcat-5.5.9-src+STWS/jakarta-tomcat-connectors/util/java/org/apache/tomcat/util/net/jsse/JSSE13SocketFactory.java	Wed May 18 10:47:47 2005
@@ -146,6 +146,14 @@ public class JSSE13SocketFactory extends
         socket.setNeedClientAuth(clientAuth);
     }
 
+    protected String[] getEnabledProtocols(SSLSocket socket,
+                                           String requestedProtocols){
+        return null;
+    }
+    protected void setEnabledProtocols(SSLSocket socket,
+                                             String [] protocols){
+    }
+    
     protected void configureClientAuth(SSLSocket socket){
         // In JSSE 1.0.2 docs it does not explicitly
         // state whether SSLSockets returned from 
diff -uprN jakarta-tomcat-5.5.9-src/jakarta-tomcat-connectors/util/java/org/apache/tomcat/util/net/jsse/JSSE14SocketFactory.java jakarta-tomcat-5.5.9-src+STWS/jakarta-tomcat-connectors/util/java/org/apache/tomcat/util/net/jsse/JSSE14SocketFactory.java
--- jakarta-tomcat-5.5.9-src/jakarta-tomcat-connectors/util/java/org/apache/tomcat/util/net/jsse/JSSE14SocketFactory.java	Sat Mar 26 20:24:17 2005
+++ jakarta-tomcat-5.5.9-src+STWS/jakarta-tomcat-connectors/util/java/org/apache/tomcat/util/net/jsse/JSSE14SocketFactory.java	Wed May 18 10:47:47 2005
@@ -114,6 +114,8 @@ public class JSSE14SocketFactory  extend
             // create proxy
             sslProxy = context.getServerSocketFactory();
 
+	    sslProxy2 = context.getSocketFactory();
+
             // Determine which cipher suites to enable
             String requestedCiphers = (String)attributes.get("ciphers");
             enabledCiphers = getEnabledCiphers(requestedCiphers,
@@ -180,12 +182,20 @@ public class JSSE14SocketFactory  extend
 
         return tms;
     }
+
+
     protected void setEnabledProtocols(SSLServerSocket socket, String []protocols){
         if (protocols != null) {
             socket.setEnabledProtocols(protocols);
         }
     }
 
+     protected void setEnabledProtocols(SSLSocket socket, String []protocols){
+        if (protocols != null) {
+            socket.setEnabledProtocols(protocols);
+        }
+    }  
+
     protected String[] getEnabledProtocols(SSLServerSocket socket,
                                            String requestedProtocols){
         String[] supportedProtocols = socket.getSupportedProtocols();
@@ -251,6 +261,74 @@ public class JSSE14SocketFactory  extend
         return enabledProtocols;
     }
 
+
+    protected String[] getEnabledProtocols(SSLSocket socket,
+                                           String requestedProtocols){
+        String[] supportedProtocols = socket.getSupportedProtocols();
+
+        String[] enabledProtocols = null;
+
+        if (requestedProtocols != null) {
+            Vector vec = null;
+            String protocol = requestedProtocols;
+            int index = requestedProtocols.indexOf(',');
+            if (index != -1) {
+                int fromIndex = 0;
+                while (index != -1) {
+                    protocol = requestedProtocols.substring(fromIndex, index).trim();
+                    if (protocol.length() > 0) {
+                        /*
+                         * Check to see if the requested protocol is among the
+                         * supported protocols, i.e., may be enabled
+                         */
+                        for (int i=0; supportedProtocols != null
+                                     && i<supportedProtocols.length; i++) {
+                            if (supportedProtocols[i].equals(protocol)) {
+                                if (vec == null) {
+                                    vec = new Vector();
+                                }
+                                vec.addElement(protocol);
+                                break;
+                            }
+                        }
+                    }
+                    fromIndex = index+1;
+                    index = requestedProtocols.indexOf(',', fromIndex);
+                } // while
+                protocol = requestedProtocols.substring(fromIndex);
+            }
+
+            if (protocol != null) {
+                protocol = protocol.trim();
+                if (protocol.length() > 0) {
+                    /*
+                     * Check to see if the requested protocol is among the
+                     * supported protocols, i.e., may be enabled
+                     */
+                    for (int i=0; supportedProtocols != null
+                                 && i<supportedProtocols.length; i++) {
+                        if (supportedProtocols[i].equals(protocol)) {
+                            if (vec == null) {
+                                vec = new Vector();
+                            }
+                            vec.addElement(protocol);
+                            break;
+                        }
+                    }
+                }
+            }
+
+            if (vec != null) {
+                enabledProtocols = new String[vec.size()];
+                vec.copyInto(enabledProtocols);
+            }
+        }
+
+        return enabledProtocols;
+    }
+
+
+
     protected void configureClientAuth(SSLServerSocket socket){
         if (wantClientAuth){
             socket.setWantClientAuth(wantClientAuth);
@@ -263,5 +341,4 @@ public class JSSE14SocketFactory  extend
         // Per JavaDocs: SSLSockets returned from 
         // SSLServerSocket.accept() inherit this setting.
     }
-    
 }
diff -uprN jakarta-tomcat-5.5.9-src/jakarta-tomcat-connectors/util/java/org/apache/tomcat/util/net/jsse/JSSESocketFactory.java jakarta-tomcat-5.5.9-src+STWS/jakarta-tomcat-connectors/util/java/org/apache/tomcat/util/net/jsse/JSSESocketFactory.java
--- jakarta-tomcat-5.5.9-src/jakarta-tomcat-connectors/util/java/org/apache/tomcat/util/net/jsse/JSSESocketFactory.java	Sat Mar 26 20:24:17 2005
+++ jakarta-tomcat-5.5.9-src+STWS/jakarta-tomcat-connectors/util/java/org/apache/tomcat/util/net/jsse/JSSESocketFactory.java	Wed May 18 10:47:47 2005
@@ -32,6 +32,10 @@ import javax.net.ssl.SSLException;
 import javax.net.ssl.SSLServerSocket;
 import javax.net.ssl.SSLServerSocketFactory;
 import javax.net.ssl.SSLSocket;
+import javax.net.ssl.SSLSocketFactory;
+import java.nio.channels.ServerSocketChannel;
+import java.nio.channels.SocketChannel;
+import java.net.InetSocketAddress;
 
 /*
   1. Make the JSSE's jars available, either as an installed
@@ -49,6 +53,7 @@ import javax.net.ssl.SSLSocket;
  * @author Costin Manolache
  * @author Stefan Freyr Stefansson
  * @author EKR -- renamed to JSSESocketFactory
+ * @author Vicenç Beltran
  */
 public abstract class JSSESocketFactory
     extends org.apache.tomcat.util.net.ServerSocketFactory
@@ -68,7 +73,8 @@ public abstract class JSSESocketFactory
     protected String clientAuth = "false";
     protected SSLServerSocketFactory sslProxy = null;
     protected String[] enabledCiphers;
-   
+
+    protected SSLSocketFactory sslProxy2 = null;
 
     public JSSESocketFactory () {
     }
@@ -77,18 +83,20 @@ public abstract class JSSESocketFactory
         throws IOException
     {
         if (!initialized) init();
-        ServerSocket socket = sslProxy.createServerSocket(port);
-        initServerSocket(socket);
-        return socket;
+
+        ServerSocketChannel ssc =  ServerSocketChannel.open();
+        return ssc.socket();
     }
     
     public ServerSocket createSocket (int port, int backlog)
         throws IOException
     {
         if (!initialized) init();
-        ServerSocket socket = sslProxy.createServerSocket(port, backlog);
-        initServerSocket(socket);
-        return socket;
+
+	InetSocketAddress isa = new InetSocketAddress(port);
+        ServerSocketChannel ssc = ServerSocketChannel.open();
+        ssc.socket().bind(isa, backlog);
+	return ssc.socket();
     }
     
     public ServerSocket createSocket (int port, int backlog,
@@ -96,19 +104,24 @@ public abstract class JSSESocketFactory
         throws IOException
     {   
         if (!initialized) init();
-        ServerSocket socket = sslProxy.createServerSocket(port, backlog,
-                                                          ifAddress);
-        initServerSocket(socket);
-        return socket;
+	InetSocketAddress isa = new InetSocketAddress(ifAddress, port);
+        ServerSocketChannel ssc = ServerSocketChannel.open();
+        ssc.socket().bind(isa, backlog);
+	return ssc.socket();
     }
-    
+
     public Socket acceptSocket(ServerSocket socket)
         throws IOException
     {
         SSLSocket asock = null;
         try {
-             asock = (SSLSocket)socket.accept();
-             configureClientAuth(asock);
+
+	     SocketChannel channel = socket.getChannel().accept();
+	     if(channel == null) return null;	
+	     Socket sk = channel.socket();
+	     asock = (SSLSocket) sslProxy2.createSocket(sk, sk.getInetAddress().getHostName(), sk.getPort(), true);	
+	     initSocket(asock);
+	     asock.setUseClientMode(false);
         } catch (SSLException e){
           throw new SocketException("SSL handshake error" + e.toString());
         }
@@ -116,7 +129,7 @@ public abstract class JSSESocketFactory
     }
 
     public void handshake(Socket sock) throws IOException {
-        ((SSLSocket)sock).startHandshake();
+	((SSLSocket)sock).getSession();
     }
 
     /*
@@ -321,6 +334,10 @@ public abstract class JSSESocketFactory
     abstract protected String[] getEnabledProtocols(SSLServerSocket socket,
                                                     String requestedProtocols);
 
+  
+    abstract protected String[] getEnabledProtocols(SSLSocket socket,
+                                                    String requestedProtocols);
+
     /**
      * Set the SSL protocol variants to be enabled.
      * @param socket the SSLServerSocket.
@@ -329,6 +346,9 @@ public abstract class JSSESocketFactory
     abstract protected void setEnabledProtocols(SSLServerSocket socket,
                                             String [] protocols);
 
+
+    abstract protected void setEnabledProtocols(SSLSocket socket,
+                                            String [] protocols);
     /**
      * Configure Client authentication for this version of JSSE.  The
      * JSSE included in Java 1.4 supports the 'want' value.  Prior
@@ -366,4 +386,21 @@ public abstract class JSSESocketFactory
         configureClientAuth(socket);
     }
 
+    private void initSocket(SSLSocket socket) {
+
+
+        if (enabledCiphers != null) {
+            socket.setEnabledCipherSuites(enabledCiphers);
+        }
+
+        String requestedProtocols = (String) attributes.get("protocols");
+        setEnabledProtocols(socket, getEnabledProtocols(socket,
+                                                        requestedProtocols));
+
+        // we don't know if client auth is needed -
+        // after parsing the request we may re-handshake
+        configureClientAuth(socket);
+    }
+
+
 }
===================================================================



---------------------------------------------------------------------
To unsubscribe, e-mail: tomcat-dev-unsubscribe@jakarta.apache.org
For additional commands, e-mail: tomcat-dev-help@jakarta.apache.org


Re: Hybrid (NIO+Multithread, SSL enabled) architecture for Coyote

Posted by Remy Maucherat <re...@apache.org>.
Vicenc Beltran Querol wrote:
> It has been a pleasure to post this information, and to receive constructive
> and technically-reasoned answers like yours. Deciding which parameters
> define the performance of a server is a great and never-ending discussion topic.
> Anyway, feel free to send me any questions you may have about the benchmarking 
> environment I've used for my experiments.
> 
> By the way, this is my last post about this topic. I've perfectly
> understood Remy's messages (in the list and in my personal address), 
> so I will not waste your time anymore.

The two problems were that you got into a loop, implying that:
a) your solution was perfect (sorry, it's a nice experiment, but it's 
perfectible)
b) your use case and test scenario were the only legitimate ones

At this point, I am personally done on the issue, that's all.

Rémy

---------------------------------------------------------------------
To unsubscribe, e-mail: tomcat-dev-unsubscribe@jakarta.apache.org
For additional commands, e-mail: tomcat-dev-help@jakarta.apache.org


RE: Hybrid (NIO+Multithread, SSL enabled) architecture for Coyote

Posted by Yoav Shapira <yo...@MIT.EDU>.
Hi,

> By the way, this is my last post about this topic. I've perfectly
> understood Remy's messages (in the list and in my personal address),
> so I will not waste your time anymore.

It was far from a waste of time.  Please don't hesitate to contribute again
in performance tuning or other areas.  Hopefully we'll hear from you again,

Yoav


---------------------------------------------------------------------
To unsubscribe, e-mail: tomcat-dev-unsubscribe@jakarta.apache.org
For additional commands, e-mail: tomcat-dev-help@jakarta.apache.org


Re: Hybrid (NIO+Multithread, SSL enabled) architecture for Coyote

Posted by Vicenc Beltran Querol <vb...@ac.upc.edu>.
Hi Peter,

>  I took a look at the AB and RUBiS numbers. Honestly I don't
> understand the RUBiS graphs. 

You can find an explanation of the httperf numbers in the httperf man page 
or at http://www.hpl.hp.com/personal/David_Mosberger/httperf.html. 

RUBiS is the dynamic application used for the test.
If you are interested in the RUBiS benchmark, we can send 
you a WAR archive with the application and the file with the request 
distribution used to generate the workload. 

> From the AB results, it looks like the
> connect, processing and wait times are lower for the hybrid. That's a
> good achievement and congrats to you on that.
> I'm not convinced of the benefit of the hybrid approach over APR. if
> both are equal, then it might be good to have both as options. it's
> nice to be able to support /. effect, but in reality that's achieved
> by distributing servers across multiple hosting facilities. It's not
> achieved through hosting a website on a single quad server supporting
> 10K concurrent connections. I'm not a committer, so I don't have a say
> in what goes into tomcat. thanks for researching NIO and taking time
> to post these results.
> 

It has been a pleasure to post this information, and to receive constructive
and technically-reasoned answers like yours. Deciding which parameters
define the performance of a server is a great and never-ending discussion topic.
Anyway, feel free to send me any questions you may have about the benchmarking 
environment I've used for my experiments.

By the way, this is my last post about this topic. I've perfectly
understood Remy's messages (in the list and in my personal address), 
so I will not waste your time anymore.


Sincerely,
Vicenç





> peter lin
> 
> 
> On 5/25/05, Vicenc Beltran Querol <vb...@ac.upc.edu> wrote:
> > Hi,
> > 
> > The results of the AB benchmark configured with 20 concurrent clients are posted below.
> > If somebody is interested in more configurations (from 20 to 10000 concurrent clients),
> > they are available at http://www.bsc.es/edragon/pdf/TestAb.tgz
> > 
> > BTW, a comparison between Tomcat and the Hybrid (Tomcat+NIO) web servers
> > is also available at http://www.bsc.es/edragon/pdf/TestRubisDynamic.tgz. The
> > comparison is based on the RUBiS benchmark and the httperf workload generator.
> > 
> > 
> > Regards,
> > 
> > -Vicenç
> > 
> > 
> > ./ab -k -c 20 -n 2000000 http://pcbosch:8080/tomcat.gif
> > 
> > Client: 2 way Xeon 2.4Ghz, 2GB RAM
> > Server: 4 way Xeon 1.4Ghz, 2GB RAM
> > Network: Gbit
> > Java: build 1.5.0_03-b07
> > 
> > 
> > Tomcat 5.5.9
> > -------------------------------------------------------------
> > -------------------------------------------------------------
> > Document Path:          /tomcat.gif
> > Document Length:        1934 bytes
> > 
> > Concurrency Level:      20
> > Time taken for tests:   122.460403 seconds
> > Complete requests:      2000000
> > Failed requests:        0
> > Write errors:           0
> > Keep-Alive requests:    1980006
> > Total transferred:      32937062 bytes
> > HTML transferred:       -426963428 bytes
> > Requests per second:    16331.81 [#/sec] (mean)
> > Time per request:       1.225 [ms] (mean)
> > Time per request:       0.061 [ms] (mean, across all concurrent requests)
> > Transfer rate:          262.66 [Kbytes/sec] received
> > 
> > Connection Times (ms)
> >               min  mean[+/-sd] median   max
> > Connect:        0    0   0.0      0      14
> > Processing:     0    0   2.5      0     636
> > Waiting:        0    0   2.4      0     636
> > Total:          0    0   2.5      0     636
> > 
> > Percentage of the requests served within a certain time (ms)
> >   50%      0
> >   66%      1
> >   75%      1
> >   80%      1
> >   90%      1
> >   95%      2
> >   98%      6
> >   99%     11
> >  100%    636 (longest request)
> > -------------------------------------------------------------
> > -------------------------------------------------------------
> > 
> > 
> > 
> > 
> > Tomcat Hybrid 5.5.9
> > -------------------------------------------------------------
> > -------------------------------------------------------------
> > Document Path:          /tomcat.gif
> > Document Length:        1934 bytes
> > 
> > Concurrency Level:      20
> > Time taken for tests:   282.264843 seconds
> > Complete requests:      2000000
> > Failed requests:        0
> > Write errors:           0
> > Keep-Alive requests:    2000000
> > Total transferred:      33032704 bytes
> > HTML transferred:       -426967296 bytes
> > Requests per second:    7085.54 [#/sec] (mean)
> > Time per request:       2.823 [ms] (mean)
> > Time per request:       0.141 [ms] (mean, across all concurrent requests)
> > Transfer rate:          114.28 [Kbytes/sec] received
> > 
> > Connection Times (ms)
> >               min  mean[+/-sd] median   max
> > Connect:        0    0   0.0      0       1
> > Processing:     0    2   1.7      2      24
> > Waiting:        0    2   1.7      2      24
> > Total:          0    2   1.7      2      24
> > 
> > Percentage of the requests served within a certain time (ms)
> >   50%      2
> >   66%      3
> >   75%      4
> >   80%      4
> >   90%      5
> >   95%      5
> >   98%      6
> >   99%      6
> >  100%     24 (longest request)
> > -------------------------------------------------------------
> > -------------------------------------------------------------
> > 
> > 
> > ---------------------------------------------------------------------
> > To unsubscribe, e-mail: tomcat-dev-unsubscribe@jakarta.apache.org
> > For additional commands, e-mail: tomcat-dev-help@jakarta.apache.org
> > 
> >
> 
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: tomcat-dev-unsubscribe@jakarta.apache.org
> For additional commands, e-mail: tomcat-dev-help@jakarta.apache.org
> 

---------------------------------------------------------------------
To unsubscribe, e-mail: tomcat-dev-unsubscribe@jakarta.apache.org
For additional commands, e-mail: tomcat-dev-help@jakarta.apache.org


Re: Hybrid (NIO+Multithread, SSL enabled) architecture for Coyote

Posted by Peter Lin <wo...@gmail.com>.
 I took a look at the AB and RUBiS numbers. Honestly I don't
understand the RUBiS graphs. From the AB results, it looks like the
connect, processing and wait times are lower for the hybrid. That's a
good achievement and congrats to you on that.

I'm not convinced of the benefit of the hybrid approach over APR. if
both are equal, then it might be good to have both as options. it's
nice to be able to support /. effect, but in reality that's achieved
by distributing servers across multiple hosting facilities. It's not
achieved through hosting a website on a single quad server supporting
10K concurrent connections. I'm not a committer, so I don't have a say
in what goes into tomcat. thanks for researching NIO and taking time
to post these results.

peter lin


On 5/25/05, Vicenc Beltran Querol <vb...@ac.upc.edu> wrote:
> Hi,
> 
> The results of the AB benchmark configured with 20 concurrent clients are posted below.
> If somebody is interested in more configurations (from 20 to 10000 concurrent clients),
> they are available at http://www.bsc.es/edragon/pdf/TestAb.tgz
> 
> BTW, a comparison between Tomcat and the Hybrid (Tomcat+NIO) web servers
> is also available at http://www.bsc.es/edragon/pdf/TestRubisDynamic.tgz. The
> comparison is based on the RUBiS benchmark and the httperf workload generator.
> 
> 
> Regards,
> 
> -Vicenç
> 
> 
> ./ab -k -c 20 -n 2000000 http://pcbosch:8080/tomcat.gif
> 
> Client: 2 way Xeon 2.4Ghz, 2GB RAM
> Server: 4 way Xeon 1.4Ghz, 2GB RAM
> Network: Gbit
> Java: build 1.5.0_03-b07
> 
> 
> Tomcat 5.5.9
> -------------------------------------------------------------
> -------------------------------------------------------------
> Document Path:          /tomcat.gif
> Document Length:        1934 bytes
> 
> Concurrency Level:      20
> Time taken for tests:   122.460403 seconds
> Complete requests:      2000000
> Failed requests:        0
> Write errors:           0
> Keep-Alive requests:    1980006
> Total transferred:      32937062 bytes
> HTML transferred:       -426963428 bytes
> Requests per second:    16331.81 [#/sec] (mean)
> Time per request:       1.225 [ms] (mean)
> Time per request:       0.061 [ms] (mean, across all concurrent requests)
> Transfer rate:          262.66 [Kbytes/sec] received
> 
> Connection Times (ms)
>               min  mean[+/-sd] median   max
> Connect:        0    0   0.0      0      14
> Processing:     0    0   2.5      0     636
> Waiting:        0    0   2.4      0     636
> Total:          0    0   2.5      0     636
> 
> Percentage of the requests served within a certain time (ms)
>   50%      0
>   66%      1
>   75%      1
>   80%      1
>   90%      1
>   95%      2
>   98%      6
>   99%     11
>  100%    636 (longest request)
> -------------------------------------------------------------
> -------------------------------------------------------------
> 
> 
> 
> 
> Tomcat Hybrid 5.5.9
> -------------------------------------------------------------
> -------------------------------------------------------------
> Document Path:          /tomcat.gif
> Document Length:        1934 bytes
> 
> Concurrency Level:      20
> Time taken for tests:   282.264843 seconds
> Complete requests:      2000000
> Failed requests:        0
> Write errors:           0
> Keep-Alive requests:    2000000
> Total transferred:      33032704 bytes
> HTML transferred:       -426967296 bytes
> Requests per second:    7085.54 [#/sec] (mean)
> Time per request:       2.823 [ms] (mean)
> Time per request:       0.141 [ms] (mean, across all concurrent requests)
> Transfer rate:          114.28 [Kbytes/sec] received
> 
> Connection Times (ms)
>               min  mean[+/-sd] median   max
> Connect:        0    0   0.0      0       1
> Processing:     0    2   1.7      2      24
> Waiting:        0    2   1.7      2      24
> Total:          0    2   1.7      2      24
> 
> Percentage of the requests served within a certain time (ms)
>   50%      2
>   66%      3
>   75%      4
>   80%      4
>   90%      5
>   95%      5
>   98%      6
>   99%      6
>  100%     24 (longest request)
> -------------------------------------------------------------
> -------------------------------------------------------------
> 
> 
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: tomcat-dev-unsubscribe@jakarta.apache.org
> For additional commands, e-mail: tomcat-dev-help@jakarta.apache.org
> 
>

---------------------------------------------------------------------
To unsubscribe, e-mail: tomcat-dev-unsubscribe@jakarta.apache.org
For additional commands, e-mail: tomcat-dev-help@jakarta.apache.org


Re: Hybrid (NIO+Multithread, SSL enabled) architecture for Coyote

Posted by Remy Maucherat <re...@apache.org>.
Vicenc Beltran Querol wrote:
> It's great to read your opinions... ;)

Let's cut down on the "broken record" effect then: -1 for your code, 
it's not a clean implementation ;) (I end up with a smiley, since you 
did as well)

Rémy

---------------------------------------------------------------------
To unsubscribe, e-mail: tomcat-dev-unsubscribe@jakarta.apache.org
For additional commands, e-mail: tomcat-dev-help@jakarta.apache.org


Re: Hybrid (NIO+Multithread, SSL enabled) architecture for Coyote

Posted by Remy Maucherat <re...@apache.org>.
Mladen Turk wrote:
> Actually I just read a perfect use case scenario request for
> the new APR connector on tomcat-user@.
> With only a couple of threads, all 1000 connections could be handled
> without having 1000 threads.

Actually, it seems much more like a case of using the servlet API in a way 
it was not designed for.

Rémy

---------------------------------------------------------------------
To unsubscribe, e-mail: tomcat-dev-unsubscribe@jakarta.apache.org
For additional commands, e-mail: tomcat-dev-help@jakarta.apache.org


Re: Hybrid (NIO+Multithread, SSL enabled) architecture for Coyote

Posted by Mladen Turk <mt...@apache.org>.
Remy Maucherat wrote:
>>
>> In my mind, the argument for tomcat supporting 1000 concurrent
>> connections for an extended period of time isn't valid from my
>> experience.
> 
> - all the other APR features which are really useful and not provided by 
> the core Java platform



Actually I just read a perfect use case scenario request for
the new APR connector on tomcat-user@.
With only a couple of threads, all 1000 connections could be handled
without having 1000 threads.


 > Users are connecting to the solution by invoking a servlet (run by Tomcat).
 > If a user is authenticated, then I use the HttpServletResponse object (in
 > the service method) to get the OutputStream of that object =>
 > HttpServletResponse.getOutputStream()
 >
 > Then this stream will be used to handle communication between my client
 > application and my custom server process (I need to send real-time
 > information through this channel).
 >
 > Important => A client session can last several hours, so the life of the
 > servlet is set to infinite.
 >
 > In fact, I had the idea of delegating socket connection management to the
 > Tomcat engine by setting the servlet lifetime to infinite.
 >
 > Is this a good way to do it, or should I use a socket pooling algorithm?
 > (Actually, the server can freeze after an irregular amount of time; the
 > time for a write operation on the OutputStream can increase until it is
 > totally unusable, and I have to close and reconnect.)
 >
 > The objective is to handle more than 1000 client sessions.



---------------------------------------------------------------------
To unsubscribe, e-mail: tomcat-dev-unsubscribe@jakarta.apache.org
For additional commands, e-mail: tomcat-dev-help@jakarta.apache.org


Re: Hybrid (NIO+Multithread, SSL enabled) architecture for Coyote

Posted by Remy Maucherat <re...@apache.org>.
Peter Lin wrote:
> I'm not sure I agree with that statement. The reason for using Apache
> AB for small files under 2K is that JMeter is unable to max out the
> server with tiny files. You can see the original numbers I produced
> here http://people.apache.org/~woolfel/tc_results.html.
> 
> Since the bulk of my work the last 4 years has been with large
> applications handling millions of pageviews a day, I can safely say
> that most large deployments will rarely exceed 50 concurrent requests
> for an extended period of time. This is just my experience on real
> applications, but we generally buffer the entire page and then send it
> in one shot.  This is done for several reasons.
> 
> 1. WAN latency - as you already stated
> 2. improve accuracy of performance logging. We log the page generation
> to make sure we know exactly how much time is spent on the query,
> page markup and transferring the data.
> 3. allows us to track network bottleneck more accurately
> 
> In my mind, the argument for tomcat supporting 1000 concurrent
> connections for an extended period of time isn't valid from my
> experience. There's typically a large cluster of servers that are load
> balanced behind a load balancing router. For me, throughput is far
> more important because most of the images and files range from 5-15K in
> size. In these cases, maximizing throughput is more important. So for
> small sites trying to deal with the /. effect, it's not worth it.  I
> say that because the network will die long before tomcat will. Any
> site with serious performance requirements will host at a tier 1
> provider and have a cluster of servers.  Small personal sites are
> shared hosted and often don't have enough bandwidth.

Yes, all this stuff is not really that useful in the real world in the 
end, and is mostly an answer to the non-blocking I/O hype (which I find 
quite annoying). The actual benefits are:
- better resource efficiency for small servers (hopefully eventually 
allowing a larger market share for Java web servers), but indeed it's 
not going to help in front of /.
- all the other APR features which are really useful and not provided by 
the core Java platform
- a lot more efficient for certain proxying scenarios (AJP mostly, but 
HTTP can have benefits too) - having maximum throughput is very 
important for this scenario, and is why I want maximum throughput
- a lot more efficient for large static files (ex: serving media) due to 
sendfile

Rémy

---------------------------------------------------------------------
To unsubscribe, e-mail: tomcat-dev-unsubscribe@jakarta.apache.org
For additional commands, e-mail: tomcat-dev-help@jakarta.apache.org


Re: Hybrid (NIO+Multithread, SSL enabled) architecture for Coyote

Posted by Peter Lin <wo...@gmail.com>.
On 5/25/05, Vicenc Beltran Querol <vb...@ac.upc.edu> wrote:
> Hi,
> 
> I'm absolutely disconcerted. In your previous answer you agreed that the
> AB test is not good for comparing two different architectural
> approaches. And you still want to compare the performance of the hybrid
> architecture using it. But when I look for APR results on the net, I
> find in message 70876
> (http://www.mail-archive.com/tomcat-dev@jakarta.apache.org/msg70876.html)
> of this list that you're using JMeter and think-times in other experiments.
> Have you looked at any of the results I've posted for realistic benchmarks?
> Why are you so obsessed with the AB results at concurrency level 20?
> Sorry, but I don't see the point of it...
> 
> 
> Using non-realistic benchmarks and very specific performance tricks only
> leads to winning a few milliseconds in the response time of the server,
> but it's not a real benefit for the clients. When the server is
> overloaded (when performance improvements are really decisive), these
> benefits are negligible... In my opinion, following these development
> criteria is counterproductive and makes the server worse in the real
> world (where users put it into production). Surely, you disagree...
> 
> 

I'm not sure I agree with that statement. The reason for using Apache
AB for small files under 2K is that JMeter is unable to max out the
server with tiny files. You can see the original numbers I produced
here http://people.apache.org/~woolfel/tc_results.html.

Since the bulk of my work the last 4 years has been with large
applications handling millions of pageviews a day, I can safely say
that most large deployments will rarely exceed 50 concurrent requests
for extended periods of time. This is just my experience on real
applications, but we generally buffer the entire page and then send it
in one shot (see the sketch after this list).  This is done for several reasons.

1. WAN latency - as you already stated
2. improve accuracy of performance logging. We log the page generation
to make sure we know exactly how much time is spent for the query,
page markup and transferring the data.
3. allows us to track network bottlenecks more accurately
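
(A minimal sketch of this buffer-then-send pattern, as a hypothetical servlet rather than code from any of the deployments above:)

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class BufferedPageServlet extends HttpServlet {
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        long start = System.currentTimeMillis();

        // Render the whole page into memory first.
        ByteArrayOutputStream page = new ByteArrayOutputStream(16 * 1024);
        page.write(renderPage(req));  // hypothetical page-generation step

        // Generation time is measured before any bytes hit the network,
        // so WAN latency cannot skew the logged number.
        log("page generated in " + (System.currentTimeMillis() - start) + " ms");

        // Send the buffered page in one shot.
        resp.setContentType("text/html");
        resp.setContentLength(page.size());
        page.writeTo(resp.getOutputStream());
    }

    private byte[] renderPage(HttpServletRequest req) {
        return "<html><body>...</body></html>".getBytes();  // placeholder markup
    }
}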

In my mind, the argument for Tomcat supporting 1000 concurrent
connections for an extended period of time isn't valid from my
experience. There's typically a large cluster of servers that are load
balanced behind a load balancing router. For me, throughput is far
more important because most of the images and files range from 5-15K in
size. In these cases, maximizing throughput is what matters. So for
small sites trying to deal with the /. effect, it's not worth it.  I
say that because the network will die long before Tomcat will. Any
site with serious performance requirements will host at a tier 1
provider and have a cluster of servers.  Small personal sites are
shared-hosted and often don't have enough bandwidth.

my biased .02 cents.

peter lin



Re: Hybrid (NIO+Multithread, SSL enabled) architecture for Coyote

Posted by Vicenc Beltran Querol <vb...@ac.upc.edu>.
Hi,

>The APR connector has a trick to optimize pipelining (where a client
>would do many requests on a single connection, but with a small delay
>between requests - typically, it would happen when getting lots of
>images from a website). 

What's the trick? Are you doing blocking read operations with
really short timeouts to try to create false pipelines, as I've already
seen in other scenarios? Because that is only helpful when working in a
really synthetic environment (AB in a very-low-latency LAN).
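
(For reference, a minimal sketch of the kind of short-timeout peek I mean; this is my guess at the mechanism, not the actual APR code:)

import java.io.BufferedInputStream;
import java.io.IOException;
import java.net.Socket;
import java.net.SocketTimeoutException;

final class PipelinePeek {
    // After writing a response, peek briefly for a follow-up request
    // before handing the connection back to the acceptor/poller.
    static boolean nextRequestArrived(Socket socket, BufferedInputStream in)
            throws IOException {
        int oldTimeout = socket.getSoTimeout();
        socket.setSoTimeout(20);            // very short read timeout, in ms
        try {
            in.mark(1);
            int b = in.read();              // blocks for at most ~20 ms
            if (b == -1) {
                return false;               // client closed the connection
            }
            in.reset();                     // the byte belongs to the next request
            return true;                    // keep the thread and parse it now
        } catch (SocketTimeoutException e) {
            return false;                   // nothing pipelined; release the thread
        } finally {
            socket.setSoTimeout(oldTimeout);
        }
    }
}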

Have you ever tried the APR connector in a WAN?

The Hybrid connector is already optimized for real pipelined HTTP
requests. Anyway, as far as I know, AB does not use HTTP pipelining.
In my opinion, you're trusting an unrealistic client behavior,
assuming very short periods between the moment one response is sent
and a new request is received for a given client.


>Maybe this could be added here as well. Removing this optimization will
>make pipelining performance go down as well. Note that Mladen tells
>me of more sendfile-like performance tricks that can't be matched by
>NIO (at least for now).

I'm absolutely disconcerted. In your previous answer you agreed that the
AB test is not good for comparing two different architectural
approaches, and yet you still want to compare the performance of the hybrid
architecture using it. But when I look for APR results on the net, I
find in message 70876
(http://www.mail-archive.com/tomcat-dev@jakarta.apache.org/msg70876.html) 
of this list that you're using JMeter and think-times in other experiments. 
Have you looked at any of the results I've posted for realistic benchmarks? 
Why are you so obsessed with the AB results at concurrency level 20? 
Sorry, but I don't see the point of it...


Using non-realistic benchmarks and very benchmark-oriented performance tricks
only leads to winning a few milliseconds in the response time of the server.
But it's not a real benefit for the clients. When the server is
overloaded (when performance improvements are really decisive), these
benefits are negligible... In my opinion, following these development
criteria is counterproductive and makes the server worse in the real
world (where users put it into production). Surely, you disagree...


>Another test which could be done is comparing performance without the
>"-k" setting of ab (removing the impact of any pipelining opimization,
>but more of the overhead is then on the TCP stack rather than on the
>connector).

If we move to an HTTP/1.0 scenario, why do we need to change the
connector architecture? The multithreaded connector would be a good
choice then...


>I still don't like the proposed NIO solution, however, as it adds a lot
>of complexity to the default thread pool (which was complex already,
>but with those changes, it becomes black magic, and robustness will
>likely go down).

The thread pool is unmodified in the hybrid connector. The only
modifications are done in the leader-follower code, to use NIO
operations.

In my tests, the robustness has been supreme. At least, as good as the
out-of-the-box Tomcat.

>If doing NIO, I think a much simpler thread pool structure should be
>used instead, like the APR endpoint does (even better, it could be a
>Java 5 version taking advantage of the new thread pool APIs).

Is JNI simpler?

>I expect Jean-Francois to live up to the hype and produce less
>experimental code ;)

Sure... :)

I've one final question about the APR architecture. Have you implemented 
any kind of admission control mechanism on it?


It's great to read your opinions... ;)

Best,
Vicenç




Re: Hybrid (NIO+Multithread, SSL enabled) architecture for Coyote

Posted by Remy Maucherat <re...@apache.org>.
Peter Lin wrote:
> Am I reading the results correctly?
> 
> tomcat 5.5.9 - 16,331.81/sec
> hybrid - 7,085.54/sec
> 
> That means the hybrid connector is 2x slower.  If those results are
> accurate, I would say the APR connector is a much better choice.

It's more complex than that.

The APR connector has a trick to optimize pipelining (where a client 
would do many requests on a single connection, but with a small delay 
between requests - typically, it would happen when getting lots of 
images from a website). Maybe this could be added here as well. Removing 
this optimization will make pipelining performance go down as well. 
Note that Mladen tells me of more sendfile-like performance tricks that 
can't be matched by NIO (at least for now).

Another test which could be done is comparing performance without the 
"-k" setting of ab (removing the impact of any pipelining opimization, 
but more of the overhead is then on the TCP stack rather than on the 
connector).

I still don't like the proposed NIO solution, however, as it adds a lot 
of complexity to the default thread pool (which was complex already, but 
with those changes, it becomes black magic, and robustness will likely 
go down). If doing NIO, I think a much simpler thread pool structure 
should be used instead, like the APR endpoint does (even better, it 
could be a Java 5 version taking advantage of the new thread pool APIs). 
I expect Jean-Francois to live up to the hype and produce less 
experimental code ;)

Rémy



Re: Hybrid (NIO+Multithread, SSL enabled) architecture for Coyote

Posted by Peter Lin <wo...@gmail.com>.
Am I reading the results correctly?

tomcat 5.5.9 - 16,331.81/sec
hybrid - 7,085.54/sec

That means the hybrid connector is 2x slower.  If those results are
accurate, I would say the APR connector is a much better choice.

peter lin



On 5/25/05, Vicenc Beltran Querol <vb...@ac.upc.edu> wrote:
> Hi,
> 
> The results of the AB benchmark configured with 20 concurrent clients are posted below,
> If somebody is interested in more configurations (from 20 to 10000 concurrent clients)
> they are available at http://www.bsc.es/edragon/pdf/TestAb.tgz
> 
> BTW, a comparison between the Tomcat and Hybrid (Tomcat+NIO) web servers is
> also available at http://www.bsc.es/edragon/pdf/TestRubisDynamic.tgz. The comparison is
> based on the RUBiS benchmark and the httperf workload generator.
> 
> 
> Regards,
> 
> -Vicenç
> 
> 
> ./ab -k -c 20 -n 2000000 http://pcbosch:8080/tomcat.gif
> 
> Client: 2 way Xeon 2.4Ghz, 2GB RAM
> Server: 4 way Xeon 1.4Ghz, 2GB RAM
> Network: Gbit
> Java: build 1.5.0_03-b07
> 
> 
> Tomcat 5.5.9
> -------------------------------------------------------------
> -------------------------------------------------------------
> Document Path:          /tomcat.gif
> Document Length:        1934 bytes
> 
> Concurrency Level:      20
> Time taken for tests:   122.460403 seconds
> Complete requests:      2000000
> Failed requests:        0
> Write errors:           0
> Keep-Alive requests:    1980006
> Total transferred:      32937062 bytes
> HTML transferred:       -426963428 bytes
> Requests per second:    16331.81 [#/sec] (mean)
> Time per request:       1.225 [ms] (mean)
> Time per request:       0.061 [ms] (mean, across all concurrent requests)
> Transfer rate:          262.66 [Kbytes/sec] received
> 
> Connection Times (ms)
>               min  mean[+/-sd] median   max
> Connect:        0    0   0.0      0      14
> Processing:     0    0   2.5      0     636
> Waiting:        0    0   2.4      0     636
> Total:          0    0   2.5      0     636
> 
> Percentage of the requests served within a certain time (ms)
>   50%      0
>   66%      1
>   75%      1
>   80%      1
>   90%      1
>   95%      2
>   98%      6
>   99%     11
>  100%    636 (longest request)
> -------------------------------------------------------------
> -------------------------------------------------------------
> 
> 
> 
> 
> Tomcat Hybrid 5.5.9
> -------------------------------------------------------------
> -------------------------------------------------------------
> Document Path:          /tomcat.gif
> Document Length:        1934 bytes
> 
> Concurrency Level:      20
> Time taken for tests:   282.264843 seconds
> Complete requests:      2000000
> Failed requests:        0
> Write errors:           0
> Keep-Alive requests:    2000000
> Total transferred:      33032704 bytes
> HTML transferred:       -426967296 bytes
> Requests per second:    7085.54 [#/sec] (mean)
> Time per request:       2.823 [ms] (mean)
> Time per request:       0.141 [ms] (mean, across all concurrent requests)
> Transfer rate:          114.28 [Kbytes/sec] received
> 
> Connection Times (ms)
>               min  mean[+/-sd] median   max
> Connect:        0    0   0.0      0       1
> Processing:     0    2   1.7      2      24
> Waiting:        0    2   1.7      2      24
> Total:          0    2   1.7      2      24
> 
> Percentage of the requests served within a certain time (ms)
>   50%      2
>   66%      3
>   75%      4
>   80%      4
>   90%      5
>   95%      5
>   98%      6
>   99%      6
>  100%     24 (longest request)
> -------------------------------------------------------------
> -------------------------------------------------------------
> 
> 



Re: Hybrid (NIO+Multithread, SSL enabled) architecture for Coyote

Posted by Vicenc Beltran Querol <vb...@ac.upc.edu>.
Hi,

The results of the AB benchmark configured with 20 concurrent clients are posted below,
If somebody is interested in more configurations (from 20 to 10000 concurrent clients) 
they are available at http://www.bsc.es/edragon/pdf/TestAb.tgz

BTW, a comparison between the Tomcat and Hybrid (Tomcat+NIO) web servers is 
also available at http://www.bsc.es/edragon/pdf/TestRubisDynamic.tgz. The comparison is 
based on the RUBiS benchmark and the httperf workload generator.


Regards,

-Vicenç


./ab -k -c 20 -n 2000000 http://pcbosch:8080/tomcat.gif

Client: 2 way Xeon 2.4Ghz, 2GB RAM
Server: 4 way Xeon 1.4Ghz, 2GB RAM
Network: Gbit
Java: build 1.5.0_03-b07


Tomcat 5.5.9
-------------------------------------------------------------
-------------------------------------------------------------
Document Path:          /tomcat.gif
Document Length:        1934 bytes

Concurrency Level:      20
Time taken for tests:   122.460403 seconds
Complete requests:      2000000
Failed requests:        0
Write errors:           0
Keep-Alive requests:    1980006
Total transferred:      32937062 bytes
HTML transferred:       -426963428 bytes
Requests per second:    16331.81 [#/sec] (mean)
Time per request:       1.225 [ms] (mean)
Time per request:       0.061 [ms] (mean, across all concurrent requests)
Transfer rate:          262.66 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0      14
Processing:     0    0   2.5      0     636
Waiting:        0    0   2.4      0     636
Total:          0    0   2.5      0     636

Percentage of the requests served within a certain time (ms)
  50%      0
  66%      1
  75%      1
  80%      1
  90%      1
  95%      2
  98%      6
  99%     11
 100%    636 (longest request)
-------------------------------------------------------------
-------------------------------------------------------------




Tomcat Hybrid 5.5.9 
-------------------------------------------------------------
-------------------------------------------------------------
Document Path:          /tomcat.gif
Document Length:        1934 bytes

Concurrency Level:      20
Time taken for tests:   282.264843 seconds
Complete requests:      2000000
Failed requests:        0
Write errors:           0
Keep-Alive requests:    2000000
Total transferred:      33032704 bytes
HTML transferred:       -426967296 bytes
Requests per second:    7085.54 [#/sec] (mean)
Time per request:       2.823 [ms] (mean) 
Time per request:       0.141 [ms] (mean, across all concurrent requests)
Transfer rate:          114.28 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       1
Processing:     0    2   1.7      2      24
Waiting:        0    2   1.7      2      24
Total:          0    2   1.7      2      24

Percentage of the requests served within a certain time (ms)
  50%      2
  66%      3
  75%      4
  80%      4
  90%      5
  95%      5
  98%      6
  99%      6 
 100%     24 (longest request)
-------------------------------------------------------------
-------------------------------------------------------------




Re: Hybrid (NIO+Multithread, SSL enabled) architecture for Coyote

Posted by Remy Maucherat <re...@apache.org>.
Vicenç Beltran wrote:
> Hi, 
> 
> attached you'll find a patch that changes the coyote multithreading
> model to a "hybrid" threading model (NIO+Mulithread). It's fully
> compatible with the existing Catalina code and is SSL enabled.
> 
> The Hybrid model breaks the limitation of one thread per connection,
> thus you can have a higher number of concurrent users with a lower
> number of threads.
> NIO selectors are utilized to detect when a user connection becomes
> active ( i.e. there is a user http request available to be read), and
> then, one thread processes the connection as usual, but without blocking
> on the read() operation because we know that there is one available
> request.
> 
> The Hybrid model eliminates the need to close inactive connections
> (especially important under high load or SSL load) and reduces the
> number of necessary threads.
> 
> The patch will be also downloadable  in short from
> http://www.bsc.es/edragon/.  Next week I will make available a
> performance comparison between Tomcat 5.5.9 and the modified Tomcat
> (Static content, Dynamic content, Secure Dynamic Content and scalability
> on SMP machines). I'm testing it with RUBiS, Surge and httperf.
> 
> Now, I am working on the admission control mechanism because it should
> be improved. (The number of threads doesn't limit the number of
> concurrent connections so we need to limit it in some way).
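
(A minimal sketch of one possible admission control mechanism, as a hypothetical Semaphore gate around accepted connections; this is not code from the patch:)

import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.Semaphore;

final class AdmissionControlledAcceptor {
    // Cap the number of admitted connections, since the thread count
    // no longer bounds them in the hybrid model.
    private final Semaphore permits = new Semaphore(100);

    void acceptLoop(ServerSocket server) throws Exception {
        while (true) {
            permits.acquire();              // stop accepting once the cap is reached
            Socket socket = server.accept();
            handOffToSelector(socket);
        }
    }

    // Must be called when a connection is closed, returning its permit.
    void connectionClosed() {
        permits.release();
    }

    private void handOffToSelector(Socket socket) {
        // registration with the NIO selector would happen here
    }
}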

I think this demonstrates the problem with trying to do stuff on your 
own, not looking at development activity or communicating with anyone, 
and then dumping a big patch (which I find quite dirty; as is, as Mladen 
just posted, it has zero chance of being committed) on unsuspecting 
developers. It's a bit of a caricature of the phenomenon, actually ;)

Rémy



Re: Hybrid (NIO+Multithread, SSL enabled) architecture for Coyote

Posted by Remy Maucherat <re...@apache.org>.
Remy Maucherat wrote:
> Remy Maucherat wrote:
> 
>>> I've repeated the tests on the hybrid architecture using the AB.
>>> You can find them attached to this mail. I've run the AB with several 
>>> concurrency levels, ranging from 20 to 10000. You can see all the
>>> results in a plot.
> 
> Here are the results.

Only spam goes through on the ASF lists. Legitimate stuff definitely 
does not :(

Please post the results in text form.

Rémy



Re: Hybrid (NIO+Multithread, SSL enabled) architecture for Coyote

Posted by Remy Maucherat <re...@apache.org>.
Remy Maucherat wrote:
>> I've repeated the tests on the hybrid architecture using the AB.
>> You can find them attached to this mail. I've run the AB with several 
>> concurrency levels, ranging from 20 to 10000. You can see all the
>> results in a plot.

Here are the results.

Rémy


Re: Hybrid (NIO+Multithread, SSL enabled) architecture for Coyote

Posted by Remy Maucherat <re...@apache.org>.
Vicenc Beltran Querol wrote:
> Hi,
> 
> I've repeated the tests on the hybrid architecture using the AB.
> You can find them attached to this mail. I've run the AB with several 
> concurrency levels, ranging from 20 to 10000. You can see all the
> results in a plot.

-c 20 -k is basically the only thing I am interested in. This is not a 
realistic test, it just measures the raw performance rather than the 
scalability.

About your previous bench results: obviously the performance of the 
regular HTTP connector is going to suck once the number of connections 
exceeds maxThreads. Like most threaded servers, it scales by increasing 
the number of threads, and I believe it will perform relatively well (at 
the expense of resources).

>>Running a test with ab (ab -k -c 20 -n 20000 
>>http://host:8080/tomcat.gif) would take 30s, and would make comparisons 
>>easy (basically, I actually know what the tests do ...), and will be an 
>>actual measurement of throughput.
> 
> I've been studying the behavior of the AB and I have several doubts 
> about the significance of the results when trying to measure the throughput
> of a server. In my opinion, the AB is a great way to test the impact of
> architectural modifications on the internal latency of the Tomcat execution
> pipeline, but not a deterministic way to compare the throughput of two servers.
> In the following paragraphs I try to justify this idea (hopefully in a
> comprehensible way) :)

Yes, you need to use ab to test either on localhost or using a gigabit 
network.

> The first thing that makes me doubt the reliability of the obtained results
> is that this benchmark produces a different workload intensity for each tested server.
> I mean that, given a number of simulated concurrent clients, the AB produces a higher
> number of requests the lower the response time of the server is (a new request is issued
> when the previous one is completed). This behavior will always favour an architecture
> with lower internal latencies, even when it manages concurrency worse. This is the case of
> the Tomcat multithreaded architecture. Other architectures, for instance the Hybrid or any
> other using non-blocking operations with readiness selectors, will always obtain 
> worse results for low loads (remember that the select operation introduces an internal
> latency of 15-30ms from the moment data is ready in a channel, with the purpose of getting more
> channels ready during that period). 
> When the simulated number of concurrent clients is increased (and especially
> when the number of threads in the pool is lower than the number of emulated
> clients), the multithreaded architecture starts suffering. You can check
> the plots for the throughput, number of keep-alive requests, errors or connect
> time to form your own opinion.

Well, that's precisely the reason why we never used non-blocking IO in 
the past :)

> In conclusion, it must be taken into account that using this benchmark to compare
> the throughput of several architectural proposals can lead to wrong conclusions,
> especially when WANs (instead of fast LANs) are used for the evaluation.

Yes.

> This reasoning indicates that this test is better suited to comparing the response
> time of two server architectures than to evaluating their performance, because
> the network latency (between the server and the client) can bias the obtained results.
> 
> Finally, I miss a "think-time" among the configuration parameters of the AB. Its
> absence reduces the "realism" of the test and makes it impossible to test the performance 
> of the server in terms of "user sessions" instead of individual requests.

Again, I am not interested here in a real-world test like you would do 
on your server when putting it in production, where you want to see 
if your app reaches its performance targets, but in a measurement of raw 
performance.

> PS: I'm very interested in your (I mean the community's) opinion on my reasoning
> about the adequate use of the AB for throughput comparisons...

I don't see any results in your email, BTW.

Rémy



Re: Hybrid (NIO+Multithread, SSL enabled) architecture for Coyote

Posted by Vicenc Beltran Querol <vb...@ac.upc.edu>.
Hi,

I've repeated the tests on the hybrid architecture using the AB.
You can find them attached to this mail. I've run the AB with several 
concurrency levels, ranging from 20 to 10000. You can see all the
results in a plot.


> Running a test with ab (ab -k -c 20 -n 20000 
> http://host:8080/tomcat.gif) would take 30s, and would make comparisons 
> easy (basically, I actually know what the tests do ...), and will be an 
> actual measurement of throughput.


I've been studying the behavior of the AB and I have several doubts 
about the significance of the results when trying to measure the throughput
of a server. In my opinion, the AB is a great way to test the impact of
architectural modifications on the internal latency of the Tomcat execution
pipeline, but not a deterministic way to compare the throughput of two servers.
In the following paragraphs I try to justify this idea (hopefully in a
comprehensible way) :)


The first thing that makes me doubt the reliability of the obtained results
is that this benchmark produces a different workload intensity for each tested server.
I mean that, given a number of simulated concurrent clients, the AB produces a higher
number of requests the lower the response time of the server is (a new request is issued
when the previous one is completed). This behavior will always favour an architecture
with lower internal latencies, even when it manages concurrency worse. This is the case of
the Tomcat multithreaded architecture. Other architectures, for instance the Hybrid or any
other using non-blocking operations with readiness selectors, will always obtain 
worse results for low loads (remember that the select operation introduces an internal
latency of 15-30ms from the moment data is ready in a channel, with the purpose of getting more
channels ready during that period). 
When the simulated number of concurrent clients is increased (and especially
when the number of threads in the pool is lower than the number of emulated
clients), the multithreaded architecture starts suffering. You can check
the plots for the throughput, number of keep-alive requests, errors or connect
time to form your own opinion.


In conclusion, it must be taken into account that using this benchmark to compare
the throughput of several architectural proposals can lead to wrong conclusions,
especially when WANs (instead of fast LANs) are used for the evaluation.

Let me explain with a simple example, assuming that the AB simulates 10 concurrent
clients...

Let's suppose two architectures, A and B:

A -> Internal service time for the tomcat.gif file: 1ms
B -> Internal service time for the tomcat.gif file: 5ms


And let's suppose two different scenarios:

First, assuming a zero network latency... (in practice, a Gbit LAN)
 - The observed throughput for A will be 10000 replies/sec (10*(1/(1*10^-3)))
 - The observed throughput for B will be  2000 replies/sec (10*(1/(5*10^-3)))
 - The speedup observed between A and B is 5.


Later, assuming a 200 ms network latency... (in practice, a WAN)
 - The observed throughput for A will be  49.75 replies/sec (10*(1/(1*10^-3 + 200*10^-3)))
 - The observed throughput for B will be  48.78 replies/sec (10*(1/(5*10^-3 + 200*10^-3)))
 - The speedup observed between A and B is 1.019.
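
(A tiny sketch of this closed-loop model, reproducing the numbers above:)

// Closed-loop throughput model: each of the N clients issues a new request
// only when the previous reply arrives, so
//   throughput = N / (serviceTime + networkLatency)
public class ClosedLoopModel {
    static double throughput(int clients, double serviceMs, double latencyMs) {
        return clients / ((serviceMs + latencyMs) / 1000.0); // replies/sec
    }

    public static void main(String[] args) {
        // LAN (no latency): A = 10000, B = 2000 replies/sec, speedup 5
        System.out.println(throughput(10, 1, 0) + " vs " + throughput(10, 5, 0));
        // WAN (200 ms latency): A = 49.75, B = 48.78 replies/sec, speedup ~1.019
        System.out.println(throughput(10, 1, 200) + " vs " + throughput(10, 5, 200));
    }
}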


This reasoning indicates that this test is better suited to comparing the response
time of two server architectures than to evaluating their performance, because
the network latency (between the server and the client) can bias the obtained results.


Finally, I miss a "think-time" among the configuration parameters of the AB. Its
absence reduces the "realism" of the test and makes it impossible to test the performance 
of the server in terms of "user sessions" instead of individual requests.

> 
> Note: your patch still seems bad, as the file add is represented as a 
> diff. This is likely not going to be patchable.

I'm very sorry, but I misunderstood you again... :(

Again... should I send the modified files "as is" instead of a diff?


Thanks for the comments,

Vicenç

PS: I'm very interested in your (I mean the community's) opinion on my reasoning
about the adequate use of the AB for throughput comparisons...




> 
> Rémy
> 


Re: Hybrid (NIO+Multithread, SSL enabled) architecture for Coyote

Posted by Remy Maucherat <re...@apache.org>.
Vicenc Beltran Querol wrote:
> I've rebuilt the patch following your indications (I hope). You can
> find it at http://www.bsc.es/edragon/pdf/tomcat-5.5.9-NIO-patch (now it is bigger 
> so it can't be attached)
> 
> The benchmarking results I've obtained for a static content workload can be downloaded 
> from http://www.bsc.es/edragon/pdf/TestSurge.tgz
> 
> As a summary, the throughput improvement I've observed is about 25%, without
> hurting the response time. You can see all the results (original, patched and
> comparison) in the above file.
> 
> 
> I'm finishing the Dynamic content (plain and SSL) experiments, and I'll
> post them as soon as possible.

Great, but as I've posted earlier, these benchmark results are not 
useful to us (maybe for your research they are, of course).

Running a test with ab (ab -k -c 20 -n 20000 
http://host:8080/tomcat.gif) would take 30s, and would make comparisons 
easy (basically, I actually know what the tests do ...), and will be an 
actual measurement of throughput.

Note: your patch still seems bad, as the file add is represented as a 
diff. This is likely not going to be patchable.

Rémy



Re: Hybrid (NIO+Multithread, SSL enabled) architecture for Coyote

Posted by Vicenc Beltran Querol <vb...@ac.upc.edu>.
On Fri, May 20, 2005 at 12:05:51PM +0200, Mladen Turk wrote:
> Vicenç Beltran wrote:
> >Hi, 
> >
> >attached you'll find a patch that changes the coyote multithreading
> >model to a "hybrid" threading model (NIO+Mulithread). It's fully
> >compatible with the existing Catalina code and is SSL enabled.
> >
> >diff -uprN
> >jakarta-tomcat-5.5.9-src/jakarta-tomcat-connectors/http11/src/java/org/apache/coyote/http11/Http11Processor.java
> 
> Can't you simply make two new files,
> Http11NioProcessor and Http11NioProtocol?
> 
> Changing the default implementation that Tomcat uses will never
> be committed (at least I'll vote -1 on that).
> 
> Simply create two files that can be used instead of the current
> implementation, in the fashion we did for Http11AprProtocol.
> 
> 
> Regards,
> Mladen.


Hi,

I've rebuilt the patch following your indications (I hope). You can
find it at http://www.bsc.es/edragon/pdf/tomcat-5.5.9-NIO-patch (now it is bigger 
so it can't be attached)

The benchmarking results I've obtained for a static content workload can be downloaded 
from http://www.bsc.es/edragon/pdf/TestSurge.tgz

As a summary, the throughput improvement I've observed is about 25%, without
hurting the response time. You can see all the results (original, patched and
comparison) in the above file.


I'm finishing the Dynamic content (plain and SSL) experiments, and I'll
post them as soon as possible.

Best,
Vicenç







Re: Hybrid (NIO+Multithread, SSL enabled) architecture for Coyote

Posted by Bill Barker <wb...@wilshire.com>.
----- Original Message ----- 
From: "Jeanfrancois Arcand" <jf...@apache.org>
To: "Tomcat Developers List" <to...@jakarta.apache.org>
Sent: Friday, May 20, 2005 6:56 AM
Subject: Re: Hybrid (NIO+Multithread, SSL enabled) architecture for Coyote


>
>
> Mladen Turk wrote:
>> Vicenc Beltran Querol wrote:
>>
>>> Hi guys,
>>>
>>> I'm not trying to be a Tomcat developer. I'm working on my PhD on web 
>>> performance and just decided to share with you the experimental code 
>>> I've developed after studying the performance obtained ;).
>>>
>>
>> I've done some serious testing with HTTP servers and NIO.
>> The results were always bad for NIO.
>> Blocking I/O is at minimum 25% faster than NIO.
>
> Faster in what? Throughput and/or scalability?
>
> I disagree ;-) I would like to see your implementation, because from what 
> I'm seeing/measuring, it is completely the inverse. I would be interested 
> to see how you did implement your NIO connector. The problem

> with HTTP is not NIO, but the strategy to use for predicting if you have 
> read all the bytes or not. Failing to implement a good strategy will end 
> up parsing the bytes many times, calling the Selector.wakeup() too often, 
> thus poor performance. The way you register your SelectionKey is also very 
> important.


Yeah, the speed improvement with NIO is the only thing that makes 
ChannelNioSocket not a total PoC.  It's really depressing that any JVM 
vendor would allow such a huge performance difference between 
Socket.getOutputStream().write and SocketChannel.write.


>
> Also, blocking IO requires 1 thread per connection, which doesn't scale 
> very well. That's why I think the new APR connector is interesting, since 
> it fixes that problem. But even if, with APR, you worked around the JNI 
> bottleneck by using direct byte buffers, I suspect a pure NIO 
> implementation will perform better than APR (except for static resources) 
> just because of the C->Java overhead. But I don't yet have numbers to 
> show...come to my session at JavaOne, I will :-)
>
>>
>> You may try that simply by using the demo HTTP servers
>> (Blocking/Blocking Pool/NIO single thread/NIO multiple threads)
>> that come with the new Java 6 (see java.net for sources).
>
> Right. This is actually a good example not to follow ;-).
>
> BTW the big patch uses blocking NIO, which may improve scalability, but 
> will impact throughput. The patch should be improved to use NIO 
> non-blocking. And then we can compare ;-)
>
> -- Jeanfrancois
>
>>
>>
>> OTOH, I'm sure you must have some performance results :)
>> Simply run the 'ab -n 100000 -c 100 -k host:8080/tomcat.gif'
>> with your patch and standard Tomcat 5.5.9.
>>
>>
>>> Anyway, it's OK. I'll work on the new patch and resubmit it.
>>
>>
>> Great.
>>
>> Regards,
>> Mladen.
>>






Re: Hybrid (NIO+Multithread, SSL enabled) architecture for Coyote

Posted by Jeanfrancois Arcand <jf...@apache.org>.

Mladen Turk wrote:
> Jeanfrancois Arcand wrote:
> 
>>> I've done some serious testing with HTTP servers and NIO.
>>> The results were always bad for NIO.
>>> Blocking I/O is at minimum 25% faster than NIO.
>>
>>
>> Faster in what? Throughput and/or scalability?
>>
>> I disagree ;-) I would like to see your implementation, because from 
>> what I'm seeing/measuring, it is completely the inverse. I would be 
>> interested to see how you did implement your NIO connector.
> 
> 
> I do not understand why people are so obsessed with NIO, really.

I'm not obsessed. I just want to see a real Tomcat implementation before 
saying I'm obsessed :-). APR looks really promising, but only because 
I can benchmark it and see real numbers :-)

> Like I said, there IS an example from Sun that tries all the strategies
> you can imagine, even using mapped byte buffers, single/multiple
> threads etc...
> 
> Feel free to test by yourself if you don't believe me.
> Download the Mustang sources from
> http://www.java.net/download/jdk6/
> You have a complete stack of 5 web servers inside:
> j2se/src/share/sample/nio/server

Yes, I already saw that. I'm really not interested in it....

> 
> Also read a nice article:
> http://www.usenix.org/events/hotos03/tech/full_papers/vonbehren/vonbehren_html/index.html 
> 
> 
> Solaris and Linux 2.6 threading support is much more advanced than it
> was in the days when the event architecture was 'pushed'.

Right :-) Still, I will compare a pure non-blocking NIO implementation in 
Tomcat vs what we have right now, to have a clear picture. I'm just 
unable to assume reality without seeing it :-)

Thanks

-- Jeanfrancois


> 
> Regards,
> Mladen.
> 



Re: Hybrid (NIO+Multithread, SSL enabled) architecture for Coyote

Posted by Mladen Turk <mt...@apache.org>.
Jeanfrancois Arcand wrote:
>> I've done some serious testing with HTTP servers and NIO.
>> The results were always bad for NIO.
>> Blocking I/O is at minimum 25% faster than NIO.
> 
> Faster in what? Throughput and/or scalability?
> 
> I disagree ;-) I would like to see your implementation, because from 
> what I'm seeing/measuring, it is completely the inverse. I would be 
> interested to see how you did implement your NIO connector.

I do not understand why people are so obsessed with NIO, really.
Like I said, there IS an example from Sun that tries all the strategies
you can imagine, even using mapped byte buffers, single/multiple
threads etc...

Feel free to test by yourself if you don't believe me.
Download the Mustang sources from
http://www.java.net/download/jdk6/
You have a complete stack of 5 web servers inside:
j2se/src/share/sample/nio/server

Also read a nice article:
http://www.usenix.org/events/hotos03/tech/full_papers/vonbehren/vonbehren_html/index.html

Solaris and Linux 2.6 threading support is much more advanced than it
was in the days when the event architecture was 'pushed'.

Regards,
Mladen.



Re: Hybrid (NIO+Multithread, SSL enabled) architecture for Coyote

Posted by Peter Lin <wo...@gmail.com>.
I'm not a committer, but I think the evidence proves that native sockets +
JNI is the way to go. To my knowledge, WebLogic, WebSphere and Resin
all use native sockets.  Having a pure Java approach sounds nice and
all, but in the edge cases where high connection concurrency is needed,
I'd much rather go with native + JNI.

my 1/10th of a cent worth.

peter


On 5/20/05, Remy Maucherat <re...@apache.org> wrote:
> Jeanfrancois Arcand wrote:
> > I disagree ;-) I would like to see your implementation, because from
> > what I'm seeing/measuring, it is completely the inverse. I would be
> > interested to see how you did implement your NIO connector. The problem
> > with HTTP is not NIO, but the strategy to use for predicting if you have
> > read all the bytes or not. Failing to implement a good strategy will
> > end up parsing the bytes many times, calling the Selector.wakeup() too
> > often, thus poor performance. The way you register your SelectionKey is
> > also very important.
> >
> > Also, blocking IO requires 1 thread per connection, which doesn't scale
> > very well. That's why I think the new APR connector is interesting,
> > since it fixes that problem. But even if, with APR, you worked around the
> > JNI bottleneck by using direct byte buffers, I suspect a pure NIO
> > implementation will perform better than APR (except for static resources)
> > just because of the C->Java overhead. But I don't yet have numbers to
> > show...come to my session at JavaOne, I will :-)
> 
> Sorry, but I agree with Mladen. There is simply no way a pure
> non-blocking NIO strategy is going to work. It is an attempt to apply
> recipes which work really well for certain types of tasks to other
> types of tasks. Translated, it would work spectacularly well for, say, a
> servlet like the default servlet and a small file (ie, small bufferable
> amount of data + no processing), but fail for a servlet which receives a
> lot of uploading or more generally has a long processing time. The main
> problem is that to keep contention on IO low, a low number of processors
> has to be used, which is not compatible with the second type of tasks.
> The only way to apply NIO to Tomcat is to use it in blocking mode, as in
> the submitted patch.
> 
> The only way to convince me your solution can work is to (also) write
> your own endpoint / protocol handler (this seems trendy these days for
> some reason, so I guess it's ok if everyone does it ;) ) so I can test
> it myself and see the results.
> 
> As APR matches the numbers of classic blocking IO in the (100%
> throughput oriented, and worst case; at least, it's the case which
> favors regular blocking IO the most) ab -k -c 20 localhost/tomcat.gif,
> it seems hard to beat.
> 
> <rant>
> BTW, about the NIO usage for optimizing JNI, I'm actually really mad
> about Sun. Why attempt to make any JNI calls "safe" and make performance
> suck in the first place, when with native code usage it is trivial to
> segfault the whole process anyway (example: feed APR a bad address) ?
> This really makes no sense to me, and seems simply a plot to try to
> force people to write 100% Java code.
> All that complexity and crappy performance for *nothing* (except helping
> .not and mono, of course) ...
> </rant>
> 
> >> You may try that simply by using the demo HTTP servers
> >> (Blocking/Blocking Pool/NIO single thread/NIO multiple threads)
> >> that come with the new Java 6 (see java.net for sources).
> >
> > Right. This is actually a good example not to follow ;-).
> >
> > BTW the big patch uses blocking NIO, which may improve scalability, but
> > will impact throughput. The patch should be improved to use NIO
> > non-blocking. And then we can compare ;-)
> 
> You're going to have to prove your point in a big way ;) There were
> articles on the subject advocating the same thing, but if you looked a
> little into it, you could just see how to make the whole thing break
> really easily.
> 
> Rémy
> 



Re: Hybrid (NIO+Multithread, SSL enabled) architecture for Coyote

Posted by Remy Maucherat <re...@apache.org>.
Jeanfrancois Arcand wrote:
> Well, the strategy you use is important. If you can predict the size of 
> the stream (by, let's say, discovering the content-length), you can make 
> the uploading task as fast as with blocking IO (OK, maybe a little slower 
> since you parse the header, and the channel may not read it fully in 
> its first Selector.select()). But for GET operations, it shouldn't be a 
> problem.

That's not good, we have to provide a general solution. Of course, in 
some cases, it can work out great. I assume also the way to get around 
it if you get the content-length is to buffer (actually, the only real 
solution is to buffer all uploads), so this simply moves the scalability 
problem somewhere else.

>> The only way to convince me your solution can work is to (also) write 
>> your own endpoint / protocol handler (this seems trendy these days for 
>> some reason, so I guess it's ok if everyone does it ;) ) so I can test 
>> it myself and see the results.
> 
> Right, I agree this is the way to go ;-). Replacing the current thread 
> pool doesn't hurt either (but maybe it's only me... this pool is way too 
> complex for what it does).

I did revert back to using the old Tomcat 4.0 thread pool for the APR 
endpoint, because that pool style is more predictable (when something 
fails during accept) and configurable (the priority of the accept thread 
can be configured independently).

>> As APR matches the numbers of classic blocking IO in the (100% 
>> throughput oriented, and worst case; at least, it's the case which 
>> favors regular blocking IO the most) ab -k -c 20 localhost/tomcat.gif, 
>> it seems hard to beat.
> 
> Well, I agree but I'm not really interested in static resources. I'm 

Right, nobody is ever interested, but in the end this shuts out Java 
from most of the web server market.

The default servlet is merely an example of a servlet which does almost 
no processing, and returns a little data. It's not meant to be a true 
static file test (for that kind of test, there will be no filesystem 
access). You can also write a servlet which would output a 1KB byte 
array, it's the same.
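
(For instance, a minimal sketch of such a servlet:)

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// A servlet that does almost no processing and returns a small,
// pre-built response: equivalent to the default servlet + tomcat.gif
// case as a raw-throughput test.
public class OneKBServlet extends HttpServlet {
    private static final byte[] PAYLOAD = new byte[1024]; // 1KB of zeros

    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        resp.setContentType("application/octet-stream");
        resp.setContentLength(PAYLOAD.length);
        resp.getOutputStream().write(PAYLOAD);
    }
}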

> more interested in benchmarking a real JSP/Servlet application, one that 
> contains both static and dynamic resources. I think APR will always be 
> faster than non-blocking NIO for static resources. But non-blocking will 
> be faster than the current Coyote/http11 connector, by applying the 
> same trick you did for the DefaultServlet sendFile implementation, and by 
> using MappedByteBuffer to load the resource in FileDirContext.

No, don't even try to optimize this for now, it's not the thing I'm 
interested in (sendfile will only be used in the default APR config if 
the file is larger than 48KB).

> But all I'm saying needs numbers, which I will have in June :-)
> 
>> <rant>
>> BTW, about the NIO usage for optimizing JNI, I'm actually really mad 
>> about Sun. 
> 
> Haha. Have you ever been happy with Sun :-) Probably only the day you 
> hired me (LOL) and asked me to implement XML Schema support (beurk!) :-)
> 
>> Why attempt to make any JNI calls "safe" and make performance
> 
>> suck in the first place, when with native code usage it is trivial to 
>> segfault the whole process anyway (example: feed APR a bad address) ? 
> 
> I agree. But it seems the "safe" lobbyists were stronger than the others...

I guess :) Obviously, I didn't do any JNI until now, so I didn't 
actually care earlier.

> I agree...but the strategy used just sucks. Having a pipe between the 
> Read Thread and the Processor Thread might look good on paper, but 
> that's clearly not a good strategy. Also the NIO implementation in Jetty 
> is clearly bad. The EmberIO framework looked promising, but it seems little 
> activity happened after the big announcement....

I see only two strategies personally ;) I suppose it shows the API is 
probably too complex. Personally, the main strategy I used with APR is 
to minimize the number of calls, and then optimize the passing around of 
byte arrays, but neither of these choices is caused by APR itself.

> Still, if APR is faster, then I will say it is. But to convince me, I 
> need to compare real implementations.

In addition to the obviously interesting features 
(epoll/sendfile/openssl), there are all sorts of other useful uses for 
APR, as I mentioned earlier. Getting it inside Tomcat makes it a Java 
version of httpd in terms of core capabilities, and will likely open all 
sorts of features and possibilities in a very simple way.

So if someone can do something which scales well for pure Java users, 
then it's great to have it, but it's only a part of the equation (and 
it's likely not going to remove the need for APR).

Rémy



Re: Hybrid (NIO+Multithread, SSL enabled) architecture for Coyote

Posted by Jeanfrancois Arcand <jf...@apache.org>.

Remy Maucherat wrote:
> Jeanfrancois Arcand wrote:
> 
>> I disagree ;-) I would like to see your implementation, because from 
>> what I'm seeing/measuring, it is completely the inverse. I would be 
>> interested to see how you did implement your NIO connector. The 
>> problem with HTTP is not NIO, but the strategy to use for predicting 
>> if you have read all the bytes or not. Failing to implement a good 
>> strategy will end up parsing the bytes many times, calling the 
>> Selector.wakeup() too often, thus poor performance. The way you 
>> register your SelectionKey is also very important.
>>
>> Also, blocking IO requires 1 thread per connection, which doesn't 
>> scale very well. That's why I think the new APR connector is 
>> interesting, since it fixes that problem. But even if, with APR, you 
>> worked around the JNI bottleneck by using direct byte buffers, I suspect a 
>> pure NIO implementation will perform better than APR (except for 
>> static resources) just because of the C->Java overhead. But I don't 
>> yet have numbers to show...come to my session at JavaOne, I will :-)
> 
> 
> Sorry, but I agree with Mladen. There is simply no way a pure 
> non-blocking NIO strategy is going to work. It is an attempt to apply 
> recipes which work really well for certain types of tasks to other 
> types of tasks. Translated, it would work spectacularly well for, say, a 
> servlet like the default servlet and a small file (ie, small bufferable 
> amount of data + no processing), but fail for a servlet which receives a 
> lot of uploading or more generally has a long processing time. The main 
> problem is that to keep contention on IO low, a low number of processors 
> has to be used, which is not compatible with the second type of tasks. 
> The only way to apply NIO to Tomcat is to use it in blocking mode, as in 
> the submitted patch.

Well, the strategy you use is important. If you can predict the size of 
the stream (by, let's say, discovering the content-length), you can make 
the uploading task as fast as with blocking IO (OK, maybe a little slower 
since you parse the header, and the channel may not read it fully in 
its first Selector.select()). But for GET operations, it shouldn't be a 
problem.
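
(A minimal sketch of that prediction strategy, assuming the Content-Length header has already been parsed into contentLength:)

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;

final class BodyReader {
    // Keep draining a ready, non-blocking channel until contentLength
    // bytes have arrived; if the body is not complete yet, go back to
    // the selector instead of spinning.
    static boolean readBody(SocketChannel channel, ByteBuffer body,
                            int contentLength) throws IOException {
        while (body.position() < contentLength) {
            int n = channel.read(body);   // non-blocking read
            if (n == -1) {
                throw new IOException("connection closed mid-body");
            }
            if (n == 0) {
                return false;             // nothing more yet: re-register for OP_READ
            }
        }
        return true;                      // body complete: hand off to a worker
    }
}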

> 
> The only way to convince me your solution can work is to (also) write 
> your own endpoint / protocol handler (this seems trendy these days for 
> some reason, so I guess it's ok if everyone does it ;) ) so I can test 
> it myself and see the results.

Right, I agree this is the way to go ;-). Replacing the current thread 
pool doesn't hurt either (but maybe it's only me... this pool is way too 
complex for what it does).

> 
> As APR matches the numbers of classic blocking IO in the (100% 
> throughput oriented, and worst case; at least, it's the case which 
> favors regular blocking IO the most) ab -k -c 20 localhost/tomcat.gif, 
> it seems hard to beat.

Well, I agree but I'm not really interested in static resources. I'm 
more interested in benchmarking a real JSP/Servlet application, one that 
contains both static and dynamic resources. I think APR will always be 
faster than non-blocking NIO for static resources. But non-blocking will 
be faster than the current Coyote/http11 connector, by applying the 
same trick you did for the DefaultServlet sendFile implementation, and by 
using MappedByteBuffer to load the resource in FileDirContext.
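
(A minimal sketch of the MappedByteBuffer idea; hypothetical, not the actual FileDirContext code:)

import java.io.FileInputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.channels.SocketChannel;

final class MappedResource {
    // Map a static file once; the OS page cache backs the mapping, so
    // serving it is a copy straight from the mapping to the socket.
    static MappedByteBuffer map(String path) throws IOException {
        FileChannel fc = new FileInputStream(path).getChannel();
        try {
            return fc.map(FileChannel.MapMode.READ_ONLY, 0, fc.size());
        } finally {
            fc.close();                         // the mapping stays valid
        }
    }

    static void send(SocketChannel socket, MappedByteBuffer resource)
            throws IOException {
        ByteBuffer view = resource.duplicate(); // independent position per request
        while (view.hasRemaining()) {
            socket.write(view);                 // blocking-mode write for simplicity
        }
    }
}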

But all I'm saying needs numbers, which I will have in June :-)

> 
> <rant>
> BTW, about the NIO usage for optimizing JNI, I'm actually really mad 
> about Sun. 

Haha. Have you ever been happy with Sun :-) Probably only the day you 
hired me (LOL) and asked me to implement XML Schema support (beurk!) :-)

> Why attempt to make any JNI calls "safe" and make performance
> suck in the first place, when with native code usage it is trivial to 
> segfault the whole process anyway (example: feed APR a bad address) ? 

I agree. But it seems the "safe" lobbyists were stronger than the others...

> This really makes no sense to me, and seems simply a plot to try to 
> force people to write 100% Java code.
> All that complexity and crappy performance for *nothing* (except helping 
> .not and mono, of course) ...
> </rant>
> 
>>> You may try that simply by using the demo HTTP servers
>>> (Blocking/Blocking Pool/NIO single thread/NIO multiple threads)
>>> that come with the new Java 6 (see java.net for sources).
>>
>>
>> Right. This is actually a good example not to follow ;-).
>>
>> BTW the big patch uses blocking NIO, which may improve scalability, but 
>> will impact throughput. The patch should be improved to use NIO 
>> non-blocking. And then we can compare ;-)
> 
> 
> You're going to have to prove your point in a big way ;) 

Yes, this is exactly what I will try to do at JavaOne. I'm waiting for 
APR to stabilize before predicting anything. If APR is faster, then 
good. But I think we need to compare a real NIO implementation, not a 
hack inside the current http11 code. And I still think non-blocking is 
better.

> There were
> articles on the subject advocating the same thing, but if you looked a 
> little into it, you could just see how to make the whole thing break 
> really easily.

I agree...but the strategy used just sucks. Having a pipe between the 
Read Thread and the Processor Thread might look good on paper, but 
that's clearly not a good strategy. Also the NIO implementation in Jetty 
is clearly bad. The EmberIO framework looked promising, but it seems little 
activity happened after the big announcement....

Still, if APR is faster, then I will say it is. But to convince me, I 
need to compare real implementations.

Thanks

-- Jeanfrancois

> 
> Rémy
> 



Re: Hybrid (NIO+Multithread, SSL enabled) architecture for Coyote

Posted by Remy Maucherat <re...@apache.org>.
Jeanfrancois Arcand wrote:
> I disagree ;-) I would like to see your implementation, because from 
> what I'm seeing/measuring, it is completely the inverse. I would be 
> interested to see how you did implement your NIO connector. The problem 
> with HTTP is not NIO, but the strategy to use for predicting if you have 
> read all the bytes or not. Failing to implement a good strategy will 
> end up parsing the bytes many times, calling the Selector.wakeup() too 
> often, thus poor performance. The way you register your SelectionKey is 
> also very important.
> 
> Also, blocking IO requires 1 thread per connection, which doesn't scale 
> very well. That's why I think the new APR connector is interesting, 
> since it fixes that problem. But even if, with APR, you worked around the 
> JNI bottleneck by using direct byte buffers, I suspect a pure NIO 
> implementation will perform better than APR (except for static resources) 
> just because of the C->Java overhead. But I don't yet have numbers to 
> show...come to my session at JavaOne, I will :-)

Sorry, but I agree with Mladen. There is simply no way a pure 
non-blocking NIO strategy is going to work. It is an attempt to apply 
recipes which work really well for certain types of tasks to other 
types of tasks. Translated, it would work spectacularly well for, say, a 
servlet like the default servlet and a small file (ie, small bufferable 
amount of data + no processing), but fail for a servlet which receives a 
lot of uploading or more generally has a long processing time. The main 
problem is that to keep contention on IO low, a low number of processors 
has to be used, which is not compatible with the second type of tasks. 
The only way to apply NIO to Tomcat is to use it in blocking mode, as in 
the submitted patch.

The only way to convince me your solution can work is to (also) write 
your own endpoint / protocol handler (this seems trendy these days for 
some reason, so I guess it's ok if everyone does it ;) ) so I can test 
it myself and see the results.

As APR matches the numbers of classic blocking IO in the (100% 
throughput oriented, and worst case; at least, it's the case which 
favors regular blocking IO the most) ab -k -c 20 localhost/tomcat.gif, 
it seems hard to beat.

<rant>
BTW, about the NIO usage for optimizing JNI, I'm actually really mad 
at Sun. Why attempt to make every JNI call "safe", making performance 
suck in the first place, when with native code it is trivial to 
segfault the whole process anyway (example: feed APR a bad address)? 
This really makes no sense to me, and seems simply a plot to try to 
force people to write 100% Java code.
All that complexity and crappy performance for *nothing* (except 
helping .not and mono, of course) ...
</rant>

>> You may try that simply by using the demo HTTP servers
>> (Blocking / Blocking Pool / NIO single thread / NIO multiple threads)
>> that come with the new Java 6 (see java.net for sources).
> 
> Right. This is actually a good example not to follow ;-).
> 
> BTW the big patch uses blocking NIO, which may improve scalability, 
> but will impact throughput. The patch should be improved to use 
> non-blocking NIO. And then we can compare ;-)

You're going to have to prove your point in a big way ;) There were 
articles on the subject advocating the same thing, but if you looked a 
little into it, you could just see how to make the whole thing break 
really easily.

Rémy



Re: Hybrid (NIO+Multithread, SSL enabled) architecture for Coyote

Posted by Jeanfrancois Arcand <jf...@apache.org>.

Mladen Turk wrote:
> Vicenc Beltran Querol wrote:
> 
>> Hi guys,
>>
>> I'm not trying to be a Tomcat developer. I'm working on my PhD on 
>> web performance and just decided to share with you the experimental 
>> code I've developed, after studying the performance it obtains ;).
>>
> 
> I've done some serious testing with HTTP servers and NIO.
> The results were always bad for NIO.
> Blocking I/O is at minimum 25% faster than NIO.

Faster in what? Throughput and/or scalability?

I disagree ;-) I would like to see your implementation, because from 
what I'm seeing/measuring, it is completely the inverse. I would be 
interested to see how you implemented your NIO connector. The problem 
with HTTP is not NIO, but the strategy used to predict whether you 
have read all the bytes or not. Failing to implement a good strategy 
means you end up parsing the bytes many times and calling 
Selector.wakeup() too often, hence poor performance. The way you 
register your SelectionKey is also very important.
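
A hypothetical sketch of the kind of strategy meant here (invented 
names, not an excerpt from any actual connector): keep one 
accumulation buffer per connection and scan only the bytes that 
arrived since the last readiness notification, so earlier bytes are 
never re-parsed:

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;

/** Per-connection state: looks for the end of the HTTP header block. */
class RequestReader {

    private final ByteBuffer buf = ByteBuffer.allocate(8192);
    private int scanned = 0; // position the previous pass stopped at

    /** Returns true once CRLF CRLF has been seen. */
    boolean headersComplete(SocketChannel ch) throws IOException {
        if (ch.read(buf) < 0) { // non-blocking read, may add 0 bytes
            throw new IOException("connection closed by peer");
        }
        int limit = buf.position();
        // Resume where the last pass stopped instead of re-scanning
        // the whole buffer on every readiness notification.
        for (int i = Math.max(scanned, 3); i < limit; i++) {
            if (buf.get(i) == '\n' && buf.get(i - 1) == '\r'
                    && buf.get(i - 2) == '\n' && buf.get(i - 3) == '\r') {
                return true; // end of headers found
            }
        }
        scanned = limit;
        return false; // caller re-registers OP_READ and waits
    }
}

Getting this wrong is what produces the repeated parsing and spurious 
wakeup() calls described above.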

Also, blocking IO requires 1 thread per connection, which doesn't 
scale very well. That's why I think the new APR connector is 
interesting, since it fixes that problem. But even if, with APR, you 
worked around the JNI bottleneck by using direct byte buffers, I 
suspect a pure NIO implementation will perform better than APR 
(except for static resources), just because of the C->Java overhead. 
But I don't have numbers to show yet...come to my session at JavaOne, 
I will :-)
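
For readers unfamiliar with the direct-buffer point: a direct 
ByteBuffer is backed by native memory, so the native IO call 
underneath can work on it in place instead of copying between the Java 
heap and a temporary native buffer. A minimal illustration 
(hypothetical demo code, not from the APR connector):

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;

public class DirectBufferDemo {

    static void read(SocketChannel ch) throws IOException {
        // Heap buffer: the runtime may copy the data to/from a temporary
        // native buffer when the underlying native read() executes.
        ByteBuffer heap = ByteBuffer.allocate(8192);
        ch.read(heap);

        // Direct buffer: native memory, so the native IO call can fill it
        // in place - the JNI workaround mentioned above - at the cost of
        // more expensive allocation (so such buffers are usually pooled).
        ByteBuffer direct = ByteBuffer.allocateDirect(8192);
        ch.read(direct);
    }
}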

> 
> You may try that simply by using the demo HTTP servers
> (Blocking / Blocking Pool / NIO single thread / NIO multiple threads)
> that come with the new Java 6 (see java.net for sources).

Right. This is actually a good example not to follow ;-).

BTW the big patch uses blocking NIO, which may improve scalability, 
but will impact throughput. The patch should be improved to use 
non-blocking NIO. And then we can compare ;-)

-- Jeanfrancois

> 
> 
> OTOH, I'm sure you must have some performance results :)
> Simply run the 'ab -n 100000 -c 100 -k host:8080/tomcat.gif'
> with your patch and with standard Tomcat 5.5.9.
> 
> 
>> Anyway, it's OK. I'll work on the new patch and resubmit it.
> 
> 
> Great.
> 
> Regards,
> Mladen.
> 


Re: Hybrid (NIO+Multithread, SSL enabled) architecture for Coyote

Posted by Mladen Turk <mt...@apache.org>.
Vicenc Beltran Querol wrote:
> Hi guys,
> 
> I'm not trying to be a Tomcat developer. I'm working on my PhD on web performance and just decided to share with you the experimental code I've developed, after studying the performance it obtains ;).
>

I've done some serious testing with HTTP servers and NIO.
The results were always bad for NIO.
Blocking I/O is at minimum 25% faster than NIO.

You may try that simply by using the demo HTTP servers
(Blocking / Blocking Pool / NIO single thread / NIO multiple threads)
that come with the new Java 6 (see java.net for sources).
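
As a sketch of what the "Blocking Pool" variant classically looks like 
(an assumption about the demo's shape, not the actual java.net 
sources): one pooled thread is parked on each in-flight connection:

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class BlockingPoolServer {

    public static void main(String[] args) throws IOException {
        ExecutorService pool = Executors.newFixedThreadPool(100);
        ServerSocket server = new ServerSocket(8080);
        while (true) {
            final Socket socket = server.accept();
            pool.execute(new Runnable() {
                public void run() {
                    try {
                        BufferedReader in = new BufferedReader(
                                new InputStreamReader(socket.getInputStream()));
                        // Blocking reads: this thread is pinned to the
                        // connection until the request is fully received.
                        String line;
                        while ((line = in.readLine()) != null
                                && line.length() > 0) {
                            // skip request line and headers
                        }
                        OutputStream out = socket.getOutputStream();
                        out.write(("HTTP/1.0 200 OK\r\n"
                                + "Content-Length: 2\r\n\r\nOK")
                                .getBytes("US-ASCII"));
                        out.flush();
                    } catch (IOException ignored) {
                        // drop the connection on any IO error
                    } finally {
                        try { socket.close(); } catch (IOException ignored) {}
                    }
                }
            });
        }
    }
}

Each slow or idle client pins one of the 100 pool threads for its 
whole lifetime, which is exactly the one-thread-per-connection limit 
being debated in this thread.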


OTOH, I'm sure you must have some performance results :)
Simply run the 'ab -n 100000 -c 100 -k host:8080/tomcat.gif'
with your patch and with standard Tomcat 5.5.9.


> Anyway, it's OK. I'll work on the new patch and resubmit it. 
> 

Great.

Regards,
Mladen.



Re: Hybrid (NIO+Multithread, SSL enabled) architecture for Coyote

Posted by Vicenc Beltran Querol <vb...@ac.upc.edu>.
Hi guys,

I'm not trying to be a Tomcat developer. I'm working on my PhD on web performance and just decided to share with you the experimental code I've developed, after studying the performance it obtains ;).

Anyway, it's OK. I'll work on the new patch and resubmit it. 

Thanks for the comments,
Vicenç


On Fri, May 20, 2005 at 12:19:52PM +0200, Remy Maucherat wrote:
> Mladen Turk wrote:
> >Vicenç Beltran wrote:
> >
> >Can't you simply make two new files
> >Http11NioProcessor and Http11NioProtocol.
> 
> It definitely needs to be a (clean, meaning no blocks of code 
> commented out with /* */ in patch submissions ;) ) separate 
> implementation. Actually it will also need a separate NioEndpoint (I 
> would like it best if it were based on a structure similar to 
> AprEndpoint, rather than on the regular TcpEndpoint, as I find its 
> threadpool code inappropriate).
> 
> Whatever happens, I am not going to abandon APR as an optional 
> library, however, as it provides a lot of OS services in addition to 
> plain IO (OS-level monitoring, process manipulation and IPC, a 
> portable secure RNG - without which the TC session generator can't 
> be secure by default on Windows, etc).
> 
> Rémy
> 


Re: Hybrid (NIO+Multithread, SSL enabled) architecture for Coyote

Posted by Remy Maucherat <re...@apache.org>.
Mladen Turk wrote:
> Vicenç Beltran wrote:
> 
> Can't you simply make two new files
> Http11NioProcessor and Http11NioProtocol.

It definitely needs to be a (clean, meaning no blocks of code 
commented out with /* */ in patch submissions ;) ) separate 
implementation. Actually it will also need a separate NioEndpoint (I 
would like it best if it were based on a structure similar to 
AprEndpoint, rather than on the regular TcpEndpoint, as I find its 
threadpool code inappropriate).

Whatever happens, I am not going to abandon APR as an optional 
library, however, as it provides a lot of OS services in addition to 
plain IO (OS-level monitoring, process manipulation and IPC, a 
portable secure RNG - without which the TC session generator can't be 
secure by default on Windows, etc).

Rémy



Re: Hybrid (NIO+Multithread, SSL enabled) architecture for Coyote

Posted by Mladen Turk <mt...@apache.org>.
Vicenç Beltran wrote:
> Hi, 
> 
> attached you'll find a patch that changes the coyote multithreading
> model to a "hybrid" threading model (NIO+Mulithread). It's fully
> compatible with the existing Catalina code and is SSL enabled.
> 
> diff -uprN
> jakarta-tomcat-5.5.9-src/jakarta-tomcat-connectors/http11/src/java/org/apache/coyote/http11/Http11Processor.java

Can't you simply make two new files
Http11NioProcessor and Http11NioProtocol.

Trying to change the default implementation that Tomcat uses will never
be committed (at least I'll vote -1 on that).

Simply create two files that can be used instead of the current
implementation, in the same fashion as we did for Http11AprProtocol.
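
A hypothetical skeleton of that split (stub bodies only; in the real 
tree these classes would implement org.apache.coyote.ProtocolHandler 
and mirror the Http11AprProtocol/AprEndpoint pairing):

package org.apache.coyote.http11;

/**
 * Stub of the suggested protocol handler. A Connector would select it
 * explicitly (e.g. through the "protocol" attribute in server.xml),
 * leaving the default implementation untouched.
 */
public class Http11NioProtocol {

    private final NioEndpoint endpoint = new NioEndpoint();

    public void init() throws Exception {
        endpoint.init();  // open the ServerSocketChannel and the Selector
    }

    public void start() throws Exception {
        endpoint.start(); // spawn the poller and worker threads
    }

    public void destroy() throws Exception {
        endpoint.destroy();
    }
}

/** Stub: processes one request on a channel handed over by the endpoint. */
class Http11NioProcessor {
    void process(java.nio.channels.SocketChannel channel) {
        // parse the request, invoke the Adapter, write the response
    }
}

/** Stub: selector-based replacement for the regular TcpEndpoint. */
class NioEndpoint {
    void init() throws Exception { /* bind and register with a Selector */ }
    void start() { /* start poller and worker threads */ }
    void destroy() { /* close channels and the selector */ }
}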


Regards,
Mladen.
