Posted to dev@tomcat.apache.org by Yuanuo <gi...@git.apache.org> on 2018/10/12 13:54:53 UTC

[GitHub] tomcat pull request #126: The feature of the transfer rate control is added...

GitHub user Yuanuo opened a pull request:

    https://github.com/apache/tomcat/pull/126

    The feature of the transfer rate control is added to the sendfile.

    Transfer rate control is added to the sendfile feature, so that each sendfile operation can be limited individually.
    The rate is a double value in megabytes per second.
    Tested with the APR, NIO and NIO2 connectors.
    The rate can also be set globally for the DefaultServlet via the init parameter "sendfilerate".
    
    BTW: org.apache.tomcat.util.net.RateLimiter is copied from lucene-core-7.5.0.jar.
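
    For illustration, reading such an init parameter in a servlet might look roughly like the sketch below. Only the parameter name "sendfilerate" comes from this pull request; the class name, field and default value are made up, and the actual DefaultServlet changes are in the patch itself:

        import javax.servlet.ServletException;
        import javax.servlet.http.HttpServlet;

        // Hypothetical sketch of reading the proposed init parameter; not the PR's code.
        public class SendfileRateExample extends HttpServlet {
            private double sendfileRate = -1; // MB/s; a negative value would mean "no limit"

            @Override
            public void init() throws ServletException {
                String rate = getInitParameter("sendfilerate");
                if (rate != null) {
                    // A double value in megabytes per second, as described above.
                    sendfileRate = Double.parseDouble(rate);
                }
            }
        }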
    


You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/Yuanuo/tomcat feature-sendfile-rate-limit-support

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/tomcat/pull/126.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #126
    
----
commit 17c0e917fe1d9a80d31134d305a9cc38a761f3cb
Author: Yuanuo <af...@...>
Date:   2018-10-12T13:51:34Z

    The feature of the transfer rate control is added to the sendfile.
    Transfer rate control is added to the sendfile feature, so that each sendfile operation can be limited individually.
    The rate is a double value in megabytes per second.
    Tested with the APR, NIO and NIO2 connectors.
    The rate can also be set globally for the DefaultServlet via the init parameter "sendfilerate".

----


---



[GitHub] tomcat issue #126: The feature of the transfer rate control is added to the...

Posted by rmaucher <gi...@git.apache.org>.
Github user rmaucher commented on the issue:

    https://github.com/apache/tomcat/pull/126
  
    -1 from me. This is a bad combination of too big and too specific to be included IMO. Also, and more importantly, it will replace a non-blocking / async IO feature with something that uses threads like a regular servlet does.


---



[GitHub] tomcat pull request #126: The feature of the transfer rate control is added...

Posted by rmaucher <gi...@git.apache.org>.
Github user rmaucher commented on a diff in the pull request:

    https://github.com/apache/tomcat/pull/126#discussion_r224803205
  
    --- Diff: java/org/apache/tomcat/util/net/RateLimiter.java ---
    @@ -0,0 +1,163 @@
    +/*
    + * Licensed to the Apache Software Foundation (ASF) under one or more
    + * contributor license agreements.  See the NOTICE file distributed with
    + * this work for additional information regarding copyright ownership.
    + * The ASF licenses this file to You under the Apache License, Version 2.0
    + * (the "License"); you may not use this file except in compliance with
    + * the License.  You may obtain a copy of the License at
    + *
    + *      http://www.apache.org/licenses/LICENSE-2.0
    + *
    + * Unless required by applicable law or agreed to in writing, software
    + * distributed under the License is distributed on an "AS IS" BASIS,
    + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    + * See the License for the specific language governing permissions and
    + * limitations under the License.
    + */
    +package org.apache.tomcat.util.net;
    +
    +import java.io.IOException;
    +
    +/** Abstract base class to rate limit IO.  Typically implementations are
    + *  shared across multiple IndexInputs or IndexOutputs (for example
    + *  those involved in merging).  Those IndexInputs and
    + *  IndexOutputs would call {@link #pause} whenever they have read
    + *  or written more than {@link #getMinPauseCheckBytes} bytes. */
    +public abstract class RateLimiter {
    +
    +  /**
    +   * Sets an updated MB per second rate limit.
    +   */
    +  public abstract void setMBPerSec(double mbPerSec);
    +
    +  /**
    +   * The current MB per second rate limit.
    +   */
    +  public abstract double getMBPerSec();
    +  
    +  /** Pauses, if necessary, to keep the instantaneous IO
    +   *  rate at or below the target. 
    +   *  <p>
    +   *  Note: the implementation is thread-safe
    +   *  </p>
    +   *  @return the pause time in nano seconds 
    +   * */
    +  public abstract long pause(long bytes);
    +  
    +  /** How many bytes caller should add up itself before invoking {@link #pause}. */
    +  public abstract long getMinPauseCheckBytes();
    +
    +  /**
    +   * Simple class to rate limit IO.
    +   */
    +  public static class SimpleRateLimiter extends RateLimiter {
    +
    +    private final static int MIN_PAUSE_CHECK_MSEC = 5;
    +
    +    private volatile double mbPerSec;
    +    private volatile long minPauseCheckBytes;
    +    private long lastNS;
    +
    +    // TODO: we could also allow eg a sub class to dynamically
    +    // determine the allowed rate, eg if an app wants to
    +    // change the allowed rate over time or something
    +
    +    /** mbPerSec is the MB/sec max IO rate */
    +    public SimpleRateLimiter(double mbPerSec) {
    +      setMBPerSec(mbPerSec);
    +      lastNS = System.nanoTime();
    +    }
    +
    +    /**
    +     * Sets an updated mb per second rate limit.
    +     */
    +    @Override
    +    public void setMBPerSec(double mbPerSec) {
    +      this.mbPerSec = mbPerSec;
    +      minPauseCheckBytes = (long) ((MIN_PAUSE_CHECK_MSEC / 1000.0) * mbPerSec * 1024 * 1024);
    +    }
    +
    +    @Override
    +    public long getMinPauseCheckBytes() {
    +      return minPauseCheckBytes;
    +    }
    +
    +    /**
    +     * The current mb per second rate limit.
    +     */
    +    @Override
    +    public double getMBPerSec() {
    +      return this.mbPerSec;
    +    }
    +    
    +    /** Pauses, if necessary, to keep the instantaneous IO
    +     *  rate at or below the target.  Be sure to only call
    +     *  this method when bytes &gt; {@link #getMinPauseCheckBytes},
    +     *  otherwise it will pause way too long!
    +     *
    +     *  @return the pause time in nano seconds */  
    +    @Override
    +    public long pause(long bytes) {
    +
    +      long startNS = System.nanoTime();
    +
    +      double secondsToPause = (bytes/1024./1024.) / mbPerSec;
    +
    +      long targetNS;
    +
    +      // Sync'd to read + write lastNS:
    +      synchronized (this) {
    +
    +        // Time we should sleep until; this is purely instantaneous
    +        // rate (just adds seconds onto the last time we had paused to);
    +        // maybe we should also offer decayed recent history one?
    +        targetNS = lastNS + (long) (1000000000 * secondsToPause);
    +
    +        if (startNS >= targetNS) {
    +          // OK, current time is already beyond the target sleep time,
    +          // no pausing to do.
    +
    +          // Set to startNS, not targetNS, to enforce the instant rate, not
    +          // the "averaged over all history" rate:
    +          lastNS = startNS;
    +          return 0;
    +        }
    +
    +        lastNS = targetNS;
    +      }
    +
    +      long curNS = startNS;
    +
    +      // While loop because Thread.sleep doesn't always sleep
    +      // enough:
    +      while (true) {
    +        final long pauseNS = targetNS - curNS;
    +        if (pauseNS > 0) {
    +          try {
    +            // NOTE: except maybe on real-time JVMs, minimum realistic sleep time
    +            // is 1 msec; if you pass just 1 nsec the default impl rounds
    +            // this up to 1 msec:
    +            int sleepNS;
    +            int sleepMS;
    +            if (pauseNS > 100000L * Integer.MAX_VALUE) {
    +              // Not really practical (sleeping for 25 days) but we shouldn't overflow int:
    +              sleepMS = Integer.MAX_VALUE;
    +              sleepNS = 0;
    +            } else {
    +              sleepMS = (int) (pauseNS/1000000);
    +              sleepNS = (int) (pauseNS % 1000000);
    +            }
    +            Thread.sleep(sleepMS, sleepNS);
    --- End diff --
    
    If you are going to use such a mechanism, I would recommend using a regular servlet instead. Sendfile is faster in theory, but there are many other options out there that could be considered first, unless you switch to a more advanced design for the throttling.
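
    For context, a "regular servlet" doing this kind of throttling would be a plain blocking copy loop, roughly like the sketch below. This is illustrative only and not code from the pull request; the file path and rate are placeholders, and it reuses the SimpleRateLimiter from the diff above:

        import java.io.FileInputStream;
        import java.io.IOException;
        import java.io.InputStream;
        import java.io.OutputStream;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;
        import org.apache.tomcat.util.net.RateLimiter;

        // Illustrative only: a blocking, throttled copy in a servlet.
        // This occupies one container thread per request for the whole download.
        public class ThrottledDownloadServlet extends HttpServlet {
            @Override
            protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                    throws IOException {
                RateLimiter limiter = new RateLimiter.SimpleRateLimiter(1.0); // example: 1 MB/s
                byte[] buf = new byte[8192];
                long pending = 0;
                try (InputStream in = new FileInputStream("/path/to/file"); // placeholder path
                     OutputStream out = resp.getOutputStream()) {
                    int n;
                    while ((n = in.read(buf)) != -1) {
                        out.write(buf, 0, n);
                        pending += n;
                        if (pending > limiter.getMinPauseCheckBytes()) {
                            limiter.pause(pending); // sleeps this request's thread
                            pending = 0;
                        }
                    }
                }
            }
        }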


---



[GitHub] tomcat issue #126: The feature of the transfer rate control is added to the...

Posted by Yuanuo <gi...@git.apache.org>.
Github user Yuanuo commented on the issue:

    https://github.com/apache/tomcat/pull/126
  
    Ok. It has been closed, but I still want to express my views here.
    
    I am very sorry about my English; these texts were all produced with Google Translate.
    
    1. In any real environment, server resources and bandwidth are limited, so a rate limiting feature is necessary.
    
    2. The Nginx, Apache and Lighttpd web servers all use sendfile for high-performance data transfer. I currently use Nginx and have not studied the other two. Nginx's sendfile mode also supports rate limiting, implemented through the "X-Accel-Limit-Rate" header. The principle of the rate limit is essentially the same: only a certain number of bytes are transmitted per second, so the process/thread sleeps and waits. Does this rate limit feature make Nginx a low-performance web server? No. The feature only has an effect when you use it, and it is off by default, but it gives people the option.
    
    3. We all know that Tomcat is not only used to serve a few CSS or JS files, and in the Tomcat implementation, even with sendfile enabled, the default sendfileSize is 48 KB, i.e. only responses larger than that will use the sendfile transfer mode. In other applications, such as a file download server, without any limit a handful of download connections can exhaust the server's bandwidth.
    
    4. In a modern web environment, Tomcat is generally not used directly as the web server; Nginx+Tomcat or similar setups are used instead. Static resources such as CSS, JS, JPG and PNG are mostly served directly by Nginx without going through Tomcat at all. In that case Tomcat's sendfile is never used. What does that mean? The sendfile feature that everyone maintains will not be used, even though a lot of code was written to implement it for the APR, NIO, NIO2 and other modes.
    
    5. The code I submitted does not change the existing behaviour: by default it does nothing, and it is only activated after certain parameters are set. It gives people a possibility, and they decide whether to use it. Conversely, even with the default high-performance NIO implementation of sendfile, if people are worried about their server bandwidth being exhausted and cannot apply any rate limit, the default sendfile implementation will never be used either, so what is the use of all that code? Yes, you said these requirements can be met by writing a Servlet or Filter yourself, and that is true. But then everyone who uses Tomcat has to write their own Servlet or Filter for this; why shouldn't Tomcat provide it itself? I think this rate limit feature for sendfile does not break any existing code or structure; rather, it is an enhancement to Tomcat.
    
    6. Yes, there is no example of how to use it here. However, in the submitted code the DefaultServlet was only modified by a few lines, and those lines (lines 2068-2070) are the example of how to use it. So some people may not have looked at the submitted code at all and simply relied on other people's opinions to reject my submission; I find that unacceptable, and I also doubt that working attitude. Sorry! Since this has been closed, I am not going to describe the usage here.
    
    
    Finally, no matter what, I have decided to use Nginx's sendfile to meet my needs; that is how I already do it in my current PHP code. This submission will be abandoned.
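
    For reference, the X-Accel approach mentioned in point 2 amounts to setting two response headers and letting Nginx serve and throttle the file itself. A rough Java sketch of that idea (the internal URI and rate are illustrative, and a matching "internal" location is required in the Nginx configuration):

        import javax.servlet.http.HttpServletResponse;

        // Nginx performs the sendfile and the throttling; the application only sets headers.
        // (The Nginx config must declare the redirected location as "internal".)
        class NginxDelegation {
            static void delegateToNginx(HttpServletResponse resp) {
                resp.setHeader("X-Accel-Redirect", "/protected/report.zip"); // illustrative internal URI
                resp.setHeader("X-Accel-Limit-Rate", "1048576");             // bytes per second (1 MB/s)
            }
        }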


---



[GitHub] tomcat pull request #126: The feature of the transfer rate control is added...

Posted by markt-asf <gi...@git.apache.org>.
Github user markt-asf closed the pull request at:

    https://github.com/apache/tomcat/pull/126


---



[GitHub] tomcat issue #126: The feature of the transfer rate control is added to the...

Posted by rmaucher <gi...@git.apache.org>.
Github user rmaucher commented on the issue:

    https://github.com/apache/tomcat/pull/126
  
    Ok. But this portion of the code in the connectors is asynchronous, so a throttling solution based on Thread.sleep isn't good enough for inclusion. A throttling feature, by itself, would be acceptable but could use a bit of prior discussion to make sure everyone is ok with the plan.
    
    We're completely fine with you using nginx or httpd as a proxy for your throttling needs (and others); as a matter of fact, Tomcat conspicuously does not implement proxying, and we actively recommend using httpd/mod_proxy/balancer for this purpose.
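
    To make the asynchronous point concrete: a non-blocking variant would use the same bytes-to-delay conversion as SimpleRateLimiter.pause(), but release the current thread and reschedule the next write instead of sleeping. A minimal sketch under that assumption, not a proposed design:

        import java.util.concurrent.Executors;
        import java.util.concurrent.ScheduledExecutorService;
        import java.util.concurrent.TimeUnit;

        // Illustrative only: the delay is computed from bytes written and the MB/s limit,
        // but the current thread is released and the next write is rescheduled.
        class NonBlockingThrottle {
            private final ScheduledExecutorService scheduler =
                    Executors.newSingleThreadScheduledExecutor();
            private final double mbPerSec;

            NonBlockingThrottle(double mbPerSec) {
                this.mbPerSec = mbPerSec;
            }

            void afterWrite(long bytesJustWritten, Runnable nextWrite) {
                long delayNs = (long) (bytesJustWritten / 1024.0 / 1024.0 / mbPerSec * 1_000_000_000L);
                if (delayNs <= 0) {
                    nextWrite.run(); // under the limit, continue immediately
                } else {
                    scheduler.schedule(nextWrite, delayNs, TimeUnit.NANOSECONDS); // free the thread
                }
            }
        }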


---



[GitHub] tomcat issue #126: The feature of the transfer rate control is added to the...

Posted by markt-asf <gi...@git.apache.org>.
Github user markt-asf commented on the issue:

    https://github.com/apache/tomcat/pull/126
  
    -1 here too, for much the same reason. A filter would work better for this.
    Ignoring the thread-related concerns, I'd also point out that new features without a description of the use case or similar justification are very unlikely to be accepted.
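
    As a sketch of what the filter alternative could look like (again, not code from this pull request; it reuses the SimpleRateLimiter from the diff above, and it blocks the request thread just as the servlet approach would):

        import java.io.IOException;
        import javax.servlet.Filter;
        import javax.servlet.FilterChain;
        import javax.servlet.FilterConfig;
        import javax.servlet.ServletException;
        import javax.servlet.ServletOutputStream;
        import javax.servlet.ServletRequest;
        import javax.servlet.ServletResponse;
        import javax.servlet.WriteListener;
        import javax.servlet.http.HttpServletResponse;
        import javax.servlet.http.HttpServletResponseWrapper;
        import org.apache.tomcat.util.net.RateLimiter;

        // Bare outline of a throttling filter: every write to the wrapped response stream
        // is counted, and pause() is called once enough bytes have accumulated.
        // (getWriter() is not wrapped in this outline.)
        public class RateLimitFilter implements Filter {

            @Override
            public void init(FilterConfig filterConfig) {
            }

            @Override
            public void destroy() {
            }

            @Override
            public void doFilter(ServletRequest req, ServletResponse resp, FilterChain chain)
                    throws IOException, ServletException {
                final RateLimiter limiter = new RateLimiter.SimpleRateLimiter(1.0); // example: 1 MB/s
                chain.doFilter(req, new HttpServletResponseWrapper((HttpServletResponse) resp) {
                    private long pending = 0;

                    @Override
                    public ServletOutputStream getOutputStream() throws IOException {
                        final ServletOutputStream delegate = super.getOutputStream();
                        return new ServletOutputStream() {
                            @Override
                            public void write(int b) throws IOException {
                                delegate.write(b);
                                if (++pending >= limiter.getMinPauseCheckBytes()) {
                                    limiter.pause(pending); // blocks this request's thread
                                    pending = 0;
                                }
                            }

                            @Override
                            public boolean isReady() {
                                return delegate.isReady();
                            }

                            @Override
                            public void setWriteListener(WriteListener listener) {
                                delegate.setWriteListener(listener);
                            }
                        };
                    }
                });
            }
        }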


---
