Posted to dev@nifi.apache.org by Stephen Coder - US <sc...@caci.com> on 2016/04/01 18:32:26 UTC

Question concerning jetty server timeout in 0.5.1 HandleHttpRequest

Hi NiFi Team,


I've been experimenting with the HandleHttpRequest/Response processors in NiFi 0.5.1, and have noticed an issue that I've not been able to resolve. I'm hoping that I'm simply missing a configuration item, but I've been unable to find the solution.


The scenario is this: HandleHttpRequest --> Long Processing (> 30 seconds) --> HandleHttpResponse. It appears that the Jetty server backing HandleHttpRequest has a built-in idle timeout of 30000 ms (see the _idleTimeout value in jetty-server/src/main/java/org/eclipse/jetty/server/AbstractConnector.java). In my test flow, 30 seconds after an HTTP request comes in, a second request comes into the flow. It has the same information, except that the http.context.identifier and the FlowFile UUID have changed, and the http.dispatcher.type has changed from REQUEST to ERROR. From my online research (http://stackoverflow.com/questions/30786939/jetty-replay-request-on-timeout?), this re-request with a dispatcher type of ERROR comes in after Jetty decides that the request has timed out.
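
For reference, the timeout being hit here is a per-connector Jetty setting. A minimal standalone Jetty sketch (this is not the processor's actual code, just an illustration of the connector API) that raises the idle timeout would look roughly like this:

    import org.eclipse.jetty.server.Server;
    import org.eclipse.jetty.server.ServerConnector;

    public class IdleTimeoutSketch {
        public static void main(String[] args) throws Exception {
            Server server = new Server();
            ServerConnector connector = new ServerConnector(server);
            connector.setPort(8080);
            // AbstractConnector defaults _idleTimeout to 30000 ms; raise it to 2 minutes.
            connector.setIdleTimeout(120_000L);
            server.addConnector(connector);
            server.start();
            server.join();
        }
    }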


This would not normally be a big deal: I was able to use RouteOnAttribute to capture all ERROR requests without responding to them. However, those requests are never cleared from the StandardHttpContextMap. I've tested this by setting the number of requests allowed by the StandardHttpContextMap to 4 and running 4 of my long Request/Response tests. Each request is eventually responded to correctly in my test, but because each takes over 30 seconds, each also generates an ERROR request that is stored in the StandardHttpContextMap. If I then leave the system alone for much longer than the Request Timeout parameter in the StandardHttpContextMap and attempt another request, I get a 503 response saying that the queue is full and no requests are allowed. No requests are accepted at all until I delete and recreate the map.
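
For anyone reproducing the routing step, the ERROR dispatches can be matched on the http.dispatcher.type attribute. For example, a RouteOnAttribute dynamic property (the property name "error-request" here is just an illustration) could use the expression:

    error-request = ${http.dispatcher.type:equals('ERROR')}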


It seems unlikely to me that no one has attempted to use these processors in this fashion. However, looking through the unit tests for this processor, it seems that no timeout over 30 seconds is ever tested, so I thought it worth a conversation.


So finally: is there a configuration item to extend the Jetty server's idle timeout? Or is there a better way to ensure that these bogus requests don't get stuck permanently in the StandardHttpContextMap? I appreciate any pointers you can give.


Thanks,
Luke Coder
BIT Systems
CACI - NCS
941-907-8803 x705
6851 Professional Pkwy W
Sarasota, FL 34240

Re: Question concerning jetty server timeout in 0.5.1 HandleHttpRequest

Posted by coder <sc...@caci.com>.
The change looks like it should work fine. I'm unable to test it myself, but
it looks correct. The only other thing I'd say still needs addressing is the
HTTP REQUEST leakage in the StandardHttpContextMap. This happens when the
timeout expires and the HTTP ERROR dispatch comes in. Even after the REQUEST
timeout has elapsed in the StandardHttpContextMap, from what I saw the HTTP
ERROR entries are never cleared, and so the StandardHttpContextMap can get
maxed out if too many timeouts occur. This might be worth putting into a
separate JIRA, but I thought I'd mention it.
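
Conceptually, what seems to be missing is time-based eviction of registered
contexts regardless of how they were dispatched. A hypothetical, simplified
sketch of that idea (this is not NiFi's actual StandardHttpContextMap code;
the class and method names are made up for illustration):

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    class ExpiringContextMap {
        private final Map<String, Long> registrationTimes = new ConcurrentHashMap<>();
        private final long requestTimeoutMillis;
        private final ScheduledExecutorService cleaner =
                Executors.newSingleThreadScheduledExecutor();

        ExpiringContextMap(long requestTimeoutMillis) {
            this.requestTimeoutMillis = requestTimeoutMillis;
            // Sweep periodically and drop anything older than the timeout, so
            // ERROR dispatches that are never responded to cannot pin slots forever.
            cleaner.scheduleWithFixedDelay(this::evictExpired, 1, 1, TimeUnit.SECONDS);
        }

        void register(String contextIdentifier) {
            registrationTimes.put(contextIdentifier, System.currentTimeMillis());
        }

        void complete(String contextIdentifier) {
            registrationTimes.remove(contextIdentifier);
        }

        private void evictExpired() {
            final long cutoff = System.currentTimeMillis() - requestTimeoutMillis;
            registrationTimes.entrySet().removeIf(e -> e.getValue() < cutoff);
        }
    }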

Thanks!
Luke




Re: Question concerning jetty server timeout in 0.5.1 HandleHttpRequest

Posted by Pierre Villard <pi...@gmail.com>.
Luke,

I have submitted a PR [1] allowing the user to override the default timeout.
If you have a chance to take a look and give us feedback, that would be
great.

Pierre

[1] https://github.com/apache/nifi/pull/337
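
The PR itself isn't reproduced here, but exposing the connector idle timeout
as a processor property might look roughly like the following sketch (the
property name, default value, and helper method are assumptions for
illustration, not taken from the PR):

    import java.util.concurrent.TimeUnit;

    import org.apache.nifi.components.PropertyDescriptor;
    import org.apache.nifi.processor.ProcessContext;
    import org.apache.nifi.processor.util.StandardValidators;
    import org.eclipse.jetty.server.ServerConnector;

    class IdleTimeoutPropertySketch {
        // Hypothetical property; the actual name and default in the PR may differ.
        static final PropertyDescriptor CONNECTOR_IDLE_TIMEOUT = new PropertyDescriptor.Builder()
                .name("Connector Idle Timeout")
                .description("Overrides Jetty's default 30 second connector idle timeout")
                .required(true)
                .defaultValue("30 secs")
                .addValidator(StandardValidators.TIME_PERIOD_VALIDATOR)
                .build();

        // Applied to the embedded Jetty connector when the processor starts its server.
        static void applyTimeout(ProcessContext context, ServerConnector connector) {
            final long timeoutMillis = context.getProperty(CONNECTOR_IDLE_TIMEOUT)
                    .asTimePeriod(TimeUnit.MILLISECONDS);
            connector.setIdleTimeout(timeoutMillis);
        }
    }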


Re: Question concerning jetty server timeout in 0.5.1 HandleHttpRequest

Posted by Mark Payne <ma...@hotmail.com>.
Luke,

I have not yet heard of anyone else running into this, personally. Typically, when I've used these
processors, I am expecting sub-second response times, not 30+ second response times. Your use case,
though, is perfectly valid - just not something that I've ever run into myself.

I have submitted a new JIRA [1] to address this issue.

Thanks
-Mark

[1] https://issues.apache.org/jira/browse/NIFI-1732

