Posted to dev@lucene.apache.org by "Yonik Seeley (JIRA)" <ji...@apache.org> on 2015/06/14 21:03:00 UTC
[jira] [Comment Edited] (SOLR-7344) Allow Jetty thread pool limits while still avoiding distributed deadlock.
[ https://issues.apache.org/jira/browse/SOLR-7344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14585190#comment-14585190 ]
Yonik Seeley edited comment on SOLR-7344 at 6/14/15 7:02 PM:
-------------------------------------------------------------
I don't think we should place much weight on internal enforcement. Job #1 should be: what will actually *work* best for our existing system right now, by default, and be the least invasive to clients (without counting internal Solr code as clients). I see discussions of mechanisms for tagging requests, but I still don't have a sense of whether the overall problem will be solved or not.
To recap the problem:
1) We want to cap the number of certain types of requests executing concurrently for both flow control (see SOLR-7571) and to make more efficient use of resources.
2) Solr makes requests to itself in various scenarios
- distributed sub-requests (currently only one level)
- distributed updates (forwards to leaders, distributed updates to replicas)
- forwards of requests because the forwarder is not part of the target collection
- Solr Streaming API: potentially unlimited nesting of requests (solr calling itself)
Can someone describe what the current proposal will actually look like (by default, including what queues would have what limits)?
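The hazard in the recap above can be reproduced outside Solr entirely. The sketch below is a hypothetical, minimal Java illustration (not Solr code): a capped pool whose tasks block on sub-tasks submitted to the same pool deadlocks as soon as every thread is held by a "parent" request, which is exactly what a Jetty thread-pool limit risks when Solr calls itself.

```java
import java.util.concurrent.*;

// Hypothetical sketch of pool-exhaustion deadlock: a bounded pool whose
// tasks wait on sub-tasks queued into the same pool.
public class PoolDeadlockSketch {
    public static void main(String[] args) throws Exception {
        // Cap of 2 "request" threads, standing in for a Jetty thread-pool limit.
        ExecutorService pool = Executors.newFixedThreadPool(2);

        Callable<String> parent = () -> {
            // Each top-level request fans out a sub-request to the same pool
            // and blocks on it -- like a distributed search waiting on a shard
            // request served by the same capped container.
            Future<String> sub = pool.submit(() -> "sub-result");
            return sub.get(); // all threads busy => the sub-task can never run
        };

        Future<String> a = pool.submit(parent);
        Future<String> b = pool.submit(parent); // both threads now blocked on queued sub-tasks

        try {
            a.get(2, TimeUnit.SECONDS); // never completes
            System.out.println("completed");
        } catch (TimeoutException e) {
            System.out.println("deadlocked");
        } finally {
            pool.shutdownNow(); // interrupts the stuck parents so the JVM can exit
        }
    }
}
```

With unlimited nesting (the streaming case above), the same exhaustion can occur at any depth, which is why a single global cap is not enough.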
Edit: this issue is getting big enough that I had missed Hrishikesh's message on the proposed queue types.
{quote}
I think the tricky part here is to identify the appropriate thread-pool size for each of the partitions. Please take a look and let me know any feedback.
{quote}
Indeed... it seems like this is what we need to be solving (what queues, what limits, and what behavior over the limit). Without that, I can't even tell whether we've solved the distributed-deadlock problem or not.
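To make the "what queues, what limits" question concrete, here is a hypothetical sketch (not the actual proposal) of one partitioning answer: routing internal sub-requests to a separate pool, so a capped client-facing pool can never starve the work it is itself waiting on. The pool names and sizes are illustrative assumptions only.

```java
import java.util.concurrent.*;

// Hypothetical sketch: partitioned pools break the wait-on-self cycle.
public class PartitionedPoolSketch {
    public static void main(String[] args) throws Exception {
        // Assumed partitioning: a capped pool for external requests and a
        // separate pool dedicated to internal sub-requests.
        ExecutorService external = Executors.newFixedThreadPool(2);
        ExecutorService internal = Executors.newFixedThreadPool(2);

        Callable<String> parent = () -> {
            // Sub-request goes to a different pool, so blocking here cannot
            // exhaust the threads needed to serve it.
            Future<String> sub = internal.submit(() -> "sub-result");
            return sub.get();
        };

        Future<String> a = external.submit(parent);
        Future<String> b = external.submit(parent);
        System.out.println(a.get(2, TimeUnit.SECONDS) + " " + b.get(2, TimeUnit.SECONDS));

        external.shutdown();
        internal.shutdown();
    }
}
```

Note this only pushes the problem one level down: with potentially unlimited nesting (the Streaming API case), any fixed chain of partitions can still be exhausted, which is why the over-the-limit behavior of each queue matters as much as its size.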
> Allow Jetty thread pool limits while still avoiding distributed deadlock.
> -------------------------------------------------------------------------
>
> Key: SOLR-7344
> URL: https://issues.apache.org/jira/browse/SOLR-7344
> Project: Solr
> Issue Type: Improvement
> Components: SolrCloud
> Reporter: Mark Miller
> Attachments: SOLR-7344.patch
>
>
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@lucene.apache.org
For additional commands, e-mail: dev-help@lucene.apache.org