Posted to issues@lucene.apache.org by "Isabelle Giguere (Jira)" <ji...@apache.org> on 2021/02/01 23:03:00 UTC

[jira] [Commented] (SOLR-8393) Component for Solr resource usage planning

    [ https://issues.apache.org/jira/browse/SOLR-8393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17276715#comment-17276715 ] 

Isabelle Giguere commented on SOLR-8393:
----------------------------------------

New patch, based on current master.

The 'sizeUnit' parameter is supported by both the SizeComponent and ClusterSizing.

If the 'sizeUnit' parameter is present, size values are output as 'double', converted to the chosen unit.
The value of 'estimated-num-docs' remains a 'long'.
If 'sizeUnit' is not present, the default behavior is the human-readable format.

Valid values for 'sizeUnit' are: GB, MB, KB, bytes
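
For illustration, here is a minimal sketch of the kind of conversion involved (the class/method names and the 1024-based divisors are assumptions, not the actual patch code):

    // Sketch only: convert a raw byte count to the value reported for a given 'sizeUnit',
    // or to a human-readable string when 'sizeUnit' is absent.
    public final class SizeUnitSketch {

      static double toUnit(long bytes, String sizeUnit) {
        switch (sizeUnit) {
          case "GB":    return bytes / (1024.0 * 1024.0 * 1024.0);
          case "MB":    return bytes / (1024.0 * 1024.0);
          case "KB":    return bytes / 1024.0;
          case "bytes": return (double) bytes;
          default: throw new IllegalArgumentException("Unsupported sizeUnit: " + sizeUnit);
        }
      }

      // Fallback when 'sizeUnit' is not present: a human-readable string such as "3.0 GB".
      static String humanReadable(long bytes) {
        if (bytes < 1024L) return bytes + " bytes";
        double kb = bytes / 1024.0;
        if (kb < 1024.0) return String.format(java.util.Locale.ROOT, "%.1f KB", kb);
        double mb = kb / 1024.0;
        if (mb < 1024.0) return String.format(java.util.Locale.ROOT, "%.1f MB", mb);
        return String.format(java.util.Locale.ROOT, "%.1f GB", mb / 1024.0);
      }

      public static void main(String[] args) {
        long indexBytes = 3L * 1024 * 1024 * 1024; // 3 GiB
        System.out.println(toUnit(indexBytes, "GB"));  // 3.0
        System.out.println(humanReadable(indexBytes)); // 3.0 GB
      }
    }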

**********
Note about the implementation:
ClusterSizing calls the SizeComponent via HTTP, so the per-collection results it receives are already formatted according to 'sizeUnit' (or its absence). As a consequence, ClusterSizing needs to toggle back and forth between human-readable values and raw long values to support the requested 'sizeUnit'.
I don't know how we could intercept the SizeComponent response and receive just the long values, so that the conversion to a 'sizeUnit' happens only once in ClusterSizing, while keeping the formatting in SizeComponent for use cases that call it directly.
Would a response transformer be the right approach?
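
To make the back-and-forth concrete, this is roughly the parsing step ClusterSizing ends up doing when the SizeComponent response is already human-readable (sketch only; the format, names, and units are assumptions):

    // Sketch only: recover a raw byte count from an already-formatted value such as
    // "1.5 GB" so that the requested 'sizeUnit' can be applied afterwards.
    import java.util.Locale;

    public final class HumanReadableParseSketch {

      static long parseToBytes(String formatted) {
        String[] parts = formatted.trim().split("\\s+");
        double value = Double.parseDouble(parts[0]);
        String unit = parts.length > 1 ? parts[1].toUpperCase(Locale.ROOT) : "BYTES";
        switch (unit) {
          case "GB": return (long) (value * 1024L * 1024L * 1024L);
          case "MB": return (long) (value * 1024L * 1024L);
          case "KB": return (long) (value * 1024L);
          default:   return (long) value; // "bytes"
        }
      }

      public static void main(String[] args) {
        System.out.println(parseToBytes("1.5 GB"));    // 1610612736
        System.out.println(parseToBytes("512 bytes")); // 512
      }
    }

If the raw long values were available alongside (or instead of) the formatted ones, this parsing step would disappear and the conversion could be done once in ClusterSizing.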

> Component for Solr resource usage planning
> ------------------------------------------
>
>                 Key: SOLR-8393
>                 URL: https://issues.apache.org/jira/browse/SOLR-8393
>             Project: Solr
>          Issue Type: Improvement
>            Reporter: Steve Molloy
>            Priority: Major
>         Attachments: SOLR-8393.patch, SOLR-8393.patch, SOLR-8393.patch, SOLR-8393.patch, SOLR-8393.patch, SOLR-8393.patch, SOLR-8393.patch, SOLR-8393.patch, SOLR-8393.patch, SOLR-8393_tag_7.5.0.patch
>
>
> One question that keeps coming back is: how much disk and RAM do I need to run Solr? The most common response is that it depends heavily on your data. While true, that leaves users frustrated when trying to plan their deployments.
> The idea I'm bringing is to create a new component that attempts to extrapolate the resources needed in the future from the resources currently used. Given a parameter for the target number of documents, current resource usage is scaled by the ratio of the target to the current number of documents.
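
The extrapolation described in the issue boils down to scaling current usage by the ratio of target to current document counts; a rough sketch of that arithmetic (illustrative names only, not the component code):

    // Sketch only: scale currently used resources by targetNumDocs / currentNumDocs.
    public final class ExtrapolationSketch {

      static long extrapolate(long currentResourceBytes, long currentNumDocs, long targetNumDocs) {
        if (currentNumDocs <= 0) {
          throw new IllegalArgumentException("currentNumDocs must be positive");
        }
        double ratio = (double) targetNumDocs / (double) currentNumDocs;
        return (long) Math.ceil(currentResourceBytes * ratio);
      }

      public static void main(String[] args) {
        // 10 GiB of index for 1M docs -> rough estimate for a target of 5M docs
        long estimate = extrapolate(10L * 1024 * 1024 * 1024, 1_000_000L, 5_000_000L);
        System.out.println(estimate); // 53687091200 (about 50 GiB)
      }
    }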


