Posted to commits@druid.apache.org by GitBox <gi...@apache.org> on 2022/01/11 06:02:07 UTC

[GitHub] [druid] LakshSingla opened a new pull request #12139: Limit the subquery results by memory usage (estimated)

LakshSingla opened a new pull request #12139:
URL: https://github.com/apache/druid/pull/12139


   ### Description
   
   Currently, in the `ClientQuerySegmentWalker`, when data sources get inlined, they can be limited by the number of rows to prevent a query (subquery) from hogging the broker's memory. However, the row count does not correspond well to the memory actually used, since a row can have multiple columns with varying amounts of data in them. It would therefore be better to also have a memory limit available, which prevents a subquery's results from growing beyond a certain size.
   This PR adds the initial structure for limiting subqueries by memory usage, along with an initial implementation for estimating the memory footprint of the inlined results.
   An alternative to estimating the final memory consumption of the results was to count the bytes gathered from the Historicals and limit based on that; however, that seems far off the mark, considering that subquery results can grow or shrink on the broker before being materialized.
   
   The estimate of the results' in-memory size is derived in a manner similar to the `DimensionIndexers` implementations.
   
   If the memory limit is exceeded, a `ResourceException` is raised.
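   
   As a rough illustration of the intended mechanism (a sketch only, not the actual patch; `maxSubqueryMemoryBytes`, `RowSizeEstimator`, and `materializeWithLimit` are placeholder names), the limit can be pictured as a per-row estimate accumulated while the results are materialized, failing fast once it is crossed:
   
   ```java
   // Minimal sketch: accumulate an estimated byte count while materializing
   // subquery rows, and fail once the configured limit is exceeded.
   // All names here are illustrative, not the PR's actual code.
   import java.util.ArrayList;
   import java.util.List;
   
   class SubqueryMemoryLimitSketch
   {
     interface RowSizeEstimator
     {
       long estimate(Object[] row); // e.g. backed by estimateResultRowSize
     }
   
     static List<Object[]> materializeWithLimit(
         Iterable<Object[]> rows,
         RowSizeEstimator estimator,
         long maxSubqueryMemoryBytes // hypothetical config name
     )
     {
       final List<Object[]> resultList = new ArrayList<>();
       long estimatedBytes = 0;
       for (Object[] row : rows) {
         estimatedBytes += estimator.estimate(row);
         if (estimatedBytes > maxSubqueryMemoryBytes) {
           // The PR raises a ResourceException at this point.
           throw new IllegalStateException(
               "Subquery results exceeded the memory limit of " + maxSubqueryMemoryBytes + " bytes"
           );
         }
         resultList.add(row);
       }
       return resultList;
     }
   }
   ```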
   
   <hr>
   
   ##### Key changed/added classes in this PR
    * `ClientQuerySegmentWalker`
   
   <hr>
   
   
   This PR has:
   - [x] been self-reviewed.
      - [ ] using the [concurrency checklist](https://github.com/apache/druid/blob/master/dev/code-review/concurrency.md) (Remove this item if the PR doesn't have any relation to concurrency.)
   - [ ] added documentation for new or modified features or behaviors.
   - [ ] added Javadocs for most classes and all non-trivial methods. Linked related entities via Javadoc links.
   - [ ] added or updated version, license, or notice information in [licenses.yaml](https://github.com/apache/druid/blob/master/dev/license.md)
   - [ ] added comments explaining the "why" and the intent of the code wherever would not be obvious for an unfamiliar reader.
   - [x] added unit tests or modified existing tests to cover new code paths, ensuring the threshold for [code coverage](https://github.com/apache/druid/blob/master/dev/code-review/code-coverage.md) is met.
   - [ ] added integration tests.
   - [ ] been tested in a test Druid cluster.
   




[GitHub] [druid] cryptoe commented on a change in pull request #12139: Limit the subquery results by memory usage (estimated)

Posted by GitBox <gi...@apache.org>.
cryptoe commented on a change in pull request #12139:
URL: https://github.com/apache/druid/pull/12139#discussion_r783645116



##########
File path: server/src/main/java/org/apache/druid/server/ClientQuerySegmentWalker.java
##########
@@ -592,6 +642,48 @@ private DataSource insertSubqueryIds(
     return InlineDataSource.fromIterable(resultList, signature);
   }
 
+  private static long estimateResultRowSize(Object[] row, RowSignature rowSignature)
+  {
+    int estimate = 0;
+    if (row == null) {
+      return 0;
+    }
+    estimate += 24; // Add memory overhead for the row array
+    for (int i = 0; i < rowSignature.size(); ++i) {
+      Optional<ColumnType> maybeColumnType = rowSignature.getColumnType(i);
+      if (!maybeColumnType.isPresent()) {
+        // This shouldn't be encountered
+        continue;
+      }
+      ColumnType columnType = maybeColumnType.get();
+      if (columnType.equals(ColumnType.LONG)) {
+        estimate += Long.BYTES;
+      } else if (columnType.equals(ColumnType.FLOAT)) {
+        estimate += Float.BYTES;
+      } else if (columnType.equals(ColumnType.DOUBLE)) {
+        estimate += Double.BYTES;
+      } else if (columnType.equals(ColumnType.STRING)
+                 || columnType.equals(ColumnType.STRING_ARRAY)
+                 || columnType.equals(ColumnType.LONG_ARRAY)
+                 || columnType.equals(ColumnType.DOUBLE_ARRAY)) {
+        if (row[i] instanceof String) {
+          estimate += 28 + 16 + 2 * (((String) row[i]).length());
+        } else if (row[i] instanceof Long) {
+          estimate += Long.BYTES;
+        } else if (row[i] instanceof Double) {
+          estimate += Double.BYTES;
+        } else if (row[i] instanceof Float) {
+          estimate += Float.BYTES;
+        } else {
+          estimate += 0;

Review comment:
       We might want to call out somewhere in the property documentation that we use a conservative underestimate for unknown column types.






[GitHub] [druid] clintropolis commented on pull request #12139: Limit the subquery results by memory usage (estimated)

Posted by GitBox <gi...@apache.org>.
clintropolis commented on pull request #12139:
URL: https://github.com/apache/druid/pull/12139#issuecomment-1009633460


   This seems pretty useful, but it also looks rather expensive, since this is going to happen for every row. Could you measure the performance before and after this change? [This benchmark might be a good place to start](https://github.com/apache/druid/blob/master/benchmarks/src/test/java/org/apache/druid/benchmark/query/CachingClusteredClientBenchmark.java).
   
   Also, there appears to be no way to disable it. Maybe it should be possible to set the limit to 0 to disable this computation, instead of setting the limit to the max long value?
   
   Should it prove expensive, maybe the approach should be to just sample the first 'n' rows and use the average estimated size for any remaining rows, instead of trying to estimate every row encountered. I imagine the loss of accuracy would be worth how much cheaper it would be to not have to loop over every column of every row.
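   
   In case it helps picture it, here is a sketch of that sampling idea (purely illustrative; `SAMPLE_SIZE` stands in for 'n', and the exact estimator is just a hook, not the PR's code):
   
   ```java
   // Illustrative only: estimate the first SAMPLE_SIZE rows exactly,
   // then charge each later row the running average instead of walking
   // every column again.
   import java.util.function.ToLongFunction;
   
   class SampledRowSizeEstimator
   {
     private static final int SAMPLE_SIZE = 100; // assumed 'n'; would need tuning
   
     private final ToLongFunction<Object[]> exactEstimator;
     private long sampledBytes = 0;
     private int sampledRows = 0;
   
     SampledRowSizeEstimator(ToLongFunction<Object[]> exactEstimator)
     {
       this.exactEstimator = exactEstimator;
     }
   
     long estimate(Object[] row)
     {
       if (sampledRows < SAMPLE_SIZE) {
         final long size = exactEstimator.applyAsLong(row);
         sampledBytes += size;
         sampledRows++;
         return size;
       }
       // Past the sample: skip the per-column walk and reuse the average.
       return sampledBytes / sampledRows;
     }
   }
   ```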




[GitHub] [druid] LakshSingla commented on pull request #12139: Limit the subquery results by memory usage (estimated)

Posted by GitBox <gi...@apache.org>.
LakshSingla commented on pull request #12139:
URL: https://github.com/apache/druid/pull/12139#issuecomment-1010007702


   Thanks for the review!
   
   > This seems pretty useful, but it also looks rather expensive, since this is going to happen for every row. Could you measure the performance before and after this change? This benchmark might be a good place to start.
   
   Yes, this would add a performance overhead while the queries are running. That said, I noticed no difference when running `CachingClusteredClientBenchmark` on this branch versus the master branch on my laptop (31min16s vs 31min, respectively). Is there any other benchmark specific to the `ClientQuerySegmentWalker`? Let me take a look around as well.
   
   > Also, there appears to be no way to disable it. Maybe it should be possible to set the limit to 0 to disable this computation, instead of setting the limit to the max long value?
   
   Nice idea! Will incorporate this. 
   
   > Should it prove expensive, maybe the approach should be to just sample the first 'n' rows and use the average estimated size for any remaining rows, instead of trying to estimate every row encountered. I imagine the loss of accuracy would be worth how much cheaper it would be to not have to loop over every column of every row.
   
   This seems like a good middle ground between performance and accuracy. But how would this fare for strings, or for array-typed columns? Instead of taking the average, should the min/max of the first 'n' samples be used to mark a lower/upper bound on the memory usage? Also, would sampling the first 'n' rows be a fair assumption about the size of the rest, or could we do better by skipping ahead after a while and picking rows randomly?
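   
   For the 'picking randomly' part, one standard trick would be reservoir sampling (Algorithm R), which keeps a uniform random sample of row sizes over a stream of unknown length. A rough sketch, not part of this PR:
   
   ```java
   // Rough sketch of reservoir sampling over per-row size estimates.
   // The capacity and usage are illustrative, not code from this PR.
   import java.util.concurrent.ThreadLocalRandom;
   
   class RowSizeReservoir
   {
     private final long[] reservoir;
     private long seen = 0;
   
     RowSizeReservoir(int capacity)
     {
       this.reservoir = new long[capacity];
     }
   
     void offer(long rowSizeBytes)
     {
       if (seen < reservoir.length) {
         reservoir[(int) seen] = rowSizeBytes;
       } else {
         // Replace an existing entry with probability capacity / (seen + 1),
         // which preserves a uniform sample over the whole stream.
         final long j = ThreadLocalRandom.current().nextLong(seen + 1);
         if (j < reservoir.length) {
           reservoir[(int) j] = rowSizeBytes;
         }
       }
       seen++;
     }
   
     double averageBytes()
     {
       final int n = (int) Math.min(seen, reservoir.length);
       long total = 0;
       for (int i = 0; i < n; i++) {
         total += reservoir[i];
       }
       return n == 0 ? 0 : (double) total / n;
     }
   }
   ```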




[GitHub] [druid] rohangarg commented on pull request #12139: Limit the subquery results by memory usage (estimated)

Posted by GitBox <gi...@apache.org>.
rohangarg commented on pull request #12139:
URL: https://github.com/apache/druid/pull/12139#issuecomment-1010068322


   Some thoughts:
   1. For performance, I'd suggest also benchmarking `estimateResultRowSize` (the complete function) as a function of (numRows, numCols, sizeOfCols) to measure its independent impact; a sketch of such a harness follows this list. For instance, we currently have a 100k limit on subquery rows, so for all successful cases we'd only be measuring the size of up to 100k rows by default. The benchmark might also help in determining the default parameters we'd have to set (like the 'n' for sampling, if needed). Beyond that, different strategies for fixed-width versus variable-width columns could be considered, or even caching the estimated size of a subquery to help concurrent runs of the same subquery.
   2. Should we name the config `maxSubqueryResultMemory` to make it clearer and properly scoped?
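   
   As a starting point for (1), a JMH harness along these lines could sweep the row/column dimensions. Everything here (the parameter values and the inlined estimator stand-in) is illustrative rather than taken from the PR:
   
   ```java
   // Illustrative JMH harness: measure per-row size estimation over a grid of
   // (numRows, numCols, sizeOfCols). estimateRow() is a stand-in for the PR's
   // estimateResultRowSize, not the actual Druid code.
   import java.util.concurrent.TimeUnit;
   import org.openjdk.jmh.annotations.Benchmark;
   import org.openjdk.jmh.annotations.BenchmarkMode;
   import org.openjdk.jmh.annotations.Mode;
   import org.openjdk.jmh.annotations.OutputTimeUnit;
   import org.openjdk.jmh.annotations.Param;
   import org.openjdk.jmh.annotations.Scope;
   import org.openjdk.jmh.annotations.Setup;
   import org.openjdk.jmh.annotations.State;
   
   @State(Scope.Benchmark)
   @BenchmarkMode(Mode.AverageTime)
   @OutputTimeUnit(TimeUnit.MILLISECONDS)
   public class RowSizeEstimationBenchmark
   {
     @Param({"1000", "100000"})
     public int numRows;
   
     @Param({"4", "32"})
     public int numCols;
   
     @Param({"8", "256"})
     public int sizeOfCols; // string length per column value
   
     private Object[][] rows;
   
     @Setup
     public void setup()
     {
       // Build numRows rows of numCols string columns, each sizeOfCols chars.
       final StringBuilder sb = new StringBuilder();
       for (int i = 0; i < sizeOfCols; i++) {
         sb.append('x');
       }
       final String value = sb.toString();
       rows = new Object[numRows][numCols];
       for (int r = 0; r < numRows; r++) {
         for (int c = 0; c < numCols; c++) {
           rows[r][c] = value;
         }
       }
     }
   
     @Benchmark
     public long estimateAllRows()
     {
       long total = 0;
       for (Object[] row : rows) {
         total += estimateRow(row);
       }
       return total;
     }
   
     private static long estimateRow(Object[] row)
     {
       long estimate = 24; // assumed overhead for the row array itself
       for (Object col : row) {
         if (col instanceof String) {
           estimate += 28 + 16 + 2L * ((String) col).length();
         } else if (col instanceof Long || col instanceof Double) {
           estimate += Long.BYTES;
         }
       }
       return estimate;
     }
   }
   ```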

