Posted to commits@pinot.apache.org by GitBox <gi...@apache.org> on 2022/01/14 22:22:41 UTC
[GitHub] [pinot] weixiangsun opened a new pull request #8029: Add Pre-Aggregation Gapfilling functionality.
weixiangsun opened a new pull request #8029:
URL: https://github.com/apache/pinot/pull/8029
## Description
The data set can be time series data that we aggregate per time bucket. Within a time bucket, data can be missing for some entities, so we need to gap-fill the missing data per entity across the time buckets before performing the aggregation.
Design: #7422
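As a rough illustration of the idea (a hypothetical sketch, not the code in this PR; all names here are made up), gap-filling carries an entity's last observed value forward into time buckets where that entity has no row, before any aggregation runs:

```java
import java.util.*;

public class GapfillSketch {
    // Fill missing (bucket, entity) values by carrying the previous observed
    // value forward. Buckets are iterated in ascending time order.
    static Map<Long, Map<String, Double>> gapfill(
            SortedMap<Long, Map<String, Double>> byBucket, Set<String> entities) {
        Map<Long, Map<String, Double>> filled = new TreeMap<>();
        Map<String, Double> last = new HashMap<>();
        for (Map.Entry<Long, Map<String, Double>> e : byBucket.entrySet()) {
            Map<String, Double> row = new HashMap<>(e.getValue());
            for (String entity : entities) {
                if (row.containsKey(entity)) {
                    last.put(entity, row.get(entity));   // observed value
                } else if (last.containsKey(entity)) {
                    row.put(entity, last.get(entity));   // gap-filled value
                }
            }
            filled.put(e.getKey(), row);
        }
        return filled;
    }

    public static void main(String[] args) {
        SortedMap<Long, Map<String, Double>> buckets = new TreeMap<>();
        buckets.put(0L, new HashMap<>(Map.of("a", 1.0, "b", 2.0)));
        buckets.put(1L, new HashMap<>(Map.of("a", 3.0)));  // "b" missing here
        Map<Long, Map<String, Double>> filled = gapfill(buckets, Set.of("a", "b"));
        System.out.println(filled.get(1L).get("b"));  // carried forward from bucket 0
    }
}
```

Once every (bucket, entity) slot is populated, a per-bucket aggregation such as SUM or AVG sees a complete matrix.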
## Upgrade Notes
Does this PR prevent a zero-downtime upgrade? (Assume upgrade order: Controller, Broker, Server, Minion)
* [ ] Yes (Please label as **<code>backward-incompat</code>**, and complete the section below on Release Notes)
Does this PR fix a zero-downtime upgrade introduced earlier?
* [ ] Yes (Please label this as **<code>backward-incompat</code>**, and complete the section below on Release Notes)
Does this PR otherwise need attention when creating release notes? Things to consider:
- New configuration options
- Deprecation of configurations
- Signature changes to public methods/interfaces
- New plugins added or old plugins removed
* [ ] Yes (Please label this PR as **<code>release-notes</code>** and complete the section on Release Notes)
## Release Notes
<!-- If you have tagged this as either backward-incompat or release-notes,
you MUST add text here that you would like to see appear in release notes of the
next release. -->
<!-- If you have a series of commits adding or enabling a feature, then
add this section only in final commit that marks the feature completed.
Refer to earlier release notes to see examples of text.
-->
## Documentation
<!-- If you have introduced a new feature or configuration, please add it to the documentation as well.
See https://docs.pinot.apache.org/developers/developers-and-contributors/update-document
-->
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: commits-unsubscribe@pinot.apache.org
For queries about this service, please contact Infrastructure at:
users@infra.apache.org
---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@pinot.apache.org
For additional commands, e-mail: commits-help@pinot.apache.org
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r829462208
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/request/context/QueryContext.java
##########
@@ -188,6 +194,10 @@ public FilterContext getHavingFilter() {
return _orderByExpressions;
}
+ public QueryContext getSubQueryContext() {
Review comment:
Fixed
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r830747853
##########
File path: pinot-broker/src/main/java/org/apache/pinot/broker/requesthandler/BaseBrokerRequestHandler.java
##########
@@ -217,7 +218,10 @@ private BrokerResponseNative handleSQLRequest(long requestId, String query, Json
requestStatistics.setErrorCode(QueryException.PQL_PARSING_ERROR_CODE);
return new BrokerResponseNative(QueryException.getException(QueryException.PQL_PARSING_ERROR, e));
}
- PinotQuery pinotQuery = brokerRequest.getPinotQuery();
+
+ BrokerRequest serverBrokerRequest = GapfillUtils.stripGapfill(brokerRequest);
Review comment:
It is done inside the GapfillUtils.stripGapfill method.
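For context, stripping a gapfill wrapper amounts to replacing the outer `gapfill(...)` call with its inner expression, so that servers execute the plain query and the broker applies gap-filling during the reduce phase. A rough, hypothetical illustration over expression strings (not the actual GapfillUtils code, which operates on parsed request objects):

```java
public class StripGapfillSketch {
    // Hypothetical: if the expression is gapfill(inner, ...), return the inner
    // (first-argument) expression; otherwise return the expression unchanged.
    static String stripGapfill(String expression) {
        String prefix = "gapfill(";
        if (!expression.startsWith(prefix)) {
            return expression;
        }
        int start = prefix.length();
        // The wrapped column is the first argument, up to the first comma
        // (or the closing parenthesis if gapfill has a single argument).
        int comma = expression.indexOf(',', start);
        int end = comma >= 0 ? comma : expression.lastIndexOf(')');
        return expression.substring(start, end).trim();
    }

    public static void main(String[] args) {
        System.out.println(stripGapfill("gapfill(ts, '1:HOURS')"));  // ts
        System.out.println(stripGapfill("col1"));                    // col1
    }
}
```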
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r829464458
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/request/context/utils/BrokerRequestToQueryContextConverter.java
##########
@@ -42,23 +42,42 @@
import org.apache.pinot.common.utils.request.FilterQueryTree;
import org.apache.pinot.common.utils.request.RequestUtils;
import org.apache.pinot.core.query.request.context.QueryContext;
+import org.apache.pinot.core.util.GapfillUtils;
import org.apache.pinot.segment.spi.AggregationFunctionType;
public class BrokerRequestToQueryContextConverter {
private BrokerRequestToQueryContextConverter() {
}
+ /**
+ * Validate the gapfill query.
+ */
+ public static void validateGapfillQuery(BrokerRequest brokerRequest) {
+ if (brokerRequest.getPinotQuery() != null) {
+ QueryContext queryContext = convertSQL(brokerRequest.getPinotQuery(), brokerRequest);
+ GapfillUtils.setGapfillType(queryContext);
+ }
+ }
+
/**
* Converts the given {@link BrokerRequest} into a {@link QueryContext}.
*/
public static QueryContext convert(BrokerRequest brokerRequest) {
- return brokerRequest.getPinotQuery() != null ? convertSQL(brokerRequest) : convertPQL(brokerRequest);
+ if (brokerRequest.getPinotQuery() != null) {
+ QueryContext queryContext = convertSQL(brokerRequest.getPinotQuery(), brokerRequest);
+ GapfillUtils.setGapfillType(queryContext);
+ return queryContext;
+ } else {
+ return convertPQL(brokerRequest);
+ }
}
- private static QueryContext convertSQL(BrokerRequest brokerRequest) {
- PinotQuery pinotQuery = brokerRequest.getPinotQuery();
-
+ private static QueryContext convertSQL(PinotQuery pinotQuery, BrokerRequest brokerRequest) {
+ QueryContext subQueryContext = null;
Review comment:
Fixed
[GitHub] [pinot] codecov-commenter edited a comment on pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#issuecomment-1013577523
# [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) Report
> Merging [#8029](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (cea0596) into [master](https://codecov.io/gh/apache/pinot/commit/cd311bcc2da2d0c7ecb05970581926b5af37f358?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (cd311bc) will **decrease** coverage by `40.20%`.
> The diff coverage is `17.89%`.
> :exclamation: Current head cea0596 differs from pull request most recent head 96e0a1c. Consider uploading reports for the commit 96e0a1c to get more accurate results
```diff
@@              Coverage Diff              @@
##             master    #8029       +/-   ##
=============================================
- Coverage     70.76%   30.55%   -40.21%
=============================================
  Files          1639     1636        -3
  Lines         85920    86065      +145
  Branches      12922    13025      +103
=============================================
- Hits          60801    26298    -34503
- Misses        20925    57384    +36459
+ Partials       4194     2383     -1811
```
| Flag | Coverage Δ | |
|---|---|---|
| integration1 | `28.81% <17.55%> (-0.08%)` | :arrow_down: |
| integration2 | `27.33% <17.89%> (-0.22%)` | :arrow_down: |
| unittests1 | `?` | |
| unittests2 | `?` | |
Flags with carried forward coverage won't be shown. [Click here](https://docs.codecov.io/docs/carryforward-flags?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#carryforward-flags-in-the-pull-request-comment) to find out more.
| [Impacted Files](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | Coverage Δ | |
|---|---|---|
| [...roker/requesthandler/GrpcBrokerRequestHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtYnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9icm9rZXIvcmVxdWVzdGhhbmRsZXIvR3JwY0Jyb2tlclJlcXVlc3RIYW5kbGVyLmphdmE=) | `75.67% <ø> (ø)` | |
| [.../pinot/core/query/reduce/GapfillFilterHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvR2FwZmlsbEZpbHRlckhhbmRsZXIuamF2YQ==) | `0.00% <0.00%> (ø)` | |
| [...ache/pinot/core/query/reduce/GapfillProcessor.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvR2FwZmlsbFByb2Nlc3Nvci5qYXZh) | `0.00% <0.00%> (ø)` | |
| [...e/pinot/core/query/reduce/RowBasedBlockValSet.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUm93QmFzZWRCbG9ja1ZhbFNldC5qYXZh) | `0.00% <0.00%> (ø)` | |
| [...t/core/query/selection/SelectionOperatorUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9zZWxlY3Rpb24vU2VsZWN0aW9uT3BlcmF0b3JVdGlscy5qYXZh) | `69.70% <ø> (-22.48%)` | :arrow_down: |
| [.../java/org/apache/pinot/core/util/GapfillUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS91dGlsL0dhcGZpbGxVdGlscy5qYXZh) | `16.57% <13.46%> (-47.07%)` | :arrow_down: |
| [...pache/pinot/common/utils/request/RequestUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9jb21tb24vdXRpbHMvcmVxdWVzdC9SZXF1ZXN0VXRpbHMuamF2YQ==) | `56.46% <33.33%> (-31.04%)` | :arrow_down: |
| [...inot/core/query/reduce/PostAggregationHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUG9zdEFnZ3JlZ2F0aW9uSGFuZGxlci5qYXZh) | `83.78% <50.00%> (-7.99%)` | :arrow_down: |
| [.../core/query/reduce/filter/PredicateRowMatcher.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvZmlsdGVyL1ByZWRpY2F0ZVJvd01hdGNoZXIuamF2YQ==) | `50.00% <50.00%> (ø)` | |
| [...org/apache/pinot/sql/parsers/CalciteSqlParser.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9zcWwvcGFyc2Vycy9DYWxjaXRlU3FsUGFyc2VyLmphdmE=) | `63.74% <55.55%> (-23.97%)` | :arrow_down: |
| ... and [1147 more](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Last update [cd311bc...96e0a1c](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r830413641
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/selection/SelectionOperatorUtils.java
##########
@@ -79,7 +79,6 @@ private SelectionOperatorUtils() {
private static final String FLOAT_PATTERN = "#########0.0####";
private static final String DOUBLE_PATTERN = "###################0.0#########";
private static final DecimalFormatSymbols DECIMAL_FORMAT_SYMBOLS = DecimalFormatSymbols.getInstance(Locale.US);
-
Review comment:
Fixed
[GitHub] [pinot] codecov-commenter edited a comment on pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#issuecomment-1013577523
# [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) Report
> Merging [#8029](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (a5316f7) into [master](https://codecov.io/gh/apache/pinot/commit/3f98ce37fdaef0335fcd82e621489d65751b1f55?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (3f98ce3) will **decrease** coverage by `1.14%`.
> The diff coverage is `82.59%`.
[![Impacted file tree graph](https://codecov.io/gh/apache/pinot/pull/8029/graphs/tree.svg?width=650&height=150&src=pr&token=4ibza2ugkz&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
```diff
@@             Coverage Diff              @@
##             master    #8029      +/-   ##
============================================
- Coverage     70.72%   69.58%    -1.15%
+ Complexity     4242     4241        -1
============================================
  Files          1631     1641       +10
  Lines         85279    85951      +672
  Branches      12844    13012      +168
============================================
- Hits          60316    59805      -511
- Misses        20799    21973     +1174
- Partials       4164     4173        +9
```
| Flag | Coverage Δ | |
|---|---|---|
| integration1 | `?` | |
| integration2 | `27.33% <16.14%> (-0.17%)` | :arrow_down: |
| unittests1 | `67.14% <82.30%> (+0.15%)` | :arrow_up: |
| unittests2 | `13.99% <0.00%> (-0.10%)` | :arrow_down: |
Flags with carried forward coverage won't be shown. [Click here](https://docs.codecov.io/docs/carryforward-flags?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#carryforward-flags-in-the-pull-request-comment) to find out more.
| [Impacted Files](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | Coverage Δ | |
|---|---|---|
| [...roker/requesthandler/BaseBrokerRequestHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtYnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9icm9rZXIvcmVxdWVzdGhhbmRsZXIvQmFzZUJyb2tlclJlcXVlc3RIYW5kbGVyLmphdmE=) | `70.91% <0.00%> (-0.94%)` | :arrow_down: |
| [...t/core/query/selection/SelectionOperatorUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9zZWxlY3Rpb24vU2VsZWN0aW9uT3BlcmF0b3JVdGlscy5qYXZh) | `91.58% <0.00%> (-0.60%)` | :arrow_down: |
| [...query/reduce/ColumnDataToBlockValSetConverter.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvQ29sdW1uRGF0YVRvQmxvY2tWYWxTZXRDb252ZXJ0ZXIuamF2YQ==) | `17.30% <17.30%> (ø)` | |
| [...ot/core/query/reduce/filter/RowMatcherFactory.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvZmlsdGVyL1Jvd01hdGNoZXJGYWN0b3J5LmphdmE=) | `66.66% <66.66%> (ø)` | |
| [...inot/core/query/reduce/PostAggregationHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUG9zdEFnZ3JlZ2F0aW9uSGFuZGxlci5qYXZh) | `92.10% <75.00%> (+0.34%)` | :arrow_up: |
| [...xt/utils/BrokerRequestToQueryContextConverter.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZXF1ZXN0L2NvbnRleHQvdXRpbHMvQnJva2VyUmVxdWVzdFRvUXVlcnlDb250ZXh0Q29udmVydGVyLmphdmE=) | `92.68% <76.74%> (-5.71%)` | :arrow_down: |
| [.../java/org/apache/pinot/core/util/GapfillUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS91dGlsL0dhcGZpbGxVdGlscy5qYXZh) | `76.53% <82.85%> (+12.89%)` | :arrow_up: |
| [...va/org/apache/pinot/core/plan/CombinePlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL0NvbWJpbmVQbGFuTm9kZS5qYXZh) | `86.36% <83.33%> (-1.14%)` | :arrow_down: |
| [...ore/query/reduce/filter/LiteralValueExtractor.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvZmlsdGVyL0xpdGVyYWxWYWx1ZUV4dHJhY3Rvci5qYXZh) | `83.33% <83.33%> (ø)` | |
| [...ache/pinot/core/plan/GapfillSelectionPlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL0dhcGZpbGxTZWxlY3Rpb25QbGFuTm9kZS5qYXZh) | `84.21% <84.21%> (ø)` | |
| ... and [119 more](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Last update [3f98ce3...a5316f7](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
[GitHub] [pinot] codecov-commenter edited a comment on pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#issuecomment-1013577523
# [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) Report
> Merging [#8029](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (7746a7a) into [master](https://codecov.io/gh/apache/pinot/commit/3f98ce37fdaef0335fcd82e621489d65751b1f55?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (3f98ce3) will **decrease** coverage by `56.74%`.
> The diff coverage is `0.00%`.
[![Impacted file tree graph](https://codecov.io/gh/apache/pinot/pull/8029/graphs/tree.svg?width=650&height=150&src=pr&token=4ibza2ugkz&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
```diff
@@              Coverage Diff              @@
##             master    #8029       +/-   ##
=============================================
- Coverage     70.72%   13.98%   -56.75%
+ Complexity     4242       81     -4161
=============================================
  Files          1631     1596       -35
  Lines         85279    84082     -1197
  Branches      12844    12816       -28
=============================================
- Hits          60316    11761    -48555
- Misses        20799    71444    +50645
+ Partials       4164      877     -3287
```
| Flag | Coverage Δ | |
|---|---|---|
| integration1 | `?` | |
| integration2 | `?` | |
| unittests1 | `?` | |
| unittests2 | `13.98% <0.00%> (-0.12%)` | :arrow_down: |
Flags with carried forward coverage won't be shown. [Click here](https://docs.codecov.io/docs/carryforward-flags?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#carryforward-flags-in-the-pull-request-comment) to find out more.
| [Impacted Files](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | Coverage Δ | |
|---|---|---|
| [...roker/requesthandler/BaseBrokerRequestHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtYnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9icm9rZXIvcmVxdWVzdGhhbmRsZXIvQmFzZUJyb2tlclJlcXVlc3RIYW5kbGVyLmphdmE=) | `24.41% <0.00%> (-47.44%)` | :arrow_down: |
| [...org/apache/pinot/sql/parsers/CalciteSqlParser.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9zcWwvcGFyc2Vycy9DYWxjaXRlU3FsUGFyc2VyLmphdmE=) | `0.00% <0.00%> (-86.56%)` | :arrow_down: |
| [...va/org/apache/pinot/core/plan/CombinePlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL0NvbWJpbmVQbGFuTm9kZS5qYXZh) | `0.00% <0.00%> (-87.50%)` | :arrow_down: |
| [...ache/pinot/core/plan/GapfillSelectionPlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL0dhcGZpbGxTZWxlY3Rpb25QbGFuTm9kZS5qYXZh) | `0.00% <0.00%> (ø)` | |
| [...pinot/core/plan/maker/InstancePlanMakerImplV2.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL21ha2VyL0luc3RhbmNlUGxhbk1ha2VySW1wbFYyLmphdmE=) | `0.00% <0.00%> (-76.77%)` | :arrow_down: |
| [...query/reduce/ColumnDataToBlockValSetConverter.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvQ29sdW1uRGF0YVRvQmxvY2tWYWxTZXRDb252ZXJ0ZXIuamF2YQ==) | `0.00% <0.00%> (ø)` | |
| [...not/core/query/reduce/GapFillDataTableReducer.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvR2FwRmlsbERhdGFUYWJsZVJlZHVjZXIuamF2YQ==) | `0.00% <0.00%> (ø)` | |
| [.../pinot/core/query/reduce/GapfillFilterHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvR2FwZmlsbEZpbHRlckhhbmRsZXIuamF2YQ==) | `0.00% <0.00%> (ø)` | |
| [...e/pinot/core/query/reduce/HavingFilterHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvSGF2aW5nRmlsdGVySGFuZGxlci5qYXZh) | `0.00% <0.00%> (-91.31%)` | :arrow_down: |
| [...inot/core/query/reduce/PostAggregationHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUG9zdEFnZ3JlZ2F0aW9uSGFuZGxlci5qYXZh) | `0.00% <0.00%> (-91.77%)` | :arrow_down: |
| ... and [1320 more](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Last update [3f98ce3...7746a7a](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: commits-unsubscribe@pinot.apache.org
For queries about this service, please contact Infrastructure at:
users@infra.apache.org
---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@pinot.apache.org
For additional commands, e-mail: commits-help@pinot.apache.org
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r815388675
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/plan/CombinePlanNode.java
##########
@@ -180,6 +191,8 @@ public BaseCombineOperator run() {
// Selection order-by
return new SelectionOrderByCombineOperator(operators, _queryContext, _executorService);
}
+ } else if (gapfillType != GapfillUtils.GapfillType.NONE) {
+ return new SelectionOnlyCombineOperator(operators, _queryContext, _executorService);
Review comment:
We do not need to do the sorting on the Pinot server; the sorting will be done on the broker side.
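For context, the broker-side ordering this comment refers to can be sketched as a plain comparator applied after the server responses are merged. This is an illustrative sketch only, not the PR's actual implementation: the `MergedRow` type and its field names are hypothetical stand-ins for the real DataTable rows.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Hypothetical merged-row holder; the real code operates on DataTable rows.
final class MergedRow {
    final long timeBucketMillis;
    final String seriesKey;

    MergedRow(long timeBucketMillis, String seriesKey) {
        this.timeBucketMillis = timeBucketMillis;
        this.seriesKey = seriesKey;
    }
}

public class BrokerSideSortSketch {
    // Sort merged rows by time bucket, then by series key, after all server
    // responses have been gathered -- servers may return rows unsorted.
    static List<MergedRow> sortOnBroker(List<MergedRow> merged) {
        merged.sort(Comparator.comparingLong((MergedRow r) -> r.timeBucketMillis)
                .thenComparing(r -> r.seriesKey));
        return merged;
    }

    public static void main(String[] args) {
        List<MergedRow> rows = new ArrayList<>(List.of(
                new MergedRow(2000L, "Level_1"),
                new MergedRow(1000L, "Level_0"),
                new MergedRow(1000L, "Level_1")));
        sortOnBroker(rows);
        // prints 1000:Level_0
        System.out.println(rows.get(0).timeBucketMillis + ":" + rows.get(0).seriesKey);
    }
}
```

The point of deferring the sort is that each server only produces a partial, unsorted stream; a single global sort at the broker avoids redundant per-server work.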
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r812295908
##########
File path: pinot-core/src/test/java/org/apache/pinot/queries/PreAggregationGapfillQueriesTest.java
##########
@@ -0,0 +1,3277 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.queries;
+
+import java.io.File;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+import org.apache.commons.io.FileUtils;
+import org.apache.pinot.common.response.broker.BrokerResponseNative;
+import org.apache.pinot.common.response.broker.ResultTable;
+import org.apache.pinot.segment.local.indexsegment.immutable.ImmutableSegmentLoader;
+import org.apache.pinot.segment.local.segment.creator.impl.SegmentIndexCreationDriverImpl;
+import org.apache.pinot.segment.local.segment.readers.GenericRowRecordReader;
+import org.apache.pinot.segment.spi.ImmutableSegment;
+import org.apache.pinot.segment.spi.IndexSegment;
+import org.apache.pinot.segment.spi.creator.SegmentGeneratorConfig;
+import org.apache.pinot.spi.config.table.TableConfig;
+import org.apache.pinot.spi.config.table.TableType;
+import org.apache.pinot.spi.data.DateTimeFormatSpec;
+import org.apache.pinot.spi.data.DateTimeGranularitySpec;
+import org.apache.pinot.spi.data.FieldSpec.DataType;
+import org.apache.pinot.spi.data.Schema;
+import org.apache.pinot.spi.data.readers.GenericRow;
+import org.apache.pinot.spi.utils.ReadMode;
+import org.apache.pinot.spi.utils.builder.TableConfigBuilder;
+import org.testng.Assert;
+import org.testng.annotations.AfterClass;
+import org.testng.annotations.BeforeClass;
+import org.testng.annotations.Test;
+
+
+/**
+ * Query tests for gapfill functionality.
+ */
+// TODO: Item 1. table alias for subquery in next PR
+// TODO: Item 2. Deprecate PostAggregateGapfill implementation in next PR
+@SuppressWarnings("rawtypes")
+public class PreAggregationGapfillQueriesTest extends BaseQueriesTest {
+ private static final File INDEX_DIR = new File(FileUtils.getTempDirectory(), "PreAggregationGapfillQueriesTest");
+ private static final String RAW_TABLE_NAME = "parkingData";
+ private static final String SEGMENT_NAME = "testSegment";
+
+ private static final int NUM_LOTS = 4;
+
+ private static final String IS_OCCUPIED_COLUMN = "isOccupied";
+ private static final String LEVEL_ID_COLUMN = "levelId";
+ private static final String LOT_ID_COLUMN = "lotId";
+ private static final String EVENT_TIME_COLUMN = "eventTime";
+ private static final Schema SCHEMA = new Schema.SchemaBuilder()
+ .addSingleValueDimension(IS_OCCUPIED_COLUMN, DataType.INT)
+ .addSingleValueDimension(LOT_ID_COLUMN, DataType.STRING)
+ .addSingleValueDimension(LEVEL_ID_COLUMN, DataType.STRING)
+ .addSingleValueDimension(EVENT_TIME_COLUMN, DataType.LONG)
+ .setPrimaryKeyColumns(Arrays.asList(LOT_ID_COLUMN, EVENT_TIME_COLUMN))
+ .build();
+ private static final TableConfig TABLE_CONFIG = new TableConfigBuilder(TableType.OFFLINE).setTableName(RAW_TABLE_NAME)
+ .build();
+
+ private IndexSegment _indexSegment;
+ private List<IndexSegment> _indexSegments;
+
+ @Override
+ protected String getFilter() {
+ // NOTE: Use a match all filter to switch between DictionaryBasedAggregationOperator and AggregationOperator
+ return " WHERE eventTime >= 0";
+ }
+
+ @Override
+ protected IndexSegment getIndexSegment() {
+ return _indexSegment;
+ }
+
+ @Override
+ protected List<IndexSegment> getIndexSegments() {
+ return _indexSegments;
+ }
+
+ GenericRow createRow(String time, int levelId, int lotId, boolean isOccupied) {
+ DateTimeFormatSpec dateTimeFormatter
+ = new DateTimeFormatSpec("1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS");
+ GenericRow parkingRow = new GenericRow();
+ parkingRow.putValue(EVENT_TIME_COLUMN, dateTimeFormatter.fromFormatToMillis(time));
+ parkingRow.putValue(LEVEL_ID_COLUMN, "Level_" + levelId);
+ parkingRow.putValue(LOT_ID_COLUMN, "LotId_" + lotId);
+ parkingRow.putValue(IS_OCCUPIED_COLUMN, isOccupied);
+ return parkingRow;
+ }
+
+ @BeforeClass
+ public void setUp()
+ throws Exception {
+ FileUtils.deleteDirectory(INDEX_DIR);
+
+ List<GenericRow> records = new ArrayList<>(NUM_LOTS * 2);
+ records.add(createRow("2021-11-07 04:11:00.000", 0, 0, true));
+ records.add(createRow("2021-11-07 04:21:00.000", 0, 0, true));
+ records.add(createRow("2021-11-07 04:31:00.000", 1, 0, true));
+ records.add(createRow("2021-11-07 05:17:00.000", 0, 1, true));
+ records.add(createRow("2021-11-07 05:37:00.000", 0, 1, true));
+ records.add(createRow("2021-11-07 05:47:00.000", 1, 2, true));
+ records.add(createRow("2021-11-07 06:25:00.000", 0, 2, true));
+ records.add(createRow("2021-11-07 06:35:00.000", 0, 2, true));
+ records.add(createRow("2021-11-07 06:36:00.000", 1, 1, true));
+ records.add(createRow("2021-11-07 07:44:00.000", 0, 3, true));
+ records.add(createRow("2021-11-07 07:46:00.000", 1, 3, true));
+ records.add(createRow("2021-11-07 07:54:00.000", 1, 3, true));
+ records.add(createRow("2021-11-07 08:44:00.000", 0, 2, false));
+ records.add(createRow("2021-11-07 08:44:00.000", 1, 2, false));
+ records.add(createRow("2021-11-07 09:31:00.000", 0, 3, false));
+ records.add(createRow("2021-11-07 09:31:00.000", 1, 3, false));
+ records.add(createRow("2021-11-07 10:17:00.000", 0, 0, false));
+ records.add(createRow("2021-11-07 10:33:00.000", 0, 0, false));
+ records.add(createRow("2021-11-07 10:33:00.000", 1, 0, false));
+ records.add(createRow("2021-11-07 11:54:00.000", 0, 1, false));
+ records.add(createRow("2021-11-07 11:57:00.000", 1, 1, false));
+
+ SegmentGeneratorConfig segmentGeneratorConfig = new SegmentGeneratorConfig(TABLE_CONFIG, SCHEMA);
+ segmentGeneratorConfig.setTableName(RAW_TABLE_NAME);
+ segmentGeneratorConfig.setSegmentName(SEGMENT_NAME);
+ segmentGeneratorConfig.setOutDir(INDEX_DIR.getPath());
+
+ SegmentIndexCreationDriverImpl driver = new SegmentIndexCreationDriverImpl();
+ driver.init(segmentGeneratorConfig, new GenericRowRecordReader(records));
+ driver.build();
+
+ ImmutableSegment immutableSegment = ImmutableSegmentLoader.load(new File(INDEX_DIR, SEGMENT_NAME), ReadMode.mmap);
+ _indexSegment = immutableSegment;
+ _indexSegments = Arrays.asList(immutableSegment);
+ }
+
+ @Test
+ public void datetimeconvertGapfillTestSelectSelect() {
+ DateTimeFormatSpec dateTimeFormatter
+ = new DateTimeFormatSpec("1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + " GapFill(DATETIMECONVERT(eventTime, '1:MILLISECONDS:EPOCH', "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', '1:HOURS'), "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', "
+ + " '2021-11-07 4:00:00.000', '2021-11-07 12:00:00.000', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " levelId, lotId, isOccupied "
+ + "FROM parkingData "
+ + "WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + "LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ int [][] expectedOccupiedSlotsCounts1 =
+ new int [][] {{6, 6}, {8, 4}, {10, 2}, {12, 0}, {6, 4}, {4, 6}, {2, 10}, {0, 10}};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ start = dateTimeFormatter.fromFormatToMillis("2021-11-07 04:00:00.000");
+ int index = 0;
+ for (int i = 0; i < expectedOccupiedSlotsCounts1.length; i++) {
+ int ones = expectedOccupiedSlotsCounts1[i][0];
+ int zeros = expectedOccupiedSlotsCounts1[i][1];
+ int total = ones + zeros;
+ for (int k = 0; k < total; k++) {
+ String firstTimeCol = (String) gapFillRows1.get(index)[0];
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if (gapFillRows1.get(index)[3].equals(1)) {
+ ones--;
+ } else {
+ zeros--;
+ }
+ index++;
+ }
+ Assert.assertEquals(ones, 0);
+ Assert.assertEquals(zeros, 0);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ Assert.assertEquals(gapFillRows1.size(), index);
+
+ String gapfillQuery2 = "SELECT "
+ + "time_col, levelId, lotId, isOccupied, time_col "
+ + "FROM ("
+ + " SELECT GapFill(DATETIMECONVERT(eventTime, '1:MILLISECONDS:EPOCH', "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', '1:HOURS'), "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', "
+ + " '2021-11-07 4:00:00.000', '2021-11-07 12:00:00.000', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
+ + " WHERE isOccupied = 1 "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ int [] expectedOccupiedSlotsCounts2 = new int [] {6, 8, 10, 12, 6, 4, 2, 0};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ start = dateTimeFormatter.fromFormatToMillis("2021-11-07 04:00:00.000");
+ index = 0;
+ for (int i = 0; i < expectedOccupiedSlotsCounts2.length; i++) {
+ for (int k = 0; k < expectedOccupiedSlotsCounts2[i]; k++) {
+ String firstTimeCol = (String) gapFillRows2.get(index)[0];
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(gapFillRows2.get(index)[3], 1);
+ index++;
+ }
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ Assert.assertEquals(gapFillRows2.size(), index);
+ }
+
+ @Test
+ public void datetimeconvertGapfillTestAggregateSelect() {
+ DateTimeFormatSpec dateTimeFormatter
+ = new DateTimeFormatSpec("1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + "GapFill(time_col, "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', "
+ + " '2021-11-07 4:00:00.000', '2021-11-07 12:00:00.000', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)), levelId, lotId, occupied "
+ + "FROM ("
+ + " SELECT DATETIMECONVERT(eventTime, '1:MILLISECONDS:EPOCH', "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', '1:HOURS') AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + ") "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ int [][] expectedOccupiedSlotsCounts1 =
+ new int [][] {{2, 6}, {4, 4}, {6, 2}, {8, 0}, {6, 2}, {4, 4}, {2, 6}, {0, 8}};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ start = dateTimeFormatter.fromFormatToMillis("2021-11-07 04:00:00.000");
+ int index = 0;
+ for (int i = 0; i < expectedOccupiedSlotsCounts1.length; i++) {
+ int ones = expectedOccupiedSlotsCounts1[i][0];
+ int zeros = expectedOccupiedSlotsCounts1[i][1];
+ int total = ones + zeros;
+ for (int k = 0; k < total; k++) {
+ String firstTimeCol = (String) gapFillRows1.get(index)[0];
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if (gapFillRows1.get(index)[3].equals(1)) {
+ ones--;
+ } else {
+ zeros--;
+ }
+ index++;
+ }
+ Assert.assertEquals(ones, 0);
+ Assert.assertEquals(zeros, 0);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ Assert.assertEquals(gapFillRows1.size(), index);
+
+ String gapfillQuery2 = "SELECT "
+ + "GapFill(time_col, "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', "
+ + " '2021-11-07 4:00:00.000', '2021-11-07 12:00:00.000', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)), levelId, lotId, occupied "
+ + "FROM ("
+ + " SELECT DATETIMECONVERT(eventTime, '1:MILLISECONDS:EPOCH', "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', '1:HOURS') AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + ") "
+ + " WHERE occupied = 1 "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ int [] expectedOccupiedSlotsCounts2 = new int [] {2, 4, 6, 8, 6, 4, 2};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ start = dateTimeFormatter.fromFormatToMillis("2021-11-07 04:00:00.000");
+ index = 0;
+ for (int i = 0; i < expectedOccupiedSlotsCounts2.length; i++) {
+ for (int k = 0; k < expectedOccupiedSlotsCounts2[i]; k++) {
+ String firstTimeCol = (String) gapFillRows2.get(index)[0];
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(gapFillRows2.get(index)[3], 1);
+ index++;
+ }
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ Assert.assertEquals(gapFillRows2.size(), index);
+ }
+
+ @Test
+ public void datetimeconvertGapfillTestGapfillAggregate() {
+ DateTimeFormatSpec dateTimeFormatter
+ = new DateTimeFormatSpec("1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String dataTimeConvertQuery = "SELECT "
+ + "DATETIMECONVERT(eventTime, '1:MILLISECONDS:EPOCH', "
+ + "'1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', '1:HOURS') AS time_col, "
+ + "SUM(isOccupied) "
+ + "FROM parkingData "
+ + "WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + "GROUP BY 1 "
+ + "ORDER BY 1 "
+ + "LIMIT 200";
+
+ BrokerResponseNative dateTimeConvertBrokerResponse = getBrokerResponseForSqlQuery(dataTimeConvertQuery);
+
+ ResultTable dateTimeConvertResultTable = dateTimeConvertBrokerResponse.getResultTable();
+ Assert.assertEquals(dateTimeConvertResultTable.getRows().size(), 8);
+
+ String gapfillQuery1 = "SELECT "
+ + "time_col, SUM(isOccupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(DATETIMECONVERT(eventTime, '1:MILLISECONDS:EPOCH', "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', '1:HOURS'), "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', "
+ + " '2021-11-07 4:00:00.000', '2021-11-07 12:00:00.000', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
+ + " GROUP BY time_col "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ double [] expectedOccupiedSlotsCounts1 = new double [] {6, 8, 10, 12, 6, 4, 2, 0};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ Assert.assertEquals(gapFillRows1.size(), expectedOccupiedSlotsCounts1.length);
+ start = dateTimeFormatter.fromFormatToMillis("2021-11-07 04:00:00.000");
+ for (int i = 0; i < expectedOccupiedSlotsCounts1.length; i++) {
+ String firstTimeCol = (String) gapFillRows1.get(i)[0];
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts1[i], gapFillRows1.get(i)[1]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+
+ String gapfillQuery2 = "SELECT "
+ + "time_col, SUM(isOccupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(DATETIMECONVERT(eventTime, '1:MILLISECONDS:EPOCH', "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', '1:HOURS'), "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', "
+ + " '2021-11-07 4:00:00.000', '2021-11-07 12:00:00.000', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
+ + " WHERE isOccupied = 1 "
+ + " GROUP BY time_col "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ double [] expectedOccupiedSlotsCounts2 = new double [] {6, 8, 10, 12, 6, 4, 2};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ Assert.assertEquals(gapFillRows2.size(), expectedOccupiedSlotsCounts2.length);
+ start = dateTimeFormatter.fromFormatToMillis("2021-11-07 04:00:00.000");
+ for (int i = 0; i < expectedOccupiedSlotsCounts2.length; i++) {
+ String firstTimeCol = (String) gapFillRows2.get(i)[0];
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts2[i], gapFillRows2.get(i)[1]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ }
+
+ @Test
+ public void datetimeconvertGapfillTestGapfillAggregateWithOptionalGroupBy() {
+ DateTimeFormatSpec dateTimeFormatter
+ = new DateTimeFormatSpec("1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + "time_col, levelId, SUM(isOccupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(DATETIMECONVERT(eventTime, '1:MILLISECONDS:EPOCH', "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', '1:HOURS'), "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', "
+ + " '2021-11-07 4:00:00.000', '2021-11-07 12:00:00.000', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
+ + " GROUP BY time_col, levelId "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ double [] expectedOccupiedSlotsCountsForLevel11 = new double [] {4, 5, 6, 5, 3, 2, 1, 0};
+ double [] expectedOccupiedSlotsCountsForLevel21 = new double [] {2, 3, 4, 7, 3, 2, 1, 0};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ Assert.assertEquals(gapFillRows1.size(), expectedOccupiedSlotsCountsForLevel11.length * 2);
+ start = dateTimeFormatter.fromFormatToMillis("2021-11-07 04:00:00.000");
+ for (int i = 0; i < expectedOccupiedSlotsCountsForLevel11.length * 2; i += 2) {
+ String firstTimeCol = (String) gapFillRows1.get(i)[0];
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if ("Level_0".equals(gapFillRows1.get(i)[1])) {
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel11[i / 2], gapFillRows1.get(i)[2]);
+ } else {
+ Assert.assertEquals("Level_1", gapFillRows1.get(i)[1]);
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel21[i / 2], gapFillRows1.get(i)[2]);
+ }
+ firstTimeCol = (String) gapFillRows1.get(i + 1)[0];
+ timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if ("Level_0".equals(gapFillRows1.get(i + 1)[1])) {
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel11[i / 2], gapFillRows1.get(i + 1)[2]);
+ } else {
+ Assert.assertEquals("Level_1", gapFillRows1.get(i + 1)[1]);
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel21[i / 2], gapFillRows1.get(i + 1)[2]);
+ }
+ start += dateTimeGranularity.granularityToMillis();
+ }
+
+ String gapfillQuery2 = "SELECT "
+ + "time_col, levelId, SUM(isOccupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(DATETIMECONVERT(eventTime, '1:MILLISECONDS:EPOCH', "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', '1:HOURS'), "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', "
+ + " '2021-11-07 4:00:00.000', '2021-11-07 12:00:00.000', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
+ + " WHERE isOccupied = 1 "
+ + " GROUP BY time_col, levelId "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ double [] expectedOccupiedSlotsCountsForLevel12 = new double [] {4, 5, 6, 5, 3, 2, 1};
+ double [] expectedOccupiedSlotsCountsForLevel22 = new double [] {2, 3, 4, 7, 3, 2, 1};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ Assert.assertEquals(gapFillRows2.size(), expectedOccupiedSlotsCountsForLevel12.length * 2);
+ start = dateTimeFormatter.fromFormatToMillis("2021-11-07 04:00:00.000");
+ for (int i = 0; i < expectedOccupiedSlotsCountsForLevel12.length * 2; i += 2) {
+ String firstTimeCol = (String) gapFillRows2.get(i)[0];
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if ("Level_0".equals(gapFillRows2.get(i)[1])) {
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel12[i / 2], gapFillRows2.get(i)[2]);
+ } else {
+ Assert.assertEquals("Level_1", gapFillRows2.get(i)[1]);
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel22[i / 2], gapFillRows2.get(i)[2]);
+ }
+ firstTimeCol = (String) gapFillRows2.get(i + 1)[0];
+ timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if ("Level_0".equals(gapFillRows2.get(i + 1)[1])) {
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel12[i / 2], gapFillRows2.get(i + 1)[2]);
+ } else {
+ Assert.assertEquals("Level_1", gapFillRows2.get(i + 1)[1]);
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel22[i / 2], gapFillRows2.get(i + 1)[2]);
+ }
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ }
+
+ @Test
+ public void datetimeconvertGapfillTestGapfillAggregateWithHavingClause() {
+ DateTimeFormatSpec dateTimeFormatter
+ = new DateTimeFormatSpec("1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + "time_col, levelId, SUM(isOccupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(DATETIMECONVERT(eventTime, '1:MILLISECONDS:EPOCH', "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', '1:HOURS'), "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', "
+ + " '2021-11-07 4:00:00.000', '2021-11-07 12:00:00.000', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
+ + " GROUP BY time_col, levelId "
+ + " HAVING occupied_slots_count > 0"
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ double [] expectedOccupiedSlotsCountsForLevel11 = new double [] {4, 5, 6, 5, 3, 2, 1};
+ double [] expectedOccupiedSlotsCountsForLevel21 = new double [] {2, 3, 4, 7, 3, 2, 1};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ Assert.assertEquals(gapFillRows1.size(), expectedOccupiedSlotsCountsForLevel11.length * 2);
+ start = dateTimeFormatter.fromFormatToMillis("2021-11-07 04:00:00.000");
+ for (int i = 0; i < expectedOccupiedSlotsCountsForLevel11.length * 2; i += 2) {
+ String firstTimeCol = (String) gapFillRows1.get(i)[0];
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if ("Level_0".equals(gapFillRows1.get(i)[1])) {
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel11[i / 2], gapFillRows1.get(i)[2]);
+ } else {
+ Assert.assertEquals("Level_1", gapFillRows1.get(i)[1]);
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel21[i / 2], gapFillRows1.get(i)[2]);
+ }
+ firstTimeCol = (String) gapFillRows1.get(i + 1)[0];
+ timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if ("Level_0".equals(gapFillRows1.get(i + 1)[1])) {
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel11[i / 2], gapFillRows1.get(i + 1)[2]);
+ } else {
+ Assert.assertEquals("Level_1", gapFillRows1.get(i + 1)[1]);
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel21[i / 2], gapFillRows1.get(i + 1)[2]);
+ }
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ }
+
+ @Test
+ public void datetimeconvertGapfillTestAggregateAggregate() {
+ DateTimeFormatSpec dateTimeFormatter
+ = new DateTimeFormatSpec("1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + "time_col, SUM(occupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(time_col, "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', "
+ + " '2021-11-07 4:00:00.000', '2021-11-07 12:00:00.000', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " occupied, lotId, levelId"
+ + " FROM ("
+ + " SELECT DATETIMECONVERT(eventTime, '1:MILLISECONDS:EPOCH', "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', '1:HOURS') AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + " ) "
+ + " LIMIT 200 "
+ + ") "
+ + " GROUP BY time_col "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ double [] expectedOccupiedSlotsCounts1 = new double [] {2, 4, 6, 8, 6, 4, 2, 0};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ Assert.assertEquals(gapFillRows1.size(), expectedOccupiedSlotsCounts1.length);
+ start = dateTimeFormatter.fromFormatToMillis("2021-11-07 04:00:00.000");
+ for (int i = 0; i < expectedOccupiedSlotsCounts1.length; i++) {
+ String firstTimeCol = (String) gapFillRows1.get(i)[0];
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts1[i], gapFillRows1.get(i)[1]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+
+ String gapfillQuery2 = "SELECT "
+ + "time_col, SUM(occupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(time_col, "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', "
+ + " '2021-11-07 4:00:00.000', '2021-11-07 12:00:00.000', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " occupied, lotId, levelId"
+ + " FROM ("
+ + " SELECT DATETIMECONVERT(eventTime, '1:MILLISECONDS:EPOCH', "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', '1:HOURS') AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + " ) "
+ + " LIMIT 200 "
+ + ") "
+ + " WHERE occupied = 1 "
+ + " GROUP BY time_col "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ double [] expectedOccupiedSlotsCounts2 = new double [] {2, 4, 6, 8, 6, 4, 2};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ Assert.assertEquals(gapFillRows2.size(), expectedOccupiedSlotsCounts2.length);
+ start = dateTimeFormatter.fromFormatToMillis("2021-11-07 04:00:00.000");
+ for (int i = 0; i < expectedOccupiedSlotsCounts2.length; i++) {
+ String firstTimeCol = (String) gapFillRows2.get(i)[0];
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts2[i], gapFillRows2.get(i)[1]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ }
+
+ @Test
+ public void datetimeconvertGapfillTestAggregateAggregateWithOptionalGroupBy() {
+ DateTimeFormatSpec dateTimeFormatter
+ = new DateTimeFormatSpec("1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + "time_col, levelId, SUM(occupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(time_col, "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', "
+ + " '2021-11-07 4:00:00.000', '2021-11-07 12:00:00.000', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " occupied, lotId, levelId"
+ + " FROM ("
+ + " SELECT DATETIMECONVERT(eventTime, '1:MILLISECONDS:EPOCH', "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', '1:HOURS') AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + " ) "
+ + " LIMIT 200 "
+ + ") "
+ + " GROUP BY time_col, levelId "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ double [] expectedOccupiedSlotsCounts1 = new double [] {1, 2, 3, 4, 3, 2, 1, 0};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ Assert.assertEquals(gapFillRows1.size(), expectedOccupiedSlotsCounts1.length * 2);
+ start = dateTimeFormatter.fromFormatToMillis("2021-11-07 04:00:00.000");
+    for (int i = 0; i < expectedOccupiedSlotsCounts1.length * 2; i += 2) {
+ String firstTimeCol = (String) gapFillRows1.get(i)[0];
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts1[i / 2], gapFillRows1.get(i)[2]);
+ firstTimeCol = (String) gapFillRows1.get(i + 1)[0];
+ timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+      Assert.assertEquals(expectedOccupiedSlotsCounts1[i / 2], gapFillRows1.get(i + 1)[2]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+
+
+ String gapfillQuery2 = "SELECT "
+ + "time_col, levelId, SUM(occupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(time_col, "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', "
+ + " '2021-11-07 4:00:00.000', '2021-11-07 12:00:00.000', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " occupied, lotId, levelId"
+ + " FROM ("
+ + " SELECT DATETIMECONVERT(eventTime, '1:MILLISECONDS:EPOCH', "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', '1:HOURS') AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + " ) "
+ + " LIMIT 200 "
+ + ") "
+ + " WHERE occupied = 1 "
+ + " GROUP BY time_col, levelId "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ double [] expectedOccupiedSlotsCounts2 = new double [] {1, 2, 3, 4, 3, 2, 1};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ Assert.assertEquals(gapFillRows2.size(), expectedOccupiedSlotsCounts2.length * 2);
+ start = dateTimeFormatter.fromFormatToMillis("2021-11-07 04:00:00.000");
+    for (int i = 0; i < expectedOccupiedSlotsCounts2.length * 2; i += 2) {
+ String firstTimeCol = (String) gapFillRows2.get(i)[0];
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+      Assert.assertEquals(expectedOccupiedSlotsCounts2[i / 2], gapFillRows2.get(i)[2]);
+ firstTimeCol = (String) gapFillRows2.get(i + 1)[0];
+ timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+      Assert.assertEquals(expectedOccupiedSlotsCounts2[i / 2], gapFillRows2.get(i + 1)[2]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ }
+
+ @Test
+ public void datetimeconvertGapfillTestAggregateAggregateWithHavingClause() {
+ String gapfillQuery1 = "SELECT "
+ + "time_col, levelId, SUM(occupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(time_col, "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', "
+ + " '2021-11-07 4:00:00.000', '2021-11-07 12:00:00.000', '1:HOURS',"
+        + "          FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " occupied, lotId, levelId"
+ + " FROM ("
+ + " SELECT DATETIMECONVERT(eventTime, '1:MILLISECONDS:EPOCH', "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', '1:HOURS') AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + " ) "
+ + " LIMIT 200 "
+ + ") "
+ + " GROUP BY time_col, levelId "
+ + " HAVING occupied_slots_count > 0"
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ double [] expectedOccupiedSlotsCounts1 = new double [] {1, 2, 3, 4, 3, 2, 1};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ Assert.assertEquals(gapFillRows1.size(), expectedOccupiedSlotsCounts1.length * 2);
+ DateTimeFormatSpec dateTimeFormatter
+ = new DateTimeFormatSpec("1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+
+ long start = dateTimeFormatter.fromFormatToMillis("2021-11-07 04:00:00.000");
+    for (int i = 0; i < expectedOccupiedSlotsCounts1.length * 2; i += 2) {
+ String firstTimeCol = (String) gapFillRows1.get(i)[0];
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts1[i / 2], gapFillRows1.get(i)[2]);
+ firstTimeCol = (String) gapFillRows1.get(i + 1)[0];
+ timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+      Assert.assertEquals(expectedOccupiedSlotsCounts1[i / 2], gapFillRows1.get(i + 1)[2]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ }
+
+ @Test
+ public void toEpochHoursGapfillTestSelectSelect() {
+ DateTimeFormatSpec dateTimeFormatter = new DateTimeFormatSpec("1:HOURS:EPOCH");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + " GapFill(ToEpochHours(eventTime), '1:HOURS:EPOCH', "
+ + " '454516', '454524', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId "
+ + "FROM parkingData "
+ + "WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + "LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ int [][] expectedOccupiedSlotsCounts1 =
+ new int [][] {{6, 6}, {8, 4}, {10, 2}, {12, 0}, {6, 4}, {4, 6}, {2, 10}, {0, 10}};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ start = dateTimeFormatter.fromFormatToMillis("454516");
+ int index = 0;
+ for (int i = 0; i < expectedOccupiedSlotsCounts1.length; i++) {
+ int ones = expectedOccupiedSlotsCounts1[i][0];
+ int zeros = expectedOccupiedSlotsCounts1[i][1];
+ int total = ones + zeros;
+ for (int k = 0; k < total; k++) {
+ String firstTimeCol = ((Long) (gapFillRows1.get(index)[0])).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if (gapFillRows1.get(index)[3].equals(1)) {
+ ones--;
+ } else {
+ zeros--;
+ }
+ index++;
+ }
+ Assert.assertEquals(ones, 0);
+ Assert.assertEquals(zeros, 0);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ Assert.assertEquals(gapFillRows1.size(), index);
+
+ String gapfillQuery2 = "SELECT "
+ + "time_col, levelId, lotId, isOccupied, time_col "
+ + "FROM ("
+ + " SELECT GapFill(ToEpochHours(eventTime), '1:HOURS:EPOCH', "
+ + " '454516', '454524', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
+ + " WHERE isOccupied = 1 "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ int [] expectedOccupiedSlotsCounts2 = new int [] {6, 8, 10, 12, 6, 4, 2, 0};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ start = dateTimeFormatter.fromFormatToMillis("454516");
+ index = 0;
+ for (int i = 0; i < expectedOccupiedSlotsCounts2.length; i++) {
+ for (int k = 0; k < expectedOccupiedSlotsCounts2[i]; k++) {
+ String firstTimeCol = ((Long) gapFillRows2.get(index)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(gapFillRows2.get(index)[3], 1);
+ index++;
+ }
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ Assert.assertEquals(gapFillRows2.size(), index);
+ }
+
+ @Test
+ public void toEpochHoursGapfillTestAggregateSelect() {
+ DateTimeFormatSpec dateTimeFormatter = new DateTimeFormatSpec("1:HOURS:EPOCH");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + " GapFill(time_col, '1:HOURS:EPOCH', "
+ + " '454516', '454524', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)),"
+ + " levelId, lotId, occupied "
+ + "FROM ("
+ + " SELECT ToEpochHours(eventTime) AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + ") "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ int [][] expectedOccupiedSlotsCounts1 =
+ new int [][] {{2, 6}, {4, 4}, {6, 2}, {8, 0}, {6, 2}, {4, 4}, {2, 6}, {0, 8}};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ start = dateTimeFormatter.fromFormatToMillis("454516");
+ int index = 0;
+ for (int i = 0; i < expectedOccupiedSlotsCounts1.length; i++) {
+ int ones = expectedOccupiedSlotsCounts1[i][0];
+ int zeros = expectedOccupiedSlotsCounts1[i][1];
+ int total = ones + zeros;
+ for (int k = 0; k < total; k++) {
+ String firstTimeCol = ((Long) gapFillRows1.get(index)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if (gapFillRows1.get(index)[3].equals(1)) {
+ ones--;
+ } else {
+ zeros--;
+ }
+ index++;
+ }
+ Assert.assertEquals(ones, 0);
+ Assert.assertEquals(zeros, 0);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ Assert.assertEquals(gapFillRows1.size(), index);
+
+ String gapfillQuery2 = "SELECT "
+ + " GapFill(time_col, '1:HOURS:EPOCH', "
+ + " '454516', '454524', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)),"
+ + " levelId, lotId, occupied "
+ + "FROM ("
+ + " SELECT ToEpochHours(eventTime) AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + ") "
+ + " WHERE occupied = 1 "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ int [] expectedOccupiedSlotsCounts2 = new int [] {2, 4, 6, 8, 6, 4, 2};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ start = dateTimeFormatter.fromFormatToMillis("454516");
+ index = 0;
+ for (int i = 0; i < expectedOccupiedSlotsCounts2.length; i++) {
+ for (int k = 0; k < expectedOccupiedSlotsCounts2[i]; k++) {
+ String firstTimeCol = ((Long) gapFillRows2.get(index)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(gapFillRows2.get(index)[3], 1);
+ index++;
+ }
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ Assert.assertEquals(gapFillRows2.size(), index);
+ }
+
+ @Test
+ public void toEpochHoursGapfillTestGapfillAggregate() {
+ DateTimeFormatSpec dateTimeFormatter = new DateTimeFormatSpec("1:HOURS:EPOCH");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + "time_col, SUM(isOccupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(ToEpochHours(eventTime), '1:HOURS:EPOCH', "
+ + " '454516', '454524', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
+ + " GROUP BY time_col "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ double [] expectedOccupiedSlotsCounts1 = new double [] {6, 8, 10, 12, 6, 4, 2, 0};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ Assert.assertEquals(gapFillRows1.size(), expectedOccupiedSlotsCounts1.length);
+ start = dateTimeFormatter.fromFormatToMillis("454516");
+ for (int i = 0; i < expectedOccupiedSlotsCounts1.length; i++) {
+ String firstTimeCol = ((Long) gapFillRows1.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts1[i], gapFillRows1.get(i)[1]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+
+ String gapfillQuery2 = "SELECT "
+ + "time_col, SUM(isOccupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(ToEpochHours(eventTime), '1:HOURS:EPOCH', "
+ + " '454516', '454524', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
+ + " WHERE isOccupied = 1 "
+ + " GROUP BY time_col "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ double [] expectedOccupiedSlotsCounts2 = new double [] {6, 8, 10, 12, 6, 4, 2};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ Assert.assertEquals(gapFillRows2.size(), expectedOccupiedSlotsCounts2.length);
+ start = dateTimeFormatter.fromFormatToMillis("454516");
+ for (int i = 0; i < expectedOccupiedSlotsCounts2.length; i++) {
+ String firstTimeCol = ((Long) gapFillRows2.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts2[i], gapFillRows2.get(i)[1]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ }
+
+ @Test
+ public void toEpochHoursGapfillTestGapfillAggregateWithOptionalGroupBy() {
+ DateTimeFormatSpec dateTimeFormatter = new DateTimeFormatSpec("1:HOURS:EPOCH");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + "time_col, levelId, SUM(isOccupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(ToEpochHours(eventTime), '1:HOURS:EPOCH', "
+ + " '454516', '454524', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
+ + " GROUP BY time_col, levelId "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ double [] expectedOccupiedSlotsCountsForLevel11 = new double [] {4, 5, 6, 5, 3, 2, 1, 0};
+ double [] expectedOccupiedSlotsCountsForLevel21 = new double [] {2, 3, 4, 7, 3, 2, 1, 0};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ Assert.assertEquals(gapFillRows1.size(), expectedOccupiedSlotsCountsForLevel11.length * 2);
+ start = dateTimeFormatter.fromFormatToMillis("454516");
+ for (int i = 0; i < expectedOccupiedSlotsCountsForLevel11.length * 2; i += 2) {
+ String firstTimeCol = ((Long) gapFillRows1.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if ("Level_0".equals(gapFillRows1.get(i)[1])) {
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel11[i / 2], gapFillRows1.get(i)[2]);
+ } else {
+ Assert.assertEquals("Level_1", gapFillRows1.get(i)[1]);
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel21[i / 2], gapFillRows1.get(i)[2]);
+ }
+ firstTimeCol = ((Long) gapFillRows1.get(i + 1)[0]).toString();
+ timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if ("Level_0".equals(gapFillRows1.get(i + 1)[1])) {
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel11[i / 2], gapFillRows1.get(i + 1)[2]);
+ } else {
+ Assert.assertEquals("Level_1", gapFillRows1.get(i + 1)[1]);
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel21[i / 2], gapFillRows1.get(i + 1)[2]);
+ }
+ start += dateTimeGranularity.granularityToMillis();
+ }
+
+ String gapfillQuery2 = "SELECT "
+ + "time_col, levelId, SUM(isOccupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(ToEpochHours(eventTime), '1:HOURS:EPOCH', "
+ + " '454516', '454524', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
+ + " WHERE isOccupied = 1 "
+ + " GROUP BY time_col, levelId "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ double [] expectedOccupiedSlotsCountsForLevel12 = new double [] {4, 5, 6, 5, 3, 2, 1};
+ double [] expectedOccupiedSlotsCountsForLevel22 = new double [] {2, 3, 4, 7, 3, 2, 1};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ Assert.assertEquals(gapFillRows2.size(), expectedOccupiedSlotsCountsForLevel12.length * 2);
+ start = dateTimeFormatter.fromFormatToMillis("454516");
+ for (int i = 0; i < expectedOccupiedSlotsCountsForLevel12.length * 2; i += 2) {
+ String firstTimeCol = ((Long) gapFillRows2.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if ("Level_0".equals(gapFillRows2.get(i)[1])) {
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel12[i / 2], gapFillRows2.get(i)[2]);
+ } else {
+ Assert.assertEquals("Level_1", gapFillRows2.get(i)[1]);
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel22[i / 2], gapFillRows2.get(i)[2]);
+ }
+ firstTimeCol = ((Long) gapFillRows2.get(i + 1)[0]).toString();
+ timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if ("Level_0".equals(gapFillRows2.get(i + 1)[1])) {
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel12[i / 2], gapFillRows2.get(i + 1)[2]);
+ } else {
+ Assert.assertEquals("Level_1", gapFillRows2.get(i + 1)[1]);
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel22[i / 2], gapFillRows2.get(i + 1)[2]);
+ }
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ }
+
+ @Test
+ public void toEpochHoursGapfillTestGapfillAggregateWithHavingClause() {
+ DateTimeFormatSpec dateTimeFormatter = new DateTimeFormatSpec("1:HOURS:EPOCH");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + "time_col, levelId, SUM(isOccupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(ToEpochHours(eventTime), '1:HOURS:EPOCH', "
+ + " '454516', '454524', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
+ + " GROUP BY time_col, levelId "
+ + " HAVING occupied_slots_count > 0"
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ double [] expectedOccupiedSlotsCountsForLevel11 = new double [] {4, 5, 6, 5, 3, 2, 1};
+ double [] expectedOccupiedSlotsCountsForLevel21 = new double [] {2, 3, 4, 7, 3, 2, 1};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ Assert.assertEquals(gapFillRows1.size(), expectedOccupiedSlotsCountsForLevel11.length * 2);
+ start = dateTimeFormatter.fromFormatToMillis("454516");
+ for (int i = 0; i < expectedOccupiedSlotsCountsForLevel11.length * 2; i += 2) {
+ String firstTimeCol = ((Long) gapFillRows1.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if ("Level_0".equals(gapFillRows1.get(i)[1])) {
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel11[i / 2], gapFillRows1.get(i)[2]);
+ } else {
+ Assert.assertEquals("Level_1", gapFillRows1.get(i)[1]);
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel21[i / 2], gapFillRows1.get(i)[2]);
+ }
+ firstTimeCol = ((Long) gapFillRows1.get(i + 1)[0]).toString();
+ timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if ("Level_0".equals(gapFillRows1.get(i + 1)[1])) {
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel11[i / 2], gapFillRows1.get(i + 1)[2]);
+ } else {
+ Assert.assertEquals("Level_1", gapFillRows1.get(i + 1)[1]);
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel21[i / 2], gapFillRows1.get(i + 1)[2]);
+ }
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ }
+
+ @Test
+ public void toEpochHoursGapfillTestAggregateAggregate() {
+ DateTimeFormatSpec dateTimeFormatter = new DateTimeFormatSpec("1:HOURS:EPOCH");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + "time_col, SUM(occupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(time_col, '1:HOURS:EPOCH', "
+ + " '454516', '454524', '1:HOURS',"
+        + "          FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " occupied, lotId, levelId"
+ + " FROM ("
+ + " SELECT ToEpochHours(eventTime) AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + " ) "
+ + " LIMIT 200 "
+ + ") "
+ + " GROUP BY time_col "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ double [] expectedOccupiedSlotsCounts1 = new double [] {2, 4, 6, 8, 6, 4, 2, 0};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ Assert.assertEquals(gapFillRows1.size(), expectedOccupiedSlotsCounts1.length);
+ start = dateTimeFormatter.fromFormatToMillis("454516");
+ for (int i = 0; i < expectedOccupiedSlotsCounts1.length; i++) {
+ String firstTimeCol = ((Long) gapFillRows1.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts1[i], gapFillRows1.get(i)[1]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+
+ String gapfillQuery2 = "SELECT "
+ + "time_col, SUM(occupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(time_col, '1:HOURS:EPOCH', "
+ + " '454516', '454524', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " occupied, lotId, levelId"
+ + " FROM ("
+ + " SELECT ToEpochHours(eventTime) AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + " ) "
+ + " LIMIT 200 "
+ + ") "
+ + " WHERE occupied = 1 "
+ + " GROUP BY time_col "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ double [] expectedOccupiedSlotsCounts2 = new double [] {2, 4, 6, 8, 6, 4, 2};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ Assert.assertEquals(gapFillRows2.size(), expectedOccupiedSlotsCounts2.length);
+ start = dateTimeFormatter.fromFormatToMillis("454516");
+ for (int i = 0; i < expectedOccupiedSlotsCounts2.length; i++) {
+      String firstTimeCol = ((Long) gapFillRows2.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts2[i], gapFillRows2.get(i)[1]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ }
+
+ @Test
+ public void toEpochHoursGapfillTestAggregateAggregateWithOptionalGroupBy() {
+ DateTimeFormatSpec dateTimeFormatter = new DateTimeFormatSpec("1:HOURS:EPOCH");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + "time_col, levelId, SUM(occupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(time_col, '1:HOURS:EPOCH', "
+ + " '454516', '454524', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " occupied, lotId, levelId"
+ + " FROM ("
+ + " SELECT ToEpochHours(eventTime) AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + ") "
+ + " LIMIT 200 "
+ + ") "
+ + " GROUP BY time_col, levelId "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ double [] expectedOccupiedSlotsCounts1 = new double [] {1, 2, 3, 4, 3, 2, 1, 0};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ Assert.assertEquals(gapFillRows1.size(), expectedOccupiedSlotsCounts1.length * 2);
+ start = dateTimeFormatter.fromFormatToMillis("454516");
+    for (int i = 0; i < expectedOccupiedSlotsCounts1.length * 2; i += 2) {
+ String firstTimeCol = ((Long) gapFillRows1.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts1[i / 2], gapFillRows1.get(i)[2]);
+ firstTimeCol = ((Long) gapFillRows1.get(i + 1)[0]).toString();
+ timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+      Assert.assertEquals(expectedOccupiedSlotsCounts1[i / 2], gapFillRows1.get(i + 1)[2]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+
+ String gapfillQuery2 = "SELECT "
+        + "time_col, levelId, SUM(occupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(time_col, '1:HOURS:EPOCH', "
+ + " '454516', '454524', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " occupied, lotId, levelId"
+ + " FROM ("
+ + " SELECT ToEpochHours(eventTime) AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + " ) "
+ + " LIMIT 200 "
+ + ") "
+ + " WHERE occupied = 1 "
+ + " GROUP BY time_col, levelId "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ double [] expectedOccupiedSlotsCounts2 = new double [] {1, 2, 3, 4, 3, 2, 1};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ Assert.assertEquals(gapFillRows2.size(), expectedOccupiedSlotsCounts2.length * 2);
+ start = dateTimeFormatter.fromFormatToMillis("454516");
+    for (int i = 0; i < expectedOccupiedSlotsCounts2.length * 2; i += 2) {
+ String firstTimeCol = ((Long) gapFillRows2.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts1[i / 2], gapFillRows1.get(i)[2]);
+ firstTimeCol = ((Long) gapFillRows2.get(i + 1)[0]).toString();
+ timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts1[i / 2], gapFillRows1.get(i)[2]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ }
+
+ @Test
+ public void toEpochHoursGapfillTestAggregateAggregateWithHavingClause() {
+ String gapfillQuery1 = "SELECT "
+ + "time_col, levelId, SUM(occupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(time_col, '1:HOURS:EPOCH', "
+ + " '454516', '454524', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " occupied, lotId, levelId"
+ + " FROM ("
+ + " SELECT ToEpochHours(eventTime) AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + ") "
+ + " LIMIT 200 "
+ + ") "
+ + " GROUP BY time_col, levelId "
+ + " HAVING occupied_slots_count > 0"
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ double [] expectedOccupiedSlotsCounts1 = new double [] {1, 2, 3, 4, 3, 2, 1};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ Assert.assertEquals(gapFillRows1.size(), expectedOccupiedSlotsCounts1.length * 2);
+ DateTimeFormatSpec dateTimeFormatter = new DateTimeFormatSpec("1:HOURS:EPOCH");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+
+ long start = dateTimeFormatter.fromFormatToMillis("454516");
+ for (int i = 0; i < expectedOccupiedSlotsCounts1.length * 2; i += 2) {
+ String firstTimeCol = ((Long) gapFillRows1.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts1[i / 2], gapFillRows1.get(i)[2]);
+ firstTimeCol = ((Long) gapFillRows1.get(i + 1)[0]).toString();
+ timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts1[i / 2], gapFillRows1.get(i + 1)[2]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ }
+
+ @Test
+ public void toEpochMinutesRoundedHoursGapfillTestSelectSelect() {
+ DateTimeFormatSpec dateTimeFormatter = new DateTimeFormatSpec("1:MINUTES:EPOCH");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT GapFill(ToEpochMinutesRounded(eventTime, 60), '1:MINUTES:EPOCH', "
+ + " '27270960', '27271440', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ int [][] expectedOccupiedSlotsCounts1 =
+ new int [][] {{6, 6}, {8, 4}, {10, 2}, {12, 0}, {6, 4}, {4, 6}, {2, 10}, {0, 10}};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ start = dateTimeFormatter.fromFormatToMillis("27270960");
+ int index = 0;
+ for (int i = 0; i < expectedOccupiedSlotsCounts1.length; i++) {
+ int ones = expectedOccupiedSlotsCounts1[i][0];
+ int zeros = expectedOccupiedSlotsCounts1[i][1];
+ int total = ones + zeros;
+ for (int k = 0; k < total; k++) {
+ String firstTimeCol = ((Long) (gapFillRows1.get(index)[0])).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if (gapFillRows1.get(index)[3].equals(1)) {
+ ones--;
+ } else {
+ zeros--;
+ }
+ index++;
+ }
+ Assert.assertEquals(ones, 0);
+ Assert.assertEquals(zeros, 0);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ Assert.assertEquals(gapFillRows1.size(), index);
+
+ String gapfillQuery2 = "SELECT "
+ + "time_col, levelId, lotId, isOccupied, time_col "
+ + "FROM ("
+ + " SELECT GapFill(ToEpochMinutesRounded(eventTime, 60), '1:MINUTES:EPOCH', "
+ + " '27270960', '27271440', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
+ + " WHERE isOccupied = 1 "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ int [] expectedOccupiedSlotsCounts2 = new int [] {6, 8, 10, 12, 6, 4, 2, 0};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ start = dateTimeFormatter.fromFormatToMillis("27270960");
+ index = 0;
+ for (int i = 0; i < expectedOccupiedSlotsCounts2.length; i++) {
+ for (int k = 0; k < expectedOccupiedSlotsCounts2[i]; k++) {
+ String firstTimeCol = ((Long) gapFillRows2.get(index)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(gapFillRows2.get(index)[3], 1);
+ index++;
+ }
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ Assert.assertEquals(gapFillRows2.size(), index);
+ }
+
+ @Test
+ public void toEpochMinutesRoundedHoursGapfillTestAggregateSelect() {
+ DateTimeFormatSpec dateTimeFormatter = new DateTimeFormatSpec("1:MINUTES:EPOCH");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + "GapFill(time_col, '1:MINUTES:EPOCH', "
+ + " '27270960', '27271440', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)), levelId, lotId, occupied "
+ + "FROM ("
+ + " SELECT ToEpochMinutesRounded(eventTime, 60) AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + ") "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ int [][] expectedOccupiedSlotsCounts1 =
+ new int [][] {{2, 6}, {4, 4}, {6, 2}, {8, 0}, {6, 2}, {4, 4}, {2, 6}, {0, 8}};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ start = dateTimeFormatter.fromFormatToMillis("27270960");
+ int index = 0;
+ for (int i = 0; i < expectedOccupiedSlotsCounts1.length; i++) {
+ int ones = expectedOccupiedSlotsCounts1[i][0];
+ int zeros = expectedOccupiedSlotsCounts1[i][1];
+ int total = ones + zeros;
+ for (int k = 0; k < total; k++) {
+ String firstTimeCol = ((Long) gapFillRows1.get(index)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if (gapFillRows1.get(index)[3].equals(1)) {
+ ones--;
+ } else {
+ zeros--;
+ }
+ index++;
+ }
+ Assert.assertEquals(ones, 0);
+ Assert.assertEquals(zeros, 0);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ Assert.assertEquals(gapFillRows1.size(), index);
+
+ String gapfillQuery2 = "SELECT "
+ + "GapFill(time_col, '1:MINUTES:EPOCH', "
+ + " '27270960', '27271440', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)), levelId, lotId, occupied "
+ + "FROM ("
+ + " SELECT ToEpochMinutesRounded(eventTime, 60) AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + ") "
+ + " WHERE occupied = 1 "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ int [] expectedOccupiedSlotsCounts2 = new int [] {2, 4, 6, 8, 6, 4, 2};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ start = dateTimeFormatter.fromFormatToMillis("27270960");
+ index = 0;
+ for (int i = 0; i < expectedOccupiedSlotsCounts2.length; i++) {
+ for (int k = 0; k < expectedOccupiedSlotsCounts2[i]; k++) {
+ String firstTimeCol = ((Long) gapFillRows2.get(index)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(gapFillRows2.get(index)[3], 1);
+ index++;
+ }
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ Assert.assertEquals(gapFillRows2.size(), index);
+ }
+
+ @Test
+ public void toEpochMinutesRoundedHoursGapfillTestGapfillAggregate() {
+ DateTimeFormatSpec dateTimeFormatter = new DateTimeFormatSpec("1:MINUTES:EPOCH");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + "time_col, SUM(isOccupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(ToEpochMinutesRounded(eventTime, 60), '1:MINUTES:EPOCH', "
+ + " '27270960', '27271440', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
+ + " GROUP BY time_col "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ double [] expectedOccupiedSlotsCounts1 = new double [] {6, 8, 10, 12, 6, 4, 2, 0};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ Assert.assertEquals(gapFillRows1.size(), expectedOccupiedSlotsCounts1.length);
+ start = dateTimeFormatter.fromFormatToMillis("27270960");
+ for (int i = 0; i < expectedOccupiedSlotsCounts1.length; i++) {
+ String firstTimeCol = ((Long) gapFillRows1.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts1[i], gapFillRows1.get(i)[1]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+
+ String gapfillQuery2 = "SELECT "
+ + "time_col, SUM(isOccupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(ToEpochMinutesRounded(eventTime, 60), '1:MINUTES:EPOCH', "
+ + " '27270960', '27271440', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
+ + " WHERE isOccupied = 1 "
+ + " GROUP BY time_col "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ double [] expectedOccupiedSlotsCounts2 = new double [] {6, 8, 10, 12, 6, 4, 2};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ Assert.assertEquals(gapFillRows2.size(), expectedOccupiedSlotsCounts2.length);
+ start = dateTimeFormatter.fromFormatToMillis("27270960");
+ for (int i = 0; i < expectedOccupiedSlotsCounts2.length; i++) {
+ String firstTimeCol = ((Long) gapFillRows2.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts2[i], gapFillRows2.get(i)[1]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ }
+
+ @Test
+ public void toEpochMinutesRoundedHoursGapfillTestGapfillAggregateWithOptionalGroupBy() {
+ DateTimeFormatSpec dateTimeFormatter = new DateTimeFormatSpec("1:MINUTES:EPOCH");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + "time_col, levelId, SUM(isOccupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(ToEpochMinutesRounded(eventTime, 60), '1:MINUTES:EPOCH', "
+ + " '27270960', '27271440', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
+ + " GROUP BY time_col, levelId "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ double [] expectedOccupiedSlotsCountsForLevel11 = new double [] {4, 5, 6, 5, 3, 2, 1, 0};
+ double [] expectedOccupiedSlotsCountsForLevel21 = new double [] {2, 3, 4, 7, 3, 2, 1, 0};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ Assert.assertEquals(gapFillRows1.size(), expectedOccupiedSlotsCountsForLevel11.length * 2);
+ start = dateTimeFormatter.fromFormatToMillis("27270960");
+ for (int i = 0; i < expectedOccupiedSlotsCountsForLevel11.length * 2; i += 2) {
+ String firstTimeCol = ((Long) gapFillRows1.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if ("Level_0".equals(gapFillRows1.get(i)[1])) {
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel11[i / 2], gapFillRows1.get(i)[2]);
+ } else {
+ Assert.assertEquals("Level_1", gapFillRows1.get(i)[1]);
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel21[i / 2], gapFillRows1.get(i)[2]);
+ }
+ firstTimeCol = ((Long) gapFillRows1.get(i + 1)[0]).toString();
+ timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if ("Level_0".equals(gapFillRows1.get(i + 1)[1])) {
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel11[i / 2], gapFillRows1.get(i + 1)[2]);
+ } else {
+ Assert.assertEquals("Level_1", gapFillRows1.get(i + 1)[1]);
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel21[i / 2], gapFillRows1.get(i + 1)[2]);
+ }
+ start += dateTimeGranularity.granularityToMillis();
+ }
+
+ String gapfillQuery2 = "SELECT "
+ + "time_col, levelId, SUM(isOccupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(ToEpochMinutesRounded(eventTime, 60), '1:MINUTES:EPOCH', "
+ + " '27270960', '27271440', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
+ + " WHERE isOccupied = 1 "
+ + " GROUP BY time_col, levelId "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ double [] expectedOccupiedSlotsCountsForLevel12 = new double [] {4, 5, 6, 5, 3, 2, 1};
+ double [] expectedOccupiedSlotsCountsForLevel22 = new double [] {2, 3, 4, 7, 3, 2, 1};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ Assert.assertEquals(gapFillRows2.size(), expectedOccupiedSlotsCountsForLevel12.length * 2);
+ start = dateTimeFormatter.fromFormatToMillis("27270960");
+ for (int i = 0; i < expectedOccupiedSlotsCountsForLevel12.length * 2; i += 2) {
+ String firstTimeCol = ((Long) gapFillRows2.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if ("Level_0".equals(gapFillRows2.get(i)[1])) {
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel12[i / 2], gapFillRows2.get(i)[2]);
+ } else {
+ Assert.assertEquals("Level_1", gapFillRows2.get(i)[1]);
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel22[i / 2], gapFillRows2.get(i)[2]);
+ }
+ firstTimeCol = ((Long) gapFillRows2.get(i + 1)[0]).toString();
+ timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if ("Level_0".equals(gapFillRows2.get(i + 1)[1])) {
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel12[i / 2], gapFillRows2.get(i + 1)[2]);
+ } else {
+ Assert.assertEquals("Level_1", gapFillRows2.get(i + 1)[1]);
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel22[i / 2], gapFillRows2.get(i + 1)[2]);
+ }
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ }
+
+ @Test
+ public void toEpochMinutesRoundedHoursGapfillTestGapfillAggregateWithHavingClause() {
+ DateTimeFormatSpec dateTimeFormatter = new DateTimeFormatSpec("1:MINUTES:EPOCH");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + "time_col, levelId, SUM(isOccupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(ToEpochMinutesRounded(eventTime, 60), '1:MINUTES:EPOCH', "
+ + " '27270960', '27271440', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
+ + " GROUP BY time_col, levelId "
+ + " HAVING occupied_slots_count > 0"
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ double [] expectedOccupiedSlotsCountsForLevel11 = new double [] {4, 5, 6, 5, 3, 2, 1};
+ double [] expectedOccupiedSlotsCountsForLevel21 = new double [] {2, 3, 4, 7, 3, 2, 1};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ Assert.assertEquals(gapFillRows1.size(), expectedOccupiedSlotsCountsForLevel11.length * 2);
+ start = dateTimeFormatter.fromFormatToMillis("27270960");
+ for (int i = 0; i < expectedOccupiedSlotsCountsForLevel11.length * 2; i += 2) {
+ String firstTimeCol = ((Long) gapFillRows1.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if ("Level_0".equals(gapFillRows1.get(i)[1])) {
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel11[i / 2], gapFillRows1.get(i)[2]);
+ } else {
+ Assert.assertEquals("Level_1", gapFillRows1.get(i)[1]);
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel21[i / 2], gapFillRows1.get(i)[2]);
+ }
+ firstTimeCol = ((Long) gapFillRows1.get(i + 1)[0]).toString();
+ timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if ("Level_0".equals(gapFillRows1.get(i + 1)[1])) {
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel11[i / 2], gapFillRows1.get(i + 1)[2]);
+ } else {
+ Assert.assertEquals("Level_1", gapFillRows1.get(i + 1)[1]);
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel21[i / 2], gapFillRows1.get(i + 1)[2]);
+ }
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ }
+
+ @Test
+ public void toEpochMinutesRoundedHoursGapfillTestAggregateAggregate() {
+ DateTimeFormatSpec dateTimeFormatter = new DateTimeFormatSpec("1:MINUTES:EPOCH");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + "time_col, SUM(occupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(time_col, '1:MINUTES:EPOCH', "
+ + " '27270960', '27271440', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " occupied, lotId, levelId"
+ + " FROM ("
+ + " SELECT ToEpochMinutesRounded(eventTime, 60) AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + " ) "
+ + " LIMIT 200 "
+ + ") "
+ + " GROUP BY time_col "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ double [] expectedOccupiedSlotsCounts1 = new double [] {2, 4, 6, 8, 6, 4, 2, 0};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ Assert.assertEquals(gapFillRows1.size(), expectedOccupiedSlotsCounts1.length);
+ start = dateTimeFormatter.fromFormatToMillis("27270960");
+ for (int i = 0; i < expectedOccupiedSlotsCounts1.length; i++) {
+ String firstTimeCol = ((Long) gapFillRows1.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts1[i], gapFillRows1.get(i)[1]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+
+ String gapfillQuery2 = "SELECT "
+ + "time_col, SUM(occupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(time_col, '1:MINUTES:EPOCH', "
+ + " '27270960', '27271440', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " occupied, lotId, levelId"
+ + " FROM ("
+ + " SELECT ToEpochMinutesRounded(eventTime, 60) AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + " ) "
+ + " LIMIT 200 "
+ + ") "
+ + " WHERE occupied = 1 "
+ + " GROUP BY time_col "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ double [] expectedOccupiedSlotsCounts2 = new double [] {2, 4, 6, 8, 6, 4, 2};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ Assert.assertEquals(gapFillRows2.size(), expectedOccupiedSlotsCounts2.length);
+ start = dateTimeFormatter.fromFormatToMillis("27270960");
+ for (int i = 0; i < expectedOccupiedSlotsCounts2.length; i++) {
+ String firstTimeCol = ((Long) gapFillRows2.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts2[i], gapFillRows2.get(i)[1]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ }
+
+ @Test
+ public void toEpochMinutesRoundedHoursGapfillTestAggregateAggregateWithOptionalGroupBy() {
+ DateTimeFormatSpec dateTimeFormatter = new DateTimeFormatSpec("1:MINUTES:EPOCH");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + "time_col, levelId, SUM(occupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(time_col, '1:MINUTES:EPOCH', "
+ + " '27270960', '27271440', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " occupied, lotId, levelId"
+ + " FROM ("
+ + " SELECT ToEpochMinutesRounded(eventTime, 60) AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + " ) "
+ + " LIMIT 200 "
+ + ") "
+ + " GROUP BY time_col, levelId "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ double [] expectedOccupiedSlotsCounts1 = new double [] {1, 2, 3, 4, 3, 2, 1, 0};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ Assert.assertEquals(gapFillRows1.size(), expectedOccupiedSlotsCounts1.length * 2);
+ start = dateTimeFormatter.fromFormatToMillis("27270960");
+ for (int i = 0; i < expectedOccupiedSlotsCounts1.length * 2; i += 2) {
+ String firstTimeCol = ((Long) gapFillRows1.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts1[i / 2], gapFillRows1.get(i)[2]);
+ firstTimeCol = ((Long) gapFillRows1.get(i + 1)[0]).toString();
+ timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts1[i / 2], gapFillRows1.get(i + 1)[2]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+
+ String gapfillQuery2 = "SELECT "
+ + "time_col, SUM(occupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(time_col, '1:MINUTES:EPOCH', "
+ + " '27270960', '27271440', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " occupied, lotId, levelId"
+ + " FROM ("
Review comment:
Yes, we do not limit the nesting depth; queries with an arbitrary number of nested levels are still accepted.
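For context, a sketch of the kind of multi-level nesting being discussed, assembled from the query patterns already used in the tests above (table, column, and literal values are the test fixtures from this PR, not a prescribed schema):

```sql
-- Innermost layer: aggregate raw events into hourly buckets per (levelId, lotId).
-- Middle layer: gapfill the missing buckets for each time series.
-- Outermost layer: aggregate the gapfilled rows; further SELECT layers may wrap this.
SELECT time_col, SUM(occupied) AS occupied_slots_count
FROM (
  SELECT GapFill(time_col, '1:HOURS:EPOCH', '454516', '454524', '1:HOURS',
                 FILL(occupied, 'FILL_PREVIOUS_VALUE'),
                 TIMESERIESON(levelId, lotId)) AS time_col,
         occupied, lotId, levelId
  FROM (
    SELECT ToEpochHours(eventTime) AS time_col,
           lastWithTime(isOccupied, eventTime, 'INT') AS occupied, lotId, levelId
    FROM parkingData
    WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000
    GROUP BY time_col, levelId, lotId
    LIMIT 200
  )
  LIMIT 200
)
GROUP BY time_col
LIMIT 200
```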
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: commits-unsubscribe@pinot.apache.org
For queries about this service, please contact Infrastructure at:
users@infra.apache.org
---------------------------------------------------------------------
For additional commands, e-mail: commits-help@pinot.apache.org
[GitHub] [pinot] codecov-commenter edited a comment on pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#issuecomment-1013577523
# [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) Report
> Merging [#8029](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (f44c4dc) into [master](https://codecov.io/gh/apache/pinot/commit/3f98ce37fdaef0335fcd82e621489d65751b1f55?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (3f98ce3) will **decrease** coverage by `56.64%`.
> The diff coverage is `10.47%`.
> :exclamation: Current head f44c4dc differs from pull request most recent head 1d5ee22. Consider uploading reports for the commit 1d5ee22 to get more accurate results
[![Impacted file tree graph](https://codecov.io/gh/apache/pinot/pull/8029/graphs/tree.svg?width=650&height=150&src=pr&token=4ibza2ugkz&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
```diff
@@ Coverage Diff @@
## master #8029 +/- ##
=============================================
- Coverage 70.72% 14.08% -56.65%
+ Complexity 4242 81 -4161
=============================================
Files 1631 1596 -35
Lines 85279 84193 -1086
Branches 12844 12829 -15
=============================================
- Hits 60316 11858 -48458
- Misses 20799 71453 +50654
+ Partials 4164 882 -3282
```
| Flag | Coverage Δ | |
|---|---|---|
| integration1 | `?` | |
| integration2 | `?` | |
| unittests1 | `?` | |
| unittests2 | `14.08% <10.47%> (-0.02%)` | :arrow_down: |
Flags with carried forward coverage won't be shown. [Click here](https://docs.codecov.io/docs/carryforward-flags?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#carryforward-flags-in-the-pull-request-comment) to find out more.
| [Impacted Files](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | Coverage Δ | |
|---|---|---|
| [...roker/requesthandler/BaseBrokerRequestHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtYnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9icm9rZXIvcmVxdWVzdGhhbmRsZXIvQmFzZUJyb2tlclJlcXVlc3RIYW5kbGVyLmphdmE=) | `24.41% <0.00%> (-47.44%)` | :arrow_down: |
| [...apache/pinot/common/function/FunctionRegistry.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9jb21tb24vZnVuY3Rpb24vRnVuY3Rpb25SZWdpc3RyeS5qYXZh) | `0.00% <0.00%> (-87.10%)` | :arrow_down: |
| [...ache/pinot/common/metadata/ZKMetadataProvider.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9jb21tb24vbWV0YWRhdGEvWktNZXRhZGF0YVByb3ZpZGVyLmphdmE=) | `0.00% <0.00%> (-79.14%)` | :arrow_down: |
| [...ot/common/request/context/predicate/Predicate.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9jb21tb24vcmVxdWVzdC9jb250ZXh0L3ByZWRpY2F0ZS9QcmVkaWNhdGUuamF2YQ==) | `0.00% <0.00%> (-66.67%)` | :arrow_down: |
| [...request/context/predicate/RegexpLikePredicate.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9jb21tb24vcmVxdWVzdC9jb250ZXh0L3ByZWRpY2F0ZS9SZWdleHBMaWtlUHJlZGljYXRlLmphdmE=) | `0.00% <0.00%> (-66.67%)` | :arrow_down: |
| [...g/apache/pinot/common/utils/helix/HelixHelper.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9jb21tb24vdXRpbHMvaGVsaXgvSGVsaXhIZWxwZXIuamF2YQ==) | `0.00% <0.00%> (-45.15%)` | :arrow_down: |
| [...org/apache/pinot/sql/parsers/CalciteSqlParser.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9zcWwvcGFyc2Vycy9DYWxjaXRlU3FsUGFyc2VyLmphdmE=) | `0.00% <0.00%> (-86.56%)` | :arrow_down: |
| [...a/org/apache/pinot/core/common/DataBlockCache.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9jb21tb24vRGF0YUJsb2NrQ2FjaGUuamF2YQ==) | `0.00% <0.00%> (-91.43%)` | :arrow_down: |
| [...he/pinot/core/operator/blocks/ProjectionBlock.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9vcGVyYXRvci9ibG9ja3MvUHJvamVjdGlvbkJsb2NrLmphdmE=) | `0.00% <0.00%> (-60.00%)` | :arrow_down: |
| [...che/pinot/core/operator/blocks/TransformBlock.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9vcGVyYXRvci9ibG9ja3MvVHJhbnNmb3JtQmxvY2suamF2YQ==) | `0.00% <0.00%> (-69.24%)` | :arrow_down: |
| ... and [1335 more](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Last update [3f98ce3...1d5ee22](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: commits-unsubscribe@pinot.apache.org
For queries about this service, please contact Infrastructure at:
users@infra.apache.org
---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@pinot.apache.org
For additional commands, e-mail: commits-help@pinot.apache.org
[GitHub] [pinot] codecov-commenter edited a comment on pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#issuecomment-1013577523
# [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) Report
> Merging [#8029](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (98cf976) into [master](https://codecov.io/gh/apache/pinot/commit/3f98ce37fdaef0335fcd82e621489d65751b1f55?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (3f98ce3) will **decrease** coverage by `34.56%`.
> The diff coverage is `16.03%`.
[![Impacted file tree graph](https://codecov.io/gh/apache/pinot/pull/8029/graphs/tree.svg?width=650&height=150&src=pr&token=4ibza2ugkz&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
```diff
@@              Coverage Diff              @@
##             master    #8029       +/-   ##
=============================================
- Coverage     70.72%   36.16%   -34.57%
+ Complexity     4242       81     -4161
=============================================
  Files          1631     1641       +10
  Lines         85279    86103      +824
  Branches      12844    13034      +190
=============================================
- Hits          60316    31135    -29181
- Misses        20799    52467    +31668
+ Partials       4164     2501     -1663
```
| Flag | Coverage Δ | |
|---|---|---|
| integration1 | `28.69% <16.03%> (+<0.01%)` | :arrow_up: |
| integration2 | `?` | |
| unittests1 | `?` | |
| unittests2 | `14.09% <0.00%> (-0.01%)` | :arrow_down: |
Flags with carried forward coverage won't be shown. [Click here](https://docs.codecov.io/docs/carryforward-flags?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#carryforward-flags-in-the-pull-request-comment) to find out more.
| [Impacted Files](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | Coverage Δ | |
|---|---|---|
| [...roker/requesthandler/BaseBrokerRequestHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtYnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9icm9rZXIvcmVxdWVzdGhhbmRsZXIvQmFzZUJyb2tlclJlcXVlc3RIYW5kbGVyLmphdmE=) | `69.92% <0.00%> (-1.93%)` | :arrow_down: |
| [...ache/pinot/core/plan/GapfillSelectionPlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL0dhcGZpbGxTZWxlY3Rpb25QbGFuTm9kZS5qYXZh) | `0.00% <0.00%> (ø)` | |
| [...query/reduce/ColumnDataToBlockValSetConverter.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvQ29sdW1uRGF0YVRvQmxvY2tWYWxTZXRDb252ZXJ0ZXIuamF2YQ==) | `0.00% <0.00%> (ø)` | |
| [...not/core/query/reduce/GapFillDataTableReducer.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvR2FwRmlsbERhdGFUYWJsZVJlZHVjZXIuamF2YQ==) | `0.00% <0.00%> (ø)` | |
| [.../pinot/core/query/reduce/GapfillFilterHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvR2FwZmlsbEZpbHRlckhhbmRsZXIuamF2YQ==) | `0.00% <0.00%> (ø)` | |
| [...t/core/query/selection/SelectionOperatorUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9zZWxlY3Rpb24vU2VsZWN0aW9uT3BlcmF0b3JVdGlscy5qYXZh) | `69.25% <0.00%> (-22.93%)` | :arrow_down: |
| [.../java/org/apache/pinot/core/util/GapfillUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS91dGlsL0dhcGZpbGxVdGlscy5qYXZh) | `19.14% <13.27%> (-44.49%)` | :arrow_down: |
| [...pinot/core/plan/maker/InstancePlanMakerImplV2.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL21ha2VyL0luc3RhbmNlUGxhbk1ha2VySW1wbFYyLmphdmE=) | `43.63% <26.31%> (-33.14%)` | :arrow_down: |
| [...va/org/apache/pinot/core/plan/CombinePlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL0NvbWJpbmVQbGFuTm9kZS5qYXZh) | `61.11% <30.00%> (-26.39%)` | :arrow_down: |
| [.../pinot/core/query/reduce/ResultReducerFactory.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUmVzdWx0UmVkdWNlckZhY3RvcnkuamF2YQ==) | `55.55% <33.33%> (-25.70%)` | :arrow_down: |
| ... and [1018 more](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Last update [3f98ce3...98cf976](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
[GitHub] [pinot] amrishlal commented on a change in pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
amrishlal commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r819826352
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/util/GapfillUtils.java
##########
@@ -31,7 +36,10 @@
*/
public class GapfillUtils {
private static final String POST_AGGREGATE_GAP_FILL = "postaggregategapfill";
Review comment:
ok
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/request/context/utils/BrokerRequestToQueryContextConverter.java
##########
@@ -53,12 +55,81 @@ private BrokerRequestToQueryContextConverter() {
* Converts the given {@link BrokerRequest} into a {@link QueryContext}.
*/
public static QueryContext convert(BrokerRequest brokerRequest) {
- return brokerRequest.getPinotQuery() != null ? convertSQL(brokerRequest) : convertPQL(brokerRequest);
+ if (brokerRequest.getPinotQuery() != null) {
+ QueryContext queryContext = convertSQL(brokerRequest.getPinotQuery(), brokerRequest);
+ queryContext.setGapfillType(GapfillUtils.getGapfillType(queryContext));
+ validateForGapfillQuery(queryContext);
Review comment:
From what I can see, these syntax checks are happening on the Server side. The Server receives an `InstanceRequest` from the Broker in the `InstanceRequestHandler.channelRead0` method and invokes the `ServerQueryRequest` constructor to create a `ServerQueryRequest` object. The `ServerQueryRequest` constructor then makes the call:
`_queryContext = BrokerRequestToQueryContextConverter.convert(brokerRequest);`
We should move the syntax checks as far up the stack as possible so that bad queries are rejected early and unnecessary processing is avoided.
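A minimal, self-contained sketch of that idea (all names here are illustrative, not the actual Pinot API): check the gapfill argument list at the broker, so a malformed query is rejected before it is serialized into an `InstanceRequest` and fanned out to servers.

```java
import java.util.Arrays;
import java.util.List;

// Illustrative sketch only -- these names are NOT the actual Pinot API.
// The point: validate the gapfill argument list at the broker, before the
// query is fanned out to servers.
public final class GapfillValidationSketch {

  // A gapfill call is expected to carry more than five arguments:
  // time column, time format, start time, end time, bucket size, and
  // at least one trailing argument such as a TIMESERIESON expression.
  static void validateGapfillArgs(List<String> args) {
    if (args.size() <= 5) {
      throw new IllegalArgumentException(
          "Gapfill does not have the correct number of arguments: got " + args.size());
    }
  }

  public static void main(String[] unused) {
    // A well-formed argument list passes silently.
    validateGapfillArgs(Arrays.asList(
        "ts", "1:MILLISECONDS:EPOCH", "1636257600000", "1636286400000",
        "1:HOURS", "TIMESERIESON(deviceId)"));

    // A short argument list is rejected up front, so no server ever sees it.
    try {
      validateGapfillArgs(Arrays.asList("ts", "1:MILLISECONDS:EPOCH"));
      System.out.println("unexpected: short argument list was accepted");
    } catch (IllegalArgumentException e) {
      System.out.println("rejected early: " + e.getMessage());
    }
  }
}
```

The same check can still run server-side as a defensive backstop; the broker check just avoids shipping a query that is guaranteed to fail.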
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/request/context/QueryContext.java
##########
@@ -133,6 +137,8 @@ private QueryContext(String tableName, List<ExpressionContext> selectExpressions
_queryOptions = queryOptions;
_debugOptions = debugOptions;
_brokerRequest = brokerRequest;
+ _gapfillType = null;
Review comment:
Let's leave it as null as Jackie suggested.
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r821252308
##########
File path: pinot-common/src/main/java/org/apache/pinot/sql/parsers/CalciteSqlParser.java
##########
@@ -369,6 +369,9 @@ private static PinotQuery compileCalciteSqlToPinotQuery(String sql) {
DataSource dataSource = new DataSource();
dataSource.setTableName(fromNode.toString());
pinotQuery.setDataSource(dataSource);
+ if (fromNode instanceof SqlSelect || fromNode instanceof SqlOrderBy) {
Review comment:
No, I did not flatten the subqueries into a single query. The query will be passed around without any change.
[GitHub] [pinot] codecov-commenter edited a comment on pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#issuecomment-1013577523
# [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) Report
> Merging [#8029](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (f07e90b) into [master](https://codecov.io/gh/apache/pinot/commit/262dc50e236ed2af25a0cf8c67658a48731ce573?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (262dc50) will **decrease** coverage by `32.99%`.
> The diff coverage is `16.91%`.
[![Impacted file tree graph](https://codecov.io/gh/apache/pinot/pull/8029/graphs/tree.svg?width=650&height=150&src=pr&token=4ibza2ugkz&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
```diff
@@              Coverage Diff              @@
##             master    #8029       +/-   ##
=============================================
- Coverage     70.83%   37.83%   -33.00%
+ Complexity     4258       81     -4177
=============================================
  Files          1636     1645        +9
  Lines         85804    86422      +618
  Branches      12920    13075      +155
=============================================
- Hits          60779    32698    -28081
- Misses        20836    51137    +30301
+ Partials       4189     2587     -1602
```
| Flag | Coverage Δ | |
|---|---|---|
| integration1 | `28.73% <16.64%> (-0.23%)` | :arrow_down: |
| integration2 | `27.54% <16.91%> (-0.05%)` | :arrow_down: |
| unittests1 | `?` | |
| unittests2 | `14.09% <0.27%> (-0.09%)` | :arrow_down: |
Flags with carried forward coverage won't be shown. [Click here](https://docs.codecov.io/docs/carryforward-flags?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#carryforward-flags-in-the-pull-request-comment) to find out more.
| [Impacted Files](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | Coverage Δ | |
|---|---|---|
| [...query/reduce/ColumnDataToBlockValSetConverter.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvQ29sdW1uRGF0YVRvQmxvY2tWYWxTZXRDb252ZXJ0ZXIuamF2YQ==) | `0.00% <0.00%> (ø)` | |
| [...not/core/query/reduce/GapFillDataTableReducer.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvR2FwRmlsbERhdGFUYWJsZVJlZHVjZXIuamF2YQ==) | `0.00% <0.00%> (ø)` | |
| [.../pinot/core/query/reduce/GapfillFilterHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvR2FwZmlsbEZpbHRlckhhbmRsZXIuamF2YQ==) | `0.00% <0.00%> (ø)` | |
| [...t/core/query/selection/SelectionOperatorUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9zZWxlY3Rpb24vU2VsZWN0aW9uT3BlcmF0b3JVdGlscy5qYXZh) | `69.25% <0.00%> (-22.93%)` | :arrow_down: |
| [.../java/org/apache/pinot/core/util/GapfillUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS91dGlsL0dhcGZpbGxVdGlscy5qYXZh) | `17.46% <13.04%> (-46.18%)` | :arrow_down: |
| [.../pinot/core/query/reduce/ResultReducerFactory.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUmVzdWx0UmVkdWNlckZhY3RvcnkuamF2YQ==) | `61.11% <33.33%> (-20.14%)` | :arrow_down: |
| [...query/request/context/utils/QueryContextUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZXF1ZXN0L2NvbnRleHQvdXRpbHMvUXVlcnlDb250ZXh0VXRpbHMuamF2YQ==) | `46.15% <36.36%> (-31.12%)` | :arrow_down: |
| [...inot/core/query/reduce/PostAggregationHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUG9zdEFnZ3JlZ2F0aW9uSGFuZGxlci5qYXZh) | `83.78% <50.00%> (-7.99%)` | :arrow_down: |
| [.../core/query/reduce/filter/PredicateRowMatcher.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvZmlsdGVyL1ByZWRpY2F0ZVJvd01hdGNoZXIuamF2YQ==) | `50.00% <50.00%> (ø)` | |
| [...org/apache/pinot/sql/parsers/CalciteSqlParser.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9zcWwvcGFyc2Vycy9DYWxjaXRlU3FsUGFyc2VyLmphdmE=) | `64.28% <53.84%> (-23.91%)` | :arrow_down: |
| ... and [941 more](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Last update [262dc50...f07e90b](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r785603093
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/plan/PreAggGapFillSelectionPlanNode.java
##########
@@ -0,0 +1,86 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.core.plan;
+
+import com.google.common.base.Preconditions;
+import java.util.ArrayList;
+import java.util.List;
+import org.apache.pinot.common.request.context.ExpressionContext;
+import org.apache.pinot.core.common.Operator;
+import org.apache.pinot.core.operator.blocks.IntermediateResultsBlock;
+import org.apache.pinot.core.operator.query.SelectionOnlyOperator;
+import org.apache.pinot.core.operator.transform.TransformOperator;
+import org.apache.pinot.core.query.request.context.QueryContext;
+import org.apache.pinot.core.util.GapfillUtils;
+import org.apache.pinot.segment.spi.IndexSegment;
+
+
+/**
+ * The <code>PreAggGapFillSelectionPlanNode</code> class provides the execution
+ * plan for pre-aggregate gapfill query on a single segment.
+ */
+public class PreAggGapFillSelectionPlanNode implements PlanNode {
+ private final IndexSegment _indexSegment;
+ private final QueryContext _queryContext;
+
+ public PreAggGapFillSelectionPlanNode(IndexSegment indexSegment, QueryContext queryContext) {
+ _indexSegment = indexSegment;
+ _queryContext = queryContext.getPreAggregateGapFillQueryContext();
+ }
+
+ @Override
+ public Operator<IntermediateResultsBlock> run() {
+ int limit = _queryContext.getLimit();
+
+ ExpressionContext gapFillSelection = null;
+ for (ExpressionContext expressionContext : _queryContext.getSelectExpressions()) {
+ if (GapfillUtils.isPreAggregateGapfill(expressionContext)) {
+ gapFillSelection = expressionContext;
+ break;
+ }
+ }
+
+ List<ExpressionContext> args = gapFillSelection.getFunction().getArguments();
+ ExpressionContext timeSeriesOn = null;
+ for (int i = 5; i < args.size(); i++) {
Review comment:
Done
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r788272952
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/reduce/PreAggregationGapFillSelectionOperatorService.java
##########
@@ -0,0 +1,388 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.core.query.reduce;
+
+import com.google.common.base.Preconditions;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.Comparator;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.Iterator;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+import java.util.PriorityQueue;
+import java.util.Set;
+import org.apache.pinot.common.request.context.ExpressionContext;
+import org.apache.pinot.common.request.context.FilterContext;
+import org.apache.pinot.common.request.context.FunctionContext;
+import org.apache.pinot.common.utils.DataSchema;
+import org.apache.pinot.common.utils.DataSchema.ColumnDataType;
+import org.apache.pinot.common.utils.DataTable;
+import org.apache.pinot.core.common.BlockValSet;
+import org.apache.pinot.core.data.table.Key;
+import org.apache.pinot.core.query.aggregation.function.AggregationFunction;
+import org.apache.pinot.core.query.aggregation.function.AggregationFunctionFactory;
+import org.apache.pinot.core.query.aggregation.groupby.GroupByResultHolder;
+import org.apache.pinot.core.query.request.context.QueryContext;
+import org.apache.pinot.core.query.selection.SelectionOperatorUtils;
+import org.apache.pinot.core.util.GapfillUtils;
+import org.apache.pinot.spi.data.DateTimeFormatSpec;
+import org.apache.pinot.spi.data.DateTimeGranularitySpec;
+
+
+/**
+ * The <code>PreAggregationGapFillSelectionOperatorService</code> class provides the services for
+ * pre-aggregate gapfill selection queries.
+ * <p>Expected behavior:
+ * <ul>
+ * <li>
+ * Return selection results with the same order of columns as user passed in.
+ * <ul>
+ * <li>Eg. SELECT colB, colA, colC FROM table -> [valB, valA, valC]</li>
+ * </ul>
+ * </li>
+ * <li>
+ * For 'SELECT *', return columns in alphabetical order.
+ * <ul>
+ * <li>Eg. SELECT * FROM table -> [valA, valB, valC]</li>
+ * </ul>
+ * </li>
+ * <li>
+ * Order by does not change the order of columns in selection results.
+ * <ul>
+ * <li>Eg. SELECT colB, colA, colC FROM table ORDER BY colC -> [valB, valA, valC]</li>
+ * </ul>
+ * </li>
+ * </ul>
+ */
+@SuppressWarnings("rawtypes")
+public class PreAggregationGapFillSelectionOperatorService {
+ private final List<String> _columns;
+ private final DataSchema _dataSchema;
+ private final int _limitForAggregatedResult;
+ private final int _limitForGapfilledResult;
+ private final PriorityQueue<Object[]> _rows;
+
+ private final DateTimeGranularitySpec _dateTimeGranularity;
+ private final DateTimeFormatSpec _dateTimeFormatter;
+ private final long _startMs;
+ private final long _endMs;
+ private final long _timeBucketSize;
+ private final QueryContext _queryContext;
+ private final QueryContext _preAggregateGapFillQueryContext;
+
+ private final int _numOfGroupByKeys;
+ private final List<Integer> _groupByKeyIndexes;
+ private final boolean [] _isGroupBySelections;
+ private final Set<Key> _groupByKeys;
+ private final List<Key> _groupByKeyList;
+ private final Map<Key, Integer> _groupByKeyMappings;
+ private final Map<Key, Object[]> _previousByGroupKey;
+ private final Map<String, ExpressionContext> _fillExpressions;
+ private final FilterContext _gapFillFilterContext;
+
+ /**
+ * Constructor for <code>PreAggregationGapFillSelectionOperatorService</code> with {@link DataSchema}. (Inter segment)
+ *
+ * @param queryContext Selection order-by query
+ * @param dataSchema data schema.
+ */
+ public PreAggregationGapFillSelectionOperatorService(QueryContext queryContext, DataSchema dataSchema) {
+ _columns = Arrays.asList(dataSchema.getColumnNames());
+ _dataSchema = dataSchema;
+ _limitForAggregatedResult = queryContext.getLimit();
+ _limitForGapfilledResult = queryContext.getSubQueryContext().getLimit();
+ _rows = new PriorityQueue<>(Math.min(_limitForAggregatedResult,
+ SelectionOperatorUtils.MAX_ROW_HOLDER_INITIAL_CAPACITY),
+ getTypeCompatibleComparator());
+
+ _queryContext = queryContext;
+ _preAggregateGapFillQueryContext = queryContext.getSubQueryContext();
+ ExpressionContext gapFillSelection =
+ GapfillUtils.getPreAggregateGapfillExpressionContext(_preAggregateGapFillQueryContext);
+
+ List<ExpressionContext> args = gapFillSelection.getFunction().getArguments();
+ Preconditions.checkArgument(
+ args.size() > 5, "PreAggregateGapFill does not have the correct number of arguments.");
+ Preconditions.checkArgument(
+ args.get(1).getLiteral() != null, "The second argument of PreAggregateGapFill should be the time format.");
+ Preconditions.checkArgument(
+ args.get(2).getLiteral() != null, "The third argument of PreAggregateGapFill should be the start time.");
+ Preconditions.checkArgument(
+ args.get(3).getLiteral() != null, "The fourth argument of PreAggregateGapFill should be the end time.");
+ Preconditions.checkArgument(
+ args.get(4).getLiteral() != null, "The fifth argument of PreAggregateGapFill should be the time bucket size.");
+
+ _gapFillFilterContext = _queryContext.getFilter();
+ _dateTimeFormatter = new DateTimeFormatSpec(args.get(1).getLiteral());
+ String start = args.get(2).getLiteral();
+ String end = args.get(3).getLiteral();
+ _dateTimeGranularity = new DateTimeGranularitySpec(args.get(4).getLiteral());
+ _startMs = truncate(_dateTimeFormatter.fromFormatToMillis(start));
+ _endMs = truncate(_dateTimeFormatter.fromFormatToMillis(end));
+ _timeBucketSize = _dateTimeGranularity.granularityToMillis();
+
+ ExpressionContext timeseriesOn = GapfillUtils.getTimeSeriesOnExpressionContext(gapFillSelection);
+ _fillExpressions = GapfillUtils.getFillExpressions(gapFillSelection);
+
+ _previousByGroupKey = new HashMap<>();
+ _groupByKeyIndexes = new ArrayList<>();
+ _isGroupBySelections = new boolean[dataSchema.getColumnDataTypes().length];
+ _groupByKeys = new HashSet<>();
+ _groupByKeyList = new ArrayList<>();
+ _groupByKeyMappings = new HashMap<>();
+
+ Map<String, Integer> indexes = new HashMap<>();
+ for (int i = 0; i < _columns.size(); i++) {
+ indexes.put(_columns.get(i), i);
+ }
+
+ Preconditions.checkArgument(timeseriesOn != null, "The timeseriesOn expression should be specified.");
+ _numOfGroupByKeys = timeseriesOn.getFunction().getArguments().size() - 1;
+ List<ExpressionContext> timeSeries = timeseriesOn.getFunction().getArguments();
+ // The first argument of timeseriesOn is the time column. The remaining ones define the entity.
+ for (int i = 1; i < timeSeries.size(); i++) {
+ int index = indexes.get(timeSeries.get(i).getIdentifier());
+ _isGroupBySelections[index] = true;
+ _groupByKeyIndexes.add(index);
+ }
+ }
+
+ private Key constructGroupKeys(Object[] row) {
+ Object[] groupKeys = new Object[_numOfGroupByKeys];
+ for (int i = 0; i < _numOfGroupByKeys; i++) {
+ groupKeys[i] = row[_groupByKeyIndexes.get(i)];
+ }
+ return new Key(groupKeys);
+ }
+
+ private long truncate(long epoch) {
+ int sz = _dateTimeGranularity.getSize();
+ return epoch / sz * sz;
+ }
+
+ List<Object[]> gapFillAndAggregate(List<Object[]> sortedRows, DataSchema dataSchemaForAggregatedResult) {
Review comment:
Break this into smaller functions.
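As an aside, the time-bucket truncation used by the `truncate` helper in the diff above can be sketched standalone. The class and values below are illustrative only, not Pinot code, and assume the bucket size is expressed in the same unit as the epoch value:

```java
// Illustrative standalone sketch of time-bucket truncation (not the Pinot API).
public class TruncateSketch {
  // Integer division drops the remainder, snapping the timestamp down to the
  // start of the bucket that contains it.
  static long truncate(long epochMs, long bucketSizeMs) {
    return epochMs / bucketSizeMs * bucketSizeMs;
  }

  public static void main(String[] args) {
    long fiveMinutesMs = 5 * 60 * 1000L;
    // 12:07:30 (43,650,000 ms into the day) snaps to 12:05:00 for 5-minute buckets.
    System.out.println(truncate(43_650_000L, fiveMinutesMs)); // prints 43500000
  }
}
```

Every timestamp inside a bucket maps to the same boundary, which is what lets rows be grouped and gap-filled per bucket.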
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: commits-unsubscribe@pinot.apache.org
For queries about this service, please contact Infrastructure at:
users@infra.apache.org
---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@pinot.apache.org
For additional commands, e-mail: commits-help@pinot.apache.org
[GitHub] [pinot] codecov-commenter edited a comment on pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#issuecomment-1013577523
# [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) Report
> Merging [#8029](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (e4043fe) into [master](https://codecov.io/gh/apache/pinot/commit/af742f7d7e1dbe4325c982f3a0164927d8a0037f?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (af742f7) will **decrease** coverage by `42.94%`.
> The diff coverage is `11.94%`.
[![Impacted file tree graph](https://codecov.io/gh/apache/pinot/pull/8029/graphs/tree.svg?width=650&height=150&src=pr&token=4ibza2ugkz&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
```diff
@@ Coverage Diff @@
## master #8029 +/- ##
=============================================
- Coverage 70.32% 27.37% -42.95%
=============================================
Files 1624 1626 +2
Lines 84176 84522 +346
Branches 12600 12731 +131
=============================================
- Hits 59196 23141 -36055
- Misses 20906 59226 +38320
+ Partials 4074 2155 -1919
```
| Flag | Coverage Δ | |
|---|---|---|
| integration1 | `?` | |
| integration2 | `27.37% <11.94%> (?)` | |
| unittests1 | `?` | |
| unittests2 | `?` | |
Flags with carried forward coverage won't be shown. [Click here](https://docs.codecov.io/docs/carryforward-flags?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#carryforward-flags-in-the-pull-request-comment) to find out more.
| [Impacted Files](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | Coverage Δ | |
|---|---|---|
| [.../combine/GapfillGroupByOrderByCombineOperator.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9vcGVyYXRvci9jb21iaW5lL0dhcGZpbGxHcm91cEJ5T3JkZXJCeUNvbWJpbmVPcGVyYXRvci5qYXZh) | `0.00% <0.00%> (ø)` | |
| [...plan/GapfillAggregationGroupByOrderByPlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL0dhcGZpbGxBZ2dyZWdhdGlvbkdyb3VwQnlPcmRlckJ5UGxhbk5vZGUuamF2YQ==) | `0.00% <0.00%> (ø)` | |
| [...he/pinot/core/plan/GapfillAggregationPlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL0dhcGZpbGxBZ2dyZWdhdGlvblBsYW5Ob2RlLmphdmE=) | `0.00% <0.00%> (ø)` | |
| [...ache/pinot/core/plan/GapfillSelectionPlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL0dhcGZpbGxTZWxlY3Rpb25QbGFuTm9kZS5qYXZh) | `0.00% <0.00%> (ø)` | |
| [...query/reduce/ColumnDataToBlockValSetConverter.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvQ29sdW1uRGF0YVRvQmxvY2tWYWxTZXRDb252ZXJ0ZXIuamF2YQ==) | `0.00% <0.00%> (ø)` | |
| [...query/reduce/PreAggregateGapfillFilterHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUHJlQWdncmVnYXRlR2FwZmlsbEZpbHRlckhhbmRsZXIuamF2YQ==) | `0.00% <0.00%> (ø)` | |
| [.../reduce/PreAggregationGapFillDataTableReducer.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUHJlQWdncmVnYXRpb25HYXBGaWxsRGF0YVRhYmxlUmVkdWNlci5qYXZh) | `0.00% <0.00%> (ø)` | |
| [...t/core/query/selection/SelectionOperatorUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9zZWxlY3Rpb24vU2VsZWN0aW9uT3BlcmF0b3JVdGlscy5qYXZh) | `69.25% <0.00%> (-22.93%)` | :arrow_down: |
| [.../java/org/apache/pinot/core/util/GapfillUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS91dGlsL0dhcGZpbGxVdGlscy5qYXZh) | `27.02% <18.60%> (-36.61%)` | :arrow_down: |
| [...va/org/apache/pinot/core/plan/CombinePlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL0NvbWJpbmVQbGFuTm9kZS5qYXZh) | `78.33% <20.00%> (-5.60%)` | :arrow_down: |
| ... and [1232 more](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Last update [af742f7...e4043fe](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r806333962
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/request/context/utils/QueryContextUtils.java
##########
@@ -47,16 +52,22 @@ public static boolean isSelectionQuery(QueryContext query) {
* Selection-only query at this moment means a selection query without order-by.
*/
public static boolean isSelectionOnlyQuery(QueryContext query) {
- return query.getAggregationFunctions() == null && query.getOrderByExpressions() == null;
+ return query.getAggregationFunctions() == null
+ && query.getOrderByExpressions() == null
+ && !GapfillUtils.isGapfill(query);
}
/**
- * Returns {@code true} if the given query is an aggregation query, {@code false} otherwise.
+ * Returns {@code true} if the given query is an aggregation query, {@code false} otherwise.
*/
public static boolean isAggregationQuery(QueryContext query) {
- AggregationFunction[] aggregationFunctions = query.getAggregationFunctions();
- return aggregationFunctions != null && (aggregationFunctions.length != 1
- || !(aggregationFunctions[0] instanceof DistinctAggregationFunction));
+ if (GapfillUtils.isGapfill(query)) {
Review comment:
Done
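For readers skimming the hunk above, the classification rule can be reduced to a tiny standalone predicate. The field and method names here are hypothetical, not the Pinot QueryContext API:

```java
// Hypothetical sketch of the selection-only rule from the diff above; the real
// check inspects QueryContext, this just names the three conditions.
final class QueryShape {
  final boolean hasAggregations;
  final boolean hasOrderBy;
  final boolean isGapfill;

  QueryShape(boolean hasAggregations, boolean hasOrderBy, boolean isGapfill) {
    this.hasAggregations = hasAggregations;
    this.hasOrderBy = hasOrderBy;
    this.isGapfill = isGapfill;
  }

  // Selection-only: no aggregation functions, no ORDER BY, and not a gapfill query.
  boolean isSelectionOnly() {
    return !hasAggregations && !hasOrderBy && !isGapfill;
  }
}
```

A plain `SELECT colA FROM table` corresponds to all three flags being false and is selection-only; turning on gapfill alone is enough to route it off the selection-only path.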
[GitHub] [pinot] codecov-commenter edited a comment on pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#issuecomment-1013577523
# [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) Report
> Merging [#8029](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (509321d) into [master](https://codecov.io/gh/apache/pinot/commit/1d1a7d34709b6a89985a610f46dd1c97d6c9271a?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (1d1a7d3) will **decrease** coverage by `6.69%`.
> The diff coverage is `74.56%`.
[![Impacted file tree graph](https://codecov.io/gh/apache/pinot/pull/8029/graphs/tree.svg?width=650&height=150&src=pr&token=4ibza2ugkz&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
```diff
@@ Coverage Diff @@
## master #8029 +/- ##
============================================
- Coverage 71.40% 64.71% -6.70%
- Complexity 4223 4263 +40
============================================
Files 1597 1572 -25
Lines 82903 81856 -1047
Branches 12369 12325 -44
============================================
- Hits 59201 52974 -6227
- Misses 19689 25093 +5404
+ Partials 4013 3789 -224
```
| Flag | Coverage Δ | |
|---|---|---|
| integration1 | `?` | |
| integration2 | `?` | |
| unittests1 | `67.93% <74.56%> (-0.21%)` | :arrow_down: |
| unittests2 | `14.17% <0.00%> (-0.20%)` | :arrow_down: |
Flags with carried forward coverage won't be shown. [Click here](https://docs.codecov.io/docs/carryforward-flags?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#carryforward-flags-in-the-pull-request-comment) to find out more.
| [Impacted Files](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | Coverage Δ | |
|---|---|---|
| [...query/reduce/ColumnDataToBlockValSetConverter.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvQ29sdW1uRGF0YVRvQmxvY2tWYWxTZXRDb252ZXJ0ZXIuamF2YQ==) | `17.30% <17.30%> (ø)` | |
| [...query/request/context/utils/QueryContextUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZXF1ZXN0L2NvbnRleHQvdXRpbHMvUXVlcnlDb250ZXh0VXRpbHMuamF2YQ==) | `69.56% <20.00%> (-7.71%)` | :arrow_down: |
| [.../java/org/apache/pinot/core/util/GapfillUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS91dGlsL0dhcGZpbGxVdGlscy5qYXZh) | `58.73% <54.83%> (-4.91%)` | :arrow_down: |
| [...inot/core/query/reduce/PostAggregationHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUG9zdEFnZ3JlZ2F0aW9uSGFuZGxlci5qYXZh) | `92.42% <66.66%> (+0.11%)` | :arrow_up: |
| [...che/pinot/core/query/reduce/filter/RowMatcher.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvZmlsdGVyL1Jvd01hdGNoZXIuamF2YQ==) | `66.66% <66.66%> (ø)` | |
| [...ore/query/reduce/filter/LiteralValueExtractor.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvZmlsdGVyL0xpdGVyYWxWYWx1ZUV4dHJhY3Rvci5qYXZh) | `83.33% <83.33%> (ø)` | |
| [.../reduce/PreAggregationGapFillDataTableReducer.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUHJlQWdncmVnYXRpb25HYXBGaWxsRGF0YVRhYmxlUmVkdWNlci5qYXZh) | `83.76% <83.76%> (ø)` | |
| [...query/reduce/PreAggregateGapfillFilterHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUHJlQWdncmVnYXRlR2FwZmlsbEZpbHRlckhhbmRsZXIuamF2YQ==) | `85.71% <85.71%> (ø)` | |
| [.../core/query/reduce/filter/PredicateRowMatcher.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvZmlsdGVyL1ByZWRpY2F0ZVJvd01hdGNoZXIuamF2YQ==) | `87.50% <87.50%> (ø)` | |
| [...inot/core/plan/PreAggGapFillSelectionPlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL1ByZUFnZ0dhcEZpbGxTZWxlY3Rpb25QbGFuTm9kZS5qYXZh) | `90.00% <90.00%> (ø)` | |
| ... and [418 more](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Last update [1d1a7d3...509321d](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
[GitHub] [pinot] codecov-commenter edited a comment on pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#issuecomment-1013577523
# [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) Report
> Merging [#8029](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (a9f2578) into [master](https://codecov.io/gh/apache/pinot/commit/1d1a7d34709b6a89985a610f46dd1c97d6c9271a?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (1d1a7d3) will **decrease** coverage by `1.10%`.
> The diff coverage is `73.62%`.
[![Impacted file tree graph](https://codecov.io/gh/apache/pinot/pull/8029/graphs/tree.svg?width=650&height=150&src=pr&token=4ibza2ugkz&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
```diff
@@ Coverage Diff @@
## master #8029 +/- ##
============================================
- Coverage 71.40% 70.30% -1.11%
- Complexity 4223 4224 +1
============================================
Files 1597 1610 +13
Lines 82903 83317 +414
Branches 12369 12449 +80
============================================
- Hits 59201 58578 -623
- Misses 19689 20703 +1014
- Partials 4013 4036 +23
```
| Flag | Coverage Δ | |
|---|---|---|
| integration1 | `28.86% <16.05%> (-0.14%)` | :arrow_down: |
| integration2 | `?` | |
| unittests1 | `68.13% <73.62%> (-0.01%)` | :arrow_down: |
| unittests2 | `14.24% <0.00%> (-0.13%)` | :arrow_down: |
Flags with carried forward coverage won't be shown. [Click here](https://docs.codecov.io/docs/carryforward-flags?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#carryforward-flags-in-the-pull-request-comment) to find out more.
| [Impacted Files](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | Coverage Δ | |
|---|---|---|
| [...query/reduce/ColumnDataToBlockValSetConverter.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvQ29sdW1uRGF0YVRvQmxvY2tWYWxTZXRDb252ZXJ0ZXIuamF2YQ==) | `17.30% <17.30%> (ø)` | |
| [...query/request/context/utils/QueryContextUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZXF1ZXN0L2NvbnRleHQvdXRpbHMvUXVlcnlDb250ZXh0VXRpbHMuamF2YQ==) | `69.56% <20.00%> (-7.71%)` | :arrow_down: |
| [.../java/org/apache/pinot/core/util/GapfillUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS91dGlsL0dhcGZpbGxVdGlscy5qYXZh) | `58.73% <54.83%> (-4.91%)` | :arrow_down: |
| [.../reduce/PreAggregationGapFillDataTableReducer.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUHJlQWdncmVnYXRpb25HYXBGaWxsRGF0YVRhYmxlUmVkdWNlci5qYXZh) | `63.88% <63.88%> (ø)` | |
| [...inot/core/query/reduce/PostAggregationHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUG9zdEFnZ3JlZ2F0aW9uSGFuZGxlci5qYXZh) | `92.42% <66.66%> (+0.11%)` | :arrow_up: |
| [...che/pinot/core/query/reduce/filter/RowMatcher.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvZmlsdGVyL1Jvd01hdGNoZXIuamF2YQ==) | `66.66% <66.66%> (ø)` | |
| [...ore/query/reduce/filter/LiteralValueExtractor.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvZmlsdGVyL0xpdGVyYWxWYWx1ZUV4dHJhY3Rvci5qYXZh) | `83.33% <83.33%> (ø)` | |
| [...PreAggregationGapFillSelectionOperatorService.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUHJlQWdncmVnYXRpb25HYXBGaWxsU2VsZWN0aW9uT3BlcmF0b3JTZXJ2aWNlLmphdmE=) | `83.51% <83.51%> (ø)` | |
| [...query/reduce/PreAggregateGapfillFilterHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUHJlQWdncmVnYXRlR2FwZmlsbEZpbHRlckhhbmRsZXIuamF2YQ==) | `85.71% <85.71%> (ø)` | |
| [.../core/query/reduce/filter/PredicateRowMatcher.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvZmlsdGVyL1ByZWRpY2F0ZVJvd01hdGNoZXIuamF2YQ==) | `87.50% <87.50%> (ø)` | |
| ... and [125 more](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Last update [1d1a7d3...a9f2578](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r785541884
##########
File path: pinot-common/src/main/java/org/apache/pinot/sql/parsers/CalciteSqlParser.java
##########
@@ -117,21 +117,50 @@ private static String removeTerminatingSemicolon(String sql) {
return sql;
}
+ private static SqlNode parse(String sql) {
+ SqlParser sqlParser = SqlParser.create(sql, PARSER_CONFIG);
+ try {
+ return sqlParser.parseQuery();
+ } catch (SqlParseException e) {
+ throw new SqlCompilationException("Caught exception while parsing query: " + sql, e);
+ }
+ }
+
+ public static PinotQuery compileToPinotQueryWithSubquery(String sql)
+ throws SqlCompilationException {
+ return compileToPinotQuery(sql, true);
+ }
+
public static PinotQuery compileToPinotQuery(String sql)
throws SqlCompilationException {
- // Remove the comments from the query
- sql = removeComments(sql);
+ return compileToPinotQuery(sql, false);
+ }
- // Remove the terminating semicolon from the query
+ private static PinotQuery compileToPinotQuery(String sql, boolean enablePreAggregateGapfillQuery)
+ throws SqlCompilationException {
+ // Removes the terminating semicolon if any
sql = removeTerminatingSemicolon(sql);
// Extract OPTION statements from sql as Calcite Parser doesn't parse it.
List<String> options = extractOptionsFromSql(sql);
if (!options.isEmpty()) {
sql = removeOptionsFromSql(sql);
}
+
+ SqlNode sqlNode = parse(sql);
+
// Compile Sql without OPTION statements.
- PinotQuery pinotQuery = compileCalciteSqlToPinotQuery(sql);
+ PinotQuery pinotQuery = compileSqlNodeToPinotQuery(sqlNode);
+
+ if (enablePreAggregateGapfillQuery) {
Review comment:
We cannot leverage the IN_SUBQUERY feature (https://docs.pinot.apache.org/users/user-guide-query/filtering-with-idset#in_subquery) because the gapfill query syntax follows the real-subquery approach.
The reason for this special-casing is that we do not want to block subquery feature development. We plan to make this feature compatible with the subquery feature once it is in place.
[GitHub] [pinot] mayankshriv commented on a change in pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
mayankshriv commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r785350006
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/reduce/BlockValSetImpl.java
##########
@@ -0,0 +1,171 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.core.query.reduce;
+
+import java.util.List;
+import javax.annotation.Nullable;
+import org.apache.pinot.common.utils.DataSchema;
+import org.apache.pinot.core.common.BlockValSet;
+import org.apache.pinot.segment.spi.index.reader.Dictionary;
+import org.apache.pinot.spi.data.FieldSpec;
+
+
+/**
+ * Helper class to convert the result rows to BlockValSet.
+ */
+@SuppressWarnings({"rawtypes", "unchecked"})
+public class BlockValSetImpl implements BlockValSet {
+
+ private final FieldSpec.DataType _dataType;
+ private final List<Object[]> _rows;
+ private final int _columnIndex;
+
+ public BlockValSetImpl(DataSchema.ColumnDataType columnDataType, List<Object[]> rows, int columnIndex) {
+ _dataType = columnDataType.toDataType();
+ _rows = rows;
+ _columnIndex = columnIndex;
+ }
+
+ @Override
+ public FieldSpec.DataType getValueType() {
+ return _dataType;
+ }
+
+ @Override
+ public boolean isSingleValue() {
+ return true;
+ }
+
+ @Nullable
+ @Override
+ public Dictionary getDictionary() {
+ throw new UnsupportedOperationException("Not supported");
+ }
+
+ @Override
+ public int[] getDictionaryIdsSV() {
+ throw new UnsupportedOperationException("Not supported");
+ }
+
+ @Override
+ public int[] getIntValuesSV() {
+ if (_dataType == FieldSpec.DataType.INT) {
+ int [] result = new int[_rows.size()];
+ for (int i = 0; i < result.length; i++) {
+ result[i] = (Integer) _rows.get(i)[_columnIndex];
+ }
+ return result;
+ }
+ throw new UnsupportedOperationException("Not supported");
+ }
+
+ @Override
+ public long[] getLongValuesSV() {
+ if (_dataType == FieldSpec.DataType.LONG) {
+ long [] result = new long[_rows.size()];
+ for (int i = 0; i < result.length; i++) {
+ result[i] = (Long) _rows.get(i)[_columnIndex];
+ }
+ return result;
+ }
+ throw new UnsupportedOperationException("Not supported");
Review comment:
IIRC, other parts of the code allow reading `int` as `long` and other such upcasting?
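The upcasting the reviewer asks about can be sketched in plain Java. The sketch below is illustrative only (the class name and helper method are hypothetical, not Pinot code); it shows how a `getLongValuesSV`-style reader could accept `INT`-typed rows by widening through `Number` instead of throwing:

```java
import java.util.Arrays;
import java.util.List;

public class UpcastSketch {
    // Hypothetical sketch: instead of throwing when the stored type is INT,
    // widen through Number so both boxed Integer and Long values are accepted.
    static long[] getLongValuesSV(List<Object[]> rows, int columnIndex) {
        long[] result = new long[rows.size()];
        for (int i = 0; i < result.length; i++) {
            // Number covers Integer and Long, so INT columns upcast cleanly.
            result[i] = ((Number) rows.get(i)[columnIndex]).longValue();
        }
        return result;
    }

    public static void main(String[] args) {
        List<Object[]> rows = Arrays.asList(new Object[]{1}, new Object[]{2L});
        System.out.println(Arrays.toString(getLongValuesSV(rows, 0)));
    }
}
```

Reading through `Number.longValue()` keeps a single cast site while covering every numeric boxed type that can safely widen to `long`.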
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/reduce/BlockValSetImpl.java
##########
@@ -0,0 +1,171 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.core.query.reduce;
+
+import java.util.List;
+import javax.annotation.Nullable;
+import org.apache.pinot.common.utils.DataSchema;
+import org.apache.pinot.core.common.BlockValSet;
+import org.apache.pinot.segment.spi.index.reader.Dictionary;
+import org.apache.pinot.spi.data.FieldSpec;
+
+
+/**
+ * Helper class to convert the result rows to BlockValSet.
+ */
+@SuppressWarnings({"rawtypes", "unchecked"})
+public class BlockValSetImpl implements BlockValSet {
Review comment:
Could we choose a better name for the class and also add more Javadoc? For example, it is unclear to me how it differs from other implementations of `BlockValSet` without reading the code.
##########
File path: pinot-common/src/main/java/org/apache/pinot/sql/parsers/CalciteSqlParser.java
##########
@@ -117,21 +117,50 @@ private static String removeTerminatingSemicolon(String sql) {
return sql;
}
+ private static SqlNode parse(String sql) {
+ SqlParser sqlParser = SqlParser.create(sql, PARSER_CONFIG);
+ try {
+ return sqlParser.parseQuery();
+ } catch (SqlParseException e) {
+ throw new SqlCompilationException("Caught exception while parsing query: " + sql, e);
+ }
+ }
+
+ public static PinotQuery compileToPinotQueryWithSubquery(String sql)
+ throws SqlCompilationException {
+ return compileToPinotQuery(sql, true);
+ }
+
public static PinotQuery compileToPinotQuery(String sql)
throws SqlCompilationException {
- // Remove the comments from the query
- sql = removeComments(sql);
+ return compileToPinotQuery(sql, false);
+ }
- // Remove the terminating semicolon from the query
+ private static PinotQuery compileToPinotQuery(String sql, boolean enablePreAggregateGapfillQuery)
+ throws SqlCompilationException {
+ // Removes the terminating semicolon if any
sql = removeTerminatingSemicolon(sql);
// Extract OPTION statements from sql as Calcite Parser doesn't parse it.
List<String> options = extractOptionsFromSql(sql);
if (!options.isEmpty()) {
sql = removeOptionsFromSql(sql);
}
+
+ SqlNode sqlNode = parse(sql);
+
// Compile Sql without OPTION statements.
- PinotQuery pinotQuery = compileCalciteSqlToPinotQuery(sql);
+ PinotQuery pinotQuery = compileSqlNodeToPinotQuery(sqlNode);
+
+ if (enablePreAggregateGapfillQuery) {
Review comment:
Can we avoid this special-casing? We already have the `IN_SUBQUERY` feature: https://docs.pinot.apache.org/users/user-guide-query/filtering-with-idset#in_subquery
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/plan/PreAggGapFillSelectionPlanNode.java
##########
@@ -0,0 +1,86 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.core.plan;
+
+import com.google.common.base.Preconditions;
+import java.util.ArrayList;
+import java.util.List;
+import org.apache.pinot.common.request.context.ExpressionContext;
+import org.apache.pinot.core.common.Operator;
+import org.apache.pinot.core.operator.blocks.IntermediateResultsBlock;
+import org.apache.pinot.core.operator.query.SelectionOnlyOperator;
+import org.apache.pinot.core.operator.transform.TransformOperator;
+import org.apache.pinot.core.query.request.context.QueryContext;
+import org.apache.pinot.core.util.GapfillUtils;
+import org.apache.pinot.segment.spi.IndexSegment;
+
+
+/**
+ * The <code>PreAggGapFillSelectionPlanNode</code> class provides the execution
+ * plan for pre-aggregate gapfill query on a single segment.
+ */
+public class PreAggGapFillSelectionPlanNode implements PlanNode {
+ private final IndexSegment _indexSegment;
+ private final QueryContext _queryContext;
+
+ public PreAggGapFillSelectionPlanNode(IndexSegment indexSegment, QueryContext queryContext) {
+ _indexSegment = indexSegment;
+ _queryContext = queryContext.getPreAggregateGapFillQueryContext();
+ }
+
+ @Override
+ public Operator<IntermediateResultsBlock> run() {
+ int limit = _queryContext.getLimit();
+
+ ExpressionContext gapFillSelection = null;
+ for (ExpressionContext expressionContext : _queryContext.getSelectExpressions()) {
+ if (GapfillUtils.isPreAggregateGapfill(expressionContext)) {
+ gapFillSelection = expressionContext;
+ break;
+ }
+ }
+
+ List<ExpressionContext> args = gapFillSelection.getFunction().getArguments();
+ ExpressionContext timeSeriesOn = null;
+ for (int i = 5; i < args.size(); i++) {
Review comment:
The use of `5` here reads like a magic number. Can we make it more readable and/or add a comment?
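The magic-number concern can be addressed with a named constant. The sketch below is illustrative (the class name, constant name, and helper are hypothetical); only the value 5 comes from the diff under review, where the first five positions hold the required gapfill arguments:

```java
import java.util.Arrays;
import java.util.List;

public class GapfillArgsSketch {
    // Hypothetical constant: positions 0-4 hold the required gapfill
    // arguments, so optional clauses such as TIMESERIESON start at index 5.
    private static final int FIRST_OPTIONAL_ARG_INDEX = 5;

    // Returns the optional trailing arguments, or an empty list if none.
    static List<String> optionalArgs(List<String> args) {
        if (args.size() <= FIRST_OPTIONAL_ARG_INDEX) {
            return List.of();
        }
        return args.subList(FIRST_OPTIONAL_ARG_INDEX, args.size());
    }

    public static void main(String[] args) {
        List<String> all = Arrays.asList("ts", "'1:MILLISECONDS:EPOCH'",
            "start", "end", "bucket", "TIMESERIESON(col)");
        System.out.println(optionalArgs(all));
    }
}
```

With the constant in place, the loop in the diff could start at `FIRST_OPTIONAL_ARG_INDEX`, and a comment on the constant documents why the optional arguments begin at that position.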
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r829485246
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/reduce/ColumnDataToBlockValSetConverter.java
##########
@@ -0,0 +1,181 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.core.query.reduce;
+
+import java.util.List;
+import javax.annotation.Nullable;
+import org.apache.pinot.common.utils.DataSchema;
+import org.apache.pinot.common.utils.DataTable;
+import org.apache.pinot.core.common.BlockValSet;
+import org.apache.pinot.segment.spi.index.reader.Dictionary;
+import org.apache.pinot.spi.data.FieldSpec;
+
+
+/**
+ * For the gapfill function, all raw data are retrieved from the Pinot
+ * servers and merged on the Pinot broker. The merged data are in
+ * {@link DataTable} format.
+ * As part of the gapfill execution plan, the aggregation functions work on
+ * the merged data on the broker, but they only accept input in
+ * {@link BlockValSet} format.
+ * This is the helper class that converts the data from {@link DataTable} to
+ * the block of values {@link BlockValSet} used as input to the aggregation
+ * functions.
+ */
+@SuppressWarnings({"rawtypes", "unchecked"})
+public class ColumnDataToBlockValSetConverter implements BlockValSet {
+
+ private final FieldSpec.DataType _dataType;
+ private final List<Object[]> _rows;
+ private final int _columnIndex;
+
+ public ColumnDataToBlockValSetConverter(DataSchema.ColumnDataType columnDataType, List<Object[]> rows,
+ int columnIndex) {
+ _dataType = columnDataType.toDataType();
+ _rows = rows;
+ _columnIndex = columnIndex;
+ }
+
+ @Override
+ public FieldSpec.DataType getValueType() {
+ return _dataType;
+ }
+
+ @Override
+ public boolean isSingleValue() {
+ return true;
+ }
+
+ @Nullable
+ @Override
+ public Dictionary getDictionary() {
+ throw new UnsupportedOperationException("Not supported");
+ }
+
+ @Override
+ public int[] getDictionaryIdsSV() {
+ throw new UnsupportedOperationException("Not supported");
+ }
+
+ @Override
+ public int[] getIntValuesSV() {
+ if (_dataType == FieldSpec.DataType.INT) {
+ int[] result = new int[_rows.size()];
+ for (int i = 0; i < result.length; i++) {
+ result[i] = (Integer) _rows.get(i)[_columnIndex];
+ }
+ return result;
+ }
+ throw new UnsupportedOperationException("Not supported");
+ }
+
+ @Override
+ public long[] getLongValuesSV() {
+ if (_dataType == FieldSpec.DataType.LONG) {
+ long[] result = new long[_rows.size()];
+ for (int i = 0; i < result.length; i++) {
+ result[i] = (Long) _rows.get(i)[_columnIndex];
+ }
+ return result;
+ }
+ throw new UnsupportedOperationException("Not supported");
+ }
+
+ @Override
+ public float[] getFloatValuesSV() {
+ if (_dataType == FieldSpec.DataType.FLOAT) {
+ float[] result = new float[_rows.size()];
+ for (int i = 0; i < result.length; i++) {
+ result[i] = (Float) _rows.get(i)[_columnIndex];
+ }
+ return result;
+ }
+ throw new UnsupportedOperationException("Not supported");
+ }
+
+ @Override
+ public double[] getDoubleValuesSV() {
+ if (_dataType == FieldSpec.DataType.DOUBLE) {
+ double[] result = new double[_rows.size()];
+ for (int i = 0; i < result.length; i++) {
+ result[i] = (Double) _rows.get(i)[_columnIndex];
+ }
+ return result;
+ } else if (_dataType == FieldSpec.DataType.INT) {
+ double[] result = new double[_rows.size()];
+ for (int i = 0; i < result.length; i++) {
+ result[i] = ((Integer) _rows.get(i)[_columnIndex]).doubleValue();
+ }
+ return result;
+ }
+ throw new UnsupportedOperationException("Not supported");
+ }
+
+ @Override
+ public String[] getStringValuesSV() {
+ if (_dataType == FieldSpec.DataType.STRING) {
+ String[] result = new String[_rows.size()];
+ for (int i = 0; i < result.length; i++) {
+ result[i] = (String) _rows.get(i)[_columnIndex];
+ }
+ return result;
+ }
+ throw new UnsupportedOperationException("Not supported");
Review comment:
Fixed
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r817134430
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/request/context/utils/BrokerRequestToQueryContextConverter.java
##########
@@ -53,12 +55,82 @@ private BrokerRequestToQueryContextConverter() {
* Converts the given {@link BrokerRequest} into a {@link QueryContext}.
*/
public static QueryContext convert(BrokerRequest brokerRequest) {
- return brokerRequest.getPinotQuery() != null ? convertSQL(brokerRequest) : convertPQL(brokerRequest);
+ QueryContext queryContext;
+ if (brokerRequest.getPinotQuery() != null) {
+ queryContext = convertSQL(brokerRequest.getPinotQuery(), brokerRequest);
+ } else {
+ queryContext = convertPQL(brokerRequest);
+ }
+ queryContext.setGapfillType(GapfillUtils.getGapfillType(queryContext));
Review comment:
Should move this to the SQL case.
Fixed
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/reduce/PostAggregationHandler.java
##########
@@ -50,6 +54,9 @@ public PostAggregationHandler(QueryContext queryContext, DataSchema dataSchema)
_filteredAggregationsIndexMap = queryContext.getFilteredAggregationsIndexMap();
assert _filteredAggregationsIndexMap != null;
List<ExpressionContext> groupByExpressions = queryContext.getGroupByExpressions();
+ if (groupByExpressions == null) {
Review comment:
Maybe should be removed?
Fixed
##########
File path: pinot-core/src/test/java/org/apache/pinot/queries/BaseQueriesTest.java
##########
@@ -234,8 +234,8 @@ private BrokerResponseNative getBrokerResponse(QueryContext queryContext, PlanMa
Utils.rethrowException(e);
}
- BrokerResponseNative brokerResponse =
- brokerReduceService.reduceOnDataTable(queryContext.getBrokerRequest(), dataTableMap,
+ BrokerResponseNative brokerResponse = brokerReduceService
Review comment:
Revert the change
Fixed
##########
File path: pinot-common/src/main/java/org/apache/pinot/common/request/DataSource.java
##########
@@ -97,12 +102,14 @@ public short getThriftFieldId() {
}
// isset id assignments
- private static final _Fields optionals[] = {_Fields.TABLE_NAME};
+ private static final _Fields optionals[] = {_Fields.TABLE_NAME,_Fields.SUBQUERY};
public static final java.util.Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> metaDataMap;
static {
java.util.Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> tmpMap = new java.util.EnumMap<_Fields, org.apache.thrift.meta_data.FieldMetaData>(_Fields.class);
tmpMap.put(_Fields.TABLE_NAME, new org.apache.thrift.meta_data.FieldMetaData("tableName", org.apache.thrift.TFieldRequirementType.OPTIONAL,
new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRING)));
+ tmpMap.put(_Fields.SUBQUERY, new org.apache.thrift.meta_data.FieldMetaData("subquery", org.apache.thrift.TFieldRequirementType.OPTIONAL,
+ new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRUCT , "PinotQuery")));
Review comment:
Remove the spaces?
Fixed
[GitHub] [pinot] codecov-commenter edited a comment on pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#issuecomment-1013577523
# [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) Report
> Merging [#8029](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (1c1ba84) into [master](https://codecov.io/gh/apache/pinot/commit/3f98ce37fdaef0335fcd82e621489d65751b1f55?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (3f98ce3) will **decrease** coverage by `6.57%`.
> The diff coverage is `79.03%`.
[![Impacted file tree graph](https://codecov.io/gh/apache/pinot/pull/8029/graphs/tree.svg?width=650&height=150&src=pr&token=4ibza2ugkz&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
```diff
@@ Coverage Diff @@
## master #8029 +/- ##
============================================
- Coverage 70.72% 64.15% -6.58%
- Complexity 4242 4245 +3
============================================
Files 1631 1596 -35
Lines 85279 84200 -1079
Branches 12844 12831 -13
============================================
- Hits 60316 54020 -6296
- Misses 20799 26283 +5484
+ Partials 4164 3897 -267
```
| Flag | Coverage Δ | |
|---|---|---|
| integration1 | `?` | |
| integration2 | `?` | |
| unittests1 | `67.07% <79.24%> (+0.09%)` | :arrow_up: |
| unittests2 | `14.06% <0.00%> (-0.04%)` | :arrow_down: |
Flags with carried forward coverage won't be shown. [Click here](https://docs.codecov.io/docs/carryforward-flags?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#carryforward-flags-in-the-pull-request-comment) to find out more.
| [Impacted Files](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | Coverage Δ | |
|---|---|---|
| [...roker/requesthandler/BaseBrokerRequestHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtYnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9icm9rZXIvcmVxdWVzdGhhbmRsZXIvQmFzZUJyb2tlclJlcXVlc3RIYW5kbGVyLmphdmE=) | `24.41% <0.00%> (-47.44%)` | :arrow_down: |
| [...t/core/query/selection/SelectionOperatorUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9zZWxlY3Rpb24vU2VsZWN0aW9uT3BlcmF0b3JVdGlscy5qYXZh) | `86.08% <0.00%> (-6.10%)` | :arrow_down: |
| [...query/reduce/ColumnDataToBlockValSetConverter.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvQ29sdW1uRGF0YVRvQmxvY2tWYWxTZXRDb252ZXJ0ZXIuamF2YQ==) | `17.30% <17.30%> (ø)` | |
| [...inot/core/query/reduce/PostAggregationHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUG9zdEFnZ3JlZ2F0aW9uSGFuZGxlci5qYXZh) | `91.89% <66.66%> (+0.12%)` | :arrow_up: |
| [...ot/core/query/reduce/filter/RowMatcherFactory.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvZmlsdGVyL1Jvd01hdGNoZXJGYWN0b3J5LmphdmE=) | `66.66% <66.66%> (ø)` | |
| [.../java/org/apache/pinot/core/util/GapfillUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS91dGlsL0dhcGZpbGxVdGlscy5qYXZh) | `68.79% <70.79%> (+5.15%)` | :arrow_up: |
| [...pinot/core/plan/maker/InstancePlanMakerImplV2.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL21ha2VyL0luc3RhbmNlUGxhbk1ha2VySW1wbFYyLmphdmE=) | `66.36% <73.68%> (-10.41%)` | :arrow_down: |
| [...va/org/apache/pinot/core/plan/CombinePlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL0NvbWJpbmVQbGFuTm9kZS5qYXZh) | `80.55% <75.00%> (-6.95%)` | :arrow_down: |
| [...ore/query/reduce/filter/LiteralValueExtractor.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvZmlsdGVyL0xpdGVyYWxWYWx1ZUV4dHJhY3Rvci5qYXZh) | `83.33% <83.33%> (ø)` | |
| [...org/apache/pinot/sql/parsers/CalciteSqlParser.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9zcWwvcGFyc2Vycy9DYWxjaXRlU3FsUGFyc2VyLmphdmE=) | `85.44% <85.71%> (-1.12%)` | :arrow_down: |
| ... and [413 more](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Last update [3f98ce3...1c1ba84](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
[GitHub] [pinot] amrishlal commented on a change in pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
amrishlal commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r819865670
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/reduce/GapFillDataTableReducer.java
##########
@@ -0,0 +1,690 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.core.query.reduce;
+
+import com.google.common.base.Preconditions;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.CountDownLatch;
+import java.util.concurrent.Future;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.TimeoutException;
+import org.apache.pinot.common.exception.QueryException;
+import org.apache.pinot.common.metrics.BrokerMetrics;
+import org.apache.pinot.common.request.context.ExpressionContext;
+import org.apache.pinot.common.request.context.FunctionContext;
+import org.apache.pinot.common.response.broker.BrokerResponseNative;
+import org.apache.pinot.common.response.broker.QueryProcessingException;
+import org.apache.pinot.common.response.broker.ResultTable;
+import org.apache.pinot.common.utils.DataSchema;
+import org.apache.pinot.common.utils.DataSchema.ColumnDataType;
+import org.apache.pinot.common.utils.DataTable;
+import org.apache.pinot.core.common.BlockValSet;
+import org.apache.pinot.core.data.table.ConcurrentIndexedTable;
+import org.apache.pinot.core.data.table.IndexedTable;
+import org.apache.pinot.core.data.table.Key;
+import org.apache.pinot.core.data.table.Record;
+import org.apache.pinot.core.data.table.SimpleIndexedTable;
+import org.apache.pinot.core.data.table.UnboundedConcurrentIndexedTable;
+import org.apache.pinot.core.operator.combine.GroupByOrderByCombineOperator;
+import org.apache.pinot.core.query.aggregation.function.AggregationFunction;
+import org.apache.pinot.core.query.aggregation.function.AggregationFunctionFactory;
+import org.apache.pinot.core.query.aggregation.groupby.GroupByResultHolder;
+import org.apache.pinot.core.query.request.context.QueryContext;
+import org.apache.pinot.core.query.selection.SelectionOperatorUtils;
+import org.apache.pinot.core.transport.ServerRoutingInstance;
+import org.apache.pinot.core.util.GapfillUtils;
+import org.apache.pinot.core.util.GroupByUtils;
+import org.apache.pinot.core.util.trace.TraceRunnable;
+import org.apache.pinot.spi.data.DateTimeFormatSpec;
+import org.apache.pinot.spi.data.DateTimeGranularitySpec;
+
+
+/**
+ * Helper class to reduce gap-fill results and set them into the BrokerResponseNative
+ */
+@SuppressWarnings({"rawtypes", "unchecked"})
+public class GapFillDataTableReducer implements DataTableReducer {
+ private static final int MIN_DATA_TABLES_FOR_CONCURRENT_REDUCE = 2; // TBD, find a better value.
+
+ private final QueryContext _queryContext;
+
+ private final int _limitForAggregatedResult;
+ private final DateTimeGranularitySpec _dateTimeGranularity;
+ private final DateTimeFormatSpec _dateTimeFormatter;
+ private final long _startMs;
+ private final long _endMs;
+ private final long _timeBucketSize;
+ private final int _numOfTimeBuckets;
+ private final List<Integer> _groupByKeyIndexes;
+ private final Set<Key> _groupByKeys;
+ private final Map<Key, Object[]> _previousByGroupKey;
+ private final Map<String, ExpressionContext> _fillExpressions;
+ private final List<ExpressionContext> _timeSeries;
+ private final GapfillUtils.GapfillType _gapfillType;
+ private int _limitForGapfilledResult;
+ private boolean[] _isGroupBySelections;
+
+ GapFillDataTableReducer(QueryContext queryContext) {
+ _queryContext = queryContext;
+ _gapfillType = queryContext.getGapfillType();
+ _limitForAggregatedResult = queryContext.getLimit();
+ if (_gapfillType == GapfillUtils.GapfillType.AGGREGATE_GAP_FILL
+ || _gapfillType == GapfillUtils.GapfillType.GAP_FILL) {
+ _limitForGapfilledResult = queryContext.getLimit();
+ } else {
+ _limitForGapfilledResult = queryContext.getSubQueryContext().getLimit();
+ }
+
+ ExpressionContext gapFillSelection = GapfillUtils.getGapfillExpressionContext(queryContext);
+
+ List<ExpressionContext> args = gapFillSelection.getFunction().getArguments();
+
+ _dateTimeFormatter = new DateTimeFormatSpec(args.get(1).getLiteral());
+ _dateTimeGranularity = new DateTimeGranularitySpec(args.get(4).getLiteral());
+ String start = args.get(2).getLiteral();
+ _startMs = truncate(_dateTimeFormatter.fromFormatToMillis(start));
+ String end = args.get(3).getLiteral();
+ _endMs = truncate(_dateTimeFormatter.fromFormatToMillis(end));
+ _timeBucketSize = _dateTimeGranularity.granularityToMillis();
+ _numOfTimeBuckets = (int) ((_endMs - _startMs) / _timeBucketSize);
+
+ _fillExpressions = GapfillUtils.getFillExpressions(gapFillSelection);
+
+ _previousByGroupKey = new HashMap<>();
+ _groupByKeyIndexes = new ArrayList<>();
+ _groupByKeys = new HashSet<>();
+
+ ExpressionContext timeseriesOn = GapfillUtils.getTimeSeriesOnExpressionContext(gapFillSelection);
+ _timeSeries = timeseriesOn.getFunction().getArguments();
+ }
+
+ private int findBucketIndex(long time) {
+ return (int) ((time - _startMs) / _timeBucketSize);
+ }
+
+ private void replaceColumnNameWithAlias(DataSchema dataSchema) {
+ QueryContext queryContext;
+ if (_gapfillType == GapfillUtils.GapfillType.AGGREGATE_GAP_FILL_AGGREGATE) {
+ queryContext = _queryContext.getSubQueryContext().getSubQueryContext();
+ } else if (_gapfillType == GapfillUtils.GapfillType.GAP_FILL) {
+ queryContext = _queryContext;
+ } else {
+ queryContext = _queryContext.getSubQueryContext();
+ }
+ List<String> aliasList = queryContext.getAliasList();
+ Map<String, String> columnNameToAliasMap = new HashMap<>();
+ for (int i = 0; i < aliasList.size(); i++) {
+ if (aliasList.get(i) != null) {
+ ExpressionContext selection = queryContext.getSelectExpressions().get(i);
+ if (GapfillUtils.isGapfill(selection)) {
+ selection = selection.getFunction().getArguments().get(0);
+ }
+ columnNameToAliasMap.put(selection.toString(), aliasList.get(i));
+ }
+ }
+ for (int i = 0; i < dataSchema.getColumnNames().length; i++) {
+ if (columnNameToAliasMap.containsKey(dataSchema.getColumnNames()[i])) {
+ dataSchema.getColumnNames()[i] = columnNameToAliasMap.get(dataSchema.getColumnNames()[i]);
+ }
+ }
+ }
+
+ /**
+ * Computes the number of reduce threads to use per query.
+ * <ul>
+ * <li> Use single thread if number of data tables to reduce is less than
+ * {@value #MIN_DATA_TABLES_FOR_CONCURRENT_REDUCE}.</li>
+ * <li> Else, use min of max allowed reduce threads per query, and number of data tables.</li>
+ * </ul>
+ *
+ * @param numDataTables Number of data tables to reduce
+ * @param maxReduceThreadsPerQuery Max allowed reduce threads per query
+ * @return Number of reduce threads to use for the query
+ */
+ private int getNumReduceThreadsToUse(int numDataTables, int maxReduceThreadsPerQuery) {
+ // Use single thread if number of data tables < MIN_DATA_TABLES_FOR_CONCURRENT_REDUCE.
+ if (numDataTables < MIN_DATA_TABLES_FOR_CONCURRENT_REDUCE) {
+ return Math.min(1, numDataTables); // Number of data tables can be zero.
+ }
+
+ return Math.min(maxReduceThreadsPerQuery, numDataTables);
+ }
+
+ private IndexedTable getIndexedTable(DataSchema dataSchema, Collection<DataTable> dataTablesToReduce,
+ DataTableReducerContext reducerContext)
+ throws TimeoutException {
+ QueryContext queryContext = _queryContext.getSubQueryContext();
+ if (_queryContext.getGapfillType() == GapfillUtils.GapfillType.AGGREGATE_GAP_FILL_AGGREGATE) {
+ queryContext = queryContext.getSubQueryContext();
+ }
+ long start = System.currentTimeMillis();
+ int numDataTables = dataTablesToReduce.size();
+
+ // Get the number of threads to use for reducing.
+ // In case of single reduce thread, fall back to SimpleIndexedTable to avoid redundant locking/unlocking calls.
+ int numReduceThreadsToUse = getNumReduceThreadsToUse(numDataTables, reducerContext.getMaxReduceThreadsPerQuery());
+ int limit = queryContext.getLimit();
+ // TODO: Make minTrimSize configurable
+ int trimSize = GroupByUtils.getTableCapacity(limit);
+ // NOTE: For query with HAVING clause, use trimSize as resultSize to ensure the result accuracy.
+ // TODO: Resolve the HAVING clause within the IndexedTable before returning the result
+ int resultSize = queryContext.getHavingFilter() != null ? trimSize : limit;
+ int trimThreshold = reducerContext.getGroupByTrimThreshold();
+ IndexedTable indexedTable;
+ if (numReduceThreadsToUse <= 1) {
+ indexedTable = new SimpleIndexedTable(dataSchema, queryContext, resultSize, trimSize, trimThreshold);
+ } else {
+ if (trimThreshold >= GroupByOrderByCombineOperator.MAX_TRIM_THRESHOLD) {
+ // Special case where the trim threshold is set to its max value:
+ // there won't be any trimming during upsert, so we can avoid the
+ // overhead of taking the read-lock and write-lock in the upsert method.
+ indexedTable = new UnboundedConcurrentIndexedTable(dataSchema, queryContext, resultSize);
+ } else {
+ indexedTable = new ConcurrentIndexedTable(dataSchema, queryContext, resultSize, trimSize, trimThreshold);
+ }
+ }
+
+ Future[] futures = new Future[numDataTables];
+ CountDownLatch countDownLatch = new CountDownLatch(numDataTables);
+
+ // Create groups of data tables that each thread can process concurrently.
+ // Given that numReduceThreads is <= numDataTables, each group will have at least one data table.
+ ArrayList<DataTable> dataTables = new ArrayList<>(dataTablesToReduce);
+ List<List<DataTable>> reduceGroups = new ArrayList<>(numReduceThreadsToUse);
+
+ for (int i = 0; i < numReduceThreadsToUse; i++) {
+ reduceGroups.add(new ArrayList<>());
+ }
+ for (int i = 0; i < numDataTables; i++) {
+ reduceGroups.get(i % numReduceThreadsToUse).add(dataTables.get(i));
+ }
+
+ int cnt = 0;
+ ColumnDataType[] storedColumnDataTypes = dataSchema.getStoredColumnDataTypes();
+ int numColumns = storedColumnDataTypes.length;
+ for (List<DataTable> reduceGroup : reduceGroups) {
+ futures[cnt++] = reducerContext.getExecutorService().submit(new TraceRunnable() {
+ @Override
+ public void runJob() {
+ for (DataTable dataTable : reduceGroup) {
+ int numRows = dataTable.getNumberOfRows();
+
+ try {
+ for (int rowId = 0; rowId < numRows; rowId++) {
+ Object[] values = new Object[numColumns];
+ for (int colId = 0; colId < numColumns; colId++) {
+ switch (storedColumnDataTypes[colId]) {
+ case INT:
+ values[colId] = dataTable.getInt(rowId, colId);
+ break;
+ case LONG:
+ values[colId] = dataTable.getLong(rowId, colId);
+ break;
+ case FLOAT:
+ values[colId] = dataTable.getFloat(rowId, colId);
+ break;
+ case DOUBLE:
+ values[colId] = dataTable.getDouble(rowId, colId);
+ break;
+ case STRING:
+ values[colId] = dataTable.getString(rowId, colId);
+ break;
+ case BYTES:
+ values[colId] = dataTable.getBytes(rowId, colId);
+ break;
+ case OBJECT:
+ values[colId] = dataTable.getObject(rowId, colId);
+ break;
+ // Add other aggregation intermediate result / group-by column type supports here
+ default:
+ throw new IllegalStateException();
+ }
+ }
+ indexedTable.upsert(new Record(values));
+ }
+ } finally {
+ countDownLatch.countDown();
+ }
+ }
+ }
+ });
+ }
+
+ try {
+ long timeOutMs = reducerContext.getReduceTimeOutMs() - (System.currentTimeMillis() - start);
+ countDownLatch.await(timeOutMs, TimeUnit.MILLISECONDS);
+ } catch (InterruptedException e) {
+ for (Future future : futures) {
+ if (!future.isDone()) {
+ future.cancel(true);
+ }
+ }
+ throw new TimeoutException("Timed out in broker reduce phase.");
+ }
+
+ indexedTable.finish(true);
+ return indexedTable;
+ }
+
+ /**
+ * Three things happen here:
+ * 1. Sort the result sets from all Pinot servers based on timestamp.
+ * 2. Gap-fill the data for missing entities per time bucket.
+ * 3. Aggregate the data set per time bucket.
+ */
+ @Override
+ public void reduceAndSetResults(String tableName, DataSchema dataSchema,
+ Map<ServerRoutingInstance, DataTable> dataTableMap, BrokerResponseNative brokerResponseNative,
+ DataTableReducerContext reducerContext, BrokerMetrics brokerMetrics) {
+ DataSchema resultTableSchema = getResultTableDataSchema(dataSchema);
+ if (dataTableMap.isEmpty()) {
+ brokerResponseNative.setResultTable(new ResultTable(resultTableSchema, Collections.emptyList()));
+ return;
+ }
+
+ String[] columns = dataSchema.getColumnNames();
+
+ Map<String, Integer> indexes = new HashMap<>();
+ for (int i = 0; i < columns.length; i++) {
+ indexes.put(columns[i], i);
+ }
+
+ _isGroupBySelections = new boolean[dataSchema.getColumnDataTypes().length];
+
+ // The first argument of timeSeries is the time column. The remaining ones define the entity.
+ for (ExpressionContext entityColum : _timeSeries) {
+ int index = indexes.get(entityColum.getIdentifier());
+ _isGroupBySelections[index] = true;
+ _groupByKeyIndexes.add(index);
+ }
+
+ List<Object[]>[] timeBucketedRawRows;
+ if (_gapfillType == GapfillUtils.GapfillType.GAP_FILL_AGGREGATE || _gapfillType == GapfillUtils.GapfillType.GAP_FILL
+ || _gapfillType == GapfillUtils.GapfillType.GAP_FILL_SELECT) {
+ timeBucketedRawRows = putRawRowsIntoTimeBucket(dataTableMap.values());
+ } else {
+ try {
+ IndexedTable indexedTable = getIndexedTable(dataSchema, dataTableMap.values(), reducerContext);
+ timeBucketedRawRows = putRawRowsIntoTimeBucket(indexedTable);
+ } catch (TimeoutException e) {
+ brokerResponseNative.getProcessingExceptions()
+ .add(new QueryProcessingException(QueryException.BROKER_TIMEOUT_ERROR_CODE, e.getMessage()));
+ return;
+ }
+ }
+ List<Object[]> resultRows;
+ replaceColumnNameWithAlias(dataSchema);
+
+ if (_gapfillType == GapfillUtils.GapfillType.GAP_FILL_AGGREGATE || _gapfillType == GapfillUtils.GapfillType.GAP_FILL
+ || _gapfillType == GapfillUtils.GapfillType.GAP_FILL_SELECT) {
+ List<Object[]> gapfilledRows = gapFillAndAggregate(timeBucketedRawRows, resultTableSchema, dataSchema);
+ if (_gapfillType == GapfillUtils.GapfillType.GAP_FILL_SELECT) {
+ List<String> selectionColumns = SelectionOperatorUtils.getSelectionColumns(_queryContext, dataSchema);
+ resultRows = new ArrayList<>(gapfilledRows.size());
+
+ Map<String, Integer> columnNameToIndexMap = new HashMap<>(dataSchema.getColumnNames().length);
+ String[] columnNames = dataSchema.getColumnNames();
+ for (int i = 0; i < columnNames.length; i++) {
+ columnNameToIndexMap.put(columnNames[i], i);
+ }
+
+ ColumnDataType[] columnDataTypes = dataSchema.getColumnDataTypes();
+ ColumnDataType[] resultColumnDataTypes = new ColumnDataType[selectionColumns.size()];
+ for (int i = 0; i < resultColumnDataTypes.length; i++) {
+ String name = selectionColumns.get(i);
+ int index = columnNameToIndexMap.get(name);
+ resultColumnDataTypes[i] = columnDataTypes[index];
+ }
+
+ for (Object[] row : gapfilledRows) {
+ Object[] resultRow = new Object[selectionColumns.size()];
+ for (int i = 0; i < selectionColumns.size(); i++) {
+ int index = columnNameToIndexMap.get(selectionColumns.get(i));
+ resultRow[i] = resultColumnDataTypes[i].convertAndFormat(row[index]);
+ }
+ resultRows.add(resultRow);
+ }
+ } else {
+ resultRows = gapfilledRows;
+ }
+ } else {
+ this.setupColumnTypeForAggregatedColum(dataSchema.getColumnDataTypes());
+ ColumnDataType[] columnDataTypes = dataSchema.getColumnDataTypes();
+ for (List<Object[]> rawRowsForTimeBucket : timeBucketedRawRows) {
+ if (rawRowsForTimeBucket == null) {
+ continue;
+ }
+ for (Object[] row : rawRowsForTimeBucket) {
+ extractFinalAggregationResults(row);
+ for (int i = 0; i < columnDataTypes.length; i++) {
+ row[i] = columnDataTypes[i].convert(row[i]);
+ }
+ }
+ }
+ resultRows = gapFillAndAggregate(timeBucketedRawRows, resultTableSchema, dataSchema);
+ }
+ brokerResponseNative.setResultTable(new ResultTable(resultTableSchema, resultRows));
+ }
+
+ private void extractFinalAggregationResults(Object[] row) {
+ AggregationFunction[] aggregationFunctions;
+ if (_gapfillType == GapfillUtils.GapfillType.AGGREGATE_GAP_FILL) {
+ aggregationFunctions = _queryContext.getSubQueryContext().getAggregationFunctions();
+ } else {
+ aggregationFunctions = _queryContext.getSubQueryContext().getSubQueryContext().getAggregationFunctions();
+ }
+ int numAggregationFunctionsForInnerQuery = aggregationFunctions == null ? 0 : aggregationFunctions.length;
+ for (int i = 0; i < numAggregationFunctionsForInnerQuery; i++) {
+ int valueIndex = _timeSeries.size() + 1 + i;
+ row[valueIndex] = aggregationFunctions[i].extractFinalResult(row[valueIndex]);
+ }
+ }
+
+ private void setupColumnTypeForAggregatedColum(ColumnDataType[] columnDataTypes) {
+ AggregationFunction[] aggregationFunctions;
+ if (_gapfillType == GapfillUtils.GapfillType.AGGREGATE_GAP_FILL) {
+ aggregationFunctions = _queryContext.getSubQueryContext().getAggregationFunctions();
+ } else {
+ aggregationFunctions = _queryContext.getSubQueryContext().getSubQueryContext().getAggregationFunctions();
+ }
+ int numAggregationFunctionsForInnerQuery = aggregationFunctions == null ? 0 : aggregationFunctions.length;
+ for (int i = 0; i < numAggregationFunctionsForInnerQuery; i++) {
+ columnDataTypes[_timeSeries.size() + 1 + i] = aggregationFunctions[i].getFinalResultColumnType();
+ }
+ }
+
+ /**
+ * Constructs the DataSchema for the ResultTable.
+ */
+ private DataSchema getResultTableDataSchema(DataSchema dataSchema) {
+ if (_gapfillType == GapfillUtils.GapfillType.GAP_FILL) {
+ return dataSchema;
+ }
+
+ int numOfColumns = _queryContext.getSelectExpressions().size();
+ String[] columnNames = new String[numOfColumns];
+ ColumnDataType[] columnDataTypes = new ColumnDataType[numOfColumns];
+ for (int i = 0; i < numOfColumns; i++) {
+ ExpressionContext expressionContext = _queryContext.getSelectExpressions().get(i);
+ if (GapfillUtils.isGapfill(expressionContext)) {
+ expressionContext = expressionContext.getFunction().getArguments().get(0);
+ }
+ if (expressionContext.getType() != ExpressionContext.Type.FUNCTION) {
+ columnNames[i] = expressionContext.getIdentifier();
+ columnDataTypes[i] = ColumnDataType.STRING;
+ } else {
+ FunctionContext functionContext = expressionContext.getFunction();
+ AggregationFunction aggregationFunction =
+ AggregationFunctionFactory.getAggregationFunction(functionContext, _queryContext);
+ columnDataTypes[i] = aggregationFunction.getFinalResultColumnType();
+ columnNames[i] = functionContext.toString();
+ }
+ }
+ return new DataSchema(columnNames, columnDataTypes);
+ }
+
+ private Key constructGroupKeys(Object[] row) {
+ Object[] groupKeys = new Object[_groupByKeyIndexes.size()];
+ for (int i = 0; i < _groupByKeyIndexes.size(); i++) {
+ groupKeys[i] = row[_groupByKeyIndexes.get(i)];
+ }
+ return new Key(groupKeys);
+ }
+
+ private long truncate(long epoch) {
+ int sz = _dateTimeGranularity.getSize();
+ return epoch / sz * sz;
+ }
+
+ private List<Object[]> gapFillAndAggregate(List<Object[]>[] timeBucketedRawRows,
+ DataSchema dataSchemaForAggregatedResult, DataSchema dataSchema) {
+ List<Object[]> result = new ArrayList<>();
+
+ GapfillFilterHandler postGapfillFilterHandler = null;
+ if (_queryContext.getSubQueryContext() != null && _queryContext.getFilter() != null) {
+ postGapfillFilterHandler = new GapfillFilterHandler(_queryContext.getFilter(), dataSchema);
+ }
+ GapfillFilterHandler postAggregateHavingFilterHandler = null;
+ if (_queryContext.getHavingFilter() != null) {
+ postAggregateHavingFilterHandler =
+ new GapfillFilterHandler(_queryContext.getHavingFilter(), dataSchemaForAggregatedResult);
+ }
+ for (long time = _startMs; time < _endMs; time += _timeBucketSize) {
+ int index = findBucketIndex(time);
+ List<Object[]> bucketedResult = gapfill(time, timeBucketedRawRows[index], dataSchema, postGapfillFilterHandler);
+ if (_queryContext.getAggregationFunctions() == null) {
+ result.addAll(bucketedResult);
+ } else if (bucketedResult.size() > 0) {
+ List<Object[]> aggregatedRows = aggregateGapfilledData(bucketedResult, dataSchema);
+ for (Object[] aggregatedRow : aggregatedRows) {
+ if (postAggregateHavingFilterHandler == null || postAggregateHavingFilterHandler.isMatch(aggregatedRow)) {
+ result.add(aggregatedRow);
+ }
+ if (result.size() >= _limitForAggregatedResult) {
+ return result;
+ }
+ }
+ }
+ }
+ return result;
+ }
+
+ private List<Object[]> gapfill(long bucketTime, List<Object[]> rawRowsForBucket, DataSchema dataSchema,
+ GapfillFilterHandler postGapfillFilterHandler) {
+ List<Object[]> bucketedResult = new ArrayList<>();
+ ColumnDataType[] resultColumnDataTypes = dataSchema.getColumnDataTypes();
+ int numResultColumns = resultColumnDataTypes.length;
+ Set<Key> keys = new HashSet<>(_groupByKeys);
+
+ if (rawRowsForBucket != null) {
+ for (Object[] resultRow : rawRowsForBucket) {
+ for (int i = 0; i < resultColumnDataTypes.length; i++) {
+ resultRow[i] = resultColumnDataTypes[i].format(resultRow[i]);
+ }
+
+ long timeCol = _dateTimeFormatter.fromFormatToMillis(String.valueOf(resultRow[0]));
+ if (timeCol > bucketTime) {
+ break;
+ }
+ if (timeCol == bucketTime) {
+ if (postGapfillFilterHandler == null || postGapfillFilterHandler.isMatch(resultRow)) {
+ if (bucketedResult.size() >= _limitForGapfilledResult) {
+ _limitForGapfilledResult = 0;
+ break;
+ } else {
+ bucketedResult.add(resultRow);
+ }
+ }
+ Key key = constructGroupKeys(resultRow);
+ keys.remove(key);
+ _previousByGroupKey.put(key, resultRow);
+ }
+ }
+ }
+
+ for (Key key : keys) {
+ Object[] gapfillRow = new Object[numResultColumns];
+ int keyIndex = 0;
+ if (resultColumnDataTypes[0] == ColumnDataType.LONG) {
+ gapfillRow[0] = Long.valueOf(_dateTimeFormatter.fromMillisToFormat(bucketTime));
+ } else {
+ gapfillRow[0] = _dateTimeFormatter.fromMillisToFormat(bucketTime);
+ }
+ for (int i = 1; i < _isGroupBySelections.length; i++) {
+ if (_isGroupBySelections[i]) {
+ gapfillRow[i] = key.getValues()[keyIndex++];
+ } else {
+ gapfillRow[i] = getFillValue(i, dataSchema.getColumnName(i), key, resultColumnDataTypes[i]);
+ }
+ }
+
+ if (postGapfillFilterHandler == null || postGapfillFilterHandler.isMatch(gapfillRow)) {
+ if (bucketedResult.size() >= _limitForGapfilledResult) {
+ break;
+ } else {
+ bucketedResult.add(gapfillRow);
+ }
+ }
+ }
+ if (_limitForGapfilledResult > _groupByKeys.size()) {
+ _limitForGapfilledResult -= _groupByKeys.size();
+ } else {
+ _limitForGapfilledResult = 0;
+ }
+ return bucketedResult;
+ }
+
+ private List<Object[]> aggregateGapfilledData(List<Object[]> bucketedRows, DataSchema dataSchema) {
+ List<ExpressionContext> groupbyExpressions = _queryContext.getGroupByExpressions();
+ Preconditions.checkArgument(groupbyExpressions != null, "No GroupBy Clause.");
+ Map<String, Integer> indexes = new HashMap<>();
+ for (int i = 0; i < dataSchema.getColumnNames().length; i++) {
+ indexes.put(dataSchema.getColumnName(i), i);
+ }
+
+ Map<List<Object>, Integer> groupKeyIndexes = new HashMap<>();
+ int[] groupKeyArray = new int[bucketedRows.size()];
+ List<Object[]> aggregatedResult = new ArrayList<>();
+ for (int i = 0; i < bucketedRows.size(); i++) {
+ Object[] bucketedRow = bucketedRows.get(i);
+ List<Object> groupKey = new ArrayList<>(groupbyExpressions.size());
+ for (ExpressionContext groupbyExpression : groupbyExpressions) {
+ int columnIndex = indexes.get(groupbyExpression.toString());
+ groupKey.add(bucketedRow[columnIndex]);
+ }
+ if (groupKeyIndexes.containsKey(groupKey)) {
+ groupKeyArray[i] = groupKeyIndexes.get(groupKey);
+ } else {
+ // Create a new group-by result row and fill in the group-by key values
+ groupKeyArray[i] = groupKeyIndexes.size();
+ groupKeyIndexes.put(groupKey, groupKeyIndexes.size());
+ Object[] row = new Object[_queryContext.getSelectExpressions().size()];
+ for (int j = 0; j < _queryContext.getSelectExpressions().size(); j++) {
+ ExpressionContext expressionContext = _queryContext.getSelectExpressions().get(j);
+ if (expressionContext.getType() != ExpressionContext.Type.FUNCTION) {
+ row[j] = bucketedRow[indexes.get(expressionContext.toString())];
+ }
+ }
+ aggregatedResult.add(row);
+ }
+ }
+
+ Map<ExpressionContext, BlockValSet> blockValSetMap = new HashMap<>();
+ for (int i = 1; i < dataSchema.getColumnNames().length; i++) {
+ blockValSetMap.put(ExpressionContext.forIdentifier(dataSchema.getColumnName(i)),
+ new ColumnDataToBlockValSetConverter(dataSchema.getColumnDataType(i), bucketedRows, i));
+ }
+
+ for (int i = 0; i < _queryContext.getSelectExpressions().size(); i++) {
+ ExpressionContext expressionContext = _queryContext.getSelectExpressions().get(i);
+ if (expressionContext.getType() == ExpressionContext.Type.FUNCTION) {
+ FunctionContext functionContext = expressionContext.getFunction();
+ AggregationFunction aggregationFunction =
+ AggregationFunctionFactory.getAggregationFunction(functionContext, _queryContext);
+ GroupByResultHolder groupByResultHolder =
+ aggregationFunction.createGroupByResultHolder(_groupByKeys.size(), _groupByKeys.size());
+ aggregationFunction.aggregateGroupBySV(bucketedRows.size(), groupKeyArray, groupByResultHolder, blockValSetMap);
+ for (int j = 0; j < groupKeyIndexes.size(); j++) {
+ Object[] row = aggregatedResult.get(j);
+ row[i] = aggregationFunction.extractGroupByResult(groupByResultHolder, j);
+ row[i] = aggregationFunction.extractFinalResult(row[i]);
+ }
+ }
+ }
+ return aggregatedResult;
+ }
+
+ private Object getFillValue(int columnIndex, String columnName, Object key, ColumnDataType dataType) {
+ ExpressionContext expressionContext = _fillExpressions.get(columnName);
+ if (expressionContext != null && expressionContext.getFunction() != null && GapfillUtils
+ .isFill(expressionContext)) {
+ List<ExpressionContext> args = expressionContext.getFunction().getArguments();
+ if (args.get(1).getLiteral() == null) {
+ throw new UnsupportedOperationException("Wrong SQL: the fill type argument of FILL must be a literal.");
+ }
+ GapfillUtils.FillType fillType = GapfillUtils.FillType.valueOf(args.get(1).getLiteral());
+ if (fillType == GapfillUtils.FillType.FILL_DEFAULT_VALUE) {
+ // TODO: may fill the default value from the SQL in the future.
+ return GapfillUtils.getDefaultValue(dataType);
+ } else if (fillType == GapfillUtils.FillType.FILL_PREVIOUS_VALUE) {
+ Object[] row = _previousByGroupKey.get(key);
+ if (row != null) {
+ return row[columnIndex];
+ } else {
+ return GapfillUtils.getDefaultValue(dataType);
+ }
+ } else {
+ throw new UnsupportedOperationException("Unsupported fill type.");
+ }
+ } else {
+ return GapfillUtils.getDefaultValue(dataType);
+ }
+ }
+
+ /**
+ * Merge all result tables from different Pinot servers and sort the rows into time buckets.
+ */
+ private List<Object[]>[] putRawRowsIntoTimeBucket(Collection<DataTable> dataTables) {
+ List<Object[]>[] bucketedItems = new List[_numOfTimeBuckets];
+
+ for (DataTable dataTable : dataTables) {
+ int numRows = dataTable.getNumberOfRows();
+ for (int rowId = 0; rowId < numRows; rowId++) {
+ Object[] row = SelectionOperatorUtils.extractRowFromDataTable(dataTable, rowId);
+ String timeCol = row[0] instanceof Long ? ((Long) row[0]).toString() : (String) row[0];
+ long timeBucket = _dateTimeFormatter.fromFormatToMillis(timeCol);
+ int index = findBucketIndex(timeBucket);
+ if (bucketedItems[index] == null) {
+ bucketedItems[index] = new ArrayList<>();
+ }
+ bucketedItems[index].add(row);
+ _groupByKeys.add(constructGroupKeys(row));
+ }
+ }
+ return bucketedItems;
+ }
+
+ private List<Object[]>[] putRawRowsIntoTimeBucket(IndexedTable indexedTable) {
+ List<Object[]>[] bucketedItems = new List[_numOfTimeBuckets];
+
+ Iterator<Record> iterator = indexedTable.iterator();
+ while (iterator.hasNext()) {
+ Object[] row = iterator.next().getValues();
+ String timeCol = row[0] instanceof Long ? ((Long) row[0]).toString() : (String) row[0];
+ long timeBucket = _dateTimeFormatter.fromFormatToMillis(timeCol);
Review comment:
This can be simplified to:
` long timeBucket = _dateTimeFormatter.fromFormatToMillis(String.valueOf(row[0]));`
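For illustration, a minimal stand-alone sketch (not the Pinot code itself; the class and method names here are hypothetical) showing why the simplification is safe: `String.valueOf` covers both branches of the `instanceof` check, since a `Long` and an already-formatted `String` both end up as the same string.

```java
public class TimeColumnDemo {
  // Mirrors "row[0] instanceof Long ? ((Long) row[0]).toString() : (String) row[0]":
  // String.valueOf delegates to toString() for any non-null Object,
  // so Long and String inputs produce identical results.
  static String timeColumn(Object value) {
    return String.valueOf(value);
  }

  public static void main(String[] args) {
    // A Long epoch-millis time column value.
    System.out.println(timeColumn(1642204961000L));
    // An already-formatted String time column value.
    System.out.println(timeColumn("2022/01/14"));
  }
}
```

One behavioral difference worth noting: `String.valueOf(null)` returns the string `"null"` rather than throwing, whereas the original cast would throw on an unexpected type.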
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: commits-unsubscribe@pinot.apache.org
For queries about this service, please contact Infrastructure at:
users@infra.apache.org
[GitHub] [pinot] codecov-commenter edited a comment on pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#issuecomment-1013577523
# [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) Report
> Merging [#8029](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (98cf976) into [master](https://codecov.io/gh/apache/pinot/commit/3f98ce37fdaef0335fcd82e621489d65751b1f55?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (3f98ce3) will **decrease** coverage by `0.91%`.
> The diff coverage is `78.97%`.
[![Impacted file tree graph](https://codecov.io/gh/apache/pinot/pull/8029/graphs/tree.svg?width=650&height=150&src=pr&token=4ibza2ugkz&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
```diff
@@             Coverage Diff              @@
##             master    #8029      +/-   ##
============================================
- Coverage     70.72%   69.81%    -0.92%
- Complexity     4242     4248       +6
============================================
  Files          1631     1641      +10
  Lines         85279    86103     +824
  Branches      12844    13034     +190
============================================
- Hits          60316    60111     -205
- Misses        20799    21807    +1008
- Partials       4164     4185      +21
```
| Flag | Coverage Δ | |
|---|---|---|
| integration1 | `28.69% <16.03%> (+<0.01%)` | :arrow_up: |
| integration2 | `?` | |
| unittests1 | `67.13% <79.18%> (+0.14%)` | :arrow_up: |
| unittests2 | `14.09% <0.00%> (-0.01%)` | :arrow_down: |
Flags with carried forward coverage won't be shown. [Click here](https://docs.codecov.io/docs/carryforward-flags?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#carryforward-flags-in-the-pull-request-comment) to find out more.
| [Impacted Files](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | Coverage Δ | |
|---|---|---|
| [...roker/requesthandler/BaseBrokerRequestHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtYnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9icm9rZXIvcmVxdWVzdGhhbmRsZXIvQmFzZUJyb2tlclJlcXVlc3RIYW5kbGVyLmphdmE=) | `69.92% <0.00%> (-1.93%)` | :arrow_down: |
| [...t/core/query/selection/SelectionOperatorUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9zZWxlY3Rpb24vU2VsZWN0aW9uT3BlcmF0b3JVdGlscy5qYXZh) | `91.58% <0.00%> (-0.60%)` | :arrow_down: |
| [...query/reduce/ColumnDataToBlockValSetConverter.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvQ29sdW1uRGF0YVRvQmxvY2tWYWxTZXRDb252ZXJ0ZXIuamF2YQ==) | `17.30% <17.30%> (ø)` | |
| [...inot/core/query/reduce/PostAggregationHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUG9zdEFnZ3JlZ2F0aW9uSGFuZGxlci5qYXZh) | `91.89% <66.66%> (+0.12%)` | :arrow_up: |
| [...ot/core/query/reduce/filter/RowMatcherFactory.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvZmlsdGVyL1Jvd01hdGNoZXJGYWN0b3J5LmphdmE=) | `66.66% <66.66%> (ø)` | |
| [.../java/org/apache/pinot/core/util/GapfillUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS91dGlsL0dhcGZpbGxVdGlscy5qYXZh) | `68.79% <70.79%> (+5.15%)` | :arrow_up: |
| [...pinot/core/plan/maker/InstancePlanMakerImplV2.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL21ha2VyL0luc3RhbmNlUGxhbk1ha2VySW1wbFYyLmphdmE=) | `66.36% <73.68%> (-10.41%)` | :arrow_down: |
| [...va/org/apache/pinot/core/plan/CombinePlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL0NvbWJpbmVQbGFuTm9kZS5qYXZh) | `80.55% <75.00%> (-6.95%)` | :arrow_down: |
| [...ore/query/reduce/filter/LiteralValueExtractor.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvZmlsdGVyL0xpdGVyYWxWYWx1ZUV4dHJhY3Rvci5qYXZh) | `83.33% <83.33%> (ø)` | |
| [...org/apache/pinot/sql/parsers/CalciteSqlParser.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9zcWwvcGFyc2Vycy9DYWxjaXRlU3FsUGFyc2VyLmphdmE=) | `86.38% <85.71%> (-0.18%)` | :arrow_down: |
| ... and [146 more](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Last update [3f98ce3...98cf976](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: commits-unsubscribe@pinot.apache.org
For queries about this service, please contact Infrastructure at:
users@infra.apache.org
---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@pinot.apache.org
For additional commands, e-mail: commits-help@pinot.apache.org
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r817004820
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/reduce/PreAggregationGapFillDataTableReducer.java
##########
@@ -0,0 +1,775 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.core.query.reduce;
+
+import com.google.common.base.Preconditions;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.Comparator;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.Iterator;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+import java.util.PriorityQueue;
+import java.util.Set;
+import java.util.concurrent.CountDownLatch;
+import java.util.concurrent.Future;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.TimeoutException;
+import org.apache.pinot.common.exception.QueryException;
+import org.apache.pinot.common.metrics.BrokerMetrics;
+import org.apache.pinot.common.request.context.ExpressionContext;
+import org.apache.pinot.common.request.context.FunctionContext;
+import org.apache.pinot.common.response.broker.BrokerResponseNative;
+import org.apache.pinot.common.response.broker.QueryProcessingException;
+import org.apache.pinot.common.response.broker.ResultTable;
+import org.apache.pinot.common.utils.DataSchema;
+import org.apache.pinot.common.utils.DataSchema.ColumnDataType;
+import org.apache.pinot.common.utils.DataTable;
+import org.apache.pinot.core.common.BlockValSet;
+import org.apache.pinot.core.data.table.ConcurrentIndexedTable;
+import org.apache.pinot.core.data.table.IndexedTable;
+import org.apache.pinot.core.data.table.Key;
+import org.apache.pinot.core.data.table.Record;
+import org.apache.pinot.core.data.table.SimpleIndexedTable;
+import org.apache.pinot.core.data.table.UnboundedConcurrentIndexedTable;
+import org.apache.pinot.core.operator.combine.GroupByOrderByCombineOperator;
+import org.apache.pinot.core.query.aggregation.function.AggregationFunction;
+import org.apache.pinot.core.query.aggregation.function.AggregationFunctionFactory;
+import org.apache.pinot.core.query.aggregation.groupby.GroupByResultHolder;
+import org.apache.pinot.core.query.request.context.QueryContext;
+import org.apache.pinot.core.query.selection.SelectionOperatorUtils;
+import org.apache.pinot.core.transport.ServerRoutingInstance;
+import org.apache.pinot.core.util.GapfillUtils;
+import org.apache.pinot.core.util.GroupByUtils;
+import org.apache.pinot.core.util.trace.TraceRunnable;
+import org.apache.pinot.spi.data.DateTimeFormatSpec;
+import org.apache.pinot.spi.data.DateTimeGranularitySpec;
+
+/**
+ * Helper class to reduce and set Aggregation results into the BrokerResponseNative
+ */
+@SuppressWarnings({"rawtypes", "unchecked"})
+public class PreAggregationGapFillDataTableReducer implements DataTableReducer {
+ private static final int MIN_DATA_TABLES_FOR_CONCURRENT_REDUCE = 2; // TBD, find a better value.
+
+ private final QueryContext _queryContext;
+
+ private final int _limitForAggregatedResult;
+ private int _limitForGapfilledResult;
+
+ private final DateTimeGranularitySpec _dateTimeGranularity;
+ private final DateTimeFormatSpec _dateTimeFormatter;
+ private final long _startMs;
+ private final long _endMs;
+ private final long _timeBucketSize;
+
+ private final List<Integer> _groupByKeyIndexes;
+  private boolean[] _isGroupBySelections;
+ private final Set<Key> _groupByKeys;
+ private final Map<Key, Object[]> _previousByGroupKey;
+ private final Map<String, ExpressionContext> _fillExpressions;
+ private final List<ExpressionContext> _timeSeries;
+ private final GapfillUtils.GapfillType _gapfillType;
+
+ PreAggregationGapFillDataTableReducer(QueryContext queryContext) {
+ _queryContext = queryContext;
+ _gapfillType = queryContext.getGapfillType();
+ _limitForAggregatedResult = queryContext.getLimit();
+ if (_gapfillType == GapfillUtils.GapfillType.AGGREGATE_GAP_FILL
+ || _gapfillType == GapfillUtils.GapfillType.GAP_FILL) {
+ _limitForGapfilledResult = queryContext.getLimit();
+ } else {
+ _limitForGapfilledResult = queryContext.getSubQueryContext().getLimit();
+ }
+
+ ExpressionContext gapFillSelection = GapfillUtils.getGapfillExpressionContext(queryContext);
+
+    Preconditions.checkArgument(
+        gapFillSelection != null && gapFillSelection.getFunction() != null, "Gapfill expression should be a function.");
+    List<ExpressionContext> args = gapFillSelection.getFunction().getArguments();
+    Preconditions.checkArgument(
+        args.size() > 5, "Gapfill does not have the correct number of arguments.");
+    Preconditions.checkArgument(
+        args.get(1).getLiteral() != null, "The second argument of Gapfill should be the time formatter.");
+    Preconditions.checkArgument(
+        args.get(2).getLiteral() != null, "The third argument of Gapfill should be the start time.");
+    Preconditions.checkArgument(
+        args.get(3).getLiteral() != null, "The fourth argument of Gapfill should be the end time.");
+    Preconditions.checkArgument(
+        args.get(4).getLiteral() != null, "The fifth argument of Gapfill should be the time bucket size.");
+
+ _dateTimeFormatter = new DateTimeFormatSpec(args.get(1).getLiteral());
+ _dateTimeGranularity = new DateTimeGranularitySpec(args.get(4).getLiteral());
+ String start = args.get(2).getLiteral();
+ _startMs = truncate(_dateTimeFormatter.fromFormatToMillis(start));
+ String end = args.get(3).getLiteral();
+ _endMs = truncate(_dateTimeFormatter.fromFormatToMillis(end));
+ _timeBucketSize = _dateTimeGranularity.granularityToMillis();
+
+ _fillExpressions = GapfillUtils.getFillExpressions(gapFillSelection);
+
+ _previousByGroupKey = new HashMap<>();
+ _groupByKeyIndexes = new ArrayList<>();
+ _groupByKeys = new HashSet<>();
+
+ ExpressionContext timeseriesOn = GapfillUtils.getTimeSeriesOnExpressionContext(gapFillSelection);
+ Preconditions.checkArgument(timeseriesOn != null, "The TimeSeriesOn expressions should be specified.");
+ _timeSeries = timeseriesOn.getFunction().getArguments();
+ }
+
+ private void replaceColumnNameWithAlias(DataSchema dataSchema) {
+ QueryContext queryContext;
+ if (_gapfillType == GapfillUtils.GapfillType.AGGREGATE_GAP_FILL_AGGREGATE) {
+ queryContext = _queryContext.getSubQueryContext().getSubQueryContext();
+ } else if (_gapfillType == GapfillUtils.GapfillType.GAP_FILL) {
+ queryContext = _queryContext;
+ } else {
+ queryContext = _queryContext.getSubQueryContext();
+ }
+ List<String> aliasList = queryContext.getAliasList();
+ Map<String, String> columnNameToAliasMap = new HashMap<>();
+ for (int i = 0; i < aliasList.size(); i++) {
+ if (aliasList.get(i) != null) {
+ ExpressionContext selection = queryContext.getSelectExpressions().get(i);
+ if (GapfillUtils.isGapfill(selection)) {
+ selection = selection.getFunction().getArguments().get(0);
+ }
+ columnNameToAliasMap.put(selection.toString(), aliasList.get(i));
+ }
+ }
+ for (int i = 0; i < dataSchema.getColumnNames().length; i++) {
+ if (columnNameToAliasMap.containsKey(dataSchema.getColumnNames()[i])) {
+ dataSchema.getColumnNames()[i] = columnNameToAliasMap.get(dataSchema.getColumnNames()[i]);
+ }
+ }
+ }
+
+ /**
+ * Computes the number of reduce threads to use per query.
+ * <ul>
+ * <li> Use single thread if number of data tables to reduce is less than
+ * {@value #MIN_DATA_TABLES_FOR_CONCURRENT_REDUCE}.</li>
+ * <li> Else, use min of max allowed reduce threads per query, and number of data tables.</li>
+ * </ul>
+ *
+ * @param numDataTables Number of data tables to reduce
+ * @param maxReduceThreadsPerQuery Max allowed reduce threads per query
+ * @return Number of reduce threads to use for the query
+ */
+ private int getNumReduceThreadsToUse(int numDataTables, int maxReduceThreadsPerQuery) {
+ // Use single thread if number of data tables < MIN_DATA_TABLES_FOR_CONCURRENT_REDUCE.
+ if (numDataTables < MIN_DATA_TABLES_FOR_CONCURRENT_REDUCE) {
+ return Math.min(1, numDataTables); // Number of data tables can be zero.
+ }
+
+ return Math.min(maxReduceThreadsPerQuery, numDataTables);
+ }
+
+ private IndexedTable getIndexedTable(DataSchema dataSchema, Collection<DataTable> dataTablesToReduce,
+ DataTableReducerContext reducerContext)
+ throws TimeoutException {
+ long start = System.currentTimeMillis();
+ int numDataTables = dataTablesToReduce.size();
+
+ // Get the number of threads to use for reducing.
+ // In case of single reduce thread, fall back to SimpleIndexedTable to avoid redundant locking/unlocking calls.
+ int numReduceThreadsToUse = getNumReduceThreadsToUse(numDataTables, reducerContext.getMaxReduceThreadsPerQuery());
+ int limit = _queryContext.getLimit();
+ // TODO: Make minTrimSize configurable
+ int trimSize = GroupByUtils.getTableCapacity(limit);
+ // NOTE: For query with HAVING clause, use trimSize as resultSize to ensure the result accuracy.
+ // TODO: Resolve the HAVING clause within the IndexedTable before returning the result
+ int resultSize = _queryContext.getHavingFilter() != null ? trimSize : limit;
+ int trimThreshold = reducerContext.getGroupByTrimThreshold();
+ IndexedTable indexedTable;
+ if (numReduceThreadsToUse <= 1) {
+ indexedTable = new SimpleIndexedTable(dataSchema, _queryContext, resultSize, trimSize, trimThreshold);
+ } else {
+ if (trimThreshold >= GroupByOrderByCombineOperator.MAX_TRIM_THRESHOLD) {
+ // special case of trim threshold where it is set to max value.
+ // there won't be any trimming during upsert in this case.
+ // thus we can avoid the overhead of read-lock and write-lock
+ // in the upsert method.
+ indexedTable = new UnboundedConcurrentIndexedTable(dataSchema, _queryContext, resultSize);
+ } else {
+ indexedTable = new ConcurrentIndexedTable(dataSchema, _queryContext, resultSize, trimSize, trimThreshold);
+ }
+ }
+
+ Future[] futures = new Future[numDataTables];
+ CountDownLatch countDownLatch = new CountDownLatch(numDataTables);
+
+ // Create groups of data tables that each thread can process concurrently.
+ // Given that numReduceThreads is <= numDataTables, each group will have at least one data table.
+ ArrayList<DataTable> dataTables = new ArrayList<>(dataTablesToReduce);
+ List<List<DataTable>> reduceGroups = new ArrayList<>(numReduceThreadsToUse);
+
+ for (int i = 0; i < numReduceThreadsToUse; i++) {
+ reduceGroups.add(new ArrayList<>());
+ }
+ for (int i = 0; i < numDataTables; i++) {
+ reduceGroups.get(i % numReduceThreadsToUse).add(dataTables.get(i));
+ }
+
+ int cnt = 0;
+ ColumnDataType[] storedColumnDataTypes = dataSchema.getStoredColumnDataTypes();
+ int numColumns = storedColumnDataTypes.length;
+ for (List<DataTable> reduceGroup : reduceGroups) {
+ futures[cnt++] = reducerContext.getExecutorService().submit(new TraceRunnable() {
+ @Override
+ public void runJob() {
+ for (DataTable dataTable : reduceGroup) {
+ int numRows = dataTable.getNumberOfRows();
+
+ try {
+ for (int rowId = 0; rowId < numRows; rowId++) {
+ Object[] values = new Object[numColumns];
+ for (int colId = 0; colId < numColumns; colId++) {
+ switch (storedColumnDataTypes[colId]) {
+ case INT:
+ values[colId] = dataTable.getInt(rowId, colId);
+ break;
+ case LONG:
+ values[colId] = dataTable.getLong(rowId, colId);
+ break;
+ case FLOAT:
+ values[colId] = dataTable.getFloat(rowId, colId);
+ break;
+ case DOUBLE:
+ values[colId] = dataTable.getDouble(rowId, colId);
+ break;
+ case STRING:
+ values[colId] = dataTable.getString(rowId, colId);
+ break;
+ case BYTES:
+ values[colId] = dataTable.getBytes(rowId, colId);
+ break;
+ case OBJECT:
+ values[colId] = dataTable.getObject(rowId, colId);
+ break;
+ // Add other aggregation intermediate result / group-by column type supports here
+ default:
+ throw new IllegalStateException();
+ }
+ }
+ indexedTable.upsert(new Record(values));
+ }
+ } finally {
+ countDownLatch.countDown();
+ }
+ }
+ }
+ });
+ }
+
+ try {
+ long timeOutMs = reducerContext.getReduceTimeOutMs() - (System.currentTimeMillis() - start);
+ countDownLatch.await(timeOutMs, TimeUnit.MILLISECONDS);
+ } catch (InterruptedException e) {
+ for (Future future : futures) {
+ if (!future.isDone()) {
+ future.cancel(true);
+ }
+ }
+ throw new TimeoutException("Timed out in broker reduce phase.");
+ }
+
+ indexedTable.finish(true);
+ return indexedTable;
+ }
+
+  /**
+   * Three things happen here:
+   * 1. Sort the result sets from all Pinot servers based on timestamp.
+   * 2. Gapfill the data for missing entities per time bucket.
+   * 3. Aggregate the dataset per time bucket.
+   */
+ @Override
+ public void reduceAndSetResults(String tableName, DataSchema dataSchema,
+ Map<ServerRoutingInstance, DataTable> dataTableMap, BrokerResponseNative brokerResponseNative,
+ DataTableReducerContext reducerContext, BrokerMetrics brokerMetrics) {
+ DataSchema resultTableSchema = getResultTableDataSchema(dataSchema);
+ if (dataTableMap.isEmpty()) {
+ brokerResponseNative.setResultTable(new ResultTable(resultTableSchema, Collections.emptyList()));
+ return;
+ }
+
+ String[] columns = dataSchema.getColumnNames();
+
+ Map<String, Integer> indexes = new HashMap<>();
+ for (int i = 0; i < columns.length; i++) {
+ indexes.put(columns[i], i);
+ }
+
+ _isGroupBySelections = new boolean[dataSchema.getColumnDataTypes().length];
+
+    // The first argument of timeSeries is the time column. The remaining arguments define the entity.
+    for (ExpressionContext entityColumn : _timeSeries) {
+      int index = indexes.get(entityColumn.getIdentifier());
+ _isGroupBySelections[index] = true;
+ _groupByKeyIndexes.add(index);
+ }
+
+ List<Object[]> sortedRawRows;
+ if (_gapfillType == GapfillUtils.GapfillType.GAP_FILL_AGGREGATE
+ || _gapfillType == GapfillUtils.GapfillType.GAP_FILL
+ || _gapfillType == GapfillUtils.GapfillType.GAP_FILL_SELECT) {
+ sortedRawRows = mergeAndSort(dataTableMap.values(), dataSchema);
+ } else {
+ try {
+ IndexedTable indexedTable = getIndexedTable(dataSchema, dataTableMap.values(), reducerContext);
+ sortedRawRows = mergeAndSort(indexedTable, dataSchema);
Review comment:
Fixed
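The reducer's flow (sort by timestamp, gapfill missing entities per time bucket, then aggregate) can be illustrated with a small standalone sketch of the carry-forward gapfill step. All names below are illustrative only; this is a simplified sketch, not the PR's actual API or its exact semantics:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class GapfillSketch {
  /**
   * Carry-forward gapfill: for each time bucket in [startMs, endMs), emit one row
   * per entity; if an entity has no row in a bucket, reuse its most recent previous
   * value (null until the entity is first seen). Rows are {timestampMs, entity, value}
   * and must be pre-sorted by timestamp.
   */
  static List<Object[]> fillGaps(List<Object[]> sortedRows, long startMs, long endMs,
      long bucketSizeMs, Set<String> entities) {
    Map<String, Object> previous = new HashMap<>();
    List<Object[]> result = new ArrayList<>();
    int i = 0;
    for (long bucket = startMs; bucket < endMs; bucket += bucketSizeMs) {
      // Collect the latest value per entity that falls inside this bucket.
      Map<String, Object> inBucket = new HashMap<>();
      while (i < sortedRows.size() && (Long) sortedRows.get(i)[0] < bucket + bucketSizeMs) {
        Object[] row = sortedRows.get(i++);
        inBucket.put((String) row[1], row[2]);
      }
      for (String entity : entities) {
        // Fall back to the carried-forward value when the bucket has no row for the entity.
        Object value = inBucket.containsKey(entity) ? inBucket.get(entity) : previous.get(entity);
        result.add(new Object[]{bucket, entity, value});
        if (value != null) {
          previous.put(entity, value);
        }
      }
    }
    return result;
  }
}
```

For example, with rows {0, "A", 1}, {5, "B", 7}, {12, "A", 2} and bucket size 10 over [0, 20), bucket 10 has no row for "B", so its bucket-0 value 7 is carried forward.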
[GitHub] [pinot] codecov-commenter edited a comment on pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#issuecomment-1013577523
# [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) Report
> Merging [#8029](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (3c27643) into [master](https://codecov.io/gh/apache/pinot/commit/3f98ce37fdaef0335fcd82e621489d65751b1f55?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (3f98ce3) will **decrease** coverage by `56.64%`.
> The diff coverage is `0.00%`.
[![Impacted file tree graph](https://codecov.io/gh/apache/pinot/pull/8029/graphs/tree.svg?width=650&height=150&src=pr&token=4ibza2ugkz&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
```diff
@@              Coverage Diff              @@
##             master    #8029       +/-   ##
=============================================
- Coverage     70.72%   14.08%   -56.65%
+ Complexity     4242       81     -4161
=============================================
  Files          1631     1596      -35
  Lines         85279    84078    -1201
  Branches      12844    12809      -35
=============================================
- Hits          60316    11842   -48474
- Misses        20799    71345   +50546
+ Partials       4164      891    -3273
```
| Flag | Coverage Δ | |
|---|---|---|
| integration1 | `?` | |
| integration2 | `?` | |
| unittests1 | `?` | |
| unittests2 | `14.08% <0.00%> (-0.02%)` | :arrow_down: |
Flags with carried forward coverage won't be shown. [Click here](https://docs.codecov.io/docs/carryforward-flags?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#carryforward-flags-in-the-pull-request-comment) to find out more.
| [Impacted Files](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | Coverage Δ | |
|---|---|---|
| [...roker/requesthandler/BaseBrokerRequestHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtYnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9icm9rZXIvcmVxdWVzdGhhbmRsZXIvQmFzZUJyb2tlclJlcXVlc3RIYW5kbGVyLmphdmE=) | `24.41% <0.00%> (-47.44%)` | :arrow_down: |
| [...org/apache/pinot/sql/parsers/CalciteSqlParser.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9zcWwvcGFyc2Vycy9DYWxjaXRlU3FsUGFyc2VyLmphdmE=) | `0.00% <0.00%> (-86.56%)` | :arrow_down: |
| [...org/apache/pinot/core/data/table/IndexedTable.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9kYXRhL3RhYmxlL0luZGV4ZWRUYWJsZS5qYXZh) | `0.00% <0.00%> (-84.75%)` | :arrow_down: |
| [...va/org/apache/pinot/core/plan/CombinePlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL0NvbWJpbmVQbGFuTm9kZS5qYXZh) | `0.00% <0.00%> (-87.50%)` | :arrow_down: |
| [...ache/pinot/core/plan/GapfillSelectionPlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL0dhcGZpbGxTZWxlY3Rpb25QbGFuTm9kZS5qYXZh) | `0.00% <0.00%> (ø)` | |
| [...pinot/core/plan/maker/InstancePlanMakerImplV2.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL21ha2VyL0luc3RhbmNlUGxhbk1ha2VySW1wbFYyLmphdmE=) | `0.00% <0.00%> (-76.77%)` | :arrow_down: |
| [...query/reduce/ColumnDataToBlockValSetConverter.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvQ29sdW1uRGF0YVRvQmxvY2tWYWxTZXRDb252ZXJ0ZXIuamF2YQ==) | `0.00% <0.00%> (ø)` | |
| [...not/core/query/reduce/GapFillDataTableReducer.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvR2FwRmlsbERhdGFUYWJsZVJlZHVjZXIuamF2YQ==) | `0.00% <0.00%> (ø)` | |
| [.../pinot/core/query/reduce/GapfillFilterHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvR2FwZmlsbEZpbHRlckhhbmRsZXIuamF2YQ==) | `0.00% <0.00%> (ø)` | |
| [...e/pinot/core/query/reduce/HavingFilterHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvSGF2aW5nRmlsdGVySGFuZGxlci5qYXZh) | `0.00% <0.00%> (-91.31%)` | :arrow_down: |
| ... and [1321 more](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Last update [3f98ce3...3c27643](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r815146809
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/reduce/PreAggregationGapFillDataTableReducer.java
##########
@@ -0,0 +1,775 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.core.query.reduce;
+
+import com.google.common.base.Preconditions;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.Comparator;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.Iterator;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+import java.util.PriorityQueue;
+import java.util.Set;
+import java.util.concurrent.CountDownLatch;
+import java.util.concurrent.Future;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.TimeoutException;
+import org.apache.pinot.common.exception.QueryException;
+import org.apache.pinot.common.metrics.BrokerMetrics;
+import org.apache.pinot.common.request.context.ExpressionContext;
+import org.apache.pinot.common.request.context.FunctionContext;
+import org.apache.pinot.common.response.broker.BrokerResponseNative;
+import org.apache.pinot.common.response.broker.QueryProcessingException;
+import org.apache.pinot.common.response.broker.ResultTable;
+import org.apache.pinot.common.utils.DataSchema;
+import org.apache.pinot.common.utils.DataSchema.ColumnDataType;
+import org.apache.pinot.common.utils.DataTable;
+import org.apache.pinot.core.common.BlockValSet;
+import org.apache.pinot.core.data.table.ConcurrentIndexedTable;
+import org.apache.pinot.core.data.table.IndexedTable;
+import org.apache.pinot.core.data.table.Key;
+import org.apache.pinot.core.data.table.Record;
+import org.apache.pinot.core.data.table.SimpleIndexedTable;
+import org.apache.pinot.core.data.table.UnboundedConcurrentIndexedTable;
+import org.apache.pinot.core.operator.combine.GroupByOrderByCombineOperator;
+import org.apache.pinot.core.query.aggregation.function.AggregationFunction;
+import org.apache.pinot.core.query.aggregation.function.AggregationFunctionFactory;
+import org.apache.pinot.core.query.aggregation.groupby.GroupByResultHolder;
+import org.apache.pinot.core.query.request.context.QueryContext;
+import org.apache.pinot.core.query.selection.SelectionOperatorUtils;
+import org.apache.pinot.core.transport.ServerRoutingInstance;
+import org.apache.pinot.core.util.GapfillUtils;
+import org.apache.pinot.core.util.GroupByUtils;
+import org.apache.pinot.core.util.trace.TraceRunnable;
+import org.apache.pinot.spi.data.DateTimeFormatSpec;
+import org.apache.pinot.spi.data.DateTimeGranularitySpec;
+
+/**
+ * Helper class to reduce and set Aggregation results into the BrokerResponseNative
+ */
+@SuppressWarnings({"rawtypes", "unchecked"})
+public class PreAggregationGapFillDataTableReducer implements DataTableReducer {
+ private static final int MIN_DATA_TABLES_FOR_CONCURRENT_REDUCE = 2; // TBD, find a better value.
+
+ private final QueryContext _queryContext;
+
+ private final int _limitForAggregatedResult;
+ private int _limitForGapfilledResult;
+
+ private final DateTimeGranularitySpec _dateTimeGranularity;
+ private final DateTimeFormatSpec _dateTimeFormatter;
+ private final long _startMs;
+ private final long _endMs;
+ private final long _timeBucketSize;
+
+ private final List<Integer> _groupByKeyIndexes;
+  private boolean[] _isGroupBySelections;
+ private final Set<Key> _groupByKeys;
+ private final Map<Key, Object[]> _previousByGroupKey;
+ private final Map<String, ExpressionContext> _fillExpressions;
+ private final List<ExpressionContext> _timeSeries;
+ private final GapfillUtils.GapfillType _gapfillType;
+
+ PreAggregationGapFillDataTableReducer(QueryContext queryContext) {
+ _queryContext = queryContext;
+ _gapfillType = queryContext.getGapfillType();
+ _limitForAggregatedResult = queryContext.getLimit();
+ if (_gapfillType == GapfillUtils.GapfillType.AGGREGATE_GAP_FILL
+ || _gapfillType == GapfillUtils.GapfillType.GAP_FILL) {
+ _limitForGapfilledResult = queryContext.getLimit();
+ } else {
+ _limitForGapfilledResult = queryContext.getSubQueryContext().getLimit();
+ }
+
+ ExpressionContext gapFillSelection = GapfillUtils.getGapfillExpressionContext(queryContext);
+
+ Preconditions.checkArgument(
+ gapFillSelection != null && gapFillSelection.getFunction() != null, "Gapfill expression should be a function.");
+ List<ExpressionContext> args = gapFillSelection.getFunction().getArguments();
+ Preconditions.checkArgument(
+ args.size() > 5, "PreAggregateGapFill does not have the correct number of arguments.");
+ Preconditions.checkArgument(
+ args.get(1).getLiteral() != null, "The second argument of PreAggregateGapFill should be the time format.");
+ Preconditions.checkArgument(
+ args.get(2).getLiteral() != null, "The third argument of PreAggregateGapFill should be the start time.");
+ Preconditions.checkArgument(
+ args.get(3).getLiteral() != null, "The fourth argument of PreAggregateGapFill should be the end time.");
+ Preconditions.checkArgument(
+ args.get(4).getLiteral() != null, "The fifth argument of PreAggregateGapFill should be the time bucket size.");
+
+ _dateTimeFormatter = new DateTimeFormatSpec(args.get(1).getLiteral());
+ _dateTimeGranularity = new DateTimeGranularitySpec(args.get(4).getLiteral());
+ String start = args.get(2).getLiteral();
+ _startMs = truncate(_dateTimeFormatter.fromFormatToMillis(start));
+ String end = args.get(3).getLiteral();
+ _endMs = truncate(_dateTimeFormatter.fromFormatToMillis(end));
+ _timeBucketSize = _dateTimeGranularity.granularityToMillis();
+
+ _fillExpressions = GapfillUtils.getFillExpressions(gapFillSelection);
+
+ _previousByGroupKey = new HashMap<>();
+ _groupByKeyIndexes = new ArrayList<>();
+ _groupByKeys = new HashSet<>();
+
+ ExpressionContext timeseriesOn = GapfillUtils.getTimeSeriesOnExpressionContext(gapFillSelection);
+ Preconditions.checkArgument(timeseriesOn != null, "The TimeSeriesOn expressions should be specified.");
+ _timeSeries = timeseriesOn.getFunction().getArguments();
+ }
+
+ private void replaceColumnNameWithAlias(DataSchema dataSchema) {
+ QueryContext queryContext;
+ if (_gapfillType == GapfillUtils.GapfillType.AGGREGATE_GAP_FILL_AGGREGATE) {
+ queryContext = _queryContext.getSubQueryContext().getSubQueryContext();
+ } else if (_gapfillType == GapfillUtils.GapfillType.GAP_FILL) {
+ queryContext = _queryContext;
+ } else {
+ queryContext = _queryContext.getSubQueryContext();
+ }
+ List<String> aliasList = queryContext.getAliasList();
+ Map<String, String> columnNameToAliasMap = new HashMap<>();
+ for (int i = 0; i < aliasList.size(); i++) {
+ if (aliasList.get(i) != null) {
+ ExpressionContext selection = queryContext.getSelectExpressions().get(i);
+ if (GapfillUtils.isGapfill(selection)) {
+ selection = selection.getFunction().getArguments().get(0);
+ }
+ columnNameToAliasMap.put(selection.toString(), aliasList.get(i));
+ }
+ }
+ for (int i = 0; i < dataSchema.getColumnNames().length; i++) {
+ if (columnNameToAliasMap.containsKey(dataSchema.getColumnNames()[i])) {
+ dataSchema.getColumnNames()[i] = columnNameToAliasMap.get(dataSchema.getColumnNames()[i]);
+ }
+ }
+ }
+
+ /**
+ * Computes the number of reduce threads to use per query.
+ * <ul>
+ * <li> Use single thread if number of data tables to reduce is less than
+ * {@value #MIN_DATA_TABLES_FOR_CONCURRENT_REDUCE}.</li>
+ * <li> Else, use min of max allowed reduce threads per query, and number of data tables.</li>
+ * </ul>
+ *
+ * @param numDataTables Number of data tables to reduce
+ * @param maxReduceThreadsPerQuery Max allowed reduce threads per query
+ * @return Number of reduce threads to use for the query
+ */
+ private int getNumReduceThreadsToUse(int numDataTables, int maxReduceThreadsPerQuery) {
+ // Use single thread if number of data tables < MIN_DATA_TABLES_FOR_CONCURRENT_REDUCE.
+ if (numDataTables < MIN_DATA_TABLES_FOR_CONCURRENT_REDUCE) {
+ return Math.min(1, numDataTables); // Number of data tables can be zero.
+ }
+
+ return Math.min(maxReduceThreadsPerQuery, numDataTables);
+ }
+
+ private IndexedTable getIndexedTable(DataSchema dataSchema, Collection<DataTable> dataTablesToReduce,
+ DataTableReducerContext reducerContext)
+ throws TimeoutException {
+ long start = System.currentTimeMillis();
+ int numDataTables = dataTablesToReduce.size();
+
+ // Get the number of threads to use for reducing.
+ // In case of single reduce thread, fall back to SimpleIndexedTable to avoid redundant locking/unlocking calls.
+ int numReduceThreadsToUse = getNumReduceThreadsToUse(numDataTables, reducerContext.getMaxReduceThreadsPerQuery());
+ int limit = _queryContext.getLimit();
+ // TODO: Make minTrimSize configurable
+ int trimSize = GroupByUtils.getTableCapacity(limit);
+ // NOTE: For query with HAVING clause, use trimSize as resultSize to ensure the result accuracy.
+ // TODO: Resolve the HAVING clause within the IndexedTable before returning the result
+ int resultSize = _queryContext.getHavingFilter() != null ? trimSize : limit;
+ int trimThreshold = reducerContext.getGroupByTrimThreshold();
+ IndexedTable indexedTable;
+ if (numReduceThreadsToUse <= 1) {
+ indexedTable = new SimpleIndexedTable(dataSchema, _queryContext, resultSize, trimSize, trimThreshold);
+ } else {
+ if (trimThreshold >= GroupByOrderByCombineOperator.MAX_TRIM_THRESHOLD) {
+ // special case of trim threshold where it is set to max value.
+ // there won't be any trimming during upsert in this case.
+ // thus we can avoid the overhead of read-lock and write-lock
+ // in the upsert method.
+ indexedTable = new UnboundedConcurrentIndexedTable(dataSchema, _queryContext, resultSize);
+ } else {
+ indexedTable = new ConcurrentIndexedTable(dataSchema, _queryContext, resultSize, trimSize, trimThreshold);
+ }
+ }
+
+ Future[] futures = new Future[numDataTables];
+ CountDownLatch countDownLatch = new CountDownLatch(numDataTables);
+
+ // Create groups of data tables that each thread can process concurrently.
+ // Given that numReduceThreads is <= numDataTables, each group will have at least one data table.
+ ArrayList<DataTable> dataTables = new ArrayList<>(dataTablesToReduce);
+ List<List<DataTable>> reduceGroups = new ArrayList<>(numReduceThreadsToUse);
+
+ for (int i = 0; i < numReduceThreadsToUse; i++) {
+ reduceGroups.add(new ArrayList<>());
+ }
+ for (int i = 0; i < numDataTables; i++) {
+ reduceGroups.get(i % numReduceThreadsToUse).add(dataTables.get(i));
+ }
+
+ int cnt = 0;
+ ColumnDataType[] storedColumnDataTypes = dataSchema.getStoredColumnDataTypes();
+ int numColumns = storedColumnDataTypes.length;
+ for (List<DataTable> reduceGroup : reduceGroups) {
+ futures[cnt++] = reducerContext.getExecutorService().submit(new TraceRunnable() {
+ @Override
+ public void runJob() {
+ for (DataTable dataTable : reduceGroup) {
+ int numRows = dataTable.getNumberOfRows();
+
+ try {
+ for (int rowId = 0; rowId < numRows; rowId++) {
+ Object[] values = new Object[numColumns];
+ for (int colId = 0; colId < numColumns; colId++) {
+ switch (storedColumnDataTypes[colId]) {
+ case INT:
+ values[colId] = dataTable.getInt(rowId, colId);
+ break;
+ case LONG:
+ values[colId] = dataTable.getLong(rowId, colId);
+ break;
+ case FLOAT:
+ values[colId] = dataTable.getFloat(rowId, colId);
+ break;
+ case DOUBLE:
+ values[colId] = dataTable.getDouble(rowId, colId);
+ break;
+ case STRING:
+ values[colId] = dataTable.getString(rowId, colId);
+ break;
+ case BYTES:
+ values[colId] = dataTable.getBytes(rowId, colId);
+ break;
+ case OBJECT:
+ values[colId] = dataTable.getObject(rowId, colId);
+ break;
+ // Add other aggregation intermediate result / group-by column type supports here
+ default:
+ throw new IllegalStateException();
+ }
+ }
+ indexedTable.upsert(new Record(values));
+ }
+ } finally {
+ countDownLatch.countDown();
+ }
+ }
+ }
+ });
+ }
+
+ try {
+ long timeOutMs = reducerContext.getReduceTimeOutMs() - (System.currentTimeMillis() - start);
+ countDownLatch.await(timeOutMs, TimeUnit.MILLISECONDS);
+ } catch (InterruptedException e) {
+ for (Future future : futures) {
+ if (!future.isDone()) {
+ future.cancel(true);
+ }
+ }
+ throw new TimeoutException("Timed out in broker reduce phase.");
+ }
+
+ indexedTable.finish(true);
+ return indexedTable;
+ }
+
+ /**
+ * Three things happen here:
+ * 1. Sort the result sets from all Pinot servers based on timestamp.
+ * 2. Gapfill the data for missing entities per time bucket.
+ * 3. Aggregate the dataset per time bucket.
+ */
+ @Override
+ public void reduceAndSetResults(String tableName, DataSchema dataSchema,
+ Map<ServerRoutingInstance, DataTable> dataTableMap, BrokerResponseNative brokerResponseNative,
+ DataTableReducerContext reducerContext, BrokerMetrics brokerMetrics) {
+ DataSchema resultTableSchema = getResultTableDataSchema(dataSchema);
+ if (dataTableMap.isEmpty()) {
+ brokerResponseNative.setResultTable(new ResultTable(resultTableSchema, Collections.emptyList()));
+ return;
+ }
+
+ String[] columns = dataSchema.getColumnNames();
+
+ Map<String, Integer> indexes = new HashMap<>();
+ for (int i = 0; i < columns.length; i++) {
+ indexes.put(columns[i], i);
+ }
+
+ _isGroupBySelections = new boolean[dataSchema.getColumnDataTypes().length];
+
+ // The first argument of timeSeriesOn is the time column. The remaining ones define the entity.
+ for (ExpressionContext entityColumn : _timeSeries) {
+ int index = indexes.get(entityColumn.getIdentifier());
+ _isGroupBySelections[index] = true;
+ _groupByKeyIndexes.add(index);
+ }
+
+ List<Object[]> sortedRawRows;
+ if (_gapfillType == GapfillUtils.GapfillType.GAP_FILL_AGGREGATE
+ || _gapfillType == GapfillUtils.GapfillType.GAP_FILL
+ || _gapfillType == GapfillUtils.GapfillType.GAP_FILL_SELECT) {
+ sortedRawRows = mergeAndSort(dataTableMap.values(), dataSchema);
+ } else {
+ try {
+ IndexedTable indexedTable = getIndexedTable(dataSchema, dataTableMap.values(), reducerContext);
+ sortedRawRows = mergeAndSort(indexedTable, dataSchema);
+ } catch (TimeoutException e) {
+ brokerResponseNative.getProcessingExceptions()
+ .add(new QueryProcessingException(QueryException.BROKER_TIMEOUT_ERROR_CODE, e.getMessage()));
+ return;
+ }
+ }
+ List<Object[]> resultRows;
+ replaceColumnNameWithAlias(dataSchema);
+ if (_queryContext.getAggregationFunctions() != null) {
+ validateGroupByForOuterQuery();
+ }
+
+ if (_gapfillType == GapfillUtils.GapfillType.GAP_FILL_AGGREGATE
+ || _gapfillType == GapfillUtils.GapfillType.GAP_FILL
+ || _gapfillType == GapfillUtils.GapfillType.GAP_FILL_SELECT) {
+ List<Object[]> gapfilledRows = gapFillAndAggregate(sortedRawRows, resultTableSchema, dataSchema);
+ if (_gapfillType == GapfillUtils.GapfillType.GAP_FILL_SELECT) {
+ List<String> selectionColumns = SelectionOperatorUtils.getSelectionColumns(_queryContext, dataSchema);
+ resultRows = new ArrayList<>(gapfilledRows.size());
+
+ Map<String, Integer> columnNameToIndexMap = new HashMap<>(dataSchema.getColumnNames().length);
+ String[] columnNames = dataSchema.getColumnNames();
+ for (int i = 0; i < columnNames.length; i++) {
+ columnNameToIndexMap.put(columnNames[i], i);
+ }
+
+ ColumnDataType[] columnDataTypes = dataSchema.getColumnDataTypes();
+ ColumnDataType[] resultColumnDataTypes = new ColumnDataType[selectionColumns.size()];
+ for (int i = 0; i < resultColumnDataTypes.length; i++) {
+ String name = selectionColumns.get(i);
+ int index = columnNameToIndexMap.get(name);
+ resultColumnDataTypes[i] = columnDataTypes[index];
+ }
+
+ for (Object[] row : gapfilledRows) {
+ Object[] resultRow = new Object[selectionColumns.size()];
+ for (int i = 0; i < selectionColumns.size(); i++) {
+ int index = columnNameToIndexMap.get(selectionColumns.get(i));
+ resultRow[i] = resultColumnDataTypes[i].convertAndFormat(row[index]);
+ }
+ resultRows.add(resultRow);
+ }
+ } else {
+ resultRows = gapfilledRows;
+ }
+ } else {
+ this.setupColumnTypeForAggregatedColum(dataSchema.getColumnDataTypes());
+ ColumnDataType[] columnDataTypes = dataSchema.getColumnDataTypes();
+ for (Object[] row : sortedRawRows) {
+ extractFinalAggregationResults(row);
+ for (int i = 0; i < columnDataTypes.length; i++) {
+ row[i] = columnDataTypes[i].convert(row[i]);
+ }
+ }
+ resultRows = gapFillAndAggregate(sortedRawRows, resultTableSchema, dataSchema);
+ }
+ brokerResponseNative.setResultTable(new ResultTable(resultTableSchema, resultRows));
+ }
+
+ private void extractFinalAggregationResults(Object[] row) {
+ AggregationFunction[] aggregationFunctions;
+ if (_gapfillType == GapfillUtils.GapfillType.AGGREGATE_GAP_FILL) {
+ aggregationFunctions = _queryContext.getSubQueryContext().getAggregationFunctions();
+ } else {
+ aggregationFunctions = _queryContext.getSubQueryContext().getSubQueryContext().getAggregationFunctions();
+ }
+ int numAggregationFunctionsForInnerQuery = aggregationFunctions == null ? 0 : aggregationFunctions.length;
+ for (int i = 0; i < numAggregationFunctionsForInnerQuery; i++) {
+ int valueIndex = _timeSeries.size() + 1 + i;
+ row[valueIndex] = aggregationFunctions[i].extractFinalResult(row[valueIndex]);
+ }
+ }
+
+ private void setupColumnTypeForAggregatedColum(ColumnDataType[] columnDataTypes) {
+ AggregationFunction[] aggregationFunctions;
+ if (_gapfillType == GapfillUtils.GapfillType.AGGREGATE_GAP_FILL) {
+ aggregationFunctions = _queryContext.getSubQueryContext().getAggregationFunctions();
+ } else {
+ aggregationFunctions = _queryContext.getSubQueryContext().getSubQueryContext().getAggregationFunctions();
+ }
+ int numAggregationFunctionsForInnerQuery = aggregationFunctions == null ? 0 : aggregationFunctions.length;
+ for (int i = 0; i < numAggregationFunctionsForInnerQuery; i++) {
+ columnDataTypes[_timeSeries.size() + 1 + i] = aggregationFunctions[i].getFinalResultColumnType();
+ }
+ }
+
+ /**
+ * Constructs the DataSchema for the ResultTable.
+ */
+ private DataSchema getResultTableDataSchema(DataSchema dataSchema) {
+ if (_gapfillType == GapfillUtils.GapfillType.GAP_FILL) {
+ return dataSchema;
+ }
+
+ int numOfColumns = _queryContext.getSelectExpressions().size();
+ String[] columnNames = new String[numOfColumns];
+ ColumnDataType[] columnDataTypes = new ColumnDataType[numOfColumns];
+ for (int i = 0; i < numOfColumns; i++) {
+ ExpressionContext expressionContext = _queryContext.getSelectExpressions().get(i);
+ if (GapfillUtils.isGapfill(expressionContext)) {
+ expressionContext = expressionContext.getFunction().getArguments().get(0);
+ }
+ if (expressionContext.getType() != ExpressionContext.Type.FUNCTION) {
+ columnNames[i] = expressionContext.getIdentifier();
+ columnDataTypes[i] = ColumnDataType.STRING;
+ } else {
+ FunctionContext functionContext = expressionContext.getFunction();
+ AggregationFunction aggregationFunction =
+ AggregationFunctionFactory.getAggregationFunction(functionContext, _queryContext);
+ columnDataTypes[i] = aggregationFunction.getFinalResultColumnType();
+ columnNames[i] = functionContext.toString();
+ }
+ }
+ return new DataSchema(columnNames, columnDataTypes);
+ }
+
+ private Key constructGroupKeys(Object[] row) {
+ Object[] groupKeys = new Object[_groupByKeyIndexes.size()];
+ for (int i = 0; i < _groupByKeyIndexes.size(); i++) {
+ groupKeys[i] = row[_groupByKeyIndexes.get(i)];
+ }
+ return new Key(groupKeys);
+ }
+
+ private long truncate(long epoch) {
+ int sz = _dateTimeGranularity.getSize();
+ return epoch / sz * sz;
+ }
+
+ private List<Object[]> gapFillAndAggregate(List<Object[]> sortedRows,
+ DataSchema dataSchemaForAggregatedResult,
+ DataSchema dataSchema) {
+ List<Object[]> result = new ArrayList<>();
+
+ PreAggregateGapfillFilterHandler postGapfillFilterHandler = null;
+ if (_queryContext.getSubQueryContext() != null && _queryContext.getFilter() != null) {
+ postGapfillFilterHandler = new PreAggregateGapfillFilterHandler(_queryContext.getFilter(), dataSchema);
+ }
+ PreAggregateGapfillFilterHandler postAggregateHavingFilterHandler = null;
+ if (_queryContext.getHavingFilter() != null) {
+ postAggregateHavingFilterHandler = new PreAggregateGapfillFilterHandler(
+ _queryContext.getHavingFilter(), dataSchemaForAggregatedResult);
+ }
+ Object[] previous = null;
+ Iterator<Object[]> sortedIterator = sortedRows.iterator();
+ for (long time = _startMs; time < _endMs; time += _timeBucketSize) {
+ List<Object[]> bucketedResult = new ArrayList<>();
+ previous = gapfill(time, bucketedResult, sortedIterator, previous, dataSchema, postGapfillFilterHandler);
+ if (_queryContext.getAggregationFunctions() == null) {
+ result.addAll(bucketedResult);
+ } else if (bucketedResult.size() > 0) {
+ List<Object[]> aggregatedRows = aggregateGapfilledData(bucketedResult, dataSchema);
+ for (Object[] aggregatedRow : aggregatedRows) {
+ if (postAggregateHavingFilterHandler == null || postAggregateHavingFilterHandler.isMatch(aggregatedRow)) {
+ result.add(aggregatedRow);
+ }
+ if (result.size() >= _limitForAggregatedResult) {
+ return result;
+ }
+ }
+ }
+ }
+ return result;
+ }
+
+ private Object[] gapfill(long bucketTime,
+ List<Object[]> bucketedResult,
+ Iterator<Object[]> sortedIterator,
+ Object[] previous,
+ DataSchema dataSchema,
+ PreAggregateGapfillFilterHandler postGapfillFilterHandler) {
+ ColumnDataType[] resultColumnDataTypes = dataSchema.getColumnDataTypes();
+ int numResultColumns = resultColumnDataTypes.length;
+ Set<Key> keys = new HashSet<>(_groupByKeys);
+ if (previous == null && sortedIterator.hasNext()) {
+ previous = sortedIterator.next();
+ }
+
+ while (previous != null) {
+ Object[] resultRow = previous;
+ for (int i = 0; i < resultColumnDataTypes.length; i++) {
+ resultRow[i] = resultColumnDataTypes[i].format(resultRow[i]);
+ }
+
+ long timeCol = _dateTimeFormatter.fromFormatToMillis(String.valueOf(resultRow[0]));
+ if (timeCol > bucketTime) {
+ break;
+ }
+ if (timeCol == bucketTime) {
+ if (postGapfillFilterHandler == null || postGapfillFilterHandler.isMatch(previous)) {
+ if (bucketedResult.size() >= _limitForGapfilledResult) {
+ _limitForGapfilledResult = 0;
+ break;
+ } else {
+ bucketedResult.add(resultRow);
+ }
+ }
+ Key key = constructGroupKeys(resultRow);
+ keys.remove(key);
+ _previousByGroupKey.put(key, resultRow);
+ }
+ if (sortedIterator.hasNext()) {
+ previous = sortedIterator.next();
+ } else {
+ previous = null;
+ }
+ }
+
+ for (Key key : keys) {
+ Object[] gapfillRow = new Object[numResultColumns];
+ int keyIndex = 0;
+ if (resultColumnDataTypes[0] == ColumnDataType.LONG) {
+ gapfillRow[0] = Long.valueOf(_dateTimeFormatter.fromMillisToFormat(bucketTime));
+ } else {
+ gapfillRow[0] = _dateTimeFormatter.fromMillisToFormat(bucketTime);
+ }
+ for (int i = 1; i < _isGroupBySelections.length; i++) {
+ if (_isGroupBySelections[i]) {
+ gapfillRow[i] = key.getValues()[keyIndex++];
+ } else {
+ gapfillRow[i] = getFillValue(i, dataSchema.getColumnName(i), key, resultColumnDataTypes[i]);
+ }
+ }
+
+ if (postGapfillFilterHandler == null || postGapfillFilterHandler.isMatch(gapfillRow)) {
+ if (bucketedResult.size() >= _limitForGapfilledResult) {
+ break;
+ } else {
+ bucketedResult.add(gapfillRow);
+ }
+ }
+ }
+ if (_limitForGapfilledResult > _groupByKeys.size()) {
+ _limitForGapfilledResult -= _groupByKeys.size();
+ } else {
+ _limitForGapfilledResult = 0;
+ }
+ return previous;
+ }
+
+ /**
+ * Make sure that the outer query has a group by clause and that the group by clause contains the time bucket.
+ */
+ private void validateGroupByForOuterQuery() {
+ List<ExpressionContext> groupbyExpressions = _queryContext.getGroupByExpressions();
+ Preconditions.checkArgument(groupbyExpressions != null, "No GroupBy Clause.");
+ List<ExpressionContext> innerSelections = _queryContext.getSubQueryContext().getSelectExpressions();
+ String timeBucketCol = null;
+ List<String> strAlias = _queryContext.getSubQueryContext().getAliasList();
+ for (int i = 0; i < innerSelections.size(); i++) {
+ ExpressionContext innerSelection = innerSelections.get(i);
+ if (GapfillUtils.isGapfill(innerSelection)) {
+ if (strAlias.get(i) != null) {
+ timeBucketCol = strAlias.get(i);
+ } else {
+ timeBucketCol = innerSelection.getFunction().getArguments().get(0).toString();
+ }
+ break;
+ }
+ }
+
+ Preconditions.checkArgument(timeBucketCol != null, "No Group By timebucket.");
+
+ boolean findTimeBucket = false;
+ for (ExpressionContext groupbyExp : groupbyExpressions) {
+ if (timeBucketCol.equals(groupbyExp.toString())) {
+ findTimeBucket = true;
+ break;
+ }
+ }
+
+ Preconditions.checkArgument(findTimeBucket, "No Group By timebucket.");
+ }
+
+ private List<Object[]> aggregateGapfilledData(List<Object[]> bucketedRows, DataSchema dataSchema) {
Review comment:
IndexedTable is used to merge the intermediate aggregate results. We do not need to merge the intermediate aggregated results inside the outer aggregation, since there are no multiple segments at that point.
If the subquery is an aggregation query, the intermediate results from different Pinot segments have already been merged before gapfill happens; IndexedTable has already been used to merge the intermediate results of the subquery aggregation.
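The per-bucket gapfill loop under discussion can be illustrated with a minimal, self-contained sketch. This is an assumption-laden simplification, not the PR's actual classes: rows are modeled as `{timeMs, key, value}` long arrays sorted by time, and missing (bucket, key) pairs are filled with the last observed value per key, analogous to `_previousByGroupKey` in the reducer.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class GapfillSketch {
  // Hypothetical stand-in for the reducer's gapfill loop. Rows are {timeMs, key, value},
  // sorted by time; missing (bucket, key) pairs are filled with the last observed value.
  public static List<long[]> gapfill(List<long[]> sortedRows, long startMs, long endMs,
      long bucketMs, Set<Long> allKeys) {
    Map<Long, Long> previousByKey = new HashMap<>(); // key -> last seen value
    List<long[]> result = new ArrayList<>();
    int i = 0;
    for (long bucket = startMs; bucket < endMs; bucket += bucketMs) {
      Set<Long> missing = new HashSet<>(allKeys);
      // Emit the rows that actually landed in this bucket.
      while (i < sortedRows.size() && sortedRows.get(i)[0] == bucket) {
        long[] row = sortedRows.get(i++);
        result.add(row);
        missing.remove(row[1]);
        previousByKey.put(row[1], row[2]);
      }
      // Fill the keys that are absent from this bucket.
      for (long key : missing) {
        Long previous = previousByKey.get(key);
        if (previous != null) { // nothing to fill with before the first observation
          result.add(new long[] {bucket, key, previous});
        }
      }
    }
    return result;
  }

  public static void main(String[] args) {
    List<long[]> rows = Arrays.asList(
        new long[] {0, 1, 10}, new long[] {0, 2, 20}, // bucket 0: both keys present
        new long[] {2, 1, 11});                       // bucket 1: both missing; bucket 2: key 2 missing
    List<long[]> filled = gapfill(rows, 0, 3, 1, new HashSet<>(Arrays.asList(1L, 2L)));
    System.out.println(filled.size()); // 3 buckets x 2 keys = 6 rows
  }
}
```

Because the fill consults only previously observed rows, this sketch mirrors why the merge of intermediate results must complete before gapfill starts: the loop assumes a single, globally time-sorted input.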
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: commits-unsubscribe@pinot.apache.org
For queries about this service, please contact Infrastructure at:
users@infra.apache.org
---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@pinot.apache.org
For additional commands, e-mail: commits-help@pinot.apache.org
[GitHub] [pinot] codecov-commenter edited a comment on pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#issuecomment-1013577523
# [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) Report
> Merging [#8029](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (a8bf363) into [master](https://codecov.io/gh/apache/pinot/commit/b05a5419c88fd61450156189c8754d8c10614423?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (b05a541) will **increase** coverage by `2.91%`.
> The diff coverage is `79.01%`.
```diff
@@ Coverage Diff @@
## master #8029 +/- ##
============================================
+ Coverage 64.10% 67.01% +2.91%
+ Complexity 4267 4183 -84
============================================
Files 1594 1246 -348
Lines 84040 63053 -20987
Branches 12719 9868 -2851
============================================
- Hits 53870 42255 -11615
+ Misses 26291 17765 -8526
+ Partials 3879 3033 -846
```
| Flag | Coverage Δ | |
|---|---|---|
| unittests1 | `67.01% <79.01%> (+<0.01%)` | :arrow_up: |
| unittests2 | `?` | |
Flags with carried forward coverage won't be shown. [Click here](https://docs.codecov.io/docs/carryforward-flags?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#carryforward-flags-in-the-pull-request-comment) to find out more.
| [Impacted Files](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | Coverage Δ | |
|---|---|---|
| [...pache/pinot/common/utils/request/RequestUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9jb21tb24vdXRpbHMvcmVxdWVzdC9SZXF1ZXN0VXRpbHMuamF2YQ==) | `85.71% <0.00%> (-1.79%)` | :arrow_down: |
| [...e/pinot/core/query/reduce/RowBasedBlockValSet.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUm93QmFzZWRCbG9ja1ZhbFNldC5qYXZh) | `16.12% <16.12%> (ø)` | |
| [...inot/core/query/reduce/PostAggregationHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUG9zdEFnZ3JlZ2F0aW9uSGFuZGxlci5qYXZh) | `91.89% <66.66%> (+0.12%)` | :arrow_up: |
| [...ot/core/query/reduce/filter/RowMatcherFactory.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvZmlsdGVyL1Jvd01hdGNoZXJGYWN0b3J5LmphdmE=) | `66.66% <66.66%> (ø)` | |
| [.../java/org/apache/pinot/core/util/GapfillUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS91dGlsL0dhcGZpbGxVdGlscy5qYXZh) | `72.28% <74.21%> (+8.64%)` | :arrow_up: |
| [...ore/query/reduce/filter/LiteralValueExtractor.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvZmlsdGVyL0xpdGVyYWxWYWx1ZUV4dHJhY3Rvci5qYXZh) | `83.33% <83.33%> (ø)` | |
| [.../pinot/core/query/reduce/GapfillFilterHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvR2FwZmlsbEZpbHRlckhhbmRsZXIuamF2YQ==) | `85.71% <85.71%> (ø)` | |
| [.../core/query/reduce/filter/PredicateRowMatcher.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvZmlsdGVyL1ByZWRpY2F0ZVJvd01hdGNoZXIuamF2YQ==) | `87.50% <87.50%> (ø)` | |
| [...org/apache/pinot/sql/parsers/CalciteSqlParser.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9zcWwvcGFyc2Vycy9DYWxjaXRlU3FsUGFyc2VyLmphdmE=) | `86.60% <88.88%> (-0.17%)` | :arrow_down: |
| [...e/pinot/core/query/reduce/BrokerReduceService.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvQnJva2VyUmVkdWNlU2VydmljZS5qYXZh) | `81.81% <91.66%> (+0.73%)` | :arrow_up: |
| ... and [374 more](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Last update [b05a541...a8bf363](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
[GitHub] [pinot] codecov-commenter edited a comment on pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#issuecomment-1013577523
# [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) Report
> Merging [#8029](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (4a0d902) into [master](https://codecov.io/gh/apache/pinot/commit/262dc50e236ed2af25a0cf8c67658a48731ce573?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (262dc50) will **decrease** coverage by `56.74%`.
> The diff coverage is `0.31%`.
[![Impacted file tree graph](https://codecov.io/gh/apache/pinot/pull/8029/graphs/tree.svg?width=650&height=150&src=pr&token=4ibza2ugkz&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
```diff
@@ Coverage Diff @@
## master #8029 +/- ##
=============================================
- Coverage 70.83% 14.08% -56.75%
+ Complexity 4258 81 -4177
=============================================
Files 1636 1600 -36
Lines 85804 84442 -1362
Branches 12920 12855 -65
=============================================
- Hits 60779 11894 -48885
- Misses 20836 71653 +50817
+ Partials 4189 895 -3294
```
| Flag | Coverage Δ | |
|---|---|---|
| integration1 | `?` | |
| integration2 | `?` | |
| unittests1 | `?` | |
| unittests2 | `14.08% <0.31%> (-0.10%)` | :arrow_down: |
Flags with carried forward coverage won't be shown. [Click here](https://docs.codecov.io/docs/carryforward-flags?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#carryforward-flags-in-the-pull-request-comment) to find out more.
| [Impacted Files](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | Coverage Δ | |
|---|---|---|
| [...roker/requesthandler/GrpcBrokerRequestHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtYnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9icm9rZXIvcmVxdWVzdGhhbmRsZXIvR3JwY0Jyb2tlclJlcXVlc3RIYW5kbGVyLmphdmE=) | `0.00% <ø> (-78.58%)` | :arrow_down: |
| [...thandler/SingleConnectionBrokerRequestHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtYnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9icm9rZXIvcmVxdWVzdGhhbmRsZXIvU2luZ2xlQ29ubmVjdGlvbkJyb2tlclJlcXVlc3RIYW5kbGVyLmphdmE=) | `13.20% <0.00%> (-73.83%)` | :arrow_down: |
| [...org/apache/pinot/sql/parsers/CalciteSqlParser.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9zcWwvcGFyc2Vycy9DYWxjaXRlU3FsUGFyc2VyLmphdmE=) | `0.00% <0.00%> (-88.20%)` | :arrow_down: |
| [...t/controller/api/resources/PinotQueryResource.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29udHJvbGxlci9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29udHJvbGxlci9hcGkvcmVzb3VyY2VzL1Bpbm90UXVlcnlSZXNvdXJjZS5qYXZh) | `0.00% <0.00%> (-50.35%)` | :arrow_down: |
| [...e/pinot/core/query/reduce/BrokerReduceService.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvQnJva2VyUmVkdWNlU2VydmljZS5qYXZh) | `0.00% <0.00%> (-97.30%)` | :arrow_down: |
| [...query/reduce/ColumnDataToBlockValSetConverter.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvQ29sdW1uRGF0YVRvQmxvY2tWYWxTZXRDb252ZXJ0ZXIuamF2YQ==) | `0.00% <0.00%> (ø)` | |
| [...ache/pinot/core/query/reduce/GapFillProcessor.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvR2FwRmlsbFByb2Nlc3Nvci5qYXZh) | `0.00% <0.00%> (ø)` | |
| [.../pinot/core/query/reduce/GapfillFilterHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvR2FwZmlsbEZpbHRlckhhbmRsZXIuamF2YQ==) | `0.00% <0.00%> (ø)` | |
| [...e/pinot/core/query/reduce/HavingFilterHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvSGF2aW5nRmlsdGVySGFuZGxlci5qYXZh) | `0.00% <0.00%> (-91.31%)` | :arrow_down: |
| [...inot/core/query/reduce/PostAggregationHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUG9zdEFnZ3JlZ2F0aW9uSGFuZGxlci5qYXZh) | `0.00% <0.00%> (-91.77%)` | :arrow_down: |
| ... and [1327 more](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Last update [262dc50...4a0d902](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: commits-unsubscribe@pinot.apache.org
For queries about this service, please contact Infrastructure at:
users@infra.apache.org
---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@pinot.apache.org
For additional commands, e-mail: commits-help@pinot.apache.org
[GitHub] [pinot] codecov-commenter edited a comment on pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#issuecomment-1013577523
# [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) Report
> Merging [#8029](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (f07e90b) into [master](https://codecov.io/gh/apache/pinot/commit/262dc50e236ed2af25a0cf8c67658a48731ce573?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (262dc50) will **decrease** coverage by `56.74%`.
> The diff coverage is `0.27%`.
[![Impacted file tree graph](https://codecov.io/gh/apache/pinot/pull/8029/graphs/tree.svg?width=650&height=150&src=pr&token=4ibza2ugkz&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
```diff
@@             Coverage Diff              @@
##           master    #8029       +/-   ##
=============================================
- Coverage     70.83%   14.09%   -56.75%
+ Complexity     4258       81     -4177
=============================================
  Files          1636     1600       -36
  Lines         85804    84538     -1266
  Branches      12920    12871       -49
=============================================
- Hits          60779    11915    -48864
- Misses        20836    71724    +50888
+ Partials       4189      899     -3290
```
| Flag | Coverage Δ | |
|---|---|---|
| integration1 | `?` | |
| integration2 | `?` | |
| unittests1 | `?` | |
| unittests2 | `14.09% <0.27%> (-0.09%)` | :arrow_down: |
Flags with carried forward coverage won't be shown. [Click here](https://docs.codecov.io/docs/carryforward-flags?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#carryforward-flags-in-the-pull-request-comment) to find out more.
| [Impacted Files](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | Coverage Δ | |
|---|---|---|
| [...org/apache/pinot/sql/parsers/CalciteSqlParser.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9zcWwvcGFyc2Vycy9DYWxjaXRlU3FsUGFyc2VyLmphdmE=) | `0.00% <0.00%> (-88.20%)` | :arrow_down: |
| [...t/controller/api/resources/PinotQueryResource.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29udHJvbGxlci9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29udHJvbGxlci9hcGkvcmVzb3VyY2VzL1Bpbm90UXVlcnlSZXNvdXJjZS5qYXZh) | `0.00% <0.00%> (-50.35%)` | :arrow_down: |
| [...query/reduce/ColumnDataToBlockValSetConverter.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvQ29sdW1uRGF0YVRvQmxvY2tWYWxTZXRDb252ZXJ0ZXIuamF2YQ==) | `0.00% <0.00%> (ø)` | |
| [...not/core/query/reduce/GapFillDataTableReducer.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvR2FwRmlsbERhdGFUYWJsZVJlZHVjZXIuamF2YQ==) | `0.00% <0.00%> (ø)` | |
| [.../pinot/core/query/reduce/GapfillFilterHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvR2FwZmlsbEZpbHRlckhhbmRsZXIuamF2YQ==) | `0.00% <0.00%> (ø)` | |
| [...e/pinot/core/query/reduce/HavingFilterHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvSGF2aW5nRmlsdGVySGFuZGxlci5qYXZh) | `0.00% <0.00%> (-91.31%)` | :arrow_down: |
| [...inot/core/query/reduce/PostAggregationHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUG9zdEFnZ3JlZ2F0aW9uSGFuZGxlci5qYXZh) | `0.00% <0.00%> (-91.77%)` | :arrow_down: |
| [.../pinot/core/query/reduce/ResultReducerFactory.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUmVzdWx0UmVkdWNlckZhY3RvcnkuamF2YQ==) | `0.00% <0.00%> (-81.25%)` | :arrow_down: |
| [.../pinot/core/query/reduce/filter/AndRowMatcher.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvZmlsdGVyL0FuZFJvd01hdGNoZXIuamF2YQ==) | `0.00% <0.00%> (ø)` | |
| [...core/query/reduce/filter/ColumnValueExtractor.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvZmlsdGVyL0NvbHVtblZhbHVlRXh0cmFjdG9yLmphdmE=) | `0.00% <0.00%> (ø)` | |
| ... and [1323 more](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Last update [262dc50...f07e90b](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r815209865
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/plan/GapfillSelectionPlanNode.java
##########
@@ -0,0 +1,90 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.core.plan;
+
+import com.google.common.base.Preconditions;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Set;
+import org.apache.pinot.common.request.context.ExpressionContext;
+import org.apache.pinot.core.common.Operator;
+import org.apache.pinot.core.operator.blocks.IntermediateResultsBlock;
+import org.apache.pinot.core.operator.query.SelectionOnlyOperator;
+import org.apache.pinot.core.operator.transform.TransformOperator;
+import org.apache.pinot.core.query.request.context.QueryContext;
+import org.apache.pinot.core.query.selection.SelectionOperatorUtils;
+import org.apache.pinot.core.util.GapfillUtils;
+import org.apache.pinot.segment.spi.IndexSegment;
+
+
+/**
+ * The <code>GapfillSelectionPlanNode</code> class provides the execution
+ * plan for a pre-aggregate gapfill query on a single segment.
+ */
+public class GapfillSelectionPlanNode implements PlanNode {
+ private final IndexSegment _indexSegment;
+ private final QueryContext _queryContext;
+
+ public GapfillSelectionPlanNode(IndexSegment indexSegment, QueryContext queryContext) {
+ _indexSegment = indexSegment;
+ _queryContext = queryContext;
+ }
+
+ @Override
+ public Operator<IntermediateResultsBlock> run() {
+ int limit = _queryContext.getLimit();
+
+ QueryContext queryContext = getSelectQueryContext();
+ Preconditions.checkArgument(queryContext.getOrderByExpressions() == null,
+ "The gapfill query should not have orderby expression.");
Review comment:
At the beginning, I thought that since the result will be sorted by timestamp anyway, we should not allow the customer to order the result by any other dimension. But you might be right: we should also allow the customer to order the result by other dimension(s). We do not need to do the ordering on the Pinot server for now; we can do the sorting on the broker side. I will make the change.
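The broker-side ordering proposed above can be illustrated with a minimal, self-contained Java sketch. The row layout (`[timeBucketMillis, entityId, value]`) and the class name are hypothetical, for illustration only, and do not reflect Pinot's actual reducer API; the point is simply that gap-filled rows can be re-sorted by an arbitrary dimension after gapfill completes, using a composed `Comparator`:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class BrokerSideSortSketch {
    public static void main(String[] args) {
        // Hypothetical gap-filled rows: [timeBucketMillis, entityId, value].
        // The row with value 0.0 stands in for a filled gap.
        List<Object[]> rows = new ArrayList<>();
        rows.add(new Object[]{1000L, "b", 2.0});
        rows.add(new Object[]{2000L, "a", 0.0});
        rows.add(new Object[]{1000L, "a", 1.0});
        rows.add(new Object[]{2000L, "b", 3.0});

        // Order by entity first, then by time bucket -- the kind of
        // dimension ordering the comment proposes doing on the broker.
        rows.sort(Comparator.<Object[], String>comparing(r -> (String) r[1])
                .thenComparingLong(r -> (Long) r[0]));

        // Prints all "a" rows first, each group ordered by time bucket.
        for (Object[] r : rows) {
            System.out.println(r[1] + " " + r[0] + " " + r[2]);
        }
    }
}
```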
[GitHub] [pinot] codecov-commenter edited a comment on pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#issuecomment-1013577523
# [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) Report
> Merging [#8029](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (a4b3044) into [master](https://codecov.io/gh/apache/pinot/commit/3f98ce37fdaef0335fcd82e621489d65751b1f55?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (3f98ce3) will **decrease** coverage by `0.94%`.
> The diff coverage is `79.51%`.
[![Impacted file tree graph](https://codecov.io/gh/apache/pinot/pull/8029/graphs/tree.svg?width=650&height=150&src=pr&token=4ibza2ugkz&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
```diff
@@             Coverage Diff             @@
##           master    #8029      +/-   ##
============================================
- Coverage     70.72%   69.78%   -0.95%
+ Complexity     4242     4241      -1
============================================
  Files          1631     1641     +10
  Lines         85279    86053    +774
  Branches      12844    13029    +185
============================================
- Hits          60316    60051    -265
- Misses        20799    21820   +1021
- Partials       4164     4182     +18
```
| Flag | Coverage Δ | |
|---|---|---|
| integration1 | `28.63% <16.73%> (-0.06%)` | :arrow_down: |
| integration2 | `?` | |
| unittests1 | `67.04% <79.73%> (+0.06%)` | :arrow_up: |
| unittests2 | `14.08% <0.00%> (-0.02%)` | :arrow_down: |
Flags with carried forward coverage won't be shown. [Click here](https://docs.codecov.io/docs/carryforward-flags?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#carryforward-flags-in-the-pull-request-comment) to find out more.
| [Impacted Files](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | Coverage Δ | |
|---|---|---|
| [...roker/requesthandler/BaseBrokerRequestHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtYnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9icm9rZXIvcmVxdWVzdGhhbmRsZXIvQmFzZUJyb2tlclJlcXVlc3RIYW5kbGVyLmphdmE=) | `69.92% <0.00%> (-1.93%)` | :arrow_down: |
| [...t/core/query/selection/SelectionOperatorUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9zZWxlY3Rpb24vU2VsZWN0aW9uT3BlcmF0b3JVdGlscy5qYXZh) | `91.58% <0.00%> (-0.60%)` | :arrow_down: |
| [...query/reduce/ColumnDataToBlockValSetConverter.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvQ29sdW1uRGF0YVRvQmxvY2tWYWxTZXRDb252ZXJ0ZXIuamF2YQ==) | `17.30% <17.30%> (ø)` | |
| [...ot/core/query/reduce/filter/RowMatcherFactory.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvZmlsdGVyL1Jvd01hdGNoZXJGYWN0b3J5LmphdmE=) | `66.66% <66.66%> (ø)` | |
| [...pinot/core/plan/maker/InstancePlanMakerImplV2.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL21ha2VyL0luc3RhbmNlUGxhbk1ha2VySW1wbFYyLmphdmE=) | `66.36% <73.68%> (-10.41%)` | :arrow_down: |
| [.../java/org/apache/pinot/core/util/GapfillUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS91dGlsL0dhcGZpbGxVdGlscy5qYXZh) | `70.75% <74.35%> (+7.11%)` | :arrow_up: |
| [...va/org/apache/pinot/core/plan/CombinePlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL0NvbWJpbmVQbGFuTm9kZS5qYXZh) | `80.55% <75.00%> (-6.95%)` | :arrow_down: |
| [...inot/core/query/reduce/PostAggregationHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUG9zdEFnZ3JlZ2F0aW9uSGFuZGxlci5qYXZh) | `92.10% <75.00%> (+0.34%)` | :arrow_up: |
| [...xt/utils/BrokerRequestToQueryContextConverter.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZXF1ZXN0L2NvbnRleHQvdXRpbHMvQnJva2VyUmVxdWVzdFRvUXVlcnlDb250ZXh0Q29udmVydGVyLmphdmE=) | `92.77% <77.77%> (-5.62%)` | :arrow_down: |
| [...ore/query/reduce/filter/LiteralValueExtractor.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvZmlsdGVyL0xpdGVyYWxWYWx1ZUV4dHJhY3Rvci5qYXZh) | `83.33% <83.33%> (ø)` | |
| ... and [108 more](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Last update [3f98ce3...a4b3044](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
[GitHub] [pinot] codecov-commenter edited a comment on pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#issuecomment-1013577523
# [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) Report
> Merging [#8029](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (0285775) into [master](https://codecov.io/gh/apache/pinot/commit/3f98ce37fdaef0335fcd82e621489d65751b1f55?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (3f98ce3) will **increase** coverage by `0.10%`.
> The diff coverage is `81.66%`.
[![Impacted file tree graph](https://codecov.io/gh/apache/pinot/pull/8029/graphs/tree.svg?width=650&height=150&src=pr&token=4ibza2ugkz&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
```diff
@@             Coverage Diff             @@
##           master    #8029      +/-   ##
============================================
+ Coverage     70.72%   70.82%   +0.10%
  Complexity     4242     4242
============================================
  Files          1631     1641      +10
  Lines         85279    85899     +620
  Branches      12844    12997     +153
============================================
+ Hits          60316    60842     +526
- Misses        20799    20853      +54
- Partials       4164     4204      +40
```
| Flag | Coverage Δ | |
|---|---|---|
| integration1 | `28.63% <13.89%> (-0.06%)` | :arrow_down: |
| integration2 | `27.35% <13.89%> (-0.14%)` | :arrow_down: |
| unittests1 | `67.11% <81.89%> (+0.13%)` | :arrow_up: |
| unittests2 | `14.03% <0.00%> (-0.07%)` | :arrow_down: |
Flags with carried forward coverage won't be shown. [Click here](https://docs.codecov.io/docs/carryforward-flags?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#carryforward-flags-in-the-pull-request-comment) to find out more.
| [Impacted Files](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | Coverage Δ | |
|---|---|---|
| [...roker/requesthandler/BaseBrokerRequestHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtYnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9icm9rZXIvcmVxdWVzdGhhbmRsZXIvQmFzZUJyb2tlclJlcXVlc3RIYW5kbGVyLmphdmE=) | `71.72% <0.00%> (-0.13%)` | :arrow_down: |
| [...t/core/query/selection/SelectionOperatorUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9zZWxlY3Rpb24vU2VsZWN0aW9uT3BlcmF0b3JVdGlscy5qYXZh) | `91.58% <0.00%> (-0.60%)` | :arrow_down: |
| [...query/reduce/ColumnDataToBlockValSetConverter.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvQ29sdW1uRGF0YVRvQmxvY2tWYWxTZXRDb252ZXJ0ZXIuamF2YQ==) | `17.30% <17.30%> (ø)` | |
| [...ot/core/query/reduce/filter/RowMatcherFactory.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvZmlsdGVyL1Jvd01hdGNoZXJGYWN0b3J5LmphdmE=) | `66.66% <66.66%> (ø)` | |
| [.../java/org/apache/pinot/core/util/GapfillUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS91dGlsL0dhcGZpbGxVdGlscy5qYXZh) | `75.00% <81.15%> (+11.36%)` | :arrow_up: |
| [...va/org/apache/pinot/core/plan/CombinePlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL0NvbWJpbmVQbGFuTm9kZS5qYXZh) | `86.36% <81.81%> (-1.14%)` | :arrow_down: |
| [...inot/core/query/reduce/PostAggregationHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUG9zdEFnZ3JlZ2F0aW9uSGFuZGxlci5qYXZh) | `92.00% <83.33%> (+0.23%)` | :arrow_up: |
| [...ore/query/reduce/filter/LiteralValueExtractor.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvZmlsdGVyL0xpdGVyYWxWYWx1ZUV4dHJhY3Rvci5qYXZh) | `83.33% <83.33%> (ø)` | |
| [.../pinot/core/query/reduce/GapfillFilterHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvR2FwZmlsbEZpbHRlckhhbmRsZXIuamF2YQ==) | `85.71% <85.71%> (ø)` | |
| [.../reduce/PreAggregationGapFillDataTableReducer.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUHJlQWdncmVnYXRpb25HYXBGaWxsRGF0YVRhYmxlUmVkdWNlci5qYXZh) | `86.73% <86.73%> (ø)` | |
| ... and [40 more](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Last update [3f98ce3...0285775](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: commits-unsubscribe@pinot.apache.org
For queries about this service, please contact Infrastructure at:
users@infra.apache.org
---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@pinot.apache.org
For additional commands, e-mail: commits-help@pinot.apache.org
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r815391501
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/reduce/PreAggregationGapFillDataTableReducer.java
##########
@@ -0,0 +1,775 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.core.query.reduce;
+
+import com.google.common.base.Preconditions;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.Comparator;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.Iterator;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+import java.util.PriorityQueue;
+import java.util.Set;
+import java.util.concurrent.CountDownLatch;
+import java.util.concurrent.Future;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.TimeoutException;
+import org.apache.pinot.common.exception.QueryException;
+import org.apache.pinot.common.metrics.BrokerMetrics;
+import org.apache.pinot.common.request.context.ExpressionContext;
+import org.apache.pinot.common.request.context.FunctionContext;
+import org.apache.pinot.common.response.broker.BrokerResponseNative;
+import org.apache.pinot.common.response.broker.QueryProcessingException;
+import org.apache.pinot.common.response.broker.ResultTable;
+import org.apache.pinot.common.utils.DataSchema;
+import org.apache.pinot.common.utils.DataSchema.ColumnDataType;
+import org.apache.pinot.common.utils.DataTable;
+import org.apache.pinot.core.common.BlockValSet;
+import org.apache.pinot.core.data.table.ConcurrentIndexedTable;
+import org.apache.pinot.core.data.table.IndexedTable;
+import org.apache.pinot.core.data.table.Key;
+import org.apache.pinot.core.data.table.Record;
+import org.apache.pinot.core.data.table.SimpleIndexedTable;
+import org.apache.pinot.core.data.table.UnboundedConcurrentIndexedTable;
+import org.apache.pinot.core.operator.combine.GroupByOrderByCombineOperator;
+import org.apache.pinot.core.query.aggregation.function.AggregationFunction;
+import org.apache.pinot.core.query.aggregation.function.AggregationFunctionFactory;
+import org.apache.pinot.core.query.aggregation.groupby.GroupByResultHolder;
+import org.apache.pinot.core.query.request.context.QueryContext;
+import org.apache.pinot.core.query.selection.SelectionOperatorUtils;
+import org.apache.pinot.core.transport.ServerRoutingInstance;
+import org.apache.pinot.core.util.GapfillUtils;
+import org.apache.pinot.core.util.GroupByUtils;
+import org.apache.pinot.core.util.trace.TraceRunnable;
+import org.apache.pinot.spi.data.DateTimeFormatSpec;
+import org.apache.pinot.spi.data.DateTimeGranularitySpec;
+
+/**
+ * Helper class to reduce and set Aggregation results into the BrokerResponseNative
+ */
+@SuppressWarnings({"rawtypes", "unchecked"})
+public class PreAggregationGapFillDataTableReducer implements DataTableReducer {
+ private static final int MIN_DATA_TABLES_FOR_CONCURRENT_REDUCE = 2; // TBD, find a better value.
+
+ private final QueryContext _queryContext;
+
+ private final int _limitForAggregatedResult;
+ private int _limitForGapfilledResult;
+
+ private final DateTimeGranularitySpec _dateTimeGranularity;
+ private final DateTimeFormatSpec _dateTimeFormatter;
+ private final long _startMs;
+ private final long _endMs;
+ private final long _timeBucketSize;
+
+ private final List<Integer> _groupByKeyIndexes;
+ private boolean [] _isGroupBySelections;
+ private final Set<Key> _groupByKeys;
+ private final Map<Key, Object[]> _previousByGroupKey;
+ private final Map<String, ExpressionContext> _fillExpressions;
+ private final List<ExpressionContext> _timeSeries;
+ private final GapfillUtils.GapfillType _gapfillType;
+
+ PreAggregationGapFillDataTableReducer(QueryContext queryContext) {
+ _queryContext = queryContext;
+ _gapfillType = queryContext.getGapfillType();
+ _limitForAggregatedResult = queryContext.getLimit();
+ if (_gapfillType == GapfillUtils.GapfillType.AGGREGATE_GAP_FILL
+ || _gapfillType == GapfillUtils.GapfillType.GAP_FILL) {
+ _limitForGapfilledResult = queryContext.getLimit();
+ } else {
+ _limitForGapfilledResult = queryContext.getSubQueryContext().getLimit();
+ }
+
+ ExpressionContext gapFillSelection = GapfillUtils.getGapfillExpressionContext(queryContext);
+
+ Preconditions.checkArgument(
+ gapFillSelection != null && gapFillSelection.getFunction() != null, "Gapfill Expression should be function.");
+ List<ExpressionContext> args = gapFillSelection.getFunction().getArguments();
+ Preconditions.checkArgument(
+ args.size() > 5, "PreAggregateGapFill does not have correct number of arguments.");
+ Preconditions.checkArgument(
+ args.get(1).getLiteral() != null, "The second argument of PreAggregateGapFill should be TimeFormatter.");
+ Preconditions.checkArgument(
+ args.get(2).getLiteral() != null, "The third argument of PreAggregateGapFill should be start time.");
+ Preconditions.checkArgument(
+ args.get(3).getLiteral() != null, "The fourth argument of PreAggregateGapFill should be end time.");
+ Preconditions.checkArgument(
+ args.get(4).getLiteral() != null, "The fifth argument of PreAggregateGapFill should be time bucket size.");
+
+ _dateTimeFormatter = new DateTimeFormatSpec(args.get(1).getLiteral());
+ _dateTimeGranularity = new DateTimeGranularitySpec(args.get(4).getLiteral());
+ String start = args.get(2).getLiteral();
+ _startMs = truncate(_dateTimeFormatter.fromFormatToMillis(start));
+ String end = args.get(3).getLiteral();
+ _endMs = truncate(_dateTimeFormatter.fromFormatToMillis(end));
+ _timeBucketSize = _dateTimeGranularity.granularityToMillis();
+
+ _fillExpressions = GapfillUtils.getFillExpressions(gapFillSelection);
+
+ _previousByGroupKey = new HashMap<>();
+ _groupByKeyIndexes = new ArrayList<>();
+ _groupByKeys = new HashSet<>();
+
+ ExpressionContext timeseriesOn = GapfillUtils.getTimeSeriesOnExpressionContext(gapFillSelection);
+ Preconditions.checkArgument(timeseriesOn != null, "The TimeSeriesOn expressions should be specified.");
+ _timeSeries = timeseriesOn.getFunction().getArguments();
+ }
+
+ private void replaceColumnNameWithAlias(DataSchema dataSchema) {
+ QueryContext queryContext;
+ if (_gapfillType == GapfillUtils.GapfillType.AGGREGATE_GAP_FILL_AGGREGATE) {
+ queryContext = _queryContext.getSubQueryContext().getSubQueryContext();
+ } else if (_gapfillType == GapfillUtils.GapfillType.GAP_FILL) {
+ queryContext = _queryContext;
+ } else {
+ queryContext = _queryContext.getSubQueryContext();
+ }
+ List<String> aliasList = queryContext.getAliasList();
+ Map<String, String> columnNameToAliasMap = new HashMap<>();
+ for (int i = 0; i < aliasList.size(); i++) {
+ if (aliasList.get(i) != null) {
+ ExpressionContext selection = queryContext.getSelectExpressions().get(i);
+ if (GapfillUtils.isGapfill(selection)) {
+ selection = selection.getFunction().getArguments().get(0);
+ }
+ columnNameToAliasMap.put(selection.toString(), aliasList.get(i));
+ }
+ }
+ for (int i = 0; i < dataSchema.getColumnNames().length; i++) {
+ if (columnNameToAliasMap.containsKey(dataSchema.getColumnNames()[i])) {
+ dataSchema.getColumnNames()[i] = columnNameToAliasMap.get(dataSchema.getColumnNames()[i]);
+ }
+ }
+ }
+
+ /**
+ * Computes the number of reduce threads to use per query.
+ * <ul>
+ * <li> Use single thread if number of data tables to reduce is less than
+ * {@value #MIN_DATA_TABLES_FOR_CONCURRENT_REDUCE}.</li>
+ * <li> Else, use min of max allowed reduce threads per query, and number of data tables.</li>
+ * </ul>
+ *
+ * @param numDataTables Number of data tables to reduce
+ * @param maxReduceThreadsPerQuery Max allowed reduce threads per query
+ * @return Number of reduce threads to use for the query
+ */
+ private int getNumReduceThreadsToUse(int numDataTables, int maxReduceThreadsPerQuery) {
+ // Use single thread if number of data tables < MIN_DATA_TABLES_FOR_CONCURRENT_REDUCE.
+ if (numDataTables < MIN_DATA_TABLES_FOR_CONCURRENT_REDUCE) {
+ return Math.min(1, numDataTables); // Number of data tables can be zero.
+ }
+
+ return Math.min(maxReduceThreadsPerQuery, numDataTables);
+ }
+
+ private IndexedTable getIndexedTable(DataSchema dataSchema, Collection<DataTable> dataTablesToReduce,
+ DataTableReducerContext reducerContext)
+ throws TimeoutException {
+ long start = System.currentTimeMillis();
+ int numDataTables = dataTablesToReduce.size();
+
+ // Get the number of threads to use for reducing.
+ // In case of single reduce thread, fall back to SimpleIndexedTable to avoid redundant locking/unlocking calls.
+ int numReduceThreadsToUse = getNumReduceThreadsToUse(numDataTables, reducerContext.getMaxReduceThreadsPerQuery());
+ int limit = _queryContext.getLimit();
+ // TODO: Make minTrimSize configurable
+ int trimSize = GroupByUtils.getTableCapacity(limit);
+ // NOTE: For query with HAVING clause, use trimSize as resultSize to ensure the result accuracy.
+ // TODO: Resolve the HAVING clause within the IndexedTable before returning the result
+ int resultSize = _queryContext.getHavingFilter() != null ? trimSize : limit;
+ int trimThreshold = reducerContext.getGroupByTrimThreshold();
+ IndexedTable indexedTable;
+ if (numReduceThreadsToUse <= 1) {
+ indexedTable = new SimpleIndexedTable(dataSchema, _queryContext, resultSize, trimSize, trimThreshold);
+ } else {
+ if (trimThreshold >= GroupByOrderByCombineOperator.MAX_TRIM_THRESHOLD) {
+ // special case of trim threshold where it is set to max value.
+ // there won't be any trimming during upsert in this case.
+ // thus we can avoid the overhead of read-lock and write-lock
+ // in the upsert method.
+ indexedTable = new UnboundedConcurrentIndexedTable(dataSchema, _queryContext, resultSize);
+ } else {
+ indexedTable = new ConcurrentIndexedTable(dataSchema, _queryContext, resultSize, trimSize, trimThreshold);
+ }
+ }
+
+ Future[] futures = new Future[numDataTables];
+ CountDownLatch countDownLatch = new CountDownLatch(numDataTables);
+
+ // Create groups of data tables that each thread can process concurrently.
+ // Given that numReduceThreads is <= numDataTables, each group will have at least one data table.
+ ArrayList<DataTable> dataTables = new ArrayList<>(dataTablesToReduce);
+ List<List<DataTable>> reduceGroups = new ArrayList<>(numReduceThreadsToUse);
+
+ for (int i = 0; i < numReduceThreadsToUse; i++) {
+ reduceGroups.add(new ArrayList<>());
+ }
+ for (int i = 0; i < numDataTables; i++) {
+ reduceGroups.get(i % numReduceThreadsToUse).add(dataTables.get(i));
+ }
+
+ int cnt = 0;
+ ColumnDataType[] storedColumnDataTypes = dataSchema.getStoredColumnDataTypes();
+ int numColumns = storedColumnDataTypes.length;
+ for (List<DataTable> reduceGroup : reduceGroups) {
+ futures[cnt++] = reducerContext.getExecutorService().submit(new TraceRunnable() {
+ @Override
+ public void runJob() {
+ for (DataTable dataTable : reduceGroup) {
+ int numRows = dataTable.getNumberOfRows();
+
+ try {
+ for (int rowId = 0; rowId < numRows; rowId++) {
+ Object[] values = new Object[numColumns];
+ for (int colId = 0; colId < numColumns; colId++) {
+ switch (storedColumnDataTypes[colId]) {
+ case INT:
+ values[colId] = dataTable.getInt(rowId, colId);
+ break;
+ case LONG:
+ values[colId] = dataTable.getLong(rowId, colId);
+ break;
+ case FLOAT:
+ values[colId] = dataTable.getFloat(rowId, colId);
+ break;
+ case DOUBLE:
+ values[colId] = dataTable.getDouble(rowId, colId);
+ break;
+ case STRING:
+ values[colId] = dataTable.getString(rowId, colId);
+ break;
+ case BYTES:
+ values[colId] = dataTable.getBytes(rowId, colId);
+ break;
+ case OBJECT:
+ values[colId] = dataTable.getObject(rowId, colId);
+ break;
+ // Add other aggregation intermediate result / group-by column type supports here
+ default:
+ throw new IllegalStateException();
+ }
+ }
+ indexedTable.upsert(new Record(values));
+ }
+ } finally {
+ countDownLatch.countDown();
+ }
+ }
+ }
+ });
+ }
+
+ try {
+ long timeOutMs = reducerContext.getReduceTimeOutMs() - (System.currentTimeMillis() - start);
+ countDownLatch.await(timeOutMs, TimeUnit.MILLISECONDS);
+ } catch (InterruptedException e) {
+ for (Future future : futures) {
+ if (!future.isDone()) {
+ future.cancel(true);
+ }
+ }
+ throw new TimeoutException("Timed out in broker reduce phase.");
+ }
+
+ indexedTable.finish(true);
+ return indexedTable;
+ }
+
+ /**
+ * Three things happen here:
+ * 1. Sort the result sets from all Pinot servers based on timestamp.
+ * 2. Gap-fill the data for missing entities per time bucket.
+ * 3. Aggregate the dataset per time bucket.
+ */
+ @Override
+ public void reduceAndSetResults(String tableName, DataSchema dataSchema,
+ Map<ServerRoutingInstance, DataTable> dataTableMap, BrokerResponseNative brokerResponseNative,
+ DataTableReducerContext reducerContext, BrokerMetrics brokerMetrics) {
+ DataSchema resultTableSchema = getResultTableDataSchema(dataSchema);
+ if (dataTableMap.isEmpty()) {
+ brokerResponseNative.setResultTable(new ResultTable(resultTableSchema, Collections.emptyList()));
+ return;
+ }
+
+ String[] columns = dataSchema.getColumnNames();
+
+ Map<String, Integer> indexes = new HashMap<>();
+ for (int i = 0; i < columns.length; i++) {
+ indexes.put(columns[i], i);
+ }
+
+ _isGroupBySelections = new boolean[dataSchema.getColumnDataTypes().length];
+
+ // The first argument of timeSeries is the time column. The remaining arguments define the entity.
+ for (ExpressionContext entityColumn : _timeSeries) {
+ int index = indexes.get(entityColumn.getIdentifier());
+ _isGroupBySelections[index] = true;
+ _groupByKeyIndexes.add(index);
+ }
+
+ List<Object[]> sortedRawRows;
+ if (_gapfillType == GapfillUtils.GapfillType.GAP_FILL_AGGREGATE
+ || _gapfillType == GapfillUtils.GapfillType.GAP_FILL
+ || _gapfillType == GapfillUtils.GapfillType.GAP_FILL_SELECT) {
+ sortedRawRows = mergeAndSort(dataTableMap.values(), dataSchema);
+ } else {
+ try {
+ IndexedTable indexedTable = getIndexedTable(dataSchema, dataTableMap.values(), reducerContext);
+ sortedRawRows = mergeAndSort(indexedTable, dataSchema);
+ } catch (TimeoutException e) {
+ brokerResponseNative.getProcessingExceptions()
+ .add(new QueryProcessingException(QueryException.BROKER_TIMEOUT_ERROR_CODE, e.getMessage()));
+ return;
+ }
+ }
+ List<Object[]> resultRows;
+ replaceColumnNameWithAlias(dataSchema);
+ if (_queryContext.getAggregationFunctions() != null) {
+ validateGroupByForOuterQuery();
Review comment:
Fixed
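The reduce flow described in the class comment above (sort by timestamp, gap-fill missing entities per time bucket, aggregate per bucket) can be sketched as a standalone simplification. Everything below (the row layout, the carry-forward fill policy, and SUM as the aggregation) is illustrative only, not the PR's actual implementation:

```java
import java.util.*;

public class GapfillSketch {
  // Each row: {timestampMs (Long), entity (String), value (Double)}.
  public static Map<Long, Double> reduce(List<Object[]> rows, long startMs, long endMs, long bucketMs) {
    // Step 1: sort the merged server results by timestamp.
    rows.sort(Comparator.comparingLong(r -> (long) r[0]));

    // Collect every entity observed anywhere in the result set.
    Set<String> entities = new TreeSet<>();
    for (Object[] r : rows) {
      entities.add((String) r[1]);
    }

    Map<String, Double> previous = new HashMap<>(); // last observed value per entity
    Map<Long, Double> sumPerBucket = new LinkedHashMap<>();
    int i = 0;
    for (long bucket = startMs; bucket < endMs; bucket += bucketMs) {
      Map<String, Double> observed = new HashMap<>();
      // Consume the sorted rows that fall into this bucket.
      while (i < rows.size() && (long) rows.get(i)[0] < bucket + bucketMs) {
        observed.put((String) rows.get(i)[1], (Double) rows.get(i)[2]);
        i++;
      }
      // Step 2: gap-fill entities missing from this bucket with their previous value.
      double sum = 0;
      for (String entity : entities) {
        Double value = observed.getOrDefault(entity, previous.get(entity));
        if (value != null) {
          previous.put(entity, value);
          sum += value;
        }
      }
      // Step 3: aggregate per time bucket (SUM here; the real reducer is generic).
      sumPerBucket.put(bucket, sum);
    }
    return sumPerBucket;
  }
}
```

For rows (0, a, 1.0), (0, b, 2.0), (10, a, 3.0) with two 10 ms buckets, bucket 0 sums to 3.0, and bucket 10 gap-fills entity b with its previous value 2.0, summing to 5.0.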
[GitHub] [pinot] amrishlal commented on a change in pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
amrishlal commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r819826831
##########
File path: pinot-common/src/main/java/org/apache/pinot/sql/parsers/CalciteSqlParser.java
##########
@@ -369,6 +369,9 @@ private static PinotQuery compileCalciteSqlToPinotQuery(String sql) {
DataSource dataSource = new DataSource();
dataSource.setTableName(fromNode.toString());
pinotQuery.setDataSource(dataSource);
+ if (fromNode instanceof SqlSelect || fromNode instanceof SqlOrderBy) {
Review comment:
OK, thanks for the reference. I would suggest changing this logic to:
```
if (fromNode != null) {
  DataSource dataSource = new DataSource();
  if (fromNode instanceof SqlSelect || fromNode instanceof SqlOrderBy) {
    dataSource.setSubquery(compileSqlNodeToPinotQuery(fromNode));
  } else {
    dataSource.setTableName(fromNode.toString());
  }
  pinotQuery.setDataSource(dataSource);
}
```
[GitHub] [pinot] amrishlal commented on pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
amrishlal commented on pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#issuecomment-1061051085
> Do you have an example about how to add the integration test I can add? I do not see any query-related test against broker and server
Please see `OfflineClusterIntegrationTest`
[GitHub] [pinot] amrishlal commented on a change in pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
amrishlal commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r819020015
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/request/context/QueryContext.java
##########
@@ -133,6 +137,8 @@ private QueryContext(String tableName, List<ExpressionContext> selectExpressions
_queryOptions = queryOptions;
_debugOptions = debugOptions;
_brokerRequest = brokerRequest;
+ _gapfillType = null;
Review comment:
Instead of setting `_gapfillType` to null, can we set it to `GAP_FILL_NONE` as that will clearly show that no gap filling will happen by default?
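A minimal sketch of this suggestion, using hypothetical class and method names (the enum constants are illustrative; only the idea of an explicit `GAP_FILL_NONE` default comes from the review comment):

```java
// Defaulting the field to an explicit NONE constant instead of null makes
// "no gap filling" self-documenting and removes null checks at every read site.
public class QueryContextSketch {
  public enum GapfillType {
    GAP_FILL_NONE, GAP_FILL, GAP_FILL_SELECT, GAP_FILL_AGGREGATE,
    AGGREGATE_GAP_FILL, AGGREGATE_GAP_FILL_AGGREGATE
  }

  private GapfillType _gapfillType = GapfillType.GAP_FILL_NONE; // explicit default

  public void setGapfillType(GapfillType gapfillType) {
    _gapfillType = gapfillType;
  }

  public boolean isGapfillQuery() {
    // No null comparison needed; the default is a real enum constant.
    return _gapfillType != GapfillType.GAP_FILL_NONE;
  }
}
```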
##########
File path: pinot-core/src/test/java/org/apache/pinot/queries/GapfillQueriesTest.java
##########
@@ -0,0 +1,3615 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.queries;
+
+import java.io.File;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+import org.apache.commons.io.FileUtils;
+import org.apache.pinot.common.response.broker.BrokerResponseNative;
+import org.apache.pinot.common.response.broker.ResultTable;
+import org.apache.pinot.segment.local.indexsegment.immutable.ImmutableSegmentLoader;
+import org.apache.pinot.segment.local.segment.creator.impl.SegmentIndexCreationDriverImpl;
+import org.apache.pinot.segment.local.segment.readers.GenericRowRecordReader;
+import org.apache.pinot.segment.spi.ImmutableSegment;
+import org.apache.pinot.segment.spi.IndexSegment;
+import org.apache.pinot.segment.spi.creator.SegmentGeneratorConfig;
+import org.apache.pinot.spi.config.table.TableConfig;
+import org.apache.pinot.spi.config.table.TableType;
+import org.apache.pinot.spi.data.DateTimeFormatSpec;
+import org.apache.pinot.spi.data.DateTimeGranularitySpec;
+import org.apache.pinot.spi.data.FieldSpec.DataType;
+import org.apache.pinot.spi.data.Schema;
+import org.apache.pinot.spi.data.readers.GenericRow;
+import org.apache.pinot.spi.utils.ReadMode;
+import org.apache.pinot.spi.utils.builder.TableConfigBuilder;
+import org.testng.Assert;
+import org.testng.annotations.AfterClass;
+import org.testng.annotations.BeforeClass;
+import org.testng.annotations.Test;
+
+
+/**
+ * Queries test for Gapfill queries.
+ */
+// TODO: Item 1. table alias for subquery in next PR
+// TODO: Item 2. Deprecate PostAggregateGapfill implementation in next PR
+@SuppressWarnings("rawtypes")
+public class GapfillQueriesTest extends BaseQueriesTest {
+ private static final File INDEX_DIR = new File(FileUtils.getTempDirectory(), "PostAggregationGapfillQueriesTest");
Review comment:
Seems like `PostAggregationGapfillQueriesTest` should be replaced with `GapfillQueriesTest`
##########
File path: pinot-common/src/main/java/org/apache/pinot/sql/parsers/CalciteSqlParser.java
##########
@@ -369,6 +369,9 @@ private static PinotQuery compileCalciteSqlToPinotQuery(String sql) {
DataSource dataSource = new DataSource();
dataSource.setTableName(fromNode.toString());
pinotQuery.setDataSource(dataSource);
+ if (fromNode instanceof SqlSelect || fromNode instanceof SqlOrderBy) {
Review comment:
Not sure if I understand why `fromNode` would be an instance of SqlOrderBy? I don't think we can have an SQL statement with an ORDER BY clause inside the FROM clause (`SELECT x FROM ORDER BY x`)
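For context, Calcite represents a SELECT that carries its own ORDER BY as a SqlOrderBy node wrapping the underlying SqlSelect. So a FROM-clause subquery like the following (hypothetical table and column names) would arrive at this check as a SqlOrderBy rather than a SqlSelect:
```
SELECT x
FROM (
  SELECT x FROM myTable ORDER BY x LIMIT 10
)
```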
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/request/context/utils/BrokerRequestToQueryContextConverter.java
##########
@@ -53,12 +55,81 @@ private BrokerRequestToQueryContextConverter() {
* Converts the given {@link BrokerRequest} into a {@link QueryContext}.
*/
public static QueryContext convert(BrokerRequest brokerRequest) {
- return brokerRequest.getPinotQuery() != null ? convertSQL(brokerRequest) : convertPQL(brokerRequest);
+ if (brokerRequest.getPinotQuery() != null) {
+ QueryContext queryContext = convertSQL(brokerRequest.getPinotQuery(), brokerRequest);
+ queryContext.setGapfillType(GapfillUtils.getGapfillType(queryContext));
Review comment:
Seems like this should be done using the `QueryContext.Builder` pattern inside of `convertSQL` function?
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/request/context/utils/BrokerRequestToQueryContextConverter.java
##########
@@ -53,12 +55,81 @@ private BrokerRequestToQueryContextConverter() {
* Converts the given {@link BrokerRequest} into a {@link QueryContext}.
*/
public static QueryContext convert(BrokerRequest brokerRequest) {
- return brokerRequest.getPinotQuery() != null ? convertSQL(brokerRequest) : convertPQL(brokerRequest);
+ if (brokerRequest.getPinotQuery() != null) {
+ QueryContext queryContext = convertSQL(brokerRequest.getPinotQuery(), brokerRequest);
+ queryContext.setGapfillType(GapfillUtils.getGapfillType(queryContext));
+ validateForGapfillQuery(queryContext);
Review comment:
It seems like this validation is just checking syntax. Can it be done on the Broker? If I am not mistaken, almost all of Pinot's syntax checks happen on the Broker side.
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/util/GapfillUtils.java
##########
@@ -119,4 +130,138 @@ static public Serializable getDefaultValue(DataSchema.ColumnDataType dataType) {
private static String canonicalizeFunctionName(String functionName) {
return StringUtils.remove(functionName, '_').toLowerCase();
}
+
+ public static boolean isGapfill(ExpressionContext expressionContext) {
+ if (expressionContext.getType() != ExpressionContext.Type.FUNCTION) {
+ return false;
+ }
+
+ return GAP_FILL.equals(canonicalizeFunctionName(expressionContext.getFunction().getFunctionName()));
+ }
+
+ private static boolean isGapfill(QueryContext queryContext) {
+ for (ExpressionContext expressionContext : queryContext.getSelectExpressions()) {
+ if (isGapfill(expressionContext)) {
+ return true;
+ }
+ }
+ return false;
+ }
+
+ public static GapfillType getGapfillType(QueryContext queryContext) {
+ if (queryContext.getSubQueryContext() == null) {
+ if (isGapfill(queryContext)) {
+ Preconditions.checkArgument(queryContext.getAggregationFunctions() == null,
+ "Aggregation and Gapfill can not be in the same sql statement.");
Review comment:
Would it be possible to do these syntax checks all together in one place on the broker side when the initial query is received? That way bad queries can quickly be filtered out and we don't have to do syntax checks deep inside code logic.
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/util/GapfillUtils.java
##########
@@ -31,7 +36,10 @@
*/
public class GapfillUtils {
private static final String POST_AGGREGATE_GAP_FILL = "postaggregategapfill";
Review comment:
Is this a leftover from the previous implementation? I thought the `postaggregategapfill` function was no longer being used? I don't see any of the test cases using the `postaggregategapfill` function.
[GitHub] [pinot] codecov-commenter edited a comment on pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#issuecomment-1013577523
# [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) Report
> Merging [#8029](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (4666247) into [master](https://codecov.io/gh/apache/pinot/commit/3f98ce37fdaef0335fcd82e621489d65751b1f55?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (3f98ce3) will **decrease** coverage by `40.10%`.
> The diff coverage is `26.66%`.
> :exclamation: Current head 4666247 differs from pull request most recent head 1c1ba84. Consider uploading reports for the commit 1c1ba84 to get more accurate results
[![Impacted file tree graph](https://codecov.io/gh/apache/pinot/pull/8029/graphs/tree.svg?width=650&height=150&src=pr&token=4ibza2ugkz&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
```diff
@@              Coverage Diff              @@
##             master    #8029       +/-   ##
=============================================
- Coverage     70.72%   30.62%   -40.11%
=============================================
  Files          1631     1629        -2
  Lines         85279    85728      +449
  Branches      12844    12997      +153
=============================================
- Hits          60316    26256    -34060
- Misses        20799    57098    +36299
+ Partials       4164     2374     -1790
```
| Flag | Coverage Δ | |
|---|---|---|
| integration1 | `28.89% <25.72%> (+0.20%)` | :arrow_up: |
| integration2 | `27.40% <22.45%> (-0.10%)` | :arrow_down: |
| unittests1 | `?` | |
| unittests2 | `?` | |
Flags with carried forward coverage won't be shown. [Click here](https://docs.codecov.io/docs/carryforward-flags?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#carryforward-flags-in-the-pull-request-comment) to find out more.
| [Impacted Files](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | Coverage Δ | |
|---|---|---|
| [...roker/requesthandler/BaseBrokerRequestHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtYnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9icm9rZXIvcmVxdWVzdGhhbmRsZXIvQmFzZUJyb2tlclJlcXVlc3RIYW5kbGVyLmphdmE=) | `62.92% <0.00%> (-8.93%)` | :arrow_down: |
| [.../org/apache/pinot/core/common/MinionConstants.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9jb21tb24vTWluaW9uQ29uc3RhbnRzLmphdmE=) | `0.00% <ø> (ø)` | |
| [...manager/realtime/LLRealtimeSegmentDataManager.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9kYXRhL21hbmFnZXIvcmVhbHRpbWUvTExSZWFsdGltZVNlZ21lbnREYXRhTWFuYWdlci5qYXZh) | `58.72% <ø> (-12.77%)` | :arrow_down: |
| [...a/manager/realtime/RealtimeSegmentDataManager.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9kYXRhL21hbmFnZXIvcmVhbHRpbWUvUmVhbHRpbWVTZWdtZW50RGF0YU1hbmFnZXIuamF2YQ==) | `50.00% <ø> (ø)` | |
| [...ata/manager/realtime/RealtimeTableDataManager.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9kYXRhL21hbmFnZXIvcmVhbHRpbWUvUmVhbHRpbWVUYWJsZURhdGFNYW5hZ2VyLmphdmE=) | `67.46% <ø> (-0.81%)` | :arrow_down: |
| [...ava/org/apache/pinot/core/plan/FilterPlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL0ZpbHRlclBsYW5Ob2RlLmphdmE=) | `57.00% <ø> (-32.72%)` | :arrow_down: |
| [...ache/pinot/core/plan/GapfillSelectionPlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL0dhcGZpbGxTZWxlY3Rpb25QbGFuTm9kZS5qYXZh) | `0.00% <0.00%> (ø)` | |
| [...query/reduce/ColumnDataToBlockValSetConverter.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvQ29sdW1uRGF0YVRvQmxvY2tWYWxTZXRDb252ZXJ0ZXIuamF2YQ==) | `0.00% <0.00%> (ø)` | |
| [...not/core/query/reduce/GapFillDataTableReducer.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvR2FwRmlsbERhdGFUYWJsZVJlZHVjZXIuamF2YQ==) | `0.00% <0.00%> (ø)` | |
| [.../pinot/core/query/reduce/GapfillFilterHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvR2FwZmlsbEZpbHRlckhhbmRsZXIuamF2YQ==) | `0.00% <0.00%> (ø)` | |
| ... and [1209 more](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Last update [3f98ce3...1c1ba84](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: commits-unsubscribe@pinot.apache.org
For queries about this service, please contact Infrastructure at:
users@infra.apache.org
[GitHub] [pinot] amrishlal commented on a change in pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
amrishlal commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r821031097
##########
File path: pinot-common/src/main/java/org/apache/pinot/sql/parsers/CalciteSqlParser.java
##########
@@ -369,6 +369,9 @@ private static PinotQuery compileCalciteSqlToPinotQuery(String sql) {
DataSource dataSource = new DataSource();
dataSource.setTableName(fromNode.toString());
pinotQuery.setDataSource(dataSource);
+ if (fromNode instanceof SqlSelect || fromNode instanceof SqlOrderBy) {
Review comment:
My understanding is that by the time the reducer code is called, all the subqueries have been flattened into a single query? In that case you would need to set an appropriate table name for the reducer (or make other changes so that the query remains semantically correct).
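For context on what the reducer under review computes, the core idea — one row per entity per time bucket, with missing buckets filled by carrying forward the entity's last observed value — can be sketched as a self-contained toy. All identifiers below are illustrative, not the PR's actual `GapFillDataTableReducer` API:

```java
import java.util.*;

public class GapfillSketch {
    // Carry-forward gap fill: for each time bucket in [startMs, endMs), emit the
    // observed value per entity, or repeat the entity's previous value when the
    // bucket has no row for that entity.
    static Map<Long, Map<String, Double>> gapfill(
            Map<Long, Map<String, Double>> raw, long startMs, long endMs, long bucketMs,
            Set<String> entities) {
        Map<Long, Map<String, Double>> filled = new LinkedHashMap<>();
        Map<String, Double> previous = new HashMap<>();
        for (long t = startMs; t < endMs; t += bucketMs) {
            Map<String, Double> bucket = raw.getOrDefault(t, Collections.emptyMap());
            Map<String, Double> out = new LinkedHashMap<>();
            for (String entity : entities) {
                Double v = bucket.get(entity);
                if (v == null) {
                    v = previous.get(entity); // gap: reuse the last seen value
                }
                if (v != null) {
                    out.put(entity, v);
                    previous.put(entity, v);
                }
            }
            filled.put(t, out);
        }
        return filled;
    }

    public static void main(String[] args) {
        Map<Long, Map<String, Double>> raw = new HashMap<>();
        raw.put(0L, Map.of("a", 1.0, "b", 2.0));
        raw.put(2000L, Map.of("a", 3.0)); // "b" is missing in this bucket
        Map<Long, Map<String, Double>> filled =
                gapfill(raw, 0L, 3000L, 1000L, Set.of("a", "b"));
        System.out.println(filled.get(2000L).get("b")); // prints 2.0 (carried forward from t=0)
    }
}
```

In the PR itself, the fill strategy is configurable per column (the `_fillExpressions` map in the reducer); carry-forward is just the simplest case to illustrate.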
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r819937297
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/reduce/GapFillDataTableReducer.java
##########
@@ -0,0 +1,690 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.core.query.reduce;
+
+import com.google.common.base.Preconditions;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.CountDownLatch;
+import java.util.concurrent.Future;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.TimeoutException;
+import org.apache.pinot.common.exception.QueryException;
+import org.apache.pinot.common.metrics.BrokerMetrics;
+import org.apache.pinot.common.request.context.ExpressionContext;
+import org.apache.pinot.common.request.context.FunctionContext;
+import org.apache.pinot.common.response.broker.BrokerResponseNative;
+import org.apache.pinot.common.response.broker.QueryProcessingException;
+import org.apache.pinot.common.response.broker.ResultTable;
+import org.apache.pinot.common.utils.DataSchema;
+import org.apache.pinot.common.utils.DataSchema.ColumnDataType;
+import org.apache.pinot.common.utils.DataTable;
+import org.apache.pinot.core.common.BlockValSet;
+import org.apache.pinot.core.data.table.ConcurrentIndexedTable;
+import org.apache.pinot.core.data.table.IndexedTable;
+import org.apache.pinot.core.data.table.Key;
+import org.apache.pinot.core.data.table.Record;
+import org.apache.pinot.core.data.table.SimpleIndexedTable;
+import org.apache.pinot.core.data.table.UnboundedConcurrentIndexedTable;
+import org.apache.pinot.core.operator.combine.GroupByOrderByCombineOperator;
+import org.apache.pinot.core.query.aggregation.function.AggregationFunction;
+import org.apache.pinot.core.query.aggregation.function.AggregationFunctionFactory;
+import org.apache.pinot.core.query.aggregation.groupby.GroupByResultHolder;
+import org.apache.pinot.core.query.request.context.QueryContext;
+import org.apache.pinot.core.query.selection.SelectionOperatorUtils;
+import org.apache.pinot.core.transport.ServerRoutingInstance;
+import org.apache.pinot.core.util.GapfillUtils;
+import org.apache.pinot.core.util.GroupByUtils;
+import org.apache.pinot.core.util.trace.TraceRunnable;
+import org.apache.pinot.spi.data.DateTimeFormatSpec;
+import org.apache.pinot.spi.data.DateTimeGranularitySpec;
+
+
+/**
+ * Helper class to reduce and set gap fill results into the BrokerResponseNative
+ */
+@SuppressWarnings({"rawtypes", "unchecked"})
+public class GapFillDataTableReducer implements DataTableReducer {
+ private static final int MIN_DATA_TABLES_FOR_CONCURRENT_REDUCE = 2; // TBD, find a better value.
+
+ private final QueryContext _queryContext;
+
+ private final int _limitForAggregatedResult;
+ private final DateTimeGranularitySpec _dateTimeGranularity;
+ private final DateTimeFormatSpec _dateTimeFormatter;
+ private final long _startMs;
+ private final long _endMs;
+ private final long _timeBucketSize;
+ private final int _numOfTimeBuckets;
+ private final List<Integer> _groupByKeyIndexes;
+ private final Set<Key> _groupByKeys;
+ private final Map<Key, Object[]> _previousByGroupKey;
+ private final Map<String, ExpressionContext> _fillExpressions;
+ private final List<ExpressionContext> _timeSeries;
+ private final GapfillUtils.GapfillType _gapfillType;
+ private int _limitForGapfilledResult;
+ private boolean[] _isGroupBySelections;
+
+ GapFillDataTableReducer(QueryContext queryContext) {
+ _queryContext = queryContext;
+ _gapfillType = queryContext.getGapfillType();
+ _limitForAggregatedResult = queryContext.getLimit();
+ if (_gapfillType == GapfillUtils.GapfillType.AGGREGATE_GAP_FILL
+ || _gapfillType == GapfillUtils.GapfillType.GAP_FILL) {
+ _limitForGapfilledResult = queryContext.getLimit();
+ } else {
+ _limitForGapfilledResult = queryContext.getSubQueryContext().getLimit();
+ }
+
+ ExpressionContext gapFillSelection = GapfillUtils.getGapfillExpressionContext(queryContext);
+
+ List<ExpressionContext> args = gapFillSelection.getFunction().getArguments();
+
+ _dateTimeFormatter = new DateTimeFormatSpec(args.get(1).getLiteral());
+ _dateTimeGranularity = new DateTimeGranularitySpec(args.get(4).getLiteral());
+ String start = args.get(2).getLiteral();
+ _startMs = truncate(_dateTimeFormatter.fromFormatToMillis(start));
+ String end = args.get(3).getLiteral();
+ _endMs = truncate(_dateTimeFormatter.fromFormatToMillis(end));
+ _timeBucketSize = _dateTimeGranularity.granularityToMillis();
+ _numOfTimeBuckets = (int) ((_endMs - _startMs) / _timeBucketSize);
+
+ _fillExpressions = GapfillUtils.getFillExpressions(gapFillSelection);
+
+ _previousByGroupKey = new HashMap<>();
+ _groupByKeyIndexes = new ArrayList<>();
+ _groupByKeys = new HashSet<>();
+
+ ExpressionContext timeseriesOn = GapfillUtils.getTimeSeriesOnExpressionContext(gapFillSelection);
+ _timeSeries = timeseriesOn.getFunction().getArguments();
+ }
+
+ private int findBucketIndex(long time) {
+ return (int) ((time - _startMs) / _timeBucketSize);
+ }
+
+ private void replaceColumnNameWithAlias(DataSchema dataSchema) {
+ QueryContext queryContext;
+ if (_gapfillType == GapfillUtils.GapfillType.AGGREGATE_GAP_FILL_AGGREGATE) {
+ queryContext = _queryContext.getSubQueryContext().getSubQueryContext();
+ } else if (_gapfillType == GapfillUtils.GapfillType.GAP_FILL) {
+ queryContext = _queryContext;
+ } else {
+ queryContext = _queryContext.getSubQueryContext();
+ }
+ List<String> aliasList = queryContext.getAliasList();
+ Map<String, String> columnNameToAliasMap = new HashMap<>();
+ for (int i = 0; i < aliasList.size(); i++) {
+ if (aliasList.get(i) != null) {
+ ExpressionContext selection = queryContext.getSelectExpressions().get(i);
+ if (GapfillUtils.isGapfill(selection)) {
+ selection = selection.getFunction().getArguments().get(0);
+ }
+ columnNameToAliasMap.put(selection.toString(), aliasList.get(i));
+ }
+ }
+ for (int i = 0; i < dataSchema.getColumnNames().length; i++) {
+ if (columnNameToAliasMap.containsKey(dataSchema.getColumnNames()[i])) {
+ dataSchema.getColumnNames()[i] = columnNameToAliasMap.get(dataSchema.getColumnNames()[i]);
+ }
+ }
+ }
+
+ /**
+ * Computes the number of reduce threads to use per query.
+ * <ul>
+ * <li> Use single thread if number of data tables to reduce is less than
+ * {@value #MIN_DATA_TABLES_FOR_CONCURRENT_REDUCE}.</li>
+ * <li> Else, use min of max allowed reduce threads per query, and number of data tables.</li>
+ * </ul>
+ *
+ * @param numDataTables Number of data tables to reduce
+ * @param maxReduceThreadsPerQuery Max allowed reduce threads per query
+ * @return Number of reduce threads to use for the query
+ */
+ private int getNumReduceThreadsToUse(int numDataTables, int maxReduceThreadsPerQuery) {
+ // Use single thread if number of data tables < MIN_DATA_TABLES_FOR_CONCURRENT_REDUCE.
+ if (numDataTables < MIN_DATA_TABLES_FOR_CONCURRENT_REDUCE) {
+ return Math.min(1, numDataTables); // Number of data tables can be zero.
+ }
+
+ return Math.min(maxReduceThreadsPerQuery, numDataTables);
+ }
+
+ private IndexedTable getIndexedTable(DataSchema dataSchema, Collection<DataTable> dataTablesToReduce,
+ DataTableReducerContext reducerContext)
+ throws TimeoutException {
+ QueryContext queryContext = _queryContext.getSubQueryContext();
+ if (_queryContext.getGapfillType() == GapfillUtils.GapfillType.AGGREGATE_GAP_FILL_AGGREGATE) {
+ queryContext = queryContext.getSubQueryContext();
+ }
+ long start = System.currentTimeMillis();
+ int numDataTables = dataTablesToReduce.size();
+
+ // Get the number of threads to use for reducing.
+ // In case of single reduce thread, fall back to SimpleIndexedTable to avoid redundant locking/unlocking calls.
+ int numReduceThreadsToUse = getNumReduceThreadsToUse(numDataTables, reducerContext.getMaxReduceThreadsPerQuery());
+ int limit = queryContext.getLimit();
+ // TODO: Make minTrimSize configurable
+ int trimSize = GroupByUtils.getTableCapacity(limit);
+ // NOTE: For query with HAVING clause, use trimSize as resultSize to ensure the result accuracy.
+ // TODO: Resolve the HAVING clause within the IndexedTable before returning the result
+ int resultSize = queryContext.getHavingFilter() != null ? trimSize : limit;
+ int trimThreshold = reducerContext.getGroupByTrimThreshold();
+ IndexedTable indexedTable;
+ if (numReduceThreadsToUse <= 1) {
+ indexedTable = new SimpleIndexedTable(dataSchema, queryContext, resultSize, trimSize, trimThreshold);
+ } else {
+ if (trimThreshold >= GroupByOrderByCombineOperator.MAX_TRIM_THRESHOLD) {
+ // Special case where the trim threshold is set to the max value:
+ // there won't be any trimming during upsert, so we can avoid the
+ // overhead of the read/write locks in the upsert method.
+ indexedTable = new UnboundedConcurrentIndexedTable(dataSchema, queryContext, resultSize);
+ } else {
+ indexedTable = new ConcurrentIndexedTable(dataSchema, queryContext, resultSize, trimSize, trimThreshold);
+ }
+ }
+
+ Future[] futures = new Future[numDataTables];
+ CountDownLatch countDownLatch = new CountDownLatch(numDataTables);
+
+ // Create groups of data tables that each thread can process concurrently.
+ // Given that numReduceThreads is <= numDataTables, each group will have at least one data table.
+ ArrayList<DataTable> dataTables = new ArrayList<>(dataTablesToReduce);
+ List<List<DataTable>> reduceGroups = new ArrayList<>(numReduceThreadsToUse);
+
+ for (int i = 0; i < numReduceThreadsToUse; i++) {
+ reduceGroups.add(new ArrayList<>());
+ }
+ for (int i = 0; i < numDataTables; i++) {
+ reduceGroups.get(i % numReduceThreadsToUse).add(dataTables.get(i));
+ }
+
+ int cnt = 0;
+ ColumnDataType[] storedColumnDataTypes = dataSchema.getStoredColumnDataTypes();
+ int numColumns = storedColumnDataTypes.length;
+ for (List<DataTable> reduceGroup : reduceGroups) {
+ futures[cnt++] = reducerContext.getExecutorService().submit(new TraceRunnable() {
+ @Override
+ public void runJob() {
+ for (DataTable dataTable : reduceGroup) {
+ int numRows = dataTable.getNumberOfRows();
+
+ try {
+ for (int rowId = 0; rowId < numRows; rowId++) {
+ Object[] values = new Object[numColumns];
+ for (int colId = 0; colId < numColumns; colId++) {
+ switch (storedColumnDataTypes[colId]) {
+ case INT:
+ values[colId] = dataTable.getInt(rowId, colId);
+ break;
+ case LONG:
+ values[colId] = dataTable.getLong(rowId, colId);
+ break;
+ case FLOAT:
+ values[colId] = dataTable.getFloat(rowId, colId);
+ break;
+ case DOUBLE:
+ values[colId] = dataTable.getDouble(rowId, colId);
+ break;
+ case STRING:
+ values[colId] = dataTable.getString(rowId, colId);
+ break;
+ case BYTES:
+ values[colId] = dataTable.getBytes(rowId, colId);
+ break;
+ case OBJECT:
+ values[colId] = dataTable.getObject(rowId, colId);
+ break;
+ // Add other aggregation intermediate result / group-by column type supports here
+ default:
+ throw new IllegalStateException();
+ }
+ }
+ indexedTable.upsert(new Record(values));
+ }
+ } finally {
+ countDownLatch.countDown();
+ }
+ }
+ }
+ });
+ }
+
+ try {
+ long timeOutMs = reducerContext.getReduceTimeOutMs() - (System.currentTimeMillis() - start);
+ countDownLatch.await(timeOutMs, TimeUnit.MILLISECONDS);
+ } catch (InterruptedException e) {
+ for (Future future : futures) {
+ if (!future.isDone()) {
+ future.cancel(true);
+ }
+ }
+ throw new TimeoutException("Timed out in broker reduce phase.");
+ }
+
+ indexedTable.finish(true);
+ return indexedTable;
+ }
+
+ /**
+ * Three things happen here:
+ * 1. Sort the result sets from all Pinot servers by timestamp.
+ * 2. Gap-fill the data for missing entities per time bucket.
+ * 3. Aggregate the dataset per time bucket.
+ */
+ @Override
+ public void reduceAndSetResults(String tableName, DataSchema dataSchema,
+ Map<ServerRoutingInstance, DataTable> dataTableMap, BrokerResponseNative brokerResponseNative,
+ DataTableReducerContext reducerContext, BrokerMetrics brokerMetrics) {
+ DataSchema resultTableSchema = getResultTableDataSchema(dataSchema);
+ if (dataTableMap.isEmpty()) {
+ brokerResponseNative.setResultTable(new ResultTable(resultTableSchema, Collections.emptyList()));
+ return;
+ }
+
+ String[] columns = dataSchema.getColumnNames();
+
+ Map<String, Integer> indexes = new HashMap<>();
+ for (int i = 0; i < columns.length; i++) {
+ indexes.put(columns[i], i);
+ }
+
+ _isGroupBySelections = new boolean[dataSchema.getColumnDataTypes().length];
+
+ // The first argument of timeSeries is the time column. The remaining arguments define the entity.
+ for (ExpressionContext entityColumn : _timeSeries) {
+ int index = indexes.get(entityColumn.getIdentifier());
+ _isGroupBySelections[index] = true;
+ _groupByKeyIndexes.add(index);
+ }
+
+ List<Object[]>[] timeBucketedRawRows;
+ if (_gapfillType == GapfillUtils.GapfillType.GAP_FILL_AGGREGATE || _gapfillType == GapfillUtils.GapfillType.GAP_FILL
+ || _gapfillType == GapfillUtils.GapfillType.GAP_FILL_SELECT) {
+ timeBucketedRawRows = putRawRowsIntoTimeBucket(dataTableMap.values());
+ } else {
+ try {
+ IndexedTable indexedTable = getIndexedTable(dataSchema, dataTableMap.values(), reducerContext);
+ timeBucketedRawRows = putRawRowsIntoTimeBucket(indexedTable);
+ } catch (TimeoutException e) {
+ brokerResponseNative.getProcessingExceptions()
+ .add(new QueryProcessingException(QueryException.BROKER_TIMEOUT_ERROR_CODE, e.getMessage()));
+ return;
+ }
+ }
+ List<Object[]> resultRows;
+ replaceColumnNameWithAlias(dataSchema);
+
+ if (_gapfillType == GapfillUtils.GapfillType.GAP_FILL_AGGREGATE || _gapfillType == GapfillUtils.GapfillType.GAP_FILL
+ || _gapfillType == GapfillUtils.GapfillType.GAP_FILL_SELECT) {
+ List<Object[]> gapfilledRows = gapFillAndAggregate(timeBucketedRawRows, resultTableSchema, dataSchema);
+ if (_gapfillType == GapfillUtils.GapfillType.GAP_FILL_SELECT) {
+ List<String> selectionColumns = SelectionOperatorUtils.getSelectionColumns(_queryContext, dataSchema);
+ resultRows = new ArrayList<>(gapfilledRows.size());
+
+ Map<String, Integer> columnNameToIndexMap = new HashMap<>(dataSchema.getColumnNames().length);
+ String[] columnNames = dataSchema.getColumnNames();
+ for (int i = 0; i < columnNames.length; i++) {
+ columnNameToIndexMap.put(columnNames[i], i);
+ }
+
+ ColumnDataType[] columnDataTypes = dataSchema.getColumnDataTypes();
+ ColumnDataType[] resultColumnDataTypes = new ColumnDataType[selectionColumns.size()];
+ for (int i = 0; i < resultColumnDataTypes.length; i++) {
+ String name = selectionColumns.get(i);
+ int index = columnNameToIndexMap.get(name);
+ resultColumnDataTypes[i] = columnDataTypes[index];
+ }
+
+ for (Object[] row : gapfilledRows) {
+ Object[] resultRow = new Object[selectionColumns.size()];
+ for (int i = 0; i < selectionColumns.size(); i++) {
+ int index = columnNameToIndexMap.get(selectionColumns.get(i));
+ resultRow[i] = resultColumnDataTypes[i].convertAndFormat(row[index]);
+ }
+ resultRows.add(resultRow);
+ }
+ } else {
+ resultRows = gapfilledRows;
+ }
+ } else {
+ this.setupColumnTypeForAggregatedColum(dataSchema.getColumnDataTypes());
+ ColumnDataType[] columnDataTypes = dataSchema.getColumnDataTypes();
+ for (List<Object[]> rawRowsForTimeBucket : timeBucketedRawRows) {
+ if (rawRowsForTimeBucket == null) {
+ continue;
+ }
+ for (Object[] row : rawRowsForTimeBucket) {
+ extractFinalAggregationResults(row);
+ for (int i = 0; i < columnDataTypes.length; i++) {
+ row[i] = columnDataTypes[i].convert(row[i]);
+ }
+ }
+ }
+ resultRows = gapFillAndAggregate(timeBucketedRawRows, resultTableSchema, dataSchema);
+ }
+ brokerResponseNative.setResultTable(new ResultTable(resultTableSchema, resultRows));
+ }
+
+ private void extractFinalAggregationResults(Object[] row) {
+ AggregationFunction[] aggregationFunctions;
+ if (_gapfillType == GapfillUtils.GapfillType.AGGREGATE_GAP_FILL) {
+ aggregationFunctions = _queryContext.getSubQueryContext().getAggregationFunctions();
+ } else {
+ aggregationFunctions = _queryContext.getSubQueryContext().getSubQueryContext().getAggregationFunctions();
+ }
+ int numAggregationFunctionsForInnerQuery = aggregationFunctions == null ? 0 : aggregationFunctions.length;
+ for (int i = 0; i < numAggregationFunctionsForInnerQuery; i++) {
+ int valueIndex = _timeSeries.size() + 1 + i;
+ row[valueIndex] = aggregationFunctions[i].extractFinalResult(row[valueIndex]);
+ }
+ }
+
+ private void setupColumnTypeForAggregatedColum(ColumnDataType[] columnDataTypes) {
+ AggregationFunction[] aggregationFunctions;
+ if (_gapfillType == GapfillUtils.GapfillType.AGGREGATE_GAP_FILL) {
+ aggregationFunctions = _queryContext.getSubQueryContext().getAggregationFunctions();
+ } else {
+ aggregationFunctions = _queryContext.getSubQueryContext().getSubQueryContext().getAggregationFunctions();
+ }
+ int numAggregationFunctionsForInnerQuery = aggregationFunctions == null ? 0 : aggregationFunctions.length;
+ for (int i = 0; i < numAggregationFunctionsForInnerQuery; i++) {
+ columnDataTypes[_timeSeries.size() + 1 + i] = aggregationFunctions[i].getFinalResultColumnType();
+ }
+ }
+
+ /**
+ * Constructs the DataSchema for the ResultTable.
+ */
+ private DataSchema getResultTableDataSchema(DataSchema dataSchema) {
+ if (_gapfillType == GapfillUtils.GapfillType.GAP_FILL) {
+ return dataSchema;
+ }
+
+ int numOfColumns = _queryContext.getSelectExpressions().size();
+ String[] columnNames = new String[numOfColumns];
+ ColumnDataType[] columnDataTypes = new ColumnDataType[numOfColumns];
+ for (int i = 0; i < numOfColumns; i++) {
+ ExpressionContext expressionContext = _queryContext.getSelectExpressions().get(i);
+ if (GapfillUtils.isGapfill(expressionContext)) {
+ expressionContext = expressionContext.getFunction().getArguments().get(0);
+ }
+ if (expressionContext.getType() != ExpressionContext.Type.FUNCTION) {
+ columnNames[i] = expressionContext.getIdentifier();
+ columnDataTypes[i] = ColumnDataType.STRING;
+ } else {
+ FunctionContext functionContext = expressionContext.getFunction();
+ AggregationFunction aggregationFunction =
+ AggregationFunctionFactory.getAggregationFunction(functionContext, _queryContext);
+ columnDataTypes[i] = aggregationFunction.getFinalResultColumnType();
+ columnNames[i] = functionContext.toString();
+ }
+ }
+ return new DataSchema(columnNames, columnDataTypes);
+ }
+
+ private Key constructGroupKeys(Object[] row) {
+ Object[] groupKeys = new Object[_groupByKeyIndexes.size()];
+ for (int i = 0; i < _groupByKeyIndexes.size(); i++) {
+ groupKeys[i] = row[_groupByKeyIndexes.get(i)];
+ }
+ return new Key(groupKeys);
+ }
+
+ private long truncate(long epoch) {
+ int sz = _dateTimeGranularity.getSize();
+ return epoch / sz * sz;
+ }
+
+ private List<Object[]> gapFillAndAggregate(List<Object[]>[] timeBucketedRawRows,
+ DataSchema dataSchemaForAggregatedResult, DataSchema dataSchema) {
+ List<Object[]> result = new ArrayList<>();
+
+ GapfillFilterHandler postGapfillFilterHandler = null;
+ if (_queryContext.getSubQueryContext() != null && _queryContext.getFilter() != null) {
+ postGapfillFilterHandler = new GapfillFilterHandler(_queryContext.getFilter(), dataSchema);
+ }
+ GapfillFilterHandler postAggregateHavingFilterHandler = null;
+ if (_queryContext.getHavingFilter() != null) {
+ postAggregateHavingFilterHandler =
+ new GapfillFilterHandler(_queryContext.getHavingFilter(), dataSchemaForAggregatedResult);
+ }
+ for (long time = _startMs; time < _endMs; time += _timeBucketSize) {
+ int index = findBucketIndex(time);
+ List<Object[]> bucketedResult = gapfill(time, timeBucketedRawRows[index], dataSchema, postGapfillFilterHandler);
+ if (_queryContext.getAggregationFunctions() == null) {
+ result.addAll(bucketedResult);
+ } else if (bucketedResult.size() > 0) {
+ List<Object[]> aggregatedRows = aggregateGapfilledData(bucketedResult, dataSchema);
+ for (Object[] aggregatedRow : aggregatedRows) {
+ if (postAggregateHavingFilterHandler == null || postAggregateHavingFilterHandler.isMatch(aggregatedRow)) {
+ result.add(aggregatedRow);
+ }
+ if (result.size() >= _limitForAggregatedResult) {
+ return result;
+ }
+ }
+ }
+ }
+ return result;
+ }
+
+ private List<Object[]> gapfill(long bucketTime, List<Object[]> rawRowsForBucket, DataSchema dataSchema,
+ GapfillFilterHandler postGapfillFilterHandler) {
+ List<Object[]> bucketedResult = new ArrayList<>();
+ ColumnDataType[] resultColumnDataTypes = dataSchema.getColumnDataTypes();
+ int numResultColumns = resultColumnDataTypes.length;
+ Set<Key> keys = new HashSet<>(_groupByKeys);
+
+ if (rawRowsForBucket != null) {
+ for (Object[] resultRow : rawRowsForBucket) {
+ for (int i = 0; i < resultColumnDataTypes.length; i++) {
+ resultRow[i] = resultColumnDataTypes[i].format(resultRow[i]);
+ }
+
+ long timeCol = _dateTimeFormatter.fromFormatToMillis(String.valueOf(resultRow[0]));
+ if (timeCol > bucketTime) {
+ break;
+ }
+ if (timeCol == bucketTime) {
+ if (postGapfillFilterHandler == null || postGapfillFilterHandler.isMatch(resultRow)) {
+ if (bucketedResult.size() >= _limitForGapfilledResult) {
+ _limitForGapfilledResult = 0;
+ break;
+ } else {
+ bucketedResult.add(resultRow);
+ }
+ }
+ Key key = constructGroupKeys(resultRow);
+ keys.remove(key);
+ _previousByGroupKey.put(key, resultRow);
+ }
+ }
+ }
+
+ for (Key key : keys) {
+ Object[] gapfillRow = new Object[numResultColumns];
+ int keyIndex = 0;
+ if (resultColumnDataTypes[0] == ColumnDataType.LONG) {
+ gapfillRow[0] = Long.valueOf(_dateTimeFormatter.fromMillisToFormat(bucketTime));
+ } else {
+ gapfillRow[0] = _dateTimeFormatter.fromMillisToFormat(bucketTime);
+ }
+ for (int i = 1; i < _isGroupBySelections.length; i++) {
+ if (_isGroupBySelections[i]) {
+ gapfillRow[i] = key.getValues()[keyIndex++];
+ } else {
+ gapfillRow[i] = getFillValue(i, dataSchema.getColumnName(i), key, resultColumnDataTypes[i]);
+ }
+ }
+
+ if (postGapfillFilterHandler == null || postGapfillFilterHandler.isMatch(gapfillRow)) {
+ if (bucketedResult.size() >= _limitForGapfilledResult) {
+ break;
+ } else {
+ bucketedResult.add(gapfillRow);
+ }
+ }
+ }
+ if (_limitForGapfilledResult > _groupByKeys.size()) {
+ _limitForGapfilledResult -= _groupByKeys.size();
+ } else {
+ _limitForGapfilledResult = 0;
+ }
+ return bucketedResult;
+ }
+
+ private List<Object[]> aggregateGapfilledData(List<Object[]> bucketedRows, DataSchema dataSchema) {
+ List<ExpressionContext> groupbyExpressions = _queryContext.getGroupByExpressions();
+ Preconditions.checkArgument(groupbyExpressions != null, "No GroupBy Clause.");
+ Map<String, Integer> indexes = new HashMap<>();
+ for (int i = 0; i < dataSchema.getColumnNames().length; i++) {
+ indexes.put(dataSchema.getColumnName(i), i);
+ }
+
+ Map<List<Object>, Integer> groupKeyIndexes = new HashMap<>();
+ int[] groupKeyArray = new int[bucketedRows.size()];
+ List<Object[]> aggregatedResult = new ArrayList<>();
+ for (int i = 0; i < bucketedRows.size(); i++) {
+ Object[] bucketedRow = bucketedRows.get(i);
+ List<Object> groupKey = new ArrayList<>(groupbyExpressions.size());
+ for (ExpressionContext groupbyExpression : groupbyExpressions) {
+ int columnIndex = indexes.get(groupbyExpression.toString());
+ groupKey.add(bucketedRow[columnIndex]);
+ }
+ if (groupKeyIndexes.containsKey(groupKey)) {
+ groupKeyArray[i] = groupKeyIndexes.get(groupKey);
+ } else {
+ // Create a new group-by result row and fill in the group-by key values.
+ groupKeyArray[i] = groupKeyIndexes.size();
+ groupKeyIndexes.put(groupKey, groupKeyIndexes.size());
+ Object[] row = new Object[_queryContext.getSelectExpressions().size()];
+ for (int j = 0; j < _queryContext.getSelectExpressions().size(); j++) {
+ ExpressionContext expressionContext = _queryContext.getSelectExpressions().get(j);
+ if (expressionContext.getType() != ExpressionContext.Type.FUNCTION) {
+ row[j] = bucketedRow[indexes.get(expressionContext.toString())];
+ }
+ }
+ aggregatedResult.add(row);
+ }
+ }
+
+ Map<ExpressionContext, BlockValSet> blockValSetMap = new HashMap<>();
+ for (int i = 1; i < dataSchema.getColumnNames().length; i++) {
+ blockValSetMap.put(ExpressionContext.forIdentifier(dataSchema.getColumnName(i)),
+ new ColumnDataToBlockValSetConverter(dataSchema.getColumnDataType(i), bucketedRows, i));
+ }
+
+ for (int i = 0; i < _queryContext.getSelectExpressions().size(); i++) {
+ ExpressionContext expressionContext = _queryContext.getSelectExpressions().get(i);
+ if (expressionContext.getType() == ExpressionContext.Type.FUNCTION) {
+ FunctionContext functionContext = expressionContext.getFunction();
+ AggregationFunction aggregationFunction =
+ AggregationFunctionFactory.getAggregationFunction(functionContext, _queryContext);
+ GroupByResultHolder groupByResultHolder =
+ aggregationFunction.createGroupByResultHolder(_groupByKeys.size(), _groupByKeys.size());
+ aggregationFunction.aggregateGroupBySV(bucketedRows.size(), groupKeyArray, groupByResultHolder, blockValSetMap);
+ for (int j = 0; j < groupKeyIndexes.size(); j++) {
+ Object[] row = aggregatedResult.get(j);
+ row[i] = aggregationFunction.extractGroupByResult(groupByResultHolder, j);
+ row[i] = aggregationFunction.extractFinalResult(row[i]);
+ }
+ }
+ }
+ return aggregatedResult;
+ }
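The dense group-key indexing that `aggregateGapfilledData` builds into `groupKeyArray` before calling `aggregateGroupBySV` can be sketched in isolation (a simplified stand-in using `String` keys, not the actual reducer types):

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class GroupKeyIndexing {
    // Assign each distinct group key a dense index in first-seen order,
    // as the reducer does when building groupKeyArray.
    static int[] indexRows(List<List<String>> groupKeys) {
        Map<List<String>, Integer> indexes = new HashMap<>();
        int[] out = new int[groupKeys.size()];
        for (int i = 0; i < groupKeys.size(); i++) {
            // computeIfAbsent hands out indexes 0, 1, 2, ... for new keys.
            out[i] = indexes.computeIfAbsent(groupKeys.get(i), k -> indexes.size());
        }
        return out;
    }

    public static void main(String[] args) {
        List<List<String>> keys = Arrays.asList(
            Arrays.asList("Level_0", "LotId_0"),
            Arrays.asList("Level_1", "LotId_0"),
            Arrays.asList("Level_0", "LotId_0"));
        System.out.println(Arrays.toString(indexRows(keys))); // prints [0, 1, 0]
    }
}
```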
+
+ private Object getFillValue(int columnIndex, String columnName, Object key, ColumnDataType dataType) {
+ ExpressionContext expressionContext = _fillExpressions.get(columnName);
+ if (expressionContext != null && expressionContext.getFunction() != null && GapfillUtils
+ .isFill(expressionContext)) {
+ List<ExpressionContext> args = expressionContext.getFunction().getArguments();
+ if (args.get(1).getLiteral() == null) {
+ throw new UnsupportedOperationException("The fill type argument of FILL must be a literal.");
+ }
+ GapfillUtils.FillType fillType = GapfillUtils.FillType.valueOf(args.get(1).getLiteral());
+ if (fillType == GapfillUtils.FillType.FILL_DEFAULT_VALUE) {
+ // TODO: support filling a default value specified in the SQL in the future.
+ return GapfillUtils.getDefaultValue(dataType);
+ } else if (fillType == GapfillUtils.FillType.FILL_PREVIOUS_VALUE) {
+ Object[] row = _previousByGroupKey.get(key);
+ if (row != null) {
+ return row[columnIndex];
+ } else {
+ return GapfillUtils.getDefaultValue(dataType);
+ }
+ } else {
+ throw new UnsupportedOperationException("Unsupported fill type.");
+ }
+ } else {
+ return GapfillUtils.getDefaultValue(dataType);
+ }
+ }
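The two fill strategies handled by `getFillValue` (FILL_PREVIOUS_VALUE falling back to FILL_DEFAULT_VALUE when there is no history) can be sketched standalone; `FillSketch` and its method names are illustrative, not the Pinot API:

```java
import java.util.HashMap;
import java.util.Map;

public class FillSketch {
    // Last observed value per group key, mirroring _previousByGroupKey.
    private final Map<String, Integer> previousByKey = new HashMap<>();

    // FILL_PREVIOUS_VALUE semantics: an observed value is recorded and returned;
    // a gap (null) falls back to the previous value for the key, else to the
    // default (FILL_DEFAULT_VALUE behavior).
    Integer fill(String groupKey, Integer observed, Integer defaultValue) {
        if (observed != null) {
            previousByKey.put(groupKey, observed);
            return observed;
        }
        return previousByKey.getOrDefault(groupKey, defaultValue);
    }

    public static void main(String[] args) {
        FillSketch s = new FillSketch();
        System.out.println(s.fill("lot0", 1, 0));    // prints 1 (observed)
        System.out.println(s.fill("lot0", null, 0)); // prints 1 (previous value)
        System.out.println(s.fill("lot1", null, 0)); // prints 0 (no history, default)
    }
}
```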
+
+ /**
+ * Merge all result tables from the different Pinot servers and sort the rows by time bucket.
+ */
+ private List<Object[]>[] putRawRowsIntoTimeBucket(Collection<DataTable> dataTables) {
+ List<Object[]>[] bucketedItems = new List[_numOfTimeBuckets];
+
+ for (DataTable dataTable : dataTables) {
+ int numRows = dataTable.getNumberOfRows();
+ for (int rowId = 0; rowId < numRows; rowId++) {
+ Object[] row = SelectionOperatorUtils.extractRowFromDataTable(dataTable, rowId);
+ String timeCol = row[0] instanceof Long ? ((Long) row[0]).toString() : (String) row[0];
+ long timeBucket = _dateTimeFormatter.fromFormatToMillis(timeCol);
+ int index = findBucketIndex(timeBucket);
+ if (bucketedItems[index] == null) {
+ bucketedItems[index] = new ArrayList<>();
+ }
+ bucketedItems[index].add(row);
+ _groupByKeys.add(constructGroupKeys(row));
+ }
+ }
+ return bucketedItems;
+ }
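The `findBucketIndex` helper used above is not shown in this hunk; a plausible reading, assuming fixed-width buckets relative to the gapfill start time (the parameter names here are assumptions), is:

```java
public class BucketIndex {
    // A plausible findBucketIndex: map a timestamp into a fixed-width bucket
    // relative to the gapfill start time. startMillis and bucketMillis are
    // assumed names, not the actual fields of the reducer.
    static int findBucketIndex(long timeMillis, long startMillis, long bucketMillis) {
        return (int) ((timeMillis - startMillis) / bucketMillis);
    }

    public static void main(String[] args) {
        long start = 0L;
        long oneHour = 3_600_000L;
        System.out.println(findBucketIndex(30 * 60_000L, start, oneHour)); // prints 0
        System.out.println(findBucketIndex(90 * 60_000L, start, oneHour)); // prints 1
    }
}
```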
+
+ private List<Object[]>[] putRawRowsIntoTimeBucket(IndexedTable indexedTable) {
+ List<Object[]>[] bucketedItems = new List[_numOfTimeBuckets];
+
+ Iterator<Record> iterator = indexedTable.iterator();
+ while (iterator.hasNext()) {
+ Object[] row = iterator.next().getValues();
+ String timeCol = row[0] instanceof Long ? ((Long) row[0]).toString() : (String) row[0];
+ long timeBucket = _dateTimeFormatter.fromFormatToMillis(timeCol);
Review comment:
Fixed
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: commits-unsubscribe@pinot.apache.org
For queries about this service, please contact Infrastructure at:
users@infra.apache.org
---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@pinot.apache.org
For additional commands, e-mail: commits-help@pinot.apache.org
[GitHub] [pinot] codecov-commenter edited a comment on pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#issuecomment-1013577523
# [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) Report
> Merging [#8029](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (57e966b) into [master](https://codecov.io/gh/apache/pinot/commit/fb572bd0aba20d2b8a83320df6dd24cb0c654b30?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (fb572bd) will **increase** coverage by `0.99%`.
> The diff coverage is `69.63%`.
[![Impacted file tree graph](https://codecov.io/gh/apache/pinot/pull/8029/graphs/tree.svg?width=650&height=150&src=pr&token=4ibza2ugkz&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
```diff
@@ Coverage Diff @@
## master #8029 +/- ##
============================================
+ Coverage 70.39% 71.38% +0.99%
Complexity 4308 4308
============================================
Files 1623 1636 +13
Lines 84365 85120 +755
Branches 12657 12839 +182
============================================
+ Hits 59386 60764 +1378
+ Misses 20876 20152 -724
- Partials 4103 4204 +101
```
| Flag | Coverage Δ | |
|---|---|---|
| integration1 | `28.63% <10.55%> (-0.26%)` | :arrow_down: |
| integration2 | `27.38% <10.79%> (?)` | |
| unittests1 | `67.88% <69.39%> (-0.02%)` | :arrow_down: |
| unittests2 | `14.11% <0.00%> (-0.10%)` | :arrow_down: |
Flags with carried forward coverage won't be shown. [Click here](https://docs.codecov.io/docs/carryforward-flags?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#carryforward-flags-in-the-pull-request-comment) to find out more.
| [Impacted Files](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | Coverage Δ | |
|---|---|---|
| [...he/pinot/core/plan/GapfillAggregationPlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL0dhcGZpbGxBZ2dyZWdhdGlvblBsYW5Ob2RlLmphdmE=) | `0.00% <0.00%> (ø)` | |
| [...t/core/query/selection/SelectionOperatorUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9zZWxlY3Rpb24vU2VsZWN0aW9uT3BlcmF0b3JVdGlscy5qYXZh) | `91.58% <0.00%> (-0.60%)` | :arrow_down: |
| [...query/reduce/ColumnDataToBlockValSetConverter.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvQ29sdW1uRGF0YVRvQmxvY2tWYWxTZXRDb252ZXJ0ZXIuamF2YQ==) | `17.30% <17.30%> (ø)` | |
| [...plan/GapfillAggregationGroupByOrderByPlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL0dhcGZpbGxBZ2dyZWdhdGlvbkdyb3VwQnlPcmRlckJ5UGxhbk5vZGUuamF2YQ==) | `51.21% <51.21%> (ø)` | |
| [.../combine/GapfillGroupByOrderByCombineOperator.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9vcGVyYXRvci9jb21iaW5lL0dhcGZpbGxHcm91cEJ5T3JkZXJCeUNvbWJpbmVPcGVyYXRvci5qYXZh) | `58.88% <58.88%> (ø)` | |
| [...va/org/apache/pinot/core/plan/CombinePlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL0NvbWJpbmVQbGFuTm9kZS5qYXZh) | `85.00% <60.00%> (+1.07%)` | :arrow_up: |
| [...che/pinot/core/query/reduce/filter/RowMatcher.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvZmlsdGVyL1Jvd01hdGNoZXIuamF2YQ==) | `66.66% <66.66%> (ø)` | |
| [.../java/org/apache/pinot/core/util/GapfillUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS91dGlsL0dhcGZpbGxVdGlscy5qYXZh) | `73.61% <82.92%> (+9.97%)` | :arrow_up: |
| [...inot/core/query/reduce/PostAggregationHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUG9zdEFnZ3JlZ2F0aW9uSGFuZGxlci5qYXZh) | `91.89% <83.33%> (+0.22%)` | :arrow_up: |
| [...ore/query/reduce/filter/LiteralValueExtractor.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvZmlsdGVyL0xpdGVyYWxWYWx1ZUV4dHJhY3Rvci5qYXZh) | `83.33% <83.33%> (ø)` | |
| ... and [107 more](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Last update [fb572bd...57e966b](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
[GitHub] [pinot] codecov-commenter edited a comment on pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#issuecomment-1013577523
# [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) Report
> Merging [#8029](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (f78cb1d) into [master](https://codecov.io/gh/apache/pinot/commit/916d807c8f67b32c1a430692f74134c9c976c33d?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (916d807) will **decrease** coverage by `0.99%`.
> The diff coverage is `82.00%`.
[![Impacted file tree graph](https://codecov.io/gh/apache/pinot/pull/8029/graphs/tree.svg?width=650&height=150&src=pr&token=4ibza2ugkz&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
```diff
@@ Coverage Diff @@
## master #8029 +/- ##
============================================
- Coverage 71.02% 70.03% -1.00%
Complexity 4314 4314
============================================
Files 1626 1636 +10
Lines 84929 85563 +634
Branches 12783 12941 +158
============================================
- Hits 60325 59923 -402
- Misses 20462 21464 +1002
- Partials 4142 4176 +34
```
| Flag | Coverage Δ | |
|---|---|---|
| integration1 | `28.47% <14.78%> (-0.32%)` | :arrow_down: |
| integration2 | `?` | |
| unittests1 | `67.52% <82.23%> (+0.13%)` | :arrow_up: |
| unittests2 | `13.99% <0.00%> (-0.12%)` | :arrow_down: |
Flags with carried forward coverage won't be shown. [Click here](https://docs.codecov.io/docs/carryforward-flags?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#carryforward-flags-in-the-pull-request-comment) to find out more.
| [Impacted Files](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | Coverage Δ | |
|---|---|---|
| [...roker/requesthandler/BaseBrokerRequestHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtYnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9icm9rZXIvcmVxdWVzdGhhbmRsZXIvQmFzZUJyb2tlclJlcXVlc3RIYW5kbGVyLmphdmE=) | `69.57% <0.00%> (-1.95%)` | :arrow_down: |
| [...t/core/query/selection/SelectionOperatorUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9zZWxlY3Rpb24vU2VsZWN0aW9uT3BlcmF0b3JVdGlscy5qYXZh) | `91.58% <0.00%> (-0.60%)` | :arrow_down: |
| [...query/reduce/ColumnDataToBlockValSetConverter.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvQ29sdW1uRGF0YVRvQmxvY2tWYWxTZXRDb252ZXJ0ZXIuamF2YQ==) | `17.30% <17.30%> (ø)` | |
| [...che/pinot/core/query/reduce/filter/RowMatcher.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvZmlsdGVyL1Jvd01hdGNoZXIuamF2YQ==) | `66.66% <66.66%> (ø)` | |
| [.../java/org/apache/pinot/core/util/GapfillUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS91dGlsL0dhcGZpbGxVdGlscy5qYXZh) | `75.24% <81.42%> (+11.61%)` | :arrow_up: |
| [...va/org/apache/pinot/core/plan/CombinePlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL0NvbWJpbmVQbGFuTm9kZS5qYXZh) | `83.33% <81.81%> (-4.17%)` | :arrow_down: |
| [...inot/core/query/reduce/PostAggregationHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUG9zdEFnZ3JlZ2F0aW9uSGFuZGxlci5qYXZh) | `92.00% <83.33%> (+0.23%)` | :arrow_up: |
| [...ore/query/reduce/filter/LiteralValueExtractor.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvZmlsdGVyL0xpdGVyYWxWYWx1ZUV4dHJhY3Rvci5qYXZh) | `83.33% <83.33%> (ø)` | |
| [...query/reduce/PreAggregateGapfillFilterHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUHJlQWdncmVnYXRlR2FwZmlsbEZpbHRlckhhbmRsZXIuamF2YQ==) | `85.71% <85.71%> (ø)` | |
| [.../reduce/PreAggregationGapFillDataTableReducer.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUHJlQWdncmVnYXRpb25HYXBGaWxsRGF0YVRhYmxlUmVkdWNlci5qYXZh) | `86.73% <86.73%> (ø)` | |
| ... and [109 more](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Last update [916d807...f78cb1d](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
[GitHub] [pinot] Jackie-Jiang commented on a change in pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
Jackie-Jiang commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r811396986
##########
File path: pinot-common/src/main/java/org/apache/pinot/sql/parsers/CalciteSqlParser.java
##########
@@ -129,8 +129,25 @@ public static PinotQuery compileToPinotQuery(String sql)
if (!options.isEmpty()) {
sql = removeOptionsFromSql(sql);
}
+
+ SqlParser sqlParser = SqlParser.create(sql, PARSER_CONFIG);
+ SqlNode sqlNode;
+ try {
+ sqlNode = sqlParser.parseQuery();
+ } catch (SqlParseException e) {
+ throw new SqlCompilationException("Caught exception while parsing query: " + sql, e);
+ }
+
// Compile Sql without OPTION statements.
- PinotQuery pinotQuery = compileCalciteSqlToPinotQuery(sql);
+ PinotQuery pinotQuery = compileSqlNodeToPinotQuery(sqlNode);
+
+ SqlSelect sqlSelect = getSelectNode(sqlNode);
+ if (sqlSelect != null) {
+ SqlNode fromNode = sqlSelect.getFrom();
+ if (fromNode != null && (fromNode instanceof SqlSelect || fromNode instanceof SqlOrderBy)) {
+ pinotQuery.getDataSource().setSubquery(compileSqlNodeToPinotQuery(fromNode));
+ }
+ }
Review comment:
Do we still need this part? I think this is already handled recursively within `compileSqlNodeToPinotQuery()`
##########
File path: pinot-common/src/main/java/org/apache/pinot/sql/parsers/CalciteSqlParser.java
##########
@@ -325,32 +342,30 @@ private static void setOptions(PinotQuery pinotQuery, List<String> optionsStatem
pinotQuery.setQueryOptions(options);
}
- private static PinotQuery compileCalciteSqlToPinotQuery(String sql) {
- SqlParser sqlParser = SqlParser.create(sql, PARSER_CONFIG);
- SqlNode sqlNode;
- try {
- sqlNode = sqlParser.parseQuery();
- } catch (SqlParseException e) {
- throw new SqlCompilationException("Caught exception while parsing query: " + sql, e);
- }
-
- PinotQuery pinotQuery = new PinotQuery();
- if (sqlNode instanceof SqlExplain) {
- // Extract sql node for the query
- sqlNode = ((SqlExplain) sqlNode).getExplicandum();
- pinotQuery.setExplain(true);
- }
- SqlSelect selectNode;
+ private static SqlSelect getSelectNode(SqlNode sqlNode) {
+ SqlSelect selectNode = null;
if (sqlNode instanceof SqlOrderBy) {
// Store order-by info into the select sql node
SqlOrderBy orderByNode = (SqlOrderBy) sqlNode;
selectNode = (SqlSelect) orderByNode.query;
selectNode.setOrderBy(orderByNode.orderList);
selectNode.setFetch(orderByNode.fetch);
selectNode.setOffset(orderByNode.offset);
- } else {
+ } else if (sqlNode instanceof SqlSelect) {
Review comment:
We probably want a `Preconditions.checkArgument()` instead of the if check. If `sqlNode` is neither `SqlOrderBy` nor `SqlSelect`, we will get an `NPE` later
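The fail-fast guard the reviewer suggests can be illustrated with a plain-Java stand-in for Guava's `Preconditions.checkArgument` (the surrounding parser code is not shown here, so the node object is a placeholder):

```java
public class PreconditionCheck {
    // Minimal stand-in for Guava's Preconditions.checkArgument, shown only to
    // illustrate the fail-fast guard: throw a descriptive exception up front
    // instead of hitting a NullPointerException later.
    static void checkArgument(boolean condition, String message) {
        if (!condition) {
            throw new IllegalArgumentException(message);
        }
    }

    public static void main(String[] args) {
        Object sqlNode = new Object(); // placeholder for an unexpected SqlNode subtype
        try {
            checkArgument(false, "Unsupported SQL node type: " + sqlNode.getClass().getSimpleName());
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```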
##########
File path: pinot-core/src/test/java/org/apache/pinot/queries/PreAggregationGapfillQueriesTest.java
##########
@@ -0,0 +1,3277 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.queries;
+
+import java.io.File;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+import org.apache.commons.io.FileUtils;
+import org.apache.pinot.common.response.broker.BrokerResponseNative;
+import org.apache.pinot.common.response.broker.ResultTable;
+import org.apache.pinot.segment.local.indexsegment.immutable.ImmutableSegmentLoader;
+import org.apache.pinot.segment.local.segment.creator.impl.SegmentIndexCreationDriverImpl;
+import org.apache.pinot.segment.local.segment.readers.GenericRowRecordReader;
+import org.apache.pinot.segment.spi.ImmutableSegment;
+import org.apache.pinot.segment.spi.IndexSegment;
+import org.apache.pinot.segment.spi.creator.SegmentGeneratorConfig;
+import org.apache.pinot.spi.config.table.TableConfig;
+import org.apache.pinot.spi.config.table.TableType;
+import org.apache.pinot.spi.data.DateTimeFormatSpec;
+import org.apache.pinot.spi.data.DateTimeGranularitySpec;
+import org.apache.pinot.spi.data.FieldSpec.DataType;
+import org.apache.pinot.spi.data.Schema;
+import org.apache.pinot.spi.data.readers.GenericRow;
+import org.apache.pinot.spi.utils.ReadMode;
+import org.apache.pinot.spi.utils.builder.TableConfigBuilder;
+import org.testng.Assert;
+import org.testng.annotations.AfterClass;
+import org.testng.annotations.BeforeClass;
+import org.testng.annotations.Test;
+
+
+/**
+ * Queries test for Gapfill queries.
+ */
+// TODO: Item 1. table alias for subquery in next PR
+// TODO: Item 2. Deprecate PostAggregateGapfill implementation in next PR
+@SuppressWarnings("rawtypes")
+public class PreAggregationGapfillQueriesTest extends BaseQueriesTest {
+ private static final File INDEX_DIR = new File(FileUtils.getTempDirectory(), "PreAggregationGapfillQueriesTest");
+ private static final String RAW_TABLE_NAME = "parkingData";
+ private static final String SEGMENT_NAME = "testSegment";
+
+ private static final int NUM_LOTS = 4;
+
+ private static final String IS_OCCUPIED_COLUMN = "isOccupied";
+ private static final String LEVEL_ID_COLUMN = "levelId";
+ private static final String LOT_ID_COLUMN = "lotId";
+ private static final String EVENT_TIME_COLUMN = "eventTime";
+ private static final Schema SCHEMA = new Schema.SchemaBuilder()
+ .addSingleValueDimension(IS_OCCUPIED_COLUMN, DataType.INT)
+ .addSingleValueDimension(LOT_ID_COLUMN, DataType.STRING)
+ .addSingleValueDimension(LEVEL_ID_COLUMN, DataType.STRING)
+ .addSingleValueDimension(EVENT_TIME_COLUMN, DataType.LONG)
+ .setPrimaryKeyColumns(Arrays.asList(LOT_ID_COLUMN, EVENT_TIME_COLUMN))
+ .build();
+ private static final TableConfig TABLE_CONFIG = new TableConfigBuilder(TableType.OFFLINE).setTableName(RAW_TABLE_NAME)
+ .build();
+
+ private IndexSegment _indexSegment;
+ private List<IndexSegment> _indexSegments;
+
+ @Override
+ protected String getFilter() {
+ // NOTE: Use a match all filter to switch between DictionaryBasedAggregationOperator and AggregationOperator
+ return " WHERE eventTime >= 0";
+ }
+
+ @Override
+ protected IndexSegment getIndexSegment() {
+ return _indexSegment;
+ }
+
+ @Override
+ protected List<IndexSegment> getIndexSegments() {
+ return _indexSegments;
+ }
+
+ GenericRow createRow(String time, int levelId, int lotId, boolean isOccupied) {
+ DateTimeFormatSpec dateTimeFormatter
+ = new DateTimeFormatSpec("1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS");
+ GenericRow parkingRow = new GenericRow();
+ parkingRow.putValue(EVENT_TIME_COLUMN, dateTimeFormatter.fromFormatToMillis(time));
+ parkingRow.putValue(LEVEL_ID_COLUMN, "Level_" + levelId);
+ parkingRow.putValue(LOT_ID_COLUMN, "LotId_" + lotId);
+ parkingRow.putValue(IS_OCCUPIED_COLUMN, isOccupied);
+ return parkingRow;
+ }
+
+ @BeforeClass
+ public void setUp()
+ throws Exception {
+ FileUtils.deleteDirectory(INDEX_DIR);
+
+ List<GenericRow> records = new ArrayList<>(NUM_LOTS * 2);
+ records.add(createRow("2021-11-07 04:11:00.000", 0, 0, true));
+ records.add(createRow("2021-11-07 04:21:00.000", 0, 0, true));
+ records.add(createRow("2021-11-07 04:31:00.000", 1, 0, true));
+ records.add(createRow("2021-11-07 05:17:00.000", 0, 1, true));
+ records.add(createRow("2021-11-07 05:37:00.000", 0, 1, true));
+ records.add(createRow("2021-11-07 05:47:00.000", 1, 2, true));
+ records.add(createRow("2021-11-07 06:25:00.000", 0, 2, true));
+ records.add(createRow("2021-11-07 06:35:00.000", 0, 2, true));
+ records.add(createRow("2021-11-07 06:36:00.000", 1, 1, true));
+ records.add(createRow("2021-11-07 07:44:00.000", 0, 3, true));
+ records.add(createRow("2021-11-07 07:46:00.000", 1, 3, true));
+ records.add(createRow("2021-11-07 07:54:00.000", 1, 3, true));
+ records.add(createRow("2021-11-07 08:44:00.000", 0, 2, false));
+ records.add(createRow("2021-11-07 08:44:00.000", 1, 2, false));
+ records.add(createRow("2021-11-07 09:31:00.000", 0, 3, false));
+ records.add(createRow("2021-11-07 09:31:00.000", 1, 3, false));
+ records.add(createRow("2021-11-07 10:17:00.000", 0, 0, false));
+ records.add(createRow("2021-11-07 10:33:00.000", 0, 0, false));
+ records.add(createRow("2021-11-07 10:33:00.000", 1, 0, false));
+ records.add(createRow("2021-11-07 11:54:00.000", 0, 1, false));
+ records.add(createRow("2021-11-07 11:57:00.000", 1, 1, false));
+
+ SegmentGeneratorConfig segmentGeneratorConfig = new SegmentGeneratorConfig(TABLE_CONFIG, SCHEMA);
+ segmentGeneratorConfig.setTableName(RAW_TABLE_NAME);
+ segmentGeneratorConfig.setSegmentName(SEGMENT_NAME);
+ segmentGeneratorConfig.setOutDir(INDEX_DIR.getPath());
+
+ SegmentIndexCreationDriverImpl driver = new SegmentIndexCreationDriverImpl();
+ driver.init(segmentGeneratorConfig, new GenericRowRecordReader(records));
+ driver.build();
+
+ ImmutableSegment immutableSegment = ImmutableSegmentLoader.load(new File(INDEX_DIR, SEGMENT_NAME), ReadMode.mmap);
+ _indexSegment = immutableSegment;
+ _indexSegments = Arrays.asList(immutableSegment);
+ }
+
+ @Test
+ public void datetimeconvertGapfillTestSelectSelect() {
+ DateTimeFormatSpec dateTimeFormatter
+ = new DateTimeFormatSpec("1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + " GapFill(DATETIMECONVERT(eventTime, '1:MILLISECONDS:EPOCH', "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', '1:HOURS'), "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', "
+ + " '2021-11-07 4:00:00.000', '2021-11-07 12:00:00.000', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " levelId, lotId, isOccupied "
+ + "FROM parkingData "
+ + "WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + "LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ int[][] expectedOccupiedSlotsCounts1 =
+ new int[][] {{6, 6}, {8, 4}, {10, 2}, {12, 0}, {6, 4}, {4, 6}, {2, 10}, {0, 10}};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ start = dateTimeFormatter.fromFormatToMillis("2021-11-07 04:00:00.000");
+ int index = 0;
+ for (int i = 0; i < expectedOccupiedSlotsCounts1.length; i++) {
+ int ones = expectedOccupiedSlotsCounts1[i][0];
+ int zeros = expectedOccupiedSlotsCounts1[i][1];
+ int total = ones + zeros;
+ for (int k = 0; k < total; k++) {
+ String firstTimeCol = (String) gapFillRows1.get(index)[0];
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if (gapFillRows1.get(index)[3].equals(1)) {
+ ones--;
+ } else {
+ zeros--;
+ }
+ index++;
+ }
+ Assert.assertEquals(ones, 0);
+ Assert.assertEquals(zeros, 0);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ Assert.assertEquals(gapFillRows1.size(), index);
+
+ String gapfillQuery2 = "SELECT "
+ + "time_col, levelId, lotId, isOccupied, time_col "
+ + "FROM ("
+ + " SELECT GapFill(DATETIMECONVERT(eventTime, '1:MILLISECONDS:EPOCH', "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', '1:HOURS'), "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', "
+ + " '2021-11-07 4:00:00.000', '2021-11-07 12:00:00.000', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
+ + " WHERE isOccupied = 1 "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ int[] expectedOccupiedSlotsCounts2 = new int[] {6, 8, 10, 12, 6, 4, 2, 0};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ start = dateTimeFormatter.fromFormatToMillis("2021-11-07 04:00:00.000");
+ index = 0;
+ for (int i = 0; i < expectedOccupiedSlotsCounts2.length; i++) {
+ for (int k = 0; k < expectedOccupiedSlotsCounts2[i]; k++) {
+ String firstTimeCol = (String) gapFillRows2.get(index)[0];
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(gapFillRows2.get(index)[3], 1);
+ index++;
+ }
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ Assert.assertEquals(gapFillRows2.size(), index);
+ }
+
+ @Test
+ public void datetimeconvertGapfillTestAggregateSelect() {
+ DateTimeFormatSpec dateTimeFormatter
+ = new DateTimeFormatSpec("1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + "GapFill(time_col, "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', "
+ + " '2021-11-07 4:00:00.000', '2021-11-07 12:00:00.000', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)), levelId, lotId, occupied "
+ + "FROM ("
+ + " SELECT DATETIMECONVERT(eventTime, '1:MILLISECONDS:EPOCH', "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', '1:HOURS') AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + ") "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ int[][] expectedOccupiedSlotsCounts1 =
+ new int[][] {{2, 6}, {4, 4}, {6, 2}, {8, 0}, {6, 2}, {4, 4}, {2, 6}, {0, 8}};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ start = dateTimeFormatter.fromFormatToMillis("2021-11-07 04:00:00.000");
+ int index = 0;
+ for (int i = 0; i < expectedOccupiedSlotsCounts1.length; i++) {
+ int ones = expectedOccupiedSlotsCounts1[i][0];
+ int zeros = expectedOccupiedSlotsCounts1[i][1];
+ int total = ones + zeros;
+ for (int k = 0; k < total; k++) {
+ String firstTimeCol = (String) gapFillRows1.get(index)[0];
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if (gapFillRows1.get(index)[3].equals(1)) {
+ ones--;
+ } else {
+ zeros--;
+ }
+ index++;
+ }
+ Assert.assertEquals(ones, 0);
+ Assert.assertEquals(zeros, 0);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ Assert.assertEquals(gapFillRows1.size(), index);
+
+ String gapfillQuery2 = "SELECT "
+ + "GapFill(time_col, "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', "
+ + " '2021-11-07 4:00:00.000', '2021-11-07 12:00:00.000', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)), levelId, lotId, occupied "
+ + "FROM ("
+ + " SELECT DATETIMECONVERT(eventTime, '1:MILLISECONDS:EPOCH', "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', '1:HOURS') AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + ") "
+ + " WHERE occupied = 1 "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ int[] expectedOccupiedSlotsCounts2 = new int[] {2, 4, 6, 8, 6, 4, 2};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ start = dateTimeFormatter.fromFormatToMillis("2021-11-07 04:00:00.000");
+ index = 0;
+ for (int i = 0; i < expectedOccupiedSlotsCounts2.length; i++) {
+ for (int k = 0; k < expectedOccupiedSlotsCounts2[i]; k++) {
+ String firstTimeCol = (String) gapFillRows2.get(index)[0];
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(gapFillRows2.get(index)[3], 1);
+ index++;
+ }
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ Assert.assertEquals(gapFillRows2.size(), index);
+ }
+
+ @Test
+ public void datetimeconvertGapfillTestGapfillAggregate() {
+ DateTimeFormatSpec dateTimeFormatter
+ = new DateTimeFormatSpec("1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String dataTimeConvertQuery = "SELECT "
+ + "DATETIMECONVERT(eventTime, '1:MILLISECONDS:EPOCH', "
+ + "'1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', '1:HOURS') AS time_col, "
+ + "SUM(isOccupied) "
+ + "FROM parkingData "
+ + "WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + "GROUP BY 1 "
+ + "ORDER BY 1 "
+ + "LIMIT 200";
+
+ BrokerResponseNative dateTimeConvertBrokerResponse = getBrokerResponseForSqlQuery(dataTimeConvertQuery);
+
+ ResultTable dateTimeConvertResultTable = dateTimeConvertBrokerResponse.getResultTable();
+ Assert.assertEquals(dateTimeConvertResultTable.getRows().size(), 8);
+
+ String gapfillQuery1 = "SELECT "
+ + "time_col, SUM(isOccupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(DATETIMECONVERT(eventTime, '1:MILLISECONDS:EPOCH', "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', '1:HOURS'), "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', "
+ + " '2021-11-07 4:00:00.000', '2021-11-07 12:00:00.000', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
+ + " GROUP BY time_col "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ double[] expectedOccupiedSlotsCounts1 = new double[] {6, 8, 10, 12, 6, 4, 2, 0};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ Assert.assertEquals(gapFillRows1.size(), expectedOccupiedSlotsCounts1.length);
+ start = dateTimeFormatter.fromFormatToMillis("2021-11-07 04:00:00.000");
+ for (int i = 0; i < expectedOccupiedSlotsCounts1.length; i++) {
+ String firstTimeCol = (String) gapFillRows1.get(i)[0];
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts1[i], gapFillRows1.get(i)[1]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+
+ String gapfillQuery2 = "SELECT "
+ + "time_col, SUM(isOccupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(DATETIMECONVERT(eventTime, '1:MILLISECONDS:EPOCH', "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', '1:HOURS'), "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', "
+ + " '2021-11-07 4:00:00.000', '2021-11-07 12:00:00.000', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
+ + " WHERE isOccupied = 1 "
+ + " GROUP BY time_col "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ double[] expectedOccupiedSlotsCounts2 = new double[] {6, 8, 10, 12, 6, 4, 2};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ Assert.assertEquals(gapFillRows2.size(), expectedOccupiedSlotsCounts2.length);
+ start = dateTimeFormatter.fromFormatToMillis("2021-11-07 04:00:00.000");
+ for (int i = 0; i < expectedOccupiedSlotsCounts2.length; i++) {
+ String firstTimeCol = (String) gapFillRows2.get(i)[0];
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts2[i], gapFillRows2.get(i)[1]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ }
+
+ @Test
+ public void datetimeconvertGapfillTestGapfillAggregateWithOptionalGroupBy() {
+ DateTimeFormatSpec dateTimeFormatter
+ = new DateTimeFormatSpec("1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + "time_col, levelId, SUM(isOccupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(DATETIMECONVERT(eventTime, '1:MILLISECONDS:EPOCH', "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', '1:HOURS'), "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', "
+ + " '2021-11-07 4:00:00.000', '2021-11-07 12:00:00.000', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
+ + " GROUP BY time_col, levelId "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ double[] expectedOccupiedSlotsCountsForLevel11 = new double[] {4, 5, 6, 5, 3, 2, 1, 0};
+ double[] expectedOccupiedSlotsCountsForLevel21 = new double[] {2, 3, 4, 7, 3, 2, 1, 0};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ Assert.assertEquals(gapFillRows1.size(), expectedOccupiedSlotsCountsForLevel11.length * 2);
+ start = dateTimeFormatter.fromFormatToMillis("2021-11-07 04:00:00.000");
+ for (int i = 0; i < expectedOccupiedSlotsCountsForLevel11.length * 2; i += 2) {
+ String firstTimeCol = (String) gapFillRows1.get(i)[0];
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if ("Level_0".equals(gapFillRows1.get(i)[1])) {
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel11[i / 2], gapFillRows1.get(i)[2]);
+ } else {
+ Assert.assertEquals("Level_1", gapFillRows1.get(i)[1]);
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel21[i / 2], gapFillRows1.get(i)[2]);
+ }
+ firstTimeCol = (String) gapFillRows1.get(i + 1)[0];
+ timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if ("Level_0".equals(gapFillRows1.get(i + 1)[1])) {
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel11[i / 2], gapFillRows1.get(i + 1)[2]);
+ } else {
+ Assert.assertEquals("Level_1", gapFillRows1.get(i + 1)[1]);
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel21[i / 2], gapFillRows1.get(i + 1)[2]);
+ }
+ start += dateTimeGranularity.granularityToMillis();
+ }
+
+ String gapfillQuery2 = "SELECT "
+ + "time_col, levelId, SUM(isOccupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(DATETIMECONVERT(eventTime, '1:MILLISECONDS:EPOCH', "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', '1:HOURS'), "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', "
+ + " '2021-11-07 4:00:00.000', '2021-11-07 12:00:00.000', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
+ + " WHERE isOccupied = 1 "
+ + " GROUP BY time_col, levelId "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ double[] expectedOccupiedSlotsCountsForLevel12 = new double[] {4, 5, 6, 5, 3, 2, 1};
+ double[] expectedOccupiedSlotsCountsForLevel22 = new double[] {2, 3, 4, 7, 3, 2, 1};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ Assert.assertEquals(gapFillRows2.size(), expectedOccupiedSlotsCountsForLevel12.length * 2);
+ start = dateTimeFormatter.fromFormatToMillis("2021-11-07 04:00:00.000");
+ for (int i = 0; i < expectedOccupiedSlotsCountsForLevel12.length * 2; i += 2) {
+ String firstTimeCol = (String) gapFillRows2.get(i)[0];
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if ("Level_0".equals(gapFillRows2.get(i)[1])) {
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel12[i / 2], gapFillRows2.get(i)[2]);
+ } else {
+ Assert.assertEquals("Level_1", gapFillRows2.get(i)[1]);
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel22[i / 2], gapFillRows2.get(i)[2]);
+ }
+ firstTimeCol = (String) gapFillRows2.get(i + 1)[0];
+ timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if ("Level_0".equals(gapFillRows2.get(i + 1)[1])) {
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel12[i / 2], gapFillRows2.get(i + 1)[2]);
+ } else {
+ Assert.assertEquals("Level_1", gapFillRows2.get(i + 1)[1]);
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel22[i / 2], gapFillRows2.get(i + 1)[2]);
+ }
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ }
+
+ @Test
+ public void datetimeconvertGapfillTestGapfillAggregateWithHavingClause() {
+ DateTimeFormatSpec dateTimeFormatter
+ = new DateTimeFormatSpec("1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + "time_col, levelId, SUM(isOccupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(DATETIMECONVERT(eventTime, '1:MILLISECONDS:EPOCH', "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', '1:HOURS'), "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', "
+ + " '2021-11-07 4:00:00.000', '2021-11-07 12:00:00.000', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
+ + " GROUP BY time_col, levelId "
+ + " HAVING occupied_slots_count > 0"
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ double[] expectedOccupiedSlotsCountsForLevel11 = new double[] {4, 5, 6, 5, 3, 2, 1};
+ double[] expectedOccupiedSlotsCountsForLevel21 = new double[] {2, 3, 4, 7, 3, 2, 1};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ Assert.assertEquals(gapFillRows1.size(), expectedOccupiedSlotsCountsForLevel11.length * 2);
+ start = dateTimeFormatter.fromFormatToMillis("2021-11-07 04:00:00.000");
+ for (int i = 0; i < expectedOccupiedSlotsCountsForLevel11.length * 2; i += 2) {
+ String firstTimeCol = (String) gapFillRows1.get(i)[0];
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if ("Level_0".equals(gapFillRows1.get(i)[1])) {
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel11[i / 2], gapFillRows1.get(i)[2]);
+ } else {
+ Assert.assertEquals("Level_1", gapFillRows1.get(i)[1]);
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel21[i / 2], gapFillRows1.get(i)[2]);
+ }
+ firstTimeCol = (String) gapFillRows1.get(i + 1)[0];
+ timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if ("Level_0".equals(gapFillRows1.get(i + 1)[1])) {
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel11[i / 2], gapFillRows1.get(i + 1)[2]);
+ } else {
+ Assert.assertEquals("Level_1", gapFillRows1.get(i + 1)[1]);
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel21[i / 2], gapFillRows1.get(i + 1)[2]);
+ }
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ }
+
+ @Test
+ public void datetimeconvertGapfillTestAggregateAggregate() {
+ DateTimeFormatSpec dateTimeFormatter
+ = new DateTimeFormatSpec("1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + "time_col, SUM(occupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(time_col, "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', "
+ + " '2021-11-07 4:00:00.000', '2021-11-07 12:00:00.000', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " occupied, lotId, levelId"
+ + " FROM ("
+ + " SELECT DATETIMECONVERT(eventTime, '1:MILLISECONDS:EPOCH', "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', '1:HOURS') AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + " ) "
+ + " LIMIT 200 "
+ + ") "
+ + " GROUP BY time_col "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ double[] expectedOccupiedSlotsCounts1 = new double[] {2, 4, 6, 8, 6, 4, 2, 0};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ Assert.assertEquals(gapFillRows1.size(), expectedOccupiedSlotsCounts1.length);
+ start = dateTimeFormatter.fromFormatToMillis("2021-11-07 04:00:00.000");
+ for (int i = 0; i < expectedOccupiedSlotsCounts1.length; i++) {
+ String firstTimeCol = (String) gapFillRows1.get(i)[0];
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts1[i], gapFillRows1.get(i)[1]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+
+ String gapfillQuery2 = "SELECT "
+ + "time_col, SUM(occupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(time_col, "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', "
+ + " '2021-11-07 4:00:00.000', '2021-11-07 12:00:00.000', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " occupied, lotId, levelId"
+ + " FROM ("
+ + " SELECT DATETIMECONVERT(eventTime, '1:MILLISECONDS:EPOCH', "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', '1:HOURS') AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + " ) "
+ + " LIMIT 200 "
+ + ") "
+ + " WHERE occupied = 1 "
+ + " GROUP BY time_col "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ double[] expectedOccupiedSlotsCounts2 = new double[] {2, 4, 6, 8, 6, 4, 2};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ Assert.assertEquals(gapFillRows2.size(), expectedOccupiedSlotsCounts2.length);
+ start = dateTimeFormatter.fromFormatToMillis("2021-11-07 04:00:00.000");
+ for (int i = 0; i < expectedOccupiedSlotsCounts2.length; i++) {
+ String firstTimeCol = (String) gapFillRows2.get(i)[0];
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts2[i], gapFillRows2.get(i)[1]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ }
+
+ @Test
+ public void datetimeconvertGapfillTestAggregateAggregateWithOptionalGroupBy() {
+ DateTimeFormatSpec dateTimeFormatter
+ = new DateTimeFormatSpec("1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + "time_col, levelId, SUM(occupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(time_col, "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', "
+ + " '2021-11-07 4:00:00.000', '2021-11-07 12:00:00.000', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " occupied, lotId, levelId"
+ + " FROM ("
+ + " SELECT DATETIMECONVERT(eventTime, '1:MILLISECONDS:EPOCH', "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', '1:HOURS') AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + " ) "
+ + " LIMIT 200 "
+ + ") "
+ + " GROUP BY time_col, levelId "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ double[] expectedOccupiedSlotsCounts1 = new double[] {1, 2, 3, 4, 3, 2, 1, 0};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ Assert.assertEquals(gapFillRows1.size(), expectedOccupiedSlotsCounts1.length * 2);
+ start = dateTimeFormatter.fromFormatToMillis("2021-11-07 04:00:00.000");
+ // Two level rows per time bucket, so iterate over length * 2 result rows.
+ for (int i = 0; i < expectedOccupiedSlotsCounts1.length * 2; i += 2) {
+ String firstTimeCol = (String) gapFillRows1.get(i)[0];
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts1[i / 2], gapFillRows1.get(i)[2]);
+ firstTimeCol = (String) gapFillRows1.get(i + 1)[0];
+ timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts1[i / 2], gapFillRows1.get(i + 1)[2]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+
+ String gapfillQuery2 = "SELECT "
+ + "time_col, levelId, SUM(occupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(time_col, "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', "
+ + " '2021-11-07 4:00:00.000', '2021-11-07 12:00:00.000', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " occupied, lotId, levelId"
+ + " FROM ("
+ + " SELECT DATETIMECONVERT(eventTime, '1:MILLISECONDS:EPOCH', "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', '1:HOURS') AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + " ) "
+ + " LIMIT 200 "
+ + ") "
+ + " WHERE occupied = 1 "
+ + " GROUP BY time_col, levelId "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ double[] expectedOccupiedSlotsCounts2 = new double[] {1, 2, 3, 4, 3, 2, 1};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ Assert.assertEquals(gapFillRows2.size(), expectedOccupiedSlotsCounts2.length * 2);
+ start = dateTimeFormatter.fromFormatToMillis("2021-11-07 04:00:00.000");
+ // Two level rows per time bucket, so iterate over length * 2 result rows.
+ for (int i = 0; i < expectedOccupiedSlotsCounts2.length * 2; i += 2) {
+ String firstTimeCol = (String) gapFillRows2.get(i)[0];
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts2[i / 2], gapFillRows2.get(i)[2]);
+ firstTimeCol = (String) gapFillRows2.get(i + 1)[0];
+ timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts2[i / 2], gapFillRows2.get(i + 1)[2]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ }
+
+ @Test
+ public void datetimeconvertGapfillTestAggregateAggregateWithHavingClause() {
+ String gapfillQuery1 = "SELECT "
+ + "time_col, levelId, SUM(occupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(time_col, "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', "
+ + " '2021-11-07 4:00:00.000', '2021-11-07 12:00:00.000', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)),"
+ + " occupied, lotId, levelId"
+ + " FROM ("
+ + " SELECT DATETIMECONVERT(eventTime, '1:MILLISECONDS:EPOCH', "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', '1:HOURS') AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + " ) "
+ + " LIMIT 200 "
+ + ") "
+ + " GROUP BY time_col, levelId "
+ + " HAVING occupied_slots_count > 0"
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ double[] expectedOccupiedSlotsCounts1 = new double[] {1, 2, 3, 4, 3, 2, 1};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ Assert.assertEquals(gapFillRows1.size(), expectedOccupiedSlotsCounts1.length * 2);
+ DateTimeFormatSpec dateTimeFormatter
+ = new DateTimeFormatSpec("1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+
+ long start = dateTimeFormatter.fromFormatToMillis("2021-11-07 04:00:00.000");
+ // Two level rows per time bucket, so iterate over length * 2 result rows.
+ for (int i = 0; i < expectedOccupiedSlotsCounts1.length * 2; i += 2) {
+ String firstTimeCol = (String) gapFillRows1.get(i)[0];
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts1[i / 2], gapFillRows1.get(i)[2]);
+ firstTimeCol = (String) gapFillRows1.get(i + 1)[0];
+ timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts1[i / 2], gapFillRows1.get(i + 1)[2]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ }
+
+ @Test
+ public void toEpochHoursGapfillTestSelectSelect() {
+ DateTimeFormatSpec dateTimeFormatter = new DateTimeFormatSpec("1:HOURS:EPOCH");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + " GapFill(ToEpochHours(eventTime), '1:HOURS:EPOCH', "
+ + " '454516', '454524', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId "
+ + "FROM parkingData "
+ + "WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + "LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ int[][] expectedOccupiedSlotsCounts1 =
+ new int[][] {{6, 6}, {8, 4}, {10, 2}, {12, 0}, {6, 4}, {4, 6}, {2, 10}, {0, 10}};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ start = dateTimeFormatter.fromFormatToMillis("454516");
+ int index = 0;
+ for (int i = 0; i < expectedOccupiedSlotsCounts1.length; i++) {
+ int ones = expectedOccupiedSlotsCounts1[i][0];
+ int zeros = expectedOccupiedSlotsCounts1[i][1];
+ int total = ones + zeros;
+ for (int k = 0; k < total; k++) {
+ String firstTimeCol = ((Long) gapFillRows1.get(index)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if (gapFillRows1.get(index)[3].equals(1)) {
+ ones--;
+ } else {
+ zeros--;
+ }
+ index++;
+ }
+ Assert.assertEquals(ones, 0);
+ Assert.assertEquals(zeros, 0);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ Assert.assertEquals(gapFillRows1.size(), index);
+
+ String gapfillQuery2 = "SELECT "
+ + "time_col, levelId, lotId, isOccupied, time_col "
+ + "FROM ("
+ + " SELECT GapFill(ToEpochHours(eventTime), '1:HOURS:EPOCH', "
+ + " '454516', '454524', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
+ + " WHERE isOccupied = 1 "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ int[] expectedOccupiedSlotsCounts2 = new int[] {6, 8, 10, 12, 6, 4, 2, 0};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ start = dateTimeFormatter.fromFormatToMillis("454516");
+ index = 0;
+ for (int i = 0; i < expectedOccupiedSlotsCounts2.length; i++) {
+ for (int k = 0; k < expectedOccupiedSlotsCounts2[i]; k++) {
+ String firstTimeCol = ((Long) gapFillRows2.get(index)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(gapFillRows2.get(index)[3], 1);
+ index++;
+ }
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ Assert.assertEquals(gapFillRows2.size(), index);
+ }
+
+ @Test
+ public void toEpochHoursGapfillTestAggregateSelect() {
+ DateTimeFormatSpec dateTimeFormatter = new DateTimeFormatSpec("1:HOURS:EPOCH");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + " GapFill(time_col, '1:HOURS:EPOCH', "
+ + " '454516', '454524', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)),"
+ + " levelId, lotId, occupied "
+ + "FROM ("
+ + " SELECT ToEpochHours(eventTime) AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + ") "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ int[][] expectedOccupiedSlotsCounts1 =
+ new int[][] {{2, 6}, {4, 4}, {6, 2}, {8, 0}, {6, 2}, {4, 4}, {2, 6}, {0, 8}};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ start = dateTimeFormatter.fromFormatToMillis("454516");
+ int index = 0;
+ for (int i = 0; i < expectedOccupiedSlotsCounts1.length; i++) {
+ int ones = expectedOccupiedSlotsCounts1[i][0];
+ int zeros = expectedOccupiedSlotsCounts1[i][1];
+ int total = ones + zeros;
+ for (int k = 0; k < total; k++) {
+ String firstTimeCol = ((Long) gapFillRows1.get(index)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if (gapFillRows1.get(index)[3].equals(1)) {
+ ones--;
+ } else {
+ zeros--;
+ }
+ index++;
+ }
+ Assert.assertEquals(ones, 0);
+ Assert.assertEquals(zeros, 0);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ Assert.assertEquals(gapFillRows1.size(), index);
+
+ String gapfillQuery2 = "SELECT "
+ + " GapFill(time_col, '1:HOURS:EPOCH', "
+ + " '454516', '454524', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)),"
+ + " levelId, lotId, occupied "
+ + "FROM ("
+ + " SELECT ToEpochHours(eventTime) AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + ") "
+ + " WHERE occupied = 1 "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ int[] expectedOccupiedSlotsCounts2 = new int[] {2, 4, 6, 8, 6, 4, 2};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ start = dateTimeFormatter.fromFormatToMillis("454516");
+ index = 0;
+ for (int i = 0; i < expectedOccupiedSlotsCounts2.length; i++) {
+ for (int k = 0; k < expectedOccupiedSlotsCounts2[i]; k++) {
+ String firstTimeCol = ((Long) gapFillRows2.get(index)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(gapFillRows2.get(index)[3], 1);
+ index++;
+ }
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ Assert.assertEquals(gapFillRows2.size(), index);
+ }
+
+ @Test
+ public void toEpochHoursGapfillTestGapfillAggregate() {
+ DateTimeFormatSpec dateTimeFormatter = new DateTimeFormatSpec("1:HOURS:EPOCH");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + "time_col, SUM(isOccupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(ToEpochHours(eventTime), '1:HOURS:EPOCH', "
+ + " '454516', '454524', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
+ + " GROUP BY time_col "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ double[] expectedOccupiedSlotsCounts1 = new double[] {6, 8, 10, 12, 6, 4, 2, 0};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ Assert.assertEquals(gapFillRows1.size(), expectedOccupiedSlotsCounts1.length);
+ start = dateTimeFormatter.fromFormatToMillis("454516");
+ for (int i = 0; i < expectedOccupiedSlotsCounts1.length; i++) {
+ String firstTimeCol = ((Long) gapFillRows1.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts1[i], gapFillRows1.get(i)[1]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+
+ String gapfillQuery2 = "SELECT "
+ + "time_col, SUM(isOccupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(ToEpochHours(eventTime), '1:HOURS:EPOCH', "
+ + " '454516', '454524', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
+ + " WHERE isOccupied = 1 "
+ + " GROUP BY time_col "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ double[] expectedOccupiedSlotsCounts2 = new double[] {6, 8, 10, 12, 6, 4, 2};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ Assert.assertEquals(gapFillRows2.size(), expectedOccupiedSlotsCounts2.length);
+ start = dateTimeFormatter.fromFormatToMillis("454516");
+ for (int i = 0; i < expectedOccupiedSlotsCounts2.length; i++) {
+ String firstTimeCol = ((Long) gapFillRows2.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts2[i], gapFillRows2.get(i)[1]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ }
+
+ @Test
+ public void toEpochHoursGapfillTestGapfillAggregateWithOptionalGroupBy() {
+ DateTimeFormatSpec dateTimeFormatter = new DateTimeFormatSpec("1:HOURS:EPOCH");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + "time_col, levelId, SUM(isOccupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(ToEpochHours(eventTime), '1:HOURS:EPOCH', "
+ + " '454516', '454524', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
+ + " GROUP BY time_col, levelId "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ double[] expectedOccupiedSlotsCountsForLevel11 = new double[] {4, 5, 6, 5, 3, 2, 1, 0};
+ double[] expectedOccupiedSlotsCountsForLevel21 = new double[] {2, 3, 4, 7, 3, 2, 1, 0};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ Assert.assertEquals(gapFillRows1.size(), expectedOccupiedSlotsCountsForLevel11.length * 2);
+ start = dateTimeFormatter.fromFormatToMillis("454516");
+ for (int i = 0; i < expectedOccupiedSlotsCountsForLevel11.length * 2; i += 2) {
+ String firstTimeCol = ((Long) gapFillRows1.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if ("Level_0".equals(gapFillRows1.get(i)[1])) {
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel11[i / 2], gapFillRows1.get(i)[2]);
+ } else {
+ Assert.assertEquals("Level_1", gapFillRows1.get(i)[1]);
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel21[i / 2], gapFillRows1.get(i)[2]);
+ }
+ firstTimeCol = ((Long) gapFillRows1.get(i + 1)[0]).toString();
+ timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if ("Level_0".equals(gapFillRows1.get(i + 1)[1])) {
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel11[i / 2], gapFillRows1.get(i + 1)[2]);
+ } else {
+ Assert.assertEquals("Level_1", gapFillRows1.get(i + 1)[1]);
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel21[i / 2], gapFillRows1.get(i + 1)[2]);
+ }
+ start += dateTimeGranularity.granularityToMillis();
+ }
+
+ String gapfillQuery2 = "SELECT "
+ + "time_col, levelId, SUM(isOccupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(ToEpochHours(eventTime), '1:HOURS:EPOCH', "
+ + " '454516', '454524', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
+ + " WHERE isOccupied = 1 "
+ + " GROUP BY time_col, levelId "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ double[] expectedOccupiedSlotsCountsForLevel12 = new double[] {4, 5, 6, 5, 3, 2, 1};
+ double[] expectedOccupiedSlotsCountsForLevel22 = new double[] {2, 3, 4, 7, 3, 2, 1};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ Assert.assertEquals(gapFillRows2.size(), expectedOccupiedSlotsCountsForLevel12.length * 2);
+ start = dateTimeFormatter.fromFormatToMillis("454516");
+ for (int i = 0; i < expectedOccupiedSlotsCountsForLevel12.length * 2; i += 2) {
+ String firstTimeCol = ((Long) gapFillRows2.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if ("Level_0".equals(gapFillRows2.get(i)[1])) {
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel12[i / 2], gapFillRows2.get(i)[2]);
+ } else {
+ Assert.assertEquals("Level_1", gapFillRows2.get(i)[1]);
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel22[i / 2], gapFillRows2.get(i)[2]);
+ }
+ firstTimeCol = ((Long) gapFillRows2.get(i + 1)[0]).toString();
+ timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if ("Level_0".equals(gapFillRows2.get(i + 1)[1])) {
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel12[i / 2], gapFillRows2.get(i + 1)[2]);
+ } else {
+ Assert.assertEquals("Level_1", gapFillRows2.get(i + 1)[1]);
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel22[i / 2], gapFillRows2.get(i + 1)[2]);
+ }
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ }
+
+ @Test
+ public void toEpochHoursGapfillTestGapfillAggregateWithHavingClause() {
+ DateTimeFormatSpec dateTimeFormatter = new DateTimeFormatSpec("1:HOURS:EPOCH");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + "time_col, levelId, SUM(isOccupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(ToEpochHours(eventTime), '1:HOURS:EPOCH', "
+ + " '454516', '454524', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
+ + " GROUP BY time_col, levelId "
+ + " HAVING occupied_slots_count > 0"
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ double[] expectedOccupiedSlotsCountsForLevel11 = new double[] {4, 5, 6, 5, 3, 2, 1};
+ double[] expectedOccupiedSlotsCountsForLevel21 = new double[] {2, 3, 4, 7, 3, 2, 1};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ Assert.assertEquals(gapFillRows1.size(), expectedOccupiedSlotsCountsForLevel11.length * 2);
+ start = dateTimeFormatter.fromFormatToMillis("454516");
+ for (int i = 0; i < expectedOccupiedSlotsCountsForLevel11.length * 2; i += 2) {
+ String firstTimeCol = ((Long) gapFillRows1.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if ("Level_0".equals(gapFillRows1.get(i)[1])) {
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel11[i / 2], gapFillRows1.get(i)[2]);
+ } else {
+ Assert.assertEquals("Level_1", gapFillRows1.get(i)[1]);
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel21[i / 2], gapFillRows1.get(i)[2]);
+ }
+ firstTimeCol = ((Long) gapFillRows1.get(i + 1)[0]).toString();
+ timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if ("Level_0".equals(gapFillRows1.get(i + 1)[1])) {
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel11[i / 2], gapFillRows1.get(i + 1)[2]);
+ } else {
+ Assert.assertEquals("Level_1", gapFillRows1.get(i + 1)[1]);
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel21[i / 2], gapFillRows1.get(i + 1)[2]);
+ }
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ }
+
+ @Test
+ public void toEpochHoursGapfillTestAggregateAggregate() {
+ DateTimeFormatSpec dateTimeFormatter = new DateTimeFormatSpec("1:HOURS:EPOCH");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + "time_col, SUM(occupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(time_col, '1:HOURS:EPOCH', "
+ + " '454516', '454524', '1:HOURS',"
+ " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " occupied, lotId, levelId"
+ + " FROM ("
+ + " SELECT ToEpochHours(eventTime) AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + " ) "
+ + " LIMIT 200 "
+ + ") "
+ + " GROUP BY time_col "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ double[] expectedOccupiedSlotsCounts1 = new double[] {2, 4, 6, 8, 6, 4, 2, 0};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ Assert.assertEquals(gapFillRows1.size(), expectedOccupiedSlotsCounts1.length);
+ start = dateTimeFormatter.fromFormatToMillis("454516");
+ for (int i = 0; i < expectedOccupiedSlotsCounts1.length; i++) {
+ String firstTimeCol = ((Long) gapFillRows1.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts1[i], gapFillRows1.get(i)[1]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+
+ String gapfillQuery2 = "SELECT "
+ + "time_col, SUM(occupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(time_col, '1:HOURS:EPOCH', "
+ + " '454516', '454524', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " occupied, lotId, levelId"
+ + " FROM ("
+ + " SELECT ToEpochHours(eventTime) AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + " ) "
+ + " LIMIT 200 "
+ + ") "
+ + " WHERE occupied = 1 "
+ + " GROUP BY time_col "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ double[] expectedOccupiedSlotsCounts2 = new double[] {2, 4, 6, 8, 6, 4, 2};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ Assert.assertEquals(gapFillRows2.size(), expectedOccupiedSlotsCounts2.length);
+ start = dateTimeFormatter.fromFormatToMillis("454516");
+ for (int i = 0; i < expectedOccupiedSlotsCounts2.length; i++) {
+ String firstTimeCol = ((Long) gapFillRows2.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts2[i], gapFillRows2.get(i)[1]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ }
+
+ @Test
+ public void toEpochHoursGapfillTestAggregateAggregateWithOptionalGroupBy() {
+ DateTimeFormatSpec dateTimeFormatter = new DateTimeFormatSpec("1:HOURS:EPOCH");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + "time_col, levelId, SUM(occupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(time_col, '1:HOURS:EPOCH', "
+ + " '454516', '454524', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " occupied, lotId, levelId"
+ + " FROM ("
+ + " SELECT ToEpochHours(eventTime) AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + ") "
+ + " LIMIT 200 "
+ + ") "
+ + " GROUP BY time_col, levelId "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ double[] expectedOccupiedSlotsCounts1 = new double[] {1, 2, 3, 4, 3, 2, 1, 0};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ Assert.assertEquals(gapFillRows1.size(), expectedOccupiedSlotsCounts1.length * 2);
+ start = dateTimeFormatter.fromFormatToMillis("454516");
+ for (int i = 0; i < expectedOccupiedSlotsCounts1.length * 2; i += 2) {
+ String firstTimeCol = ((Long) gapFillRows1.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts1[i / 2], gapFillRows1.get(i)[2]);
+ firstTimeCol = ((Long) gapFillRows1.get(i + 1)[0]).toString();
+ timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts1[i / 2], gapFillRows1.get(i + 1)[2]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+
+ String gapfillQuery2 = "SELECT "
+ + "time_col, SUM(occupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(time_col, '1:HOURS:EPOCH', "
+ + " '454516', '454524', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " occupied, lotId, levelId"
+ + " FROM ("
+ + " SELECT ToEpochHours(eventTime) AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + " ) "
+ + " LIMIT 200 "
+ + ") "
+ + " WHERE occupied = 1 "
+ + " GROUP BY time_col, levelId "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ double[] expectedOccupiedSlotsCounts2 = new double[] {1, 2, 3, 4, 3, 2, 1};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ Assert.assertEquals(gapFillRows2.size(), expectedOccupiedSlotsCounts2.length * 2);
+ start = dateTimeFormatter.fromFormatToMillis("454516");
+ for (int i = 0; i < expectedOccupiedSlotsCounts2.length * 2; i += 2) {
+ String firstTimeCol = ((Long) gapFillRows2.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts2[i / 2], gapFillRows2.get(i)[2]);
+ firstTimeCol = ((Long) gapFillRows2.get(i + 1)[0]).toString();
+ timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts2[i / 2], gapFillRows2.get(i + 1)[2]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ }
+
+ @Test
+ public void toEpochHoursGapfillTestAggregateAggregateWithHavingClause() {
+ String gapfillQuery1 = "SELECT "
+ + "time_col, levelId, SUM(occupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(time_col, '1:HOURS:EPOCH', "
+ + " '454516', '454524', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " occupied, lotId, levelId"
+ + " FROM ("
+ + " SELECT ToEpochHours(eventTime) AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + ") "
+ + " LIMIT 200 "
+ + ") "
+ + " GROUP BY time_col, levelId "
+ + " HAVING occupied_slots_count > 0"
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ double[] expectedOccupiedSlotsCounts1 = new double[] {1, 2, 3, 4, 3, 2, 1};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ Assert.assertEquals(gapFillRows1.size(), expectedOccupiedSlotsCounts1.length * 2);
+ DateTimeFormatSpec dateTimeFormatter = new DateTimeFormatSpec("1:HOURS:EPOCH");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+
+ long start = dateTimeFormatter.fromFormatToMillis("454516");
+ for (int i = 0; i < expectedOccupiedSlotsCounts1.length * 2; i += 2) {
+ String firstTimeCol = ((Long) gapFillRows1.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts1[i / 2], gapFillRows1.get(i)[2]);
+ firstTimeCol = ((Long) gapFillRows1.get(i + 1)[0]).toString();
+ timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts1[i / 2], gapFillRows1.get(i + 1)[2]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ }
+
+ @Test
+ public void toEpochMinutesRoundedHoursGapfillTestSelectSelect() {
+ DateTimeFormatSpec dateTimeFormatter = new DateTimeFormatSpec("1:MINUTES:EPOCH");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT GapFill(ToEpochMinutesRounded(eventTime, 60), '1:MINUTES:EPOCH', "
+ + " '27270960', '27271440', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ int[][] expectedOccupiedSlotsCounts1 =
+ new int[][] {{6, 6}, {8, 4}, {10, 2}, {12, 0}, {6, 4}, {4, 6}, {2, 10}, {0, 10}};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ start = dateTimeFormatter.fromFormatToMillis("27270960");
+ int index = 0;
+ for (int i = 0; i < expectedOccupiedSlotsCounts1.length; i++) {
+ int ones = expectedOccupiedSlotsCounts1[i][0];
+ int zeros = expectedOccupiedSlotsCounts1[i][1];
+ int total = ones + zeros;
+ for (int k = 0; k < total; k++) {
+ String firstTimeCol = ((Long) gapFillRows1.get(index)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if (gapFillRows1.get(index)[3].equals(1)) {
+ ones--;
+ } else {
+ zeros--;
+ }
+ index++;
+ }
+ Assert.assertEquals(ones, 0);
+ Assert.assertEquals(zeros, 0);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ Assert.assertEquals(gapFillRows1.size(), index);
+
+ String gapfillQuery2 = "SELECT "
+ + "time_col, levelId, lotId, isOccupied, time_col "
+ + "FROM ("
+ + " SELECT GapFill(ToEpochMinutesRounded(eventTime, 60), '1:MINUTES:EPOCH', "
+ + " '27270960', '27271440', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
+ + " WHERE isOccupied = 1 "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ int[] expectedOccupiedSlotsCounts2 = new int[] {6, 8, 10, 12, 6, 4, 2, 0};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ start = dateTimeFormatter.fromFormatToMillis("27270960");
+ index = 0;
+ for (int i = 0; i < expectedOccupiedSlotsCounts2.length; i++) {
+ for (int k = 0; k < expectedOccupiedSlotsCounts2[i]; k++) {
+ String firstTimeCol = ((Long) gapFillRows2.get(index)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(gapFillRows2.get(index)[3], 1);
+ index++;
+ }
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ Assert.assertEquals(gapFillRows2.size(), index);
+ }
+
+ @Test
+ public void toEpochMinutesRoundedHoursGapfillTestAggregateSelect() {
+ DateTimeFormatSpec dateTimeFormatter = new DateTimeFormatSpec("1:MINUTES:EPOCH");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + "GapFill(time_col, '1:MINUTES:EPOCH', "
+ + " '27270960', '27271440', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)), levelId, lotId, occupied "
+ + "FROM ("
+ + " SELECT ToEpochMinutesRounded(eventTime, 60) AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + ") "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ int[][] expectedOccupiedSlotsCounts1 =
+ new int[][] {{2, 6}, {4, 4}, {6, 2}, {8, 0}, {6, 2}, {4, 4}, {2, 6}, {0, 8}};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ start = dateTimeFormatter.fromFormatToMillis("27270960");
+ int index = 0;
+ for (int i = 0; i < expectedOccupiedSlotsCounts1.length; i++) {
+ int ones = expectedOccupiedSlotsCounts1[i][0];
+ int zeros = expectedOccupiedSlotsCounts1[i][1];
+ int total = ones + zeros;
+ for (int k = 0; k < total; k++) {
+ String firstTimeCol = ((Long) gapFillRows1.get(index)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if (gapFillRows1.get(index)[3].equals(1)) {
+ ones--;
+ } else {
+ zeros--;
+ }
+ index++;
+ }
+ Assert.assertEquals(ones, 0);
+ Assert.assertEquals(zeros, 0);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ Assert.assertEquals(gapFillRows1.size(), index);
+
+ String gapfillQuery2 = "SELECT "
+ + "GapFill(time_col, '1:MINUTES:EPOCH', "
+ + " '27270960', '27271440', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)), levelId, lotId, occupied "
+ + "FROM ("
+ + " SELECT ToEpochMinutesRounded(eventTime, 60) AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + ") "
+ + " WHERE occupied = 1 "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ int[] expectedOccupiedSlotsCounts2 = new int[] {2, 4, 6, 8, 6, 4, 2};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ start = dateTimeFormatter.fromFormatToMillis("27270960");
+ index = 0;
+ for (int i = 0; i < expectedOccupiedSlotsCounts2.length; i++) {
+ for (int k = 0; k < expectedOccupiedSlotsCounts2[i]; k++) {
+ String firstTimeCol = ((Long) gapFillRows2.get(index)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(gapFillRows2.get(index)[3], 1);
+ index++;
+ }
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ Assert.assertEquals(gapFillRows2.size(), index);
+ }
+
+ @Test
+ public void toEpochMinutesRoundedHoursGapfillTestGapfillAggregate() {
+ DateTimeFormatSpec dateTimeFormatter = new DateTimeFormatSpec("1:MINUTES:EPOCH");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + "time_col, SUM(isOccupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(ToEpochMinutesRounded(eventTime, 60), '1:MINUTES:EPOCH', "
+ + " '27270960', '27271440', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
+ + " GROUP BY time_col "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ double[] expectedOccupiedSlotsCounts1 = new double[] {6, 8, 10, 12, 6, 4, 2, 0};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ Assert.assertEquals(gapFillRows1.size(), expectedOccupiedSlotsCounts1.length);
+ start = dateTimeFormatter.fromFormatToMillis("27270960");
+ for (int i = 0; i < expectedOccupiedSlotsCounts1.length; i++) {
+ String firstTimeCol = ((Long) gapFillRows1.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts1[i], gapFillRows1.get(i)[1]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+
+ String gapfillQuery2 = "SELECT "
+ + "time_col, SUM(isOccupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(ToEpochMinutesRounded(eventTime, 60), '1:MINUTES:EPOCH', "
+ + " '27270960', '27271440', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
+ + " WHERE isOccupied = 1 "
+ + " GROUP BY time_col "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ double[] expectedOccupiedSlotsCounts2 = new double[] {6, 8, 10, 12, 6, 4, 2};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ Assert.assertEquals(gapFillRows2.size(), expectedOccupiedSlotsCounts2.length);
+ start = dateTimeFormatter.fromFormatToMillis("27270960");
+ for (int i = 0; i < expectedOccupiedSlotsCounts2.length; i++) {
+ String firstTimeCol = ((Long) gapFillRows2.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts2[i], gapFillRows2.get(i)[1]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ }
+
+ @Test
+ public void toEpochMinutesRoundedHoursGapfillTestGapfillAggregateWithOptionalGroupBy() {
+ DateTimeFormatSpec dateTimeFormatter = new DateTimeFormatSpec("1:MINUTES:EPOCH");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + "time_col, levelId, SUM(isOccupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(ToEpochMinutesRounded(eventTime, 60), '1:MINUTES:EPOCH', "
+ + " '27270960', '27271440', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
+ + " GROUP BY time_col, levelId "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ double[] expectedOccupiedSlotsCountsForLevel11 = new double[] {4, 5, 6, 5, 3, 2, 1, 0};
+ double[] expectedOccupiedSlotsCountsForLevel21 = new double[] {2, 3, 4, 7, 3, 2, 1, 0};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ Assert.assertEquals(gapFillRows1.size(), expectedOccupiedSlotsCountsForLevel11.length * 2);
+ start = dateTimeFormatter.fromFormatToMillis("27270960");
+ for (int i = 0; i < expectedOccupiedSlotsCountsForLevel11.length * 2; i += 2) {
+ String firstTimeCol = ((Long) gapFillRows1.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if ("Level_0".equals(gapFillRows1.get(i)[1])) {
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel11[i / 2], gapFillRows1.get(i)[2]);
+ } else {
+ Assert.assertEquals("Level_1", gapFillRows1.get(i)[1]);
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel21[i / 2], gapFillRows1.get(i)[2]);
+ }
+ firstTimeCol = ((Long) gapFillRows1.get(i + 1)[0]).toString();
+ timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if ("Level_0".equals(gapFillRows1.get(i + 1)[1])) {
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel11[i / 2], gapFillRows1.get(i + 1)[2]);
+ } else {
+ Assert.assertEquals("Level_1", gapFillRows1.get(i + 1)[1]);
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel21[i / 2], gapFillRows1.get(i + 1)[2]);
+ }
+ start += dateTimeGranularity.granularityToMillis();
+ }
+
+ String gapfillQuery2 = "SELECT "
+ + "time_col, levelId, SUM(isOccupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(ToEpochMinutesRounded(eventTime, 60), '1:MINUTES:EPOCH', "
+ + " '27270960', '27271440', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
+ + " WHERE isOccupied = 1 "
+ + " GROUP BY time_col, levelId "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ double[] expectedOccupiedSlotsCountsForLevel12 = new double[] {4, 5, 6, 5, 3, 2, 1};
+ double[] expectedOccupiedSlotsCountsForLevel22 = new double[] {2, 3, 4, 7, 3, 2, 1};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ Assert.assertEquals(gapFillRows2.size(), expectedOccupiedSlotsCountsForLevel12.length * 2);
+ start = dateTimeFormatter.fromFormatToMillis("27270960");
+ for (int i = 0; i < expectedOccupiedSlotsCountsForLevel12.length * 2; i += 2) {
+ String firstTimeCol = ((Long) gapFillRows2.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if ("Level_0".equals(gapFillRows2.get(i)[1])) {
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel12[i / 2], gapFillRows2.get(i)[2]);
+ } else {
+ Assert.assertEquals("Level_1", gapFillRows2.get(i)[1]);
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel22[i / 2], gapFillRows2.get(i)[2]);
+ }
+ firstTimeCol = ((Long) gapFillRows2.get(i + 1)[0]).toString();
+ timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if ("Level_0".equals(gapFillRows2.get(i + 1)[1])) {
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel12[i / 2], gapFillRows2.get(i + 1)[2]);
+ } else {
+ Assert.assertEquals("Level_1", gapFillRows2.get(i + 1)[1]);
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel22[i / 2], gapFillRows2.get(i + 1)[2]);
+ }
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ }
+
+ @Test
+ public void toEpochMinutesRoundedHoursGapfillTestGapfillAggregateWithHavingClause() {
+ DateTimeFormatSpec dateTimeFormatter = new DateTimeFormatSpec("1:MINUTES:EPOCH");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + "time_col, levelId, SUM(isOccupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(ToEpochMinutesRounded(eventTime, 60), '1:MINUTES:EPOCH', "
+ + " '27270960', '27271440', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
+ + " GROUP BY time_col, levelId "
+ + " HAVING occupied_slots_count > 0"
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ double[] expectedOccupiedSlotsCountsForLevel11 = new double[] {4, 5, 6, 5, 3, 2, 1};
+ double[] expectedOccupiedSlotsCountsForLevel21 = new double[] {2, 3, 4, 7, 3, 2, 1};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ Assert.assertEquals(gapFillRows1.size(), expectedOccupiedSlotsCountsForLevel11.length * 2);
+ start = dateTimeFormatter.fromFormatToMillis("27270960");
+ for (int i = 0; i < expectedOccupiedSlotsCountsForLevel11.length * 2; i += 2) {
+ String firstTimeCol = ((Long) gapFillRows1.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if ("Level_0".equals(gapFillRows1.get(i)[1])) {
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel11[i / 2], gapFillRows1.get(i)[2]);
+ } else {
+ Assert.assertEquals("Level_1", gapFillRows1.get(i)[1]);
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel21[i / 2], gapFillRows1.get(i)[2]);
+ }
+ firstTimeCol = ((Long) gapFillRows1.get(i + 1)[0]).toString();
+ timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if ("Level_0".equals(gapFillRows1.get(i + 1)[1])) {
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel11[i / 2], gapFillRows1.get(i + 1)[2]);
+ } else {
+ Assert.assertEquals("Level_1", gapFillRows1.get(i + 1)[1]);
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel21[i / 2], gapFillRows1.get(i + 1)[2]);
+ }
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ }
+
+ @Test
+ public void toEpochMinutesRoundedHoursGapfillTestAggregateAggregate() {
+ DateTimeFormatSpec dateTimeFormatter = new DateTimeFormatSpec("1:MINUTES:EPOCH");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + "time_col, SUM(occupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(time_col, '1:MINUTES:EPOCH', "
+ + " '27270960', '27271440', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " occupied, lotId, levelId"
+ + " FROM ("
+ + " SELECT ToEpochMinutesRounded(eventTime, 60) AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + " ) "
+ + " LIMIT 200 "
+ + ") "
+ + " GROUP BY time_col "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ double[] expectedOccupiedSlotsCounts1 = new double[] {2, 4, 6, 8, 6, 4, 2, 0};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ Assert.assertEquals(gapFillRows1.size(), expectedOccupiedSlotsCounts1.length);
+ start = dateTimeFormatter.fromFormatToMillis("27270960");
+ for (int i = 0; i < expectedOccupiedSlotsCounts1.length; i++) {
+ String firstTimeCol = ((Long) gapFillRows1.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts1[i], gapFillRows1.get(i)[1]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+
+ String gapfillQuery2 = "SELECT "
+ + "time_col, SUM(occupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(time_col, '1:MINUTES:EPOCH', "
+ + " '27270960', '27271440', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " occupied, lotId, levelId"
+ + " FROM ("
+ + " SELECT ToEpochMinutesRounded(eventTime, 60) time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + " ) "
+ + " LIMIT 200 "
+ + ") "
+ + " WHERE occupied = 1 "
+ + " GROUP BY time_col "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ double[] expectedOccupiedSlotsCounts2 = new double[] {2, 4, 6, 8, 6, 4, 2};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ Assert.assertEquals(gapFillRows2.size(), expectedOccupiedSlotsCounts2.length);
+ start = dateTimeFormatter.fromFormatToMillis("27270960");
+ for (int i = 0; i < expectedOccupiedSlotsCounts2.length; i++) {
+ String firstTimeCol = ((Long) gapFillRows2.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts2[i], gapFillRows2.get(i)[1]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ }
+
+ @Test
+ public void toEpochMinutesRoundedHoursGapfillTestAggregateAggregateWithOptionalGroupBy() {
+ DateTimeFormatSpec dateTimeFormatter = new DateTimeFormatSpec("1:MINUTES:EPOCH");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + "time_col, levelId, SUM(occupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(time_col, '1:MINUTES:EPOCH', "
+ + " '27270960', '27271440', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " occupied, lotId, levelId"
+ + " FROM ("
+ + " SELECT ToEpochMinutesRounded(eventTime, 60) AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + " ) "
+ + " LIMIT 200 "
+ + ") "
+ + " GROUP BY time_col, levelId "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ double[] expectedOccupiedSlotsCounts1 = new double[] {1, 2, 3, 4, 3, 2, 1, 0};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ Assert.assertEquals(gapFillRows1.size(), expectedOccupiedSlotsCounts1.length * 2);
+ start = dateTimeFormatter.fromFormatToMillis("27270960");
+ for (int i = 0; i < expectedOccupiedSlotsCounts1.length * 2; i += 2) {
+ String firstTimeCol = ((Long) gapFillRows1.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts1[i / 2], gapFillRows1.get(i)[2]);
+ firstTimeCol = ((Long) gapFillRows1.get(i + 1)[0]).toString();
+ timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts1[i / 2], gapFillRows1.get(i + 1)[2]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+
+ String gapfillQuery2 = "SELECT "
+ + "time_col, SUM(occupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(time_col, '1:MINUTES:EPOCH', "
+ + " '27270960', '27271440', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " occupied, lotId, levelId"
+ + " FROM ("
Review comment:
Nice, so we can take arbitrary levels of nested queries?
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/plan/GapfillSelectionPlanNode.java
##########
@@ -0,0 +1,90 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.core.plan;
+
+import com.google.common.base.Preconditions;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Set;
+import org.apache.pinot.common.request.context.ExpressionContext;
+import org.apache.pinot.core.common.Operator;
+import org.apache.pinot.core.operator.blocks.IntermediateResultsBlock;
+import org.apache.pinot.core.operator.query.SelectionOnlyOperator;
+import org.apache.pinot.core.operator.transform.TransformOperator;
+import org.apache.pinot.core.query.request.context.QueryContext;
+import org.apache.pinot.core.query.selection.SelectionOperatorUtils;
+import org.apache.pinot.core.util.GapfillUtils;
+import org.apache.pinot.segment.spi.IndexSegment;
+
+
+/**
+ * The <code>GapfillSelectionPlanNode</code> class provides the execution
+ * plan for a pre-aggregate gapfill query on a single segment.
+ */
+public class GapfillSelectionPlanNode implements PlanNode {
+ private final IndexSegment _indexSegment;
+ private final QueryContext _queryContext;
+
+ public GapfillSelectionPlanNode(IndexSegment indexSegment, QueryContext queryContext) {
+ _indexSegment = indexSegment;
+ _queryContext = queryContext;
+ }
+
+ @Override
+ public Operator<IntermediateResultsBlock> run() {
+ int limit = _queryContext.getLimit();
+
+ QueryContext queryContext = getSelectQueryContext();
+ Preconditions.checkArgument(queryContext.getOrderByExpressions() == null,
+ "The gapfill query should not have orderby expression.");
Review comment:
Why do we have this limitation?
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/reduce/filter/RowMatcher.java
##########
@@ -0,0 +1,49 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.core.query.reduce.filter;
+
+import org.apache.pinot.common.request.context.FilterContext;
+
+
+/**
+ * Filter matcher for the rows.
+ */
+public interface RowMatcher {
+ /**
+ * Returns {@code true} if the given row matches the filter, {@code false} otherwise.
+ */
+ boolean isMatch(Object[] row);
+
+ /**
+ * Helper method to construct a RowMatcher based on the given filter.
+ */
+ static RowMatcher getRowMatcher(FilterContext filter, ValueExtractorFactory valueExtractorFactory) {
Review comment:
Let's add a `RowMatcherFactory` and move this method there
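The suggested refactor keeps the contract on the interface and moves construction into a dedicated class. A minimal, self-contained sketch of that shape (not the actual Pinot API — `FilterContext` is replaced here by a plain predicate, and all names are illustrative assumptions):

```java
import java.util.function.Predicate;

// The interface keeps only the matching contract.
interface RowMatcher {
  boolean isMatch(Object[] row);
}

// Construction logic lives in a dedicated factory instead of a
// static method on the interface itself.
final class RowMatcherFactory {
  private RowMatcherFactory() {
  }

  // In the real code this would take a FilterContext and a
  // ValueExtractorFactory; a predicate stands in for them here.
  static RowMatcher getRowMatcher(Predicate<Object[]> filter) {
    return filter::test;
  }
}
```

Callers then depend on `RowMatcherFactory.getRowMatcher(...)`, and the interface stays free of construction concerns.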
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/data/table/IndexedTable.java
##########
@@ -65,10 +66,15 @@ protected IndexedTable(DataSchema dataSchema, QueryContext queryContext, int res
_lookupMap = lookupMap;
_resultSize = resultSize;
- List<ExpressionContext> groupByExpressions = queryContext.getGroupByExpressions();
+ List<ExpressionContext> groupByExpressions;
+ if (queryContext.getGapfillType() != GapfillUtils.GapfillType.NONE) {
Review comment:
Suggest using `null` to represent no gap-fill
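For illustration, dropping the `NONE` sentinel and modeling "no gap-fill" as a nullable field might look like this (a hypothetical simplified sketch, not the actual `QueryContext`):

```java
// Hypothetical enum without a NONE sentinel; absence is modeled as null.
enum GapfillType {
  GAP_FILL, AGGREGATE_GAP_FILL, GAP_FILL_AGGREGATE, AGGREGATE_GAP_FILL_AGGREGATE
}

class QueryContextSketch {
  // null means the query has no gap-fill at all.
  private final GapfillType _gapfillType;

  QueryContextSketch(GapfillType gapfillType) {
    _gapfillType = gapfillType;
  }

  boolean isGapfillQuery() {
    return _gapfillType != null;
  }
}
```

Checks like `gapfillType != GapfillUtils.GapfillType.NONE` then become plain null checks, and the enum only enumerates real gap-fill variants.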
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/data/table/IndexedTable.java
##########
@@ -65,10 +66,15 @@ protected IndexedTable(DataSchema dataSchema, QueryContext queryContext, int res
_lookupMap = lookupMap;
_resultSize = resultSize;
- List<ExpressionContext> groupByExpressions = queryContext.getGroupByExpressions();
+ List<ExpressionContext> groupByExpressions;
+ if (queryContext.getGapfillType() != GapfillUtils.GapfillType.NONE) {
+ groupByExpressions = GapfillUtils.getGroupByExpressions(queryContext);
+ } else {
+ groupByExpressions = queryContext.getGroupByExpressions();
+ }
+ _aggregationFunctions = queryContext.getAggregationFunctions();
Review comment:
(minor) Don't move this, as we want to keep the logic related to `groupByExpressions` together
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/reduce/PreAggregateGapfillFilterHandler.java
##########
@@ -0,0 +1,74 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.core.query.reduce;
+
+import java.util.HashMap;
+import java.util.Map;
+import org.apache.pinot.common.request.context.ExpressionContext;
+import org.apache.pinot.common.request.context.FilterContext;
+import org.apache.pinot.common.utils.DataSchema;
+import org.apache.pinot.core.query.reduce.filter.ColumnValueExtractor;
+import org.apache.pinot.core.query.reduce.filter.LiteralValueExtractor;
+import org.apache.pinot.core.query.reduce.filter.RowMatcher;
+import org.apache.pinot.core.query.reduce.filter.ValueExtractor;
+import org.apache.pinot.core.query.reduce.filter.ValueExtractorFactory;
+import org.apache.pinot.core.util.GapfillUtils;
+
+/**
+ * Handler for Filter clause of PreAggregateGapFill.
+ */
+public class PreAggregateGapfillFilterHandler implements ValueExtractorFactory {
+ private final RowMatcher _rowMatcher;
+ private final DataSchema _dataSchema;
+ private final Map<String, Integer> _indexes;
+
+ public PreAggregateGapfillFilterHandler(FilterContext filter, DataSchema dataSchema) {
+ _dataSchema = dataSchema;
+ _indexes = new HashMap<>();
+ for (int i = 0; i < _dataSchema.size(); i++) {
+ _indexes.put(_dataSchema.getColumnName(i), i);
+ }
+ _rowMatcher = RowMatcher.getRowMatcher(filter, this);
+ }
+
+ /**
+ * Returns {@code true} if the given row matches the HAVING clause, {@code false} otherwise.
+ */
+ public boolean isMatch(Object[] row) {
+ return _rowMatcher.isMatch(row);
+ }
+
+ /**
+ * Returns a ValueExtractor based on the given expression.
+ */
+ @Override
+ public ValueExtractor getValueExtractor(ExpressionContext expression) {
+ expression = GapfillUtils.stripGapfill(expression);
+ if (expression.getType() == ExpressionContext.Type.LITERAL) {
+ // Literal
+ return new LiteralValueExtractor(expression.getLiteral());
+ }
+
+ if (expression.getType() == ExpressionContext.Type.IDENTIFIER) {
+ return new ColumnValueExtractor(_indexes.get(expression.getIdentifier()), _dataSchema);
+ } else {
+ return new ColumnValueExtractor(_indexes.get(expression.getFunction().toString()), _dataSchema);
Review comment:
This does not always work for aggregation functions because the column name in the data schema might not match `FunctionContext.toString()`. You may refer to `PostAggregationHandler`, which does not rely on the column name to find the index
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/request/context/QueryContext.java
##########
@@ -85,6 +86,7 @@
// Keep the BrokerRequest to make incremental changes
// TODO: Remove it once the whole query engine is using the QueryContext
private final BrokerRequest _brokerRequest;
+ private QueryContext _subqueryContext;
Review comment:
Shall we add `_subqueryContext` and `_gapfillType` as final fields, and set their value in the builder similar to other fields?
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/plan/CombinePlanNode.java
##########
@@ -161,8 +162,18 @@ public BaseCombineOperator run() {
// Streaming query (only support selection only)
return new StreamingSelectionOnlyCombineOperator(operators, _queryContext, _executorService, _streamObserver);
}
+ GapfillUtils.GapfillType gapfillType = _queryContext.getGapfillType();
if (QueryContextUtils.isAggregationQuery(_queryContext)) {
- if (_queryContext.getGroupByExpressions() == null) {
+ if (gapfillType == GapfillUtils.GapfillType.AGGREGATE_GAP_FILL_AGGREGATE) {
+ _queryContext.getSubQueryContext().getSubQueryContext().setEndTimeMs(_queryContext.getEndTimeMs());
+ return new GroupByOrderByCombineOperator(
Review comment:
(code format) Can you please apply the latest [Pinot Style](https://docs.pinot.apache.org/developers/developers-and-contributors/code-setup#intellij) and reformat the changed files?
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/plan/CombinePlanNode.java
##########
@@ -180,6 +191,8 @@ public BaseCombineOperator run() {
// Selection order-by
return new SelectionOrderByCombineOperator(operators, _queryContext, _executorService);
}
+ } else if (gapfillType != GapfillUtils.GapfillType.NONE) {
+ return new SelectionOnlyCombineOperator(operators, _queryContext, _executorService);
Review comment:
Is this correct? Should we construct the operator based on whether the subquery has order-by?
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/reduce/PreAggregateGapfillFilterHandler.java
##########
@@ -0,0 +1,74 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.core.query.reduce;
+
+import java.util.HashMap;
+import java.util.Map;
+import org.apache.pinot.common.request.context.ExpressionContext;
+import org.apache.pinot.common.request.context.FilterContext;
+import org.apache.pinot.common.utils.DataSchema;
+import org.apache.pinot.core.query.reduce.filter.ColumnValueExtractor;
+import org.apache.pinot.core.query.reduce.filter.LiteralValueExtractor;
+import org.apache.pinot.core.query.reduce.filter.RowMatcher;
+import org.apache.pinot.core.query.reduce.filter.ValueExtractor;
+import org.apache.pinot.core.query.reduce.filter.ValueExtractorFactory;
+import org.apache.pinot.core.util.GapfillUtils;
+
+/**
+ * Handler for Filter clause of PreAggregateGapFill.
+ */
+public class PreAggregateGapfillFilterHandler implements ValueExtractorFactory {
Review comment:
Since we are going to deprecate `PostAggregateGapfill` and only keep this, let's rename it to `GapfillFilterHandler`? Same for other classes with `PreAggregate`
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/reduce/PreAggregationGapFillDataTableReducer.java
##########
@@ -0,0 +1,775 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.core.query.reduce;
+
+import com.google.common.base.Preconditions;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.Comparator;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.Iterator;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+import java.util.PriorityQueue;
+import java.util.Set;
+import java.util.concurrent.CountDownLatch;
+import java.util.concurrent.Future;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.TimeoutException;
+import org.apache.pinot.common.exception.QueryException;
+import org.apache.pinot.common.metrics.BrokerMetrics;
+import org.apache.pinot.common.request.context.ExpressionContext;
+import org.apache.pinot.common.request.context.FunctionContext;
+import org.apache.pinot.common.response.broker.BrokerResponseNative;
+import org.apache.pinot.common.response.broker.QueryProcessingException;
+import org.apache.pinot.common.response.broker.ResultTable;
+import org.apache.pinot.common.utils.DataSchema;
+import org.apache.pinot.common.utils.DataSchema.ColumnDataType;
+import org.apache.pinot.common.utils.DataTable;
+import org.apache.pinot.core.common.BlockValSet;
+import org.apache.pinot.core.data.table.ConcurrentIndexedTable;
+import org.apache.pinot.core.data.table.IndexedTable;
+import org.apache.pinot.core.data.table.Key;
+import org.apache.pinot.core.data.table.Record;
+import org.apache.pinot.core.data.table.SimpleIndexedTable;
+import org.apache.pinot.core.data.table.UnboundedConcurrentIndexedTable;
+import org.apache.pinot.core.operator.combine.GroupByOrderByCombineOperator;
+import org.apache.pinot.core.query.aggregation.function.AggregationFunction;
+import org.apache.pinot.core.query.aggregation.function.AggregationFunctionFactory;
+import org.apache.pinot.core.query.aggregation.groupby.GroupByResultHolder;
+import org.apache.pinot.core.query.request.context.QueryContext;
+import org.apache.pinot.core.query.selection.SelectionOperatorUtils;
+import org.apache.pinot.core.transport.ServerRoutingInstance;
+import org.apache.pinot.core.util.GapfillUtils;
+import org.apache.pinot.core.util.GroupByUtils;
+import org.apache.pinot.core.util.trace.TraceRunnable;
+import org.apache.pinot.spi.data.DateTimeFormatSpec;
+import org.apache.pinot.spi.data.DateTimeGranularitySpec;
+
+/**
+ * Helper class to reduce and set pre-aggregation gapfill results into the BrokerResponseNative.
+ */
+@SuppressWarnings({"rawtypes", "unchecked"})
+public class PreAggregationGapFillDataTableReducer implements DataTableReducer {
+ private static final int MIN_DATA_TABLES_FOR_CONCURRENT_REDUCE = 2; // TBD, find a better value.
+
+ private final QueryContext _queryContext;
+
+ private final int _limitForAggregatedResult;
+ private int _limitForGapfilledResult;
+
+ private final DateTimeGranularitySpec _dateTimeGranularity;
+ private final DateTimeFormatSpec _dateTimeFormatter;
+ private final long _startMs;
+ private final long _endMs;
+ private final long _timeBucketSize;
+
+ private final List<Integer> _groupByKeyIndexes;
+ private boolean [] _isGroupBySelections;
+ private final Set<Key> _groupByKeys;
+ private final Map<Key, Object[]> _previousByGroupKey;
+ private final Map<String, ExpressionContext> _fillExpressions;
+ private final List<ExpressionContext> _timeSeries;
+ private final GapfillUtils.GapfillType _gapfillType;
+
+ PreAggregationGapFillDataTableReducer(QueryContext queryContext) {
+ _queryContext = queryContext;
+ _gapfillType = queryContext.getGapfillType();
+ _limitForAggregatedResult = queryContext.getLimit();
+ if (_gapfillType == GapfillUtils.GapfillType.AGGREGATE_GAP_FILL
+ || _gapfillType == GapfillUtils.GapfillType.GAP_FILL) {
+ _limitForGapfilledResult = queryContext.getLimit();
+ } else {
+ _limitForGapfilledResult = queryContext.getSubQueryContext().getLimit();
+ }
+
+ ExpressionContext gapFillSelection = GapfillUtils.getGapfillExpressionContext(queryContext);
+
+ Preconditions.checkArgument(
+ gapFillSelection != null && gapFillSelection.getFunction() != null, "Gapfill expression should be a function.");
+ List<ExpressionContext> args = gapFillSelection.getFunction().getArguments();
+ Preconditions.checkArgument(
+ args.size() > 5, "PreAggregateGapFill does not have the correct number of arguments.");
+ Preconditions.checkArgument(
+ args.get(1).getLiteral() != null, "The second argument of PreAggregateGapFill should be the time format.");
+ Preconditions.checkArgument(
+ args.get(2).getLiteral() != null, "The third argument of PreAggregateGapFill should be the start time.");
+ Preconditions.checkArgument(
+ args.get(3).getLiteral() != null, "The fourth argument of PreAggregateGapFill should be the end time.");
+ Preconditions.checkArgument(
+ args.get(4).getLiteral() != null, "The fifth argument of PreAggregateGapFill should be the time bucket size.");
+
+ _dateTimeFormatter = new DateTimeFormatSpec(args.get(1).getLiteral());
+ _dateTimeGranularity = new DateTimeGranularitySpec(args.get(4).getLiteral());
+ String start = args.get(2).getLiteral();
+ _startMs = truncate(_dateTimeFormatter.fromFormatToMillis(start));
+ String end = args.get(3).getLiteral();
+ _endMs = truncate(_dateTimeFormatter.fromFormatToMillis(end));
+ _timeBucketSize = _dateTimeGranularity.granularityToMillis();
+
+ _fillExpressions = GapfillUtils.getFillExpressions(gapFillSelection);
+
+ _previousByGroupKey = new HashMap<>();
+ _groupByKeyIndexes = new ArrayList<>();
+ _groupByKeys = new HashSet<>();
+
+ ExpressionContext timeseriesOn = GapfillUtils.getTimeSeriesOnExpressionContext(gapFillSelection);
+ Preconditions.checkArgument(timeseriesOn != null, "The timeseriesOn expression should be specified.");
+ _timeSeries = timeseriesOn.getFunction().getArguments();
+ }
+
+ private void replaceColumnNameWithAlias(DataSchema dataSchema) {
+ QueryContext queryContext;
+ if (_gapfillType == GapfillUtils.GapfillType.AGGREGATE_GAP_FILL_AGGREGATE) {
+ queryContext = _queryContext.getSubQueryContext().getSubQueryContext();
+ } else if (_gapfillType == GapfillUtils.GapfillType.GAP_FILL) {
+ queryContext = _queryContext;
+ } else {
+ queryContext = _queryContext.getSubQueryContext();
+ }
+ List<String> aliasList = queryContext.getAliasList();
+ Map<String, String> columnNameToAliasMap = new HashMap<>();
+ for (int i = 0; i < aliasList.size(); i++) {
+ if (aliasList.get(i) != null) {
+ ExpressionContext selection = queryContext.getSelectExpressions().get(i);
+ if (GapfillUtils.isGapfill(selection)) {
+ selection = selection.getFunction().getArguments().get(0);
+ }
+ columnNameToAliasMap.put(selection.toString(), aliasList.get(i));
+ }
+ }
+ for (int i = 0; i < dataSchema.getColumnNames().length; i++) {
+ if (columnNameToAliasMap.containsKey(dataSchema.getColumnNames()[i])) {
+ dataSchema.getColumnNames()[i] = columnNameToAliasMap.get(dataSchema.getColumnNames()[i]);
+ }
+ }
+ }
+
+ /**
+ * Computes the number of reduce threads to use per query.
+ * <ul>
+ * <li> Use single thread if number of data tables to reduce is less than
+ * {@value #MIN_DATA_TABLES_FOR_CONCURRENT_REDUCE}.</li>
+ * <li> Else, use min of max allowed reduce threads per query, and number of data tables.</li>
+ * </ul>
+ *
+ * @param numDataTables Number of data tables to reduce
+ * @param maxReduceThreadsPerQuery Max allowed reduce threads per query
+ * @return Number of reduce threads to use for the query
+ */
+ private int getNumReduceThreadsToUse(int numDataTables, int maxReduceThreadsPerQuery) {
+ // Use single thread if number of data tables < MIN_DATA_TABLES_FOR_CONCURRENT_REDUCE.
+ if (numDataTables < MIN_DATA_TABLES_FOR_CONCURRENT_REDUCE) {
+ return Math.min(1, numDataTables); // Number of data tables can be zero.
+ }
+
+ return Math.min(maxReduceThreadsPerQuery, numDataTables);
+ }
+
+ private IndexedTable getIndexedTable(DataSchema dataSchema, Collection<DataTable> dataTablesToReduce,
+ DataTableReducerContext reducerContext)
+ throws TimeoutException {
+ long start = System.currentTimeMillis();
+ int numDataTables = dataTablesToReduce.size();
+
+ // Get the number of threads to use for reducing.
+ // In case of single reduce thread, fall back to SimpleIndexedTable to avoid redundant locking/unlocking calls.
+ int numReduceThreadsToUse = getNumReduceThreadsToUse(numDataTables, reducerContext.getMaxReduceThreadsPerQuery());
+ int limit = _queryContext.getLimit();
+ // TODO: Make minTrimSize configurable
+ int trimSize = GroupByUtils.getTableCapacity(limit);
+ // NOTE: For query with HAVING clause, use trimSize as resultSize to ensure the result accuracy.
+ // TODO: Resolve the HAVING clause within the IndexedTable before returning the result
+ int resultSize = _queryContext.getHavingFilter() != null ? trimSize : limit;
+ int trimThreshold = reducerContext.getGroupByTrimThreshold();
+ IndexedTable indexedTable;
+ if (numReduceThreadsToUse <= 1) {
+ indexedTable = new SimpleIndexedTable(dataSchema, _queryContext, resultSize, trimSize, trimThreshold);
+ } else {
+ if (trimThreshold >= GroupByOrderByCombineOperator.MAX_TRIM_THRESHOLD) {
+ // special case of trim threshold where it is set to max value.
+ // there won't be any trimming during upsert in this case.
+ // thus we can avoid the overhead of read-lock and write-lock
+ // in the upsert method.
+ indexedTable = new UnboundedConcurrentIndexedTable(dataSchema, _queryContext, resultSize);
+ } else {
+ indexedTable = new ConcurrentIndexedTable(dataSchema, _queryContext, resultSize, trimSize, trimThreshold);
+ }
+ }
+
+ Future[] futures = new Future[numDataTables];
+ CountDownLatch countDownLatch = new CountDownLatch(numDataTables);
+
+ // Create groups of data tables that each thread can process concurrently.
+ // Given that numReduceThreads is <= numDataTables, each group will have at least one data table.
+ ArrayList<DataTable> dataTables = new ArrayList<>(dataTablesToReduce);
+ List<List<DataTable>> reduceGroups = new ArrayList<>(numReduceThreadsToUse);
+
+ for (int i = 0; i < numReduceThreadsToUse; i++) {
+ reduceGroups.add(new ArrayList<>());
+ }
+ for (int i = 0; i < numDataTables; i++) {
+ reduceGroups.get(i % numReduceThreadsToUse).add(dataTables.get(i));
+ }
+
+ int cnt = 0;
+ ColumnDataType[] storedColumnDataTypes = dataSchema.getStoredColumnDataTypes();
+ int numColumns = storedColumnDataTypes.length;
+ for (List<DataTable> reduceGroup : reduceGroups) {
+ futures[cnt++] = reducerContext.getExecutorService().submit(new TraceRunnable() {
+ @Override
+ public void runJob() {
+ for (DataTable dataTable : reduceGroup) {
+ int numRows = dataTable.getNumberOfRows();
+
+ try {
+ for (int rowId = 0; rowId < numRows; rowId++) {
+ Object[] values = new Object[numColumns];
+ for (int colId = 0; colId < numColumns; colId++) {
+ switch (storedColumnDataTypes[colId]) {
+ case INT:
+ values[colId] = dataTable.getInt(rowId, colId);
+ break;
+ case LONG:
+ values[colId] = dataTable.getLong(rowId, colId);
+ break;
+ case FLOAT:
+ values[colId] = dataTable.getFloat(rowId, colId);
+ break;
+ case DOUBLE:
+ values[colId] = dataTable.getDouble(rowId, colId);
+ break;
+ case STRING:
+ values[colId] = dataTable.getString(rowId, colId);
+ break;
+ case BYTES:
+ values[colId] = dataTable.getBytes(rowId, colId);
+ break;
+ case OBJECT:
+ values[colId] = dataTable.getObject(rowId, colId);
+ break;
+ // Add other aggregation intermediate result / group-by column type supports here
+ default:
+ throw new IllegalStateException();
+ }
+ }
+ indexedTable.upsert(new Record(values));
+ }
+ } finally {
+ countDownLatch.countDown();
+ }
+ }
+ }
+ });
+ }
+
+ try {
+ long timeOutMs = reducerContext.getReduceTimeOutMs() - (System.currentTimeMillis() - start);
+ // await() returns false when the timeout elapses; cancel the remaining jobs and bail out.
+ if (!countDownLatch.await(timeOutMs, TimeUnit.MILLISECONDS)) {
+ for (Future future : futures) {
+ if (!future.isDone()) {
+ future.cancel(true);
+ }
+ }
+ throw new TimeoutException("Timed out in broker reduce phase.");
+ }
+ } catch (InterruptedException e) {
+ for (Future future : futures) {
+ if (!future.isDone()) {
+ future.cancel(true);
+ }
+ }
+ throw new TimeoutException("Timed out in broker reduce phase.");
+ }
+
+ indexedTable.finish(true);
+ return indexedTable;
+ }
+
+ /**
+ * The reduce does three things:
+ * 1. Sort the result sets from all pinot servers based on timestamp.
+ * 2. Gap-fill the missing data per entity for each time bucket.
+ * 3. Aggregate the dataset per time bucket.
+ */
+ @Override
+ public void reduceAndSetResults(String tableName, DataSchema dataSchema,
+ Map<ServerRoutingInstance, DataTable> dataTableMap, BrokerResponseNative brokerResponseNative,
+ DataTableReducerContext reducerContext, BrokerMetrics brokerMetrics) {
+ DataSchema resultTableSchema = getResultTableDataSchema(dataSchema);
+ if (dataTableMap.isEmpty()) {
+ brokerResponseNative.setResultTable(new ResultTable(resultTableSchema, Collections.emptyList()));
+ return;
+ }
+
+ String[] columns = dataSchema.getColumnNames();
+
+ Map<String, Integer> indexes = new HashMap<>();
+ for (int i = 0; i < columns.length; i++) {
+ indexes.put(columns[i], i);
+ }
+
+ _isGroupBySelections = new boolean[dataSchema.getColumnDataTypes().length];
+
+ // The first argument of timeseriesOn is the time column; the remaining arguments define the entity.
+ for (ExpressionContext entityColumn : _timeSeries) {
+ int index = indexes.get(entityColumn.getIdentifier());
+ _isGroupBySelections[index] = true;
+ _groupByKeyIndexes.add(index);
+ }
+
+ List<Object[]> sortedRawRows;
+ if (_gapfillType == GapfillUtils.GapfillType.GAP_FILL_AGGREGATE
+ || _gapfillType == GapfillUtils.GapfillType.GAP_FILL
+ || _gapfillType == GapfillUtils.GapfillType.GAP_FILL_SELECT) {
+ sortedRawRows = mergeAndSort(dataTableMap.values(), dataSchema);
+ } else {
+ try {
+ IndexedTable indexedTable = getIndexedTable(dataSchema, dataTableMap.values(), reducerContext);
+ sortedRawRows = mergeAndSort(indexedTable, dataSchema);
+ } catch (TimeoutException e) {
+ brokerResponseNative.getProcessingExceptions()
+ .add(new QueryProcessingException(QueryException.BROKER_TIMEOUT_ERROR_CODE, e.getMessage()));
+ return;
+ }
+ }
+ List<Object[]> resultRows;
+ replaceColumnNameWithAlias(dataSchema);
+ if (_queryContext.getAggregationFunctions() != null) {
+ validateGroupByForOuterQuery();
+ }
+
+ if (_gapfillType == GapfillUtils.GapfillType.GAP_FILL_AGGREGATE
+ || _gapfillType == GapfillUtils.GapfillType.GAP_FILL
+ || _gapfillType == GapfillUtils.GapfillType.GAP_FILL_SELECT) {
+ List<Object[]> gapfilledRows = gapFillAndAggregate(sortedRawRows, resultTableSchema, dataSchema);
+ if (_gapfillType == GapfillUtils.GapfillType.GAP_FILL_SELECT) {
+ List<String> selectionColumns = SelectionOperatorUtils.getSelectionColumns(_queryContext, dataSchema);
+ resultRows = new ArrayList<>(gapfilledRows.size());
+
+ Map<String, Integer> columnNameToIndexMap = new HashMap<>(dataSchema.getColumnNames().length);
+ String[] columnNames = dataSchema.getColumnNames();
+ for (int i = 0; i < columnNames.length; i++) {
+ columnNameToIndexMap.put(columnNames[i], i);
+ }
+
+ ColumnDataType[] columnDataTypes = dataSchema.getColumnDataTypes();
+ ColumnDataType[] resultColumnDataTypes = new ColumnDataType[selectionColumns.size()];
+ for (int i = 0; i < resultColumnDataTypes.length; i++) {
+ String name = selectionColumns.get(i);
+ int index = columnNameToIndexMap.get(name);
+ resultColumnDataTypes[i] = columnDataTypes[index];
+ }
+
+ for (Object[] row : gapfilledRows) {
+ Object[] resultRow = new Object[selectionColumns.size()];
+ for (int i = 0; i < selectionColumns.size(); i++) {
+ int index = columnNameToIndexMap.get(selectionColumns.get(i));
+ resultRow[i] = resultColumnDataTypes[i].convertAndFormat(row[index]);
+ }
+ resultRows.add(resultRow);
+ }
+ } else {
+ resultRows = gapfilledRows;
+ }
+ } else {
+ this.setupColumnTypeForAggregatedColum(dataSchema.getColumnDataTypes());
+ ColumnDataType[] columnDataTypes = dataSchema.getColumnDataTypes();
+ for (Object[] row : sortedRawRows) {
+ extractFinalAggregationResults(row);
+ for (int i = 0; i < columnDataTypes.length; i++) {
+ row[i] = columnDataTypes[i].convert(row[i]);
+ }
+ }
+ resultRows = gapFillAndAggregate(sortedRawRows, resultTableSchema, dataSchema);
+ }
+ brokerResponseNative.setResultTable(new ResultTable(resultTableSchema, resultRows));
+ }
+
+ private void extractFinalAggregationResults(Object[] row) {
+ AggregationFunction[] aggregationFunctions;
+ if (_gapfillType == GapfillUtils.GapfillType.AGGREGATE_GAP_FILL) {
+ aggregationFunctions = _queryContext.getSubQueryContext().getAggregationFunctions();
+ } else {
+ aggregationFunctions = _queryContext.getSubQueryContext().getSubQueryContext().getAggregationFunctions();
+ }
+ int numAggregationFunctionsForInnerQuery = aggregationFunctions == null ? 0 : aggregationFunctions.length;
+ for (int i = 0; i < numAggregationFunctionsForInnerQuery; i++) {
+ int valueIndex = _timeSeries.size() + 1 + i;
+ row[valueIndex] = aggregationFunctions[i].extractFinalResult(row[valueIndex]);
+ }
+ }
+
+ private void setupColumnTypeForAggregatedColum(ColumnDataType[] columnDataTypes) {
+ AggregationFunction[] aggregationFunctions;
+ if (_gapfillType == GapfillUtils.GapfillType.AGGREGATE_GAP_FILL) {
+ aggregationFunctions = _queryContext.getSubQueryContext().getAggregationFunctions();
+ } else {
+ aggregationFunctions = _queryContext.getSubQueryContext().getSubQueryContext().getAggregationFunctions();
+ }
+ int numAggregationFunctionsForInnerQuery = aggregationFunctions == null ? 0 : aggregationFunctions.length;
+ for (int i = 0; i < numAggregationFunctionsForInnerQuery; i++) {
+ columnDataTypes[_timeSeries.size() + 1 + i] = aggregationFunctions[i].getFinalResultColumnType();
+ }
+ }
+
+ /**
+ * Constructs the DataSchema for the ResultTable.
+ */
+ private DataSchema getResultTableDataSchema(DataSchema dataSchema) {
+ if (_gapfillType == GapfillUtils.GapfillType.GAP_FILL) {
+ return dataSchema;
+ }
+
+ int numOfColumns = _queryContext.getSelectExpressions().size();
+ String[] columnNames = new String[numOfColumns];
+ ColumnDataType[] columnDataTypes = new ColumnDataType[numOfColumns];
+ for (int i = 0; i < numOfColumns; i++) {
+ ExpressionContext expressionContext = _queryContext.getSelectExpressions().get(i);
+ if (GapfillUtils.isGapfill(expressionContext)) {
+ expressionContext = expressionContext.getFunction().getArguments().get(0);
+ }
+ if (expressionContext.getType() != ExpressionContext.Type.FUNCTION) {
+ columnNames[i] = expressionContext.getIdentifier();
+ columnDataTypes[i] = ColumnDataType.STRING;
+ } else {
+ FunctionContext functionContext = expressionContext.getFunction();
+ AggregationFunction aggregationFunction =
+ AggregationFunctionFactory.getAggregationFunction(functionContext, _queryContext);
+ columnDataTypes[i] = aggregationFunction.getFinalResultColumnType();
+ columnNames[i] = functionContext.toString();
+ }
+ }
+ return new DataSchema(columnNames, columnDataTypes);
+ }
+
+ private Key constructGroupKeys(Object[] row) {
+ Object [] groupKeys = new Object[_groupByKeyIndexes.size()];
+ for (int i = 0; i < _groupByKeyIndexes.size(); i++) {
+ groupKeys[i] = row[_groupByKeyIndexes.get(i)];
+ }
+ return new Key(groupKeys);
+ }
+
+ private long truncate(long epoch) {
+ int sz = _dateTimeGranularity.getSize();
+ return epoch / sz * sz;
+ }
+
+ private List<Object[]> gapFillAndAggregate(List<Object[]> sortedRows,
+ DataSchema dataSchemaForAggregatedResult,
+ DataSchema dataSchema) {
+ List<Object[]> result = new ArrayList<>();
+
+ PreAggregateGapfillFilterHandler postGapfillFilterHandler = null;
+ if (_queryContext.getSubQueryContext() != null && _queryContext.getFilter() != null) {
+ postGapfillFilterHandler = new PreAggregateGapfillFilterHandler(_queryContext.getFilter(), dataSchema);
+ }
+ PreAggregateGapfillFilterHandler postAggregateHavingFilterHandler = null;
+ if (_queryContext.getHavingFilter() != null) {
+ postAggregateHavingFilterHandler = new PreAggregateGapfillFilterHandler(
+ _queryContext.getHavingFilter(), dataSchemaForAggregatedResult);
+ }
+ Object[] previous = null;
+ Iterator<Object[]> sortedIterator = sortedRows.iterator();
+ for (long time = _startMs; time < _endMs; time += _timeBucketSize) {
+ List<Object[]> bucketedResult = new ArrayList<>();
+ previous = gapfill(time, bucketedResult, sortedIterator, previous, dataSchema, postGapfillFilterHandler);
+ if (_queryContext.getAggregationFunctions() == null) {
+ result.addAll(bucketedResult);
+ } else if (bucketedResult.size() > 0) {
+ List<Object[]> aggregatedRows = aggregateGapfilledData(bucketedResult, dataSchema);
+ for (Object[] aggregatedRow : aggregatedRows) {
+ if (postAggregateHavingFilterHandler == null || postAggregateHavingFilterHandler.isMatch(aggregatedRow)) {
+ result.add(aggregatedRow);
+ }
+ if (result.size() >= _limitForAggregatedResult) {
+ return result;
+ }
+ }
+ }
+ }
+ return result;
+ }
+
+ private Object[] gapfill(long bucketTime,
+ List<Object[]> bucketedResult,
+ Iterator<Object[]> sortedIterator,
+ Object[] previous,
+ DataSchema dataSchema,
+ PreAggregateGapfillFilterHandler postGapfillFilterHandler) {
+ ColumnDataType[] resultColumnDataTypes = dataSchema.getColumnDataTypes();
+ int numResultColumns = resultColumnDataTypes.length;
+ Set<Key> keys = new HashSet<>(_groupByKeys);
+ if (previous == null && sortedIterator.hasNext()) {
+ previous = sortedIterator.next();
+ }
+
+ while (previous != null) {
+ Object[] resultRow = previous;
+ for (int i = 0; i < resultColumnDataTypes.length; i++) {
+ resultRow[i] = resultColumnDataTypes[i].format(resultRow[i]);
+ }
+
+ long timeCol = _dateTimeFormatter.fromFormatToMillis(String.valueOf(resultRow[0]));
+ if (timeCol > bucketTime) {
+ break;
+ }
+ if (timeCol == bucketTime) {
+ if (postGapfillFilterHandler == null || postGapfillFilterHandler.isMatch(previous)) {
+ if (bucketedResult.size() >= _limitForGapfilledResult) {
+ _limitForGapfilledResult = 0;
+ break;
+ } else {
+ bucketedResult.add(resultRow);
+ }
+ }
+ Key key = constructGroupKeys(resultRow);
+ keys.remove(key);
+ _previousByGroupKey.put(key, resultRow);
+ }
+ if (sortedIterator.hasNext()) {
+ previous = sortedIterator.next();
+ } else {
+ previous = null;
+ }
+ }
+
+ for (Key key : keys) {
+ Object[] gapfillRow = new Object[numResultColumns];
+ int keyIndex = 0;
+ if (resultColumnDataTypes[0] == ColumnDataType.LONG) {
+ gapfillRow[0] = Long.valueOf(_dateTimeFormatter.fromMillisToFormat(bucketTime));
+ } else {
+ gapfillRow[0] = _dateTimeFormatter.fromMillisToFormat(bucketTime);
+ }
+ for (int i = 1; i < _isGroupBySelections.length; i++) {
+ if (_isGroupBySelections[i]) {
+ gapfillRow[i] = key.getValues()[keyIndex++];
+ } else {
+ gapfillRow[i] = getFillValue(i, dataSchema.getColumnName(i), key, resultColumnDataTypes[i]);
+ }
+ }
+
+ if (postGapfillFilterHandler == null || postGapfillFilterHandler.isMatch(gapfillRow)) {
+ if (bucketedResult.size() >= _limitForGapfilledResult) {
+ break;
+ } else {
+ bucketedResult.add(gapfillRow);
+ }
+ }
+ }
+ if (_limitForGapfilledResult > _groupByKeys.size()) {
+ _limitForGapfilledResult -= _groupByKeys.size();
+ } else {
+ _limitForGapfilledResult = 0;
+ }
+ return previous;
+ }
+
+ /**
+ * Makes sure that the outer query has a GROUP BY clause and that the GROUP BY clause contains the time bucket.
+ */
+ private void validateGroupByForOuterQuery() {
+ List<ExpressionContext> groupbyExpressions = _queryContext.getGroupByExpressions();
+ Preconditions.checkArgument(groupbyExpressions != null, "No GroupBy Clause.");
+ List<ExpressionContext> innerSelections = _queryContext.getSubQueryContext().getSelectExpressions();
+ String timeBucketCol = null;
+ List<String> strAlias = _queryContext.getSubQueryContext().getAliasList();
+ for (int i = 0; i < innerSelections.size(); i++) {
+ ExpressionContext innerSelection = innerSelections.get(i);
+ if (GapfillUtils.isGapfill(innerSelection)) {
+ if (strAlias.get(i) != null) {
+ timeBucketCol = strAlias.get(i);
+ } else {
+ timeBucketCol = innerSelection.getFunction().getArguments().get(0).toString();
+ }
+ break;
+ }
+ }
+
+ Preconditions.checkArgument(timeBucketCol != null, "No Group By timebucket.");
+
+ boolean findTimeBucket = false;
+ for (ExpressionContext groupbyExp : groupbyExpressions) {
+ if (timeBucketCol.equals(groupbyExp.toString())) {
+ findTimeBucket = true;
+ break;
+ }
+ }
+
+ Preconditions.checkArgument(findTimeBucket, "No Group By timebucket.");
+ }
+
+ private List<Object[]> aggregateGapfilledData(List<Object[]> bucketedRows, DataSchema dataSchema) {
Review comment:
This looks like a hacky way to calculate the group-by results by constructing a column-major block from the result rows. Can we use `IndexedTable` here, which directly takes result rows?
Another problem with this approach is that if the subquery is an aggregation query, the result rows will hold intermediate results, which cannot be fed into `AggregationFunction.aggregateGroupBySV()`. `AggregationFunction.merge()` is the method to use for intermediate results.
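The merge-vs-aggregate distinction above can be illustrated with a toy AVG function. The class and method names below are hypothetical and only loosely mimic the shape of Pinot's `AggregationFunction` contract:

```java
import java.util.Arrays;

// Toy AVG aggregation (hypothetical names, not Pinot's actual API) showing why
// intermediate results must go through merge(), not the raw-value path.
public class MergeVsAggregate {
  // Aggregating raw values produces an intermediate result: {sum, count}.
  static double[] aggregateRaw(double[] values) {
    return new double[]{Arrays.stream(values).sum(), values.length};
  }

  // merge() combines two intermediate results without losing the counts.
  static double[] merge(double[] a, double[] b) {
    return new double[]{a[0] + b[0], a[1] + b[1]};
  }

  // The final result is extracted from the intermediate result at the very end.
  static double extractFinalResult(double[] intermediate) {
    return intermediate[0] / intermediate[1];
  }

  public static void main(String[] args) {
    double[] server1 = aggregateRaw(new double[]{1, 2, 3}); // {6, 3}
    double[] server2 = aggregateRaw(new double[]{10});      // {10, 1}

    // Correct: merge the intermediates, then extract -> (6 + 10) / (3 + 1) = 4.0
    double correct = extractFinalResult(merge(server1, server2));

    // Wrong: re-feeding each server's *final* result through the raw-value
    // path drops the counts -> (2.0 + 10.0) / 2 = 6.0
    double wrong = extractFinalResult(aggregateRaw(
        new double[]{extractFinalResult(server1), extractFinalResult(server2)}));

    System.out.println(correct + " vs " + wrong); // prints "4.0 vs 6.0"
  }
}
```

For SUM the two paths happen to coincide, but for AVG, DISTINCTCOUNT and similar functions only `merge()` preserves the information needed for a correct final result.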
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: commits-unsubscribe@pinot.apache.org
For queries about this service, please contact Infrastructure at:
users@infra.apache.org
---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@pinot.apache.org
For additional commands, e-mail: commits-help@pinot.apache.org
[GitHub] [pinot] codecov-commenter edited a comment on pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#issuecomment-1013577523
# [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) Report
> Merging [#8029](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (2cbe396) into [master](https://codecov.io/gh/apache/pinot/commit/aa441895675fcee84617cd84137a642628a966b3?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (aa44189) will **decrease** coverage by `6.80%`.
> The diff coverage is `78.65%`.
```diff
@@ Coverage Diff @@
## master #8029 +/- ##
============================================
- Coverage 70.91% 64.10% -6.81%
Complexity 4253 4253
============================================
Files 1637 1601 -36
Lines 85830 84431 -1399
Branches 12911 12833 -78
============================================
- Hits 60864 54123 -6741
- Misses 20781 26393 +5612
+ Partials 4185 3915 -270
```
| Flag | Coverage Δ | |
|---|---|---|
| integration1 | `?` | |
| integration2 | `?` | |
| unittests1 | `67.00% <79.38%> (+0.07%)` | :arrow_up: |
| unittests2 | `14.08% <0.33%> (-0.12%)` | :arrow_down: |
Flags with carried forward coverage won't be shown. [Click here](https://docs.codecov.io/docs/carryforward-flags?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#carryforward-flags-in-the-pull-request-comment) to find out more.
| [Impacted Files](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | Coverage Δ | |
|---|---|---|
| [...roker/requesthandler/GrpcBrokerRequestHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtYnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9icm9rZXIvcmVxdWVzdGhhbmRsZXIvR3JwY0Jyb2tlclJlcXVlc3RIYW5kbGVyLmphdmE=) | `0.00% <ø> (-78.58%)` | :arrow_down: |
| [...thandler/SingleConnectionBrokerRequestHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtYnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9icm9rZXIvcmVxdWVzdGhhbmRsZXIvU2luZ2xlQ29ubmVjdGlvbkJyb2tlclJlcXVlc3RIYW5kbGVyLmphdmE=) | `13.20% <0.00%> (-73.83%)` | :arrow_down: |
| [...t/controller/api/resources/PinotQueryResource.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29udHJvbGxlci9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29udHJvbGxlci9hcGkvcmVzb3VyY2VzL1Bpbm90UXVlcnlSZXNvdXJjZS5qYXZh) | `0.00% <0.00%> (-50.35%)` | :arrow_down: |
| [...t/core/query/selection/SelectionOperatorUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9zZWxlY3Rpb24vU2VsZWN0aW9uT3BlcmF0b3JVdGlscy5qYXZh) | `86.08% <0.00%> (-6.10%)` | :arrow_down: |
| [...query/reduce/ColumnDataToBlockValSetConverter.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvQ29sdW1uRGF0YVRvQmxvY2tWYWxTZXRDb252ZXJ0ZXIuamF2YQ==) | `17.30% <17.30%> (ø)` | |
| [...roker/requesthandler/BaseBrokerRequestHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtYnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9icm9rZXIvcmVxdWVzdGhhbmRsZXIvQmFzZUJyb2tlclJlcXVlc3RIYW5kbGVyLmphdmE=) | `23.75% <33.33%> (-48.20%)` | :arrow_down: |
| [...inot/core/query/reduce/PostAggregationHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUG9zdEFnZ3JlZ2F0aW9uSGFuZGxlci5qYXZh) | `91.89% <66.66%> (+0.12%)` | :arrow_up: |
| [...ot/core/query/reduce/filter/RowMatcherFactory.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvZmlsdGVyL1Jvd01hdGNoZXJGYWN0b3J5LmphdmE=) | `66.66% <66.66%> (ø)` | |
| [.../java/org/apache/pinot/core/util/GapfillUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS91dGlsL0dhcGZpbGxVdGlscy5qYXZh) | `70.00% <72.36%> (+6.36%)` | :arrow_up: |
| [...xt/utils/BrokerRequestToQueryContextConverter.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZXF1ZXN0L2NvbnRleHQvdXRpbHMvQnJva2VyUmVxdWVzdFRvUXVlcnlDb250ZXh0Q29udmVydGVyLmphdmE=) | `94.02% <76.92%> (-4.36%)` | :arrow_down: |
| ... and [399 more](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Last update [aa44189...2cbe396](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
[GitHub] [pinot] codecov-commenter edited a comment on pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#issuecomment-1013577523
# [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) Report
> Merging [#8029](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (2cbe396) into [master](https://codecov.io/gh/apache/pinot/commit/aa441895675fcee84617cd84137a642628a966b3?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (aa44189) will **decrease** coverage by `56.82%`.
> The diff coverage is `0.33%`.
```diff
@@              Coverage Diff              @@
##             master    #8029       +/-   ##
=============================================
- Coverage     70.91%   14.08%   -56.83%
+ Complexity     4253       81     -4172
=============================================
  Files          1637     1601       -36
  Lines        85830    84431     -1399
  Branches     12911    12833       -78
=============================================
- Hits         60864    11892    -48972
- Misses       20781    71644    +50863
+ Partials      4185      895     -3290
```
| Flag | Coverage Δ | |
|---|---|---|
| integration1 | `?` | |
| integration2 | `?` | |
| unittests1 | `?` | |
| unittests2 | `14.08% <0.33%> (-0.12%)` | :arrow_down: |
Flags with carried forward coverage won't be shown. [Click here](https://docs.codecov.io/docs/carryforward-flags?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#carryforward-flags-in-the-pull-request-comment) to find out more.
| [Impacted Files](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | Coverage Δ | |
|---|---|---|
| [...roker/requesthandler/GrpcBrokerRequestHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtYnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9icm9rZXIvcmVxdWVzdGhhbmRsZXIvR3JwY0Jyb2tlclJlcXVlc3RIYW5kbGVyLmphdmE=) | `0.00% <ø> (-78.58%)` | :arrow_down: |
| [...thandler/SingleConnectionBrokerRequestHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtYnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9icm9rZXIvcmVxdWVzdGhhbmRsZXIvU2luZ2xlQ29ubmVjdGlvbkJyb2tlclJlcXVlc3RIYW5kbGVyLmphdmE=) | `13.20% <0.00%> (-73.83%)` | :arrow_down: |
| [...org/apache/pinot/sql/parsers/CalciteSqlParser.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9zcWwvcGFyc2Vycy9DYWxjaXRlU3FsUGFyc2VyLmphdmE=) | `0.00% <0.00%> (-87.71%)` | :arrow_down: |
| [...t/controller/api/resources/PinotQueryResource.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29udHJvbGxlci9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29udHJvbGxlci9hcGkvcmVzb3VyY2VzL1Bpbm90UXVlcnlSZXNvdXJjZS5qYXZh) | `0.00% <0.00%> (-50.35%)` | :arrow_down: |
| [...e/pinot/core/query/reduce/BrokerReduceService.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvQnJva2VyUmVkdWNlU2VydmljZS5qYXZh) | `0.00% <0.00%> (-97.30%)` | :arrow_down: |
| [...query/reduce/ColumnDataToBlockValSetConverter.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvQ29sdW1uRGF0YVRvQmxvY2tWYWxTZXRDb252ZXJ0ZXIuamF2YQ==) | `0.00% <0.00%> (ø)` | |
| [...ache/pinot/core/query/reduce/GapFillProcessor.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvR2FwRmlsbFByb2Nlc3Nvci5qYXZh) | `0.00% <0.00%> (ø)` | |
| [.../pinot/core/query/reduce/GapfillFilterHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvR2FwZmlsbEZpbHRlckhhbmRsZXIuamF2YQ==) | `0.00% <0.00%> (ø)` | |
| [...e/pinot/core/query/reduce/HavingFilterHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvSGF2aW5nRmlsdGVySGFuZGxlci5qYXZh) | `0.00% <0.00%> (-91.31%)` | :arrow_down: |
| [...inot/core/query/reduce/PostAggregationHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUG9zdEFnZ3JlZ2F0aW9uSGFuZGxlci5qYXZh) | `0.00% <0.00%> (-91.77%)` | :arrow_down: |
| ... and [1327 more](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Last update [aa44189...2cbe396](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
[GitHub] [pinot] codecov-commenter edited a comment on pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#issuecomment-1013577523
# [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) Report
> Merging [#8029](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (1bdd2f3) into [master](https://codecov.io/gh/apache/pinot/commit/360a2051c1eb20af552b8222eda97eb4a3e95387?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (360a205) will **decrease** coverage by `6.68%`.
> The diff coverage is `76.55%`.
> :exclamation: Current head 1bdd2f3 differs from pull request most recent head 8741eec. Consider uploading reports for the commit 8741eec to get more accurate results
```diff
@@             Coverage Diff              @@
##             master    #8029      +/-   ##
============================================
- Coverage     70.81%   64.12%    -6.69%
- Complexity     4264     4266       +2
============================================
  Files          1639     1603      -36
  Lines        85919    84519    -1400
  Branches     12921    12856      -65
============================================
- Hits         60840    54201    -6639
- Misses       20873    26399    +5526
+ Partials      4206     3919     -287
```
| Flag | Coverage Δ | |
|---|---|---|
| integration1 | `?` | |
| integration2 | `?` | |
| unittests1 | `67.06% <78.36%> (+0.10%)` | :arrow_up: |
| unittests2 | `14.11% <0.34%> (-0.09%)` | :arrow_down: |
Flags with carried forward coverage won't be shown. [Click here](https://docs.codecov.io/docs/carryforward-flags?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#carryforward-flags-in-the-pull-request-comment) to find out more.
| [Impacted Files](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | Coverage Δ | |
|---|---|---|
| [...roker/requesthandler/GrpcBrokerRequestHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtYnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9icm9rZXIvcmVxdWVzdGhhbmRsZXIvR3JwY0Jyb2tlclJlcXVlc3RIYW5kbGVyLmphdmE=) | `0.00% <ø> (-75.68%)` | :arrow_down: |
| [...thandler/SingleConnectionBrokerRequestHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtYnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9icm9rZXIvcmVxdWVzdGhhbmRsZXIvU2luZ2xlQ29ubmVjdGlvbkJyb2tlclJlcXVlc3RIYW5kbGVyLmphdmE=) | `13.20% <0.00%> (-73.83%)` | :arrow_down: |
| [...pache/pinot/common/utils/request/RequestUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9jb21tb24vdXRpbHMvcmVxdWVzdC9SZXF1ZXN0VXRpbHMuamF2YQ==) | `85.71% <0.00%> (-1.79%)` | :arrow_down: |
| [...t/controller/api/resources/PinotQueryResource.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29udHJvbGxlci9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29udHJvbGxlci9hcGkvcmVzb3VyY2VzL1Bpbm90UXVlcnlSZXNvdXJjZS5qYXZh) | `0.00% <0.00%> (-50.35%)` | :arrow_down: |
| [...t/core/query/selection/SelectionOperatorUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9zZWxlY3Rpb24vU2VsZWN0aW9uT3BlcmF0b3JVdGlscy5qYXZh) | `86.64% <ø> (-5.54%)` | :arrow_down: |
| [...roker/requesthandler/BaseBrokerRequestHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtYnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9icm9rZXIvcmVxdWVzdGhhbmRsZXIvQmFzZUJyb2tlclJlcXVlc3RIYW5kbGVyLmphdmE=) | `23.68% <14.28%> (-48.09%)` | :arrow_down: |
| [...e/pinot/core/query/reduce/RowBasedBlockValSet.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUm93QmFzZWRCbG9ja1ZhbFNldC5qYXZh) | `16.12% <16.12%> (ø)` | |
| [...inot/core/query/reduce/PostAggregationHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUG9zdEFnZ3JlZ2F0aW9uSGFuZGxlci5qYXZh) | `91.89% <66.66%> (+0.12%)` | :arrow_up: |
| [...ot/core/query/reduce/filter/RowMatcherFactory.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvZmlsdGVyL1Jvd01hdGNoZXJGYWN0b3J5LmphdmE=) | `66.66% <66.66%> (ø)` | |
| [.../java/org/apache/pinot/core/util/GapfillUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS91dGlsL0dhcGZpbGxVdGlscy5qYXZh) | `71.27% <74.35%> (+7.63%)` | :arrow_up: |
| ... and [398 more](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Last update [360a205...8741eec](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r829462309
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/request/context/QueryContext.java
##########
@@ -85,6 +86,7 @@
// Keep the BrokerRequest to make incremental changes
// TODO: Remove it once the whole query engine is using the QueryContext
private final BrokerRequest _brokerRequest;
+ private final QueryContext _subQueryContext;
Review comment:
Fixed
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r829536104
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/util/GapfillUtils.java
##########
@@ -71,12 +86,15 @@ public static boolean isFill(ExpressionContext expressionContext) {
return false;
}
- return FILL.equals(canonicalizeFunctionName(expressionContext.getFunction().getFunctionName()));
+ return FILL.equalsIgnoreCase(canonicalizeFunctionName(expressionContext.getFunction().getFunctionName()));
Review comment:
Fixed
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r829578276
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/util/GapfillUtils.java
##########
@@ -119,4 +137,265 @@ static public Serializable getDefaultValue(DataSchema.ColumnDataType dataType) {
private static String canonicalizeFunctionName(String functionName) {
return StringUtils.remove(functionName, '_').toLowerCase();
}
+
+ public static boolean isGapfill(ExpressionContext expressionContext) {
+ if (expressionContext.getType() != ExpressionContext.Type.FUNCTION) {
+ return false;
+ }
+
+ return GAP_FILL.equals(canonicalizeFunctionName(expressionContext.getFunction().getFunctionName()));
+ }
+
+ private static boolean isGapfill(QueryContext queryContext) {
+ for (ExpressionContext expressionContext : queryContext.getSelectExpressions()) {
+ if (isGapfill(expressionContext)) {
+ return true;
+ }
+ }
+ return false;
+ }
+
+ /**
+ * Determines and sets the gapfill type for the given queryContext, and validates the gapfill request.
+ * @param queryContext the query context to inspect and annotate
+ */
+ public static void setGapfillType(QueryContext queryContext) {
+ GapfillType gapfillType = null;
+ if (queryContext.getSubQueryContext() == null) {
+ if (isGapfill(queryContext)) {
+ Preconditions.checkArgument(queryContext.getAggregationFunctions() == null,
+ "Aggregation and Gapfill can not be in the same sql statement.");
+ gapfillType = GapfillType.GAP_FILL;
+ }
+ } else if (isGapfill(queryContext)) {
+ Preconditions.checkArgument(queryContext.getSubQueryContext().getAggregationFunctions() != null,
+ "Select and Gapfill should be in the same sql statement.");
+ Preconditions.checkArgument(queryContext.getSubQueryContext().getSubQueryContext() == null,
+ "There is no three levels nesting sql when the outer query is gapfill.");
+ gapfillType = GapfillType.AGGREGATE_GAP_FILL;
+ } else if (isGapfill(queryContext.getSubQueryContext())) {
+ if (queryContext.getAggregationFunctions() == null) {
+ gapfillType = GapfillType.GAP_FILL_SELECT;
+ } else if (queryContext.getSubQueryContext().getSubQueryContext() == null) {
+ gapfillType = GapfillType.GAP_FILL_AGGREGATE;
+ } else {
+ Preconditions
+ .checkArgument(queryContext.getSubQueryContext().getSubQueryContext().getAggregationFunctions() != null,
+ "Select cannot happen before gapfill.");
+ gapfillType = GapfillType.AGGREGATE_GAP_FILL_AGGREGATE;
+ }
+ }
+
+ queryContext.setGapfillType(gapfillType);
+ if (gapfillType == null) {
+ return;
+ }
+
+ ExpressionContext gapFillSelection = GapfillUtils.getGapfillExpressionContext(queryContext);
+
+ Preconditions.checkArgument(gapFillSelection != null && gapFillSelection.getFunction() != null,
+ "Gapfill Expression should be function.");
+ List<ExpressionContext> args = gapFillSelection.getFunction().getArguments();
+ Preconditions.checkArgument(args.size() > 5, "PreAggregateGapFill does not have correct number of arguments.");
+ Preconditions.checkArgument(args.get(1).getLiteral() != null,
+ "The second argument of PostAggregateGapFill should be TimeFormatter.");
+ Preconditions.checkArgument(args.get(2).getLiteral() != null,
+ "The third argument of PostAggregateGapFill should be start time.");
+ Preconditions.checkArgument(args.get(3).getLiteral() != null,
+ "The fourth argument of PostAggregateGapFill should be end time.");
+ Preconditions.checkArgument(args.get(4).getLiteral() != null,
+ "The fifth argument of PostAggregateGapFill should be time bucket size.");
+
+ ExpressionContext timeseriesOn = GapfillUtils.getTimeSeriesOnExpressionContext(gapFillSelection);
+ Preconditions.checkArgument(timeseriesOn != null, "The TimeSeriesOn expressions should be specified.");
+
+ if (queryContext.getAggregationFunctions() == null) {
+ return;
+ }
+
+ List<ExpressionContext> groupbyExpressions = queryContext.getGroupByExpressions();
+ Preconditions.checkArgument(groupbyExpressions != null, "No GroupBy Clause.");
+ List<ExpressionContext> innerSelections = queryContext.getSubQueryContext().getSelectExpressions();
+ String timeBucketCol = null;
+ List<String> strAlias = queryContext.getSubQueryContext().getAliasList();
+ for (int i = 0; i < innerSelections.size(); i++) {
+ ExpressionContext innerSelection = innerSelections.get(i);
+ if (GapfillUtils.isGapfill(innerSelection)) {
+ if (strAlias.get(i) != null) {
+ timeBucketCol = strAlias.get(i);
+ } else {
+ timeBucketCol = innerSelection.getFunction().getArguments().get(0).toString();
+ }
+ break;
+ }
+ }
+
+ Preconditions.checkArgument(timeBucketCol != null, "No Group By timebucket.");
+
+ boolean findTimeBucket = false;
+ for (ExpressionContext groupbyExp : groupbyExpressions) {
+ if (timeBucketCol.equals(groupbyExp.toString())) {
+ findTimeBucket = true;
+ break;
+ }
+ }
+
+ Preconditions.checkArgument(findTimeBucket, "No Group By timebucket.");
+ }
+
+ private static ExpressionContext findGapfillExpressionContext(QueryContext queryContext) {
+ for (ExpressionContext expressionContext : queryContext.getSelectExpressions()) {
+ if (isGapfill(expressionContext)) {
+ return expressionContext;
+ }
+ }
+ return null;
+ }
+
+ public static ExpressionContext getGapfillExpressionContext(QueryContext queryContext) {
+ GapfillType gapfillType = queryContext.getGapfillType();
+ if (gapfillType == GapfillType.AGGREGATE_GAP_FILL || gapfillType == GapfillType.GAP_FILL) {
+ return findGapfillExpressionContext(queryContext);
+ } else if (gapfillType == GapfillType.GAP_FILL_AGGREGATE || gapfillType == GapfillType.AGGREGATE_GAP_FILL_AGGREGATE
+ || gapfillType == GapfillType.GAP_FILL_SELECT) {
+ return findGapfillExpressionContext(queryContext.getSubQueryContext());
+ } else {
+ return null;
+ }
+ }
+
+ public static int findTimeBucketColumnIndex(QueryContext queryContext) {
+ GapfillType gapfillType = queryContext.getGapfillType();
+ if (gapfillType == GapfillType.GAP_FILL_AGGREGATE
+ || gapfillType == GapfillType.GAP_FILL_SELECT
+ || gapfillType == GapfillType.AGGREGATE_GAP_FILL_AGGREGATE) {
+ queryContext = queryContext.getSubQueryContext();
+ }
+ List<ExpressionContext> expressionContexts = queryContext.getSelectExpressions();
+ for (int i = 0; i < expressionContexts.size(); i++) {
+ if (isGapfill(expressionContexts.get(i))) {
+ return i;
+ }
+ }
+ return -1;
+ }
+
+ public static ExpressionContext getTimeSeriesOnExpressionContext(ExpressionContext gapFillSelection) {
+ List<ExpressionContext> args = gapFillSelection.getFunction().getArguments();
+ for (int i = STARTING_INDEX_OF_OPTIONAL_ARGS_FOR_PRE_AGGREGATE_GAP_FILL; i < args.size(); i++) {
+ if (GapfillUtils.isTimeSeriesOn(args.get(i))) {
+ return args.get(i);
+ }
+ }
+ return null;
+ }
+
+ public static Map<String, ExpressionContext> getFillExpressions(ExpressionContext gapFillSelection) {
+ Map<String, ExpressionContext> fillExpressions = new HashMap<>();
+ List<ExpressionContext> args = gapFillSelection.getFunction().getArguments();
+ for (int i = STARTING_INDEX_OF_OPTIONAL_ARGS_FOR_PRE_AGGREGATE_GAP_FILL; i < args.size(); i++) {
+ if (GapfillUtils.isFill(args.get(i))) {
+ ExpressionContext fillExpression = args.get(i);
+ fillExpressions.put(fillExpression.getFunction().getArguments().get(0).getIdentifier(), fillExpression);
+ }
+ }
+ return fillExpressions;
+ }
+
+ public static String getTableName(PinotQuery pinotQuery) {
+ while (pinotQuery.getDataSource().getSubquery() != null) {
+ pinotQuery = pinotQuery.getDataSource().getSubquery();
+ }
+ return pinotQuery.getDataSource().getTableName();
+ }
+
+ public static BrokerRequest stripGapfill(BrokerRequest brokerRequest) {
+ if (brokerRequest.getPinotQuery().getDataSource() == null) {
+ return brokerRequest;
+ }
+ QueryContext queryContext = BrokerRequestToQueryContextConverter.convert(brokerRequest);
+ GapfillUtils.GapfillType gapfillType = queryContext.getGapfillType();
+ if (gapfillType == null) {
+ return brokerRequest;
+ }
+ switch (gapfillType) {
+ // one sql query with gapfill only
+ case GAP_FILL:
+ return stripGapfill(brokerRequest.getPinotQuery());
+ // gapfill as subquery, the outer query may have the filter
+ case GAP_FILL_SELECT:
+ // gapfill as subquery, the outer query has the aggregation
+ case GAP_FILL_AGGREGATE:
+ // aggregation as subquery, the outer query is gapfill
+ case AGGREGATE_GAP_FILL:
+ return stripGapfill(brokerRequest.getPinotQuery().getDataSource().getSubquery());
+ // aggregation as the second-level subquery, gapfill as the first-level subquery, a different aggregation as the outer query
+ case AGGREGATE_GAP_FILL_AGGREGATE:
+ return stripGapfill(brokerRequest.getPinotQuery().getDataSource().getSubquery().getDataSource().getSubquery());
+ default:
+ return brokerRequest;
+ }
+ }
+
+ private static BrokerRequest stripGapfill(PinotQuery pinotQuery) {
+ PinotQuery copy = new PinotQuery(pinotQuery);
+ BrokerRequest brokerRequest = new BrokerRequest();
+ brokerRequest.setPinotQuery(copy);
+ // Set table name in broker request because it is used for access control, query routing etc.
+ DataSource dataSource = copy.getDataSource();
+ if (dataSource != null) {
+ QuerySource querySource = new QuerySource();
+ querySource.setTableName(dataSource.getTableName());
+ brokerRequest.setQuerySource(querySource);
+ }
+ List<Expression> selectList = copy.getSelectList();
+ for (int i = 0; i < selectList.size(); i++) {
+ Expression select = selectList.get(i);
+ if (select.getType() != ExpressionType.FUNCTION) {
+ continue;
+ }
+ if (GAP_FILL.equalsIgnoreCase(select.getFunctionCall().getOperator())) {
+ selectList.set(i, select.getFunctionCall().getOperands().get(0));
+ break;
+ }
+ if (AS.equalsIgnoreCase(select.getFunctionCall().getOperator())
+ && select.getFunctionCall().getOperands().get(0).getType() == ExpressionType.FUNCTION
+ && GAP_FILL.equalsIgnoreCase(select.getFunctionCall().getOperands().get(0).getFunctionCall().getOperator())) {
+ select.getFunctionCall().getOperands().set(0,
+ select.getFunctionCall().getOperands().get(0).getFunctionCall().getOperands().get(0));
+ break;
+ }
+ }
+
+ for (Expression orderBy : copy.getOrderByList()) {
+ if (orderBy.getType() != ExpressionType.FUNCTION) {
+ continue;
+ }
+ if (orderBy.getFunctionCall().getOperands().get(0).getType() == ExpressionType.FUNCTION
+ && GAP_FILL.equalsIgnoreCase(
+ orderBy.getFunctionCall().getOperands().get(0).getFunctionCall().getOperator())) {
+ orderBy.getFunctionCall().getOperands().set(0,
+ orderBy.getFunctionCall().getOperands().get(0).getFunctionCall().getOperands().get(0));
+ break;
+ }
+ }
+ return brokerRequest;
+ }
+
+ public enum GapfillType {
Review comment:
Fixed.
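The nesting rules that setGapfillType enforces above can be summarized as a small decision table. Below is a hypothetical stand-in sketch of that classification: boolean flags replace the real QueryContext inspection, and none of these names are part of the PR itself.

```java
// Hypothetical sketch of the GapfillType decision table implemented by
// GapfillUtils.setGapfillType. Boolean flags stand in for the real
// QueryContext inspection; this is not code from the PR.
enum GapfillType {
  GAP_FILL,                      // single query, gapfill only
  AGGREGATE_GAP_FILL,            // outer gapfill over an aggregate subquery
  GAP_FILL_SELECT,               // outer select over a gapfill subquery
  GAP_FILL_AGGREGATE,            // outer aggregate over a gapfill subquery
  AGGREGATE_GAP_FILL_AGGREGATE   // aggregate / gapfill / aggregate, three levels
}

final class GapfillTypeSketch {
  /**
   * @param outerGapfill the outer query selects gapfill(...)
   * @param outerAgg     the outer query has aggregation functions
   * @param hasSubquery  the outer query has a subquery
   * @param innerGapfill the subquery selects gapfill(...)
   * @param innerHasSub  the subquery itself has a subquery
   */
  static GapfillType classify(boolean outerGapfill, boolean outerAgg, boolean hasSubquery,
      boolean innerGapfill, boolean innerHasSub) {
    if (!hasSubquery) {
      // No subquery: either a plain gapfill query or no gapfill at all.
      return outerGapfill ? GapfillType.GAP_FILL : null;
    }
    if (outerGapfill) {
      // Gapfill in the outer query; setGapfillType validates that the
      // subquery aggregates and has no further nesting.
      return GapfillType.AGGREGATE_GAP_FILL;
    }
    if (innerGapfill) {
      if (!outerAgg) {
        return GapfillType.GAP_FILL_SELECT;
      }
      return innerHasSub ? GapfillType.AGGREGATE_GAP_FILL_AGGREGATE
                         : GapfillType.GAP_FILL_AGGREGATE;
    }
    return null; // no gapfill anywhere in the query
  }
}
```

For instance, `classify(false, true, true, true, false)` corresponds to an outer aggregation over a gapfill subquery, i.e. GAP_FILL_AGGREGATE.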
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r829470251
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/reduce/ColumnDataToBlockValSetConverter.java
##########
@@ -0,0 +1,181 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.core.query.reduce;
+
+import java.util.List;
+import javax.annotation.Nullable;
+import org.apache.pinot.common.utils.DataSchema;
+import org.apache.pinot.common.utils.DataTable;
+import org.apache.pinot.core.common.BlockValSet;
+import org.apache.pinot.segment.spi.index.reader.Dictionary;
+import org.apache.pinot.spi.data.FieldSpec;
+
+
+/**
+ * For the gapfill function, all raw data is retrieved from the Pinot
+ * servers and merged on the Pinot broker, in {@link DataTable} format.
+ * As part of the gapfill execution plan, the aggregation functions then
+ * run over the merged data on the broker, and they only accept input in
+ * {@link BlockValSet} format.
+ * This helper class converts a column of {@link DataTable} row data into
+ * the {@link BlockValSet} view that the aggregation functions consume.
+ */
+@SuppressWarnings({"rawtypes", "unchecked"})
+public class ColumnDataToBlockValSetConverter implements BlockValSet {
+
+ private final FieldSpec.DataType _dataType;
+ private final List<Object[]> _rows;
+ private final int _columnIndex;
+
+ public ColumnDataToBlockValSetConverter(DataSchema.ColumnDataType columnDataType, List<Object[]> rows,
+ int columnIndex) {
+ _dataType = columnDataType.toDataType();
+ _rows = rows;
+ _columnIndex = columnIndex;
+ }
+
+ @Override
+ public FieldSpec.DataType getValueType() {
+ return _dataType;
+ }
+
+ @Override
+ public boolean isSingleValue() {
+ return true;
+ }
+
+ @Nullable
+ @Override
+ public Dictionary getDictionary() {
+ throw new UnsupportedOperationException("Not supported");
+ }
+
+ @Override
+ public int[] getDictionaryIdsSV() {
+ throw new UnsupportedOperationException("Not supported");
Review comment:
Fixed
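The converter quoted above adapts row-major broker data to the column view the aggregation functions expect. A minimal illustration of that idea follows; the class and its single method are hypothetical stand-ins (the method name mirrors BlockValSet's getDoubleValuesSV but this is not the PR's actual class).

```java
import java.util.List;

// Hypothetical illustration of what ColumnDataToBlockValSetConverter does:
// expose one column of row-major broker data as a typed primitive array,
// the shape the aggregation functions consume. Names are illustrative.
final class ColumnViewSketch {
  private final List<Object[]> _rows;
  private final int _columnIndex;

  ColumnViewSketch(List<Object[]> rows, int columnIndex) {
    _rows = rows;
    _columnIndex = columnIndex;
  }

  // Copy the chosen column into a primitive array so aggregation can run
  // over it without per-row boxing.
  double[] getDoubleValuesSV() {
    double[] values = new double[_rows.size()];
    for (int i = 0; i < _rows.size(); i++) {
      values[i] = ((Number) _rows.get(i)[_columnIndex]).doubleValue();
    }
    return values;
  }
}
```

The real converter additionally dispatches on the column's DataType and throws UnsupportedOperationException for accessors that do not apply to merged row data (e.g. dictionary-based reads).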
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r829516324
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/reduce/GapfillFilterHandler.java
##########
@@ -0,0 +1,76 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.core.query.reduce;
+
+import java.util.HashMap;
+import java.util.Map;
+import org.apache.pinot.common.request.context.ExpressionContext;
+import org.apache.pinot.common.request.context.FilterContext;
+import org.apache.pinot.common.utils.DataSchema;
+import org.apache.pinot.core.query.reduce.filter.ColumnValueExtractor;
+import org.apache.pinot.core.query.reduce.filter.LiteralValueExtractor;
+import org.apache.pinot.core.query.reduce.filter.RowMatcher;
+import org.apache.pinot.core.query.reduce.filter.RowMatcherFactory;
+import org.apache.pinot.core.query.reduce.filter.ValueExtractor;
+import org.apache.pinot.core.query.reduce.filter.ValueExtractorFactory;
+import org.apache.pinot.core.util.GapfillUtils;
+
+
+/**
+ * Handler for Filter clause of GapFill.
+ */
+public class GapfillFilterHandler implements ValueExtractorFactory {
+ private final RowMatcher _rowMatcher;
+ private final DataSchema _dataSchema;
+ private final Map<String, Integer> _indexes;
+
+ public GapfillFilterHandler(FilterContext filter, DataSchema dataSchema) {
+ _dataSchema = dataSchema;
+ _indexes = new HashMap<>();
+ for (int i = 0; i < _dataSchema.size(); i++) {
+ _indexes.put(_dataSchema.getColumnName(i), i);
+ }
+ _rowMatcher = RowMatcherFactory.getRowMatcher(filter, this);
+ }
+
+ /**
+ * Returns {@code true} if the given row matches the HAVING clause, {@code false} otherwise.
+ */
+ public boolean isMatch(Object[] row) {
+ return _rowMatcher.isMatch(row);
+ }
+
+ /**
+ * Returns a ValueExtractor based on the given expression.
+ */
+ @Override
+ public ValueExtractor getValueExtractor(ExpressionContext expression) {
+ expression = GapfillUtils.stripGapfill(expression);
+ if (expression.getType() == ExpressionContext.Type.LITERAL) {
+ // Literal
+ return new LiteralValueExtractor(expression.getLiteral());
+ }
+
+ if (expression.getType() == ExpressionContext.Type.IDENTIFIER) {
+ return new ColumnValueExtractor(_indexes.get(expression.getIdentifier()), _dataSchema);
+ } else {
+ return new ColumnValueExtractor(_indexes.get(expression.getFunction().toString()), _dataSchema);
Review comment:
Add TODO
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/reduce/GapfillFilterHandler.java
##########
@@ -0,0 +1,76 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.core.query.reduce;
+
+import java.util.HashMap;
+import java.util.Map;
+import org.apache.pinot.common.request.context.ExpressionContext;
+import org.apache.pinot.common.request.context.FilterContext;
+import org.apache.pinot.common.utils.DataSchema;
+import org.apache.pinot.core.query.reduce.filter.ColumnValueExtractor;
+import org.apache.pinot.core.query.reduce.filter.LiteralValueExtractor;
+import org.apache.pinot.core.query.reduce.filter.RowMatcher;
+import org.apache.pinot.core.query.reduce.filter.RowMatcherFactory;
+import org.apache.pinot.core.query.reduce.filter.ValueExtractor;
+import org.apache.pinot.core.query.reduce.filter.ValueExtractorFactory;
+import org.apache.pinot.core.util.GapfillUtils;
+
+
+/**
+ * Handler for Filter clause of GapFill.
+ */
+public class GapfillFilterHandler implements ValueExtractorFactory {
+ private final RowMatcher _rowMatcher;
+ private final DataSchema _dataSchema;
+ private final Map<String, Integer> _indexes;
+
+ public GapfillFilterHandler(FilterContext filter, DataSchema dataSchema) {
+ _dataSchema = dataSchema;
+ _indexes = new HashMap<>();
+ for (int i = 0; i < _dataSchema.size(); i++) {
+ _indexes.put(_dataSchema.getColumnName(i), i);
Review comment:
Add TODO
[GitHub] [pinot] Jackie-Jiang commented on a change in pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
Jackie-Jiang commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r830401982
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/reduce/filter/RowMatcherFactory.java
##########
@@ -0,0 +1,47 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.core.query.reduce.filter;
+
+import org.apache.pinot.common.request.context.FilterContext;
+
+
+/**
+ * Factory for RowMatcher.
+ */
+public class RowMatcherFactory {
+ private RowMatcherFactory() {
+ }
+
+ /**
+ * Helper method to construct a RowMatcher based on the given filter.
+ */
+ public static RowMatcher getRowMatcher(FilterContext filter, ValueExtractorFactory valueExtractorFactory) {
+ switch (filter.getType()) {
+ case AND:
+ return new AndRowMatcher(filter.getChildren(), valueExtractorFactory);
+ case OR:
+ return new OrRowMatcher(filter.getChildren(), valueExtractorFactory);
+ case PREDICATE:
Review comment:
We need to handle `NOT` here. This is already fixed in #8366. If that one is merged first, we should integrate the fix here.
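A `NOT` handler along the lines discussed would look roughly like the sketch below. `RowMatcher` and `NotRowMatcher` here are minimal stand-ins following the naming pattern of the factory above, not the actual fix from #8366, which simply negates its single child matcher.

```java
// Hypothetical sketch of the missing NOT case: a matcher that negates its
// single child. The real fix lives in PR #8366.
public class NotRowMatcherSketch {
  // Stand-in for org.apache.pinot.core.query.reduce.filter.RowMatcher.
  interface RowMatcher {
    boolean isMatch(Object[] row);
  }

  static final class NotRowMatcher implements RowMatcher {
    private final RowMatcher _child;

    NotRowMatcher(RowMatcher child) {
      _child = child;
    }

    @Override
    public boolean isMatch(Object[] row) {
      return !_child.isMatch(row); // NOT simply inverts the child's result
    }
  }

  public static void main(String[] args) {
    RowMatcher even = row -> ((int) row[0]) % 2 == 0;
    RowMatcher odd = new NotRowMatcher(even);
    System.out.println(odd.isMatch(new Object[]{3})); // true
  }
}
```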
##########
File path: pinot-core/src/test/java/org/apache/pinot/queries/BaseQueriesTest.java
##########
@@ -197,21 +198,31 @@ protected BrokerResponseNative getBrokerResponseForSqlQuery(String sqlQuery, Pla
}
queryOptions.put(Request.QueryOptionKey.GROUP_BY_MODE, Request.SQL);
queryOptions.put(Request.QueryOptionKey.RESPONSE_FORMAT, Request.SQL);
+ BrokerRequest strippedBrokerRequest = GapfillUtils.stripGapfill(brokerRequest);
Review comment:
(MAJOR) Query options should be preserved when stripping the gapfill
##########
File path: pinot-core/src/test/java/org/apache/pinot/queries/BaseQueriesTest.java
##########
@@ -197,21 +198,31 @@ protected BrokerResponseNative getBrokerResponseForSqlQuery(String sqlQuery, Pla
}
queryOptions.put(Request.QueryOptionKey.GROUP_BY_MODE, Request.SQL);
queryOptions.put(Request.QueryOptionKey.RESPONSE_FORMAT, Request.SQL);
+ BrokerRequest strippedBrokerRequest = GapfillUtils.stripGapfill(brokerRequest);
+ queryOptions = strippedBrokerRequest.getPinotQuery().getQueryOptions();
+ if (queryOptions == null) {
+ queryOptions = new HashMap<>();
+ strippedBrokerRequest.getPinotQuery().setQueryOptions(queryOptions);
+ }
+ queryOptions.put(Request.QueryOptionKey.GROUP_BY_MODE, Request.SQL);
+ queryOptions.put(Request.QueryOptionKey.RESPONSE_FORMAT, Request.SQL);
QueryContext queryContext = BrokerRequestToQueryContextConverter.convert(brokerRequest);
- return getBrokerResponse(queryContext, planMaker);
+ QueryContext strippedQueryContext = BrokerRequestToQueryContextConverter.convert(strippedBrokerRequest);
+ return getBrokerResponse(queryContext, strippedQueryContext, planMaker);
}
/**
* Run query on multiple index segments with custom plan maker.
* <p>Use this to test the whole flow from server to broker.
* <p>The result should be equivalent to querying 4 identical index segments.
*/
- private BrokerResponseNative getBrokerResponse(QueryContext queryContext, PlanMaker planMaker) {
+ private BrokerResponseNative getBrokerResponse(
+ QueryContext queryContext, QueryContext strippedQueryContext, PlanMaker planMaker) {
Review comment:
```suggestion
QueryContext queryContext, QueryContext serverQueryContext, PlanMaker planMaker) {
```
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/selection/SelectionOperatorUtils.java
##########
@@ -79,7 +79,6 @@ private SelectionOperatorUtils() {
private static final String FLOAT_PATTERN = "#########0.0####";
private static final String DOUBLE_PATTERN = "###################0.0#########";
private static final DecimalFormatSymbols DECIMAL_FORMAT_SYMBOLS = DecimalFormatSymbols.getInstance(Locale.US);
-
Review comment:
(minor) revert this file
##########
File path: pinot-broker/src/main/java/org/apache/pinot/broker/requesthandler/BaseBrokerRequestHandler.java
##########
@@ -217,7 +218,10 @@ private BrokerResponseNative handleSQLRequest(long requestId, String query, Json
requestStatistics.setErrorCode(QueryException.PQL_PARSING_ERROR_CODE);
return new BrokerResponseNative(QueryException.getException(QueryException.PQL_PARSING_ERROR, e));
}
- PinotQuery pinotQuery = brokerRequest.getPinotQuery();
+
+ BrokerRequest serverBrokerRequest = GapfillUtils.stripGapfill(brokerRequest);
Review comment:
(MAJOR) Let's set the query options first (some options are already set during the query compilation), then strip the gapfill. You may just set the stripped query options to be the original query options without making a copy
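The ordering being requested can be sketched as follows. `QueryStub` and `stripGapfill` are hypothetical stand-ins, not Pinot classes; the point is that options are set on the original request first, and the stripped request then reuses the same options map rather than starting from an empty one.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: set query options before stripping, and share the
// options map with the stripped request instead of copying it.
public class OptionPreservingSketch {
  static final class QueryStub {
    Map<String, String> queryOptions = new HashMap<>();
  }

  // "Strips gapfill" into a new query object while sharing the options map.
  public static QueryStub stripGapfill(QueryStub original) {
    QueryStub stripped = new QueryStub();
    stripped.queryOptions = original.queryOptions; // no copy, as suggested
    return stripped;
  }

  public static void main(String[] args) {
    QueryStub original = new QueryStub();
    original.queryOptions.put("groupByMode", "sql"); // set options first
    QueryStub stripped = stripGapfill(original);     // then strip
    System.out.println(stripped.queryOptions.get("groupByMode")); // sql
  }
}
```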
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/reduce/GapfillProcessor.java
##########
@@ -0,0 +1,471 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.core.query.reduce;
+
+import com.google.common.base.Preconditions;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import org.apache.pinot.common.request.context.ExpressionContext;
+import org.apache.pinot.common.request.context.FunctionContext;
+import org.apache.pinot.common.response.broker.BrokerResponseNative;
+import org.apache.pinot.common.response.broker.ResultTable;
+import org.apache.pinot.common.utils.DataSchema;
+import org.apache.pinot.common.utils.DataSchema.ColumnDataType;
+import org.apache.pinot.core.common.BlockValSet;
+import org.apache.pinot.core.data.table.Key;
+import org.apache.pinot.core.query.aggregation.function.AggregationFunction;
+import org.apache.pinot.core.query.aggregation.function.AggregationFunctionFactory;
+import org.apache.pinot.core.query.aggregation.function.CountAggregationFunction;
+import org.apache.pinot.core.query.aggregation.groupby.GroupByResultHolder;
+import org.apache.pinot.core.query.request.context.QueryContext;
+import org.apache.pinot.core.util.GapfillUtils;
+import org.apache.pinot.spi.data.DateTimeFormatSpec;
+import org.apache.pinot.spi.data.DateTimeGranularitySpec;
+
+
+/**
+ * Helper class to reduce and set gap fill results into the BrokerResponseNative
+ */
+@SuppressWarnings({"rawtypes", "unchecked"})
+public class GapfillProcessor {
+ private final QueryContext _queryContext;
+
+ private final int _limitForAggregatedResult;
+ private final DateTimeGranularitySpec _gapfillDateTimeGranularity;
+ private final DateTimeGranularitySpec _postGapfillDateTimeGranularity;
+ private final DateTimeFormatSpec _dateTimeFormatter;
+ private final long _startMs;
+ private final long _endMs;
+ private final long _gapfillTimeBucketSize;
+ private final long _postGapfillTimeBucketSize;
+ private final int _numOfTimeBuckets;
+ private final List<Integer> _groupByKeyIndexes;
+ private final Set<Key> _groupByKeys;
+ private final Map<Key, Object[]> _previousByGroupKey;
+ private final Map<String, ExpressionContext> _fillExpressions;
+ private final List<ExpressionContext> _timeSeries;
+ private final GapfillUtils.GapfillType _gapfillType;
+ private int _limitForGapfilledResult;
+ private boolean[] _isGroupBySelections;
+ private final int _timeBucketColumnIndex;
+ private int[] _sourceColumnIndexForResultSchema = null;
+ private final int _aggregationSize;
+
+ GapfillProcessor(QueryContext queryContext, GapfillUtils.GapfillType gapfillType) {
+ _queryContext = queryContext;
+ _gapfillType = gapfillType;
+ _limitForAggregatedResult = queryContext.getLimit();
+ if (_gapfillType == GapfillUtils.GapfillType.AGGREGATE_GAP_FILL
+ || _gapfillType == GapfillUtils.GapfillType.GAP_FILL) {
+ _limitForGapfilledResult = queryContext.getLimit();
+ } else {
+ _limitForGapfilledResult = queryContext.getSubquery().getLimit();
+ }
+
+ ExpressionContext gapFillSelection = GapfillUtils.getGapfillExpressionContext(queryContext, _gapfillType);
+ _timeBucketColumnIndex = GapfillUtils.findTimeBucketColumnIndex(queryContext, _gapfillType);
+
+ List<ExpressionContext> args = gapFillSelection.getFunction().getArguments();
+
+ _dateTimeFormatter = new DateTimeFormatSpec(args.get(1).getLiteral());
+ _gapfillDateTimeGranularity = new DateTimeGranularitySpec(args.get(4).getLiteral());
+ _postGapfillDateTimeGranularity = new DateTimeGranularitySpec(args.get(5).getLiteral());
Review comment:
Trying to understand when we need to use different granularity for gapfill and post-gapfill. Does this align with the gapfill definition?
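The bucket arithmetic in the constructor above can be illustrated with concrete numbers: a 1-minute gapfill granularity combined with a 5-minute post-gapfill granularity means each aggregation step consumes 5 gapfilled buckets. The values below are illustrative, not taken from Pinot.

```java
// Standalone sketch of the time-bucket arithmetic used by the processor.
public class BucketMathSketch {
  // Mirrors findGapfillBucketIndex: which gapfill bucket a timestamp falls in.
  public static int bucketIndex(long timeMs, long startMs, long gapfillBucketMs) {
    return (int) ((timeMs - startMs) / gapfillBucketMs);
  }

  public static void main(String[] args) {
    long startMs = 0L;
    long endMs = 60 * 60 * 1000L;              // one-hour window
    long gapfillBucketMs = 60 * 1000L;         // 1-minute gapfill granularity
    long postGapfillBucketMs = 5 * 60 * 1000L; // 5-minute aggregation granularity

    // Same formulas as _numOfTimeBuckets and _aggregationSize above.
    int numOfTimeBuckets = (int) ((endMs - startMs) / gapfillBucketMs);
    int aggregationSize = (int) (postGapfillBucketMs / gapfillBucketMs);
    System.out.println(numOfTimeBuckets); // 60
    System.out.println(aggregationSize);  // 5

    // A timestamp 7.5 minutes into the window lands in gapfill bucket 7.
    System.out.println(bucketIndex(startMs + 450_000L, startMs, gapfillBucketMs)); // 7
  }
}
```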
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/reduce/GapfillProcessor.java
##########
@@ -0,0 +1,471 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.core.query.reduce;
+
+import com.google.common.base.Preconditions;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import org.apache.pinot.common.request.context.ExpressionContext;
+import org.apache.pinot.common.request.context.FunctionContext;
+import org.apache.pinot.common.response.broker.BrokerResponseNative;
+import org.apache.pinot.common.response.broker.ResultTable;
+import org.apache.pinot.common.utils.DataSchema;
+import org.apache.pinot.common.utils.DataSchema.ColumnDataType;
+import org.apache.pinot.core.common.BlockValSet;
+import org.apache.pinot.core.data.table.Key;
+import org.apache.pinot.core.query.aggregation.function.AggregationFunction;
+import org.apache.pinot.core.query.aggregation.function.AggregationFunctionFactory;
+import org.apache.pinot.core.query.aggregation.function.CountAggregationFunction;
+import org.apache.pinot.core.query.aggregation.groupby.GroupByResultHolder;
+import org.apache.pinot.core.query.request.context.QueryContext;
+import org.apache.pinot.core.util.GapfillUtils;
+import org.apache.pinot.spi.data.DateTimeFormatSpec;
+import org.apache.pinot.spi.data.DateTimeGranularitySpec;
+
+
+/**
+ * Helper class to reduce and set gap fill results into the BrokerResponseNative
+ */
+@SuppressWarnings({"rawtypes", "unchecked"})
+public class GapfillProcessor {
+ private final QueryContext _queryContext;
+
+ private final int _limitForAggregatedResult;
+ private final DateTimeGranularitySpec _gapfillDateTimeGranularity;
+ private final DateTimeGranularitySpec _postGapfillDateTimeGranularity;
+ private final DateTimeFormatSpec _dateTimeFormatter;
+ private final long _startMs;
+ private final long _endMs;
+ private final long _gapfillTimeBucketSize;
+ private final long _postGapfillTimeBucketSize;
+ private final int _numOfTimeBuckets;
+ private final List<Integer> _groupByKeyIndexes;
+ private final Set<Key> _groupByKeys;
+ private final Map<Key, Object[]> _previousByGroupKey;
+ private final Map<String, ExpressionContext> _fillExpressions;
+ private final List<ExpressionContext> _timeSeries;
+ private final GapfillUtils.GapfillType _gapfillType;
+ private int _limitForGapfilledResult;
+ private boolean[] _isGroupBySelections;
+ private final int _timeBucketColumnIndex;
+ private int[] _sourceColumnIndexForResultSchema = null;
+ private final int _aggregationSize;
+
+ GapfillProcessor(QueryContext queryContext, GapfillUtils.GapfillType gapfillType) {
+ _queryContext = queryContext;
+ _gapfillType = gapfillType;
+ _limitForAggregatedResult = queryContext.getLimit();
+ if (_gapfillType == GapfillUtils.GapfillType.AGGREGATE_GAP_FILL
+ || _gapfillType == GapfillUtils.GapfillType.GAP_FILL) {
+ _limitForGapfilledResult = queryContext.getLimit();
+ } else {
+ _limitForGapfilledResult = queryContext.getSubquery().getLimit();
+ }
+
+ ExpressionContext gapFillSelection = GapfillUtils.getGapfillExpressionContext(queryContext, _gapfillType);
+ _timeBucketColumnIndex = GapfillUtils.findTimeBucketColumnIndex(queryContext, _gapfillType);
+
+ List<ExpressionContext> args = gapFillSelection.getFunction().getArguments();
+
+ _dateTimeFormatter = new DateTimeFormatSpec(args.get(1).getLiteral());
+ _gapfillDateTimeGranularity = new DateTimeGranularitySpec(args.get(4).getLiteral());
+ _postGapfillDateTimeGranularity = new DateTimeGranularitySpec(args.get(5).getLiteral());
+ String start = args.get(2).getLiteral();
+ _startMs = truncate(_dateTimeFormatter.fromFormatToMillis(start));
+ String end = args.get(3).getLiteral();
+ _endMs = truncate(_dateTimeFormatter.fromFormatToMillis(end));
+ _gapfillTimeBucketSize = _gapfillDateTimeGranularity.granularityToMillis();
+ _postGapfillTimeBucketSize = _postGapfillDateTimeGranularity.granularityToMillis();
+ _numOfTimeBuckets = (int) ((_endMs - _startMs) / _gapfillTimeBucketSize);
+
+ _fillExpressions = GapfillUtils.getFillExpressions(gapFillSelection);
+
+ _aggregationSize = (int) (_postGapfillTimeBucketSize / _gapfillTimeBucketSize);
+
+ _previousByGroupKey = new HashMap<>();
+ _groupByKeyIndexes = new ArrayList<>();
+ _groupByKeys = new HashSet<>();
+
+ ExpressionContext timeseriesOn = GapfillUtils.getTimeSeriesOnExpressionContext(gapFillSelection);
+ _timeSeries = timeseriesOn.getFunction().getArguments();
+ }
+
+ private int findGapfillBucketIndex(long time) {
+ return (int) ((time - _startMs) / _gapfillTimeBucketSize);
+ }
+
+ private void replaceColumnNameWithAlias(DataSchema dataSchema) {
+ QueryContext queryContext;
+ if (_gapfillType == GapfillUtils.GapfillType.AGGREGATE_GAP_FILL_AGGREGATE) {
+ queryContext = _queryContext.getSubquery().getSubquery();
+ } else if (_gapfillType == GapfillUtils.GapfillType.GAP_FILL) {
+ queryContext = _queryContext;
+ } else {
+ queryContext = _queryContext.getSubquery();
+ }
+ List<String> aliasList = queryContext.getAliasList();
+ Map<String, String> columnNameToAliasMap = new HashMap<>();
+ for (int i = 0; i < aliasList.size(); i++) {
+ if (aliasList.get(i) != null) {
+ ExpressionContext selection = queryContext.getSelectExpressions().get(i);
+ if (GapfillUtils.isGapfill(selection)) {
+ selection = selection.getFunction().getArguments().get(0);
+ }
+ columnNameToAliasMap.put(selection.toString(), aliasList.get(i));
+ }
+ }
+ for (int i = 0; i < dataSchema.getColumnNames().length; i++) {
+ if (columnNameToAliasMap.containsKey(dataSchema.getColumnNames()[i])) {
+ dataSchema.getColumnNames()[i] = columnNameToAliasMap.get(dataSchema.getColumnNames()[i]);
+ }
+ }
+ }
+
+ /**
+ * Three things happen here:
+ * 1. Sort the result sets from all Pinot servers based on timestamp.
+ * 2. Gapfill the data for missing entities per time bucket.
+ * 3. Aggregate the dataset per time bucket.
+ */
+ public void process(BrokerResponseNative brokerResponseNative) {
+ DataSchema dataSchema = brokerResponseNative.getResultTable().getDataSchema();
+ DataSchema resultTableSchema = getResultTableDataSchema(dataSchema);
+ if (brokerResponseNative.getResultTable().getRows().isEmpty()) {
+ brokerResponseNative.setResultTable(new ResultTable(resultTableSchema, Collections.emptyList()));
+ return;
+ }
+
+ String[] columns = dataSchema.getColumnNames();
+
+ Map<String, Integer> indexes = new HashMap<>();
+ for (int i = 0; i < columns.length; i++) {
+ indexes.put(columns[i], i);
+ }
+
+ _isGroupBySelections = new boolean[dataSchema.getColumnDataTypes().length];
+
+ // The first argument of timeSeries is the time column. The remaining ones define the entity.
+ for (ExpressionContext entityColumn : _timeSeries) {
+ int index = indexes.get(entityColumn.getIdentifier());
+ _isGroupBySelections[index] = true;
+ }
+
+ for (int i = 0; i < _isGroupBySelections.length; i++) {
+ if (_isGroupBySelections[i]) {
+ _groupByKeyIndexes.add(i);
+ }
+ }
+
+ List<Object[]>[] timeBucketedRawRows = putRawRowsIntoTimeBucket(brokerResponseNative.getResultTable().getRows());
+
+ replaceColumnNameWithAlias(dataSchema);
+
+ if (_queryContext.getAggregationFunctions() == null) {
+
+ Map<String, Integer> sourceColumnsIndexes = new HashMap<>();
+ for (int i = 0; i < dataSchema.getColumnNames().length; i++) {
+ sourceColumnsIndexes.put(dataSchema.getColumnName(i), i);
+ }
+ _sourceColumnIndexForResultSchema = new int[resultTableSchema.getColumnNames().length];
+ for (int i = 0; i < _sourceColumnIndexForResultSchema.length; i++) {
+ _sourceColumnIndexForResultSchema[i] = sourceColumnsIndexes.get(resultTableSchema.getColumnName(i));
+ }
+ }
+
+ List<Object[]> resultRows = gapFillAndAggregate(timeBucketedRawRows, resultTableSchema, dataSchema);
+ brokerResponseNative.setResultTable(new ResultTable(resultTableSchema, resultRows));
+ }
+
+ /**
+ * Constructs the DataSchema for the ResultTable.
+ */
+ private DataSchema getResultTableDataSchema(DataSchema dataSchema) {
+ if (_gapfillType == GapfillUtils.GapfillType.GAP_FILL) {
+ return dataSchema;
+ }
+
+ int numOfColumns = _queryContext.getSelectExpressions().size();
+ String[] columnNames = new String[numOfColumns];
+ ColumnDataType[] columnDataTypes = new ColumnDataType[numOfColumns];
+ for (int i = 0; i < numOfColumns; i++) {
+ ExpressionContext expressionContext = _queryContext.getSelectExpressions().get(i);
+ if (GapfillUtils.isGapfill(expressionContext)) {
+ expressionContext = expressionContext.getFunction().getArguments().get(0);
+ }
+ if (expressionContext.getType() != ExpressionContext.Type.FUNCTION) {
+ columnNames[i] = expressionContext.getIdentifier();
+ columnDataTypes[i] = ColumnDataType.STRING;
+ } else {
+ FunctionContext functionContext = expressionContext.getFunction();
+ AggregationFunction aggregationFunction =
+ AggregationFunctionFactory.getAggregationFunction(functionContext, _queryContext);
+ columnDataTypes[i] = aggregationFunction.getFinalResultColumnType();
+ columnNames[i] = functionContext.toString();
+ }
+ }
+ return new DataSchema(columnNames, columnDataTypes);
+ }
+
+ private Key constructGroupKeys(Object[] row) {
+ Object[] groupKeys = new Object[_groupByKeyIndexes.size()];
+ for (int i = 0; i < _groupByKeyIndexes.size(); i++) {
+ groupKeys[i] = row[_groupByKeyIndexes.get(i)];
+ }
+ return new Key(groupKeys);
+ }
+
+ private long truncate(long epoch) {
+ int sz = _gapfillDateTimeGranularity.getSize();
+ return epoch / sz * sz;
+ }
+
+ private List<Object[]> gapFillAndAggregate(List<Object[]>[] timeBucketedRawRows,
+ DataSchema dataSchemaForAggregatedResult, DataSchema dataSchema) {
+ List<Object[]> result = new ArrayList<>();
+
+ GapfillFilterHandler postGapfillFilterHandler = null;
+ if (_queryContext.getSubquery() != null && _queryContext.getFilter() != null) {
+ postGapfillFilterHandler = new GapfillFilterHandler(_queryContext.getFilter(), dataSchema);
+ }
+ GapfillFilterHandler postAggregateHavingFilterHandler = null;
+ if (_queryContext.getHavingFilter() != null) {
+ postAggregateHavingFilterHandler =
+ new GapfillFilterHandler(_queryContext.getHavingFilter(), dataSchemaForAggregatedResult);
+ }
+ long start = _startMs;
+ ColumnDataType[] resultColumnDataTypes = dataSchema.getColumnDataTypes();
+ List<Object[]> bucketedResult = new ArrayList<>();
+ for (long time = _startMs; time < _endMs; time += _gapfillTimeBucketSize) {
+ int index = findGapfillBucketIndex(time);
+ gapfill(time, bucketedResult, timeBucketedRawRows[index], dataSchema, postGapfillFilterHandler);
+ if (_queryContext.getAggregationFunctions() == null) {
+ for (Object [] row : bucketedResult) {
+ Object[] resultRow = new Object[_sourceColumnIndexForResultSchema.length];
+ for (int i = 0; i < _sourceColumnIndexForResultSchema.length; i++) {
+ resultRow[i] = row[_sourceColumnIndexForResultSchema[i]];
+ }
+ result.add(resultRow);
+ }
+ bucketedResult = new ArrayList<>();
Review comment:
Unnecessary allocation
```suggestion
bucketedResult.clear();
```
##########
File path: pinot-core/src/test/java/org/apache/pinot/queries/BaseQueriesTest.java
##########
@@ -197,21 +198,31 @@ protected BrokerResponseNative getBrokerResponseForSqlQuery(String sqlQuery, Pla
}
queryOptions.put(Request.QueryOptionKey.GROUP_BY_MODE, Request.SQL);
queryOptions.put(Request.QueryOptionKey.RESPONSE_FORMAT, Request.SQL);
+ BrokerRequest strippedBrokerRequest = GapfillUtils.stripGapfill(brokerRequest);
Review comment:
(minor) rename it to `serverBrokerRequest`
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/reduce/GapfillProcessor.java
##########
@@ -0,0 +1,471 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.core.query.reduce;
+
+import com.google.common.base.Preconditions;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import org.apache.pinot.common.request.context.ExpressionContext;
+import org.apache.pinot.common.request.context.FunctionContext;
+import org.apache.pinot.common.response.broker.BrokerResponseNative;
+import org.apache.pinot.common.response.broker.ResultTable;
+import org.apache.pinot.common.utils.DataSchema;
+import org.apache.pinot.common.utils.DataSchema.ColumnDataType;
+import org.apache.pinot.core.common.BlockValSet;
+import org.apache.pinot.core.data.table.Key;
+import org.apache.pinot.core.query.aggregation.function.AggregationFunction;
+import org.apache.pinot.core.query.aggregation.function.AggregationFunctionFactory;
+import org.apache.pinot.core.query.aggregation.function.CountAggregationFunction;
+import org.apache.pinot.core.query.aggregation.groupby.GroupByResultHolder;
+import org.apache.pinot.core.query.request.context.QueryContext;
+import org.apache.pinot.core.util.GapfillUtils;
+import org.apache.pinot.spi.data.DateTimeFormatSpec;
+import org.apache.pinot.spi.data.DateTimeGranularitySpec;
+
+
+/**
+ * Helper class to reduce and set gap fill results into the BrokerResponseNative
+ */
+@SuppressWarnings({"rawtypes", "unchecked"})
+public class GapfillProcessor {
+ private final QueryContext _queryContext;
+
+ private final int _limitForAggregatedResult;
+ private final DateTimeGranularitySpec _gapfillDateTimeGranularity;
+ private final DateTimeGranularitySpec _postGapfillDateTimeGranularity;
+ private final DateTimeFormatSpec _dateTimeFormatter;
+ private final long _startMs;
+ private final long _endMs;
+ private final long _gapfillTimeBucketSize;
+ private final long _postGapfillTimeBucketSize;
+ private final int _numOfTimeBuckets;
+ private final List<Integer> _groupByKeyIndexes;
+ private final Set<Key> _groupByKeys;
+ private final Map<Key, Object[]> _previousByGroupKey;
+ private final Map<String, ExpressionContext> _fillExpressions;
+ private final List<ExpressionContext> _timeSeries;
+ private final GapfillUtils.GapfillType _gapfillType;
+ private int _limitForGapfilledResult;
+ private boolean[] _isGroupBySelections;
+ private final int _timeBucketColumnIndex;
+ private int[] _sourceColumnIndexForResultSchema = null;
+ private final int _aggregationSize;
+
+ GapfillProcessor(QueryContext queryContext, GapfillUtils.GapfillType gapfillType) {
+ _queryContext = queryContext;
+ _gapfillType = gapfillType;
+ _limitForAggregatedResult = queryContext.getLimit();
+ if (_gapfillType == GapfillUtils.GapfillType.AGGREGATE_GAP_FILL
+ || _gapfillType == GapfillUtils.GapfillType.GAP_FILL) {
+ _limitForGapfilledResult = queryContext.getLimit();
+ } else {
+ _limitForGapfilledResult = queryContext.getSubquery().getLimit();
+ }
+
+ ExpressionContext gapFillSelection = GapfillUtils.getGapfillExpressionContext(queryContext, _gapfillType);
+ _timeBucketColumnIndex = GapfillUtils.findTimeBucketColumnIndex(queryContext, _gapfillType);
+
+ List<ExpressionContext> args = gapFillSelection.getFunction().getArguments();
+
+ _dateTimeFormatter = new DateTimeFormatSpec(args.get(1).getLiteral());
+ _gapfillDateTimeGranularity = new DateTimeGranularitySpec(args.get(4).getLiteral());
+ _postGapfillDateTimeGranularity = new DateTimeGranularitySpec(args.get(5).getLiteral());
+ String start = args.get(2).getLiteral();
+ _startMs = truncate(_dateTimeFormatter.fromFormatToMillis(start));
+ String end = args.get(3).getLiteral();
+ _endMs = truncate(_dateTimeFormatter.fromFormatToMillis(end));
+ _gapfillTimeBucketSize = _gapfillDateTimeGranularity.granularityToMillis();
+ _postGapfillTimeBucketSize = _postGapfillDateTimeGranularity.granularityToMillis();
+ _numOfTimeBuckets = (int) ((_endMs - _startMs) / _gapfillTimeBucketSize);
+
+ _fillExpressions = GapfillUtils.getFillExpressions(gapFillSelection);
+
+ _aggregationSize = (int) (_postGapfillTimeBucketSize / _gapfillTimeBucketSize);
+
+ _previousByGroupKey = new HashMap<>();
+ _groupByKeyIndexes = new ArrayList<>();
+ _groupByKeys = new HashSet<>();
+
+ ExpressionContext timeseriesOn = GapfillUtils.getTimeSeriesOnExpressionContext(gapFillSelection);
+ _timeSeries = timeseriesOn.getFunction().getArguments();
+ }
+
+ private int findGapfillBucketIndex(long time) {
+ return (int) ((time - _startMs) / _gapfillTimeBucketSize);
+ }
+
+ private void replaceColumnNameWithAlias(DataSchema dataSchema) {
+ QueryContext queryContext;
+ if (_gapfillType == GapfillUtils.GapfillType.AGGREGATE_GAP_FILL_AGGREGATE) {
+ queryContext = _queryContext.getSubquery().getSubquery();
+ } else if (_gapfillType == GapfillUtils.GapfillType.GAP_FILL) {
+ queryContext = _queryContext;
+ } else {
+ queryContext = _queryContext.getSubquery();
+ }
+ List<String> aliasList = queryContext.getAliasList();
+ Map<String, String> columnNameToAliasMap = new HashMap<>();
+ for (int i = 0; i < aliasList.size(); i++) {
+ if (aliasList.get(i) != null) {
+ ExpressionContext selection = queryContext.getSelectExpressions().get(i);
+ if (GapfillUtils.isGapfill(selection)) {
+ selection = selection.getFunction().getArguments().get(0);
+ }
+ columnNameToAliasMap.put(selection.toString(), aliasList.get(i));
+ }
+ }
+ for (int i = 0; i < dataSchema.getColumnNames().length; i++) {
+ if (columnNameToAliasMap.containsKey(dataSchema.getColumnNames()[i])) {
+ dataSchema.getColumnNames()[i] = columnNameToAliasMap.get(dataSchema.getColumnNames()[i]);
+ }
+ }
+ }
+
+ /**
+ * Three things happen here:
+ * 1. Sort the result sets from all Pinot servers based on timestamp.
+ * 2. Gap-fill the data for missing entities per time bucket.
+ * 3. Aggregate the dataset per time bucket.
+ */
+ public void process(BrokerResponseNative brokerResponseNative) {
+ DataSchema dataSchema = brokerResponseNative.getResultTable().getDataSchema();
+ DataSchema resultTableSchema = getResultTableDataSchema(dataSchema);
+ if (brokerResponseNative.getResultTable().getRows().isEmpty()) {
+ brokerResponseNative.setResultTable(new ResultTable(resultTableSchema, Collections.emptyList()));
+ return;
+ }
+
+ String[] columns = dataSchema.getColumnNames();
+
+ Map<String, Integer> indexes = new HashMap<>();
+ for (int i = 0; i < columns.length; i++) {
+ indexes.put(columns[i], i);
+ }
+
+ _isGroupBySelections = new boolean[dataSchema.getColumnDataTypes().length];
+
+ // The first argument of timeSeries is the time column. The remaining ones define the entity.
+ for (ExpressionContext entityColumn : _timeSeries) {
+ int index = indexes.get(entityColumn.getIdentifier());
+ _isGroupBySelections[index] = true;
+ }
+
+ for (int i = 0; i < _isGroupBySelections.length; i++) {
+ if (_isGroupBySelections[i]) {
+ _groupByKeyIndexes.add(i);
+ }
+ }
+
+ List<Object[]>[] timeBucketedRawRows = putRawRowsIntoTimeBucket(brokerResponseNative.getResultTable().getRows());
+
+ replaceColumnNameWithAlias(dataSchema);
+
+ if (_queryContext.getAggregationFunctions() == null) {
+
+ Map<String, Integer> sourceColumnsIndexes = new HashMap<>();
+ for (int i = 0; i < dataSchema.getColumnNames().length; i++) {
+ sourceColumnsIndexes.put(dataSchema.getColumnName(i), i);
+ }
+ _sourceColumnIndexForResultSchema = new int[resultTableSchema.getColumnNames().length];
+ for (int i = 0; i < _sourceColumnIndexForResultSchema.length; i++) {
+ _sourceColumnIndexForResultSchema[i] = sourceColumnsIndexes.get(resultTableSchema.getColumnName(i));
+ }
+ }
+
+ List<Object[]> resultRows = gapFillAndAggregate(timeBucketedRawRows, resultTableSchema, dataSchema);
+ brokerResponseNative.setResultTable(new ResultTable(resultTableSchema, resultRows));
+ }
+
+ /**
+ * Constructs the DataSchema for the ResultTable.
+ */
+ private DataSchema getResultTableDataSchema(DataSchema dataSchema) {
+ if (_gapfillType == GapfillUtils.GapfillType.GAP_FILL) {
+ return dataSchema;
+ }
+
+ int numOfColumns = _queryContext.getSelectExpressions().size();
+ String[] columnNames = new String[numOfColumns];
+ ColumnDataType[] columnDataTypes = new ColumnDataType[numOfColumns];
+ for (int i = 0; i < numOfColumns; i++) {
+ ExpressionContext expressionContext = _queryContext.getSelectExpressions().get(i);
+ if (GapfillUtils.isGapfill(expressionContext)) {
+ expressionContext = expressionContext.getFunction().getArguments().get(0);
+ }
+ if (expressionContext.getType() != ExpressionContext.Type.FUNCTION) {
+ columnNames[i] = expressionContext.getIdentifier();
+ columnDataTypes[i] = ColumnDataType.STRING;
+ } else {
+ FunctionContext functionContext = expressionContext.getFunction();
+ AggregationFunction aggregationFunction =
+ AggregationFunctionFactory.getAggregationFunction(functionContext, _queryContext);
+ columnDataTypes[i] = aggregationFunction.getFinalResultColumnType();
+ columnNames[i] = functionContext.toString();
+ }
+ }
+ return new DataSchema(columnNames, columnDataTypes);
+ }
+
+ private Key constructGroupKeys(Object[] row) {
+ Object[] groupKeys = new Object[_groupByKeyIndexes.size()];
+ for (int i = 0; i < _groupByKeyIndexes.size(); i++) {
+ groupKeys[i] = row[_groupByKeyIndexes.get(i)];
+ }
+ return new Key(groupKeys);
+ }
+
+ private long truncate(long epoch) {
+ int sz = _gapfillDateTimeGranularity.getSize();
+ return epoch / sz * sz;
+ }
+
+ private List<Object[]> gapFillAndAggregate(List<Object[]>[] timeBucketedRawRows,
+ DataSchema dataSchemaForAggregatedResult, DataSchema dataSchema) {
+ List<Object[]> result = new ArrayList<>();
+
+ GapfillFilterHandler postGapfillFilterHandler = null;
+ if (_queryContext.getSubquery() != null && _queryContext.getFilter() != null) {
+ postGapfillFilterHandler = new GapfillFilterHandler(_queryContext.getFilter(), dataSchema);
+ }
+ GapfillFilterHandler postAggregateHavingFilterHandler = null;
+ if (_queryContext.getHavingFilter() != null) {
+ postAggregateHavingFilterHandler =
+ new GapfillFilterHandler(_queryContext.getHavingFilter(), dataSchemaForAggregatedResult);
+ }
+ long start = _startMs;
+ ColumnDataType[] resultColumnDataTypes = dataSchema.getColumnDataTypes();
+ List<Object[]> bucketedResult = new ArrayList<>();
+ for (long time = _startMs; time < _endMs; time += _gapfillTimeBucketSize) {
+ int index = findGapfillBucketIndex(time);
+ gapfill(time, bucketedResult, timeBucketedRawRows[index], dataSchema, postGapfillFilterHandler);
+ if (_queryContext.getAggregationFunctions() == null) {
+ for (Object[] row : bucketedResult) {
+ Object[] resultRow = new Object[_sourceColumnIndexForResultSchema.length];
+ for (int i = 0; i < _sourceColumnIndexForResultSchema.length; i++) {
+ resultRow[i] = row[_sourceColumnIndexForResultSchema[i]];
+ }
+ result.add(resultRow);
+ }
+ bucketedResult = new ArrayList<>();
+ } else if (index % _aggregationSize == _aggregationSize - 1 && bucketedResult.size() > 0) {
Review comment:
Can `bucketedResult` ever be empty? If it is empty, do we need to update the `start`?
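For readers following this thread, the bucket arithmetic in the quoted constructor and loop can be sketched in isolation. This is not Pinot code; the time range and granularities below are illustrative assumptions.

```java
// Standalone sketch (not Pinot code) of the gapfill bucket arithmetic quoted
// above; START_MS and the two bucket sizes are hypothetical values.
public class BucketMathSketch {
    static final long START_MS = 0L;                      // stands in for _startMs
    static final long GAPFILL_BUCKET_MS = 60_000L;        // 1-minute gapfill granularity
    static final long POST_GAPFILL_BUCKET_MS = 300_000L;  // 5-minute post-gapfill granularity

    // Mirrors findGapfillBucketIndex(long time) from the quoted code.
    static int findGapfillBucketIndex(long time) {
        return (int) ((time - START_MS) / GAPFILL_BUCKET_MS);
    }

    // Mirrors _aggregationSize: how many gapfill buckets feed one aggregation bucket.
    static int aggregationSize() {
        return (int) (POST_GAPFILL_BUCKET_MS / GAPFILL_BUCKET_MS);
    }

    public static void main(String[] args) {
        System.out.println(findGapfillBucketIndex(7 * 60_000L)); // minute 7 -> bucket 7
        System.out.println(aggregationSize());                   // 5 one-minute buckets per window
        // The "index % _aggregationSize == _aggregationSize - 1" check in the quoted
        // loop fires on buckets 4, 9, 14, ... i.e. at the end of each 5-minute window.
        System.out.println(9 % aggregationSize() == aggregationSize() - 1);
    }
}
```

Under these assumptions the aggregation is flushed once per `_aggregationSize` gapfill buckets, which is why the reviewer's question about an empty `bucketedResult` (and whether `start` still needs advancing) matters at those window boundaries.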
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/reduce/BrokerReduceService.java
##########
@@ -103,11 +104,23 @@ public BrokerResponseNative reduceOnDataTable(BrokerRequest brokerRequest,
return brokerResponseNative;
}
- QueryContext queryContext = BrokerRequestToQueryContextConverter.convert(brokerRequest);
- DataTableReducer dataTableReducer = ResultReducerFactory.getResultReducer(queryContext);
+ QueryContext serverQueryContext = BrokerRequestToQueryContextConverter.convert(serverBrokerRequest);
+ DataTableReducer dataTableReducer = ResultReducerFactory.getResultReducer(serverQueryContext);
dataTableReducer.reduceAndSetResults(rawTableName, cachedDataSchema, dataTableMap, brokerResponseNative,
new DataTableReducerContext(_reduceExecutorService, _maxReduceThreadsPerQuery, reduceTimeOutMs,
_groupByTrimThreshold), brokerMetrics);
+ QueryContext queryContext;
+ if (brokerRequest == serverBrokerRequest) {
+ queryContext = serverQueryContext;
+ } else {
+ queryContext = BrokerRequestToQueryContextConverter.convert(brokerRequest);
+ }
+
+ GapfillUtils.GapfillType gapfillType = GapfillUtils.getGapfillType(queryContext);
+ if (gapfillType != null) {
+ GapfillProcessor gapfillProcessor = new GapfillProcessor(queryContext, gapfillType);
+ gapfillProcessor.process(brokerResponseNative);
+ }
Review comment:
There is no need to check the gapfill type when the server request is the same as the broker request.
```suggestion
if (brokerRequest == serverBrokerRequest) {
queryContext = serverQueryContext;
} else {
queryContext = BrokerRequestToQueryContextConverter.convert(brokerRequest);
GapfillUtils.GapfillType gapfillType = GapfillUtils.getGapfillType(queryContext);
if (gapfillType != null) {
GapfillProcessor gapfillProcessor = new GapfillProcessor(queryContext, gapfillType);
gapfillProcessor.process(brokerResponseNative);
}
}
```
##########
File path: pinot-core/src/test/java/org/apache/pinot/queries/BaseQueriesTest.java
##########
@@ -197,21 +198,31 @@ protected BrokerResponseNative getBrokerResponseForSqlQuery(String sqlQuery, Pla
}
queryOptions.put(Request.QueryOptionKey.GROUP_BY_MODE, Request.SQL);
queryOptions.put(Request.QueryOptionKey.RESPONSE_FORMAT, Request.SQL);
+ BrokerRequest strippedBrokerRequest = GapfillUtils.stripGapfill(brokerRequest);
+ queryOptions = strippedBrokerRequest.getPinotQuery().getQueryOptions();
+ if (queryOptions == null) {
+ queryOptions = new HashMap<>();
+ strippedBrokerRequest.getPinotQuery().setQueryOptions(queryOptions);
+ }
+ queryOptions.put(Request.QueryOptionKey.GROUP_BY_MODE, Request.SQL);
+ queryOptions.put(Request.QueryOptionKey.RESPONSE_FORMAT, Request.SQL);
QueryContext queryContext = BrokerRequestToQueryContextConverter.convert(brokerRequest);
- return getBrokerResponse(queryContext, planMaker);
+ QueryContext strippedQueryContext = BrokerRequestToQueryContextConverter.convert(strippedBrokerRequest);
Review comment:
Let's compare the reference before converting the `strippedBrokerRequest`, and rename it to `serverQueryContext`. Same for `getBrokerResponseForOptimizedSqlQuery`
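The identity comparison asked for here relies on the strip step returning the very same instance when nothing was stripped, so `==` suffices to skip the second conversion. A generic, runnable sketch of that pattern (all names below are stand-ins, not Pinot APIs):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Generic sketch of the reference-comparison idiom the reviewer suggests:
// compare references BEFORE converting the stripped request, so the expensive
// conversion runs only when stripping actually produced a new object.
public class StripSketch {
    static final AtomicInteger CONVERSIONS = new AtomicInteger();

    // Stand-in for a stripGapfill-style helper: returns the identical instance
    // when there is nothing to strip, a new object otherwise.
    static String strip(String request) {
        return request.contains("gapfill") ? request.replace("gapfill", "") : request;
    }

    // Stand-in for an expensive request-to-context conversion.
    static String convert(String request) {
        CONVERSIONS.incrementAndGet();
        return "context:" + request;
    }

    public static void main(String[] args) {
        String request = "select ts from tbl";   // no gapfill in this query
        String stripped = strip(request);
        String queryContext = convert(request);
        // Identity check first: reuse the already-converted context when possible.
        String serverContext = (stripped == request) ? queryContext : convert(stripped);
        System.out.println(CONVERSIONS.get());   // only one conversion was needed
    }
}
```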
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/reduce/GapfillProcessor.java
##########
@@ -0,0 +1,471 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.core.query.reduce;
+
+import com.google.common.base.Preconditions;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import org.apache.pinot.common.request.context.ExpressionContext;
+import org.apache.pinot.common.request.context.FunctionContext;
+import org.apache.pinot.common.response.broker.BrokerResponseNative;
+import org.apache.pinot.common.response.broker.ResultTable;
+import org.apache.pinot.common.utils.DataSchema;
+import org.apache.pinot.common.utils.DataSchema.ColumnDataType;
+import org.apache.pinot.core.common.BlockValSet;
+import org.apache.pinot.core.data.table.Key;
+import org.apache.pinot.core.query.aggregation.function.AggregationFunction;
+import org.apache.pinot.core.query.aggregation.function.AggregationFunctionFactory;
+import org.apache.pinot.core.query.aggregation.function.CountAggregationFunction;
+import org.apache.pinot.core.query.aggregation.groupby.GroupByResultHolder;
+import org.apache.pinot.core.query.request.context.QueryContext;
+import org.apache.pinot.core.util.GapfillUtils;
+import org.apache.pinot.spi.data.DateTimeFormatSpec;
+import org.apache.pinot.spi.data.DateTimeGranularitySpec;
+
+
+/**
+ * Helper class to reduce and set gap fill results into the BrokerResponseNative
+ */
+@SuppressWarnings({"rawtypes", "unchecked"})
+public class GapfillProcessor {
+ private final QueryContext _queryContext;
+
+ private final int _limitForAggregatedResult;
+ private final DateTimeGranularitySpec _gapfillDateTimeGranularity;
+ private final DateTimeGranularitySpec _postGapfillDateTimeGranularity;
+ private final DateTimeFormatSpec _dateTimeFormatter;
+ private final long _startMs;
+ private final long _endMs;
+ private final long _gapfillTimeBucketSize;
+ private final long _postGapfillTimeBucketSize;
+ private final int _numOfTimeBuckets;
+ private final List<Integer> _groupByKeyIndexes;
+ private final Set<Key> _groupByKeys;
+ private final Map<Key, Object[]> _previousByGroupKey;
+ private final Map<String, ExpressionContext> _fillExpressions;
+ private final List<ExpressionContext> _timeSeries;
+ private final GapfillUtils.GapfillType _gapfillType;
+ private int _limitForGapfilledResult;
+ private boolean[] _isGroupBySelections;
+ private final int _timeBucketColumnIndex;
+ private int[] _sourceColumnIndexForResultSchema = null;
+ private final int _aggregationSize;
+
+ GapfillProcessor(QueryContext queryContext, GapfillUtils.GapfillType gapfillType) {
+ _queryContext = queryContext;
+ _gapfillType = gapfillType;
+ _limitForAggregatedResult = queryContext.getLimit();
+ if (_gapfillType == GapfillUtils.GapfillType.AGGREGATE_GAP_FILL
+ || _gapfillType == GapfillUtils.GapfillType.GAP_FILL) {
+ _limitForGapfilledResult = queryContext.getLimit();
+ } else {
+ _limitForGapfilledResult = queryContext.getSubquery().getLimit();
+ }
+
+ ExpressionContext gapFillSelection = GapfillUtils.getGapfillExpressionContext(queryContext, _gapfillType);
+ _timeBucketColumnIndex = GapfillUtils.findTimeBucketColumnIndex(queryContext, _gapfillType);
+
+ List<ExpressionContext> args = gapFillSelection.getFunction().getArguments();
+
+ _dateTimeFormatter = new DateTimeFormatSpec(args.get(1).getLiteral());
+ _gapfillDateTimeGranularity = new DateTimeGranularitySpec(args.get(4).getLiteral());
+ _postGapfillDateTimeGranularity = new DateTimeGranularitySpec(args.get(5).getLiteral());
+ String start = args.get(2).getLiteral();
+ _startMs = truncate(_dateTimeFormatter.fromFormatToMillis(start));
+ String end = args.get(3).getLiteral();
+ _endMs = truncate(_dateTimeFormatter.fromFormatToMillis(end));
+ _gapfillTimeBucketSize = _gapfillDateTimeGranularity.granularityToMillis();
+ _postGapfillTimeBucketSize = _postGapfillDateTimeGranularity.granularityToMillis();
+ _numOfTimeBuckets = (int) ((_endMs - _startMs) / _gapfillTimeBucketSize);
+
+ _fillExpressions = GapfillUtils.getFillExpressions(gapFillSelection);
+
+ _aggregationSize = (int) (_postGapfillTimeBucketSize / _gapfillTimeBucketSize);
+
+ _previousByGroupKey = new HashMap<>();
+ _groupByKeyIndexes = new ArrayList<>();
+ _groupByKeys = new HashSet<>();
+
+ ExpressionContext timeseriesOn = GapfillUtils.getTimeSeriesOnExpressionContext(gapFillSelection);
+ _timeSeries = timeseriesOn.getFunction().getArguments();
+ }
+
+ private int findGapfillBucketIndex(long time) {
+ return (int) ((time - _startMs) / _gapfillTimeBucketSize);
+ }
+
+ private void replaceColumnNameWithAlias(DataSchema dataSchema) {
+ QueryContext queryContext;
+ if (_gapfillType == GapfillUtils.GapfillType.AGGREGATE_GAP_FILL_AGGREGATE) {
+ queryContext = _queryContext.getSubquery().getSubquery();
+ } else if (_gapfillType == GapfillUtils.GapfillType.GAP_FILL) {
+ queryContext = _queryContext;
+ } else {
+ queryContext = _queryContext.getSubquery();
+ }
+ List<String> aliasList = queryContext.getAliasList();
+ Map<String, String> columnNameToAliasMap = new HashMap<>();
+ for (int i = 0; i < aliasList.size(); i++) {
+ if (aliasList.get(i) != null) {
+ ExpressionContext selection = queryContext.getSelectExpressions().get(i);
+ if (GapfillUtils.isGapfill(selection)) {
+ selection = selection.getFunction().getArguments().get(0);
+ }
+ columnNameToAliasMap.put(selection.toString(), aliasList.get(i));
+ }
+ }
+ for (int i = 0; i < dataSchema.getColumnNames().length; i++) {
+ if (columnNameToAliasMap.containsKey(dataSchema.getColumnNames()[i])) {
+ dataSchema.getColumnNames()[i] = columnNameToAliasMap.get(dataSchema.getColumnNames()[i]);
+ }
+ }
+ }
+
+ /**
+ * Three things happen here:
+ * 1. Sort the result sets from all Pinot servers based on timestamp.
+ * 2. Gap-fill the data for missing entities per time bucket.
+ * 3. Aggregate the dataset per time bucket.
+ */
+ public void process(BrokerResponseNative brokerResponseNative) {
+ DataSchema dataSchema = brokerResponseNative.getResultTable().getDataSchema();
+ DataSchema resultTableSchema = getResultTableDataSchema(dataSchema);
+ if (brokerResponseNative.getResultTable().getRows().isEmpty()) {
+ brokerResponseNative.setResultTable(new ResultTable(resultTableSchema, Collections.emptyList()));
+ return;
+ }
+
+ String[] columns = dataSchema.getColumnNames();
+
+ Map<String, Integer> indexes = new HashMap<>();
+ for (int i = 0; i < columns.length; i++) {
+ indexes.put(columns[i], i);
+ }
+
+ _isGroupBySelections = new boolean[dataSchema.getColumnDataTypes().length];
+
+ // The first argument of timeSeries is the time column. The remaining ones define the entity.
+ for (ExpressionContext entityColumn : _timeSeries) {
+ int index = indexes.get(entityColumn.getIdentifier());
+ _isGroupBySelections[index] = true;
+ }
+
+ for (int i = 0; i < _isGroupBySelections.length; i++) {
+ if (_isGroupBySelections[i]) {
+ _groupByKeyIndexes.add(i);
+ }
+ }
+
+ List<Object[]>[] timeBucketedRawRows = putRawRowsIntoTimeBucket(brokerResponseNative.getResultTable().getRows());
+
+ replaceColumnNameWithAlias(dataSchema);
+
+ if (_queryContext.getAggregationFunctions() == null) {
+
+ Map<String, Integer> sourceColumnsIndexes = new HashMap<>();
+ for (int i = 0; i < dataSchema.getColumnNames().length; i++) {
+ sourceColumnsIndexes.put(dataSchema.getColumnName(i), i);
+ }
+ _sourceColumnIndexForResultSchema = new int[resultTableSchema.getColumnNames().length];
+ for (int i = 0; i < _sourceColumnIndexForResultSchema.length; i++) {
+ _sourceColumnIndexForResultSchema[i] = sourceColumnsIndexes.get(resultTableSchema.getColumnName(i));
+ }
+ }
+
+ List<Object[]> resultRows = gapFillAndAggregate(timeBucketedRawRows, resultTableSchema, dataSchema);
+ brokerResponseNative.setResultTable(new ResultTable(resultTableSchema, resultRows));
+ }
+
+ /**
+ * Constructs the DataSchema for the ResultTable.
+ */
+ private DataSchema getResultTableDataSchema(DataSchema dataSchema) {
+ if (_gapfillType == GapfillUtils.GapfillType.GAP_FILL) {
+ return dataSchema;
+ }
+
+ int numOfColumns = _queryContext.getSelectExpressions().size();
+ String[] columnNames = new String[numOfColumns];
+ ColumnDataType[] columnDataTypes = new ColumnDataType[numOfColumns];
+ for (int i = 0; i < numOfColumns; i++) {
+ ExpressionContext expressionContext = _queryContext.getSelectExpressions().get(i);
+ if (GapfillUtils.isGapfill(expressionContext)) {
+ expressionContext = expressionContext.getFunction().getArguments().get(0);
+ }
+ if (expressionContext.getType() != ExpressionContext.Type.FUNCTION) {
+ columnNames[i] = expressionContext.getIdentifier();
+ columnDataTypes[i] = ColumnDataType.STRING;
+ } else {
+ FunctionContext functionContext = expressionContext.getFunction();
+ AggregationFunction aggregationFunction =
+ AggregationFunctionFactory.getAggregationFunction(functionContext, _queryContext);
+ columnDataTypes[i] = aggregationFunction.getFinalResultColumnType();
+ columnNames[i] = functionContext.toString();
+ }
+ }
+ return new DataSchema(columnNames, columnDataTypes);
+ }
+
+ private Key constructGroupKeys(Object[] row) {
+ Object[] groupKeys = new Object[_groupByKeyIndexes.size()];
+ for (int i = 0; i < _groupByKeyIndexes.size(); i++) {
+ groupKeys[i] = row[_groupByKeyIndexes.get(i)];
+ }
+ return new Key(groupKeys);
+ }
+
+ private long truncate(long epoch) {
+ int sz = _gapfillDateTimeGranularity.getSize();
+ return epoch / sz * sz;
+ }
+
+ private List<Object[]> gapFillAndAggregate(List<Object[]>[] timeBucketedRawRows,
+ DataSchema dataSchemaForAggregatedResult, DataSchema dataSchema) {
+ List<Object[]> result = new ArrayList<>();
+
+ GapfillFilterHandler postGapfillFilterHandler = null;
+ if (_queryContext.getSubquery() != null && _queryContext.getFilter() != null) {
+ postGapfillFilterHandler = new GapfillFilterHandler(_queryContext.getFilter(), dataSchema);
+ }
+ GapfillFilterHandler postAggregateHavingFilterHandler = null;
+ if (_queryContext.getHavingFilter() != null) {
+ postAggregateHavingFilterHandler =
+ new GapfillFilterHandler(_queryContext.getHavingFilter(), dataSchemaForAggregatedResult);
+ }
+ long start = _startMs;
+ ColumnDataType[] resultColumnDataTypes = dataSchema.getColumnDataTypes();
+ List<Object[]> bucketedResult = new ArrayList<>();
+ for (long time = _startMs; time < _endMs; time += _gapfillTimeBucketSize) {
+ int index = findGapfillBucketIndex(time);
+ gapfill(time, bucketedResult, timeBucketedRawRows[index], dataSchema, postGapfillFilterHandler);
+ if (_queryContext.getAggregationFunctions() == null) {
+ for (Object[] row : bucketedResult) {
+ Object[] resultRow = new Object[_sourceColumnIndexForResultSchema.length];
+ for (int i = 0; i < _sourceColumnIndexForResultSchema.length; i++) {
+ resultRow[i] = row[_sourceColumnIndexForResultSchema[i]];
+ }
+ result.add(resultRow);
+ }
+ bucketedResult = new ArrayList<>();
+ } else if (index % _aggregationSize == _aggregationSize - 1 && bucketedResult.size() > 0) {
+ Object timeCol;
+ if (resultColumnDataTypes[_timeBucketColumnIndex] == ColumnDataType.LONG) {
+ timeCol = Long.valueOf(_dateTimeFormatter.fromMillisToFormat(start));
+ } else {
+ timeCol = _dateTimeFormatter.fromMillisToFormat(start);
+ }
+ List<Object[]> aggregatedRows = aggregateGapfilledData(timeCol, bucketedResult, dataSchema);
+ for (Object[] aggregatedRow : aggregatedRows) {
+ if (postAggregateHavingFilterHandler == null || postAggregateHavingFilterHandler.isMatch(aggregatedRow)) {
+ result.add(aggregatedRow);
+ }
+ if (result.size() >= _limitForAggregatedResult) {
+ return result;
+ }
+ }
+ bucketedResult = new ArrayList<>();
Review comment:
```suggestion
bucketedResult.clear();
```
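For what it's worth, `clear()` and allocating a fresh `ArrayList` are interchangeable here, assuming the downstream consumer does not retain a reference to the list (which the suggestion implies). A small runnable illustration:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrates the suggested change: reusing the accumulation buffer via
// clear() instead of allocating a new ArrayList per aggregation window.
// clear() empties the list but keeps the same instance, avoiding one
// allocation per window; this is only safe if the consumer copies what it
// needs rather than holding on to the list itself.
public class BufferReuseSketch {
    public static void main(String[] args) {
        List<Object[]> bucketedResult = new ArrayList<>();
        bucketedResult.add(new Object[]{"window-1 row"});
        List<Object[]> sameInstance = bucketedResult;

        bucketedResult.clear();                               // empty, same instance
        System.out.println(bucketedResult.isEmpty());         // true
        System.out.println(sameInstance == bucketedResult);   // true: no reallocation

        bucketedResult.add(new Object[]{"window-2 row"});     // reused for the next window
        System.out.println(bucketedResult.size());            // 1
    }
}
```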
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: commits-unsubscribe@pinot.apache.org
For queries about this service, please contact Infrastructure at:
users@infra.apache.org
---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@pinot.apache.org
For additional commands, e-mail: commits-help@pinot.apache.org
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r814138628
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/request/context/QueryContext.java
##########
@@ -85,6 +86,7 @@
// Keep the BrokerRequest to make incremental changes
// TODO: Remove it once the whole query engine is using the QueryContext
private final BrokerRequest _brokerRequest;
+ private QueryContext _subqueryContext;
Review comment:
_gapfillType is calculated when the whole queryContext is constructed. _subqueryContext is final now
[GitHub] [pinot] codecov-commenter edited a comment on pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#issuecomment-1013577523
# [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) Report
> Merging [#8029](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (37d1bbe) into [master](https://codecov.io/gh/apache/pinot/commit/1d1a7d34709b6a89985a610f46dd1c97d6c9271a?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (1d1a7d3) will **decrease** coverage by `0.07%`.
> The diff coverage is `73.80%`.
[![Impacted file tree graph](https://codecov.io/gh/apache/pinot/pull/8029/graphs/tree.svg?width=650&height=150&src=pr&token=4ibza2ugkz&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
```diff
@@ Coverage Diff @@
## master #8029 +/- ##
============================================
- Coverage 71.40% 71.33% -0.08%
- Complexity 4223 4224 +1
============================================
Files 1597 1610 +13
Lines 82903 83320 +417
Branches 12369 12453 +84
============================================
+ Hits 59201 59438 +237
- Misses 19689 19834 +145
- Partials 4013 4048 +35
```
| Flag | Coverage Δ | |
|---|---|---|
| integration1 | `28.87% <15.94%> (-0.12%)` | :arrow_down: |
| integration2 | `27.59% <16.40%> (-0.11%)` | :arrow_down: |
| unittests1 | `68.14% <73.34%> (+<0.01%)` | :arrow_up: |
| unittests2 | `14.29% <0.00%> (-0.07%)` | :arrow_down: |
Flags with carried forward coverage won't be shown. [Click here](https://docs.codecov.io/docs/carryforward-flags?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#carryforward-flags-in-the-pull-request-comment) to find out more.
| [Impacted Files](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | Coverage Δ | |
|---|---|---|
| [...pache/pinot/core/query/reduce/BlockValSetImpl.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvQmxvY2tWYWxTZXRJbXBsLmphdmE=) | `17.30% <17.30%> (ø)` | |
| [.../java/org/apache/pinot/core/util/GapfillUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS91dGlsL0dhcGZpbGxVdGlscy5qYXZh) | `56.52% <42.85%> (-7.12%)` | :arrow_down: |
| [...query/request/context/utils/QueryContextUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZXF1ZXN0L2NvbnRleHQvdXRpbHMvUXVlcnlDb250ZXh0VXRpbHMuamF2YQ==) | `73.91% <60.00%> (-3.36%)` | :arrow_down: |
| [.../reduce/PreAggregationGapFillDataTableReducer.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUHJlQWdncmVnYXRpb25HYXBGaWxsRGF0YVRhYmxlUmVkdWNlci5qYXZh) | `63.88% <63.88%> (ø)` | |
| [...inot/core/query/reduce/PostAggregationHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUG9zdEFnZ3JlZ2F0aW9uSGFuZGxlci5qYXZh) | `92.42% <66.66%> (+0.11%)` | :arrow_up: |
| [...che/pinot/core/query/reduce/filter/RowMatcher.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvZmlsdGVyL1Jvd01hdGNoZXIuamF2YQ==) | `66.66% <66.66%> (ø)` | |
| [...PreAggregationGapFillSelectionOperatorService.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUHJlQWdncmVnYXRpb25HYXBGaWxsU2VsZWN0aW9uT3BlcmF0b3JTZXJ2aWNlLmphdmE=) | `81.86% <81.86%> (ø)` | |
| [...inot/core/plan/PreAggGapFillSelectionPlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL1ByZUFnZ0dhcEZpbGxTZWxlY3Rpb25QbGFuTm9kZS5qYXZh) | `82.75% <82.75%> (ø)` | |
| [...ore/query/reduce/filter/LiteralValueExtractor.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvZmlsdGVyL0xpdGVyYWxWYWx1ZUV4dHJhY3Rvci5qYXZh) | `83.33% <83.33%> (ø)` | |
| [...query/reduce/PreAggregateGapfillFilterHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUHJlQWdncmVnYXRlR2FwZmlsbEZpbHRlckhhbmRsZXIuamF2YQ==) | `85.71% <85.71%> (ø)` | |
| ... and [48 more](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Last update [1d1a7d3...37d1bbe](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: commits-unsubscribe@pinot.apache.org
For queries about this service, please contact Infrastructure at:
users@infra.apache.org
---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@pinot.apache.org
For additional commands, e-mail: commits-help@pinot.apache.org
[GitHub] [pinot] codecov-commenter edited a comment on pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#issuecomment-1013577523
# [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) Report
> Merging [#8029](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (37d1bbe) into [master](https://codecov.io/gh/apache/pinot/commit/1d1a7d34709b6a89985a610f46dd1c97d6c9271a?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (1d1a7d3) will **decrease** coverage by `1.04%`.
> The diff coverage is `73.34%`.
[![Impacted file tree graph](https://codecov.io/gh/apache/pinot/pull/8029/graphs/tree.svg?width=650&height=150&src=pr&token=4ibza2ugkz&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
```diff
@@             Coverage Diff              @@
##             master    #8029      +/-   ##
============================================
- Coverage     71.40%   70.36%    -1.05%
- Complexity     4223     4224       +1
============================================
  Files          1597     1610      +13
  Lines         82903    83320     +417
  Branches      12369    12453      +84
============================================
- Hits          59201    58628     -573
- Misses        19689    20654     +965
- Partials       4013     4038      +25
```
| Flag | Coverage Δ | |
|---|---|---|
| integration1 | `28.87% <15.94%> (-0.12%)` | :arrow_down: |
| integration2 | `?` | |
| unittests1 | `68.14% <73.34%> (+<0.01%)` | :arrow_up: |
| unittests2 | `14.29% <0.00%> (-0.07%)` | :arrow_down: |
Flags with carried forward coverage won't be shown. [Click here](https://docs.codecov.io/docs/carryforward-flags?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#carryforward-flags-in-the-pull-request-comment) to find out more.
| [Impacted Files](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | Coverage Δ | |
|---|---|---|
| [...pache/pinot/core/query/reduce/BlockValSetImpl.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvQmxvY2tWYWxTZXRJbXBsLmphdmE=) | `17.30% <17.30%> (ø)` | |
| [...query/request/context/utils/QueryContextUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZXF1ZXN0L2NvbnRleHQvdXRpbHMvUXVlcnlDb250ZXh0VXRpbHMuamF2YQ==) | `69.56% <20.00%> (-7.71%)` | :arrow_down: |
| [.../java/org/apache/pinot/core/util/GapfillUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS91dGlsL0dhcGZpbGxVdGlscy5qYXZh) | `56.52% <42.85%> (-7.12%)` | :arrow_down: |
| [.../reduce/PreAggregationGapFillDataTableReducer.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUHJlQWdncmVnYXRpb25HYXBGaWxsRGF0YVRhYmxlUmVkdWNlci5qYXZh) | `63.88% <63.88%> (ø)` | |
| [...inot/core/query/reduce/PostAggregationHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUG9zdEFnZ3JlZ2F0aW9uSGFuZGxlci5qYXZh) | `92.42% <66.66%> (+0.11%)` | :arrow_up: |
| [...che/pinot/core/query/reduce/filter/RowMatcher.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvZmlsdGVyL1Jvd01hdGNoZXIuamF2YQ==) | `66.66% <66.66%> (ø)` | |
| [...PreAggregationGapFillSelectionOperatorService.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUHJlQWdncmVnYXRpb25HYXBGaWxsU2VsZWN0aW9uT3BlcmF0b3JTZXJ2aWNlLmphdmE=) | `81.86% <81.86%> (ø)` | |
| [...inot/core/plan/PreAggGapFillSelectionPlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL1ByZUFnZ0dhcEZpbGxTZWxlY3Rpb25QbGFuTm9kZS5qYXZh) | `82.75% <82.75%> (ø)` | |
| [...ore/query/reduce/filter/LiteralValueExtractor.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvZmlsdGVyL0xpdGVyYWxWYWx1ZUV4dHJhY3Rvci5qYXZh) | `83.33% <83.33%> (ø)` | |
| [...query/reduce/PreAggregateGapfillFilterHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUHJlQWdncmVnYXRlR2FwZmlsbEZpbHRlckhhbmRsZXIuamF2YQ==) | `85.71% <85.71%> (ø)` | |
| ... and [121 more](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Last update [1d1a7d3...37d1bbe](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
[GitHub] [pinot] codecov-commenter edited a comment on pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#issuecomment-1013577523
# [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) Report
> Merging [#8029](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (149817e) into [master](https://codecov.io/gh/apache/pinot/commit/cc2f3fe196d29a0d716bfee07add9b761e8fa98e?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (cc2f3fe) will **increase** coverage by `6.60%`.
> The diff coverage is `75.06%`.
[![Impacted file tree graph](https://codecov.io/gh/apache/pinot/pull/8029/graphs/tree.svg?width=650&height=150&src=pr&token=4ibza2ugkz&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
```diff
@@             Coverage Diff              @@
##             master    #8029      +/-   ##
============================================
+ Coverage     64.63%   71.24%    +6.60%
- Complexity     4260     4261       +1
============================================
  Files          1562     1617      +55
  Lines         81525    83736    +2211
  Branches      12252    12529     +277
============================================
+ Hits          52695    59659    +6964
+ Misses        25072    19993    -5079
- Partials       3758     4084     +326
```
| Flag | Coverage Δ | |
|---|---|---|
| integration1 | `28.95% <16.29%> (?)` | |
| integration2 | `27.52% <16.79%> (?)` | |
| unittests1 | `67.88% <74.56%> (+0.01%)` | :arrow_up: |
| unittests2 | `14.17% <0.00%> (-0.05%)` | :arrow_down: |
Flags with carried forward coverage won't be shown. [Click here](https://docs.codecov.io/docs/carryforward-flags?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#carryforward-flags-in-the-pull-request-comment) to find out more.
| [Impacted Files](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | Coverage Δ | |
|---|---|---|
| [...query/reduce/ColumnDataToBlockValSetConverter.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvQ29sdW1uRGF0YVRvQmxvY2tWYWxTZXRDb252ZXJ0ZXIuamF2YQ==) | `17.30% <17.30%> (ø)` | |
| [.../java/org/apache/pinot/core/util/GapfillUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS91dGlsL0dhcGZpbGxVdGlscy5qYXZh) | `58.73% <54.83%> (-4.91%)` | :arrow_down: |
| [...query/request/context/utils/QueryContextUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZXF1ZXN0L2NvbnRleHQvdXRpbHMvUXVlcnlDb250ZXh0VXRpbHMuamF2YQ==) | `73.91% <60.00%> (-1.09%)` | :arrow_down: |
| [...inot/core/query/reduce/PostAggregationHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUG9zdEFnZ3JlZ2F0aW9uSGFuZGxlci5qYXZh) | `92.42% <66.66%> (+0.11%)` | :arrow_up: |
| [...che/pinot/core/query/reduce/filter/RowMatcher.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvZmlsdGVyL1Jvd01hdGNoZXIuamF2YQ==) | `66.66% <66.66%> (ø)` | |
| [...ore/query/reduce/filter/LiteralValueExtractor.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvZmlsdGVyL0xpdGVyYWxWYWx1ZUV4dHJhY3Rvci5qYXZh) | `83.33% <83.33%> (ø)` | |
| [.../reduce/PreAggregationGapFillDataTableReducer.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUHJlQWdncmVnYXRpb25HYXBGaWxsRGF0YVRhYmxlUmVkdWNlci5qYXZh) | `83.76% <83.76%> (ø)` | |
| [...query/reduce/PreAggregateGapfillFilterHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUHJlQWdncmVnYXRlR2FwZmlsbEZpbHRlckhhbmRsZXIuamF2YQ==) | `85.71% <85.71%> (ø)` | |
| [.../core/query/reduce/filter/PredicateRowMatcher.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvZmlsdGVyL1ByZWRpY2F0ZVJvd01hdGNoZXIuamF2YQ==) | `87.50% <87.50%> (ø)` | |
| [...inot/core/plan/PreAggGapFillSelectionPlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL1ByZUFnZ0dhcEZpbGxTZWxlY3Rpb25QbGFuTm9kZS5qYXZh) | `90.00% <90.00%> (ø)` | |
| ... and [384 more](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Last update [cc2f3fe...149817e](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
[GitHub] [pinot] Jackie-Jiang commented on a change in pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
Jackie-Jiang commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r787204527
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/request/context/QueryContext.java
##########
@@ -85,6 +85,7 @@
// Keep the BrokerRequest to make incremental changes
// TODO: Remove it once the whole query engine is using the QueryContext
private final BrokerRequest _brokerRequest;
+ private QueryContext _preAggregateGapFillQueryContext;
Review comment:
@siddharthteotia From a query syntax perspective, I don't see a difference between pre-aggregate gap-fill and a general subquery. Essentially, the `FROM` clause contains a query instead of a table name. I'd suggest modeling it as a general subquery, which we can support in the future.
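To make the "query in the `FROM` clause" shape concrete, the following is a hypothetical sketch of what a gap-fill-as-subquery query might look like. The table name, column names, time-range literals, and the `GAPFILL`/`FILL` function signatures here are purely illustrative and are not taken from this PR or from Pinot's documented syntax.

```sql
-- Hypothetical shape only. The inner query buckets raw time-series rows and
-- gap-fills missing buckets per entity; the outer query then aggregates the
-- now gap-free buckets. Function names and arguments are illustrative.
SELECT time_bucket, SUM(occupied) AS total_occupied
FROM (
    SELECT GAPFILL(event_time /* , bucket/range arguments elided */) AS time_bucket,
           entity_id,
           FILL(occupied, 'FILL_PREVIOUS_VALUE') AS occupied  -- fill strategy per entity
    FROM parking_data
    WHERE event_time >= 1636257600000 AND event_time < 1636286400000
    GROUP BY time_bucket, entity_id
)
GROUP BY time_bucket
```

Viewed this way, the gap-fill step is just a subquery that produces a derived table, which is why modeling it as a general subquery rather than a special-cased `_preAggregateGapFillQueryContext` is attractive.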
[GitHub] [pinot] codecov-commenter edited a comment on pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#issuecomment-1013577523
# [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) Report
> Merging [#8029](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (149817e) into [master](https://codecov.io/gh/apache/pinot/commit/cc2f3fe196d29a0d716bfee07add9b761e8fa98e?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (cc2f3fe) will **increase** coverage by `0.03%`.
> The diff coverage is `74.56%`.
[![Impacted file tree graph](https://codecov.io/gh/apache/pinot/pull/8029/graphs/tree.svg?width=650&height=150&src=pr&token=4ibza2ugkz&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
```diff
@@             Coverage Diff              @@
##             master    #8029      +/-   ##
============================================
+ Coverage     64.63%   64.67%    +0.03%
- Complexity     4260     4261       +1
============================================
  Files          1562     1572      +10
  Lines         81525    81856     +331
  Branches      12252    12325      +73
============================================
+ Hits          52695    52939     +244
- Misses        25072    25126      +54
- Partials       3758     3791      +33
```
| Flag | Coverage Δ | |
|---|---|---|
| unittests1 | `67.88% <74.56%> (+0.01%)` | :arrow_up: |
| unittests2 | `14.17% <0.00%> (-0.05%)` | :arrow_down: |
Flags with carried forward coverage won't be shown. [Click here](https://docs.codecov.io/docs/carryforward-flags?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#carryforward-flags-in-the-pull-request-comment) to find out more.
| [Impacted Files](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | Coverage Δ | |
|---|---|---|
| [...query/reduce/ColumnDataToBlockValSetConverter.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvQ29sdW1uRGF0YVRvQmxvY2tWYWxTZXRDb252ZXJ0ZXIuamF2YQ==) | `17.30% <17.30%> (ø)` | |
| [...query/request/context/utils/QueryContextUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZXF1ZXN0L2NvbnRleHQvdXRpbHMvUXVlcnlDb250ZXh0VXRpbHMuamF2YQ==) | `69.56% <20.00%> (-5.44%)` | :arrow_down: |
| [.../java/org/apache/pinot/core/util/GapfillUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS91dGlsL0dhcGZpbGxVdGlscy5qYXZh) | `58.73% <54.83%> (-4.91%)` | :arrow_down: |
| [...inot/core/query/reduce/PostAggregationHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUG9zdEFnZ3JlZ2F0aW9uSGFuZGxlci5qYXZh) | `92.42% <66.66%> (+0.11%)` | :arrow_up: |
| [...che/pinot/core/query/reduce/filter/RowMatcher.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvZmlsdGVyL1Jvd01hdGNoZXIuamF2YQ==) | `66.66% <66.66%> (ø)` | |
| [...ore/query/reduce/filter/LiteralValueExtractor.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvZmlsdGVyL0xpdGVyYWxWYWx1ZUV4dHJhY3Rvci5qYXZh) | `83.33% <83.33%> (ø)` | |
| [.../reduce/PreAggregationGapFillDataTableReducer.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUHJlQWdncmVnYXRpb25HYXBGaWxsRGF0YVRhYmxlUmVkdWNlci5qYXZh) | `83.76% <83.76%> (ø)` | |
| [...query/reduce/PreAggregateGapfillFilterHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUHJlQWdncmVnYXRlR2FwZmlsbEZpbHRlckhhbmRsZXIuamF2YQ==) | `85.71% <85.71%> (ø)` | |
| [.../core/query/reduce/filter/PredicateRowMatcher.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvZmlsdGVyL1ByZWRpY2F0ZVJvd01hdGNoZXIuamF2YQ==) | `87.50% <87.50%> (ø)` | |
| [...inot/core/plan/PreAggGapFillSelectionPlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL1ByZUFnZ0dhcEZpbGxTZWxlY3Rpb25QbGFuTm9kZS5qYXZh) | `90.00% <90.00%> (ø)` | |
| ... and [18 more](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Last update [cc2f3fe...149817e](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
[GitHub] [pinot] codecov-commenter edited a comment on pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#issuecomment-1013577523
# [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) Report
> Merging [#8029](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (149817e) into [master](https://codecov.io/gh/apache/pinot/commit/cc2f3fe196d29a0d716bfee07add9b761e8fa98e?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (cc2f3fe) will **decrease** coverage by `50.46%`.
> The diff coverage is `0.00%`.
[![Impacted file tree graph](https://codecov.io/gh/apache/pinot/pull/8029/graphs/tree.svg?width=650&height=150&src=pr&token=4ibza2ugkz&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
```diff
@@              Coverage Diff              @@
##             master    #8029       +/-   ##
=============================================
- Coverage     64.63%   14.17%   -50.47%
+ Complexity     4260       81     -4179
=============================================
  Files          1562     1572       +10
  Lines         81525    81856      +331
  Branches      12252    12325       +73
=============================================
- Hits          52695    11601    -41094
- Misses        25072    69400    +44328
+ Partials       3758      855     -2903
```
| Flag | Coverage Δ | |
|---|---|---|
| unittests1 | `?` | |
| unittests2 | `14.17% <0.00%> (-0.05%)` | :arrow_down: |
Flags with carried forward coverage won't be shown. [Click here](https://docs.codecov.io/docs/carryforward-flags?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#carryforward-flags-in-the-pull-request-comment) to find out more.
| [Impacted Files](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | Coverage Δ | |
|---|---|---|
| [...org/apache/pinot/sql/parsers/CalciteSqlParser.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9zcWwvcGFyc2Vycy9DYWxjaXRlU3FsUGFyc2VyLmphdmE=) | `0.00% <0.00%> (-86.80%)` | :arrow_down: |
| [...va/org/apache/pinot/core/plan/CombinePlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL0NvbWJpbmVQbGFuTm9kZS5qYXZh) | `0.00% <0.00%> (-83.93%)` | :arrow_down: |
| [...inot/core/plan/PreAggGapFillSelectionPlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL1ByZUFnZ0dhcEZpbGxTZWxlY3Rpb25QbGFuTm9kZS5qYXZh) | `0.00% <0.00%> (ø)` | |
| [...pinot/core/plan/maker/InstancePlanMakerImplV2.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL21ha2VyL0luc3RhbmNlUGxhbk1ha2VySW1wbFYyLmphdmE=) | `0.00% <0.00%> (-62.63%)` | :arrow_down: |
| [...query/reduce/ColumnDataToBlockValSetConverter.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvQ29sdW1uRGF0YVRvQmxvY2tWYWxTZXRDb252ZXJ0ZXIuamF2YQ==) | `0.00% <0.00%> (ø)` | |
| [...e/pinot/core/query/reduce/HavingFilterHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvSGF2aW5nRmlsdGVySGFuZGxlci5qYXZh) | `0.00% <0.00%> (-91.31%)` | :arrow_down: |
| [...inot/core/query/reduce/PostAggregationHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUG9zdEFnZ3JlZ2F0aW9uSGFuZGxlci5qYXZh) | `0.00% <0.00%> (-92.31%)` | :arrow_down: |
| [...query/reduce/PreAggregateGapfillFilterHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUHJlQWdncmVnYXRlR2FwZmlsbEZpbHRlckhhbmRsZXIuamF2YQ==) | `0.00% <0.00%> (ø)` | |
| [.../reduce/PreAggregationGapFillDataTableReducer.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUHJlQWdncmVnYXRpb25HYXBGaWxsRGF0YVRhYmxlUmVkdWNlci5qYXZh) | `0.00% <0.00%> (ø)` | |
| [.../pinot/core/query/reduce/ResultReducerFactory.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUmVzdWx0UmVkdWNlckZhY3RvcnkuamF2YQ==) | `0.00% <0.00%> (-75.00%)` | :arrow_down: |
| ... and [1089 more](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Last update [cc2f3fe...149817e](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: commits-unsubscribe@pinot.apache.org
For queries about this service, please contact Infrastructure at:
users@infra.apache.org
---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@pinot.apache.org
For additional commands, e-mail: commits-help@pinot.apache.org
[GitHub] [pinot] siddharthteotia commented on a change in pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
siddharthteotia commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r787173187
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/request/context/QueryContext.java
##########
@@ -85,6 +85,7 @@
// Keep the BrokerRequest to make incremental changes
// TODO: Remove it once the whole query engine is using the QueryContext
private final BrokerRequest _brokerRequest;
+ private QueryContext _preAggregateGapFillQueryContext;
Review comment:
@Jackie-Jiang
So this was one of the things that was discussed a lot. My concern was that changing PinotQuery now to accommodate a "**generic**" subquery has to be done carefully, accounting for standard SQL subquery syntax and semantics, and we should be confident that it will hold up in the future.
If we are really touching the FROM clause, my suggestion would be to make sure we understand Calcite's treatment of a simple FROM clause (a table name, as today) versus a complex FROM clause (sub-queries). Whatever we do today in the FROM clause to make this particular gapfill sub-query work should not interfere later when we extend Calcite's generic subquery planner to support all kinds of subqueries.
This is why the path of least resistance could be to not touch PinotQuery for generic subquery support, since that may require a lot of design thinking up front before agreeing on how to model any subquery in Pinot, and would block the current feature. So maybe making it gapfill-specific, as in the code above, is the way to go?
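To make the trade-off concrete, here is a minimal, hypothetical sketch (stand-in classes, not the actual Pinot API) of the gapfill-specific approach: rather than modeling an arbitrary subquery tree, the outer query context carries one optional, purpose-specific inner context, mirroring the `_preAggregateGapFillQueryContext` field in the diff above.

```java
// Hypothetical, simplified stand-ins -- not the real Pinot classes.
// The outer context holds at most one gapfill inner context instead of
// supporting generic nested subqueries.
class GapfillContextSketch {

    /** Minimal stand-in for Pinot's QueryContext. */
    static final class QueryContext {
        final String tableName;
        // Gapfill-specific: the pre-aggregation inner query, or null.
        QueryContext preAggregateGapFillQueryContext;

        QueryContext(String tableName) {
            this.tableName = tableName;
        }
    }

    /** True if the outer query wraps a gapfill subquery. */
    static boolean isGapfillQuery(QueryContext outer) {
        return outer.preAggregateGapFillQueryContext != null;
    }

    public static void main(String[] args) {
        QueryContext inner = new QueryContext("myTable");
        QueryContext outer = new QueryContext("myTable");
        outer.preAggregateGapFillQueryContext = inner;
        System.out.println(isGapfillQuery(outer)); // prints "true"
    }
}
```

The cost of this shape is that any future subquery kind needs its own field or a redesign; the benefit is that no generic subquery semantics have to be pinned down today.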
[GitHub] [pinot] codecov-commenter edited a comment on pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#issuecomment-1013577523
# [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) Report
> Merging [#8029](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (4666247) into [master](https://codecov.io/gh/apache/pinot/commit/3f98ce37fdaef0335fcd82e621489d65751b1f55?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (3f98ce3) will **decrease** coverage by `41.83%`.
> The diff coverage is `25.72%`.
> :exclamation: Current head 4666247 differs from pull request most recent head 1c1ba84. Consider uploading reports for the commit 1c1ba84 to get more accurate results
[![Impacted file tree graph](https://codecov.io/gh/apache/pinot/pull/8029/graphs/tree.svg?width=650&height=150&src=pr&token=4ibza2ugkz&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
```diff
@@              Coverage Diff              @@
##             master    #8029       +/-   ##
=============================================
- Coverage     70.72%   28.89%   -41.84%
=============================================
  Files          1631     1629        -2
  Lines         85279    85728      +449
  Branches      12844    12997      +153
=============================================
- Hits          60316    24773    -35543
- Misses        20799    58667    +37868
+ Partials       4164     2288     -1876
```
| Flag | Coverage Δ | |
|---|---|---|
| integration1 | `28.89% <25.72%> (+0.20%)` | :arrow_up: |
| integration2 | `?` | |
| unittests1 | `?` | |
| unittests2 | `?` | |
Flags with carried forward coverage won't be shown. [Click here](https://docs.codecov.io/docs/carryforward-flags?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#carryforward-flags-in-the-pull-request-comment) to find out more.
| [Impacted Files](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | Coverage Δ | |
|---|---|---|
| [...roker/requesthandler/BaseBrokerRequestHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtYnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9icm9rZXIvcmVxdWVzdGhhbmRsZXIvQmFzZUJyb2tlclJlcXVlc3RIYW5kbGVyLmphdmE=) | `61.13% <0.00%> (-10.73%)` | :arrow_down: |
| [.../org/apache/pinot/core/common/MinionConstants.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9jb21tb24vTWluaW9uQ29uc3RhbnRzLmphdmE=) | `0.00% <ø> (ø)` | |
| [...manager/realtime/LLRealtimeSegmentDataManager.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9kYXRhL21hbmFnZXIvcmVhbHRpbWUvTExSZWFsdGltZVNlZ21lbnREYXRhTWFuYWdlci5qYXZh) | `56.64% <ø> (-14.85%)` | :arrow_down: |
| [...a/manager/realtime/RealtimeSegmentDataManager.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9kYXRhL21hbmFnZXIvcmVhbHRpbWUvUmVhbHRpbWVTZWdtZW50RGF0YU1hbmFnZXIuamF2YQ==) | `50.00% <ø> (ø)` | |
| [...ata/manager/realtime/RealtimeTableDataManager.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9kYXRhL21hbmFnZXIvcmVhbHRpbWUvUmVhbHRpbWVUYWJsZURhdGFNYW5hZ2VyLmphdmE=) | `42.16% <ø> (-26.11%)` | :arrow_down: |
| [...ava/org/apache/pinot/core/plan/FilterPlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL0ZpbHRlclBsYW5Ob2RlLmphdmE=) | `53.27% <ø> (-36.45%)` | :arrow_down: |
| [...ache/pinot/core/plan/GapfillSelectionPlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL0dhcGZpbGxTZWxlY3Rpb25QbGFuTm9kZS5qYXZh) | `0.00% <0.00%> (ø)` | |
| [...query/reduce/ColumnDataToBlockValSetConverter.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvQ29sdW1uRGF0YVRvQmxvY2tWYWxTZXRDb252ZXJ0ZXIuamF2YQ==) | `0.00% <0.00%> (ø)` | |
| [...not/core/query/reduce/GapFillDataTableReducer.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvR2FwRmlsbERhdGFUYWJsZVJlZHVjZXIuamF2YQ==) | `0.00% <0.00%> (ø)` | |
| [.../pinot/core/query/reduce/GapfillFilterHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvR2FwZmlsbEZpbHRlckhhbmRsZXIuamF2YQ==) | `0.00% <0.00%> (ø)` | |
| ... and [1261 more](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Last update [3f98ce3...1c1ba84](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r819267921
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/request/context/utils/BrokerRequestToQueryContextConverter.java
##########
@@ -53,12 +55,81 @@ private BrokerRequestToQueryContextConverter() {
* Converts the given {@link BrokerRequest} into a {@link QueryContext}.
*/
public static QueryContext convert(BrokerRequest brokerRequest) {
- return brokerRequest.getPinotQuery() != null ? convertSQL(brokerRequest) : convertPQL(brokerRequest);
+ if (brokerRequest.getPinotQuery() != null) {
+ QueryContext queryContext = convertSQL(brokerRequest.getPinotQuery(), brokerRequest);
+ queryContext.setGapfillType(GapfillUtils.getGapfillType(queryContext));
+ validateForGapfillQuery(queryContext);
Review comment:
If I am not mistaken, BrokerRequestToQueryContextConverter runs inside the Pinot Broker. @amrishlal, can you clarify your comment further?
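The diff above inserts two steps between SQL-to-context conversion and returning: deriving the gapfill type and validating the query, so malformed gapfill queries fail fast in the broker. A toy sketch of that flow, using hypothetical stand-in types and marker strings (the real `GapfillUtils.getGapfillType` and validation logic inspect the parsed query, not raw SQL text):

```java
// Hypothetical stand-ins for the broker-side conversion flow in the diff:
// convert, classify the gapfill type, then validate before returning.
class ConverterSketch {

    enum GapfillType { NONE, GAP_FILL }

    static final class QueryContext {
        final String sql;
        GapfillType gapfillType = GapfillType.NONE;
        QueryContext(String sql) { this.sql = sql; }
    }

    // Stand-in for GapfillUtils.getGapfillType(queryContext); keys off a
    // marker in the SQL text purely for illustration.
    static GapfillType getGapfillType(QueryContext ctx) {
        return ctx.sql.contains("GAPFILL(") ? GapfillType.GAP_FILL : GapfillType.NONE;
    }

    // Stand-in for validateForGapfillQuery: a gapfill query must declare
    // a time bucket (illustrative rule only).
    static void validate(QueryContext ctx) {
        if (ctx.gapfillType != GapfillType.NONE && !ctx.sql.contains("TIMEBUCKET")) {
            throw new IllegalArgumentException("gapfill query needs a time bucket");
        }
    }

    static QueryContext convert(String sql) {
        QueryContext ctx = new QueryContext(sql); // convertSQL(...)
        ctx.gapfillType = getGapfillType(ctx);    // classify
        validate(ctx);                            // fail fast in the broker
        return ctx;
    }

    public static void main(String[] args) {
        QueryContext ctx = convert("SELECT GAPFILL(ts) FROM t GROUP BY TIMEBUCKET(ts)");
        System.out.println(ctx.gapfillType); // prints "GAP_FILL"
    }
}
```

Doing classification and validation at conversion time means servers never see a gapfill query the broker could not plan.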
[GitHub] [pinot] codecov-commenter edited a comment on pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#issuecomment-1013577523
# [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) Report
> Merging [#8029](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (b2edfa9) into [master](https://codecov.io/gh/apache/pinot/commit/3f98ce37fdaef0335fcd82e621489d65751b1f55?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (3f98ce3) will **decrease** coverage by `56.64%`.
> The diff coverage is `0.00%`.
[![Impacted file tree graph](https://codecov.io/gh/apache/pinot/pull/8029/graphs/tree.svg?width=650&height=150&src=pr&token=4ibza2ugkz&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
```diff
@@              Coverage Diff              @@
##             master    #8029       +/-   ##
=============================================
- Coverage     70.72%   14.08%   -56.65%
+ Complexity     4242       81     -4161
=============================================
  Files          1631     1596       -35
  Lines         85279    84232     -1047
  Branches      12844    12830       -14
=============================================
- Hits          60316    11860    -48456
- Misses        20799    71488    +50689
+ Partials       4164      884     -3280
```
| Flag | Coverage Δ | |
|---|---|---|
| integration1 | `?` | |
| integration2 | `?` | |
| unittests1 | `?` | |
| unittests2 | `14.08% <0.00%> (-0.02%)` | :arrow_down: |
Flags with carried forward coverage won't be shown. [Click here](https://docs.codecov.io/docs/carryforward-flags?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#carryforward-flags-in-the-pull-request-comment) to find out more.
| [Impacted Files](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | Coverage Δ | |
|---|---|---|
| [...roker/requesthandler/BaseBrokerRequestHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtYnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9icm9rZXIvcmVxdWVzdGhhbmRsZXIvQmFzZUJyb2tlclJlcXVlc3RIYW5kbGVyLmphdmE=) | `24.41% <0.00%> (-47.44%)` | :arrow_down: |
| [...org/apache/pinot/sql/parsers/CalciteSqlParser.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9zcWwvcGFyc2Vycy9DYWxjaXRlU3FsUGFyc2VyLmphdmE=) | `0.00% <0.00%> (-86.56%)` | :arrow_down: |
| [...va/org/apache/pinot/core/plan/CombinePlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL0NvbWJpbmVQbGFuTm9kZS5qYXZh) | `0.00% <0.00%> (-87.50%)` | :arrow_down: |
| [...ache/pinot/core/plan/GapfillSelectionPlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL0dhcGZpbGxTZWxlY3Rpb25QbGFuTm9kZS5qYXZh) | `0.00% <0.00%> (ø)` | |
| [...pinot/core/plan/maker/InstancePlanMakerImplV2.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL21ha2VyL0luc3RhbmNlUGxhbk1ha2VySW1wbFYyLmphdmE=) | `0.00% <0.00%> (-76.77%)` | :arrow_down: |
| [...query/reduce/ColumnDataToBlockValSetConverter.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvQ29sdW1uRGF0YVRvQmxvY2tWYWxTZXRDb252ZXJ0ZXIuamF2YQ==) | `0.00% <0.00%> (ø)` | |
| [...not/core/query/reduce/GapFillDataTableReducer.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvR2FwRmlsbERhdGFUYWJsZVJlZHVjZXIuamF2YQ==) | `0.00% <0.00%> (ø)` | |
| [.../pinot/core/query/reduce/GapfillFilterHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvR2FwZmlsbEZpbHRlckhhbmRsZXIuamF2YQ==) | `0.00% <0.00%> (ø)` | |
| [...e/pinot/core/query/reduce/HavingFilterHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvSGF2aW5nRmlsdGVySGFuZGxlci5qYXZh) | `0.00% <0.00%> (-91.31%)` | :arrow_down: |
| [...inot/core/query/reduce/PostAggregationHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUG9zdEFnZ3JlZ2F0aW9uSGFuZGxlci5qYXZh) | `0.00% <0.00%> (-91.77%)` | :arrow_down: |
| ... and [1328 more](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Last update [3f98ce3...b2edfa9](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r819266353
##########
File path: pinot-common/src/main/java/org/apache/pinot/sql/parsers/CalciteSqlParser.java
##########
@@ -369,6 +369,9 @@ private static PinotQuery compileCalciteSqlToPinotQuery(String sql) {
DataSource dataSource = new DataSource();
dataSource.setTableName(fromNode.toString());
pinotQuery.setDataSource(dataSource);
+ if (fromNode instanceof SqlSelect || fromNode instanceof SqlOrderBy) {
Review comment:
I do not have deep insight into it, but the following code shows that SqlOrderBy holds the SqlSelect as a member: https://github.com/apache/pinot/blob/master/pinot-common/src/main/java/org/apache/pinot/sql/parsers/CalciteSqlParser.java#L343-L353
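This is why the diff accepts both node types in the FROM clause: a subquery with an outer ORDER BY parses to a SqlOrderBy node that wraps the inner SqlSelect, so the wrapper must be unwrapped first. A toy illustration of that unwrapping pattern, using stand-in classes rather than Calcite's real SqlNode hierarchy:

```java
// Toy stand-ins mirroring the shape of Calcite's SqlSelect / SqlOrderBy
// (in Calcite, SqlOrderBy exposes the wrapped query node as a member).
class FromNodeSketch {

    interface SqlNode { }

    static final class SqlSelect implements SqlNode {
        final String selectList;
        SqlSelect(String selectList) { this.selectList = selectList; }
    }

    /** An ORDER BY wrapper around another query node. */
    static final class SqlOrderBy implements SqlNode {
        final SqlNode query; // the wrapped SqlSelect
        final String orderList;
        SqlOrderBy(SqlNode query, String orderList) {
            this.query = query;
            this.orderList = orderList;
        }
    }

    /** Unwrap to the underlying SqlSelect, as a FROM-clause check must do. */
    static SqlSelect unwrapSelect(SqlNode fromNode) {
        if (fromNode instanceof SqlOrderBy) {
            fromNode = ((SqlOrderBy) fromNode).query;
        }
        if (fromNode instanceof SqlSelect) {
            return (SqlSelect) fromNode;
        }
        return null; // plain table name, not a subquery
    }

    public static void main(String[] args) {
        SqlSelect select = new SqlSelect("col1");
        SqlNode from = new SqlOrderBy(select, "col1 DESC");
        System.out.println(unwrapSelect(from) == select); // prints "true"
    }
}
```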
[GitHub] [pinot] codecov-commenter edited a comment on pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#issuecomment-1013577523
# [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) Report
> Merging [#8029](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (4a0d902) into [master](https://codecov.io/gh/apache/pinot/commit/262dc50e236ed2af25a0cf8c67658a48731ce573?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (262dc50) will **decrease** coverage by `1.33%`.
> The diff coverage is `77.11%`.
[![Impacted file tree graph](https://codecov.io/gh/apache/pinot/pull/8029/graphs/tree.svg?width=650&height=150&src=pr&token=4ibza2ugkz&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
```diff
@@             Coverage Diff             @@
##            master    #8029      +/-   ##
============================================
- Coverage     70.83%   69.50%   -1.34%
+ Complexity     4258     4252       -6
============================================
  Files          1636     1645       +9
  Lines         85804    86326     +522
  Branches      12920    13059     +139
============================================
- Hits          60779    59998     -781
- Misses        20836    22126    +1290
- Partials       4189     4202      +13
```
| Flag | Coverage Δ | |
|---|---|---|
| integration1 | `?` | |
| integration2 | `27.44% <19.74%> (-0.15%)` | :arrow_down: |
| unittests1 | `66.96% <76.19%> (+0.01%)` | :arrow_up: |
| unittests2 | `14.08% <0.31%> (-0.10%)` | :arrow_down: |
Flags with carried forward coverage won't be shown. [Click here](https://docs.codecov.io/docs/carryforward-flags?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#carryforward-flags-in-the-pull-request-comment) to find out more.
| [Impacted Files](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | Coverage Δ | |
|---|---|---|
| [...roker/requesthandler/GrpcBrokerRequestHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtYnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9icm9rZXIvcmVxdWVzdGhhbmRsZXIvR3JwY0Jyb2tlclJlcXVlc3RIYW5kbGVyLmphdmE=) | `78.57% <ø> (ø)` | |
| [...t/core/query/selection/SelectionOperatorUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9zZWxlY3Rpb24vU2VsZWN0aW9uT3BlcmF0b3JVdGlscy5qYXZh) | `91.58% <0.00%> (-0.60%)` | :arrow_down: |
| [...query/reduce/ColumnDataToBlockValSetConverter.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvQ29sdW1uRGF0YVRvQmxvY2tWYWxTZXRDb252ZXJ0ZXIuamF2YQ==) | `17.30% <17.30%> (ø)` | |
| [...query/request/context/utils/QueryContextUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZXF1ZXN0L2NvbnRleHQvdXRpbHMvUXVlcnlDb250ZXh0VXRpbHMuamF2YQ==) | `67.30% <36.36%> (-9.97%)` | :arrow_down: |
| [...roker/requesthandler/BaseBrokerRequestHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtYnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9icm9rZXIvcmVxdWVzdGhhbmRsZXIvQmFzZUJyb2tlclJlcXVlc3RIYW5kbGVyLmphdmE=) | `70.79% <66.66%> (-1.04%)` | :arrow_down: |
| [...inot/core/query/reduce/PostAggregationHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUG9zdEFnZ3JlZ2F0aW9uSGFuZGxlci5qYXZh) | `91.89% <66.66%> (+0.12%)` | :arrow_up: |
| [...ot/core/query/reduce/filter/RowMatcherFactory.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvZmlsdGVyL1Jvd01hdGNoZXJGYWN0b3J5LmphdmE=) | `66.66% <66.66%> (ø)` | |
| [.../java/org/apache/pinot/core/util/GapfillUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS91dGlsL0dhcGZpbGxVdGlscy5qYXZh) | `65.55% <66.85%> (+1.91%)` | :arrow_up: |
| [...ore/query/reduce/filter/LiteralValueExtractor.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvZmlsdGVyL0xpdGVyYWxWYWx1ZUV4dHJhY3Rvci5qYXZh) | `83.33% <83.33%> (ø)` | |
| [...org/apache/pinot/sql/parsers/CalciteSqlParser.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9zcWwvcGFyc2Vycy9DYWxjaXRlU3FsUGFyc2VyLmphdmE=) | `88.01% <84.61%> (-0.18%)` | :arrow_down: |
| ... and [137 more](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Last update [262dc50...4a0d902](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: commits-unsubscribe@pinot.apache.org
For queries about this service, please contact Infrastructure at:
users@infra.apache.org
---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@pinot.apache.org
For additional commands, e-mail: commits-help@pinot.apache.org
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r821252308
##########
File path: pinot-common/src/main/java/org/apache/pinot/sql/parsers/CalciteSqlParser.java
##########
@@ -369,6 +369,9 @@ private static PinotQuery compileCalciteSqlToPinotQuery(String sql) {
DataSource dataSource = new DataSource();
dataSource.setTableName(fromNode.toString());
pinotQuery.setDataSource(dataSource);
+ if (fromNode instanceof SqlSelect || fromNode instanceof SqlOrderBy) {
Review comment:
No, I did not flatten the subqueries into a single query.
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r829447775
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/reduce/BrokerReduceService.java
##########
@@ -108,7 +108,12 @@ public BrokerResponseNative reduceOnDataTable(BrokerRequest brokerRequest,
dataTableReducer.reduceAndSetResults(rawTableName, cachedDataSchema, dataTableMap, brokerResponseNative,
new DataTableReducerContext(_reduceExecutorService, _maxReduceThreadsPerQuery, reduceTimeOutMs,
_groupByTrimThreshold), brokerMetrics);
- updateAlias(queryContext, brokerResponseNative);
+ QueryContext originalQueryContext = BrokerRequestToQueryContextConverter.convert(originalBrokerRequest);
Review comment:
Fixed
[GitHub] [pinot] Jackie-Jiang commented on pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
Jackie-Jiang commented on pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#issuecomment-1063318916
Discussed offline. We can simplify the logic by handling the whole gapfill processing on the broker side in the `BrokerRequestHandler`:
1. When getting a gapfill query, rewrite it to a regular non-gapfill query (if the leaf subquery is a gapfill query, trim off the gapfill part and select all the required columns)
2. Send the non-gapfill query as a regular query
3. After reducing the server responses to `BrokerResponse`, apply gapfill to the `ResultTable` within the `BrokerResponse` and set the new `ResultTable`
With this logic, all the gapfill handling is done on the broker side in one place, without introducing unnecessary overhead on the server
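Step 3 above, filling gaps in the reduced result table, can be illustrated with a minimal self-contained sketch. This is not the PR's implementation: it uses plain Java collections instead of Pinot's `ResultTable`, and the row layout `[bucketStartMs, key, value]` and the fill-with-previous-value policy are assumptions for illustration only.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.TreeSet;

// Toy model of broker-side gapfill (step 3): after the server responses are
// reduced into a flat result table, fill each missing (timeBucket, key) cell
// by carrying the previous observed value forward for that key.
public class GapfillSketch {

  /** rows are [bucketStartMs (Long), key (String), value (Object)]. */
  public static List<Object[]> gapfill(List<Object[]> rows, long startMs, long endMs, long bucketMs) {
    Map<Long, Map<String, Object[]>> rowsByBucket = new HashMap<>();
    TreeSet<String> keys = new TreeSet<>();
    for (Object[] row : rows) {
      rowsByBucket.computeIfAbsent((Long) row[0], b -> new HashMap<>()).put((String) row[1], row);
      keys.add((String) row[1]);
    }
    Map<String, Object> previousValue = new HashMap<>();
    List<Object[]> filled = new ArrayList<>();
    for (long bucket = startMs; bucket < endMs; bucket += bucketMs) {
      Map<String, Object[]> present = rowsByBucket.getOrDefault(bucket, Collections.emptyMap());
      for (String key : keys) {
        Object[] row = present.get(key);
        // Gap: fall back to the last value seen for this key, if any.
        Object value = (row != null) ? row[2] : previousValue.get(key);
        if (value == null) {
          continue; // no earlier observation to fill from; leave the gap
        }
        previousValue.put(key, value);
        filled.add(new Object[]{bucket, key, value});
      }
    }
    return filled;
  }

  public static void main(String[] args) {
    List<Object[]> rows = new ArrayList<>();
    rows.add(new Object[]{0L, "a", 1});
    rows.add(new Object[]{0L, "b", 2});
    rows.add(new Object[]{2L, "a", 3}); // "b" is missing in buckets 1 and 2
    for (Object[] r : gapfill(rows, 0L, 3L, 1L)) {
      System.out.println(r[0] + "," + r[1] + "," + r[2]);
    }
  }
}
```

Because the input is already a fully reduced table, this step needs no per-server state, which is what keeps the overhead off the servers.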
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r831473442
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/util/GapfillUtils.java
##########
@@ -116,7 +132,265 @@ static public Serializable getDefaultValue(DataSchema.ColumnDataType dataType) {
}
}
- private static String canonicalizeFunctionName(String functionName) {
- return StringUtils.remove(functionName, '_').toLowerCase();
+ public static boolean isGapfill(ExpressionContext expressionContext) {
+ if (expressionContext.getType() != ExpressionContext.Type.FUNCTION) {
+ return false;
+ }
+
+ return GAP_FILL.equals(expressionContext.getFunction().getFunctionName());
+ }
+
+ private static boolean isGapfill(QueryContext queryContext) {
+ for (ExpressionContext expressionContext : queryContext.getSelectExpressions()) {
+ if (isGapfill(expressionContext)) {
+ return true;
+ }
+ }
+ return false;
+ }
+
+ /**
+ * Get the gapfill type for queryContext. Also do the validation for gapfill request.
+ * @param queryContext
+ */
+ public static GapfillType getGapfillType(QueryContext queryContext) {
+ GapfillType gapfillType = null;
+ if (queryContext.getSubquery() == null) {
+ if (isGapfill(queryContext)) {
Preconditions.checkArgument(queryContext.getAggregationFunctions() == null,
"Aggregation and Gapfill cannot be in the same SQL statement.");
gapfillType = GapfillType.GAP_FILL;
}
} else if (isGapfill(queryContext)) {
Preconditions.checkArgument(queryContext.getSubquery().getAggregationFunctions() != null,
"Select and Gapfill should be in the same SQL statement.");
Preconditions.checkArgument(queryContext.getSubquery().getSubquery() == null,
"Three levels of query nesting are not supported when the outer query is gapfill.");
+ gapfillType = GapfillType.AGGREGATE_GAP_FILL;
+ } else if (isGapfill(queryContext.getSubquery())) {
+ if (queryContext.getAggregationFunctions() == null) {
+ gapfillType = GapfillType.GAP_FILL_SELECT;
+ } else if (queryContext.getSubquery().getSubquery() == null) {
+ gapfillType = GapfillType.GAP_FILL_AGGREGATE;
+ } else {
+ Preconditions
+ .checkArgument(queryContext.getSubquery().getSubquery().getAggregationFunctions() != null,
+ "Select cannot happen before gapfill.");
+ gapfillType = GapfillType.AGGREGATE_GAP_FILL_AGGREGATE;
+ }
+ }
+
+ if (gapfillType == null) {
+ return gapfillType;
+ }
+
+ ExpressionContext gapFillSelection = GapfillUtils.getGapfillExpressionContext(queryContext, gapfillType);
+
+ Preconditions.checkArgument(gapFillSelection != null && gapFillSelection.getFunction() != null,
+ "Gapfill Expression should be function.");
+ List<ExpressionContext> args = gapFillSelection.getFunction().getArguments();
Preconditions.checkArgument(args.size() > 5, "Gapfill does not have the correct number of arguments.");
Preconditions.checkArgument(args.get(1).getLiteral() != null,
"The second argument of Gapfill should be the time formatter.");
Preconditions.checkArgument(args.get(2).getLiteral() != null,
"The third argument of Gapfill should be the start time.");
Preconditions.checkArgument(args.get(3).getLiteral() != null,
"The fourth argument of Gapfill should be the end time.");
Preconditions.checkArgument(args.get(4).getLiteral() != null,
"The fifth argument of Gapfill should be the time bucket size.");
+
+ ExpressionContext timeseriesOn = GapfillUtils.getTimeSeriesOnExpressionContext(gapFillSelection);
+ Preconditions.checkArgument(timeseriesOn != null, "The TimeSeriesOn expressions should be specified.");
+
+ if (queryContext.getAggregationFunctions() == null) {
+ return gapfillType;
+ }
+
+ List<ExpressionContext> groupbyExpressions = queryContext.getGroupByExpressions();
+ Preconditions.checkArgument(groupbyExpressions != null, "No GroupBy Clause.");
+ List<ExpressionContext> innerSelections = queryContext.getSubquery().getSelectExpressions();
+ String timeBucketCol = null;
+ List<String> strAlias = queryContext.getSubquery().getAliasList();
+ for (int i = 0; i < innerSelections.size(); i++) {
+ ExpressionContext innerSelection = innerSelections.get(i);
+ if (GapfillUtils.isGapfill(innerSelection)) {
+ if (strAlias.get(i) != null) {
+ timeBucketCol = strAlias.get(i);
+ } else {
+ timeBucketCol = innerSelection.getFunction().getArguments().get(0).toString();
+ }
+ break;
+ }
+ }
+
+ Preconditions.checkArgument(timeBucketCol != null, "No Group By timebucket.");
+
+ boolean findTimeBucket = false;
+ for (ExpressionContext groupbyExp : groupbyExpressions) {
+ if (timeBucketCol.equals(groupbyExp.toString())) {
+ findTimeBucket = true;
+ break;
+ }
+ }
+
+ Preconditions.checkArgument(findTimeBucket, "No Group By timebucket.");
+ return gapfillType;
+ }
+
+ private static ExpressionContext findGapfillExpressionContext(QueryContext queryContext) {
+ for (ExpressionContext expressionContext : queryContext.getSelectExpressions()) {
+ if (isGapfill(expressionContext)) {
+ return expressionContext;
+ }
+ }
+ return null;
+ }
+
+ public static ExpressionContext getGapfillExpressionContext(QueryContext queryContext, GapfillType gapfillType) {
+ if (gapfillType == GapfillType.AGGREGATE_GAP_FILL || gapfillType == GapfillType.GAP_FILL) {
+ return findGapfillExpressionContext(queryContext);
+ } else if (gapfillType == GapfillType.GAP_FILL_AGGREGATE || gapfillType == GapfillType.AGGREGATE_GAP_FILL_AGGREGATE
+ || gapfillType == GapfillType.GAP_FILL_SELECT) {
+ return findGapfillExpressionContext(queryContext.getSubquery());
+ } else {
+ return null;
+ }
+ }
+
+ public static int findTimeBucketColumnIndex(QueryContext queryContext, GapfillType gapfillType) {
+ if (gapfillType == GapfillType.GAP_FILL_AGGREGATE
+ || gapfillType == GapfillType.GAP_FILL_SELECT
+ || gapfillType == GapfillType.AGGREGATE_GAP_FILL_AGGREGATE) {
+ queryContext = queryContext.getSubquery();
+ }
+ List<ExpressionContext> expressionContexts = queryContext.getSelectExpressions();
+ for (int i = 0; i < expressionContexts.size(); i++) {
+ if (isGapfill(expressionContexts.get(i))) {
+ return i;
+ }
+ }
+ return -1;
+ }
+
+ public static ExpressionContext getTimeSeriesOnExpressionContext(ExpressionContext gapFillSelection) {
+ List<ExpressionContext> args = gapFillSelection.getFunction().getArguments();
+ for (int i = STARTING_INDEX_OF_OPTIONAL_ARGS_FOR_PRE_AGGREGATE_GAP_FILL; i < args.size(); i++) {
+ if (GapfillUtils.isTimeSeriesOn(args.get(i))) {
+ return args.get(i);
+ }
+ }
+ return null;
+ }
+
+ public static Map<String, ExpressionContext> getFillExpressions(ExpressionContext gapFillSelection) {
+ Map<String, ExpressionContext> fillExpressions = new HashMap<>();
+ List<ExpressionContext> args = gapFillSelection.getFunction().getArguments();
+ for (int i = STARTING_INDEX_OF_OPTIONAL_ARGS_FOR_PRE_AGGREGATE_GAP_FILL; i < args.size(); i++) {
+ if (GapfillUtils.isFill(args.get(i))) {
+ ExpressionContext fillExpression = args.get(i);
+ fillExpressions.put(fillExpression.getFunction().getArguments().get(0).getIdentifier(), fillExpression);
+ }
+ }
+ return fillExpressions;
+ }
+
+ public static BrokerRequest stripGapfill(BrokerRequest brokerRequest) {
+ if (brokerRequest.getPinotQuery().getDataSource() == null) {
+ return brokerRequest;
+ }
+ PinotQuery pinotQuery = brokerRequest.getPinotQuery();
+ if (pinotQuery.getDataSource().getSubquery() == null && !hasGapfill(pinotQuery)) {
+ return brokerRequest;
+ }
+
+ // carry over the query options from original query to server query.
+ Map<String, String> queryOptions = pinotQuery.getQueryOptions();
Review comment:
Fixed
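For reference, the argument positions validated in `getGapfillType` (arg 1: time format, args 2-3: start/end time, args 4-5: bucket sizes, plus the optional `FILL` and `TIMESERIESON` expressions scanned afterwards) correspond to a query shape along these lines. The table, column names, literal values, and fill strategy below are hypothetical; the format and granularity strings follow Pinot's `DateTimeFormatSpec`/`DateTimeGranularitySpec` conventions.

```sql
SELECT GapFill(ts, '1:MILLISECONDS:EPOCH',        -- arg 0: time expression, arg 1: time format
               '1636257600000', '1636286400000',  -- args 2-3: start and end time
               '5:MINUTES', '5:MINUTES',          -- args 4-5: gapfill and post-gapfill bucket sizes
               FILL(status, 'FILL_PREVIOUS_VALUE'),
               TIMESERIESON(deviceId)) AS ts,
       deviceId, status
FROM myTable
WHERE ts >= 1636257600000 AND ts < 1636286400000
LIMIT 10000
```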
[GitHub] [pinot] codecov-commenter edited a comment on pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#issuecomment-1013577523
# [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) Report
> Merging [#8029](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (0e17924) into [master](https://codecov.io/gh/apache/pinot/commit/24d4fd268d28473ffd3ce1ce262322391810f356?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (24d4fd2) will **increase** coverage by `0.12%`.
> The diff coverage is `77.24%`.
> :exclamation: Current head 0e17924 differs from pull request most recent head a181fc7. Consider uploading reports for the commit a181fc7 to get more accurate results
```diff
@@ Coverage Diff @@
## master #8029 +/- ##
============================================
+ Coverage 64.05% 64.18% +0.12%
- Complexity 4264 4268 +4
============================================
Files 1595 1604 +9
Lines 84050 84553 +503
Branches 12719 12861 +142
============================================
+ Hits 53835 54267 +432
- Misses 26337 26373 +36
- Partials 3878 3913 +35
```
| Flag | Coverage Δ | |
|---|---|---|
| unittests1 | `67.10% <79.01%> (+0.15%)` | :arrow_up: |
| unittests2 | `14.13% <0.33%> (-0.09%)` | :arrow_down: |
Flags with carried forward coverage won't be shown. [Click here](https://docs.codecov.io/docs/carryforward-flags?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#carryforward-flags-in-the-pull-request-comment) to find out more.
| [Impacted Files](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | Coverage Δ | |
|---|---|---|
| [...roker/requesthandler/GrpcBrokerRequestHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtYnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9icm9rZXIvcmVxdWVzdGhhbmRsZXIvR3JwY0Jyb2tlclJlcXVlc3RIYW5kbGVyLmphdmE=) | `0.00% <ø> (ø)` | |
| [...thandler/SingleConnectionBrokerRequestHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtYnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9icm9rZXIvcmVxdWVzdGhhbmRsZXIvU2luZ2xlQ29ubmVjdGlvbkJyb2tlclJlcXVlc3RIYW5kbGVyLmphdmE=) | `13.20% <0.00%> (+0.24%)` | :arrow_up: |
| [...pache/pinot/common/utils/request/RequestUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9jb21tb24vdXRpbHMvcmVxdWVzdC9SZXF1ZXN0VXRpbHMuamF2YQ==) | `85.71% <0.00%> (-1.79%)` | :arrow_down: |
| [...t/controller/api/resources/PinotQueryResource.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29udHJvbGxlci9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29udHJvbGxlci9hcGkvcmVzb3VyY2VzL1Bpbm90UXVlcnlSZXNvdXJjZS5qYXZh) | `0.00% <0.00%> (ø)` | |
| [...roker/requesthandler/BaseBrokerRequestHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtYnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9icm9rZXIvcmVxdWVzdGhhbmRsZXIvQmFzZUJyb2tlclJlcXVlc3RIYW5kbGVyLmphdmE=) | `23.68% <14.28%> (+0.02%)` | :arrow_up: |
| [...e/pinot/core/query/reduce/RowBasedBlockValSet.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUm93QmFzZWRCbG9ja1ZhbFNldC5qYXZh) | `16.12% <16.12%> (ø)` | |
| [...inot/core/query/reduce/PostAggregationHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUG9zdEFnZ3JlZ2F0aW9uSGFuZGxlci5qYXZh) | `91.89% <66.66%> (+0.12%)` | :arrow_up: |
| [...ot/core/query/reduce/filter/RowMatcherFactory.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvZmlsdGVyL1Jvd01hdGNoZXJGYWN0b3J5LmphdmE=) | `66.66% <66.66%> (ø)` | |
| [.../java/org/apache/pinot/core/util/GapfillUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS91dGlsL0dhcGZpbGxVdGlscy5qYXZh) | `72.28% <74.21%> (+8.64%)` | :arrow_up: |
| [...ore/query/reduce/filter/LiteralValueExtractor.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvZmlsdGVyL0xpdGVyYWxWYWx1ZUV4dHJhY3Rvci5qYXZh) | `83.33% <83.33%> (ø)` | |
| ... and [28 more](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Last update [24d4fd2...a181fc7](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r830415110
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/reduce/GapfillProcessor.java
##########
@@ -0,0 +1,471 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.core.query.reduce;
+
+import com.google.common.base.Preconditions;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import org.apache.pinot.common.request.context.ExpressionContext;
+import org.apache.pinot.common.request.context.FunctionContext;
+import org.apache.pinot.common.response.broker.BrokerResponseNative;
+import org.apache.pinot.common.response.broker.ResultTable;
+import org.apache.pinot.common.utils.DataSchema;
+import org.apache.pinot.common.utils.DataSchema.ColumnDataType;
+import org.apache.pinot.core.common.BlockValSet;
+import org.apache.pinot.core.data.table.Key;
+import org.apache.pinot.core.query.aggregation.function.AggregationFunction;
+import org.apache.pinot.core.query.aggregation.function.AggregationFunctionFactory;
+import org.apache.pinot.core.query.aggregation.function.CountAggregationFunction;
+import org.apache.pinot.core.query.aggregation.groupby.GroupByResultHolder;
+import org.apache.pinot.core.query.request.context.QueryContext;
+import org.apache.pinot.core.util.GapfillUtils;
+import org.apache.pinot.spi.data.DateTimeFormatSpec;
+import org.apache.pinot.spi.data.DateTimeGranularitySpec;
+
+
+/**
+ * Helper class to reduce and set gap fill results into the BrokerResponseNative
+ */
+@SuppressWarnings({"rawtypes", "unchecked"})
+public class GapfillProcessor {
+ private final QueryContext _queryContext;
+
+ private final int _limitForAggregatedResult;
+ private final DateTimeGranularitySpec _gapfillDateTimeGranularity;
+ private final DateTimeGranularitySpec _postGapfillDateTimeGranularity;
+ private final DateTimeFormatSpec _dateTimeFormatter;
+ private final long _startMs;
+ private final long _endMs;
+ private final long _gapfillTimeBucketSize;
+ private final long _postGapfillTimeBucketSize;
+ private final int _numOfTimeBuckets;
+ private final List<Integer> _groupByKeyIndexes;
+ private final Set<Key> _groupByKeys;
+ private final Map<Key, Object[]> _previousByGroupKey;
+ private final Map<String, ExpressionContext> _fillExpressions;
+ private final List<ExpressionContext> _timeSeries;
+ private final GapfillUtils.GapfillType _gapfillType;
+ private int _limitForGapfilledResult;
+ private boolean[] _isGroupBySelections;
+ private final int _timeBucketColumnIndex;
+ private int[] _sourceColumnIndexForResultSchema = null;
+ private final int _aggregationSize;
+
+ GapfillProcessor(QueryContext queryContext, GapfillUtils.GapfillType gapfillType) {
+ _queryContext = queryContext;
+ _gapfillType = gapfillType;
+ _limitForAggregatedResult = queryContext.getLimit();
+ if (_gapfillType == GapfillUtils.GapfillType.AGGREGATE_GAP_FILL
+ || _gapfillType == GapfillUtils.GapfillType.GAP_FILL) {
+ _limitForGapfilledResult = queryContext.getLimit();
+ } else {
+ _limitForGapfilledResult = queryContext.getSubquery().getLimit();
+ }
+
+ ExpressionContext gapFillSelection = GapfillUtils.getGapfillExpressionContext(queryContext, _gapfillType);
+ _timeBucketColumnIndex = GapfillUtils.findTimeBucketColumnIndex(queryContext, _gapfillType);
+
+ List<ExpressionContext> args = gapFillSelection.getFunction().getArguments();
+
+ _dateTimeFormatter = new DateTimeFormatSpec(args.get(1).getLiteral());
+ _gapfillDateTimeGranularity = new DateTimeGranularitySpec(args.get(4).getLiteral());
+ _postGapfillDateTimeGranularity = new DateTimeGranularitySpec(args.get(5).getLiteral());
+ String start = args.get(2).getLiteral();
+ _startMs = truncate(_dateTimeFormatter.fromFormatToMillis(start));
+ String end = args.get(3).getLiteral();
+ _endMs = truncate(_dateTimeFormatter.fromFormatToMillis(end));
+ _gapfillTimeBucketSize = _gapfillDateTimeGranularity.granularityToMillis();
+ _postGapfillTimeBucketSize = _postGapfillDateTimeGranularity.granularityToMillis();
+ _numOfTimeBuckets = (int) ((_endMs - _startMs) / _gapfillTimeBucketSize);
+
+ _fillExpressions = GapfillUtils.getFillExpressions(gapFillSelection);
+
+ _aggregationSize = (int) (_postGapfillTimeBucketSize / _gapfillTimeBucketSize);
+
+ _previousByGroupKey = new HashMap<>();
+ _groupByKeyIndexes = new ArrayList<>();
+ _groupByKeys = new HashSet<>();
+
+ ExpressionContext timeseriesOn = GapfillUtils.getTimeSeriesOnExpressionContext(gapFillSelection);
+ _timeSeries = timeseriesOn.getFunction().getArguments();
+ }
+
+ private int findGapfillBucketIndex(long time) {
+ return (int) ((time - _startMs) / _gapfillTimeBucketSize);
+ }
+
+ private void replaceColumnNameWithAlias(DataSchema dataSchema) {
+ QueryContext queryContext;
+ if (_gapfillType == GapfillUtils.GapfillType.AGGREGATE_GAP_FILL_AGGREGATE) {
+ queryContext = _queryContext.getSubquery().getSubquery();
+ } else if (_gapfillType == GapfillUtils.GapfillType.GAP_FILL) {
+ queryContext = _queryContext;
+ } else {
+ queryContext = _queryContext.getSubquery();
+ }
+ List<String> aliasList = queryContext.getAliasList();
+ Map<String, String> columnNameToAliasMap = new HashMap<>();
+ for (int i = 0; i < aliasList.size(); i++) {
+ if (aliasList.get(i) != null) {
+ ExpressionContext selection = queryContext.getSelectExpressions().get(i);
+ if (GapfillUtils.isGapfill(selection)) {
+ selection = selection.getFunction().getArguments().get(0);
+ }
+ columnNameToAliasMap.put(selection.toString(), aliasList.get(i));
+ }
+ }
+ for (int i = 0; i < dataSchema.getColumnNames().length; i++) {
+ if (columnNameToAliasMap.containsKey(dataSchema.getColumnNames()[i])) {
+ dataSchema.getColumnNames()[i] = columnNameToAliasMap.get(dataSchema.getColumnNames()[i]);
+ }
+ }
+ }
+
+ /**
+ * Three things happen here:
+ * 1. Sort the result sets from all Pinot servers based on timestamp.
+ * 2. Gapfill the data for missing entities per time bucket.
+ * 3. Aggregate the dataset per time bucket.
+ */
+ public void process(BrokerResponseNative brokerResponseNative) {
+ DataSchema dataSchema = brokerResponseNative.getResultTable().getDataSchema();
+ DataSchema resultTableSchema = getResultTableDataSchema(dataSchema);
+ if (brokerResponseNative.getResultTable().getRows().isEmpty()) {
+ brokerResponseNative.setResultTable(new ResultTable(resultTableSchema, Collections.emptyList()));
+ return;
+ }
+
+ String[] columns = dataSchema.getColumnNames();
+
+ Map<String, Integer> indexes = new HashMap<>();
+ for (int i = 0; i < columns.length; i++) {
+ indexes.put(columns[i], i);
+ }
+
+ _isGroupBySelections = new boolean[dataSchema.getColumnDataTypes().length];
+
+ // The first argument of timeSeries is the time column. The remaining arguments define the entity.
+ for (ExpressionContext entityColumn : _timeSeries) {
+ int index = indexes.get(entityColumn.getIdentifier());
+ _isGroupBySelections[index] = true;
+ }
+
+ for (int i = 0; i < _isGroupBySelections.length; i++) {
+ if (_isGroupBySelections[i]) {
+ _groupByKeyIndexes.add(i);
+ }
+ }
+
+ List<Object[]>[] timeBucketedRawRows = putRawRowsIntoTimeBucket(brokerResponseNative.getResultTable().getRows());
+
+ replaceColumnNameWithAlias(dataSchema);
+
+ if (_queryContext.getAggregationFunctions() == null) {
+
+ Map<String, Integer> sourceColumnsIndexes = new HashMap<>();
+ for (int i = 0; i < dataSchema.getColumnNames().length; i++) {
+ sourceColumnsIndexes.put(dataSchema.getColumnName(i), i);
+ }
+ _sourceColumnIndexForResultSchema = new int[resultTableSchema.getColumnNames().length];
+ for (int i = 0; i < _sourceColumnIndexForResultSchema.length; i++) {
+ _sourceColumnIndexForResultSchema[i] = sourceColumnsIndexes.get(resultTableSchema.getColumnName(i));
+ }
+ }
+
+ List<Object[]> resultRows = gapFillAndAggregate(timeBucketedRawRows, resultTableSchema, dataSchema);
+ brokerResponseNative.setResultTable(new ResultTable(resultTableSchema, resultRows));
+ }
+
+ /**
+ * Constructs the DataSchema for the ResultTable.
+ */
+ private DataSchema getResultTableDataSchema(DataSchema dataSchema) {
+ if (_gapfillType == GapfillUtils.GapfillType.GAP_FILL) {
+ return dataSchema;
+ }
+
+ int numOfColumns = _queryContext.getSelectExpressions().size();
+ String[] columnNames = new String[numOfColumns];
+ ColumnDataType[] columnDataTypes = new ColumnDataType[numOfColumns];
+ for (int i = 0; i < numOfColumns; i++) {
+ ExpressionContext expressionContext = _queryContext.getSelectExpressions().get(i);
+ if (GapfillUtils.isGapfill(expressionContext)) {
+ expressionContext = expressionContext.getFunction().getArguments().get(0);
+ }
+ if (expressionContext.getType() != ExpressionContext.Type.FUNCTION) {
+ columnNames[i] = expressionContext.getIdentifier();
+ columnDataTypes[i] = ColumnDataType.STRING;
+ } else {
+ FunctionContext functionContext = expressionContext.getFunction();
+ AggregationFunction aggregationFunction =
+ AggregationFunctionFactory.getAggregationFunction(functionContext, _queryContext);
+ columnDataTypes[i] = aggregationFunction.getFinalResultColumnType();
+ columnNames[i] = functionContext.toString();
+ }
+ }
+ return new DataSchema(columnNames, columnDataTypes);
+ }
+
+ private Key constructGroupKeys(Object[] row) {
+ Object[] groupKeys = new Object[_groupByKeyIndexes.size()];
+ for (int i = 0; i < _groupByKeyIndexes.size(); i++) {
+ groupKeys[i] = row[_groupByKeyIndexes.get(i)];
+ }
+ return new Key(groupKeys);
+ }
+
+ private long truncate(long epoch) {
+ int sz = _gapfillDateTimeGranularity.getSize();
+ return epoch / sz * sz;
+ }
+
+ private List<Object[]> gapFillAndAggregate(List<Object[]>[] timeBucketedRawRows,
+ DataSchema dataSchemaForAggregatedResult, DataSchema dataSchema) {
+ List<Object[]> result = new ArrayList<>();
+
+ GapfillFilterHandler postGapfillFilterHandler = null;
+ if (_queryContext.getSubquery() != null && _queryContext.getFilter() != null) {
+ postGapfillFilterHandler = new GapfillFilterHandler(_queryContext.getFilter(), dataSchema);
+ }
+ GapfillFilterHandler postAggregateHavingFilterHandler = null;
+ if (_queryContext.getHavingFilter() != null) {
+ postAggregateHavingFilterHandler =
+ new GapfillFilterHandler(_queryContext.getHavingFilter(), dataSchemaForAggregatedResult);
+ }
+ long start = _startMs;
+ ColumnDataType[] resultColumnDataTypes = dataSchema.getColumnDataTypes();
+ List<Object[]> bucketedResult = new ArrayList<>();
+ for (long time = _startMs; time < _endMs; time += _gapfillTimeBucketSize) {
+ int index = findGapfillBucketIndex(time);
+ gapfill(time, bucketedResult, timeBucketedRawRows[index], dataSchema, postGapfillFilterHandler);
+ if (_queryContext.getAggregationFunctions() == null) {
+ for (Object[] row : bucketedResult) {
+ Object[] resultRow = new Object[_sourceColumnIndexForResultSchema.length];
+ for (int i = 0; i < _sourceColumnIndexForResultSchema.length; i++) {
+ resultRow[i] = row[_sourceColumnIndexForResultSchema[i]];
+ }
+ result.add(resultRow);
+ }
+ bucketedResult = new ArrayList<>();
Review comment:
Fixed
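For readers following the diff above: the time-bucket arithmetic (`findGapfillBucketIndex` over the `[_startMs, _endMs)` window) can be sketched as a standalone class. This is an illustrative sketch, not the PR's actual code; the class name and the example timestamps below are made up for demonstration.

```java
// Standalone sketch of the gapfill bucketing arithmetic: a raw row whose
// timestamp is t falls in bucket i, where bucket i covers the half-open
// range [startMs + i * bucketSizeMs, startMs + (i + 1) * bucketSizeMs).
public class GapfillBucketSketch {

    static int findGapfillBucketIndex(long timeMs, long startMs, long bucketSizeMs) {
        return (int) ((timeMs - startMs) / bucketSizeMs);
    }

    public static void main(String[] args) {
        long startMs = 1_640_995_200_000L; // 2022-01-01T00:00:00Z, arbitrary example
        long bucketSizeMs = 60_000L;       // 1-minute gapfill granularity

        // A timestamp 90 seconds after the window start falls in bucket 1,
        // because buckets 0 and 1 cover [0s, 60s) and [60s, 120s).
        System.out.println(findGapfillBucketIndex(startMs + 90_000L, startMs, bucketSizeMs)); // 1
        // A timestamp exactly on the window start falls in bucket 0.
        System.out.println(findGapfillBucketIndex(startMs, startMs, bucketSizeMs)); // 0
    }
}
```

This also makes clear why `_numOfTimeBuckets` is computed as `(_endMs - _startMs) / _gapfillTimeBucketSize`: the end of the window is exclusive.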
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: commits-unsubscribe@pinot.apache.org
For queries about this service, please contact Infrastructure at:
users@infra.apache.org
---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@pinot.apache.org
For additional commands, e-mail: commits-help@pinot.apache.org
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r830751691
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/reduce/filter/RowMatcherFactory.java
##########
@@ -0,0 +1,47 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.core.query.reduce.filter;
+
+import org.apache.pinot.common.request.context.FilterContext;
+
+
+/**
+ * Factory for RowMatcher.
+ */
+public class RowMatcherFactory {
+ private RowMatcherFactory() {
+ }
+
+ /**
+ * Helper method to construct a RowMatcher based on the given filter.
+ */
+ public static RowMatcher getRowMatcher(FilterContext filter, ValueExtractorFactory valueExtractorFactory) {
+ switch (filter.getType()) {
+ case AND:
+ return new AndRowMatcher(filter.getChildren(), valueExtractorFactory);
+ case OR:
+ return new OrRowMatcher(filter.getChildren(), valueExtractorFactory);
+ case PREDICATE:
Review comment:
I prefer to integrate the fix here until your fix is merged in.
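For context on the factory above: it dispatches on the `FilterContext` type to recursively compose AND/OR/predicate matchers. A simplified, hypothetical sketch of that composition pattern (the `RowMatcher` interface is reduced to a single method here, and the helper names are illustrative, not Pinot APIs):

```java
import java.util.List;
import java.util.function.Predicate;

// Simplified sketch of the AND/OR/predicate composition that a
// RowMatcher factory builds from a filter tree.
public class RowMatcherSketch {

    interface RowMatcher {
        boolean matches(Object[] row);
    }

    // AND node: matches only when every child matcher matches.
    static RowMatcher and(List<RowMatcher> children) {
        return row -> children.stream().allMatch(c -> c.matches(row));
    }

    // OR node: matches when any child matcher matches.
    static RowMatcher or(List<RowMatcher> children) {
        return row -> children.stream().anyMatch(c -> c.matches(row));
    }

    // Leaf node: tests a single column value (stands in for a predicate matcher).
    static RowMatcher predicate(int columnIndex, Predicate<Object> test) {
        return row -> test.test(row[columnIndex]);
    }

    public static void main(String[] args) {
        // Filter tree for: (col0 > 5) AND (col1 == "a")
        RowMatcher m = and(List.of(
                predicate(0, v -> (Integer) v > 5),
                predicate(1, "a"::equals)));
        System.out.println(m.matches(new Object[]{10, "a"})); // true
        System.out.println(m.matches(new Object[]{3, "a"}));  // false
    }
}
```

The real factory adds a `ValueExtractorFactory` so leaf matchers can evaluate expressions, not just raw column indexes, but the recursive dispatch shape is the same.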
[GitHub] [pinot] codecov-commenter edited a comment on pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#issuecomment-1013577523
# [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) Report
> Merging [#8029](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (bd1d287) into [master](https://codecov.io/gh/apache/pinot/commit/916d807c8f67b32c1a430692f74134c9c976c33d?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (916d807) will **decrease** coverage by `43.74%`.
> The diff coverage is `18.95%`.
> :exclamation: Current head bd1d287 differs from pull request most recent head f0544c7. Consider uploading reports for the commit f0544c7 to get more accurate results
[![Impacted file tree graph](https://codecov.io/gh/apache/pinot/pull/8029/graphs/tree.svg?width=650&height=150&src=pr&token=4ibza2ugkz&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
```diff
@@             Coverage Diff              @@
##           master    #8029       +/-   ##
=============================================
- Coverage   71.02%   27.28%   -43.75%
=============================================
  Files        1626     1624        -2
  Lines       84929    85257      +328
  Branches    12783    12910      +127
=============================================
- Hits        60325    23263    -37062
- Misses      20462    59809    +39347
+ Partials     4142     2185     -1957
```
| Flag | Coverage Δ | |
|---|---|---|
| integration1 | `?` | |
| integration2 | `27.28% <18.95%> (-0.13%)` | :arrow_down: |
| unittests1 | `?` | |
| unittests2 | `?` | |
Flags with carried forward coverage won't be shown. [Click here](https://docs.codecov.io/docs/carryforward-flags?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#carryforward-flags-in-the-pull-request-comment) to find out more.
| [Impacted Files](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | Coverage Δ | |
|---|---|---|
| [...e/pinot/broker/broker/helix/BaseBrokerStarter.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtYnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9icm9rZXIvYnJva2VyL2hlbGl4L0Jhc2VCcm9rZXJTdGFydGVyLmphdmE=) | `72.97% <ø> (-2.71%)` | :arrow_down: |
| [...roker/requesthandler/BaseBrokerRequestHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtYnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9icm9rZXIvcmVxdWVzdGhhbmRsZXIvQmFzZUJyb2tlclJlcXVlc3RIYW5kbGVyLmphdmE=) | `61.30% <0.00%> (-10.22%)` | :arrow_down: |
| [...roker/requesthandler/GrpcBrokerRequestHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtYnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9icm9rZXIvcmVxdWVzdGhhbmRsZXIvR3JwY0Jyb2tlclJlcXVlc3RIYW5kbGVyLmphdmE=) | `78.57% <ø> (ø)` | |
| [...thandler/SingleConnectionBrokerRequestHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtYnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9icm9rZXIvcmVxdWVzdGhhbmRsZXIvU2luZ2xlQ29ubmVjdGlvbkJyb2tlclJlcXVlc3RIYW5kbGVyLmphdmE=) | `87.03% <ø> (ø)` | |
| [...java/org/apache/pinot/common/config/TlsConfig.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9jb21tb24vY29uZmlnL1Rsc0NvbmZpZy5qYXZh) | `97.50% <ø> (ø)` | |
| [...n/java/org/apache/pinot/common/utils/TlsUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9jb21tb24vdXRpbHMvVGxzVXRpbHMuamF2YQ==) | `77.64% <ø> (ø)` | |
| [...apache/pinot/controller/BaseControllerStarter.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29udHJvbGxlci9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29udHJvbGxlci9CYXNlQ29udHJvbGxlclN0YXJ0ZXIuamF2YQ==) | `74.84% <ø> (-7.33%)` | :arrow_down: |
| [...ache/pinot/core/plan/GapfillSelectionPlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL0dhcGZpbGxTZWxlY3Rpb25QbGFuTm9kZS5qYXZh) | `0.00% <0.00%> (ø)` | |
| [...query/reduce/ColumnDataToBlockValSetConverter.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvQ29sdW1uRGF0YVRvQmxvY2tWYWxTZXRDb252ZXJ0ZXIuamF2YQ==) | `0.00% <0.00%> (ø)` | |
| [...query/reduce/PreAggregateGapfillFilterHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUHJlQWdncmVnYXRlR2FwZmlsbEZpbHRlckhhbmRsZXIuamF2YQ==) | `0.00% <0.00%> (ø)` | |
| ... and [1224 more](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Last update [916d807...f0544c7](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
[GitHub] [pinot] codecov-commenter edited a comment on pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#issuecomment-1013577523
# [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) Report
> Merging [#8029](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (57e966b) into [master](https://codecov.io/gh/apache/pinot/commit/fb572bd0aba20d2b8a83320df6dd24cb0c654b30?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (fb572bd) will **decrease** coverage by `56.27%`.
> The diff coverage is `0.00%`.
[![Impacted file tree graph](https://codecov.io/gh/apache/pinot/pull/8029/graphs/tree.svg?width=650&height=150&src=pr&token=4ibza2ugkz&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
```diff
@@             Coverage Diff              @@
##            master    #8029       +/-   ##
=============================================
- Coverage     70.39%   14.11%   -56.28%
+ Complexity     4308       81     -4227
=============================================
  Files          1623     1591       -32
  Lines         84365    83239     -1126
  Branches      12657    12635       -22
=============================================
- Hits          59386    11753    -47633
- Misses        20876    70604    +49728
+ Partials       4103      882     -3221
```
| Flag | Coverage Δ | |
|---|---|---|
| integration1 | `?` | |
| unittests1 | `?` | |
| unittests2 | `14.11% <0.00%> (-0.10%)` | :arrow_down: |
Flags with carried forward coverage won't be shown. [Click here](https://docs.codecov.io/docs/carryforward-flags?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#carryforward-flags-in-the-pull-request-comment) to find out more.
| [Impacted Files](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | Coverage Δ | |
|---|---|---|
| [...org/apache/pinot/sql/parsers/CalciteSqlParser.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9zcWwvcGFyc2Vycy9DYWxjaXRlU3FsUGFyc2VyLmphdmE=) | `0.00% <0.00%> (-87.93%)` | :arrow_down: |
| [...org/apache/pinot/core/data/table/IndexedTable.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9kYXRhL3RhYmxlL0luZGV4ZWRUYWJsZS5qYXZh) | `0.00% <0.00%> (-84.75%)` | :arrow_down: |
| [.../combine/GapfillGroupByOrderByCombineOperator.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9vcGVyYXRvci9jb21iaW5lL0dhcGZpbGxHcm91cEJ5T3JkZXJCeUNvbWJpbmVPcGVyYXRvci5qYXZh) | `0.00% <0.00%> (ø)` | |
| [...va/org/apache/pinot/core/plan/CombinePlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL0NvbWJpbmVQbGFuTm9kZS5qYXZh) | `0.00% <0.00%> (-83.93%)` | :arrow_down: |
| [...plan/GapfillAggregationGroupByOrderByPlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL0dhcGZpbGxBZ2dyZWdhdGlvbkdyb3VwQnlPcmRlckJ5UGxhbk5vZGUuamF2YQ==) | `0.00% <0.00%> (ø)` | |
| [...he/pinot/core/plan/GapfillAggregationPlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL0dhcGZpbGxBZ2dyZWdhdGlvblBsYW5Ob2RlLmphdmE=) | `0.00% <0.00%> (ø)` | |
| [...ache/pinot/core/plan/GapfillSelectionPlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL0dhcGZpbGxTZWxlY3Rpb25QbGFuTm9kZS5qYXZh) | `0.00% <0.00%> (ø)` | |
| [...pinot/core/plan/maker/InstancePlanMakerImplV2.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL21ha2VyL0luc3RhbmNlUGxhbk1ha2VySW1wbFYyLmphdmE=) | `0.00% <0.00%> (-62.63%)` | :arrow_down: |
| [...query/reduce/ColumnDataToBlockValSetConverter.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvQ29sdW1uRGF0YVRvQmxvY2tWYWxTZXRDb252ZXJ0ZXIuamF2YQ==) | `0.00% <0.00%> (ø)` | |
| [...e/pinot/core/query/reduce/HavingFilterHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvSGF2aW5nRmlsdGVySGFuZGxlci5qYXZh) | `0.00% <0.00%> (-91.31%)` | :arrow_down: |
| ... and [1295 more](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Last update [fb572bd...57e966b](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
[GitHub] [pinot] codecov-commenter edited a comment on pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#issuecomment-1013577523
# [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) Report
> Merging [#8029](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (dc923b1) into [master](https://codecov.io/gh/apache/pinot/commit/21632dadb8cd2d8b77aec523a758d73a64f70b07?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (21632da) will **decrease** coverage by `56.98%`.
> The diff coverage is `0.00%`.
[![Impacted file tree graph](https://codecov.io/gh/apache/pinot/pull/8029/graphs/tree.svg?width=650&height=150&src=pr&token=4ibza2ugkz&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
```diff
@@             Coverage Diff              @@
##            master    #8029       +/-   ##
=============================================
- Coverage     71.00%   14.01%   -56.99%
+ Complexity     4320       81     -4239
=============================================
  Files          1629     1594       -35
  Lines         85132    83884     -1248
  Branches      12812    12766       -46
=============================================
- Hits          60445    11754    -48691
- Misses        20526    71250    +50724
+ Partials       4161      880     -3281
```
| Flag | Coverage Δ | |
|---|---|---|
| integration1 | `?` | |
| integration2 | `?` | |
| unittests1 | `?` | |
| unittests2 | `14.01% <0.00%> (-0.14%)` | :arrow_down: |
Flags with carried forward coverage won't be shown. [Click here](https://docs.codecov.io/docs/carryforward-flags?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#carryforward-flags-in-the-pull-request-comment) to find out more.
| [Impacted Files](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | Coverage Δ | |
|---|---|---|
| [...roker/requesthandler/BaseBrokerRequestHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtYnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9icm9rZXIvcmVxdWVzdGhhbmRsZXIvQmFzZUJyb2tlclJlcXVlc3RIYW5kbGVyLmphdmE=) | `24.41% <0.00%> (-47.26%)` | :arrow_down: |
| [...org/apache/pinot/sql/parsers/CalciteSqlParser.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9zcWwvcGFyc2Vycy9DYWxjaXRlU3FsUGFyc2VyLmphdmE=) | `0.00% <0.00%> (-87.93%)` | :arrow_down: |
| [...org/apache/pinot/core/data/table/IndexedTable.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9kYXRhL3RhYmxlL0luZGV4ZWRUYWJsZS5qYXZh) | `0.00% <0.00%> (-84.75%)` | :arrow_down: |
| [...va/org/apache/pinot/core/plan/CombinePlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL0NvbWJpbmVQbGFuTm9kZS5qYXZh) | `0.00% <0.00%> (-87.50%)` | :arrow_down: |
| [...ache/pinot/core/plan/GapfillSelectionPlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL0dhcGZpbGxTZWxlY3Rpb25QbGFuTm9kZS5qYXZh) | `0.00% <0.00%> (ø)` | |
| [...pinot/core/plan/maker/InstancePlanMakerImplV2.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL21ha2VyL0luc3RhbmNlUGxhbk1ha2VySW1wbFYyLmphdmE=) | `0.00% <0.00%> (-76.77%)` | :arrow_down: |
| [...query/reduce/ColumnDataToBlockValSetConverter.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvQ29sdW1uRGF0YVRvQmxvY2tWYWxTZXRDb252ZXJ0ZXIuamF2YQ==) | `0.00% <0.00%> (ø)` | |
| [...e/pinot/core/query/reduce/HavingFilterHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvSGF2aW5nRmlsdGVySGFuZGxlci5qYXZh) | `0.00% <0.00%> (-91.31%)` | :arrow_down: |
| [...inot/core/query/reduce/PostAggregationHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUG9zdEFnZ3JlZ2F0aW9uSGFuZGxlci5qYXZh) | `0.00% <0.00%> (-91.77%)` | :arrow_down: |
| [...query/reduce/PreAggregateGapfillFilterHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUHJlQWdncmVnYXRlR2FwZmlsbEZpbHRlckhhbmRsZXIuamF2YQ==) | `0.00% <0.00%> (ø)` | |
| ... and [1320 more](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Last update [21632da...dc923b1](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: commits-unsubscribe@pinot.apache.org
For queries about this service, please contact Infrastructure at:
users@infra.apache.org
---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@pinot.apache.org
For additional commands, e-mail: commits-help@pinot.apache.org
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r814082858
##########
File path: pinot-common/src/main/java/org/apache/pinot/sql/parsers/CalciteSqlParser.java
##########
@@ -325,32 +342,30 @@ private static void setOptions(PinotQuery pinotQuery, List<String> optionsStatem
pinotQuery.setQueryOptions(options);
}
- private static PinotQuery compileCalciteSqlToPinotQuery(String sql) {
- SqlParser sqlParser = SqlParser.create(sql, PARSER_CONFIG);
- SqlNode sqlNode;
- try {
- sqlNode = sqlParser.parseQuery();
- } catch (SqlParseException e) {
- throw new SqlCompilationException("Caught exception while parsing query: " + sql, e);
- }
-
- PinotQuery pinotQuery = new PinotQuery();
- if (sqlNode instanceof SqlExplain) {
- // Extract sql node for the query
- sqlNode = ((SqlExplain) sqlNode).getExplicandum();
- pinotQuery.setExplain(true);
- }
- SqlSelect selectNode;
+ private static SqlSelect getSelectNode(SqlNode sqlNode) {
+ SqlSelect selectNode = null;
if (sqlNode instanceof SqlOrderBy) {
// Store order-by info into the select sql node
SqlOrderBy orderByNode = (SqlOrderBy) sqlNode;
selectNode = (SqlSelect) orderByNode.query;
selectNode.setOrderBy(orderByNode.orderList);
selectNode.setFetch(orderByNode.fetch);
selectNode.setOffset(orderByNode.offset);
- } else {
+ } else if (sqlNode instanceof SqlSelect) {
Review comment:
Revert the change. Fixed.
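For context, the diff quoted above extracts the order-by/select handling into a `getSelectNode(SqlNode)` helper that unwraps a `SqlOrderBy` into its inner `SqlSelect`, passes a `SqlSelect` through unchanged, and (per the review discussion) leaves other node types to be rejected. A minimal self-contained sketch of that dispatch pattern, using hypothetical stand-in classes rather than the real Calcite `SqlNode` hierarchy, might look like:

```java
// Illustrative sketch only: these stand-in classes mimic the shape of the
// Calcite SqlNode hierarchy so the dispatch can be shown without the
// Calcite dependency. They are NOT the real Calcite types.
class SqlNode {}

class SqlSelect extends SqlNode {
  SqlNode orderBy; // order-by info folded in from a wrapping SqlOrderBy
}

class SqlOrderBy extends SqlNode {
  final SqlSelect query;   // the wrapped select
  final SqlNode orderList;

  SqlOrderBy(SqlSelect query, SqlNode orderList) {
    this.query = query;
    this.orderList = orderList;
  }
}

public class GetSelectNodeSketch {
  // Mirrors the extracted helper: unwrap SqlOrderBy into its inner
  // SqlSelect (storing the order-by info on it), pass a SqlSelect
  // through, and reject any other node type.
  static SqlSelect getSelectNode(SqlNode sqlNode) {
    if (sqlNode instanceof SqlOrderBy) {
      SqlOrderBy orderByNode = (SqlOrderBy) sqlNode;
      SqlSelect selectNode = orderByNode.query;
      selectNode.orderBy = orderByNode.orderList;
      return selectNode;
    } else if (sqlNode instanceof SqlSelect) {
      return (SqlSelect) sqlNode;
    }
    throw new IllegalArgumentException(
        "Unsupported SQL node type: " + sqlNode.getClass().getSimpleName());
  }

  public static void main(String[] args) {
    SqlSelect select = new SqlSelect();
    SqlOrderBy wrapped = new SqlOrderBy(select, new SqlNode());
    SqlSelect unwrapped = getSelectNode(wrapped);
    // Unwrapping returns the inner select with its order-by attached.
    System.out.println(unwrapped == select && unwrapped.orderBy != null);
  }
}
```

The design point the reviewers are converging on is that an explicit `else if (sqlNode instanceof SqlSelect)` with a terminal rejection makes unsupported node types fail loudly instead of being cast blindly.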
[GitHub] [pinot] codecov-commenter edited a comment on pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#issuecomment-1013577523
# [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) Report
> Merging [#8029](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (2e4a1ca) into [master](https://codecov.io/gh/apache/pinot/commit/21632dadb8cd2d8b77aec523a758d73a64f70b07?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (21632da) will **decrease** coverage by `3.91%`.
> The diff coverage is `81.45%`.
> :exclamation: Current head 2e4a1ca differs from pull request most recent head 69b8c46. Consider uploading reports for the commit 69b8c46 to get more accurate results
[![Impacted file tree graph](https://codecov.io/gh/apache/pinot/pull/8029/graphs/tree.svg?width=650&height=150&src=pr&token=4ibza2ugkz&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
```diff
@@ Coverage Diff @@
## master #8029 +/- ##
============================================
- Coverage 71.00% 67.08% -3.92%
+ Complexity 4320 4158 -162
============================================
Files 1629 1239 -390
Lines 85132 62731 -22401
Branches 12812 9817 -2995
============================================
- Hits 60445 42085 -18360
+ Misses 20526 17638 -2888
+ Partials 4161 3008 -1153
```
| Flag | Coverage Δ | |
|---|---|---|
| integration1 | `?` | |
| integration2 | `?` | |
| unittests1 | `67.08% <81.45%> (-0.29%)` | :arrow_down: |
| unittests2 | `?` | |
Flags with carried forward coverage won't be shown. [Click here](https://docs.codecov.io/docs/carryforward-flags?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#carryforward-flags-in-the-pull-request-comment) to find out more.
| [Impacted Files](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | Coverage Δ | |
|---|---|---|
| [...on/src/main/java/org/apache/pinot/serde/SerDe.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9zZXJkZS9TZXJEZS5qYXZh) | `0.00% <0.00%> (ø)` | |
| [...t/core/query/selection/SelectionOperatorUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9zZWxlY3Rpb24vU2VsZWN0aW9uT3BlcmF0b3JVdGlscy5qYXZh) | `86.08% <0.00%> (-6.10%)` | :arrow_down: |
| [...query/reduce/ColumnDataToBlockValSetConverter.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvQ29sdW1uRGF0YVRvQmxvY2tWYWxTZXRDb252ZXJ0ZXIuamF2YQ==) | `17.30% <17.30%> (ø)` | |
| [...e/pinot/core/transport/InstanceRequestHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS90cmFuc3BvcnQvSW5zdGFuY2VSZXF1ZXN0SGFuZGxlci5qYXZh) | `52.43% <50.00%> (-8.33%)` | :arrow_down: |
| [...rg/apache/pinot/core/transport/ServerChannels.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS90cmFuc3BvcnQvU2VydmVyQ2hhbm5lbHMuamF2YQ==) | `73.77% <50.00%> (-16.07%)` | :arrow_down: |
| [...org/apache/pinot/sql/parsers/CalciteSqlParser.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9zcWwvcGFyc2Vycy9DYWxjaXRlU3FsUGFyc2VyLmphdmE=) | `85.44% <65.51%> (-2.48%)` | :arrow_down: |
| [...che/pinot/core/query/reduce/filter/RowMatcher.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvZmlsdGVyL1Jvd01hdGNoZXIuamF2YQ==) | `66.66% <66.66%> (ø)` | |
| [.../java/org/apache/pinot/core/util/GapfillUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS91dGlsL0dhcGZpbGxVdGlscy5qYXZh) | `75.00% <81.15%> (+11.36%)` | :arrow_up: |
| [...va/org/apache/pinot/core/plan/CombinePlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL0NvbWJpbmVQbGFuTm9kZS5qYXZh) | `83.33% <81.81%> (-4.17%)` | :arrow_down: |
| [...inot/core/query/reduce/PostAggregationHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUG9zdEFnZ3JlZ2F0aW9uSGFuZGxlci5qYXZh) | `92.00% <83.33%> (+0.23%)` | :arrow_up: |
| ... and [651 more](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Last update [21632da...69b8c46](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
[GitHub] [pinot] codecov-commenter edited a comment on pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#issuecomment-1013577523
# [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) Report
> Merging [#8029](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (69b8c46) into [master](https://codecov.io/gh/apache/pinot/commit/21632dadb8cd2d8b77aec523a758d73a64f70b07?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (21632da) will **decrease** coverage by `56.98%`.
> The diff coverage is `0.00%`.
[![Impacted file tree graph](https://codecov.io/gh/apache/pinot/pull/8029/graphs/tree.svg?width=650&height=150&src=pr&token=4ibza2ugkz&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
```diff
@@ Coverage Diff @@
## master #8029 +/- ##
=============================================
- Coverage 71.00% 14.01% -56.99%
+ Complexity 4320 81 -4239
=============================================
Files 1629 1594 -35
Lines 85132 83953 -1179
Branches 12812 12780 -32
=============================================
- Hits 60445 11770 -48675
- Misses 20526 71307 +50781
+ Partials 4161 876 -3285
```
| Flag | Coverage Δ | |
|---|---|---|
| integration1 | `?` | |
| integration2 | `?` | |
| unittests1 | `?` | |
| unittests2 | `14.01% <0.00%> (-0.13%)` | :arrow_down: |
Flags with carried forward coverage won't be shown. [Click here](https://docs.codecov.io/docs/carryforward-flags?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#carryforward-flags-in-the-pull-request-comment) to find out more.
| [Impacted Files](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | Coverage Δ | |
|---|---|---|
| [...roker/requesthandler/BaseBrokerRequestHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtYnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9icm9rZXIvcmVxdWVzdGhhbmRsZXIvQmFzZUJyb2tlclJlcXVlc3RIYW5kbGVyLmphdmE=) | `24.41% <0.00%> (-47.26%)` | :arrow_down: |
| [...org/apache/pinot/sql/parsers/CalciteSqlParser.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9zcWwvcGFyc2Vycy9DYWxjaXRlU3FsUGFyc2VyLmphdmE=) | `0.00% <0.00%> (-87.93%)` | :arrow_down: |
| [...org/apache/pinot/core/data/table/IndexedTable.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9kYXRhL3RhYmxlL0luZGV4ZWRUYWJsZS5qYXZh) | `0.00% <0.00%> (-84.75%)` | :arrow_down: |
| [...va/org/apache/pinot/core/plan/CombinePlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL0NvbWJpbmVQbGFuTm9kZS5qYXZh) | `0.00% <0.00%> (-87.50%)` | :arrow_down: |
| [...ache/pinot/core/plan/GapfillSelectionPlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL0dhcGZpbGxTZWxlY3Rpb25QbGFuTm9kZS5qYXZh) | `0.00% <0.00%> (ø)` | |
| [...pinot/core/plan/maker/InstancePlanMakerImplV2.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL21ha2VyL0luc3RhbmNlUGxhbk1ha2VySW1wbFYyLmphdmE=) | `0.00% <0.00%> (-76.77%)` | :arrow_down: |
| [...query/reduce/ColumnDataToBlockValSetConverter.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvQ29sdW1uRGF0YVRvQmxvY2tWYWxTZXRDb252ZXJ0ZXIuamF2YQ==) | `0.00% <0.00%> (ø)` | |
| [.../pinot/core/query/reduce/GapfillFilterHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvR2FwZmlsbEZpbHRlckhhbmRsZXIuamF2YQ==) | `0.00% <0.00%> (ø)` | |
| [...e/pinot/core/query/reduce/HavingFilterHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvSGF2aW5nRmlsdGVySGFuZGxlci5qYXZh) | `0.00% <0.00%> (-91.31%)` | :arrow_down: |
| [...inot/core/query/reduce/PostAggregationHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUG9zdEFnZ3JlZ2F0aW9uSGFuZGxlci5qYXZh) | `0.00% <0.00%> (-91.77%)` | :arrow_down: |
| ... and [1323 more](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Last update [21632da...69b8c46](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
[GitHub] [pinot] codecov-commenter edited a comment on pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#issuecomment-1013577523
# [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) Report
> Merging [#8029](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (3d822b7) into [master](https://codecov.io/gh/apache/pinot/commit/21632dadb8cd2d8b77aec523a758d73a64f70b07?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (21632da) will **decrease** coverage by `56.98%`.
> The diff coverage is `0.00%`.
[![Impacted file tree graph](https://codecov.io/gh/apache/pinot/pull/8029/graphs/tree.svg?width=650&height=150&src=pr&token=4ibza2ugkz&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
```diff
@@ Coverage Diff @@
## master #8029 +/- ##
=============================================
- Coverage 71.00% 14.01% -56.99%
+ Complexity 4320 81 -4239
=============================================
Files 1629 1594 -35
Lines 85132 83956 -1176
Branches 12812 12779 -33
=============================================
- Hits 60445 11769 -48676
- Misses 20526 71311 +50785
+ Partials 4161 876 -3285
```
| Flag | Coverage Δ | |
|---|---|---|
| integration1 | `?` | |
| integration2 | `?` | |
| unittests1 | `?` | |
| unittests2 | `14.01% <0.00%> (-0.14%)` | :arrow_down: |
Flags with carried forward coverage won't be shown. [Click here](https://docs.codecov.io/docs/carryforward-flags?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#carryforward-flags-in-the-pull-request-comment) to find out more.
| [Impacted Files](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | Coverage Δ | |
|---|---|---|
| [...roker/requesthandler/BaseBrokerRequestHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtYnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9icm9rZXIvcmVxdWVzdGhhbmRsZXIvQmFzZUJyb2tlclJlcXVlc3RIYW5kbGVyLmphdmE=) | `24.41% <0.00%> (-47.26%)` | :arrow_down: |
| [...org/apache/pinot/sql/parsers/CalciteSqlParser.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9zcWwvcGFyc2Vycy9DYWxjaXRlU3FsUGFyc2VyLmphdmE=) | `0.00% <0.00%> (-87.93%)` | :arrow_down: |
| [...org/apache/pinot/core/data/table/IndexedTable.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9kYXRhL3RhYmxlL0luZGV4ZWRUYWJsZS5qYXZh) | `0.00% <0.00%> (-84.75%)` | :arrow_down: |
| [...va/org/apache/pinot/core/plan/CombinePlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL0NvbWJpbmVQbGFuTm9kZS5qYXZh) | `0.00% <0.00%> (-87.50%)` | :arrow_down: |
| [...ache/pinot/core/plan/GapfillSelectionPlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL0dhcGZpbGxTZWxlY3Rpb25QbGFuTm9kZS5qYXZh) | `0.00% <0.00%> (ø)` | |
| [...pinot/core/plan/maker/InstancePlanMakerImplV2.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL21ha2VyL0luc3RhbmNlUGxhbk1ha2VySW1wbFYyLmphdmE=) | `0.00% <0.00%> (-76.77%)` | :arrow_down: |
| [...query/reduce/ColumnDataToBlockValSetConverter.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvQ29sdW1uRGF0YVRvQmxvY2tWYWxTZXRDb252ZXJ0ZXIuamF2YQ==) | `0.00% <0.00%> (ø)` | |
| [.../pinot/core/query/reduce/GapfillFilterHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvR2FwZmlsbEZpbHRlckhhbmRsZXIuamF2YQ==) | `0.00% <0.00%> (ø)` | |
| [...e/pinot/core/query/reduce/HavingFilterHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvSGF2aW5nRmlsdGVySGFuZGxlci5qYXZh) | `0.00% <0.00%> (-91.31%)` | :arrow_down: |
| [...inot/core/query/reduce/PostAggregationHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUG9zdEFnZ3JlZ2F0aW9uSGFuZGxlci5qYXZh) | `0.00% <0.00%> (-91.77%)` | :arrow_down: |
| ... and [1323 more](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Last update [21632da...3d822b7](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r815391461
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/reduce/PreAggregationGapFillDataTableReducer.java
##########
@@ -0,0 +1,775 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.core.query.reduce;
+
+import com.google.common.base.Preconditions;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.Comparator;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.Iterator;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+import java.util.PriorityQueue;
+import java.util.Set;
+import java.util.concurrent.CountDownLatch;
+import java.util.concurrent.Future;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.TimeoutException;
+import org.apache.pinot.common.exception.QueryException;
+import org.apache.pinot.common.metrics.BrokerMetrics;
+import org.apache.pinot.common.request.context.ExpressionContext;
+import org.apache.pinot.common.request.context.FunctionContext;
+import org.apache.pinot.common.response.broker.BrokerResponseNative;
+import org.apache.pinot.common.response.broker.QueryProcessingException;
+import org.apache.pinot.common.response.broker.ResultTable;
+import org.apache.pinot.common.utils.DataSchema;
+import org.apache.pinot.common.utils.DataSchema.ColumnDataType;
+import org.apache.pinot.common.utils.DataTable;
+import org.apache.pinot.core.common.BlockValSet;
+import org.apache.pinot.core.data.table.ConcurrentIndexedTable;
+import org.apache.pinot.core.data.table.IndexedTable;
+import org.apache.pinot.core.data.table.Key;
+import org.apache.pinot.core.data.table.Record;
+import org.apache.pinot.core.data.table.SimpleIndexedTable;
+import org.apache.pinot.core.data.table.UnboundedConcurrentIndexedTable;
+import org.apache.pinot.core.operator.combine.GroupByOrderByCombineOperator;
+import org.apache.pinot.core.query.aggregation.function.AggregationFunction;
+import org.apache.pinot.core.query.aggregation.function.AggregationFunctionFactory;
+import org.apache.pinot.core.query.aggregation.groupby.GroupByResultHolder;
+import org.apache.pinot.core.query.request.context.QueryContext;
+import org.apache.pinot.core.query.selection.SelectionOperatorUtils;
+import org.apache.pinot.core.transport.ServerRoutingInstance;
+import org.apache.pinot.core.util.GapfillUtils;
+import org.apache.pinot.core.util.GroupByUtils;
+import org.apache.pinot.core.util.trace.TraceRunnable;
+import org.apache.pinot.spi.data.DateTimeFormatSpec;
+import org.apache.pinot.spi.data.DateTimeGranularitySpec;
+
+/**
+ * Helper class to reduce and set Aggregation results into the BrokerResponseNative
+ */
+@SuppressWarnings({"rawtypes", "unchecked"})
+public class PreAggregationGapFillDataTableReducer implements DataTableReducer {
+ private static final int MIN_DATA_TABLES_FOR_CONCURRENT_REDUCE = 2; // TBD, find a better value.
+
+ private final QueryContext _queryContext;
+
+ private final int _limitForAggregatedResult;
+ private int _limitForGapfilledResult;
+
+ private final DateTimeGranularitySpec _dateTimeGranularity;
+ private final DateTimeFormatSpec _dateTimeFormatter;
+ private final long _startMs;
+ private final long _endMs;
+ private final long _timeBucketSize;
+
+ private final List<Integer> _groupByKeyIndexes;
+ private boolean [] _isGroupBySelections;
+ private final Set<Key> _groupByKeys;
+ private final Map<Key, Object[]> _previousByGroupKey;
+ private final Map<String, ExpressionContext> _fillExpressions;
+ private final List<ExpressionContext> _timeSeries;
+ private final GapfillUtils.GapfillType _gapfillType;
+
+ PreAggregationGapFillDataTableReducer(QueryContext queryContext) {
+ _queryContext = queryContext;
+ _gapfillType = queryContext.getGapfillType();
+ _limitForAggregatedResult = queryContext.getLimit();
+ if (_gapfillType == GapfillUtils.GapfillType.AGGREGATE_GAP_FILL
+ || _gapfillType == GapfillUtils.GapfillType.GAP_FILL) {
+ _limitForGapfilledResult = queryContext.getLimit();
+ } else {
+ _limitForGapfilledResult = queryContext.getSubQueryContext().getLimit();
+ }
+
+ ExpressionContext gapFillSelection = GapfillUtils.getGapfillExpressionContext(queryContext);
+
+ Preconditions.checkArgument(
+ gapFillSelection != null && gapFillSelection.getFunction() != null, "Gapfill expression should be a function.");
+ List<ExpressionContext> args = gapFillSelection.getFunction().getArguments();
+ Preconditions.checkArgument(
+ args.size() > 5, "PreAggregateGapFill does not have the correct number of arguments.");
+ Preconditions.checkArgument(
+ args.get(1).getLiteral() != null, "The second argument of PreAggregateGapFill should be the time formatter.");
+ Preconditions.checkArgument(
+ args.get(2).getLiteral() != null, "The third argument of PreAggregateGapFill should be the start time.");
+ Preconditions.checkArgument(
+ args.get(3).getLiteral() != null, "The fourth argument of PreAggregateGapFill should be the end time.");
+ Preconditions.checkArgument(
+ args.get(4).getLiteral() != null, "The fifth argument of PreAggregateGapFill should be the time bucket size.");
Review comment:
Fixed
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r806415880
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/plan/GapfillAggregationPlanNode.java
##########
@@ -0,0 +1,175 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.core.plan;
+
+import com.google.common.base.Preconditions;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import org.apache.pinot.common.request.context.ExpressionContext;
+import org.apache.pinot.core.common.Operator;
+import org.apache.pinot.core.operator.blocks.IntermediateResultsBlock;
+import org.apache.pinot.core.operator.filter.BaseFilterOperator;
+import org.apache.pinot.core.operator.query.AggregationOperator;
+import org.apache.pinot.core.operator.query.DictionaryBasedAggregationOperator;
+import org.apache.pinot.core.operator.query.MetadataBasedAggregationOperator;
+import org.apache.pinot.core.operator.transform.TransformOperator;
+import org.apache.pinot.core.query.aggregation.function.AggregationFunction;
+import org.apache.pinot.core.query.aggregation.function.AggregationFunctionUtils;
+import org.apache.pinot.core.query.request.context.QueryContext;
+import org.apache.pinot.core.startree.CompositePredicateEvaluator;
+import org.apache.pinot.core.startree.StarTreeUtils;
+import org.apache.pinot.core.startree.plan.StarTreeTransformPlanNode;
+import org.apache.pinot.core.util.GapfillUtils;
+import org.apache.pinot.segment.spi.AggregationFunctionType;
+import org.apache.pinot.segment.spi.IndexSegment;
+import org.apache.pinot.segment.spi.index.reader.Dictionary;
+import org.apache.pinot.segment.spi.index.startree.AggregationFunctionColumnPair;
+import org.apache.pinot.segment.spi.index.startree.StarTreeV2;
+
+
+/**
+ * The <code>GapfillAggregationPlanNode</code> class provides the execution plan for gapfill aggregation only query on
+ * a single segment.
+ */
+@SuppressWarnings("rawtypes")
+public class GapfillAggregationPlanNode implements PlanNode {
Review comment:
Removed
[GitHub] [pinot] codecov-commenter edited a comment on pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#issuecomment-1013577523
[GitHub] [pinot] amrishlal commented on a change in pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
amrishlal commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r815203052
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/reduce/PreAggregationGapFillDataTableReducer.java
##########
@@ -0,0 +1,775 @@
+/**
+ * Helper class to reduce and set Aggregation results into the BrokerResponseNative
+ */
Review comment:
/** Helper class to reduce and set gap fill results into the BrokerResponseNative */
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/reduce/PreAggregationGapFillDataTableReducer.java
##########
@@ -0,0 +1,775 @@
+/**
+ * Helper class to reduce and set Aggregation results into the BrokerResponseNative
+ */
+@SuppressWarnings({"rawtypes", "unchecked"})
+public class PreAggregationGapFillDataTableReducer implements DataTableReducer {
+ private static final int MIN_DATA_TABLES_FOR_CONCURRENT_REDUCE = 2; // TBD, find a better value.
+
+ private final QueryContext _queryContext;
+
+ private final int _limitForAggregatedResult;
+ private int _limitForGapfilledResult;
+
+ private final DateTimeGranularitySpec _dateTimeGranularity;
+ private final DateTimeFormatSpec _dateTimeFormatter;
+ private final long _startMs;
+ private final long _endMs;
+ private final long _timeBucketSize;
+
+ private final List<Integer> _groupByKeyIndexes;
+ private boolean [] _isGroupBySelections;
+ private final Set<Key> _groupByKeys;
+ private final Map<Key, Object[]> _previousByGroupKey;
+ private final Map<String, ExpressionContext> _fillExpressions;
+ private final List<ExpressionContext> _timeSeries;
+ private final GapfillUtils.GapfillType _gapfillType;
+
+ PreAggregationGapFillDataTableReducer(QueryContext queryContext) {
+ _queryContext = queryContext;
+ _gapfillType = queryContext.getGapfillType();
+ _limitForAggregatedResult = queryContext.getLimit();
+ if (_gapfillType == GapfillUtils.GapfillType.AGGREGATE_GAP_FILL
+ || _gapfillType == GapfillUtils.GapfillType.GAP_FILL) {
+ _limitForGapfilledResult = queryContext.getLimit();
+ } else {
+ _limitForGapfilledResult = queryContext.getSubQueryContext().getLimit();
+ }
+
+ ExpressionContext gapFillSelection = GapfillUtils.getGapfillExpressionContext(queryContext);
+
+ Preconditions.checkArgument(
+ gapFillSelection != null && gapFillSelection.getFunction() != null, "Gapfill expression should be a function.");
+ List<ExpressionContext> args = gapFillSelection.getFunction().getArguments();
+ Preconditions.checkArgument(
+ args.size() > 5, "PreAggregateGapFill does not have the correct number of arguments.");
+ Preconditions.checkArgument(
+ args.get(1).getLiteral() != null, "The second argument of PreAggregateGapFill should be the time formatter.");
+ Preconditions.checkArgument(
+ args.get(2).getLiteral() != null, "The third argument of PreAggregateGapFill should be the start time.");
+ Preconditions.checkArgument(
+ args.get(3).getLiteral() != null, "The fourth argument of PreAggregateGapFill should be the end time.");
+ Preconditions.checkArgument(
+ args.get(4).getLiteral() != null, "The fifth argument of PreAggregateGapFill should be the time bucket size.");
Review comment:
I am wondering if these checks are needed, because by the time we get to this part, the user query would already have been validated at parsing/compilation time. If they are still needed, can these checks be moved to parsing/compilation time?
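To make the suggestion concrete (this is not part of the PR): the same literal checks could be hoisted into a small validator invoked while the query is being compiled, so a malformed gapfill call fails before reaching the broker reduce phase. The sketch below uses hypothetical names (`GapfillArgumentValidator`, a plain `List<String>` of resolved literals standing in for Pinot's `ExpressionContext` arguments, with `null` meaning "not a literal"); it is an illustration under those assumptions, not the actual Pinot API.

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch: the literal checks from the reducer constructor,
// pulled into a standalone validator that could run once at
// parsing/compilation time. A null entry stands in for a non-literal
// argument, mirroring getLiteral() returning null in the quoted diff.
class GapfillArgumentValidator {

  // Assumed argument layout from the quoted diff:
  // 0: time column, 1: time format, 2: start time, 3: end time,
  // 4: time bucket size, 5+: fill/limit expressions.
  static void validate(List<String> literalArgs) {
    if (literalArgs.size() <= 5) {
      throw new IllegalArgumentException(
          "PreAggregateGapFill does not have the correct number of arguments.");
    }
    String[] expected = {"time format", "start time", "end time", "time bucket size"};
    for (int i = 1; i <= 4; i++) {
      if (literalArgs.get(i) == null) {
        throw new IllegalArgumentException(
            "Argument " + (i + 1) + " of PreAggregateGapFill should be the "
                + expected[i - 1] + " literal.");
      }
    }
  }

  // Convenience wrapper that reports validity instead of throwing.
  static boolean isValid(List<String> literalArgs) {
    try {
      validate(literalArgs);
      return true;
    } catch (IllegalArgumentException e) {
      return false;
    }
  }
}
```

If the compilation path already rejects non-literal or missing arguments this way, the reducer constructor could drop its Preconditions, or keep them only as cheap defensive assertions.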
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/reduce/PreAggregationGapFillDataTableReducer.java
##########
@@ -0,0 +1,775 @@
+/**
+ * Helper class to reduce and set Aggregation results into the BrokerResponseNative
+ */
+@SuppressWarnings({"rawtypes", "unchecked"})
+public class PreAggregationGapFillDataTableReducer implements DataTableReducer {
+ private static final int MIN_DATA_TABLES_FOR_CONCURRENT_REDUCE = 2; // TBD, find a better value.
+
+ private final QueryContext _queryContext;
+
+ private final int _limitForAggregatedResult;
+ private int _limitForGapfilledResult;
+
+ private final DateTimeGranularitySpec _dateTimeGranularity;
+ private final DateTimeFormatSpec _dateTimeFormatter;
+ private final long _startMs;
+ private final long _endMs;
+ private final long _timeBucketSize;
+
+ private final List<Integer> _groupByKeyIndexes;
+ private boolean [] _isGroupBySelections;
+ private final Set<Key> _groupByKeys;
+ private final Map<Key, Object[]> _previousByGroupKey;
+ private final Map<String, ExpressionContext> _fillExpressions;
+ private final List<ExpressionContext> _timeSeries;
+ private final GapfillUtils.GapfillType _gapfillType;
+
+ PreAggregationGapFillDataTableReducer(QueryContext queryContext) {
+ _queryContext = queryContext;
+ _gapfillType = queryContext.getGapfillType();
+ _limitForAggregatedResult = queryContext.getLimit();
+ if (_gapfillType == GapfillUtils.GapfillType.AGGREGATE_GAP_FILL
+ || _gapfillType == GapfillUtils.GapfillType.GAP_FILL) {
+ _limitForGapfilledResult = queryContext.getLimit();
+ } else {
+ _limitForGapfilledResult = queryContext.getSubQueryContext().getLimit();
+ }
+
+ ExpressionContext gapFillSelection = GapfillUtils.getGapfillExpressionContext(queryContext);
+
+ Preconditions.checkArgument(
+ gapFillSelection != null && gapFillSelection.getFunction() != null, "Gapfill expression should be a function.");
+ List<ExpressionContext> args = gapFillSelection.getFunction().getArguments();
+ Preconditions.checkArgument(
+ args.size() > 5, "PreAggregateGapFill does not have the correct number of arguments.");
+ Preconditions.checkArgument(
+ args.get(1).getLiteral() != null, "The second argument of PreAggregateGapFill should be the time formatter.");
+ Preconditions.checkArgument(
+ args.get(2).getLiteral() != null, "The third argument of PreAggregateGapFill should be the start time.");
+ Preconditions.checkArgument(
+ args.get(3).getLiteral() != null, "The fourth argument of PreAggregateGapFill should be the end time.");
+ Preconditions.checkArgument(
+ args.get(4).getLiteral() != null, "The fifth argument of PreAggregateGapFill should be the time bucket size.");
+
+ _dateTimeFormatter = new DateTimeFormatSpec(args.get(1).getLiteral());
+ _dateTimeGranularity = new DateTimeGranularitySpec(args.get(4).getLiteral());
+ String start = args.get(2).getLiteral();
+ _startMs = truncate(_dateTimeFormatter.fromFormatToMillis(start));
+ String end = args.get(3).getLiteral();
+ _endMs = truncate(_dateTimeFormatter.fromFormatToMillis(end));
+ _timeBucketSize = _dateTimeGranularity.granularityToMillis();
+
+ _fillExpressions = GapfillUtils.getFillExpressions(gapFillSelection);
+
+ _previousByGroupKey = new HashMap<>();
+ _groupByKeyIndexes = new ArrayList<>();
+ _groupByKeys = new HashSet<>();
+
+ ExpressionContext timeseriesOn = GapfillUtils.getTimeSeriesOnExpressionContext(gapFillSelection);
+ Preconditions.checkArgument(timeseriesOn != null, "The timeseriesOn expression should be specified.");
+ _timeSeries = timeseriesOn.getFunction().getArguments();
+ }
+
+ private void replaceColumnNameWithAlias(DataSchema dataSchema) {
+ QueryContext queryContext;
+ if (_gapfillType == GapfillUtils.GapfillType.AGGREGATE_GAP_FILL_AGGREGATE) {
+ queryContext = _queryContext.getSubQueryContext().getSubQueryContext();
+ } else if (_gapfillType == GapfillUtils.GapfillType.GAP_FILL) {
+ queryContext = _queryContext;
+ } else {
+ queryContext = _queryContext.getSubQueryContext();
+ }
+ List<String> aliasList = queryContext.getAliasList();
+ Map<String, String> columnNameToAliasMap = new HashMap<>();
+ for (int i = 0; i < aliasList.size(); i++) {
+ if (aliasList.get(i) != null) {
+ ExpressionContext selection = queryContext.getSelectExpressions().get(i);
+ if (GapfillUtils.isGapfill(selection)) {
+ selection = selection.getFunction().getArguments().get(0);
+ }
+ columnNameToAliasMap.put(selection.toString(), aliasList.get(i));
+ }
+ }
+ for (int i = 0; i < dataSchema.getColumnNames().length; i++) {
+ if (columnNameToAliasMap.containsKey(dataSchema.getColumnNames()[i])) {
+ dataSchema.getColumnNames()[i] = columnNameToAliasMap.get(dataSchema.getColumnNames()[i]);
+ }
+ }
+ }
+
+ /**
+ * Computes the number of reduce threads to use per query.
+ * <ul>
+ * <li> Use single thread if number of data tables to reduce is less than
+ * {@value #MIN_DATA_TABLES_FOR_CONCURRENT_REDUCE}.</li>
+ * <li> Else, use min of max allowed reduce threads per query, and number of data tables.</li>
+ * </ul>
+ *
+ * @param numDataTables Number of data tables to reduce
+ * @param maxReduceThreadsPerQuery Max allowed reduce threads per query
+ * @return Number of reduce threads to use for the query
+ */
+ private int getNumReduceThreadsToUse(int numDataTables, int maxReduceThreadsPerQuery) {
+ // Use single thread if number of data tables < MIN_DATA_TABLES_FOR_CONCURRENT_REDUCE.
+ if (numDataTables < MIN_DATA_TABLES_FOR_CONCURRENT_REDUCE) {
+ return Math.min(1, numDataTables); // Number of data tables can be zero.
+ }
+
+ return Math.min(maxReduceThreadsPerQuery, numDataTables);
+ }
+
+ private IndexedTable getIndexedTable(DataSchema dataSchema, Collection<DataTable> dataTablesToReduce,
+ DataTableReducerContext reducerContext)
+ throws TimeoutException {
+ long start = System.currentTimeMillis();
+ int numDataTables = dataTablesToReduce.size();
+
+ // Get the number of threads to use for reducing.
+ // In case of single reduce thread, fall back to SimpleIndexedTable to avoid redundant locking/unlocking calls.
+ int numReduceThreadsToUse = getNumReduceThreadsToUse(numDataTables, reducerContext.getMaxReduceThreadsPerQuery());
+ int limit = _queryContext.getLimit();
+ // TODO: Make minTrimSize configurable
+ int trimSize = GroupByUtils.getTableCapacity(limit);
+ // NOTE: For query with HAVING clause, use trimSize as resultSize to ensure the result accuracy.
+ // TODO: Resolve the HAVING clause within the IndexedTable before returning the result
+ int resultSize = _queryContext.getHavingFilter() != null ? trimSize : limit;
+ int trimThreshold = reducerContext.getGroupByTrimThreshold();
+ IndexedTable indexedTable;
+ if (numReduceThreadsToUse <= 1) {
+ indexedTable = new SimpleIndexedTable(dataSchema, _queryContext, resultSize, trimSize, trimThreshold);
+ } else {
+ if (trimThreshold >= GroupByOrderByCombineOperator.MAX_TRIM_THRESHOLD) {
+ // special case of trim threshold where it is set to max value.
+ // there won't be any trimming during upsert in this case.
+ // thus we can avoid the overhead of read-lock and write-lock
+ // in the upsert method.
+ indexedTable = new UnboundedConcurrentIndexedTable(dataSchema, _queryContext, resultSize);
+ } else {
+ indexedTable = new ConcurrentIndexedTable(dataSchema, _queryContext, resultSize, trimSize, trimThreshold);
+ }
+ }
+
+ Future[] futures = new Future[numDataTables];
+ CountDownLatch countDownLatch = new CountDownLatch(numDataTables);
+
+ // Create groups of data tables that each thread can process concurrently.
+ // Given that numReduceThreads is <= numDataTables, each group will have at least one data table.
+ ArrayList<DataTable> dataTables = new ArrayList<>(dataTablesToReduce);
+ List<List<DataTable>> reduceGroups = new ArrayList<>(numReduceThreadsToUse);
+
+ for (int i = 0; i < numReduceThreadsToUse; i++) {
+ reduceGroups.add(new ArrayList<>());
+ }
+ for (int i = 0; i < numDataTables; i++) {
+ reduceGroups.get(i % numReduceThreadsToUse).add(dataTables.get(i));
+ }
+
+ int cnt = 0;
+ ColumnDataType[] storedColumnDataTypes = dataSchema.getStoredColumnDataTypes();
+ int numColumns = storedColumnDataTypes.length;
+ for (List<DataTable> reduceGroup : reduceGroups) {
+ futures[cnt++] = reducerContext.getExecutorService().submit(new TraceRunnable() {
+ @Override
+ public void runJob() {
+ for (DataTable dataTable : reduceGroup) {
+ int numRows = dataTable.getNumberOfRows();
+
+ try {
+ for (int rowId = 0; rowId < numRows; rowId++) {
+ Object[] values = new Object[numColumns];
+ for (int colId = 0; colId < numColumns; colId++) {
+ switch (storedColumnDataTypes[colId]) {
+ case INT:
+ values[colId] = dataTable.getInt(rowId, colId);
+ break;
+ case LONG:
+ values[colId] = dataTable.getLong(rowId, colId);
+ break;
+ case FLOAT:
+ values[colId] = dataTable.getFloat(rowId, colId);
+ break;
+ case DOUBLE:
+ values[colId] = dataTable.getDouble(rowId, colId);
+ break;
+ case STRING:
+ values[colId] = dataTable.getString(rowId, colId);
+ break;
+ case BYTES:
+ values[colId] = dataTable.getBytes(rowId, colId);
+ break;
+ case OBJECT:
+ values[colId] = dataTable.getObject(rowId, colId);
+ break;
+ // Add other aggregation intermediate result / group-by column type supports here
+ default:
+ throw new IllegalStateException();
+ }
+ }
+ indexedTable.upsert(new Record(values));
+ }
+ } finally {
+ countDownLatch.countDown();
+ }
+ }
+ }
+ });
+ }
+
+ try {
+ long timeOutMs = reducerContext.getReduceTimeOutMs() - (System.currentTimeMillis() - start);
+ countDownLatch.await(timeOutMs, TimeUnit.MILLISECONDS);
+ } catch (InterruptedException e) {
+ for (Future future : futures) {
+ if (!future.isDone()) {
+ future.cancel(true);
+ }
+ }
+ throw new TimeoutException("Timed out in broker reduce phase.");
+ }
+
+ indexedTable.finish(true);
+ return indexedTable;
+ }
+
+ /**
+ * Three things happen here:
+ * 1. Sort the result sets from all Pinot servers based on timestamp.
+ * 2. Gapfill the data for missing entities per time bucket.
+ * 3. Aggregate the dataset per time bucket.
+ */
+ @Override
+ public void reduceAndSetResults(String tableName, DataSchema dataSchema,
+ Map<ServerRoutingInstance, DataTable> dataTableMap, BrokerResponseNative brokerResponseNative,
+ DataTableReducerContext reducerContext, BrokerMetrics brokerMetrics) {
+ DataSchema resultTableSchema = getResultTableDataSchema(dataSchema);
+ if (dataTableMap.isEmpty()) {
+ brokerResponseNative.setResultTable(new ResultTable(resultTableSchema, Collections.emptyList()));
+ return;
+ }
+
+ String[] columns = dataSchema.getColumnNames();
+
+ Map<String, Integer> indexes = new HashMap<>();
+ for (int i = 0; i < columns.length; i++) {
+ indexes.put(columns[i], i);
+ }
+
+ _isGroupBySelections = new boolean[dataSchema.getColumnDataTypes().length];
+
+ // The first argument of timeSeries is the time column. The remaining ones define the entity.
+ for (ExpressionContext entityColumn : _timeSeries) {
+ int index = indexes.get(entityColumn.getIdentifier());
+ _isGroupBySelections[index] = true;
+ _groupByKeyIndexes.add(index);
+ }
+
+ List<Object[]> sortedRawRows;
+ if (_gapfillType == GapfillUtils.GapfillType.GAP_FILL_AGGREGATE
+ || _gapfillType == GapfillUtils.GapfillType.GAP_FILL
+ || _gapfillType == GapfillUtils.GapfillType.GAP_FILL_SELECT) {
+ sortedRawRows = mergeAndSort(dataTableMap.values(), dataSchema);
+ } else {
+ try {
+ IndexedTable indexedTable = getIndexedTable(dataSchema, dataTableMap.values(), reducerContext);
+ sortedRawRows = mergeAndSort(indexedTable, dataSchema);
+ } catch (TimeoutException e) {
+ brokerResponseNative.getProcessingExceptions()
+ .add(new QueryProcessingException(QueryException.BROKER_TIMEOUT_ERROR_CODE, e.getMessage()));
+ return;
+ }
+ }
+ List<Object[]> resultRows;
+ replaceColumnNameWithAlias(dataSchema);
+ if (_queryContext.getAggregationFunctions() != null) {
+ validateGroupByForOuterQuery();
Review comment:
Why do we need this validation? If the _queryContext object was constructed properly, there should be no need for it. Can this validation be moved to the compilation stage?
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/reduce/PreAggregationGapFillDataTableReducer.java
##########
@@ -0,0 +1,775 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.core.query.reduce;
+
+import com.google.common.base.Preconditions;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.Comparator;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.Iterator;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+import java.util.PriorityQueue;
+import java.util.Set;
+import java.util.concurrent.CountDownLatch;
+import java.util.concurrent.Future;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.TimeoutException;
+import org.apache.pinot.common.exception.QueryException;
+import org.apache.pinot.common.metrics.BrokerMetrics;
+import org.apache.pinot.common.request.context.ExpressionContext;
+import org.apache.pinot.common.request.context.FunctionContext;
+import org.apache.pinot.common.response.broker.BrokerResponseNative;
+import org.apache.pinot.common.response.broker.QueryProcessingException;
+import org.apache.pinot.common.response.broker.ResultTable;
+import org.apache.pinot.common.utils.DataSchema;
+import org.apache.pinot.common.utils.DataSchema.ColumnDataType;
+import org.apache.pinot.common.utils.DataTable;
+import org.apache.pinot.core.common.BlockValSet;
+import org.apache.pinot.core.data.table.ConcurrentIndexedTable;
+import org.apache.pinot.core.data.table.IndexedTable;
+import org.apache.pinot.core.data.table.Key;
+import org.apache.pinot.core.data.table.Record;
+import org.apache.pinot.core.data.table.SimpleIndexedTable;
+import org.apache.pinot.core.data.table.UnboundedConcurrentIndexedTable;
+import org.apache.pinot.core.operator.combine.GroupByOrderByCombineOperator;
+import org.apache.pinot.core.query.aggregation.function.AggregationFunction;
+import org.apache.pinot.core.query.aggregation.function.AggregationFunctionFactory;
+import org.apache.pinot.core.query.aggregation.groupby.GroupByResultHolder;
+import org.apache.pinot.core.query.request.context.QueryContext;
+import org.apache.pinot.core.query.selection.SelectionOperatorUtils;
+import org.apache.pinot.core.transport.ServerRoutingInstance;
+import org.apache.pinot.core.util.GapfillUtils;
+import org.apache.pinot.core.util.GroupByUtils;
+import org.apache.pinot.core.util.trace.TraceRunnable;
+import org.apache.pinot.spi.data.DateTimeFormatSpec;
+import org.apache.pinot.spi.data.DateTimeGranularitySpec;
+
+/**
+ * Helper class to reduce and set Aggregation results into the BrokerResponseNative
+ */
+@SuppressWarnings({"rawtypes", "unchecked"})
+public class PreAggregationGapFillDataTableReducer implements DataTableReducer {
+ private static final int MIN_DATA_TABLES_FOR_CONCURRENT_REDUCE = 2; // TBD, find a better value.
+
+ private final QueryContext _queryContext;
+
+ private final int _limitForAggregatedResult;
+ private int _limitForGapfilledResult;
+
+ private final DateTimeGranularitySpec _dateTimeGranularity;
+ private final DateTimeFormatSpec _dateTimeFormatter;
+ private final long _startMs;
+ private final long _endMs;
+ private final long _timeBucketSize;
+
+ private final List<Integer> _groupByKeyIndexes;
+ private boolean [] _isGroupBySelections;
+ private final Set<Key> _groupByKeys;
+ private final Map<Key, Object[]> _previousByGroupKey;
+ private final Map<String, ExpressionContext> _fillExpressions;
+ private final List<ExpressionContext> _timeSeries;
+ private final GapfillUtils.GapfillType _gapfillType;
+
+ PreAggregationGapFillDataTableReducer(QueryContext queryContext) {
+ _queryContext = queryContext;
+ _gapfillType = queryContext.getGapfillType();
+ _limitForAggregatedResult = queryContext.getLimit();
+ if (_gapfillType == GapfillUtils.GapfillType.AGGREGATE_GAP_FILL
+ || _gapfillType == GapfillUtils.GapfillType.GAP_FILL) {
+ _limitForGapfilledResult = queryContext.getLimit();
+ } else {
+ _limitForGapfilledResult = queryContext.getSubQueryContext().getLimit();
+ }
+
+ ExpressionContext gapFillSelection = GapfillUtils.getGapfillExpressionContext(queryContext);
+
+ Preconditions.checkArgument(
+ gapFillSelection != null && gapFillSelection.getFunction() != null, "Gapfill expression should be a function.");
+ List<ExpressionContext> args = gapFillSelection.getFunction().getArguments();
+ Preconditions.checkArgument(
+ args.size() > 5, "PreAggregateGapFill does not have the correct number of arguments.");
+ Preconditions.checkArgument(
+ args.get(1).getLiteral() != null, "The second argument of PreAggregateGapFill should be the time format.");
+ Preconditions.checkArgument(
+ args.get(2).getLiteral() != null, "The third argument of PreAggregateGapFill should be the start time.");
+ Preconditions.checkArgument(
+ args.get(3).getLiteral() != null, "The fourth argument of PreAggregateGapFill should be the end time.");
+ Preconditions.checkArgument(
+ args.get(4).getLiteral() != null, "The fifth argument of PreAggregateGapFill should be the time bucket size.");
+
+ _dateTimeFormatter = new DateTimeFormatSpec(args.get(1).getLiteral());
+ _dateTimeGranularity = new DateTimeGranularitySpec(args.get(4).getLiteral());
+ String start = args.get(2).getLiteral();
+ _startMs = truncate(_dateTimeFormatter.fromFormatToMillis(start));
+ String end = args.get(3).getLiteral();
+ _endMs = truncate(_dateTimeFormatter.fromFormatToMillis(end));
+ _timeBucketSize = _dateTimeGranularity.granularityToMillis();
+
+ _fillExpressions = GapfillUtils.getFillExpressions(gapFillSelection);
+
+ _previousByGroupKey = new HashMap<>();
+ _groupByKeyIndexes = new ArrayList<>();
+ _groupByKeys = new HashSet<>();
+
+ ExpressionContext timeseriesOn = GapfillUtils.getTimeSeriesOnExpressionContext(gapFillSelection);
+ Preconditions.checkArgument(timeseriesOn != null, "The TimeSeriesOn expressions should be specified.");
+ _timeSeries = timeseriesOn.getFunction().getArguments();
+ }
+
+ private void replaceColumnNameWithAlias(DataSchema dataSchema) {
+ QueryContext queryContext;
+ if (_gapfillType == GapfillUtils.GapfillType.AGGREGATE_GAP_FILL_AGGREGATE) {
+ queryContext = _queryContext.getSubQueryContext().getSubQueryContext();
+ } else if (_gapfillType == GapfillUtils.GapfillType.GAP_FILL) {
+ queryContext = _queryContext;
+ } else {
+ queryContext = _queryContext.getSubQueryContext();
+ }
+ List<String> aliasList = queryContext.getAliasList();
+ Map<String, String> columnNameToAliasMap = new HashMap<>();
+ for (int i = 0; i < aliasList.size(); i++) {
+ if (aliasList.get(i) != null) {
+ ExpressionContext selection = queryContext.getSelectExpressions().get(i);
+ if (GapfillUtils.isGapfill(selection)) {
+ selection = selection.getFunction().getArguments().get(0);
+ }
+ columnNameToAliasMap.put(selection.toString(), aliasList.get(i));
+ }
+ }
+ for (int i = 0; i < dataSchema.getColumnNames().length; i++) {
+ if (columnNameToAliasMap.containsKey(dataSchema.getColumnNames()[i])) {
+ dataSchema.getColumnNames()[i] = columnNameToAliasMap.get(dataSchema.getColumnNames()[i]);
+ }
+ }
+ }
+
+ /**
+ * Computes the number of reduce threads to use per query.
+ * <ul>
+ * <li> Use single thread if number of data tables to reduce is less than
+ * {@value #MIN_DATA_TABLES_FOR_CONCURRENT_REDUCE}.</li>
+ * <li> Else, use min of max allowed reduce threads per query, and number of data tables.</li>
+ * </ul>
+ *
+ * @param numDataTables Number of data tables to reduce
+ * @param maxReduceThreadsPerQuery Max allowed reduce threads per query
+ * @return Number of reduce threads to use for the query
+ */
+ private int getNumReduceThreadsToUse(int numDataTables, int maxReduceThreadsPerQuery) {
+ // Use single thread if number of data tables < MIN_DATA_TABLES_FOR_CONCURRENT_REDUCE.
+ if (numDataTables < MIN_DATA_TABLES_FOR_CONCURRENT_REDUCE) {
+ return Math.min(1, numDataTables); // Number of data tables can be zero.
+ }
+
+ return Math.min(maxReduceThreadsPerQuery, numDataTables);
+ }
+
+ private IndexedTable getIndexedTable(DataSchema dataSchema, Collection<DataTable> dataTablesToReduce,
+ DataTableReducerContext reducerContext)
+ throws TimeoutException {
+ long start = System.currentTimeMillis();
+ int numDataTables = dataTablesToReduce.size();
+
+ // Get the number of threads to use for reducing.
+ // In case of single reduce thread, fall back to SimpleIndexedTable to avoid redundant locking/unlocking calls.
+ int numReduceThreadsToUse = getNumReduceThreadsToUse(numDataTables, reducerContext.getMaxReduceThreadsPerQuery());
+ int limit = _queryContext.getLimit();
+ // TODO: Make minTrimSize configurable
+ int trimSize = GroupByUtils.getTableCapacity(limit);
+ // NOTE: For query with HAVING clause, use trimSize as resultSize to ensure the result accuracy.
+ // TODO: Resolve the HAVING clause within the IndexedTable before returning the result
+ int resultSize = _queryContext.getHavingFilter() != null ? trimSize : limit;
+ int trimThreshold = reducerContext.getGroupByTrimThreshold();
+ IndexedTable indexedTable;
+ if (numReduceThreadsToUse <= 1) {
+ indexedTable = new SimpleIndexedTable(dataSchema, _queryContext, resultSize, trimSize, trimThreshold);
+ } else {
+ if (trimThreshold >= GroupByOrderByCombineOperator.MAX_TRIM_THRESHOLD) {
+ // special case of trim threshold where it is set to max value.
+ // there won't be any trimming during upsert in this case.
+ // thus we can avoid the overhead of read-lock and write-lock
+ // in the upsert method.
+ indexedTable = new UnboundedConcurrentIndexedTable(dataSchema, _queryContext, resultSize);
+ } else {
+ indexedTable = new ConcurrentIndexedTable(dataSchema, _queryContext, resultSize, trimSize, trimThreshold);
+ }
+ }
+
+ Future[] futures = new Future[numDataTables];
+ CountDownLatch countDownLatch = new CountDownLatch(numDataTables);
+
+ // Create groups of data tables that each thread can process concurrently.
+ // Given that numReduceThreads is <= numDataTables, each group will have at least one data table.
+ ArrayList<DataTable> dataTables = new ArrayList<>(dataTablesToReduce);
+ List<List<DataTable>> reduceGroups = new ArrayList<>(numReduceThreadsToUse);
+
+ for (int i = 0; i < numReduceThreadsToUse; i++) {
+ reduceGroups.add(new ArrayList<>());
+ }
+ for (int i = 0; i < numDataTables; i++) {
+ reduceGroups.get(i % numReduceThreadsToUse).add(dataTables.get(i));
+ }
+
+ int cnt = 0;
+ ColumnDataType[] storedColumnDataTypes = dataSchema.getStoredColumnDataTypes();
+ int numColumns = storedColumnDataTypes.length;
+ for (List<DataTable> reduceGroup : reduceGroups) {
+ futures[cnt++] = reducerContext.getExecutorService().submit(new TraceRunnable() {
+ @Override
+ public void runJob() {
+ for (DataTable dataTable : reduceGroup) {
+ int numRows = dataTable.getNumberOfRows();
+
+ try {
+ for (int rowId = 0; rowId < numRows; rowId++) {
+ Object[] values = new Object[numColumns];
+ for (int colId = 0; colId < numColumns; colId++) {
+ switch (storedColumnDataTypes[colId]) {
+ case INT:
+ values[colId] = dataTable.getInt(rowId, colId);
+ break;
+ case LONG:
+ values[colId] = dataTable.getLong(rowId, colId);
+ break;
+ case FLOAT:
+ values[colId] = dataTable.getFloat(rowId, colId);
+ break;
+ case DOUBLE:
+ values[colId] = dataTable.getDouble(rowId, colId);
+ break;
+ case STRING:
+ values[colId] = dataTable.getString(rowId, colId);
+ break;
+ case BYTES:
+ values[colId] = dataTable.getBytes(rowId, colId);
+ break;
+ case OBJECT:
+ values[colId] = dataTable.getObject(rowId, colId);
+ break;
+ // Add other aggregation intermediate result / group-by column type supports here
+ default:
+ throw new IllegalStateException();
+ }
+ }
+ indexedTable.upsert(new Record(values));
+ }
+ } finally {
+ countDownLatch.countDown();
+ }
+ }
+ }
+ });
+ }
+
+ try {
+ long timeOutMs = reducerContext.getReduceTimeOutMs() - (System.currentTimeMillis() - start);
+ countDownLatch.await(timeOutMs, TimeUnit.MILLISECONDS);
+ } catch (InterruptedException e) {
+ for (Future future : futures) {
+ if (!future.isDone()) {
+ future.cancel(true);
+ }
+ }
+ throw new TimeoutException("Timed out in broker reduce phase.");
+ }
+
+ indexedTable.finish(true);
+ return indexedTable;
+ }
+
+ /**
+ * Three things happen here:
+ * 1. Sort the result sets from all Pinot servers based on timestamp.
+ * 2. Gapfill the data for missing entities per time bucket.
+ * 3. Aggregate the dataset per time bucket.
+ */
+ @Override
+ public void reduceAndSetResults(String tableName, DataSchema dataSchema,
+ Map<ServerRoutingInstance, DataTable> dataTableMap, BrokerResponseNative brokerResponseNative,
+ DataTableReducerContext reducerContext, BrokerMetrics brokerMetrics) {
+ DataSchema resultTableSchema = getResultTableDataSchema(dataSchema);
+ if (dataTableMap.isEmpty()) {
+ brokerResponseNative.setResultTable(new ResultTable(resultTableSchema, Collections.emptyList()));
+ return;
+ }
+
+ String[] columns = dataSchema.getColumnNames();
+
+ Map<String, Integer> indexes = new HashMap<>();
+ for (int i = 0; i < columns.length; i++) {
+ indexes.put(columns[i], i);
+ }
+
+ _isGroupBySelections = new boolean[dataSchema.getColumnDataTypes().length];
+
+ // The first argument of timeSeries is the time column. The remaining ones define the entity.
+ for (ExpressionContext entityColumn : _timeSeries) {
+ int index = indexes.get(entityColumn.getIdentifier());
+ _isGroupBySelections[index] = true;
+ _groupByKeyIndexes.add(index);
+ }
+
+ List<Object[]> sortedRawRows;
+ if (_gapfillType == GapfillUtils.GapfillType.GAP_FILL_AGGREGATE
+ || _gapfillType == GapfillUtils.GapfillType.GAP_FILL
+ || _gapfillType == GapfillUtils.GapfillType.GAP_FILL_SELECT) {
+ sortedRawRows = mergeAndSort(dataTableMap.values(), dataSchema);
+ } else {
+ try {
+ IndexedTable indexedTable = getIndexedTable(dataSchema, dataTableMap.values(), reducerContext);
+ sortedRawRows = mergeAndSort(indexedTable, dataSchema);
Review comment:
It seems like mergeAndSort could be made much faster if the servers pre-sorted their result sets before sending them to the broker. Servers would sort their result sets in parallel, and in that case one would only need to do a k-way merge here to produce a globally sorted result set, without actually doing a full global sort of all the records.
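For illustration, a k-way merge over per-server pre-sorted result sets could be sketched roughly as below. This is not Pinot code: the class name `KWayMerge` and the row layout (a `long[]` whose first element is the timestamp) are assumptions made for the example; the point is only that merging k sorted lists with a heap costs O(n log k) instead of the O(n log n) of a full global sort.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.PriorityQueue;

public class KWayMerge {
  // Merge k lists that are each already sorted by timestamp (element [0]
  // of every row) into one globally sorted list using a min-heap of
  // {listIndex, positionInList} cursors.
  public static List<long[]> merge(List<List<long[]>> sortedLists) {
    // Heap is ordered by the timestamp each cursor currently points at.
    PriorityQueue<int[]> heap = new PriorityQueue<>(
        Comparator.comparingLong((int[] e) -> sortedLists.get(e[0]).get(e[1])[0]));
    for (int i = 0; i < sortedLists.size(); i++) {
      if (!sortedLists.get(i).isEmpty()) {
        heap.offer(new int[]{i, 0});
      }
    }
    List<long[]> result = new ArrayList<>();
    while (!heap.isEmpty()) {
      int[] e = heap.poll();
      List<long[]> list = sortedLists.get(e[0]);
      result.add(list.get(e[1]));
      // Advance the cursor of the list we just consumed from.
      if (e[1] + 1 < list.size()) {
        heap.offer(new int[]{e[0], e[1] + 1});
      }
    }
    return result;
  }
}
```

Since each server sorts in parallel, the broker only pays the merge cost, and the heap never holds more than k entries.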
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/reduce/PreAggregationGapFillDataTableReducer.java
##########
@@ -0,0 +1,775 @@
+/**
+ * Helper class to reduce and set Aggregation results into the BrokerResponseNative
+ */
+@SuppressWarnings({"rawtypes", "unchecked"})
+public class PreAggregationGapFillDataTableReducer implements DataTableReducer {
Review comment:
Seems like this should be renamed to GapFillDataTableReducer?
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: commits-unsubscribe@pinot.apache.org
For queries about this service, please contact Infrastructure at:
users@infra.apache.org
---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@pinot.apache.org
For additional commands, e-mail: commits-help@pinot.apache.org
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r814089803
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/data/table/IndexedTable.java
##########
@@ -65,10 +66,15 @@ protected IndexedTable(DataSchema dataSchema, QueryContext queryContext, int res
_lookupMap = lookupMap;
_resultSize = resultSize;
- List<ExpressionContext> groupByExpressions = queryContext.getGroupByExpressions();
+ List<ExpressionContext> groupByExpressions;
+ if (queryContext.getGapfillType() != GapfillUtils.GapfillType.NONE) {
Review comment:
Done
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r815390654
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/reduce/PreAggregationGapFillDataTableReducer.java
##########
@@ -0,0 +1,775 @@
+
+/**
+ * Helper class to reduce and set Aggregation results into the BrokerResponseNative
+ */
Review comment:
Fixed
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/reduce/PreAggregationGapFillDataTableReducer.java
##########
@@ -0,0 +1,775 @@
+
+/**
+ * Helper class to reduce and set Aggregation results into the BrokerResponseNative
+ */
+@SuppressWarnings({"rawtypes", "unchecked"})
+public class PreAggregationGapFillDataTableReducer implements DataTableReducer {
Review comment:
Fixed
[GitHub] [pinot] codecov-commenter edited a comment on pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#issuecomment-1013577523
# [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) Report
> Merging [#8029](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (a5316f7) into [master](https://codecov.io/gh/apache/pinot/commit/3f98ce37fdaef0335fcd82e621489d65751b1f55?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (3f98ce3) will **decrease** coverage by `56.72%`.
> The diff coverage is `0.00%`.
[![Impacted file tree graph](https://codecov.io/gh/apache/pinot/pull/8029/graphs/tree.svg?width=650&height=150&src=pr&token=4ibza2ugkz&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
```diff
@@ Coverage Diff @@
## master #8029 +/- ##
=============================================
- Coverage 70.72% 13.99% -56.73%
+ Complexity 4242 81 -4161
=============================================
Files 1631 1596 -35
Lines 85279 84069 -1210
Branches 12844 12808 -36
=============================================
- Hits 60316 11769 -48547
- Misses 20799 71419 +50620
+ Partials 4164 881 -3283
```
| Flag | Coverage Δ | |
|---|---|---|
| integration1 | `?` | |
| integration2 | `?` | |
| unittests1 | `?` | |
| unittests2 | `13.99% <0.00%> (-0.10%)` | :arrow_down: |
Flags with carried forward coverage won't be shown. [Click here](https://docs.codecov.io/docs/carryforward-flags?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#carryforward-flags-in-the-pull-request-comment) to find out more.
| [Impacted Files](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | Coverage Δ | |
|---|---|---|
| [...roker/requesthandler/BaseBrokerRequestHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtYnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9icm9rZXIvcmVxdWVzdGhhbmRsZXIvQmFzZUJyb2tlclJlcXVlc3RIYW5kbGVyLmphdmE=) | `24.41% <0.00%> (-47.44%)` | :arrow_down: |
| [...org/apache/pinot/sql/parsers/CalciteSqlParser.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9zcWwvcGFyc2Vycy9DYWxjaXRlU3FsUGFyc2VyLmphdmE=) | `0.00% <0.00%> (-86.56%)` | :arrow_down: |
| [...org/apache/pinot/core/data/table/IndexedTable.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9kYXRhL3RhYmxlL0luZGV4ZWRUYWJsZS5qYXZh) | `0.00% <0.00%> (-84.75%)` | :arrow_down: |
| [...va/org/apache/pinot/core/plan/CombinePlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL0NvbWJpbmVQbGFuTm9kZS5qYXZh) | `0.00% <0.00%> (-87.50%)` | :arrow_down: |
| [...ache/pinot/core/plan/GapfillSelectionPlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL0dhcGZpbGxTZWxlY3Rpb25QbGFuTm9kZS5qYXZh) | `0.00% <0.00%> (ø)` | |
| [...pinot/core/plan/maker/InstancePlanMakerImplV2.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL21ha2VyL0luc3RhbmNlUGxhbk1ha2VySW1wbFYyLmphdmE=) | `0.00% <0.00%> (-76.77%)` | :arrow_down: |
| [...query/reduce/ColumnDataToBlockValSetConverter.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvQ29sdW1uRGF0YVRvQmxvY2tWYWxTZXRDb252ZXJ0ZXIuamF2YQ==) | `0.00% <0.00%> (ø)` | |
| [...not/core/query/reduce/GapFillDataTableReducer.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvR2FwRmlsbERhdGFUYWJsZVJlZHVjZXIuamF2YQ==) | `0.00% <0.00%> (ø)` | |
| [.../pinot/core/query/reduce/GapfillFilterHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvR2FwZmlsbEZpbHRlckhhbmRsZXIuamF2YQ==) | `0.00% <0.00%> (ø)` | |
| [...e/pinot/core/query/reduce/HavingFilterHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvSGF2aW5nRmlsdGVySGFuZGxlci5qYXZh) | `0.00% <0.00%> (-91.31%)` | :arrow_down: |
| ... and [1321 more](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Last update [3f98ce3...a5316f7](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
[GitHub] [pinot] Jackie-Jiang commented on a change in pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
Jackie-Jiang commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r787054185
##########
File path: pinot-common/src/thrift/query.thrift
##########
@@ -20,6 +20,7 @@ namespace java org.apache.pinot.common.request
struct DataSource {
1: optional string tableName;
+ 2: optional PinotQuery preAggregateGapfillQuery;
Review comment:
Let's name it `subquery`, so it stays general in the query syntax.
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/request/context/QueryContext.java
##########
@@ -85,6 +85,7 @@
// Keep the BrokerRequest to make incremental changes
// TODO: Remove it once the whole query engine is using the QueryContext
private final BrokerRequest _brokerRequest;
+ private QueryContext _preAggregateGapFillQueryContext;
Review comment:
Can we generalize this to `subquery`? I think this can be modeled as a general subquery.
##########
File path: pinot-common/src/main/java/org/apache/pinot/sql/parsers/CalciteSqlParser.java
##########
@@ -117,21 +117,50 @@ private static String removeTerminatingSemicolon(String sql) {
return sql;
}
+ private static SqlNode parse(String sql) {
+ SqlParser sqlParser = SqlParser.create(sql, PARSER_CONFIG);
+ try {
+ return sqlParser.parseQuery();
+ } catch (SqlParseException e) {
+ throw new SqlCompilationException("Caught exception while parsing query: " + sql, e);
+ }
+ }
+
+ public static PinotQuery compileToPinotQueryWithSubquery(String sql)
+ throws SqlCompilationException {
+ return compileToPinotQuery(sql, true);
+ }
+
public static PinotQuery compileToPinotQuery(String sql)
throws SqlCompilationException {
- // Remove the comments from the query
- sql = removeComments(sql);
Review comment:
This should not be removed
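The `removeComments` step discussed above strips SQL comments before the query is compiled, so dropping it would let comment text reach the parser. As a rough illustration of what such a pass does (a toy sketch only, not Pinot's actual `removeComments` implementation), the following handles `--` line comments and `/* */` block comments while leaving single-quoted string literals untouched:

```java
// Toy sketch of SQL comment stripping; Pinot's real removeComments in
// CalciteSqlParser may behave differently (this is an assumption for
// illustration). Strips "--" line comments and "/* ... */" block
// comments, but preserves comment-like text inside string literals.
public class SqlCommentStripper {
  public static String removeComments(String sql) {
    StringBuilder out = new StringBuilder();
    boolean inSingleQuote = false;
    int i = 0;
    int n = sql.length();
    while (i < n) {
      char c = sql.charAt(i);
      if (inSingleQuote) {
        // Inside a string literal: copy verbatim until the closing quote
        out.append(c);
        if (c == '\'') {
          inSingleQuote = false;
        }
        i++;
      } else if (c == '\'') {
        inSingleQuote = true;
        out.append(c);
        i++;
      } else if (c == '-' && i + 1 < n && sql.charAt(i + 1) == '-') {
        // "--" comment: skip to end of line (the newline itself is kept)
        while (i < n && sql.charAt(i) != '\n') {
          i++;
        }
      } else if (c == '/' && i + 1 < n && sql.charAt(i + 1) == '*') {
        // "/* ... */" comment: skip until the closing "*/"
        i += 2;
        while (i + 1 < n && !(sql.charAt(i) == '*' && sql.charAt(i + 1) == '/')) {
          i++;
        }
        i += 2;
      } else {
        out.append(c);
        i++;
      }
    }
    return out.toString();
  }
}
```

A query such as `SELECT col /* note */ FROM tbl -- trailing` would compile cleanly after this pass, while `WHERE s = '--not a comment'` keeps its literal intact.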
##########
File path: pinot-common/src/main/java/org/apache/pinot/sql/parsers/CalciteSqlParser.java
##########
@@ -117,21 +117,50 @@ private static String removeTerminatingSemicolon(String sql) {
return sql;
}
+ private static SqlNode parse(String sql) {
+ SqlParser sqlParser = SqlParser.create(sql, PARSER_CONFIG);
+ try {
+ return sqlParser.parseQuery();
+ } catch (SqlParseException e) {
+ throw new SqlCompilationException("Caught exception while parsing query: " + sql, e);
+ }
+ }
+
+ public static PinotQuery compileToPinotQueryWithSubquery(String sql)
+ throws SqlCompilationException {
+ return compileToPinotQuery(sql, true);
+ }
+
public static PinotQuery compileToPinotQuery(String sql)
throws SqlCompilationException {
- // Remove the comments from the query
- sql = removeComments(sql);
+ return compileToPinotQuery(sql, false);
Review comment:
Let's directly extend this method to support subqueries within the `FROM` clause. Subquery parsing should always be enabled, regardless of the query type.
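The suggestion above is to fold subquery handling into the single compile entry point, recursing whenever the `FROM` clause is itself a query, rather than keeping a separate `compileToPinotQueryWithSubquery` path. A simplified sketch of that shape (the `Query`/`DataSource` types and the regex-based parsing here are stand-ins for illustration, not Pinot's real `PinotQuery`/`DataSource` or Calcite parsing):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical sketch: one compile() entry point that always resolves a
// FROM-clause subquery by recursing. Types are simplified stand-ins.
public class SubqueryCompiler {
  static class DataSource {
    String tableName;  // set when FROM names a plain table
    Query subquery;    // set when FROM wraps another query
  }

  static class Query {
    String selectList;
    DataSource dataSource = new DataSource();
  }

  private static final Pattern SUBQUERY_FROM =
      Pattern.compile("(?i)SELECT\\s+(.*?)\\s+FROM\\s+\\((.*)\\)\\s*$");
  private static final Pattern TABLE_FROM =
      Pattern.compile("(?i)SELECT\\s+(.*?)\\s+FROM\\s+(\\w+)\\s*$");

  // Subquery parsing is unconditional: no separate "with subquery" API.
  public static Query compile(String sql) {
    Matcher sub = SUBQUERY_FROM.matcher(sql.trim());
    if (sub.matches()) {
      Query q = new Query();
      q.selectList = sub.group(1);
      q.dataSource.subquery = compile(sub.group(2));  // recurse into FROM
      return q;
    }
    Matcher tbl = TABLE_FROM.matcher(sql.trim());
    if (tbl.matches()) {
      Query q = new Query();
      q.selectList = tbl.group(1);
      q.dataSource.tableName = tbl.group(2);
      return q;
    }
    throw new IllegalArgumentException("Cannot compile: " + sql);
  }
}
```

With this shape, `compile("SELECT a FROM (SELECT b FROM t)")` yields an outer query whose `DataSource` carries the nested query, which is the structure the `subquery` field in the Thrift `DataSource` is meant to hold.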
[GitHub] [pinot] codecov-commenter edited a comment on pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#issuecomment-1013577523
# [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) Report
> Merging [#8029](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (a9f2578) into [master](https://codecov.io/gh/apache/pinot/commit/1d1a7d34709b6a89985a610f46dd1c97d6c9271a?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (1d1a7d3) will **decrease** coverage by `0.05%`.
> The diff coverage is `74.08%`.
[![Impacted file tree graph](https://codecov.io/gh/apache/pinot/pull/8029/graphs/tree.svg?width=650&height=150&src=pr&token=4ibza2ugkz&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
```diff
@@ Coverage Diff @@
## master #8029 +/- ##
============================================
- Coverage 71.40% 71.35% -0.06%
- Complexity 4223 4225 +2
============================================
Files 1597 1610 +13
Lines 82903 83317 +414
Branches 12369 12449 +80
============================================
+ Hits 59201 59451 +250
- Misses 19689 19833 +144
- Partials 4013 4033 +20
```
| Flag | Coverage Δ | |
|---|---|---|
| integration1 | `28.91% <16.05%> (-0.08%)` | :arrow_down: |
| integration2 | `27.73% <16.51%> (+0.02%)` | :arrow_up: |
| unittests1 | `68.14% <73.62%> (+<0.01%)` | :arrow_up: |
| unittests2 | `14.29% <0.00%> (-0.07%)` | :arrow_down: |
| [Impacted Files](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | Coverage Δ | |
|---|---|---|
| [...query/reduce/ColumnDataToBlockValSetConverter.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvQ29sdW1uRGF0YVRvQmxvY2tWYWxTZXRDb252ZXJ0ZXIuamF2YQ==) | `17.30% <17.30%> (ø)` | |
| [.../java/org/apache/pinot/core/util/GapfillUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS91dGlsL0dhcGZpbGxVdGlscy5qYXZh) | `58.73% <54.83%> (-4.91%)` | :arrow_down: |
| [...query/request/context/utils/QueryContextUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZXF1ZXN0L2NvbnRleHQvdXRpbHMvUXVlcnlDb250ZXh0VXRpbHMuamF2YQ==) | `73.91% <60.00%> (-3.36%)` | :arrow_down: |
| [.../reduce/PreAggregationGapFillDataTableReducer.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUHJlQWdncmVnYXRpb25HYXBGaWxsRGF0YVRhYmxlUmVkdWNlci5qYXZh) | `63.88% <63.88%> (ø)` | |
| [...inot/core/query/reduce/PostAggregationHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUG9zdEFnZ3JlZ2F0aW9uSGFuZGxlci5qYXZh) | `92.42% <66.66%> (+0.11%)` | :arrow_up: |
| [...che/pinot/core/query/reduce/filter/RowMatcher.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvZmlsdGVyL1Jvd01hdGNoZXIuamF2YQ==) | `66.66% <66.66%> (ø)` | |
| [...ore/query/reduce/filter/LiteralValueExtractor.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvZmlsdGVyL0xpdGVyYWxWYWx1ZUV4dHJhY3Rvci5qYXZh) | `83.33% <83.33%> (ø)` | |
| [...PreAggregationGapFillSelectionOperatorService.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUHJlQWdncmVnYXRpb25HYXBGaWxsU2VsZWN0aW9uT3BlcmF0b3JTZXJ2aWNlLmphdmE=) | `83.51% <83.51%> (ø)` | |
| [...query/reduce/PreAggregateGapfillFilterHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUHJlQWdncmVnYXRlR2FwZmlsbEZpbHRlckhhbmRsZXIuamF2YQ==) | `85.71% <85.71%> (ø)` | |
| [.../core/query/reduce/filter/PredicateRowMatcher.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvZmlsdGVyL1ByZWRpY2F0ZVJvd01hdGNoZXIuamF2YQ==) | `87.50% <87.50%> (ø)` | |
| ... and [63 more](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Last update [1d1a7d3...a9f2578](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r787316118
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/request/context/QueryContext.java
##########
@@ -85,6 +85,7 @@
// Keep the BrokerRequest to make incremental changes
// TODO: Remove it once the whole query engine is using the QueryContext
private final BrokerRequest _brokerRequest;
+ private QueryContext _preAggregateGapFillQueryContext;
Review comment:
Done
[GitHub] [pinot] codecov-commenter edited a comment on pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#issuecomment-1013577523
# [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) Report
> Merging [#8029](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (149817e) into [master](https://codecov.io/gh/apache/pinot/commit/cc2f3fe196d29a0d716bfee07add9b761e8fa98e?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (cc2f3fe) will **increase** coverage by `5.67%`.
> The diff coverage is `74.56%`.
[![Impacted file tree graph](https://codecov.io/gh/apache/pinot/pull/8029/graphs/tree.svg?width=650&height=150&src=pr&token=4ibza2ugkz&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
```diff
@@ Coverage Diff @@
## master #8029 +/- ##
============================================
+ Coverage 64.63% 70.30% +5.67%
- Complexity 4260 4261 +1
============================================
Files 1562 1617 +55
Lines 81525 83736 +2211
Branches 12252 12529 +277
============================================
+ Hits 52695 58872 +6177
+ Misses 25072 20781 -4291
- Partials 3758 4083 +325
```
| Flag | Coverage Δ | |
|---|---|---|
| integration1 | `28.95% <16.29%> (?)` | |
| unittests1 | `67.88% <74.56%> (+0.01%)` | :arrow_up: |
| unittests2 | `14.17% <0.00%> (-0.05%)` | :arrow_down: |
| [Impacted Files](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | Coverage Δ | |
|---|---|---|
| [...query/reduce/ColumnDataToBlockValSetConverter.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvQ29sdW1uRGF0YVRvQmxvY2tWYWxTZXRDb252ZXJ0ZXIuamF2YQ==) | `17.30% <17.30%> (ø)` | |
| [...query/request/context/utils/QueryContextUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZXF1ZXN0L2NvbnRleHQvdXRpbHMvUXVlcnlDb250ZXh0VXRpbHMuamF2YQ==) | `69.56% <20.00%> (-5.44%)` | :arrow_down: |
| [.../java/org/apache/pinot/core/util/GapfillUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS91dGlsL0dhcGZpbGxVdGlscy5qYXZh) | `58.73% <54.83%> (-4.91%)` | :arrow_down: |
| [...inot/core/query/reduce/PostAggregationHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUG9zdEFnZ3JlZ2F0aW9uSGFuZGxlci5qYXZh) | `92.42% <66.66%> (+0.11%)` | :arrow_up: |
| [...che/pinot/core/query/reduce/filter/RowMatcher.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvZmlsdGVyL1Jvd01hdGNoZXIuamF2YQ==) | `66.66% <66.66%> (ø)` | |
| [...ore/query/reduce/filter/LiteralValueExtractor.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvZmlsdGVyL0xpdGVyYWxWYWx1ZUV4dHJhY3Rvci5qYXZh) | `83.33% <83.33%> (ø)` | |
| [.../reduce/PreAggregationGapFillDataTableReducer.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUHJlQWdncmVnYXRpb25HYXBGaWxsRGF0YVRhYmxlUmVkdWNlci5qYXZh) | `83.76% <83.76%> (ø)` | |
| [...query/reduce/PreAggregateGapfillFilterHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUHJlQWdncmVnYXRlR2FwZmlsbEZpbHRlckhhbmRsZXIuamF2YQ==) | `85.71% <85.71%> (ø)` | |
| [.../core/query/reduce/filter/PredicateRowMatcher.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvZmlsdGVyL1ByZWRpY2F0ZVJvd01hdGNoZXIuamF2YQ==) | `87.50% <87.50%> (ø)` | |
| [...inot/core/plan/PreAggGapFillSelectionPlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL1ByZUFnZ0dhcEZpbGxTZWxlY3Rpb25QbGFuTm9kZS5qYXZh) | `90.00% <90.00%> (ø)` | |
| ... and [349 more](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Last update [cc2f3fe...149817e](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r793122899
##########
File path: pinot-common/src/main/java/org/apache/pinot/sql/parsers/CalciteSqlParser.java
##########
@@ -321,32 +338,30 @@ private static void setOptions(PinotQuery pinotQuery, List<String> optionsStatem
pinotQuery.setQueryOptions(options);
}
- private static PinotQuery compileCalciteSqlToPinotQuery(String sql) {
- SqlParser sqlParser = SqlParser.create(sql, PARSER_CONFIG);
- SqlNode sqlNode;
- try {
- sqlNode = sqlParser.parseQuery();
- } catch (SqlParseException e) {
- throw new SqlCompilationException("Caught exception while parsing query: " + sql, e);
- }
-
- PinotQuery pinotQuery = new PinotQuery();
- if (sqlNode instanceof SqlExplain) {
- // Extract sql node for the query
- sqlNode = ((SqlExplain) sqlNode).getExplicandum();
- pinotQuery.setExplain(true);
- }
Review comment:
This piece of code has been moved elsewhere and will still be executed.
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r829578012
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/request/context/QueryContext.java
##########
@@ -133,6 +137,8 @@ private QueryContext(String tableName, List<ExpressionContext> selectExpressions
_queryOptions = queryOptions;
_debugOptions = debugOptions;
_brokerRequest = brokerRequest;
+ _gapfillType = null;
Review comment:
Fixed
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r829536271
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/util/GapfillUtils.java
##########
@@ -119,4 +137,265 @@ static public Serializable getDefaultValue(DataSchema.ColumnDataType dataType) {
private static String canonicalizeFunctionName(String functionName) {
return StringUtils.remove(functionName, '_').toLowerCase();
}
+
+ public static boolean isGapfill(ExpressionContext expressionContext) {
+ if (expressionContext.getType() != ExpressionContext.Type.FUNCTION) {
+ return false;
+ }
+
+ return GAP_FILL.equals(canonicalizeFunctionName(expressionContext.getFunction().getFunctionName()));
+ }
+
+ private static boolean isGapfill(QueryContext queryContext) {
+ for (ExpressionContext expressionContext : queryContext.getSelectExpressions()) {
+ if (isGapfill(expressionContext)) {
+ return true;
+ }
+ }
+ return false;
+ }
+
+ /**
+ * Set the gapfill type on the queryContext and validate the gapfill request.
+ * @param queryContext the query context to inspect and annotate
+ */
+ public static void setGapfillType(QueryContext queryContext) {
+ GapfillType gapfillType = null;
+ if (queryContext.getSubQueryContext() == null) {
+ if (isGapfill(queryContext)) {
+ Preconditions.checkArgument(queryContext.getAggregationFunctions() == null,
+ "Aggregation and gapfill cannot be in the same SQL statement.");
+ gapfillType = GapfillType.GAP_FILL;
+ }
+ } else if (isGapfill(queryContext)) {
+ Preconditions.checkArgument(queryContext.getSubQueryContext().getAggregationFunctions() != null,
+ "Select and gapfill should be in the same SQL statement.");
+ Preconditions.checkArgument(queryContext.getSubQueryContext().getSubQueryContext() == null,
+ "Three levels of query nesting are not supported when the outer query is gapfill.");
+ gapfillType = GapfillType.AGGREGATE_GAP_FILL;
+ } else if (isGapfill(queryContext.getSubQueryContext())) {
+ if (queryContext.getAggregationFunctions() == null) {
+ gapfillType = GapfillType.GAP_FILL_SELECT;
+ } else if (queryContext.getSubQueryContext().getSubQueryContext() == null) {
+ gapfillType = GapfillType.GAP_FILL_AGGREGATE;
+ } else {
+ Preconditions
+ .checkArgument(queryContext.getSubQueryContext().getSubQueryContext().getAggregationFunctions() != null,
+ "Select cannot happen before gapfill.");
+ gapfillType = GapfillType.AGGREGATE_GAP_FILL_AGGREGATE;
+ }
+ }
+
+ queryContext.setGapfillType(gapfillType);
+ if (gapfillType == null) {
+ return;
+ }
+
+ ExpressionContext gapFillSelection = GapfillUtils.getGapfillExpressionContext(queryContext);
+
+ Preconditions.checkArgument(gapFillSelection != null && gapFillSelection.getFunction() != null,
+ "The gapfill expression should be a function.");
+ List<ExpressionContext> args = gapFillSelection.getFunction().getArguments();
+ Preconditions.checkArgument(args.size() > 5, "Gapfill does not have the correct number of arguments.");
+ Preconditions.checkArgument(args.get(1).getLiteral() != null,
+ "The second argument of Gapfill should be the time formatter.");
+ Preconditions.checkArgument(args.get(2).getLiteral() != null,
+ "The third argument of Gapfill should be the start time.");
+ Preconditions.checkArgument(args.get(3).getLiteral() != null,
+ "The fourth argument of Gapfill should be the end time.");
+ Preconditions.checkArgument(args.get(4).getLiteral() != null,
+ "The fifth argument of Gapfill should be the time bucket size.");
+
+ ExpressionContext timeseriesOn = GapfillUtils.getTimeSeriesOnExpressionContext(gapFillSelection);
+ Preconditions.checkArgument(timeseriesOn != null, "The TimeSeriesOn expression should be specified.");
+
+ if (queryContext.getAggregationFunctions() == null) {
+ return;
+ }
+
+ List<ExpressionContext> groupbyExpressions = queryContext.getGroupByExpressions();
+ Preconditions.checkArgument(groupbyExpressions != null, "Missing GROUP BY clause.");
+ List<ExpressionContext> innerSelections = queryContext.getSubQueryContext().getSelectExpressions();
+ String timeBucketCol = null;
+ List<String> strAlias = queryContext.getSubQueryContext().getAliasList();
+ for (int i = 0; i < innerSelections.size(); i++) {
+ ExpressionContext innerSelection = innerSelections.get(i);
+ if (GapfillUtils.isGapfill(innerSelection)) {
+ if (strAlias.get(i) != null) {
+ timeBucketCol = strAlias.get(i);
+ } else {
+ timeBucketCol = innerSelection.getFunction().getArguments().get(0).toString();
+ }
+ break;
+ }
+ }
+
+ Preconditions.checkArgument(timeBucketCol != null, "No GROUP BY time bucket.");
+
+ boolean findTimeBucket = false;
+ for (ExpressionContext groupbyExp : groupbyExpressions) {
+ if (timeBucketCol.equals(groupbyExp.toString())) {
+ findTimeBucket = true;
+ break;
+ }
+ }
+
+ Preconditions.checkArgument(findTimeBucket, "No GROUP BY time bucket.");
+ }
+
+ private static ExpressionContext findGapfillExpressionContext(QueryContext queryContext) {
+ for (ExpressionContext expressionContext : queryContext.getSelectExpressions()) {
+ if (isGapfill(expressionContext)) {
+ return expressionContext;
+ }
+ }
+ return null;
+ }
+
+ public static ExpressionContext getGapfillExpressionContext(QueryContext queryContext) {
+ GapfillType gapfillType = queryContext.getGapfillType();
+ if (gapfillType == GapfillType.AGGREGATE_GAP_FILL || gapfillType == GapfillType.GAP_FILL) {
+ return findGapfillExpressionContext(queryContext);
+ } else if (gapfillType == GapfillType.GAP_FILL_AGGREGATE || gapfillType == GapfillType.AGGREGATE_GAP_FILL_AGGREGATE
+ || gapfillType == GapfillType.GAP_FILL_SELECT) {
+ return findGapfillExpressionContext(queryContext.getSubQueryContext());
+ } else {
+ return null;
+ }
+ }
+
+ public static int findTimeBucketColumnIndex(QueryContext queryContext) {
+ GapfillType gapfillType = queryContext.getGapfillType();
+ if (gapfillType == GapfillType.GAP_FILL_AGGREGATE
+ || gapfillType == GapfillType.GAP_FILL_SELECT
+ || gapfillType == GapfillType.AGGREGATE_GAP_FILL_AGGREGATE) {
+ queryContext = queryContext.getSubQueryContext();
+ }
+ List<ExpressionContext> expressionContexts = queryContext.getSelectExpressions();
+ for (int i = 0; i < expressionContexts.size(); i++) {
+ if (isGapfill(expressionContexts.get(i))) {
+ return i;
+ }
+ }
+ return -1;
+ }
+
+ public static ExpressionContext getTimeSeriesOnExpressionContext(ExpressionContext gapFillSelection) {
+ List<ExpressionContext> args = gapFillSelection.getFunction().getArguments();
+ for (int i = STARTING_INDEX_OF_OPTIONAL_ARGS_FOR_PRE_AGGREGATE_GAP_FILL; i < args.size(); i++) {
+ if (GapfillUtils.isTimeSeriesOn(args.get(i))) {
+ return args.get(i);
+ }
+ }
+ return null;
+ }
+
+ public static Map<String, ExpressionContext> getFillExpressions(ExpressionContext gapFillSelection) {
+ Map<String, ExpressionContext> fillExpressions = new HashMap<>();
+ List<ExpressionContext> args = gapFillSelection.getFunction().getArguments();
+ for (int i = STARTING_INDEX_OF_OPTIONAL_ARGS_FOR_PRE_AGGREGATE_GAP_FILL; i < args.size(); i++) {
+ if (GapfillUtils.isFill(args.get(i))) {
+ ExpressionContext fillExpression = args.get(i);
+ fillExpressions.put(fillExpression.getFunction().getArguments().get(0).getIdentifier(), fillExpression);
+ }
+ }
+ return fillExpressions;
+ }
+
+ public static String getTableName(PinotQuery pinotQuery) {
+ while (pinotQuery.getDataSource().getSubquery() != null) {
+ pinotQuery = pinotQuery.getDataSource().getSubquery();
+ }
+ return pinotQuery.getDataSource().getTableName();
+ }
+
+ public static BrokerRequest stripGapfill(BrokerRequest brokerRequest) {
+ if (brokerRequest.getPinotQuery().getDataSource() == null) {
+ return brokerRequest;
+ }
+ QueryContext queryContext = BrokerRequestToQueryContextConverter.convert(brokerRequest);
+ GapfillUtils.GapfillType gapfillType = queryContext.getGapfillType();
+ if (gapfillType == null) {
+ return brokerRequest;
+ }
+ switch (gapfillType) {
+ // a single SQL query with gapfill only
+ case GAP_FILL:
+ return stripGapfill(brokerRequest.getPinotQuery());
+ // gapfill as subquery, the outer query may have a filter
+ case GAP_FILL_SELECT:
+ // gapfill as subquery, the outer query has the aggregation
+ case GAP_FILL_AGGREGATE:
+ // aggregation as subquery, the outer query is gapfill
+ case AGGREGATE_GAP_FILL:
+ return stripGapfill(brokerRequest.getPinotQuery().getDataSource().getSubquery());
+ // aggregation as the second-level subquery, gapfill as the first-level subquery, a different aggregation as the outer query
+ case AGGREGATE_GAP_FILL_AGGREGATE:
+ return stripGapfill(brokerRequest.getPinotQuery().getDataSource().getSubquery().getDataSource().getSubquery());
+ default:
+ return brokerRequest;
+ }
+ }
+
+ private static BrokerRequest stripGapfill(PinotQuery pinotQuery) {
+ PinotQuery copy = new PinotQuery(pinotQuery);
+ BrokerRequest brokerRequest = new BrokerRequest();
+ brokerRequest.setPinotQuery(copy);
+ // Set table name in broker request because it is used for access control, query routing etc.
+ DataSource dataSource = copy.getDataSource();
+ if (dataSource != null) {
+ QuerySource querySource = new QuerySource();
+ querySource.setTableName(dataSource.getTableName());
+ brokerRequest.setQuerySource(querySource);
+ }
+ List<Expression> selectList = copy.getSelectList();
+ for (int i = 0; i < selectList.size(); i++) {
+ Expression select = selectList.get(i);
+ if (select.getType() != ExpressionType.FUNCTION) {
+ continue;
+ }
+ if (GAP_FILL.equalsIgnoreCase(select.getFunctionCall().getOperator())) {
Review comment:
Fixed
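The validation in `setGapfillType` above boils down to a mapping from the query-nesting pattern to a `GapfillType`. A hypothetical condensed sketch of that decision (the booleans stand in for the `QueryContext` checks, and it assumes the gapfill call sits either in the outer select or in the immediate subquery, as the validation requires):

```java
// Hypothetical sketch, not Pinot code: nesting pattern -> GapfillType mapping.
public class GapfillTypeSketch {
  public enum GapfillType {
    GAP_FILL,                     // single query, gapfill only
    AGGREGATE_GAP_FILL,           // aggregate subquery, gapfill outer query
    GAP_FILL_SELECT,              // gapfill subquery, plain select outer query
    GAP_FILL_AGGREGATE,           // gapfill subquery, aggregate outer query
    AGGREGATE_GAP_FILL_AGGREGATE  // aggregate -> gapfill -> aggregate
  }

  public static GapfillType resolve(boolean outerIsGapfill, boolean hasSubquery,
      boolean outerHasAggregation, boolean subqueryHasSubquery) {
    if (!hasSubquery) {
      // no subquery: gapfill, if present, is the whole query
      return outerIsGapfill ? GapfillType.GAP_FILL : null;
    }
    if (outerIsGapfill) {
      // outer gapfill requires an aggregating subquery
      return GapfillType.AGGREGATE_GAP_FILL;
    }
    if (!outerHasAggregation) {
      return GapfillType.GAP_FILL_SELECT;
    }
    return subqueryHasSubquery ? GapfillType.AGGREGATE_GAP_FILL_AGGREGATE
                               : GapfillType.GAP_FILL_AGGREGATE;
  }
}
```

This makes it easy to see why at most three nesting levels are meaningful: aggregation can only appear immediately below and/or immediately above the gapfill step.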
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r829543582
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/util/GapfillUtils.java
##########
@@ -119,4 +137,265 @@ static public Serializable getDefaultValue(DataSchema.ColumnDataType dataType) {
private static String canonicalizeFunctionName(String functionName) {
return StringUtils.remove(functionName, '_').toLowerCase();
}
+ public static BrokerRequest stripGapfill(BrokerRequest brokerRequest) {
+ if (brokerRequest.getPinotQuery().getDataSource() == null) {
+ return brokerRequest;
+ }
+ QueryContext queryContext = BrokerRequestToQueryContextConverter.convert(brokerRequest);
Review comment:
Fixed
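One detail worth calling out in this utility class: the table name used for access control and query routing lives on the innermost data source, so `getTableName` unwraps nested subqueries first. A self-contained sketch of that walk (the `DataSource` class here is a stand-in for Pinot's generated Thrift type, not the real API):

```java
// Hypothetical sketch: resolve the physical table name by walking through
// the gapfill/aggregation subquery nesting to the innermost data source.
public class TableNameSketch {
  public static final class DataSource {
    final String tableName;    // set for a physical table
    final DataSource subquery; // set when this source wraps another query
    public DataSource(String tableName, DataSource subquery) {
      this.tableName = tableName;
      this.subquery = subquery;
    }
  }

  public static String getTableName(DataSource dataSource) {
    while (dataSource.subquery != null) {  // unwrap one nesting level at a time
      dataSource = dataSource.subquery;
    }
    return dataSource.tableName;
  }
}
```

The same unwrapping motivates `stripGapfill`: the servers only ever see the innermost, gapfill-free query, while the gapfill and outer aggregation steps run at reduce time on the broker.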
[GitHub] [pinot] codecov-commenter edited a comment on pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#issuecomment-1013577523
# [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) Report
> Merging [#8029](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (fda4e72) into [master](https://codecov.io/gh/apache/pinot/commit/cd311bcc2da2d0c7ecb05970581926b5af37f358?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (cd311bc) will **decrease** coverage by `32.97%`.
> The diff coverage is `17.89%`.
> :exclamation: Current head fda4e72 differs from pull request most recent head 96e0a1c. Consider uploading reports for the commit 96e0a1c to get more accurate results
```diff
@@              Coverage Diff              @@
##             master    #8029       +/-   ##
=============================================
- Coverage     70.76%   37.78%   -32.98%
+ Complexity     4264       81     -4183
=============================================
  Files          1639     1648        +9
  Lines         85920    86419      +499
  Branches      12922    13063      +141
=============================================
- Hits          60801    32655    -28146
- Misses        20925    51168    +30243
+ Partials       4194     2596     -1598
```
| Flag | Coverage Δ | |
|---|---|---|
| integration1 | `28.74% <17.55%> (-0.14%)` | :arrow_down: |
| integration2 | `27.27% <17.89%> (-0.28%)` | :arrow_down: |
| unittests1 | `?` | |
| unittests2 | `14.10% <0.33%> (-0.09%)` | :arrow_down: |
Flags with carried forward coverage won't be shown. [Click here](https://docs.codecov.io/docs/carryforward-flags?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#carryforward-flags-in-the-pull-request-comment) to find out more.
| [Impacted Files](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | Coverage Δ | |
|---|---|---|
| [...roker/requesthandler/GrpcBrokerRequestHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtYnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9icm9rZXIvcmVxdWVzdGhhbmRsZXIvR3JwY0Jyb2tlclJlcXVlc3RIYW5kbGVyLmphdmE=) | `75.67% <ø> (ø)` | |
| [.../pinot/core/query/reduce/GapfillFilterHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvR2FwZmlsbEZpbHRlckhhbmRsZXIuamF2YQ==) | `0.00% <0.00%> (ø)` | |
| [...ache/pinot/core/query/reduce/GapfillProcessor.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvR2FwZmlsbFByb2Nlc3Nvci5qYXZh) | `0.00% <0.00%> (ø)` | |
| [...e/pinot/core/query/reduce/RowBasedBlockValSet.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUm93QmFzZWRCbG9ja1ZhbFNldC5qYXZh) | `0.00% <0.00%> (ø)` | |
| [...t/core/query/selection/SelectionOperatorUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9zZWxlY3Rpb24vU2VsZWN0aW9uT3BlcmF0b3JVdGlscy5qYXZh) | `69.70% <ø> (-22.48%)` | :arrow_down: |
| [.../java/org/apache/pinot/core/util/GapfillUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS91dGlsL0dhcGZpbGxVdGlscy5qYXZh) | `16.57% <13.46%> (-47.07%)` | :arrow_down: |
| [...pache/pinot/common/utils/request/RequestUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9jb21tb24vdXRpbHMvcmVxdWVzdC9SZXF1ZXN0VXRpbHMuamF2YQ==) | `56.46% <33.33%> (-31.04%)` | :arrow_down: |
| [...inot/core/query/reduce/PostAggregationHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUG9zdEFnZ3JlZ2F0aW9uSGFuZGxlci5qYXZh) | `83.78% <50.00%> (-7.99%)` | :arrow_down: |
| [.../core/query/reduce/filter/PredicateRowMatcher.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvZmlsdGVyL1ByZWRpY2F0ZVJvd01hdGNoZXIuamF2YQ==) | `50.00% <50.00%> (ø)` | |
| [...org/apache/pinot/sql/parsers/CalciteSqlParser.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9zcWwvcGFyc2Vycy9DYWxjaXRlU3FsUGFyc2VyLmphdmE=) | `63.74% <55.55%> (-23.97%)` | :arrow_down: |
| ... and [946 more](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Last update [cd311bc...96e0a1c](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
[GitHub] [pinot] codecov-commenter edited a comment on pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#issuecomment-1013577523
# [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) Report
> Merging [#8029](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (597b9c0) into [master](https://codecov.io/gh/apache/pinot/commit/91c2ebbf297c4bf3fecb5f98413e9f00e324e2dc?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (91c2ebb) will **decrease** coverage by `13.50%`.
> The diff coverage is `0.31%`.
[![Impacted file tree graph](https://codecov.io/gh/apache/pinot/pull/8029/graphs/tree.svg?width=650&height=150&src=pr&token=4ibza2ugkz&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
```diff
@@              Coverage Diff              @@
##             master    #8029       +/-   ##
=============================================
- Coverage     27.62%   14.11%   -13.51%
- Complexity        0       81       +81
=============================================
  Files          1624     1600       -24
  Lines         85450    84442     -1008
  Branches      12882    12855       -27
=============================================
- Hits          23604    11923    -11681
- Misses        59631    71619    +11988
+ Partials       2215      900     -1315
```
| Flag | Coverage Δ | |
|---|---|---|
| integration2 | `?` | |
| unittests2 | `14.11% <0.31%> (?)` | |
Flags with carried forward coverage won't be shown. [Click here](https://docs.codecov.io/docs/carryforward-flags?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#carryforward-flags-in-the-pull-request-comment) to find out more.
| [Impacted Files](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | Coverage Δ | |
|---|---|---|
| [...roker/requesthandler/GrpcBrokerRequestHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtYnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9icm9rZXIvcmVxdWVzdGhhbmRsZXIvR3JwY0Jyb2tlclJlcXVlc3RIYW5kbGVyLmphdmE=) | `0.00% <ø> (-78.58%)` | :arrow_down: |
| [...thandler/SingleConnectionBrokerRequestHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtYnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9icm9rZXIvcmVxdWVzdGhhbmRsZXIvU2luZ2xlQ29ubmVjdGlvbkJyb2tlclJlcXVlc3RIYW5kbGVyLmphdmE=) | `13.20% <0.00%> (-73.83%)` | :arrow_down: |
| [...org/apache/pinot/sql/parsers/CalciteSqlParser.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9zcWwvcGFyc2Vycy9DYWxjaXRlU3FsUGFyc2VyLmphdmE=) | `0.00% <0.00%> (-64.59%)` | :arrow_down: |
| [...t/controller/api/resources/PinotQueryResource.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29udHJvbGxlci9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29udHJvbGxlci9hcGkvcmVzb3VyY2VzL1Bpbm90UXVlcnlSZXNvdXJjZS5qYXZh) | `0.00% <0.00%> (-50.35%)` | :arrow_down: |
| [...e/pinot/core/query/reduce/BrokerReduceService.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvQnJva2VyUmVkdWNlU2VydmljZS5qYXZh) | `0.00% <0.00%> (-94.60%)` | :arrow_down: |
| [...query/reduce/ColumnDataToBlockValSetConverter.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvQ29sdW1uRGF0YVRvQmxvY2tWYWxTZXRDb252ZXJ0ZXIuamF2YQ==) | `0.00% <0.00%> (ø)` | |
| [...ache/pinot/core/query/reduce/GapFillProcessor.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvR2FwRmlsbFByb2Nlc3Nvci5qYXZh) | `0.00% <0.00%> (ø)` | |
| [.../pinot/core/query/reduce/GapfillFilterHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvR2FwZmlsbEZpbHRlckhhbmRsZXIuamF2YQ==) | `0.00% <0.00%> (ø)` | |
| [...e/pinot/core/query/reduce/HavingFilterHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvSGF2aW5nRmlsdGVySGFuZGxlci5qYXZh) | `0.00% <0.00%> (-78.27%)` | :arrow_down: |
| [...inot/core/query/reduce/PostAggregationHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUG9zdEFnZ3JlZ2F0aW9uSGFuZGxlci5qYXZh) | `0.00% <0.00%> (-84.71%)` | :arrow_down: |
| ... and [817 more](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Last update [91c2ebb...597b9c0](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: commits-unsubscribe@pinot.apache.org
For queries about this service, please contact Infrastructure at:
users@infra.apache.org
---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@pinot.apache.org
For additional commands, e-mail: commits-help@pinot.apache.org
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r831525452
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/reduce/GapfillProcessor.java
##########
@@ -0,0 +1,477 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.core.query.reduce;
+
+import com.google.common.base.Preconditions;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import org.apache.pinot.common.request.context.ExpressionContext;
+import org.apache.pinot.common.request.context.FunctionContext;
+import org.apache.pinot.common.response.broker.BrokerResponseNative;
+import org.apache.pinot.common.response.broker.ResultTable;
+import org.apache.pinot.common.utils.DataSchema;
+import org.apache.pinot.common.utils.DataSchema.ColumnDataType;
+import org.apache.pinot.core.common.BlockValSet;
+import org.apache.pinot.core.data.table.Key;
+import org.apache.pinot.core.query.aggregation.function.AggregationFunction;
+import org.apache.pinot.core.query.aggregation.function.AggregationFunctionFactory;
+import org.apache.pinot.core.query.aggregation.function.CountAggregationFunction;
+import org.apache.pinot.core.query.aggregation.groupby.GroupByResultHolder;
+import org.apache.pinot.core.query.request.context.QueryContext;
+import org.apache.pinot.core.util.GapfillUtils;
+import org.apache.pinot.spi.data.DateTimeFormatSpec;
+import org.apache.pinot.spi.data.DateTimeGranularitySpec;
+
+
+/**
+ * Helper class to reduce and set gap fill results into the BrokerResponseNative
+ */
+@SuppressWarnings({"rawtypes", "unchecked"})
+public class GapfillProcessor {
+ private final QueryContext _queryContext;
+
+ private final int _limitForAggregatedResult;
+ private final DateTimeGranularitySpec _gapfillDateTimeGranularity;
+ private final DateTimeGranularitySpec _postGapfillDateTimeGranularity;
+ private final DateTimeFormatSpec _dateTimeFormatter;
+ private final long _startMs;
+ private final long _endMs;
+ private final long _gapfillTimeBucketSize;
+ private final long _postGapfillTimeBucketSize;
+ private final int _numOfTimeBuckets;
+ private final List<Integer> _groupByKeyIndexes;
+ private final Set<Key> _groupByKeys;
+ private final Map<Key, Object[]> _previousByGroupKey;
+ private final Map<String, ExpressionContext> _fillExpressions;
+ private final List<ExpressionContext> _timeSeries;
+ private final GapfillUtils.GapfillType _gapfillType;
+ private int _limitForGapfilledResult;
+ private boolean[] _isGroupBySelections;
+ private final int _timeBucketColumnIndex;
+ private int[] _sourceColumnIndexForResultSchema = null;
+ private final int _aggregationSize;
+
+ GapfillProcessor(QueryContext queryContext, GapfillUtils.GapfillType gapfillType) {
+ _queryContext = queryContext;
+ _gapfillType = gapfillType;
+ _limitForAggregatedResult = queryContext.getLimit();
+ if (_gapfillType == GapfillUtils.GapfillType.AGGREGATE_GAP_FILL
+ || _gapfillType == GapfillUtils.GapfillType.GAP_FILL) {
+ _limitForGapfilledResult = queryContext.getLimit();
+ } else {
+ _limitForGapfilledResult = queryContext.getSubquery().getLimit();
+ }
+
+ ExpressionContext gapFillSelection = GapfillUtils.getGapfillExpressionContext(queryContext, _gapfillType);
+ _timeBucketColumnIndex = GapfillUtils.findTimeBucketColumnIndex(queryContext, _gapfillType);
+
+ List<ExpressionContext> args = gapFillSelection.getFunction().getArguments();
+
+ _dateTimeFormatter = new DateTimeFormatSpec(args.get(1).getLiteral());
+ _gapfillDateTimeGranularity = new DateTimeGranularitySpec(args.get(4).getLiteral());
+ if (args.get(5).getLiteral() == null) {
+ _postGapfillDateTimeGranularity = _gapfillDateTimeGranularity;
+ } else {
+ _postGapfillDateTimeGranularity = new DateTimeGranularitySpec(args.get(5).getLiteral());
+ }
+ String start = args.get(2).getLiteral();
+ _startMs = truncate(_dateTimeFormatter.fromFormatToMillis(start));
+ String end = args.get(3).getLiteral();
+ _endMs = truncate(_dateTimeFormatter.fromFormatToMillis(end));
+ _gapfillTimeBucketSize = _gapfillDateTimeGranularity.granularityToMillis();
+ _postGapfillTimeBucketSize = _postGapfillDateTimeGranularity.granularityToMillis();
+ _numOfTimeBuckets = (int) ((_endMs - _startMs) / _gapfillTimeBucketSize);
+
+ _fillExpressions = GapfillUtils.getFillExpressions(gapFillSelection);
+
+ _aggregationSize = (int) (_postGapfillTimeBucketSize / _gapfillTimeBucketSize);
+
+ _previousByGroupKey = new HashMap<>();
+ _groupByKeyIndexes = new ArrayList<>();
+ _groupByKeys = new HashSet<>();
+
+ ExpressionContext timeseriesOn = GapfillUtils.getTimeSeriesOnExpressionContext(gapFillSelection);
+ _timeSeries = timeseriesOn.getFunction().getArguments();
+ }
+
+ private int findGapfillBucketIndex(long time) {
+ return (int) ((time - _startMs) / _gapfillTimeBucketSize);
+ }
+
+ private void replaceColumnNameWithAlias(DataSchema dataSchema) {
+ QueryContext queryContext;
+ if (_gapfillType == GapfillUtils.GapfillType.AGGREGATE_GAP_FILL_AGGREGATE) {
+ queryContext = _queryContext.getSubquery().getSubquery();
+ } else if (_gapfillType == GapfillUtils.GapfillType.GAP_FILL) {
+ queryContext = _queryContext;
+ } else {
+ queryContext = _queryContext.getSubquery();
+ }
+ List<String> aliasList = queryContext.getAliasList();
+ Map<String, String> columnNameToAliasMap = new HashMap<>();
+ for (int i = 0; i < aliasList.size(); i++) {
+ if (aliasList.get(i) != null) {
+ ExpressionContext selection = queryContext.getSelectExpressions().get(i);
+ if (GapfillUtils.isGapfill(selection)) {
+ selection = selection.getFunction().getArguments().get(0);
+ }
+ columnNameToAliasMap.put(selection.toString(), aliasList.get(i));
+ }
+ }
+ for (int i = 0; i < dataSchema.getColumnNames().length; i++) {
+ if (columnNameToAliasMap.containsKey(dataSchema.getColumnNames()[i])) {
+ dataSchema.getColumnNames()[i] = columnNameToAliasMap.get(dataSchema.getColumnNames()[i]);
+ }
+ }
+ }
+
+ /**
+   * Three things happen here:
+   * 1. Sort the result sets from all Pinot servers based on timestamp
+ * 2. Gapfill the data for missing entities per time bucket
+ * 3. Aggregate the dataset per time bucket.
+ */
+ public void process(BrokerResponseNative brokerResponseNative) {
+ DataSchema dataSchema = brokerResponseNative.getResultTable().getDataSchema();
+ DataSchema resultTableSchema = getResultTableDataSchema(dataSchema);
+ if (brokerResponseNative.getResultTable().getRows().isEmpty()) {
+ brokerResponseNative.setResultTable(new ResultTable(resultTableSchema, Collections.emptyList()));
+ return;
+ }
+
+ String[] columns = dataSchema.getColumnNames();
+
+ Map<String, Integer> indexes = new HashMap<>();
+ for (int i = 0; i < columns.length; i++) {
+ indexes.put(columns[i], i);
+ }
+
+ _isGroupBySelections = new boolean[dataSchema.getColumnDataTypes().length];
+
+    // The first argument of timeSeries is the time column; the remaining ones define the entity.
+    for (ExpressionContext entityColumn : _timeSeries) {
+      int index = indexes.get(entityColumn.getIdentifier());
+ _isGroupBySelections[index] = true;
+ }
+
+ for (int i = 0; i < _isGroupBySelections.length; i++) {
+ if (_isGroupBySelections[i]) {
+ _groupByKeyIndexes.add(i);
+ }
+ }
+
+ List<Object[]>[] timeBucketedRawRows = putRawRowsIntoTimeBucket(brokerResponseNative.getResultTable().getRows());
+
+ replaceColumnNameWithAlias(dataSchema);
+
+ if (_queryContext.getAggregationFunctions() == null) {
+
+ Map<String, Integer> sourceColumnsIndexes = new HashMap<>();
+ for (int i = 0; i < dataSchema.getColumnNames().length; i++) {
+ sourceColumnsIndexes.put(dataSchema.getColumnName(i), i);
+ }
+ _sourceColumnIndexForResultSchema = new int[resultTableSchema.getColumnNames().length];
+ for (int i = 0; i < _sourceColumnIndexForResultSchema.length; i++) {
+ _sourceColumnIndexForResultSchema[i] = sourceColumnsIndexes.get(resultTableSchema.getColumnName(i));
+ }
+ }
+
+ List<Object[]> resultRows = gapFillAndAggregate(timeBucketedRawRows, resultTableSchema, dataSchema);
+ brokerResponseNative.setResultTable(new ResultTable(resultTableSchema, resultRows));
+ }
+
+ /**
+ * Constructs the DataSchema for the ResultTable.
+ */
+ private DataSchema getResultTableDataSchema(DataSchema dataSchema) {
+ if (_gapfillType == GapfillUtils.GapfillType.GAP_FILL) {
+ return dataSchema;
+ }
+
+ int numOfColumns = _queryContext.getSelectExpressions().size();
+ String[] columnNames = new String[numOfColumns];
+ ColumnDataType[] columnDataTypes = new ColumnDataType[numOfColumns];
+ for (int i = 0; i < numOfColumns; i++) {
+ ExpressionContext expressionContext = _queryContext.getSelectExpressions().get(i);
+ if (GapfillUtils.isGapfill(expressionContext)) {
+ expressionContext = expressionContext.getFunction().getArguments().get(0);
+ }
+ if (expressionContext.getType() != ExpressionContext.Type.FUNCTION) {
+ columnNames[i] = expressionContext.getIdentifier();
+ columnDataTypes[i] = ColumnDataType.STRING;
+ } else {
+ FunctionContext functionContext = expressionContext.getFunction();
+ AggregationFunction aggregationFunction =
+ AggregationFunctionFactory.getAggregationFunction(functionContext, _queryContext);
+ columnDataTypes[i] = aggregationFunction.getFinalResultColumnType();
+ columnNames[i] = functionContext.toString();
+ }
+ }
+ return new DataSchema(columnNames, columnDataTypes);
+ }
+
+ private Key constructGroupKeys(Object[] row) {
+ Object[] groupKeys = new Object[_groupByKeyIndexes.size()];
+ for (int i = 0; i < _groupByKeyIndexes.size(); i++) {
+ groupKeys[i] = row[_groupByKeyIndexes.get(i)];
+ }
+ return new Key(groupKeys);
+ }
+
+ private long truncate(long epoch) {
+ int sz = _gapfillDateTimeGranularity.getSize();
+ return epoch / sz * sz;
+ }
+
+ private List<Object[]> gapFillAndAggregate(List<Object[]>[] timeBucketedRawRows,
+ DataSchema dataSchemaForAggregatedResult, DataSchema dataSchema) {
+ List<Object[]> result = new ArrayList<>();
+
+ GapfillFilterHandler postGapfillFilterHandler = null;
+ if (_queryContext.getSubquery() != null && _queryContext.getFilter() != null) {
+ postGapfillFilterHandler = new GapfillFilterHandler(_queryContext.getFilter(), dataSchema);
+ }
+ GapfillFilterHandler postAggregateHavingFilterHandler = null;
+ if (_queryContext.getHavingFilter() != null) {
+ postAggregateHavingFilterHandler =
+ new GapfillFilterHandler(_queryContext.getHavingFilter(), dataSchemaForAggregatedResult);
+ }
+ long start = _startMs;
+ ColumnDataType[] resultColumnDataTypes = dataSchema.getColumnDataTypes();
+ List<Object[]> bucketedResult = new ArrayList<>();
+ for (long time = _startMs; time < _endMs; time += _gapfillTimeBucketSize) {
+ int index = findGapfillBucketIndex(time);
+ gapfill(time, bucketedResult, timeBucketedRawRows[index], dataSchema, postGapfillFilterHandler);
+ if (_queryContext.getAggregationFunctions() == null) {
Review comment:
When `_queryContext.getAggregationFunctions() == null`, the bucketed result is added directly to the final result. But even when `_aggregationSize == 1`, the rows still need to go through the aggregation.
So I cannot take this suggestion.
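To make the distinction concrete, here is a minimal, self-contained sketch of the branch being discussed (hypothetical names, heavily simplified from the actual `gapFillAndAggregate` logic):

```java
// Hypothetical sketch: the bucketed result is appended directly only when the
// outer query has no aggregation functions; an aggregation size of 1 alone
// does NOT bypass the aggregation path.
public class AggregationPathSketch {
    static String choosePath(boolean hasAggregationFunctions, int aggregationSize) {
        if (!hasAggregationFunctions) {
            // No outer aggregation: bucketed rows go straight to the final result.
            return "append-bucketed-result";
        }
        // Even with aggregationSize == 1, rows must pass through aggregation.
        return "aggregate";
    }

    public static void main(String[] args) {
        System.out.println(choosePath(false, 1)); // append-bucketed-result
        System.out.println(choosePath(true, 1));  // aggregate
    }
}
```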
[GitHub] [pinot] Jackie-Jiang commented on a change in pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
Jackie-Jiang commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r831445321
##########
File path: pinot-broker/src/main/java/org/apache/pinot/broker/requesthandler/BaseBrokerRequestHandler.java
##########
@@ -217,7 +218,10 @@ private BrokerResponseNative handleSQLRequest(long requestId, String query, Json
requestStatistics.setErrorCode(QueryException.PQL_PARSING_ERROR_CODE);
return new BrokerResponseNative(QueryException.getException(QueryException.PQL_PARSING_ERROR, e));
}
- PinotQuery pinotQuery = brokerRequest.getPinotQuery();
+
+ BrokerRequest serverBrokerRequest = GapfillUtils.stripGapfill(brokerRequest);
+
+ PinotQuery pinotQuery = serverBrokerRequest.getPinotQuery();
Review comment:
Still suggest putting the `setOptions()` call first, and having `stripGapfill` just carry over the options. It currently works, but relies on both queries sharing the same reference to the options.
```suggestion
setOptions(brokerRequest.getPinotQuery());
BrokerRequest serverBrokerRequest = GapfillUtils.stripGapfill(brokerRequest);
PinotQuery pinotQuery = serverBrokerRequest.getPinotQuery();
```
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/util/GapfillUtils.java
##########
@@ -116,7 +132,265 @@ static public Serializable getDefaultValue(DataSchema.ColumnDataType dataType) {
}
}
- private static String canonicalizeFunctionName(String functionName) {
- return StringUtils.remove(functionName, '_').toLowerCase();
+ public static boolean isGapfill(ExpressionContext expressionContext) {
+ if (expressionContext.getType() != ExpressionContext.Type.FUNCTION) {
+ return false;
+ }
+
+ return GAP_FILL.equals(expressionContext.getFunction().getFunctionName());
+ }
+
+ private static boolean isGapfill(QueryContext queryContext) {
+ for (ExpressionContext expressionContext : queryContext.getSelectExpressions()) {
+ if (isGapfill(expressionContext)) {
+ return true;
+ }
+ }
+ return false;
+ }
+
+ /**
+ * Get the gapfill type for queryContext. Also do the validation for gapfill request.
+ * @param queryContext
+ */
+ public static GapfillType getGapfillType(QueryContext queryContext) {
+ GapfillType gapfillType = null;
+ if (queryContext.getSubquery() == null) {
+ if (isGapfill(queryContext)) {
+ Preconditions.checkArgument(queryContext.getAggregationFunctions() == null,
+ "Aggregation and Gapfill can not be in the same sql statement.");
+ gapfillType = GapfillType.GAP_FILL;
+ }
+ } else if (isGapfill(queryContext)) {
+ Preconditions.checkArgument(queryContext.getSubquery().getAggregationFunctions() != null,
+ "Select and Gapfill should be in the same sql statement.");
+ Preconditions.checkArgument(queryContext.getSubquery().getSubquery() == null,
+ "There is no three levels nesting sql when the outer query is gapfill.");
+ gapfillType = GapfillType.AGGREGATE_GAP_FILL;
+ } else if (isGapfill(queryContext.getSubquery())) {
+ if (queryContext.getAggregationFunctions() == null) {
+ gapfillType = GapfillType.GAP_FILL_SELECT;
+ } else if (queryContext.getSubquery().getSubquery() == null) {
+ gapfillType = GapfillType.GAP_FILL_AGGREGATE;
+ } else {
+ Preconditions
+ .checkArgument(queryContext.getSubquery().getSubquery().getAggregationFunctions() != null,
+ "Select cannot happen before gapfill.");
+ gapfillType = GapfillType.AGGREGATE_GAP_FILL_AGGREGATE;
+ }
+ }
+
+ if (gapfillType == null) {
+ return gapfillType;
+ }
+
+ ExpressionContext gapFillSelection = GapfillUtils.getGapfillExpressionContext(queryContext, gapfillType);
+
+ Preconditions.checkArgument(gapFillSelection != null && gapFillSelection.getFunction() != null,
+ "Gapfill Expression should be function.");
+ List<ExpressionContext> args = gapFillSelection.getFunction().getArguments();
+ Preconditions.checkArgument(args.size() > 5, "PreAggregateGapFill does not have correct number of arguments.");
+ Preconditions.checkArgument(args.get(1).getLiteral() != null,
+ "The second argument of PostAggregateGapFill should be TimeFormatter.");
+ Preconditions.checkArgument(args.get(2).getLiteral() != null,
+ "The third argument of PostAggregateGapFill should be start time.");
+ Preconditions.checkArgument(args.get(3).getLiteral() != null,
+ "The fourth argument of PostAggregateGapFill should be end time.");
+ Preconditions.checkArgument(args.get(4).getLiteral() != null,
+ "The fifth argument of PostAggregateGapFill should be time bucket size.");
+
+ ExpressionContext timeseriesOn = GapfillUtils.getTimeSeriesOnExpressionContext(gapFillSelection);
+ Preconditions.checkArgument(timeseriesOn != null, "The TimeSeriesOn expressions should be specified.");
+
+ if (queryContext.getAggregationFunctions() == null) {
+ return gapfillType;
+ }
+
+ List<ExpressionContext> groupbyExpressions = queryContext.getGroupByExpressions();
+ Preconditions.checkArgument(groupbyExpressions != null, "No GroupBy Clause.");
+ List<ExpressionContext> innerSelections = queryContext.getSubquery().getSelectExpressions();
+ String timeBucketCol = null;
+ List<String> strAlias = queryContext.getSubquery().getAliasList();
+ for (int i = 0; i < innerSelections.size(); i++) {
+ ExpressionContext innerSelection = innerSelections.get(i);
+ if (GapfillUtils.isGapfill(innerSelection)) {
+ if (strAlias.get(i) != null) {
+ timeBucketCol = strAlias.get(i);
+ } else {
+ timeBucketCol = innerSelection.getFunction().getArguments().get(0).toString();
+ }
+ break;
+ }
+ }
+
+ Preconditions.checkArgument(timeBucketCol != null, "No Group By timebucket.");
+
+ boolean findTimeBucket = false;
+ for (ExpressionContext groupbyExp : groupbyExpressions) {
+ if (timeBucketCol.equals(groupbyExp.toString())) {
+ findTimeBucket = true;
+ break;
+ }
+ }
+
+ Preconditions.checkArgument(findTimeBucket, "No Group By timebucket.");
+ return gapfillType;
+ }
+
+ private static ExpressionContext findGapfillExpressionContext(QueryContext queryContext) {
+ for (ExpressionContext expressionContext : queryContext.getSelectExpressions()) {
+ if (isGapfill(expressionContext)) {
+ return expressionContext;
+ }
+ }
+ return null;
+ }
+
+ public static ExpressionContext getGapfillExpressionContext(QueryContext queryContext, GapfillType gapfillType) {
+ if (gapfillType == GapfillType.AGGREGATE_GAP_FILL || gapfillType == GapfillType.GAP_FILL) {
+ return findGapfillExpressionContext(queryContext);
+ } else if (gapfillType == GapfillType.GAP_FILL_AGGREGATE || gapfillType == GapfillType.AGGREGATE_GAP_FILL_AGGREGATE
+ || gapfillType == GapfillType.GAP_FILL_SELECT) {
+ return findGapfillExpressionContext(queryContext.getSubquery());
+ } else {
+ return null;
+ }
+ }
+
+ public static int findTimeBucketColumnIndex(QueryContext queryContext, GapfillType gapfillType) {
+ if (gapfillType == GapfillType.GAP_FILL_AGGREGATE
+ || gapfillType == GapfillType.GAP_FILL_SELECT
+ || gapfillType == GapfillType.AGGREGATE_GAP_FILL_AGGREGATE) {
+ queryContext = queryContext.getSubquery();
+ }
+ List<ExpressionContext> expressionContexts = queryContext.getSelectExpressions();
+ for (int i = 0; i < expressionContexts.size(); i++) {
+ if (isGapfill(expressionContexts.get(i))) {
+ return i;
+ }
+ }
+ return -1;
+ }
+
+ public static ExpressionContext getTimeSeriesOnExpressionContext(ExpressionContext gapFillSelection) {
+ List<ExpressionContext> args = gapFillSelection.getFunction().getArguments();
+ for (int i = STARTING_INDEX_OF_OPTIONAL_ARGS_FOR_PRE_AGGREGATE_GAP_FILL; i < args.size(); i++) {
+ if (GapfillUtils.isTimeSeriesOn(args.get(i))) {
+ return args.get(i);
+ }
+ }
+ return null;
+ }
+
+ public static Map<String, ExpressionContext> getFillExpressions(ExpressionContext gapFillSelection) {
+ Map<String, ExpressionContext> fillExpressions = new HashMap<>();
+ List<ExpressionContext> args = gapFillSelection.getFunction().getArguments();
+ for (int i = STARTING_INDEX_OF_OPTIONAL_ARGS_FOR_PRE_AGGREGATE_GAP_FILL; i < args.size(); i++) {
+ if (GapfillUtils.isFill(args.get(i))) {
+ ExpressionContext fillExpression = args.get(i);
+ fillExpressions.put(fillExpression.getFunction().getArguments().get(0).getIdentifier(), fillExpression);
+ }
+ }
+ return fillExpressions;
+ }
+
+ public static BrokerRequest stripGapfill(BrokerRequest brokerRequest) {
+ if (brokerRequest.getPinotQuery().getDataSource() == null) {
+ return brokerRequest;
+ }
+ PinotQuery pinotQuery = brokerRequest.getPinotQuery();
+ if (pinotQuery.getDataSource().getSubquery() == null && !hasGapfill(pinotQuery)) {
+ return brokerRequest;
+ }
+
+ // carry over the query options from original query to server query.
+ Map<String, String> queryOptions = pinotQuery.getQueryOptions();
Review comment:
`debugOptions` also needs to be carried over
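A hedged sketch of the requested fix — the field and method names mirror the Thrift-style `PinotQuery` accessors but are illustrative stand-ins, not the actual Pinot API:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative only: when building the stripped server-side query, copy BOTH
// option maps from the original query, not just queryOptions.
public class CarryOverOptionsSketch {
    static final class QuerySketch {
        Map<String, String> _queryOptions = new HashMap<>();
        Map<String, String> _debugOptions = new HashMap<>();
    }

    static QuerySketch stripGapfill(QuerySketch original) {
        QuerySketch server = new QuerySketch();
        // Carry over the query options AND the debug options.
        server._queryOptions.putAll(original._queryOptions);
        server._debugOptions.putAll(original._debugOptions);
        return server;
    }

    public static void main(String[] args) {
        QuerySketch q = new QuerySketch();
        q._queryOptions.put("timeoutMs", "1000");
        q._debugOptions.put("traceEnabled", "true");
        QuerySketch server = stripGapfill(q);
        System.out.println(server._debugOptions.get("traceEnabled")); // true
    }
}
```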
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/reduce/GapfillProcessor.java
##########
@@ -0,0 +1,471 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.core.query.reduce;
+
+import com.google.common.base.Preconditions;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import org.apache.pinot.common.request.context.ExpressionContext;
+import org.apache.pinot.common.request.context.FunctionContext;
+import org.apache.pinot.common.response.broker.BrokerResponseNative;
+import org.apache.pinot.common.response.broker.ResultTable;
+import org.apache.pinot.common.utils.DataSchema;
+import org.apache.pinot.common.utils.DataSchema.ColumnDataType;
+import org.apache.pinot.core.common.BlockValSet;
+import org.apache.pinot.core.data.table.Key;
+import org.apache.pinot.core.query.aggregation.function.AggregationFunction;
+import org.apache.pinot.core.query.aggregation.function.AggregationFunctionFactory;
+import org.apache.pinot.core.query.aggregation.function.CountAggregationFunction;
+import org.apache.pinot.core.query.aggregation.groupby.GroupByResultHolder;
+import org.apache.pinot.core.query.request.context.QueryContext;
+import org.apache.pinot.core.util.GapfillUtils;
+import org.apache.pinot.spi.data.DateTimeFormatSpec;
+import org.apache.pinot.spi.data.DateTimeGranularitySpec;
+
+
+/**
+ * Helper class to reduce and set gap fill results into the BrokerResponseNative
+ */
+@SuppressWarnings({"rawtypes", "unchecked"})
+public class GapfillProcessor {
+ private final QueryContext _queryContext;
+
+ private final int _limitForAggregatedResult;
+ private final DateTimeGranularitySpec _gapfillDateTimeGranularity;
+ private final DateTimeGranularitySpec _postGapfillDateTimeGranularity;
+ private final DateTimeFormatSpec _dateTimeFormatter;
+ private final long _startMs;
+ private final long _endMs;
+ private final long _gapfillTimeBucketSize;
+ private final long _postGapfillTimeBucketSize;
+ private final int _numOfTimeBuckets;
+ private final List<Integer> _groupByKeyIndexes;
+ private final Set<Key> _groupByKeys;
+ private final Map<Key, Object[]> _previousByGroupKey;
+ private final Map<String, ExpressionContext> _fillExpressions;
+ private final List<ExpressionContext> _timeSeries;
+ private final GapfillUtils.GapfillType _gapfillType;
+ private int _limitForGapfilledResult;
+ private boolean[] _isGroupBySelections;
+ private final int _timeBucketColumnIndex;
+ private int[] _sourceColumnIndexForResultSchema = null;
+ private final int _aggregationSize;
+
+ GapfillProcessor(QueryContext queryContext, GapfillUtils.GapfillType gapfillType) {
+ _queryContext = queryContext;
+ _gapfillType = gapfillType;
+ _limitForAggregatedResult = queryContext.getLimit();
+ if (_gapfillType == GapfillUtils.GapfillType.AGGREGATE_GAP_FILL
+ || _gapfillType == GapfillUtils.GapfillType.GAP_FILL) {
+ _limitForGapfilledResult = queryContext.getLimit();
+ } else {
+ _limitForGapfilledResult = queryContext.getSubquery().getLimit();
+ }
+
+ ExpressionContext gapFillSelection = GapfillUtils.getGapfillExpressionContext(queryContext, _gapfillType);
+ _timeBucketColumnIndex = GapfillUtils.findTimeBucketColumnIndex(queryContext, _gapfillType);
+
+ List<ExpressionContext> args = gapFillSelection.getFunction().getArguments();
+
+ _dateTimeFormatter = new DateTimeFormatSpec(args.get(1).getLiteral());
+ _gapfillDateTimeGranularity = new DateTimeGranularitySpec(args.get(4).getLiteral());
+ _postGapfillDateTimeGranularity = new DateTimeGranularitySpec(args.get(5).getLiteral());
Review comment:
My concern is that this does not seem to align with the gapfill definition. Currently the bucketed rows are passed to the parent aggregation function to be aggregated, but conceptually there should be no dependency between the parent query and the subquery; the parent query should just run on top of the subquery results.
If this functionality is a must-have, we may keep the workaround, but let's make this argument optional so that it is easy to use. I find it quite hard to understand the semantics of this post-gapfill granularity
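For anyone else puzzling over the same semantics, here is a worked (hypothetical) example of how the two granularity arguments relate, assuming the aggregation size is the ratio of the post-gapfill bucket to the gapfill bucket as computed in the constructor:

```java
// Worked example of the two granularities: gapfilling at 1:MINUTES produces
// one row per minute per series; a post-gapfill granularity of 5:MINUTES then
// re-aggregates every 5 gapfilled rows into one output row.
public class GranularitySketch {
    static int aggregationSize(long postGapfillBucketMs, long gapfillBucketMs) {
        return (int) (postGapfillBucketMs / gapfillBucketMs);
    }

    public static void main(String[] args) {
        long gapfillBucketMs = 60_000L;      // 1:MINUTES
        long postGapfillBucketMs = 300_000L; // 5:MINUTES
        System.out.println(aggregationSize(postGapfillBucketMs, gapfillBucketMs)); // 5
    }
}
```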
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/reduce/GapfillProcessor.java
##########
@@ -0,0 +1,473 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.core.query.reduce;
+
+import com.google.common.base.Preconditions;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import org.apache.pinot.common.request.context.ExpressionContext;
+import org.apache.pinot.common.request.context.FunctionContext;
+import org.apache.pinot.common.response.broker.BrokerResponseNative;
+import org.apache.pinot.common.response.broker.ResultTable;
+import org.apache.pinot.common.utils.DataSchema;
+import org.apache.pinot.common.utils.DataSchema.ColumnDataType;
+import org.apache.pinot.core.common.BlockValSet;
+import org.apache.pinot.core.data.table.Key;
+import org.apache.pinot.core.query.aggregation.function.AggregationFunction;
+import org.apache.pinot.core.query.aggregation.function.AggregationFunctionFactory;
+import org.apache.pinot.core.query.aggregation.function.CountAggregationFunction;
+import org.apache.pinot.core.query.aggregation.groupby.GroupByResultHolder;
+import org.apache.pinot.core.query.request.context.QueryContext;
+import org.apache.pinot.core.util.GapfillUtils;
+import org.apache.pinot.spi.data.DateTimeFormatSpec;
+import org.apache.pinot.spi.data.DateTimeGranularitySpec;
+
+
+/**
+ * Helper class to reduce and set gap fill results into the BrokerResponseNative
+ */
+@SuppressWarnings({"rawtypes", "unchecked"})
+public class GapfillProcessor {
+ private final QueryContext _queryContext;
+
+ private final int _limitForAggregatedResult;
+ private final DateTimeGranularitySpec _gapfillDateTimeGranularity;
+ private final DateTimeGranularitySpec _postGapfillDateTimeGranularity;
+ private final DateTimeFormatSpec _dateTimeFormatter;
+ private final long _startMs;
+ private final long _endMs;
+ private final long _gapfillTimeBucketSize;
+ private final long _postGapfillTimeBucketSize;
+ private final int _numOfTimeBuckets;
+ private final List<Integer> _groupByKeyIndexes;
+ private final Set<Key> _groupByKeys;
+ private final Map<Key, Object[]> _previousByGroupKey;
+ private final Map<String, ExpressionContext> _fillExpressions;
+ private final List<ExpressionContext> _timeSeries;
+ private final GapfillUtils.GapfillType _gapfillType;
+ private int _limitForGapfilledResult;
+ private boolean[] _isGroupBySelections;
+ private final int _timeBucketColumnIndex;
+ private int[] _sourceColumnIndexForResultSchema = null;
+ private final int _aggregationSize;
+
+ GapfillProcessor(QueryContext queryContext, GapfillUtils.GapfillType gapfillType) {
+ _queryContext = queryContext;
+ _gapfillType = gapfillType;
+ _limitForAggregatedResult = queryContext.getLimit();
+ if (_gapfillType == GapfillUtils.GapfillType.AGGREGATE_GAP_FILL
+ || _gapfillType == GapfillUtils.GapfillType.GAP_FILL) {
+ _limitForGapfilledResult = queryContext.getLimit();
+ } else {
+ _limitForGapfilledResult = queryContext.getSubquery().getLimit();
+ }
+
+ ExpressionContext gapFillSelection = GapfillUtils.getGapfillExpressionContext(queryContext, _gapfillType);
+ _timeBucketColumnIndex = GapfillUtils.findTimeBucketColumnIndex(queryContext, _gapfillType);
+
+ List<ExpressionContext> args = gapFillSelection.getFunction().getArguments();
+
+ _dateTimeFormatter = new DateTimeFormatSpec(args.get(1).getLiteral());
+ _gapfillDateTimeGranularity = new DateTimeGranularitySpec(args.get(4).getLiteral());
+ _postGapfillDateTimeGranularity = new DateTimeGranularitySpec(args.get(5).getLiteral());
+ String start = args.get(2).getLiteral();
+ _startMs = truncate(_dateTimeFormatter.fromFormatToMillis(start));
+ String end = args.get(3).getLiteral();
+ _endMs = truncate(_dateTimeFormatter.fromFormatToMillis(end));
+ _gapfillTimeBucketSize = _gapfillDateTimeGranularity.granularityToMillis();
+ _postGapfillTimeBucketSize = _postGapfillDateTimeGranularity.granularityToMillis();
+ _numOfTimeBuckets = (int) ((_endMs - _startMs) / _gapfillTimeBucketSize);
+
+ _fillExpressions = GapfillUtils.getFillExpressions(gapFillSelection);
+
+ _aggregationSize = (int) (_postGapfillTimeBucketSize / _gapfillTimeBucketSize);
+
+ _previousByGroupKey = new HashMap<>();
+ _groupByKeyIndexes = new ArrayList<>();
+ _groupByKeys = new HashSet<>();
+
+ ExpressionContext timeseriesOn = GapfillUtils.getTimeSeriesOnExpressionContext(gapFillSelection);
+ _timeSeries = timeseriesOn.getFunction().getArguments();
+ }
+
+ private int findGapfillBucketIndex(long time) {
+ return (int) ((time - _startMs) / _gapfillTimeBucketSize);
+ }
+
+ private void replaceColumnNameWithAlias(DataSchema dataSchema) {
+ QueryContext queryContext;
+ if (_gapfillType == GapfillUtils.GapfillType.AGGREGATE_GAP_FILL_AGGREGATE) {
+ queryContext = _queryContext.getSubquery().getSubquery();
+ } else if (_gapfillType == GapfillUtils.GapfillType.GAP_FILL) {
+ queryContext = _queryContext;
+ } else {
+ queryContext = _queryContext.getSubquery();
+ }
+ List<String> aliasList = queryContext.getAliasList();
+ Map<String, String> columnNameToAliasMap = new HashMap<>();
+ for (int i = 0; i < aliasList.size(); i++) {
+ if (aliasList.get(i) != null) {
+ ExpressionContext selection = queryContext.getSelectExpressions().get(i);
+ if (GapfillUtils.isGapfill(selection)) {
+ selection = selection.getFunction().getArguments().get(0);
+ }
+ columnNameToAliasMap.put(selection.toString(), aliasList.get(i));
+ }
+ }
+ for (int i = 0; i < dataSchema.getColumnNames().length; i++) {
+ if (columnNameToAliasMap.containsKey(dataSchema.getColumnNames()[i])) {
+ dataSchema.getColumnNames()[i] = columnNameToAliasMap.get(dataSchema.getColumnNames()[i]);
+ }
+ }
+ }
+
+ /**
+ * Three things happen here:
+ * 1. Sort the result sets from all Pinot servers based on timestamp.
+ * 2. Gap-fill the missing data per entity for each time bucket.
+ * 3. Aggregate the dataset per time bucket.
+ */
+ public void process(BrokerResponseNative brokerResponseNative) {
+ DataSchema dataSchema = brokerResponseNative.getResultTable().getDataSchema();
+ DataSchema resultTableSchema = getResultTableDataSchema(dataSchema);
+ if (brokerResponseNative.getResultTable().getRows().isEmpty()) {
+ brokerResponseNative.setResultTable(new ResultTable(resultTableSchema, Collections.emptyList()));
+ return;
+ }
+
+ String[] columns = dataSchema.getColumnNames();
+
+ Map<String, Integer> indexes = new HashMap<>();
+ for (int i = 0; i < columns.length; i++) {
+ indexes.put(columns[i], i);
+ }
+
+ _isGroupBySelections = new boolean[dataSchema.getColumnDataTypes().length];
+
+ // The first argument of timeSeries is the time column. The remaining ones define the entity.
+ for (ExpressionContext entityColumn : _timeSeries) {
+ int index = indexes.get(entityColumn.getIdentifier());
+ _isGroupBySelections[index] = true;
+ }
+
+ for (int i = 0; i < _isGroupBySelections.length; i++) {
+ if (_isGroupBySelections[i]) {
+ _groupByKeyIndexes.add(i);
+ }
+ }
+
+ List<Object[]>[] timeBucketedRawRows = putRawRowsIntoTimeBucket(brokerResponseNative.getResultTable().getRows());
+
+ replaceColumnNameWithAlias(dataSchema);
+
+ if (_queryContext.getAggregationFunctions() == null) {
+
+ Map<String, Integer> sourceColumnsIndexes = new HashMap<>();
+ for (int i = 0; i < dataSchema.getColumnNames().length; i++) {
+ sourceColumnsIndexes.put(dataSchema.getColumnName(i), i);
+ }
+ _sourceColumnIndexForResultSchema = new int[resultTableSchema.getColumnNames().length];
+ for (int i = 0; i < _sourceColumnIndexForResultSchema.length; i++) {
+ _sourceColumnIndexForResultSchema[i] = sourceColumnsIndexes.get(resultTableSchema.getColumnName(i));
+ }
+ }
+
+ List<Object[]> resultRows = gapFillAndAggregate(timeBucketedRawRows, resultTableSchema, dataSchema);
+ brokerResponseNative.setResultTable(new ResultTable(resultTableSchema, resultRows));
+ }
+
+ /**
+ * Constructs the DataSchema for the ResultTable.
+ */
+ private DataSchema getResultTableDataSchema(DataSchema dataSchema) {
+ if (_gapfillType == GapfillUtils.GapfillType.GAP_FILL) {
+ return dataSchema;
+ }
+
+ int numOfColumns = _queryContext.getSelectExpressions().size();
+ String[] columnNames = new String[numOfColumns];
+ ColumnDataType[] columnDataTypes = new ColumnDataType[numOfColumns];
+ for (int i = 0; i < numOfColumns; i++) {
+ ExpressionContext expressionContext = _queryContext.getSelectExpressions().get(i);
+ if (GapfillUtils.isGapfill(expressionContext)) {
+ expressionContext = expressionContext.getFunction().getArguments().get(0);
+ }
+ if (expressionContext.getType() != ExpressionContext.Type.FUNCTION) {
+ columnNames[i] = expressionContext.getIdentifier();
+ columnDataTypes[i] = ColumnDataType.STRING;
+ } else {
+ FunctionContext functionContext = expressionContext.getFunction();
+ AggregationFunction aggregationFunction =
+ AggregationFunctionFactory.getAggregationFunction(functionContext, _queryContext);
+ columnDataTypes[i] = aggregationFunction.getFinalResultColumnType();
+ columnNames[i] = functionContext.toString();
+ }
+ }
+ return new DataSchema(columnNames, columnDataTypes);
+ }
+
+ private Key constructGroupKeys(Object[] row) {
+ Object[] groupKeys = new Object[_groupByKeyIndexes.size()];
+ for (int i = 0; i < _groupByKeyIndexes.size(); i++) {
+ groupKeys[i] = row[_groupByKeyIndexes.get(i)];
+ }
+ return new Key(groupKeys);
+ }
+
+ private long truncate(long epoch) {
+ int sz = _gapfillDateTimeGranularity.getSize();
+ return epoch / sz * sz;
+ }
+
+ private List<Object[]> gapFillAndAggregate(List<Object[]>[] timeBucketedRawRows,
+ DataSchema dataSchemaForAggregatedResult, DataSchema dataSchema) {
+ List<Object[]> result = new ArrayList<>();
+
+ GapfillFilterHandler postGapfillFilterHandler = null;
+ if (_queryContext.getSubquery() != null && _queryContext.getFilter() != null) {
+ postGapfillFilterHandler = new GapfillFilterHandler(_queryContext.getFilter(), dataSchema);
+ }
+ GapfillFilterHandler postAggregateHavingFilterHandler = null;
+ if (_queryContext.getHavingFilter() != null) {
+ postAggregateHavingFilterHandler =
+ new GapfillFilterHandler(_queryContext.getHavingFilter(), dataSchemaForAggregatedResult);
+ }
+ long start = _startMs;
+ ColumnDataType[] resultColumnDataTypes = dataSchema.getColumnDataTypes();
+ List<Object[]> bucketedResult = new ArrayList<>();
+ for (long time = _startMs; time < _endMs; time += _gapfillTimeBucketSize) {
+ int index = findGapfillBucketIndex(time);
+ gapfill(time, bucketedResult, timeBucketedRawRows[index], dataSchema, postGapfillFilterHandler);
+ if (_queryContext.getAggregationFunctions() == null) {
+ for (Object[] row : bucketedResult) {
+ Object[] resultRow = new Object[_sourceColumnIndexForResultSchema.length];
+ for (int i = 0; i < _sourceColumnIndexForResultSchema.length; i++) {
+ resultRow[i] = row[_sourceColumnIndexForResultSchema[i]];
+ }
+ result.add(resultRow);
+ }
+ bucketedResult.clear();
+ } else if (index % _aggregationSize == _aggregationSize - 1) {
+ if (bucketedResult.size() > 0) {
+ Object timeCol;
+ if (resultColumnDataTypes[_timeBucketColumnIndex] == ColumnDataType.LONG) {
+ timeCol = Long.valueOf(_dateTimeFormatter.fromMillisToFormat(start));
+ } else {
+ timeCol = _dateTimeFormatter.fromMillisToFormat(start);
+ }
+ List<Object[]> aggregatedRows = aggregateGapfilledData(timeCol, bucketedResult, dataSchema);
+ for (Object[] aggregatedRow : aggregatedRows) {
+ if (postAggregateHavingFilterHandler == null || postAggregateHavingFilterHandler.isMatch(aggregatedRow)) {
+ result.add(aggregatedRow);
+ }
+ if (result.size() >= _limitForAggregatedResult) {
+ return result;
+ }
+ }
+ bucketedResult.clear();
+ }
+ start = time + _gapfillTimeBucketSize;
+ }
+ }
+ return result;
+ }
+
+ private void gapfill(long bucketTime, List<Object[]> bucketedResult, List<Object[]> rawRowsForBucket,
+ DataSchema dataSchema, GapfillFilterHandler postGapfillFilterHandler) {
+ ColumnDataType[] resultColumnDataTypes = dataSchema.getColumnDataTypes();
+ int numResultColumns = resultColumnDataTypes.length;
+ Set<Key> keys = new HashSet<>(_groupByKeys);
+
+ if (rawRowsForBucket != null) {
+ for (Object[] resultRow : rawRowsForBucket) {
+ for (int i = 0; i < resultColumnDataTypes.length; i++) {
+ resultRow[i] = resultColumnDataTypes[i].format(resultRow[i]);
+ }
+
+ long timeCol = _dateTimeFormatter.fromFormatToMillis(String.valueOf(resultRow[0]));
+ if (timeCol > bucketTime) {
+ break;
+ }
+ if (timeCol == bucketTime) {
+ if (postGapfillFilterHandler == null || postGapfillFilterHandler.isMatch(resultRow)) {
+ if (bucketedResult.size() >= _limitForGapfilledResult) {
+ _limitForGapfilledResult = 0;
+ break;
+ } else {
+ bucketedResult.add(resultRow);
+ }
+ }
+ Key key = constructGroupKeys(resultRow);
+ keys.remove(key);
+ _previousByGroupKey.put(key, resultRow);
+ }
+ }
+ }
+
+ for (Key key : keys) {
+ Object[] gapfillRow = new Object[numResultColumns];
+ int keyIndex = 0;
+ if (resultColumnDataTypes[_timeBucketColumnIndex] == ColumnDataType.LONG) {
+ gapfillRow[0] = Long.valueOf(_dateTimeFormatter.fromMillisToFormat(bucketTime));
+ } else {
+ gapfillRow[0] = _dateTimeFormatter.fromMillisToFormat(bucketTime);
+ }
+ for (int i = 1; i < _isGroupBySelections.length; i++) {
+ if (_isGroupBySelections[i]) {
+ gapfillRow[i] = key.getValues()[keyIndex++];
+ } else {
+ gapfillRow[i] = getFillValue(i, dataSchema.getColumnName(i), key, resultColumnDataTypes[i]);
+ }
+ }
+
+ if (postGapfillFilterHandler == null || postGapfillFilterHandler.isMatch(gapfillRow)) {
+ if (bucketedResult.size() >= _limitForGapfilledResult) {
+ break;
+ } else {
+ bucketedResult.add(gapfillRow);
+ }
+ }
+ }
+ if (_limitForGapfilledResult > _groupByKeys.size()) {
+ _limitForGapfilledResult -= _groupByKeys.size();
+ } else {
+ _limitForGapfilledResult = 0;
+ }
+ }
+
+ private List<Object[]> aggregateGapfilledData(Object timeCol, List<Object[]> bucketedRows, DataSchema dataSchema) {
+ List<ExpressionContext> groupbyExpressions = _queryContext.getGroupByExpressions();
+ Preconditions.checkArgument(groupbyExpressions != null, "No GroupBy Clause.");
+ Map<String, Integer> indexes = new HashMap<>();
+ for (int i = 0; i < dataSchema.getColumnNames().length; i++) {
+ indexes.put(dataSchema.getColumnName(i), i);
+ }
+
+ for (Object[] bucketedRow : bucketedRows) {
+ bucketedRow[_timeBucketColumnIndex] = timeCol;
+ }
+
+ Map<List<Object>, Integer> groupKeyIndexes = new HashMap<>();
+ int[] groupKeyArray = new int[bucketedRows.size()];
+ List<Object[]> aggregatedResult = new ArrayList<>();
+ for (int i = 0; i < bucketedRows.size(); i++) {
+ Object[] bucketedRow = bucketedRows.get(i);
+ List<Object> groupKey = new ArrayList<>(groupbyExpressions.size());
+ for (ExpressionContext groupbyExpression : groupbyExpressions) {
+ int columnIndex = indexes.get(groupbyExpression.toString());
+ groupKey.add(bucketedRow[columnIndex]);
+ }
+ if (groupKeyIndexes.containsKey(groupKey)) {
+ groupKeyArray[i] = groupKeyIndexes.get(groupKey);
+ } else {
+ // Create a new group-by result row and fill in the group-by key
+ groupKeyArray[i] = groupKeyIndexes.size();
+ groupKeyIndexes.put(groupKey, groupKeyIndexes.size());
+ Object[] row = new Object[_queryContext.getSelectExpressions().size()];
+ for (int j = 0; j < _queryContext.getSelectExpressions().size(); j++) {
+ ExpressionContext expressionContext = _queryContext.getSelectExpressions().get(j);
+ if (expressionContext.getType() != ExpressionContext.Type.FUNCTION) {
+ row[j] = bucketedRow[indexes.get(expressionContext.toString())];
+ }
+ }
+ aggregatedResult.add(row);
+ }
+ }
+
+ Map<ExpressionContext, BlockValSet> blockValSetMap = new HashMap<>();
+ for (int i = 1; i < dataSchema.getColumnNames().length; i++) {
+ blockValSetMap.put(ExpressionContext.forIdentifier(dataSchema.getColumnName(i)),
+ new RowBasedBlockValSet(dataSchema.getColumnDataType(i), bucketedRows, i));
+ }
+
+ for (int i = 0; i < _queryContext.getSelectExpressions().size(); i++) {
Review comment:
You should be able to use the `_queryContext.getAggregationFunctions()`
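To illustrate the suggestion: the idea is to build the aggregation function objects once (as the query context already does) and reuse them for every time bucket, rather than re-deriving them from the select expressions on each pass. A minimal self-contained sketch of that reuse pattern — the types here are illustrative stand-ins, not Pinot's `AggregationFunction` API:

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.ToIntFunction;

public class AggregationReuse {
    // Built once up front, analogous to _queryContext.getAggregationFunctions();
    // each bucket then reuses these instead of reconstructing them.
    static final List<ToIntFunction<int[]>> AGGREGATIONS = List.of(
        values -> Arrays.stream(values).sum(),            // stands in for SUM
        values -> Arrays.stream(values).max().orElse(0)); // stands in for MAX

    static int[] aggregateBucket(int[] bucketValues) {
        int[] results = new int[AGGREGATIONS.size()];
        for (int i = 0; i < AGGREGATIONS.size(); i++) {
            results[i] = AGGREGATIONS.get(i).applyAsInt(bucketValues);
        }
        return results;
    }

    public static void main(String[] args) {
        int[] results = aggregateBucket(new int[]{1, 2, 3});
        System.out.println(results[0] + " " + results[1]); // prints "6 3"
    }
}
```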
##########
File path: pinot-core/src/test/java/org/apache/pinot/queries/GapfillQueriesTest.java
##########
@@ -0,0 +1,3701 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.queries;
+
+import java.io.File;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+import org.apache.commons.io.FileUtils;
+import org.apache.pinot.common.response.broker.BrokerResponseNative;
+import org.apache.pinot.common.response.broker.ResultTable;
+import org.apache.pinot.segment.local.indexsegment.immutable.ImmutableSegmentLoader;
+import org.apache.pinot.segment.local.segment.creator.impl.SegmentIndexCreationDriverImpl;
+import org.apache.pinot.segment.local.segment.readers.GenericRowRecordReader;
+import org.apache.pinot.segment.spi.ImmutableSegment;
+import org.apache.pinot.segment.spi.IndexSegment;
+import org.apache.pinot.segment.spi.creator.SegmentGeneratorConfig;
+import org.apache.pinot.spi.config.table.TableConfig;
+import org.apache.pinot.spi.config.table.TableType;
+import org.apache.pinot.spi.data.DateTimeFormatSpec;
+import org.apache.pinot.spi.data.DateTimeGranularitySpec;
+import org.apache.pinot.spi.data.FieldSpec.DataType;
+import org.apache.pinot.spi.data.Schema;
+import org.apache.pinot.spi.data.readers.GenericRow;
+import org.apache.pinot.spi.utils.ReadMode;
+import org.apache.pinot.spi.utils.builder.TableConfigBuilder;
+import org.testng.Assert;
+import org.testng.annotations.AfterClass;
+import org.testng.annotations.BeforeClass;
+import org.testng.annotations.Test;
+
+
+/**
+ * Queries test for Gapfill queries.
+ */
+// TODO: Item 1. table alias for subquery in next PR
+// TODO: Item 2. Deprecate PostAggregateGapfill implementation in next PR
+@SuppressWarnings("rawtypes")
+public class GapfillQueriesTest extends BaseQueriesTest {
+ private static final File INDEX_DIR = new File(FileUtils.getTempDirectory(), "PostAggregationGapfillQueriesTest");
+ private static final String RAW_TABLE_NAME = "parkingData";
+ private static final String SEGMENT_NAME = "testSegment";
+
+ private static final int NUM_LOTS = 4;
+
+ private static final String IS_OCCUPIED_COLUMN = "isOccupied";
+ private static final String LEVEL_ID_COLUMN = "levelId";
+ private static final String LOT_ID_COLUMN = "lotId";
+ private static final String EVENT_TIME_COLUMN = "eventTime";
+ private static final Schema SCHEMA =
+ new Schema.SchemaBuilder().addSingleValueDimension(IS_OCCUPIED_COLUMN, DataType.INT)
+ .addSingleValueDimension(LOT_ID_COLUMN, DataType.STRING)
+ .addSingleValueDimension(LEVEL_ID_COLUMN, DataType.STRING)
+ .addSingleValueDimension(EVENT_TIME_COLUMN, DataType.LONG)
+ .setPrimaryKeyColumns(Arrays.asList(LOT_ID_COLUMN, EVENT_TIME_COLUMN)).build();
+ private static final TableConfig TABLE_CONFIG =
+ new TableConfigBuilder(TableType.OFFLINE).setTableName(RAW_TABLE_NAME).build();
+
+ private IndexSegment _indexSegment;
+ private List<IndexSegment> _indexSegments;
+
+ @Override
+ protected String getFilter() {
+ // NOTE: Use a match all filter to switch between DictionaryBasedAggregationOperator and AggregationOperator
+ return " WHERE eventTime >= 0";
+ }
+
+ @Override
+ protected IndexSegment getIndexSegment() {
+ return _indexSegment;
+ }
+
+ @Override
+ protected List<IndexSegment> getIndexSegments() {
+ return _indexSegments;
+ }
+
+ GenericRow createRow(String time, int levelId, int lotId, boolean isOccupied) {
+ DateTimeFormatSpec dateTimeFormatter =
+ new DateTimeFormatSpec("1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS");
+ GenericRow parkingRow = new GenericRow();
+ parkingRow.putValue(EVENT_TIME_COLUMN, dateTimeFormatter.fromFormatToMillis(time));
+ parkingRow.putValue(LEVEL_ID_COLUMN, "Level_" + levelId);
+ parkingRow.putValue(LOT_ID_COLUMN, "LotId_" + lotId);
+ parkingRow.putValue(IS_OCCUPIED_COLUMN, isOccupied);
+ return parkingRow;
+ }
+
+ @BeforeClass
+ public void setUp()
+ throws Exception {
+ FileUtils.deleteDirectory(INDEX_DIR);
+
+ List<GenericRow> records = new ArrayList<>(NUM_LOTS * 2);
+ records.add(createRow("2021-11-07 04:11:00.000", 0, 0, true));
+ records.add(createRow("2021-11-07 04:21:00.000", 0, 0, true));
+ records.add(createRow("2021-11-07 04:31:00.000", 1, 0, true));
+ records.add(createRow("2021-11-07 05:17:00.000", 0, 1, true));
+ records.add(createRow("2021-11-07 05:37:00.000", 0, 1, true));
+ records.add(createRow("2021-11-07 05:47:00.000", 1, 2, true));
+ records.add(createRow("2021-11-07 06:25:00.000", 0, 2, true));
+ records.add(createRow("2021-11-07 06:35:00.000", 0, 2, true));
+ records.add(createRow("2021-11-07 06:36:00.000", 1, 1, true));
+ records.add(createRow("2021-11-07 07:44:00.000", 0, 3, true));
+ records.add(createRow("2021-11-07 07:46:00.000", 1, 3, true));
+ records.add(createRow("2021-11-07 07:54:00.000", 1, 3, true));
+ records.add(createRow("2021-11-07 08:44:00.000", 0, 2, false));
+ records.add(createRow("2021-11-07 08:44:00.000", 1, 2, false));
+ records.add(createRow("2021-11-07 09:31:00.000", 0, 3, false));
+ records.add(createRow("2021-11-07 09:31:00.000", 1, 3, false));
+ records.add(createRow("2021-11-07 10:17:00.000", 0, 0, false));
+ records.add(createRow("2021-11-07 10:33:00.000", 0, 0, false));
+ records.add(createRow("2021-11-07 10:33:00.000", 1, 0, false));
+ records.add(createRow("2021-11-07 11:54:00.000", 0, 1, false));
+ records.add(createRow("2021-11-07 11:57:00.000", 1, 1, false));
+
+ SegmentGeneratorConfig segmentGeneratorConfig = new SegmentGeneratorConfig(TABLE_CONFIG, SCHEMA);
+ segmentGeneratorConfig.setTableName(RAW_TABLE_NAME);
+ segmentGeneratorConfig.setSegmentName(SEGMENT_NAME);
+ segmentGeneratorConfig.setOutDir(INDEX_DIR.getPath());
+
+ SegmentIndexCreationDriverImpl driver = new SegmentIndexCreationDriverImpl();
+ driver.init(segmentGeneratorConfig, new GenericRowRecordReader(records));
+ driver.build();
+
+ ImmutableSegment immutableSegment = ImmutableSegmentLoader.load(new File(INDEX_DIR, SEGMENT_NAME), ReadMode.mmap);
+ _indexSegment = immutableSegment;
+ _indexSegments = Arrays.asList(immutableSegment);
+ }
+
+ @Test
+ public void datetimeconvertGapfillTestSelectSelect() {
+ DateTimeFormatSpec dateTimeFormatter =
+ new DateTimeFormatSpec("1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT GapFill(DATETIMECONVERT(eventTime, '1:MILLISECONDS:EPOCH', "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', '1:HOURS'), "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', "
+ + " '2021-11-07 4:00:00.000', '2021-11-07 12:00:00.000', '1:HOURS', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " levelId, lotId, isOccupied "
+ + "FROM parkingData "
+ + "WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + "LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ int[][] expectedOccupiedSlotsCounts1 =
+ new int[][]{{6, 6}, {8, 4}, {10, 2}, {12, 0}, {6, 4}, {4, 6}, {2, 10}, {0, 10}};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ start = dateTimeFormatter.fromFormatToMillis("2021-11-07 04:00:00.000");
+ int index = 0;
+ for (int i = 0; i < expectedOccupiedSlotsCounts1.length; i++) {
+ int ones = expectedOccupiedSlotsCounts1[i][0];
+ int zeros = expectedOccupiedSlotsCounts1[i][1];
+ int total = ones + zeros;
+ for (int k = 0; k < total; k++) {
+ String firstTimeCol = (String) gapFillRows1.get(index)[0];
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if (gapFillRows1.get(index)[3].equals(1)) {
+ ones--;
+ } else {
+ zeros--;
+ }
+ index++;
+ }
+ Assert.assertEquals(ones, 0);
+ Assert.assertEquals(zeros, 0);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ Assert.assertEquals(gapFillRows1.size(), index);
+
+ String gapfillQuery2 = "SELECT "
+ + "time_col, levelId, lotId, isOccupied, time_col "
+ + "FROM ("
+ + " SELECT GapFill(DATETIMECONVERT(eventTime, '1:MILLISECONDS:EPOCH', "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', '1:HOURS'), "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', "
+ + " '2021-11-07 4:00:00.000', '2021-11-07 12:00:00.000', '1:HOURS', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
+ + " WHERE isOccupied = 1 "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ int[] expectedOccupiedSlotsCounts2 = new int[]{6, 8, 10, 12, 6, 4, 2, 0};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ start = dateTimeFormatter.fromFormatToMillis("2021-11-07 04:00:00.000");
+ index = 0;
+ for (int i = 0; i < expectedOccupiedSlotsCounts2.length; i++) {
+ for (int k = 0; k < expectedOccupiedSlotsCounts2[i]; k++) {
+ String firstTimeCol = (String) gapFillRows2.get(index)[0];
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(gapFillRows2.get(index)[3], 1);
+ index++;
+ }
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ Assert.assertEquals(gapFillRows2.size(), index);
+ }
+
+ @Test
+ public void datetimeconvertGapfillTestAggregateSelect() {
+ DateTimeFormatSpec dateTimeFormatter =
+ new DateTimeFormatSpec("1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + "GapFill(time_col, "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', "
+ + " '2021-11-07 4:00:00.000', '2021-11-07 12:00:00.000', '1:HOURS', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)), levelId, lotId, occupied "
+ + "FROM ("
+ + " SELECT DATETIMECONVERT(eventTime, '1:MILLISECONDS:EPOCH', "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', '1:HOURS') AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + ") "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ int[][] expectedOccupiedSlotsCounts1 = new int[][]{{2, 6}, {4, 4}, {6, 2}, {8, 0}, {6, 2}, {4, 4}, {2, 6}, {0, 8}};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ start = dateTimeFormatter.fromFormatToMillis("2021-11-07 04:00:00.000");
+ int index = 0;
+ for (int i = 0; i < expectedOccupiedSlotsCounts1.length; i++) {
+ int ones = expectedOccupiedSlotsCounts1[i][0];
+ int zeros = expectedOccupiedSlotsCounts1[i][1];
+ int total = ones + zeros;
+ for (int k = 0; k < total; k++) {
+ String firstTimeCol = (String) gapFillRows1.get(index)[0];
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if (gapFillRows1.get(index)[3].equals(1)) {
+ ones--;
+ } else {
+ zeros--;
+ }
+ index++;
+ }
+ Assert.assertEquals(ones, 0);
+ Assert.assertEquals(zeros, 0);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ Assert.assertEquals(gapFillRows1.size(), index);
+
+ String gapfillQuery2 = "SELECT "
+ + "GapFill(time_col, "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', "
+ + " '2021-11-07 4:00:00.000', '2021-11-07 12:00:00.000', '1:HOURS', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)), levelId, lotId, occupied "
+ + "FROM ("
+ + " SELECT DATETIMECONVERT(eventTime, '1:MILLISECONDS:EPOCH', "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', '1:HOURS') AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + ") "
+ + " WHERE occupied = 1 "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ int[] expectedOccupiedSlotsCounts2 = new int[]{2, 4, 6, 8, 6, 4, 2};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ start = dateTimeFormatter.fromFormatToMillis("2021-11-07 04:00:00.000");
+ index = 0;
+ for (int i = 0; i < expectedOccupiedSlotsCounts2.length; i++) {
+ for (int k = 0; k < expectedOccupiedSlotsCounts2[i]; k++) {
+ String firstTimeCol = (String) gapFillRows2.get(index)[0];
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(gapFillRows2.get(index)[3], 1);
+ index++;
+ }
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ Assert.assertEquals(gapFillRows2.size(), index);
+ }
+
+ @Test
+ public void datetimeconvertGapfillTestGapfillAggregate() {
+ DateTimeFormatSpec dateTimeFormatter =
+ new DateTimeFormatSpec("1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String dataTimeConvertQuery = "SELECT "
+ + "DATETIMECONVERT(eventTime, '1:MILLISECONDS:EPOCH', "
+ + "'1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', '1:HOURS') AS time_col, "
+ + "SUM(isOccupied) "
+ + "FROM parkingData "
+ + "WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + "GROUP BY 1 "
+ + "ORDER BY 1 "
+ + "LIMIT 200";
+
+ BrokerResponseNative dateTimeConvertBrokerResponse = getBrokerResponseForSqlQuery(dataTimeConvertQuery);
+
+ ResultTable dateTimeConvertResultTable = dateTimeConvertBrokerResponse.getResultTable();
+ Assert.assertEquals(dateTimeConvertResultTable.getRows().size(), 8);
+
+ String gapfillQuery1 = "SELECT "
+ + "time_col, SUM(isOccupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(DATETIMECONVERT(eventTime, '1:MILLISECONDS:EPOCH', "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', '1:HOURS'), "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', "
+ + " '2021-11-07 4:00:00.000', '2021-11-07 12:00:00.000', '1:HOURS', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
+ + " GROUP BY time_col "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ double[] expectedOccupiedSlotsCounts1 = new double[]{6, 8, 10, 12, 6, 4, 2, 0};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ Assert.assertEquals(gapFillRows1.size(), expectedOccupiedSlotsCounts1.length);
+ start = dateTimeFormatter.fromFormatToMillis("2021-11-07 04:00:00.000");
+ for (int i = 0; i < expectedOccupiedSlotsCounts1.length; i++) {
+ String firstTimeCol = (String) gapFillRows1.get(i)[0];
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts1[i], gapFillRows1.get(i)[1]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+
+ String gapfillQuery2 = "SELECT "
+ + "time_col, SUM(isOccupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(DATETIMECONVERT(eventTime, '1:MILLISECONDS:EPOCH', "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', '1:HOURS'), "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', "
+ + " '2021-11-07 4:00:00.000', '2021-11-07 12:00:00.000', '1:HOURS', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
+ + " WHERE isOccupied = 1 "
+ + " GROUP BY time_col "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ double[] expectedOccupiedSlotsCounts2 = new double[]{6, 8, 10, 12, 6, 4, 2};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ Assert.assertEquals(gapFillRows2.size(), expectedOccupiedSlotsCounts2.length);
+ start = dateTimeFormatter.fromFormatToMillis("2021-11-07 04:00:00.000");
+ for (int i = 0; i < expectedOccupiedSlotsCounts2.length; i++) {
+ String firstTimeCol = (String) gapFillRows2.get(i)[0];
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts2[i], gapFillRows2.get(i)[1]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ }
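[Editor's note: the tests above aggregate over gapfilled rows. As a rough standalone illustration of what FILL_PREVIOUS_VALUE followed by the outer SUM ... GROUP BY time_col computes, here is a minimal sketch; the class and method names are hypothetical, not Pinot API. Each entity's last observed value is carried forward into its missing buckets before the per-bucket sum.]

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;
import java.util.SortedMap;
import java.util.TreeMap;

public class GapFillSketch {
    // For each entity, carry the previous observed value forward into missing
    // buckets, then sum values per bucket (a simplified stand-in for Pinot's
    // FILL_PREVIOUS_VALUE plus an outer SUM ... GROUP BY time_col).
    static int[] gapFillAndSum(Map<String, SortedMap<Integer, Integer>> perEntity, int numBuckets) {
        int[] sums = new int[numBuckets];
        for (SortedMap<Integer, Integer> series : perEntity.values()) {
            Integer prev = null;
            for (int bucket = 0; bucket < numBuckets; bucket++) {
                Integer v = series.get(bucket);
                if (v == null) {
                    v = prev;          // gap: fill with the previous value
                }
                if (v != null) {
                    sums[bucket] += v; // entities with no value yet contribute nothing
                }
                prev = v;
            }
        }
        return sums;
    }

    public static void main(String[] args) {
        Map<String, SortedMap<Integer, Integer>> data = new HashMap<>();
        data.put("lot0", new TreeMap<>(Map.of(0, 1, 2, 0))); // bucket 1 missing -> filled with 1
        data.put("lot1", new TreeMap<>(Map.of(1, 1)));       // bucket 2 missing -> filled with 1
        System.out.println(Arrays.toString(gapFillAndSum(data, 3))); // [1, 2, 1]
    }
}
```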
+
+ @Test
+ public void datetimeconvertGapfillTestGapfillAggregateWithOptionalGroupBy() {
+ DateTimeFormatSpec dateTimeFormatter =
+ new DateTimeFormatSpec("1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + "time_col, levelId, SUM(isOccupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(DATETIMECONVERT(eventTime, '1:MILLISECONDS:EPOCH', "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', '1:HOURS'), "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', "
+ + " '2021-11-07 4:00:00.000', '2021-11-07 12:00:00.000', '1:HOURS', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
+ + " GROUP BY time_col, levelId "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ double[] expectedOccupiedSlotsCountsForLevel11 = new double[]{4, 5, 6, 5, 3, 2, 1, 0};
+ double[] expectedOccupiedSlotsCountsForLevel21 = new double[]{2, 3, 4, 7, 3, 2, 1, 0};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ Assert.assertEquals(gapFillRows1.size(), expectedOccupiedSlotsCountsForLevel11.length * 2);
+ start = dateTimeFormatter.fromFormatToMillis("2021-11-07 04:00:00.000");
+ for (int i = 0; i < expectedOccupiedSlotsCountsForLevel11.length * 2; i += 2) {
+ String firstTimeCol = (String) gapFillRows1.get(i)[0];
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if ("Level_0".equals(gapFillRows1.get(i)[1])) {
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel11[i / 2], gapFillRows1.get(i)[2]);
+ } else {
+ Assert.assertEquals("Level_1", gapFillRows1.get(i)[1]);
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel21[i / 2], gapFillRows1.get(i)[2]);
+ }
+ firstTimeCol = (String) gapFillRows1.get(i + 1)[0];
+ timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if ("Level_0".equals(gapFillRows1.get(i + 1)[1])) {
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel11[i / 2], gapFillRows1.get(i + 1)[2]);
+ } else {
+ Assert.assertEquals("Level_1", gapFillRows1.get(i + 1)[1]);
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel21[i / 2], gapFillRows1.get(i + 1)[2]);
+ }
+ start += dateTimeGranularity.granularityToMillis();
+ }
+
+ String gapfillQuery2 = "SELECT "
+ + "time_col, levelId, SUM(isOccupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(DATETIMECONVERT(eventTime, '1:MILLISECONDS:EPOCH', "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', '1:HOURS'), "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', "
+ + " '2021-11-07 4:00:00.000', '2021-11-07 12:00:00.000', '1:HOURS', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
+ + " WHERE isOccupied = 1 "
+ + " GROUP BY time_col, levelId "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ double[] expectedOccupiedSlotsCountsForLevel12 = new double[]{4, 5, 6, 5, 3, 2, 1};
+ double[] expectedOccupiedSlotsCountsForLevel22 = new double[]{2, 3, 4, 7, 3, 2, 1};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ Assert.assertEquals(gapFillRows2.size(), expectedOccupiedSlotsCountsForLevel12.length * 2);
+ start = dateTimeFormatter.fromFormatToMillis("2021-11-07 04:00:00.000");
+ for (int i = 0; i < expectedOccupiedSlotsCountsForLevel12.length * 2; i += 2) {
+ String firstTimeCol = (String) gapFillRows2.get(i)[0];
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if ("Level_0".equals(gapFillRows2.get(i)[1])) {
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel12[i / 2], gapFillRows2.get(i)[2]);
+ } else {
+ Assert.assertEquals("Level_1", gapFillRows2.get(i)[1]);
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel22[i / 2], gapFillRows2.get(i)[2]);
+ }
+ firstTimeCol = (String) gapFillRows2.get(i + 1)[0];
+ timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if ("Level_0".equals(gapFillRows2.get(i + 1)[1])) {
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel12[i / 2], gapFillRows2.get(i + 1)[2]);
+ } else {
+ Assert.assertEquals("Level_1", gapFillRows2.get(i + 1)[1]);
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel22[i / 2], gapFillRows2.get(i + 1)[2]);
+ }
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ }
+
+ @Test
+ public void datetimeconvertGapfillTestGapfillAggregateWithHavingClause() {
+ DateTimeFormatSpec dateTimeFormatter =
+ new DateTimeFormatSpec("1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + "time_col, levelId, SUM(isOccupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(DATETIMECONVERT(eventTime, '1:MILLISECONDS:EPOCH', "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', '1:HOURS'), "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', "
+ + " '2021-11-07 4:00:00.000', '2021-11-07 12:00:00.000', '1:HOURS', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
+ + " GROUP BY time_col, levelId "
+ + " HAVING occupied_slots_count > 0"
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ double[] expectedOccupiedSlotsCountsForLevel11 = new double[]{4, 5, 6, 5, 3, 2, 1};
+ double[] expectedOccupiedSlotsCountsForLevel21 = new double[]{2, 3, 4, 7, 3, 2, 1};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ Assert.assertEquals(gapFillRows1.size(), expectedOccupiedSlotsCountsForLevel11.length * 2);
+ start = dateTimeFormatter.fromFormatToMillis("2021-11-07 04:00:00.000");
+ for (int i = 0; i < expectedOccupiedSlotsCountsForLevel11.length * 2; i += 2) {
+ String firstTimeCol = (String) gapFillRows1.get(i)[0];
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if ("Level_0".equals(gapFillRows1.get(i)[1])) {
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel11[i / 2], gapFillRows1.get(i)[2]);
+ } else {
+ Assert.assertEquals("Level_1", gapFillRows1.get(i)[1]);
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel21[i / 2], gapFillRows1.get(i)[2]);
+ }
+ firstTimeCol = (String) gapFillRows1.get(i + 1)[0];
+ timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if ("Level_0".equals(gapFillRows1.get(i + 1)[1])) {
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel11[i / 2], gapFillRows1.get(i + 1)[2]);
+ } else {
+ Assert.assertEquals("Level_1", gapFillRows1.get(i + 1)[1]);
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel21[i / 2], gapFillRows1.get(i + 1)[2]);
+ }
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ }
+
+ @Test
+ public void datetimeconvertGapfillTestAggregateAggregate() {
+ DateTimeFormatSpec dateTimeFormatter =
+ new DateTimeFormatSpec("1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + "time_col, SUM(occupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(time_col, "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', "
+ + " '2021-11-07 4:00:00.000', '2021-11-07 12:00:00.000', '1:HOURS', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " occupied, lotId, levelId"
+ + " FROM ("
+ + " SELECT DATETIMECONVERT(eventTime, '1:MILLISECONDS:EPOCH', "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', '1:HOURS') AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + " ) "
+ + " LIMIT 200 "
+ + ") "
+ + " GROUP BY time_col "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ double[] expectedOccupiedSlotsCounts1 = new double[]{2, 4, 6, 8, 6, 4, 2, 0};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ Assert.assertEquals(gapFillRows1.size(), expectedOccupiedSlotsCounts1.length);
+ start = dateTimeFormatter.fromFormatToMillis("2021-11-07 04:00:00.000");
+ for (int i = 0; i < expectedOccupiedSlotsCounts1.length; i++) {
+ String firstTimeCol = (String) gapFillRows1.get(i)[0];
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts1[i], gapFillRows1.get(i)[1]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+
+ String gapfillQuery2 = "SELECT "
+ + "time_col, SUM(occupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(time_col, "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', "
+ + " '2021-11-07 4:00:00.000', '2021-11-07 12:00:00.000', '1:HOURS', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " occupied, lotId, levelId"
+ + " FROM ("
+ + " SELECT DATETIMECONVERT(eventTime, '1:MILLISECONDS:EPOCH', "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', '1:HOURS') AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + " ) "
+ + " LIMIT 200 "
+ + ") "
+ + " WHERE occupied = 1 "
+ + " GROUP BY time_col "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ double[] expectedOccupiedSlotsCounts2 = new double[]{2, 4, 6, 8, 6, 4, 2};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ Assert.assertEquals(gapFillRows2.size(), expectedOccupiedSlotsCounts2.length);
+ start = dateTimeFormatter.fromFormatToMillis("2021-11-07 04:00:00.000");
+ for (int i = 0; i < expectedOccupiedSlotsCounts2.length; i++) {
+ String firstTimeCol = (String) gapFillRows2.get(i)[0];
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts2[i], gapFillRows2.get(i)[1]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ }
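[Editor's note: the inner subquery above pre-aggregates with lastWithTime(isOccupied, eventTime, 'INT') before gapfilling. As a hedged sketch of that semantics (hypothetical names, not Pinot code): per group, keep the value carried by the record with the greatest timestamp.]

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class LastWithTimeSketch {
    record Event(String key, long time, int value) {}

    // Per key, keep the value of the event with the greatest timestamp
    // (a simplified stand-in for Pinot's lastWithTime aggregation).
    static Map<String, Integer> lastWithTime(List<Event> events) {
        Map<String, Event> latest = new HashMap<>();
        for (Event e : events) {
            latest.merge(e.key(), e, (old, cur) -> cur.time() > old.time() ? cur : old);
        }
        Map<String, Integer> out = new HashMap<>();
        latest.forEach((k, e) -> out.put(k, e.value()));
        return out;
    }

    public static void main(String[] args) {
        List<Event> events = List.of(
            new Event("lot0", 100L, 1),
            new Event("lot0", 200L, 0),  // latest event for lot0 wins -> 0
            new Event("lot1", 150L, 1)); // only event for lot1 -> 1
        System.out.println(lastWithTime(events));
    }
}
```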
+
+ @Test
+ public void datetimeconvertGapfillTestAggregateAggregateWithOptionalGroupBy() {
+ DateTimeFormatSpec dateTimeFormatter =
+ new DateTimeFormatSpec("1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + "time_col, levelId, SUM(occupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(time_col, "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', "
+ + " '2021-11-07 4:00:00.000', '2021-11-07 12:00:00.000', '1:HOURS', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " occupied, lotId, levelId"
+ + " FROM ("
+ + " SELECT DATETIMECONVERT(eventTime, '1:MILLISECONDS:EPOCH', "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', '1:HOURS') AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + " ) "
+ + " LIMIT 200 "
+ + ") "
+ + " GROUP BY time_col, levelId "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ double[] expectedOccupiedSlotsCounts1 = new double[]{1, 2, 3, 4, 3, 2, 1, 0};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ Assert.assertEquals(gapFillRows1.size(), expectedOccupiedSlotsCounts1.length * 2);
+ start = dateTimeFormatter.fromFormatToMillis("2021-11-07 04:00:00.000");
+ for (int i = 0; i < expectedOccupiedSlotsCounts1.length * 2; i += 2) {
+ String firstTimeCol = (String) gapFillRows1.get(i)[0];
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts1[i / 2], gapFillRows1.get(i)[2]);
+ firstTimeCol = (String) gapFillRows1.get(i + 1)[0];
+ timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts1[i / 2], gapFillRows1.get(i + 1)[2]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+
+ String gapfillQuery2 = "SELECT "
+ + "time_col, levelId, SUM(occupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(time_col, "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', "
+ + " '2021-11-07 4:00:00.000', '2021-11-07 12:00:00.000', '1:HOURS', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " occupied, lotId, levelId"
+ + " FROM ("
+ + " SELECT DATETIMECONVERT(eventTime, '1:MILLISECONDS:EPOCH', "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', '1:HOURS') AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + " ) "
+ + " LIMIT 200 "
+ + ") "
+ + " WHERE occupied = 1 "
+ + " GROUP BY time_col, levelId "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ double[] expectedOccupiedSlotsCounts2 = new double[]{1, 2, 3, 4, 3, 2, 1};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ Assert.assertEquals(gapFillRows2.size(), expectedOccupiedSlotsCounts2.length * 2);
+ start = dateTimeFormatter.fromFormatToMillis("2021-11-07 04:00:00.000");
+ for (int i = 0; i < expectedOccupiedSlotsCounts2.length * 2; i += 2) {
+ String firstTimeCol = (String) gapFillRows2.get(i)[0];
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts2[i / 2], gapFillRows2.get(i)[2]);
+ firstTimeCol = (String) gapFillRows2.get(i + 1)[0];
+ timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts2[i / 2], gapFillRows2.get(i + 1)[2]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ }
+
+ @Test
+ public void datetimeconvertGapfillTestAggregateAggregateWithHavingClause() {
+ String gapfillQuery1 = "SELECT "
+ + "time_col, levelId, SUM(occupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(time_col, "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', "
+ + " '2021-11-07 4:00:00.000', '2021-11-07 12:00:00.000', '1:HOURS', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)),"
+ + " occupied, lotId, levelId"
+ + " FROM ("
+ + " SELECT DATETIMECONVERT(eventTime, '1:MILLISECONDS:EPOCH', "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', '1:HOURS') AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + " ) "
+ + " LIMIT 200 "
+ + ") "
+ + " GROUP BY time_col, levelId "
+ + " HAVING occupied_slots_count > 0"
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ double[] expectedOccupiedSlotsCounts1 = new double[]{1, 2, 3, 4, 3, 2, 1};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ Assert.assertEquals(gapFillRows1.size(), expectedOccupiedSlotsCounts1.length * 2);
+ DateTimeFormatSpec dateTimeFormatter =
+ new DateTimeFormatSpec("1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+
+ long start = dateTimeFormatter.fromFormatToMillis("2021-11-07 04:00:00.000");
+ for (int i = 0; i < expectedOccupiedSlotsCounts1.length * 2; i += 2) {
+ String firstTimeCol = (String) gapFillRows1.get(i)[0];
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts1[i / 2], gapFillRows1.get(i)[2]);
+ firstTimeCol = (String) gapFillRows1.get(i + 1)[0];
+ timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts1[i / 2], gapFillRows1.get(i + 1)[2]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ }
+
+ @Test
+ public void toEpochHoursGapfillTestSelectSelect() {
+ DateTimeFormatSpec dateTimeFormatter = new DateTimeFormatSpec("1:HOURS:EPOCH");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + " GapFill(ToEpochHours(eventTime), '1:HOURS:EPOCH', "
+ + " '454516', '454524', '1:HOURS', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId "
+ + "FROM parkingData "
+ + "WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + "LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ int[][] expectedOccupiedSlotsCounts1 =
+ new int[][]{{6, 6}, {8, 4}, {10, 2}, {12, 0}, {6, 4}, {4, 6}, {2, 10}, {0, 10}};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ start = dateTimeFormatter.fromFormatToMillis("454516");
+ int index = 0;
+ for (int i = 0; i < expectedOccupiedSlotsCounts1.length; i++) {
+ int ones = expectedOccupiedSlotsCounts1[i][0];
+ int zeros = expectedOccupiedSlotsCounts1[i][1];
+ int total = ones + zeros;
+ for (int k = 0; k < total; k++) {
+ String firstTimeCol = ((Long) (gapFillRows1.get(index)[0])).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if (gapFillRows1.get(index)[1].equals(1)) {
+ ones--;
+ } else {
+ zeros--;
+ }
+ index++;
+ }
+ Assert.assertEquals(ones, 0);
+ Assert.assertEquals(zeros, 0);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ Assert.assertEquals(gapFillRows1.size(), index);
+
+ String gapfillQuery2 = "SELECT "
+ + "time_col, levelId, lotId, isOccupied, time_col "
+ + "FROM ("
+ + " SELECT GapFill(ToEpochHours(eventTime), '1:HOURS:EPOCH', "
+ + " '454516', '454524', '1:HOURS', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
+ + " WHERE isOccupied = 1 "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ int[] expectedOccupiedSlotsCounts2 = new int[]{6, 8, 10, 12, 6, 4, 2, 0};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ start = dateTimeFormatter.fromFormatToMillis("454516");
+ index = 0;
+ for (int i = 0; i < expectedOccupiedSlotsCounts2.length; i++) {
+ for (int k = 0; k < expectedOccupiedSlotsCounts2[i]; k++) {
+ String firstTimeCol = ((Long) gapFillRows2.get(index)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(gapFillRows2.get(index)[3], 1);
+ index++;
+ }
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ Assert.assertEquals(gapFillRows2.size(), index);
+ }
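[Editor's note: the epoch-hours literals '454516' and '454524' in the GapFill call denote the same window as the millisecond bounds in the WHERE clause (1636257600000 to 1636286400000). A quick arithmetic check:]

```java
public class EpochHoursCheck {
    public static void main(String[] args) {
        long millisPerHour = 3_600_000L;
        // GapFill window bounds expressed as epoch hours in the query
        long startMillis = 454516L * millisPerHour;
        long endMillis = 454524L * millisPerHour;
        System.out.println(startMillis); // 1636257600000, matches eventTime >= bound
        System.out.println(endMillis);   // 1636286400000, matches eventTime <= bound
    }
}
```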
+
+ @Test
+ public void toEpochHoursGapfillTestAggregateSelect() {
+ DateTimeFormatSpec dateTimeFormatter = new DateTimeFormatSpec("1:HOURS:EPOCH");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + " GapFill(time_col, '1:HOURS:EPOCH', "
+ + " '454516', '454524', '1:HOURS', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)),"
+ + " levelId, lotId, occupied "
+ + "FROM ("
+ + " SELECT ToEpochHours(eventTime) AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + ") "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ int[][] expectedOccupiedSlotsCounts1 = new int[][]{{2, 6}, {4, 4}, {6, 2}, {8, 0}, {6, 2}, {4, 4}, {2, 6}, {0, 8}};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ start = dateTimeFormatter.fromFormatToMillis("454516");
+ int index = 0;
+ for (int i = 0; i < expectedOccupiedSlotsCounts1.length; i++) {
+ int ones = expectedOccupiedSlotsCounts1[i][0];
+ int zeros = expectedOccupiedSlotsCounts1[i][1];
+ int total = ones + zeros;
+ for (int k = 0; k < total; k++) {
+ String firstTimeCol = ((Long) gapFillRows1.get(index)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if (gapFillRows1.get(index)[3].equals(1)) {
+ ones--;
+ } else {
+ zeros--;
+ }
+ index++;
+ }
+ Assert.assertEquals(ones, 0);
+ Assert.assertEquals(zeros, 0);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ Assert.assertEquals(gapFillRows1.size(), index);
+
+ String gapfillQuery2 = "SELECT "
+ + " GapFill(time_col, '1:HOURS:EPOCH', "
+ + " '454516', '454524', '1:HOURS', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)),"
+ + " levelId, lotId, occupied "
+ + "FROM ("
+ + " SELECT ToEpochHours(eventTime) AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + ") "
+ + " WHERE occupied = 1 "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ int[] expectedOccupiedSlotsCounts2 = new int[]{2, 4, 6, 8, 6, 4, 2};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ start = dateTimeFormatter.fromFormatToMillis("454516");
+ index = 0;
+ for (int i = 0; i < expectedOccupiedSlotsCounts2.length; i++) {
+ for (int k = 0; k < expectedOccupiedSlotsCounts2[i]; k++) {
+ String firstTimeCol = ((Long) gapFillRows2.get(index)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(gapFillRows2.get(index)[3], 1);
+ index++;
+ }
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ Assert.assertEquals(gapFillRows2.size(), index);
+ }
+
+ @Test
+ public void toEpochHoursGapfillTestGapfillAggregate() {
+ DateTimeFormatSpec dateTimeFormatter = new DateTimeFormatSpec("1:HOURS:EPOCH");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + "time_col, SUM(isOccupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(ToEpochHours(eventTime), '1:HOURS:EPOCH', "
+ + " '454516', '454524', '1:HOURS', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
+ + " GROUP BY time_col "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ double[] expectedOccupiedSlotsCounts1 = new double[]{6, 8, 10, 12, 6, 4, 2, 0};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ Assert.assertEquals(gapFillRows1.size(), expectedOccupiedSlotsCounts1.length);
+ start = dateTimeFormatter.fromFormatToMillis("454516");
+ for (int i = 0; i < expectedOccupiedSlotsCounts1.length; i++) {
+ String firstTimeCol = ((Long) gapFillRows1.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts1[i], gapFillRows1.get(i)[1]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+
+ String gapfillQuery2 = "SELECT "
+ + "time_col, SUM(isOccupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(ToEpochHours(eventTime), '1:HOURS:EPOCH', "
+ + " '454516', '454524', '1:HOURS', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
+ + " WHERE isOccupied = 1 "
+ + " GROUP BY time_col "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ double[] expectedOccupiedSlotsCounts2 = new double[]{6, 8, 10, 12, 6, 4, 2};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ Assert.assertEquals(gapFillRows2.size(), expectedOccupiedSlotsCounts2.length);
+ start = dateTimeFormatter.fromFormatToMillis("454516");
+ for (int i = 0; i < expectedOccupiedSlotsCounts2.length; i++) {
+ String firstTimeCol = ((Long) gapFillRows2.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts2[i], gapFillRows2.get(i)[1]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ }
+
+ @Test
+ public void toEpochHoursGapfillTestGapfillAggregateWithOptionalGroupBy() {
+ DateTimeFormatSpec dateTimeFormatter = new DateTimeFormatSpec("1:HOURS:EPOCH");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + "time_col, levelId, SUM(isOccupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(ToEpochHours(eventTime), '1:HOURS:EPOCH', "
+ + " '454516', '454524', '1:HOURS', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
+ + " GROUP BY time_col, levelId "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ double[] expectedOccupiedSlotsCountsForLevel11 = new double[]{4, 5, 6, 5, 3, 2, 1, 0};
+ double[] expectedOccupiedSlotsCountsForLevel21 = new double[]{2, 3, 4, 7, 3, 2, 1, 0};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ Assert.assertEquals(gapFillRows1.size(), expectedOccupiedSlotsCountsForLevel11.length * 2);
+ start = dateTimeFormatter.fromFormatToMillis("454516");
+ for (int i = 0; i < expectedOccupiedSlotsCountsForLevel11.length * 2; i += 2) {
+ String firstTimeCol = ((Long) gapFillRows1.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if ("Level_0".equals(gapFillRows1.get(i)[1])) {
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel11[i / 2], gapFillRows1.get(i)[2]);
+ } else {
+ Assert.assertEquals("Level_1", gapFillRows1.get(i)[1]);
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel21[i / 2], gapFillRows1.get(i)[2]);
+ }
+ firstTimeCol = ((Long) gapFillRows1.get(i + 1)[0]).toString();
+ timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if ("Level_0".equals(gapFillRows1.get(i + 1)[1])) {
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel11[i / 2], gapFillRows1.get(i + 1)[2]);
+ } else {
+ Assert.assertEquals("Level_1", gapFillRows1.get(i + 1)[1]);
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel21[i / 2], gapFillRows1.get(i + 1)[2]);
+ }
+ start += dateTimeGranularity.granularityToMillis();
+ }
+
+ String gapfillQuery2 = "SELECT "
+ + "time_col, levelId, SUM(isOccupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(ToEpochHours(eventTime), '1:HOURS:EPOCH', "
+ + " '454516', '454524', '1:HOURS', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
+ + " WHERE isOccupied = 1 "
+ + " GROUP BY time_col, levelId "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ double[] expectedOccupiedSlotsCountsForLevel12 = new double[]{4, 5, 6, 5, 3, 2, 1};
+ double[] expectedOccupiedSlotsCountsForLevel22 = new double[]{2, 3, 4, 7, 3, 2, 1};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ Assert.assertEquals(gapFillRows2.size(), expectedOccupiedSlotsCountsForLevel12.length * 2);
+ start = dateTimeFormatter.fromFormatToMillis("454516");
+ for (int i = 0; i < expectedOccupiedSlotsCountsForLevel12.length * 2; i += 2) {
+ String firstTimeCol = ((Long) gapFillRows2.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if ("Level_0".equals(gapFillRows2.get(i)[1])) {
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel12[i / 2], gapFillRows2.get(i)[2]);
+ } else {
+ Assert.assertEquals("Level_1", gapFillRows2.get(i)[1]);
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel22[i / 2], gapFillRows2.get(i)[2]);
+ }
+ firstTimeCol = ((Long) gapFillRows2.get(i + 1)[0]).toString();
+ timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if ("Level_0".equals(gapFillRows2.get(i + 1)[1])) {
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel12[i / 2], gapFillRows2.get(i + 1)[2]);
+ } else {
+ Assert.assertEquals("Level_1", gapFillRows2.get(i + 1)[1]);
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel22[i / 2], gapFillRows2.get(i + 1)[2]);
+ }
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ }
+
+ @Test
+ public void toEpochHoursGapfillTestGapfillAggregateWithHavingClause() {
+ DateTimeFormatSpec dateTimeFormatter = new DateTimeFormatSpec("1:HOURS:EPOCH");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + "time_col, levelId, SUM(isOccupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(ToEpochHours(eventTime), '1:HOURS:EPOCH', "
+ + " '454516', '454524', '1:HOURS', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
+ + " GROUP BY time_col, levelId "
+ + " HAVING occupied_slots_count > 0"
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ double[] expectedOccupiedSlotsCountsForLevel11 = new double[]{4, 5, 6, 5, 3, 2, 1};
+ double[] expectedOccupiedSlotsCountsForLevel21 = new double[]{2, 3, 4, 7, 3, 2, 1};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ Assert.assertEquals(gapFillRows1.size(), expectedOccupiedSlotsCountsForLevel11.length * 2);
+ start = dateTimeFormatter.fromFormatToMillis("454516");
+ for (int i = 0; i < expectedOccupiedSlotsCountsForLevel11.length * 2; i += 2) {
+ String firstTimeCol = ((Long) gapFillRows1.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if ("Level_0".equals(gapFillRows1.get(i)[1])) {
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel11[i / 2], gapFillRows1.get(i)[2]);
+ } else {
+ Assert.assertEquals("Level_1", gapFillRows1.get(i)[1]);
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel21[i / 2], gapFillRows1.get(i)[2]);
+ }
+ firstTimeCol = ((Long) gapFillRows1.get(i + 1)[0]).toString();
+ timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if ("Level_0".equals(gapFillRows1.get(i + 1)[1])) {
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel11[i / 2], gapFillRows1.get(i + 1)[2]);
+ } else {
+ Assert.assertEquals("Level_1", gapFillRows1.get(i + 1)[1]);
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel21[i / 2], gapFillRows1.get(i + 1)[2]);
+ }
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ }
+
+ @Test
+ public void toEpochHoursGapfillTestAggregateAggregate() {
+ DateTimeFormatSpec dateTimeFormatter = new DateTimeFormatSpec("1:HOURS:EPOCH");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + "time_col, SUM(occupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(time_col, '1:HOURS:EPOCH', "
+ + " '454516', '454524', '1:HOURS', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " occupied, lotId, levelId"
+ + " FROM ("
+ + " SELECT ToEpochHours(eventTime) AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + " ) "
+ + " LIMIT 200 "
+ + ") "
+ + " GROUP BY time_col "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ double[] expectedOccupiedSlotsCounts1 = new double[]{2, 4, 6, 8, 6, 4, 2, 0};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ Assert.assertEquals(gapFillRows1.size(), expectedOccupiedSlotsCounts1.length);
+ start = dateTimeFormatter.fromFormatToMillis("454516");
+ for (int i = 0; i < expectedOccupiedSlotsCounts1.length; i++) {
+ String firstTimeCol = ((Long) gapFillRows1.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts1[i], gapFillRows1.get(i)[1]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+
+ String gapfillQuery2 = "SELECT "
+ + "time_col, SUM(occupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(time_col, '1:HOURS:EPOCH', "
+ + " '454516', '454524', '1:HOURS', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " occupied, lotId, levelId"
+ + " FROM ("
+ + " SELECT ToEpochHours(eventTime) AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + " ) "
+ + " LIMIT 200 "
+ + ") "
+ + " WHERE occupied = 1 "
+ + " GROUP BY time_col "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ double[] expectedOccupiedSlotsCounts2 = new double[]{2, 4, 6, 8, 6, 4, 2};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ Assert.assertEquals(gapFillRows2.size(), expectedOccupiedSlotsCounts2.length);
+ start = dateTimeFormatter.fromFormatToMillis("454516");
+ for (int i = 0; i < expectedOccupiedSlotsCounts2.length; i++) {
+ String firstTimeCol = ((Long) gapFillRows2.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts2[i], gapFillRows2.get(i)[1]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ }
+
+ @Test
+ public void toEpochHoursGapfillTestAggregateAggregateWithOptionalGroupBy() {
+ DateTimeFormatSpec dateTimeFormatter = new DateTimeFormatSpec("1:HOURS:EPOCH");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + "time_col, levelId, SUM(occupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(time_col, '1:HOURS:EPOCH', "
+ + " '454516', '454524', '1:HOURS', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " occupied, lotId, levelId"
+ + " FROM ("
+ + " SELECT ToEpochHours(eventTime) AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + ") "
+ + " LIMIT 200 "
+ + ") "
+ + " GROUP BY time_col, levelId "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ double[] expectedOccupiedSlotsCounts1 = new double[]{1, 2, 3, 4, 3, 2, 1, 0};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ Assert.assertEquals(gapFillRows1.size(), expectedOccupiedSlotsCounts1.length * 2);
+ start = dateTimeFormatter.fromFormatToMillis("454516");
+ for (int i = 0; i < expectedOccupiedSlotsCounts1.length * 2; i += 2) {
+ String firstTimeCol = ((Long) gapFillRows1.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts1[i / 2], gapFillRows1.get(i)[2]);
+ firstTimeCol = ((Long) gapFillRows1.get(i + 1)[0]).toString();
+ timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts1[i / 2], gapFillRows1.get(i + 1)[2]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+
+ String gapfillQuery2 = "SELECT "
+ + "time_col, SUM(occupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(time_col, '1:HOURS:EPOCH', "
+ + " '454516', '454524', '1:HOURS', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " occupied, lotId, levelId"
+ + " FROM ("
+ + " SELECT ToEpochHours(eventTime) AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + " ) "
+ + " LIMIT 200 "
+ + ") "
+ + " WHERE occupied = 1 "
+ + " GROUP BY time_col, levelId "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ double[] expectedOccupiedSlotsCounts2 = new double[]{1, 2, 3, 4, 3, 2, 1};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ Assert.assertEquals(gapFillRows2.size(), expectedOccupiedSlotsCounts2.length * 2);
+ start = dateTimeFormatter.fromFormatToMillis("454516");
+ for (int i = 0; i < expectedOccupiedSlotsCounts2.length * 2; i += 2) {
+ String firstTimeCol = ((Long) gapFillRows2.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts2[i / 2], gapFillRows2.get(i)[2]);
+ firstTimeCol = ((Long) gapFillRows2.get(i + 1)[0]).toString();
+ timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts2[i / 2], gapFillRows2.get(i + 1)[2]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ }
+
+ @Test
+ public void toEpochHoursGapfillTestAggregateAggregateWithHavingClause() {
+ String gapfillQuery1 = "SELECT "
+ + "time_col, levelId, SUM(occupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(time_col, '1:HOURS:EPOCH', "
+ + " '454516', '454524', '1:HOURS', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " occupied, lotId, levelId"
+ + " FROM ("
+ + " SELECT ToEpochHours(eventTime) AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + ") "
+ + " LIMIT 200 "
+ + ") "
+ + " GROUP BY time_col, levelId "
+ + " HAVING occupied_slots_count > 0"
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ double[] expectedOccupiedSlotsCounts1 = new double[]{1, 2, 3, 4, 3, 2, 1};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ Assert.assertEquals(gapFillRows1.size(), expectedOccupiedSlotsCounts1.length * 2);
+ DateTimeFormatSpec dateTimeFormatter = new DateTimeFormatSpec("1:HOURS:EPOCH");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+
+ long start = dateTimeFormatter.fromFormatToMillis("454516");
+ for (int i = 0; i < expectedOccupiedSlotsCounts1.length * 2; i += 2) {
+ String firstTimeCol = ((Long) gapFillRows1.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts1[i / 2], gapFillRows1.get(i)[2]);
+ firstTimeCol = ((Long) gapFillRows1.get(i + 1)[0]).toString();
+ timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts1[i / 2], gapFillRows1.get(i + 1)[2]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ }
+
+ @Test
+ public void toEpochMinutesRoundedHoursGapfillTestSelectSelect() {
+ DateTimeFormatSpec dateTimeFormatter = new DateTimeFormatSpec("1:MINUTES:EPOCH");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT GapFill(ToEpochMinutesRounded(eventTime, 60), '1:MINUTES:EPOCH', "
+ + " '27270960', '27271440', '1:HOURS', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ int[][] expectedOccupiedSlotsCounts1 =
+ new int[][]{{6, 6}, {8, 4}, {10, 2}, {12, 0}, {6, 4}, {4, 6}, {2, 10}, {0, 10}};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ start = dateTimeFormatter.fromFormatToMillis("27270960");
+ int index = 0;
+ for (int i = 0; i < expectedOccupiedSlotsCounts1.length; i++) {
+ int ones = expectedOccupiedSlotsCounts1[i][0];
+ int zeros = expectedOccupiedSlotsCounts1[i][1];
+ int total = ones + zeros;
+ for (int k = 0; k < total; k++) {
+ String firstTimeCol = ((Long) (gapFillRows1.get(index)[0])).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if (gapFillRows1.get(index)[1].equals(1)) {
+ ones--;
+ } else {
+ zeros--;
+ }
+ index++;
+ }
+ Assert.assertEquals(ones, 0);
+ Assert.assertEquals(zeros, 0);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ Assert.assertEquals(gapFillRows1.size(), index);
+
+ String gapfillQuery2 = "SELECT "
+ + "time_col, levelId, lotId, isOccupied, time_col "
+ + "FROM ("
+ + " SELECT GapFill(ToEpochMinutesRounded(eventTime, 60), '1:MINUTES:EPOCH', "
+ + " '27270960', '27271440', '1:HOURS', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
+ + " WHERE isOccupied = 1 "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ int[] expectedOccupiedSlotsCounts2 = new int[]{6, 8, 10, 12, 6, 4, 2, 0};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ start = dateTimeFormatter.fromFormatToMillis("27270960");
+ index = 0;
+ for (int i = 0; i < expectedOccupiedSlotsCounts2.length; i++) {
+ for (int k = 0; k < expectedOccupiedSlotsCounts2[i]; k++) {
+ String firstTimeCol = ((Long) gapFillRows2.get(index)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(gapFillRows2.get(index)[3], 1);
+ index++;
+ }
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ Assert.assertEquals(gapFillRows2.size(), index);
+ }
+
+ @Test
+ public void toEpochMinutesRoundedHoursGapfillTestAggregateSelect() {
+ DateTimeFormatSpec dateTimeFormatter = new DateTimeFormatSpec("1:MINUTES:EPOCH");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + "GapFill(time_col, '1:MINUTES:EPOCH', "
+ + " '27270960', '27271440', '1:HOURS', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)), levelId, lotId, occupied "
+ + "FROM ("
+ + " SELECT ToEpochMinutesRounded(eventTime, 60) AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + ") "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ int[][] expectedOccupiedSlotsCounts1 = new int[][]{{2, 6}, {4, 4}, {6, 2}, {8, 0}, {6, 2}, {4, 4}, {2, 6}, {0, 8}};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ start = dateTimeFormatter.fromFormatToMillis("27270960");
+ int index = 0;
+ for (int i = 0; i < expectedOccupiedSlotsCounts1.length; i++) {
+ int ones = expectedOccupiedSlotsCounts1[i][0];
+ int zeros = expectedOccupiedSlotsCounts1[i][1];
+ int total = ones + zeros;
+ for (int k = 0; k < total; k++) {
+ String firstTimeCol = ((Long) gapFillRows1.get(index)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if (gapFillRows1.get(index)[3].equals(1)) {
+ ones--;
+ } else {
+ zeros--;
+ }
+ index++;
+ }
+ Assert.assertEquals(ones, 0);
+ Assert.assertEquals(zeros, 0);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ Assert.assertEquals(gapFillRows1.size(), index);
+
+ String gapfillQuery2 = "SELECT "
+ + "GapFill(time_col, '1:MINUTES:EPOCH', "
+ + " '27270960', '27271440', '1:HOURS', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)), levelId, lotId, occupied "
+ + "FROM ("
+ + " SELECT ToEpochMinutesRounded(eventTime, 60) AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + ") "
+ + " WHERE occupied = 1 "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ int[] expectedOccupiedSlotsCounts2 = new int[]{2, 4, 6, 8, 6, 4, 2};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ start = dateTimeFormatter.fromFormatToMillis("27270960");
+ index = 0;
+ for (int i = 0; i < expectedOccupiedSlotsCounts2.length; i++) {
+ for (int k = 0; k < expectedOccupiedSlotsCounts2[i]; k++) {
+ String firstTimeCol = ((Long) gapFillRows2.get(index)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(gapFillRows2.get(index)[3], 1);
+ index++;
+ }
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ Assert.assertEquals(gapFillRows2.size(), index);
+ }
+
+ @Test
+ public void toEpochMinutesRoundedHoursGapfillTestGapfillAggregate() {
+ DateTimeFormatSpec dateTimeFormatter = new DateTimeFormatSpec("1:MINUTES:EPOCH");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + "time_col, SUM(isOccupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(ToEpochMinutesRounded(eventTime, 60), '1:MINUTES:EPOCH', "
+ + " '27270960', '27271440', '1:HOURS', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
+ + " GROUP BY time_col "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ double[] expectedOccupiedSlotsCounts1 = new double[]{6, 8, 10, 12, 6, 4, 2, 0};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ Assert.assertEquals(gapFillRows1.size(), expectedOccupiedSlotsCounts1.length);
+ start = dateTimeFormatter.fromFormatToMillis("27270960");
+ for (int i = 0; i < expectedOccupiedSlotsCounts1.length; i++) {
+ String firstTimeCol = ((Long) gapFillRows1.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts1[i], gapFillRows1.get(i)[1]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+
+ String gapfillQuery2 = "SELECT "
+ + "time_col, SUM(isOccupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(ToEpochMinutesRounded(eventTime, 60), '1:MINUTES:EPOCH', "
+ + " '27270960', '27271440', '1:HOURS', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
+ + " WHERE isOccupied = 1 "
+ + " GROUP BY time_col "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ double[] expectedOccupiedSlotsCounts2 = new double[]{6, 8, 10, 12, 6, 4, 2};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ Assert.assertEquals(gapFillRows2.size(), expectedOccupiedSlotsCounts2.length);
+ start = dateTimeFormatter.fromFormatToMillis("27270960");
+ for (int i = 0; i < expectedOccupiedSlotsCounts2.length; i++) {
+ String firstTimeCol = ((Long) gapFillRows2.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts2[i], gapFillRows2.get(i)[1]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ }
+
+ @Test
+ public void toEpochMinutesRoundedHoursGapfillTestGapfillAggregateWithOptionalGroupBy() {
+ DateTimeFormatSpec dateTimeFormatter = new DateTimeFormatSpec("1:MINUTES:EPOCH");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + "time_col, levelId, SUM(isOccupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(ToEpochMinutesRounded(eventTime, 60), '1:MINUTES:EPOCH', "
+ + " '27270960', '27271440', '1:HOURS', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
+ + " GROUP BY time_col, levelId "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ double[] expectedOccupiedSlotsCountsForLevel11 = new double[]{4, 5, 6, 5, 3, 2, 1, 0};
+ double[] expectedOccupiedSlotsCountsForLevel21 = new double[]{2, 3, 4, 7, 3, 2, 1, 0};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ Assert.assertEquals(gapFillRows1.size(), expectedOccupiedSlotsCountsForLevel11.length * 2);
+ start = dateTimeFormatter.fromFormatToMillis("27270960");
+ for (int i = 0; i < expectedOccupiedSlotsCountsForLevel11.length * 2; i += 2) {
+ String firstTimeCol = ((Long) gapFillRows1.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if ("Level_0".equals(gapFillRows1.get(i)[1])) {
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel11[i / 2], gapFillRows1.get(i)[2]);
+ } else {
+ Assert.assertEquals("Level_1", gapFillRows1.get(i)[1]);
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel21[i / 2], gapFillRows1.get(i)[2]);
+ }
+ firstTimeCol = ((Long) gapFillRows1.get(i + 1)[0]).toString();
+ timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if ("Level_0".equals(gapFillRows1.get(i + 1)[1])) {
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel11[i / 2], gapFillRows1.get(i + 1)[2]);
+ } else {
+ Assert.assertEquals("Level_1", gapFillRows1.get(i + 1)[1]);
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel21[i / 2], gapFillRows1.get(i + 1)[2]);
+ }
+ start += dateTimeGranularity.granularityToMillis();
+ }
+
+ String gapfillQuery2 = "SELECT "
+ + "time_col, levelId, SUM(isOccupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(ToEpochMinutesRounded(eventTime, 60), '1:MINUTES:EPOCH', "
+ + " '27270960', '27271440', '1:HOURS', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
+ + " WHERE isOccupied = 1 "
+ + " GROUP BY time_col, levelId "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ double[] expectedOccupiedSlotsCountsForLevel12 = new double[]{4, 5, 6, 5, 3, 2, 1};
+ double[] expectedOccupiedSlotsCountsForLevel22 = new double[]{2, 3, 4, 7, 3, 2, 1};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ Assert.assertEquals(gapFillRows2.size(), expectedOccupiedSlotsCountsForLevel12.length * 2);
+ start = dateTimeFormatter.fromFormatToMillis("27270960");
+ for (int i = 0; i < expectedOccupiedSlotsCountsForLevel12.length * 2; i += 2) {
+ String firstTimeCol = ((Long) gapFillRows2.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if ("Level_0".equals(gapFillRows2.get(i)[1])) {
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel12[i / 2], gapFillRows2.get(i)[2]);
+ } else {
+ Assert.assertEquals("Level_1", gapFillRows2.get(i)[1]);
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel22[i / 2], gapFillRows2.get(i)[2]);
+ }
+ firstTimeCol = ((Long) gapFillRows2.get(i + 1)[0]).toString();
+ timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if ("Level_0".equals(gapFillRows2.get(i + 1)[1])) {
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel12[i / 2], gapFillRows2.get(i + 1)[2]);
+ } else {
+ Assert.assertEquals("Level_1", gapFillRows2.get(i + 1)[1]);
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel22[i / 2], gapFillRows2.get(i + 1)[2]);
+ }
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ }
+
+ @Test
+ public void toEpochMinutesRoundedHoursGapfillTestGapfillAggregateWithHavingClause() {
+ DateTimeFormatSpec dateTimeFormatter = new DateTimeFormatSpec("1:MINUTES:EPOCH");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + "time_col, levelId, SUM(isOccupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(ToEpochMinutesRounded(eventTime, 60), '1:MINUTES:EPOCH', "
+ + " '27270960', '27271440', '1:HOURS', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
+ + " GROUP BY time_col, levelId "
+ + " HAVING occupied_slots_count > 0"
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ double[] expectedOccupiedSlotsCountsForLevel11 = new double[]{4, 5, 6, 5, 3, 2, 1};
+ double[] expectedOccupiedSlotsCountsForLevel21 = new double[]{2, 3, 4, 7, 3, 2, 1};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ Assert.assertEquals(gapFillRows1.size(), expectedOccupiedSlotsCountsForLevel11.length * 2);
+ start = dateTimeFormatter.fromFormatToMillis("27270960");
+ for (int i = 0; i < expectedOccupiedSlotsCountsForLevel11.length * 2; i += 2) {
+ String firstTimeCol = ((Long) gapFillRows1.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if ("Level_0".equals(gapFillRows1.get(i)[1])) {
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel11[i / 2], gapFillRows1.get(i)[2]);
+ } else {
+ Assert.assertEquals("Level_1", gapFillRows1.get(i)[1]);
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel21[i / 2], gapFillRows1.get(i)[2]);
+ }
+ firstTimeCol = ((Long) gapFillRows1.get(i + 1)[0]).toString();
+ timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if ("Level_0".equals(gapFillRows1.get(i + 1)[1])) {
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel11[i / 2], gapFillRows1.get(i + 1)[2]);
+ } else {
+ Assert.assertEquals("Level_1", gapFillRows1.get(i + 1)[1]);
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel21[i / 2], gapFillRows1.get(i + 1)[2]);
+ }
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ }
+
+ @Test
+ public void toEpochMinutesRoundedHoursGapfillTestAggregateAggregate() {
+ DateTimeFormatSpec dateTimeFormatter = new DateTimeFormatSpec("1:MINUTES:EPOCH");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + "time_col, SUM(occupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(time_col, '1:MINUTES:EPOCH', "
+ + " '27270960', '27271440', '1:HOURS', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " occupied, lotId, levelId"
+ + " FROM ("
+ + " SELECT ToEpochMinutesRounded(eventTime, 60) AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + " ) "
+ + " LIMIT 200 "
+ + ") "
+ + " GROUP BY time_col "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ double[] expectedOccupiedSlotsCounts1 = new double[]{2, 4, 6, 8, 6, 4, 2, 0};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ Assert.assertEquals(gapFillRows1.size(), expectedOccupiedSlotsCounts1.length);
+ start = dateTimeFormatter.fromFormatToMillis("27270960");
+ for (int i = 0; i < expectedOccupiedSlotsCounts1.length; i++) {
+ String firstTimeCol = ((Long) gapFillRows1.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts1[i], gapFillRows1.get(i)[1]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+
+ String gapfillQuery2 = "SELECT "
+ + "time_col, SUM(occupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(time_col, '1:MINUTES:EPOCH', "
+ + " '27270960', '27271440', '1:HOURS', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " occupied, lotId, levelId"
+ + " FROM ("
+ + " SELECT ToEpochMinutesRounded(eventTime, 60) time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + " ) "
+ + " LIMIT 200 "
+ + ") "
+ + " WHERE occupied = 1 "
+ + " GROUP BY time_col "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ double[] expectedOccupiedSlotsCounts2 = new double[]{2, 4, 6, 8, 6, 4, 2};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ Assert.assertEquals(gapFillRows2.size(), expectedOccupiedSlotsCounts2.length);
+ start = dateTimeFormatter.fromFormatToMillis("27270960");
+ for (int i = 0; i < expectedOccupiedSlotsCounts2.length; i++) {
+ String firstTimeCol = ((Long) gapFillRows2.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts2[i], gapFillRows2.get(i)[1]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ }
+
+ @Test
+ public void toEpochMinutesRoundedHoursGapfillTestAggregateAggregateWithOptionalGroupBy() {
+ DateTimeFormatSpec dateTimeFormatter = new DateTimeFormatSpec("1:MINUTES:EPOCH");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + "time_col, levelId, SUM(occupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(time_col, '1:MINUTES:EPOCH', "
+ + " '27270960', '27271440', '1:HOURS', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " occupied, lotId, levelId"
+ + " FROM ("
+ + " SELECT ToEpochMinutesRounded(eventTime, 60) AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + " ) "
+ + " LIMIT 200 "
+ + ") "
+ + " GROUP BY time_col, levelId "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ double[] expectedOccupiedSlotsCounts1 = new double[]{1, 2, 3, 4, 3, 2, 1, 0};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ Assert.assertEquals(gapFillRows1.size(), expectedOccupiedSlotsCounts1.length * 2);
+ start = dateTimeFormatter.fromFormatToMillis("27270960");
+ for (int i = 0; i < expectedOccupiedSlotsCounts1.length * 2; i += 2) {
+ String firstTimeCol = ((Long) gapFillRows1.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts1[i / 2], gapFillRows1.get(i)[2]);
+ firstTimeCol = ((Long) gapFillRows1.get(i + 1)[0]).toString();
+ timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts1[i / 2], gapFillRows1.get(i + 1)[2]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+
+ String gapfillQuery2 = "SELECT "
+ + "time_col, SUM(occupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(time_col, '1:MINUTES:EPOCH', "
+ + " '27270960', '27271440', '1:HOURS', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " occupied, lotId, levelId"
+ + " FROM ("
+ + " SELECT ToEpochMinutesRounded(eventTime, 60) AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + " ) "
+ + " LIMIT 200 "
+ + ") "
+ + " WHERE occupied = 1 "
+ + " GROUP BY time_col, levelId "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ double[] expectedOccupiedSlotsCounts2 = new double[]{1, 2, 3, 4, 3, 2, 1};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ Assert.assertEquals(gapFillRows2.size(), expectedOccupiedSlotsCounts2.length * 2);
+ start = dateTimeFormatter.fromFormatToMillis("27270960");
+ for (int i = 0; i < expectedOccupiedSlotsCounts2.length * 2; i += 2) {
+ String firstTimeCol = ((Long) gapFillRows2.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts2[i / 2], gapFillRows2.get(i)[2]);
+ firstTimeCol = ((Long) gapFillRows2.get(i + 1)[0]).toString();
+ timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts2[i / 2], gapFillRows2.get(i + 1)[2]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ }
+
+ @Test
+ public void toEpochMinutesRoundedHoursGapfillTestAggregateAggregateWithHavingClause() {
+ String gapfillQuery1 = "SELECT "
+ + "time_col, levelId, SUM(occupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(time_col, '1:MINUTES:EPOCH', "
+ + " '27270960', '27271440', '1:HOURS', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " occupied, lotId, levelId"
+ + " FROM ("
+ + " SELECT ToEpochMinutesRounded(eventTime, 60) time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + " ) "
+ + " LIMIT 200 "
+ + ") "
+ + " GROUP BY time_col, levelId "
+ + " HAVING occupied_slots_count > 0"
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ double[] expectedOccupiedSlotsCounts1 = new double[]{1, 2, 3, 4, 3, 2, 1};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ Assert.assertEquals(gapFillRows1.size(), expectedOccupiedSlotsCounts1.length * 2);
+ DateTimeFormatSpec dateTimeFormatter = new DateTimeFormatSpec("1:MINUTES:EPOCH");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+
+ long start = dateTimeFormatter.fromFormatToMillis("27270960");
+ for (int i = 0; i < expectedOccupiedSlotsCounts1.length * 2; i += 2) {
+ String firstTimeCol = ((Long) gapFillRows1.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts1[i / 2], gapFillRows1.get(i)[2]);
+ firstTimeCol = ((Long) gapFillRows1.get(i + 1)[0]).toString();
+ timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts1[i / 2], gapFillRows1.get(i + 1)[2]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ }
+
+ @Test
+ public void toEpochMinutesBucketHoursGapfillTestSelectSelect() {
+ DateTimeFormatSpec dateTimeFormatter = new DateTimeFormatSpec("1:HOURS:EPOCH");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + " GapFill(ToEpochMinutesBucket(eventTime, 60), '1:HOURS:EPOCH', "
+ + " '454516', '454524', '1:HOURS', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ int[][] expectedOccupiedSlotsCounts1 =
+ new int[][]{{6, 6}, {8, 4}, {10, 2}, {12, 0}, {6, 4}, {4, 6}, {2, 10}, {0, 10}};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ start = dateTimeFormatter.fromFormatToMillis("454516");
+ int index = 0;
+ for (int i = 0; i < expectedOccupiedSlotsCounts1.length; i++) {
+ int ones = expectedOccupiedSlotsCounts1[i][0];
+ int zeros = expectedOccupiedSlotsCounts1[i][1];
+ int total = ones + zeros;
+ for (int k = 0; k < total; k++) {
+ String firstTimeCol = ((Long) (gapFillRows1.get(index)[0])).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if (gapFillRows1.get(index)[1].equals(1)) {
+ ones--;
+ } else {
+ zeros--;
+ }
+ index++;
+ }
+ Assert.assertEquals(ones, 0);
+ Assert.assertEquals(zeros, 0);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ Assert.assertEquals(gapFillRows1.size(), index);
+
+ String gapfillQuery2 = "SELECT "
+ + "time_col, levelId, lotId, isOccupied, time_col "
+ + "FROM ("
+ + " SELECT GapFill(ToEpochMinutesBucket(eventTime, 60), '1:HOURS:EPOCH', "
+ + " '454516', '454524', '1:HOURS', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
+ + " WHERE isOccupied = 1 "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ int[] expectedOccupiedSlotsCounts2 = new int[]{6, 8, 10, 12, 6, 4, 2, 0};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ start = dateTimeFormatter.fromFormatToMillis("454516");
+ index = 0;
+ for (int i = 0; i < expectedOccupiedSlotsCounts2.length; i++) {
+ for (int k = 0; k < expectedOccupiedSlotsCounts2[i]; k++) {
+ String firstTimeCol = ((Long) gapFillRows2.get(index)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(gapFillRows2.get(index)[3], 1);
+ index++;
+ }
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ Assert.assertEquals(gapFillRows2.size(), index);
+ }
+
+ @Test
+ public void toEpochMinutesBucketHoursGapfillTestAggregateSelect() {
+ DateTimeFormatSpec dateTimeFormatter = new DateTimeFormatSpec("1:HOURS:EPOCH");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + " GapFill(time_col, '1:HOURS:EPOCH', "
+ + " '454516', '454524', '1:HOURS', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)),"
+ + " levelId, lotId, occupied "
+ + "FROM ("
+ + " SELECT ToEpochMinutesBucket(eventTime, 60) AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + ") "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ int[][] expectedOccupiedSlotsCounts1 = new int[][]{{2, 6}, {4, 4}, {6, 2}, {8, 0}, {6, 2}, {4, 4}, {2, 6}, {0, 8}};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ start = dateTimeFormatter.fromFormatToMillis("454516");
+ int index = 0;
+ for (int i = 0; i < expectedOccupiedSlotsCounts1.length; i++) {
+ int ones = expectedOccupiedSlotsCounts1[i][0];
+ int zeros = expectedOccupiedSlotsCounts1[i][1];
+ int total = ones + zeros;
+ for (int k = 0; k < total; k++) {
+ String firstTimeCol = ((Long) gapFillRows1.get(index)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if (gapFillRows1.get(index)[3].equals(1)) {
+ ones--;
+ } else {
+ zeros--;
+ }
+ index++;
+ }
+ Assert.assertEquals(ones, 0);
+ Assert.assertEquals(zeros, 0);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ Assert.assertEquals(gapFillRows1.size(), index);
+
+ String gapfillQuery2 = "SELECT "
+ + " GapFill(time_col, '1:HOURS:EPOCH', "
+ + " '454516', '454524', '1:HOURS', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)),"
+ + " levelId, lotId, occupied "
+ + "FROM ("
+ + " SELECT ToEpochMinutesBucket(eventTime, 60) AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + ") "
+ + " WHERE occupied = 1 "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ int[] expectedOccupiedSlotsCounts2 = new int[]{2, 4, 6, 8, 6, 4, 2};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ start = dateTimeFormatter.fromFormatToMillis("454516");
+ index = 0;
+ for (int i = 0; i < expectedOccupiedSlotsCounts2.length; i++) {
+ for (int k = 0; k < expectedOccupiedSlotsCounts2[i]; k++) {
+ String firstTimeCol = ((Long) gapFillRows2.get(index)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(gapFillRows2.get(index)[3], 1);
+ index++;
+ }
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ Assert.assertEquals(gapFillRows2.size(), index);
+ }
+
+ @Test
+ public void toEpochMinutesBucketHoursGapfillTestGapfillAggregate() {
+ DateTimeFormatSpec dateTimeFormatter = new DateTimeFormatSpec("1:HOURS:EPOCH");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + "time_col, SUM(isOccupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(ToEpochMinutesBucket(eventTime, 60), '1:HOURS:EPOCH', "
+ + " '454516', '454524', '1:HOURS', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
+ + " GROUP BY time_col "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ double[] expectedOccupiedSlotsCounts1 = new double[]{6, 8, 10, 12, 6, 4, 2, 0};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ Assert.assertEquals(gapFillRows1.size(), expectedOccupiedSlotsCounts1.length);
+ start = dateTimeFormatter.fromFormatToMillis("454516");
+ for (int i = 0; i < expectedOccupiedSlotsCounts1.length; i++) {
+ String firstTimeCol = ((Long) gapFillRows1.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts1[i], gapFillRows1.get(i)[1]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+
+ String gapfillQuery2 = "SELECT "
+ + "time_col, SUM(isOccupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(ToEpochMinutesBucket(eventTime, 60), '1:HOURS:EPOCH', "
+ + " '454516', '454524', '1:HOURS', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
+ + " WHERE isOccupied = 1 "
+ + " GROUP BY time_col "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ double[] expectedOccupiedSlotsCounts2 = new double[]{6, 8, 10, 12, 6, 4, 2};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ Assert.assertEquals(gapFillRows2.size(), expectedOccupiedSlotsCounts2.length);
+ start = dateTimeFormatter.fromFormatToMillis("454516");
+ for (int i = 0; i < expectedOccupiedSlotsCounts2.length; i++) {
+ String firstTimeCol = ((Long) gapFillRows2.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts2[i], gapFillRows2.get(i)[1]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ }
+
+ @Test
+ public void toEpochMinutesBucketHoursGapfillTestGapfillAggregateWithOptionalGroupBy() {
+ DateTimeFormatSpec dateTimeFormatter = new DateTimeFormatSpec("1:HOURS:EPOCH");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + "time_col, levelId, SUM(isOccupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(ToEpochMinutesBucket(eventTime, 60), '1:HOURS:EPOCH', "
+ + " '454516', '454524', '1:HOURS', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
+ + " GROUP BY time_col, levelId "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ double[] expectedOccupiedSlotsCountsForLevel11 = new double[]{4, 5, 6, 5, 3, 2, 1, 0};
+ double[] expectedOccupiedSlotsCountsForLevel21 = new double[]{2, 3, 4, 7, 3, 2, 1, 0};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ Assert.assertEquals(gapFillRows1.size(), expectedOccupiedSlotsCountsForLevel11.length * 2);
+ start = dateTimeFormatter.fromFormatToMillis("454516");
+ for (int i = 0; i < expectedOccupiedSlotsCountsForLevel11.length * 2; i += 2) {
+ String firstTimeCol = ((Long) gapFillRows1.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if ("Level_0".equals(gapFillRows1.get(i)[1])) {
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel11[i / 2], gapFillRows1.get(i)[2]);
+ } else {
+ Assert.assertEquals("Level_1", gapFillRows1.get(i)[1]);
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel21[i / 2], gapFillRows1.get(i)[2]);
+ }
+ firstTimeCol = ((Long) gapFillRows1.get(i + 1)[0]).toString();
+ timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if ("Level_0".equals(gapFillRows1.get(i + 1)[1])) {
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel11[i / 2], gapFillRows1.get(i + 1)[2]);
+ } else {
+ Assert.assertEquals("Level_1", gapFillRows1.get(i + 1)[1]);
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel21[i / 2], gapFillRows1.get(i + 1)[2]);
+ }
+ start += dateTimeGranularity.granularityToMillis();
+ }
+
+ String gapfillQuery2 = "SELECT "
+ + "time_col, levelId, SUM(isOccupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(ToEpochMinutesBucket(eventTime, 60), '1:HOURS:EPOCH', "
+ + " '454516', '454524', '1:HOURS', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
+ + " WHERE isOccupied = 1 "
+ + " GROUP BY time_col, levelId "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ double[] expectedOccupiedSlotsCountsForLevel12 = new double[]{4, 5, 6, 5, 3, 2, 1};
+ double[] expectedOccupiedSlotsCountsForLevel22 = new double[]{2, 3, 4, 7, 3, 2, 1};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ Assert.assertEquals(gapFillRows2.size(), expectedOccupiedSlotsCountsForLevel12.length * 2);
+ start = dateTimeFormatter.fromFormatToMillis("454516");
+ for (int i = 0; i < expectedOccupiedSlotsCountsForLevel12.length * 2; i += 2) {
+ String firstTimeCol = ((Long) gapFillRows2.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if ("Level_0".equals(gapFillRows2.get(i)[1])) {
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel12[i / 2], gapFillRows2.get(i)[2]);
+ } else {
+ Assert.assertEquals("Level_1", gapFillRows2.get(i)[1]);
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel22[i / 2], gapFillRows2.get(i)[2]);
+ }
+ firstTimeCol = ((Long) gapFillRows2.get(i + 1)[0]).toString();
+ timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if ("Level_0".equals(gapFillRows2.get(i + 1)[1])) {
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel12[i / 2], gapFillRows2.get(i + 1)[2]);
+ } else {
+ Assert.assertEquals("Level_1", gapFillRows2.get(i + 1)[1]);
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel22[i / 2], gapFillRows2.get(i + 1)[2]);
+ }
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ }
+
+ @Test
+ public void toEpochMinutesBucketHoursGapfillTestGapfillAggregateWithHavingClause() {
+ DateTimeFormatSpec dateTimeFormatter = new DateTimeFormatSpec("1:HOURS:EPOCH");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + "time_col, levelId, SUM(isOccupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(ToEpochMinutesBucket(eventTime, 60), '1:HOURS:EPOCH', "
+ + " '454516', '454524', '1:HOURS', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
+ + " GROUP BY time_col, levelId "
+ + " HAVING occupied_slots_count > 0"
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ double[] expectedOccupiedSlotsCountsForLevel11 = new double[]{4, 5, 6, 5, 3, 2, 1};
+ double[] expectedOccupiedSlotsCountsForLevel21 = new double[]{2, 3, 4, 7, 3, 2, 1};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ Assert.assertEquals(gapFillRows1.size(), expectedOccupiedSlotsCountsForLevel11.length * 2);
+ start = dateTimeFormatter.fromFormatToMillis("454516");
+ for (int i = 0; i < expectedOccupiedSlotsCountsForLevel11.length * 2; i += 2) {
+ String firstTimeCol = ((Long) gapFillRows1.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if ("Level_0".equals(gapFillRows1.get(i)[1])) {
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel11[i / 2], gapFillRows1.get(i)[2]);
+ } else {
+ Assert.assertEquals("Level_1", gapFillRows1.get(i)[1]);
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel21[i / 2], gapFillRows1.get(i)[2]);
+ }
+ firstTimeCol = ((Long) gapFillRows1.get(i + 1)[0]).toString();
+ timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if ("Level_0".equals(gapFillRows1.get(i + 1)[1])) {
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel11[i / 2], gapFillRows1.get(i + 1)[2]);
+ } else {
+ Assert.assertEquals("Level_1", gapFillRows1.get(i + 1)[1]);
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel21[i / 2], gapFillRows1.get(i + 1)[2]);
+ }
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ }
+
+ @Test
+ public void toEpochMinutesBucketHoursGapfillTestAggregateAggregate() {
+ DateTimeFormatSpec dateTimeFormatter = new DateTimeFormatSpec("1:HOURS:EPOCH");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + "time_col, SUM(occupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(time_col, '1:HOURS:EPOCH', "
+ + " '454516', '454524', '1:HOURS', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " occupied, lotId, levelId"
+ + " FROM ("
+ + " SELECT ToEpochMinutesBucket(eventTime, 60) AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + " ) "
+ + " LIMIT 200 "
+ + ") "
+ + " GROUP BY time_col "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ double[] expectedOccupiedSlotsCounts1 = new double[]{2, 4, 6, 8, 6, 4, 2, 0};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ Assert.assertEquals(gapFillRows1.size(), expectedOccupiedSlotsCounts1.length);
+ start = dateTimeFormatter.fromFormatToMillis("454516");
+ for (int i = 0; i < expectedOccupiedSlotsCounts1.length; i++) {
+ String firstTimeCol = ((Long) gapFillRows1.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts1[i], gapFillRows1.get(i)[1]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+
+ String gapfillQuery2 = "SELECT "
+ + "time_col, SUM(occupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(time_col, '1:HOURS:EPOCH', "
+ + " '454516', '454524', '1:HOURS', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " occupied, lotId, levelId"
+ + " FROM ("
+ + " SELECT ToEpochMinutesBucket(eventTime, 60) AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + " ) "
+ + " LIMIT 200 "
+ + ") "
+ + " WHERE occupied = 1 "
+ + " GROUP BY time_col "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ double[] expectedOccupiedSlotsCounts2 = new double[]{2, 4, 6, 8, 6, 4, 2};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ Assert.assertEquals(gapFillRows2.size(), expectedOccupiedSlotsCounts2.length);
+ start = dateTimeFormatter.fromFormatToMillis("454516");
+ for (int i = 0; i < expectedOccupiedSlotsCounts2.length; i++) {
+ String firstTimeCol = ((Long) gapFillRows2.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts2[i], gapFillRows2.get(i)[1]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ }
+
+ @Test
+ public void toEpochMinutesBucketHoursGapfillTestAggregateAggregateWithOptionalGroupBy() {
+ DateTimeFormatSpec dateTimeFormatter = new DateTimeFormatSpec("1:HOURS:EPOCH");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + "time_col, levelId, SUM(occupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(time_col, '1:HOURS:EPOCH', "
+ + " '454516', '454524', '1:HOURS', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " occupied, lotId, levelId"
+ + " FROM ("
+ + " SELECT ToEpochMinutesBucket(eventTime, 60) AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + " ) "
+ + " LIMIT 200 "
+ + ") "
+ + " GROUP BY time_col, levelId "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ double[] expectedOccupiedSlotsCounts1 = new double[]{1, 2, 3, 4, 3, 2, 1, 0};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ Assert.assertEquals(gapFillRows1.size(), expectedOccupiedSlotsCounts1.length * 2);
+ start = dateTimeFormatter.fromFormatToMillis("454516");
+ for (int i = 0; i < expectedOccupiedSlotsCounts1.length * 2; i += 2) {
+ String firstTimeCol = ((Long) gapFillRows1.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts1[i / 2], gapFillRows1.get(i)[2]);
+ firstTimeCol = ((Long) gapFillRows1.get(i + 1)[0]).toString();
+ timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts1[i / 2], gapFillRows1.get(i + 1)[2]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+
+ String gapfillQuery2 = "SELECT "
+ + "time_col, SUM(occupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(time_col, '1:HOURS:EPOCH', "
+ + " '454516', '454524', '1:HOURS', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " occupied, lotId, levelId"
+ + " FROM ("
+ + " SELECT ToEpochMinutesBucket(eventTime, 60) AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + " ) "
+ + " LIMIT 200 "
+ + ") "
+ + " WHERE occupied = 1 "
+ + " GROUP BY time_col, levelId "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ double[] expectedOccupiedSlotsCounts2 = new double[]{1, 2, 3, 4, 3, 2, 1};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ Assert.assertEquals(gapFillRows2.size(), expectedOccupiedSlotsCounts2.length * 2);
+ start = dateTimeFormatter.fromFormatToMillis("454516");
+ for (int i = 0; i < expectedOccupiedSlotsCounts2.length * 2; i += 2) {
+ String firstTimeCol = ((Long) gapFillRows2.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts2[i / 2], gapFillRows2.get(i)[2]);
+ firstTimeCol = ((Long) gapFillRows2.get(i + 1)[0]).toString();
+ timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts2[i / 2], gapFillRows2.get(i + 1)[2]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ }
+
+ @Test
+ public void toEpochMinutesBucketHoursGapfillTestAggregateAggregateWithHavingClause() {
+ String gapfillQuery1 = "SELECT "
+ + "time_col, levelId, SUM(occupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(time_col, '1:HOURS:EPOCH', "
+ + " '454516', '454524', '1:HOURS', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " occupied, lotId, levelId"
+ + " FROM ("
+ + " SELECT ToEpochMinutesBucket(eventTime, 60) AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + " ) "
+ + " LIMIT 200 "
+ + ") "
+ + " GROUP BY time_col, levelId "
+ + " HAVING occupied_slots_count > 0"
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ double[] expectedOccupiedSlotsCounts1 = new double[]{1, 2, 3, 4, 3, 2, 1};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ Assert.assertEquals(gapFillRows1.size(), expectedOccupiedSlotsCounts1.length * 2);
+ DateTimeFormatSpec dateTimeFormatter = new DateTimeFormatSpec("1:HOURS:EPOCH");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+
+ long start = dateTimeFormatter.fromFormatToMillis("454516");
+ for (int i = 0; i < expectedOccupiedSlotsCounts1.length * 2; i += 2) {
+ String firstTimeCol = ((Long) gapFillRows1.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts1[i / 2], gapFillRows1.get(i)[2]);
+ firstTimeCol = ((Long) gapFillRows1.get(i + 1)[0]).toString();
+ timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts1[i / 2], gapFillRows1.get(i + 1)[2]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ }
+
+ @Test
+ public void dateTruncHoursGapfillTestSelectSelect() {
+ DateTimeFormatSpec dateTimeFormatter = new DateTimeFormatSpec("1:MILLISECONDS:EPOCH");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + " GapFill(DATETRUNC('hour', eventTime, 'milliseconds'), '1:MILLISECONDS:EPOCH', "
+ + " '1636257600000', '1636286400000', '1:HOURS', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ int[][] expectedOccupiedSlotsCounts1 =
+ new int[][]{{6, 6}, {8, 4}, {10, 2}, {12, 0}, {6, 4}, {4, 6}, {2, 10}, {0, 10}};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ start = dateTimeFormatter.fromFormatToMillis("1636257600000");
+ int index = 0;
+ for (int i = 0; i < expectedOccupiedSlotsCounts1.length; i++) {
+ int ones = expectedOccupiedSlotsCounts1[i][0];
+ int zeros = expectedOccupiedSlotsCounts1[i][1];
+ int total = ones + zeros;
+ for (int k = 0; k < total; k++) {
+ String firstTimeCol = ((Long) (gapFillRows1.get(index)[0])).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if (gapFillRows1.get(index)[1].equals(1)) {
+ ones--;
+ } else {
+ zeros--;
+ }
+ index++;
+ }
+ Assert.assertEquals(ones, 0);
+ Assert.assertEquals(zeros, 0);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ Assert.assertEquals(gapFillRows1.size(), index);
+
+ String gapfillQuery2 = "SELECT "
+ + "time_col, levelId, lotId, isOccupied, time_col "
+ + "FROM ("
+ + " SELECT GapFill(DATETRUNC('hour', eventTime, 'milliseconds'), '1:MILLISECONDS:EPOCH', "
+ + " '1636257600000', '1636286400000', '1:HOURS', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
+ + " WHERE isOccupied = 1 "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ int[] expectedOccupiedSlotsCounts2 = new int[]{6, 8, 10, 12, 6, 4, 2, 0};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ start = dateTimeFormatter.fromFormatToMillis("1636257600000");
+ index = 0;
+ for (int i = 0; i < expectedOccupiedSlotsCounts2.length; i++) {
+ for (int k = 0; k < expectedOccupiedSlotsCounts2[i]; k++) {
+ String firstTimeCol = ((Long) gapFillRows2.get(index)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(gapFillRows2.get(index)[3], 1);
+ index++;
+ }
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ Assert.assertEquals(gapFillRows2.size(), index);
+ }
+
+ @Test
+ public void dateTruncHoursGapfillTestAggregateSelect() {
+ DateTimeFormatSpec dateTimeFormatter = new DateTimeFormatSpec("1:MILLISECONDS:EPOCH");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + " GapFill(time_col, '1:MILLISECONDS:EPOCH', "
+ + " '1636257600000', '1636286400000', '1:HOURS', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)),"
+ + " levelId, lotId, occupied "
+ + "FROM ("
+ + " SELECT DATETRUNC('hour', eventTime, 'milliseconds') AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + ") "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ int[][] expectedOccupiedSlotsCounts1 = new int[][]{{2, 6}, {4, 4}, {6, 2}, {8, 0}, {6, 2}, {4, 4}, {2, 6}, {0, 8}};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ start = dateTimeFormatter.fromFormatToMillis("1636257600000");
+ int index = 0;
+ for (int i = 0; i < expectedOccupiedSlotsCounts1.length; i++) {
+ int ones = expectedOccupiedSlotsCounts1[i][0];
+ int zeros = expectedOccupiedSlotsCounts1[i][1];
+ int total = ones + zeros;
+ for (int k = 0; k < total; k++) {
+ String firstTimeCol = ((Long) gapFillRows1.get(index)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if (gapFillRows1.get(index)[3].equals(1)) {
+ ones--;
+ } else {
+ zeros--;
+ }
+ index++;
+ }
+ Assert.assertEquals(ones, 0);
+ Assert.assertEquals(zeros, 0);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ Assert.assertEquals(gapFillRows1.size(), index);
+
+ String gapfillQuery2 = "SELECT "
+ + " GapFill(time_col, '1:MILLISECONDS:EPOCH', "
+ + " '1636257600000', '1636286400000', '1:HOURS', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)),"
+ + " levelId, lotId, occupied "
+ + "FROM ("
+ + " SELECT DATETRUNC('hour', eventTime, 'milliseconds') AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + ") "
+ + " WHERE occupied = 1 "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ int[] expectedOccupiedSlotsCounts2 = new int[]{2, 4, 6, 8, 6, 4, 2};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ start = dateTimeFormatter.fromFormatToMillis("1636257600000");
+ index = 0;
+ for (int i = 0; i < expectedOccupiedSlotsCounts2.length; i++) {
+ for (int k = 0; k < expectedOccupiedSlotsCounts2[i]; k++) {
+ String firstTimeCol = ((Long) gapFillRows2.get(index)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(gapFillRows2.get(index)[3], 1);
+ index++;
+ }
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ Assert.assertEquals(gapFillRows2.size(), index);
+ }
+
+ @Test
+ public void dateTruncHoursGapfillTestGapfillAggregate() {
+ DateTimeFormatSpec dateTimeFormatter = new DateTimeFormatSpec("1:MILLISECONDS:EPOCH");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + "time_col, SUM(isOccupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(DATETRUNC('hour', eventTime, 'milliseconds'), '1:MILLISECONDS:EPOCH', "
+ + " '1636257600000', '1636286400000', '1:HOURS', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
+ + " GROUP BY time_col "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ double[] expectedOccupiedSlotsCounts1 = new double[]{6, 8, 10, 12, 6, 4, 2, 0};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ Assert.assertEquals(gapFillRows1.size(), expectedOccupiedSlotsCounts1.length);
+ start = dateTimeFormatter.fromFormatToMillis("1636257600000");
+ for (int i = 0; i < expectedOccupiedSlotsCounts1.length; i++) {
+ String firstTimeCol = ((Long) gapFillRows1.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts1[i], gapFillRows1.get(i)[1]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+
+ String gapfillQuery2 = "SELECT "
+ + "time_col, SUM(isOccupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(DATETRUNC('hour', eventTime, 'milliseconds'), '1:MILLISECONDS:EPOCH', "
+ + " '1636257600000', '1636286400000', '1:HOURS', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
+ + " WHERE isOccupied = 1 "
+ + " GROUP BY time_col "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ double[] expectedOccupiedSlotsCounts2 = new double[]{6, 8, 10, 12, 6, 4, 2};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ Assert.assertEquals(gapFillRows2.size(), expectedOccupiedSlotsCounts2.length);
+ start = dateTimeFormatter.fromFormatToMillis("1636257600000");
+ for (int i = 0; i < expectedOccupiedSlotsCounts2.length; i++) {
+ String firstTimeCol = ((Long) gapFillRows2.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts2[i], gapFillRows2.get(i)[1]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ }
+
+ @Test
+ public void dateTruncHoursGapfillTestGapfillAggregateWithOptionalGroupBy() {
+ DateTimeFormatSpec dateTimeFormatter = new DateTimeFormatSpec("1:MILLISECONDS:EPOCH");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + "time_col, levelId, SUM(isOccupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(DATETRUNC('hour', eventTime, 'milliseconds'), '1:MILLISECONDS:EPOCH', "
+ + " '1636257600000', '1636286400000', '1:HOURS', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
+ + " GROUP BY time_col, levelId "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ double[] expectedOccupiedSlotsCountsForLevel11 = new double[]{4, 5, 6, 5, 3, 2, 1, 0};
+ double[] expectedOccupiedSlotsCountsForLevel21 = new double[]{2, 3, 4, 7, 3, 2, 1, 0};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ Assert.assertEquals(gapFillRows1.size(), expectedOccupiedSlotsCountsForLevel11.length * 2);
+ start = dateTimeFormatter.fromFormatToMillis("1636257600000");
+ for (int i = 0; i < expectedOccupiedSlotsCountsForLevel11.length * 2; i += 2) {
+ String firstTimeCol = ((Long) gapFillRows1.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if ("Level_0".equals(gapFillRows1.get(i)[1])) {
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel11[i / 2], gapFillRows1.get(i)[2]);
+ } else {
+ Assert.assertEquals("Level_1", gapFillRows1.get(i)[1]);
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel21[i / 2], gapFillRows1.get(i)[2]);
+ }
+ firstTimeCol = ((Long) gapFillRows1.get(i + 1)[0]).toString();
+ timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if ("Level_0".equals(gapFillRows1.get(i + 1)[1])) {
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel11[i / 2], gapFillRows1.get(i + 1)[2]);
+ } else {
+ Assert.assertEquals("Level_1", gapFillRows1.get(i + 1)[1]);
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel21[i / 2], gapFillRows1.get(i + 1)[2]);
+ }
+ start += dateTimeGranularity.granularityToMillis();
+ }
+
+ String gapfillQuery2 = "SELECT "
+ + "time_col, levelId, SUM(isOccupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(DATETRUNC('hour', eventTime, 'milliseconds'), '1:MILLISECONDS:EPOCH', "
+ + " '1636257600000', '1636286400000', '1:HOURS', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
+ + " WHERE isOccupied = 1 "
+ + " GROUP BY time_col, levelId "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ double[] expectedOccupiedSlotsCountsForLevel12 = new double[]{4, 5, 6, 5, 3, 2, 1};
+ double[] expectedOccupiedSlotsCountsForLevel22 = new double[]{2, 3, 4, 7, 3, 2, 1};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ Assert.assertEquals(gapFillRows2.size(), expectedOccupiedSlotsCountsForLevel12.length * 2);
+ start = dateTimeFormatter.fromFormatToMillis("1636257600000");
+ for (int i = 0; i < expectedOccupiedSlotsCountsForLevel12.length * 2; i += 2) {
+ String firstTimeCol = ((Long) gapFillRows2.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if ("Level_0".equals(gapFillRows2.get(i)[1])) {
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel12[i / 2], gapFillRows2.get(i)[2]);
+ } else {
+ Assert.assertEquals("Level_1", gapFillRows2.get(i)[1]);
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel22[i / 2], gapFillRows2.get(i)[2]);
+ }
+ firstTimeCol = ((Long) gapFillRows2.get(i + 1)[0]).toString();
+ timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if ("Level_0".equals(gapFillRows2.get(i + 1)[1])) {
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel12[i / 2], gapFillRows2.get(i + 1)[2]);
+ } else {
+ Assert.assertEquals("Level_1", gapFillRows2.get(i + 1)[1]);
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel22[i / 2], gapFillRows2.get(i + 1)[2]);
+ }
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ }
+
+ @Test
+ public void dateTruncHoursGapfillTestGapfillAggregateWithHavingClause() {
+ DateTimeFormatSpec dateTimeFormatter = new DateTimeFormatSpec("1:MILLISECONDS:EPOCH");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT time_col, levelId, SUM(isOccupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(DATETRUNC('hour', eventTime, 'milliseconds'), '1:MILLISECONDS:EPOCH', "
+ + " '1636257600000', '1636286400000', '1:HOURS', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
+ + " GROUP BY time_col, levelId "
+ + " HAVING occupied_slots_count > 0"
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ double[] expectedOccupiedSlotsCountsForLevel11 = new double[]{4, 5, 6, 5, 3, 2, 1};
+ double[] expectedOccupiedSlotsCountsForLevel21 = new double[]{2, 3, 4, 7, 3, 2, 1};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ Assert.assertEquals(gapFillRows1.size(), expectedOccupiedSlotsCountsForLevel11.length * 2);
+ start = dateTimeFormatter.fromFormatToMillis("1636257600000");
+ for (int i = 0; i < expectedOccupiedSlotsCountsForLevel11.length * 2; i += 2) {
+ String firstTimeCol = ((Long) gapFillRows1.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if ("Level_0".equals(gapFillRows1.get(i)[1])) {
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel11[i / 2], gapFillRows1.get(i)[2]);
+ } else {
+ Assert.assertEquals("Level_1", gapFillRows1.get(i)[1]);
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel21[i / 2], gapFillRows1.get(i)[2]);
+ }
+ firstTimeCol = ((Long) gapFillRows1.get(i + 1)[0]).toString();
+ timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if ("Level_0".equals(gapFillRows1.get(i + 1)[1])) {
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel11[i / 2], gapFillRows1.get(i + 1)[2]);
+ } else {
+ Assert.assertEquals("Level_1", gapFillRows1.get(i + 1)[1]);
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel21[i / 2], gapFillRows1.get(i + 1)[2]);
+ }
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ }
+
+ @Test
+ public void dateTruncHoursGapfillTestAggregateAggregate() {
+ DateTimeFormatSpec dateTimeFormatter = new DateTimeFormatSpec("1:MILLISECONDS:EPOCH");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + "SUM(occupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(time_col, '1:MILLISECONDS:EPOCH', "
+ + " '1636257600000', '1636286400000', '1:HOURS', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)), "
+ + " occupied, lotId, levelId"
+ + " FROM ("
+ + " SELECT DATETRUNC('hour', eventTime, 'milliseconds') AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + " ) "
+ + " LIMIT 200 "
+ + ") "
+ + " GROUP BY time_col "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ double[] expectedOccupiedSlotsCounts1 = new double[]{2, 4, 6, 8, 6, 4, 2, 0};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ Assert.assertEquals(gapFillRows1.size(), expectedOccupiedSlotsCounts1.length);
+ start = dateTimeFormatter.fromFormatToMillis("1636257600000");
+ for (int i = 0; i < expectedOccupiedSlotsCounts1.length; i++) {
+ String firstTimeCol = ((Long) gapFillRows1.get(i)[1]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts1[i], gapFillRows1.get(i)[0]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+
+ String gapfillQuery2 = "SELECT "
+ + "time_col, SUM(occupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(time_col, '1:MILLISECONDS:EPOCH', "
+ + " '1636257600000', '1636286400000', '1:HOURS', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " occupied, lotId, levelId"
+ + " FROM ("
+ + " SELECT DATETRUNC('hour', eventTime, 'milliseconds') AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + " ) "
+ + " LIMIT 200 "
+ + ") "
+ + " WHERE occupied = 1 "
+ + " GROUP BY time_col "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ double[] expectedOccupiedSlotsCounts2 = new double[]{2, 4, 6, 8, 6, 4, 2};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ Assert.assertEquals(gapFillRows2.size(), expectedOccupiedSlotsCounts2.length);
+ start = dateTimeFormatter.fromFormatToMillis("1636257600000");
+ for (int i = 0; i < expectedOccupiedSlotsCounts2.length; i++) {
+ String firstTimeCol = ((Long) gapFillRows2.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts2[i], gapFillRows2.get(i)[1]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ }
+
+ @Test
+ public void dateTruncHoursGapfillTestAggregateAggregateWithOptionalGroupBy() {
+ DateTimeFormatSpec dateTimeFormatter = new DateTimeFormatSpec("1:MILLISECONDS:EPOCH");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + "time_col, levelId, SUM(occupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(time_col, '1:MILLISECONDS:EPOCH', "
+ + " '1636257600000', '1636286400000', '1:HOURS', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)), "
+ + " occupied, lotId, levelId"
+ + " FROM ("
+ + " SELECT DATETRUNC('hour', eventTime, 'milliseconds') AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + " ) "
+ + " LIMIT 200 "
+ + ") "
+ + " GROUP BY time_col, levelId "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ double[] expectedOccupiedSlotsCounts1 = new double[]{1, 2, 3, 4, 3, 2, 1, 0};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ Assert.assertEquals(gapFillRows1.size(), expectedOccupiedSlotsCounts1.length * 2);
+ start = dateTimeFormatter.fromFormatToMillis("1636257600000");
+ for (int i = 0; i < expectedOccupiedSlotsCounts1.length * 2; i += 2) {
+ String firstTimeCol = ((Long) gapFillRows1.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts1[i / 2], gapFillRows1.get(i)[2]);
+ firstTimeCol = ((Long) gapFillRows1.get(i + 1)[0]).toString();
+ timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts1[i / 2], gapFillRows1.get(i + 1)[2]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+
+ String gapfillQuery2 = "SELECT "
+ + "time_col, levelId, SUM(occupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(time_col, '1:MILLISECONDS:EPOCH', "
+ + " '1636257600000', '1636286400000', '1:HOURS', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)), "
+ + " occupied, lotId, levelId"
+ + " FROM ("
+ + " SELECT DATETRUNC('hour', eventTime, 'milliseconds') AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + " ) "
+ + " LIMIT 200 "
+ + ") "
+ + " WHERE occupied = 1 "
+ + " GROUP BY time_col, levelId "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ double[] expectedOccupiedSlotsCounts2 = new double[]{1, 2, 3, 4, 3, 2, 1};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ Assert.assertEquals(gapFillRows2.size(), expectedOccupiedSlotsCounts2.length * 2);
+ start = dateTimeFormatter.fromFormatToMillis("1636257600000");
+ for (int i = 0; i < expectedOccupiedSlotsCounts2.length * 2; i += 2) {
+ String firstTimeCol = ((Long) gapFillRows2.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts2[i / 2], gapFillRows2.get(i)[2]);
+ firstTimeCol = ((Long) gapFillRows2.get(i + 1)[0]).toString();
+ timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts2[i / 2], gapFillRows2.get(i + 1)[2]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ }
+
+ @Test
+ public void dateTruncHoursGapfillTestAggregateAggregateWithHavingClause() {
+ String gapfillQuery1 = "SELECT "
+ + "time_col, levelId, SUM(occupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(time_col, '1:MILLISECONDS:EPOCH', "
+ + " '1636257600000', '1636286400000', '1:HOURS', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)),"
+ + " occupied, lotId, levelId"
+ + " FROM ("
+ + " SELECT DATETRUNC('hour', eventTime, 'milliseconds') AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + " ) "
+ + " LIMIT 200 "
+ + ") "
+ + " GROUP BY time_col, levelId "
+ + " HAVING occupied_slots_count > 0"
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ double[] expectedOccupiedSlotsCounts1 = new double[]{1, 2, 3, 4, 3, 2, 1};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ Assert.assertEquals(gapFillRows1.size(), expectedOccupiedSlotsCounts1.length * 2);
+ DateTimeFormatSpec dateTimeFormatter = new DateTimeFormatSpec("1:MILLISECONDS:EPOCH");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+
+ long start = dateTimeFormatter.fromFormatToMillis("1636257600000");
+ for (int i = 0; i < expectedOccupiedSlotsCounts1.length * 2; i += 2) {
+ String firstTimeCol = ((Long) gapFillRows1.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts1[i / 2], gapFillRows1.get(i)[2]);
+ firstTimeCol = ((Long) gapFillRows1.get(i + 1)[0]).toString();
+ timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts1[i / 2], gapFillRows1.get(i + 1)[2]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ }
+
+ @Test
+ public void datetimeconvertGapfillTestWithMissingTimeSeries() {
+ String gapfillQuery = "SELECT "
+ + "time_col, levelId, lotId, isOccupied, time_col "
+ + "FROM ("
+ + " SELECT GapFill(DATETIMECONVERT(eventTime, '1:MILLISECONDS:EPOCH', "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', '1:HOURS'), "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', "
+ + " '2021-11-07 4:00:00.000', '2021-11-07 12:00:00.000', '1:HOURS', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE')) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
+ + " LIMIT 200 ";
+
+ try {
+ getBrokerResponseForSqlQuery(gapfillQuery);
+ Assert.fail();
+ } catch (Exception ex) {
+ Assert.assertTrue(ex instanceof IllegalArgumentException);
+ }
+ }
+
+ @Test
+ public void datetimeconvertGapfillTestWithMissingGroupByTimeBucket() {
+ String gapfillQuery = "SELECT "
+ + "levelId, SUM(isOccupied) "
+ + "FROM ("
+ + " SELECT GapFill(DATETIMECONVERT(eventTime, '1:MILLISECONDS:EPOCH', "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', '1:HOURS'), "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', "
+ + " '2021-11-07 4:00:00.000', '2021-11-07 12:00:00.000', '1:HOURS', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
+ + " GROUP BY levelId"
+ + " LIMIT 200 ";
+
+ try {
+ getBrokerResponseForSqlQuery(gapfillQuery);
+ Assert.fail();
+ } catch (Exception ex) {
+ Assert.assertTrue(ex instanceof IllegalArgumentException);
+ }
+ }
+
+ @Test
+ public void datetimeconvertGapfillTestAggregateAggregateWithLimitTesting() {
+ DateTimeFormatSpec dateTimeFormatter =
+ new DateTimeFormatSpec("1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + "time_col, SUM(occupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(time_col, "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', "
+ + " '2021-11-07 4:00:00.000', '2021-11-07 12:00:00.000', '1:HOURS', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " occupied, lotId, levelId"
+ + " FROM ("
+ + " SELECT DATETIMECONVERT(eventTime, '1:MILLISECONDS:EPOCH', "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', '1:HOURS') AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + " ) "
+ + " LIMIT 40 "
+ + ") "
+ + " GROUP BY time_col "
+ + " LIMIT 40 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ double[] expectedOccupiedSlotsCounts1 = new double[]{2, 4, 6, 8, 6};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ Assert.assertEquals(gapFillRows1.size(), expectedOccupiedSlotsCounts1.length);
+ start = dateTimeFormatter.fromFormatToMillis("2021-11-07 04:00:00.000");
+ for (int i = 0; i < expectedOccupiedSlotsCounts1.length; i++) {
+ String firstTimeCol = (String) gapFillRows1.get(i)[0];
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts1[i], gapFillRows1.get(i)[1]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+
+ String gapfillQuery2 = "SELECT "
+ + "time_col, SUM(occupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(time_col, "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', "
+ + " '2021-11-07 4:00:00.000', '2021-11-07 12:00:00.000', '1:HOURS', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " occupied, lotId, levelId"
+ + " FROM ("
+ + " SELECT DATETIMECONVERT(eventTime, '1:MILLISECONDS:EPOCH', "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', '1:HOURS') AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + " ) "
+ + " LIMIT 56 "
+ + ") "
+ + " GROUP BY time_col "
+ + " LIMIT 6 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ double[] expectedOccupiedSlotsCounts2 = new double[]{2, 4, 6, 8, 6, 4};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ Assert.assertEquals(gapFillRows2.size(), expectedOccupiedSlotsCounts2.length);
+ start = dateTimeFormatter.fromFormatToMillis("2021-11-07 04:00:00.000");
+ for (int i = 0; i < expectedOccupiedSlotsCounts2.length; i++) {
+ String firstTimeCol = (String) gapFillRows2.get(i)[0];
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts2[i], gapFillRows2.get(i)[1]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ }
+
+ @Test
+ public void datetimeconvertGapfillTestSelectSelectOrderBy() {
+ DateTimeFormatSpec dateTimeFormatter =
+ new DateTimeFormatSpec("1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + " GapFill(DATETIMECONVERT(eventTime, '1:MILLISECONDS:EPOCH', "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', '1:HOURS'), "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', "
+ + " '2021-11-07 4:00:00.000', '2021-11-07 12:00:00.000', '1:HOURS', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " levelId, lotId, isOccupied "
+ + "FROM parkingData "
+ + "WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + "ORDER BY time_col, levelId "
+ + "LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ int[][] expectedOccupiedSlotsCounts1 =
+ new int[][]{{6, 6}, {8, 4}, {10, 2}, {12, 0}, {6, 4}, {4, 6}, {2, 10}, {0, 10}};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ start = dateTimeFormatter.fromFormatToMillis("2021-11-07 04:00:00.000");
+ int index = 0;
+ for (int i = 0; i < expectedOccupiedSlotsCounts1.length; i++) {
+ int ones = expectedOccupiedSlotsCounts1[i][0];
+ int zeros = expectedOccupiedSlotsCounts1[i][1];
+ int total = ones + zeros;
+ for (int k = 0; k < total; k++) {
+ String firstTimeCol = (String) gapFillRows1.get(index)[0];
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if (gapFillRows1.get(index)[3].equals(1)) {
+ ones--;
+ } else {
+ zeros--;
+ }
+ index++;
+ }
+ Assert.assertEquals(ones, 0);
+ Assert.assertEquals(zeros, 0);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ Assert.assertEquals(gapFillRows1.size(), index);
+
+ String gapfillQuery2 = "SELECT "
+ + "time_col, levelId, lotId, isOccupied, time_col "
+ + "FROM ("
+ + " SELECT GapFill(DATETIMECONVERT(eventTime, '1:MILLISECONDS:EPOCH', "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', '1:HOURS'), "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', "
+ + " '2021-11-07 4:00:00.000', '2021-11-07 12:00:00.000', '1:HOURS', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " ORDER BY time_col, levelId DESC"
+ + " LIMIT 200 "
+ + ") "
+ + " WHERE isOccupied = 1 "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ int[] expectedOccupiedSlotsCounts2 = new int[]{6, 8, 10, 12, 6, 4, 2, 0};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ start = dateTimeFormatter.fromFormatToMillis("2021-11-07 04:00:00.000");
+ index = 0;
+ for (int i = 0; i < expectedOccupiedSlotsCounts2.length; i++) {
+ for (int k = 0; k < expectedOccupiedSlotsCounts2[i]; k++) {
+ String firstTimeCol = (String) gapFillRows2.get(index)[0];
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(gapFillRows2.get(index)[3], 1);
+ index++;
+ }
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ Assert.assertEquals(gapFillRows2.size(), index);
+ }
+
+ @Test
+ public void datetimeconvertGapfillTestAggregateSelectOrderBy() {
+ DateTimeFormatSpec dateTimeFormatter =
+ new DateTimeFormatSpec("1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + "GapFill(time_col, "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', "
+ + " '2021-11-07 4:00:00.000', '2021-11-07 12:00:00.000', '1:HOURS', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)), levelId, lotId, occupied "
+ + "FROM ("
+ + " SELECT DATETIMECONVERT(eventTime, '1:MILLISECONDS:EPOCH', "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', '1:HOURS') AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " ORDER BY time_col, levelId "
+ + " LIMIT 200 "
+ + ") "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ int[][] expectedOccupiedSlotsCounts1 = new int[][]{{2, 6}, {4, 4}, {6, 2}, {8, 0}, {6, 2}, {4, 4}, {2, 6}, {0, 8}};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ start = dateTimeFormatter.fromFormatToMillis("2021-11-07 04:00:00.000");
+ int index = 0;
+ for (int i = 0; i < expectedOccupiedSlotsCounts1.length; i++) {
+ int ones = expectedOccupiedSlotsCounts1[i][0];
+ int zeros = expectedOccupiedSlotsCounts1[i][1];
+ int total = ones + zeros;
+ for (int k = 0; k < total; k++) {
+ String firstTimeCol = (String) gapFillRows1.get(index)[0];
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if (gapFillRows1.get(index)[3].equals(1)) {
+ ones--;
+ } else {
+ zeros--;
+ }
+ index++;
+ }
+ Assert.assertEquals(ones, 0);
+ Assert.assertEquals(zeros, 0);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ Assert.assertEquals(gapFillRows1.size(), index);
+
+ String gapfillQuery2 =
+ "SELECT "
+ + "GapFill(time_col, "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', "
+ + " '2021-11-07 4:00:00.000', '2021-11-07 12:00:00.000', '1:HOURS', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)), levelId, lotId, occupied "
+ + "FROM ("
+ + " SELECT DATETIMECONVERT(eventTime, '1:MILLISECONDS:EPOCH', "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', '1:HOURS') AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " ORDER BY time_col, levelId "
+ + " LIMIT 200 "
+ + ") "
+ + " WHERE occupied = 1 "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ int[] expectedOccupiedSlotsCounts2 = new int[]{2, 4, 6, 8, 6, 4, 2};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ start = dateTimeFormatter.fromFormatToMillis("2021-11-07 04:00:00.000");
+ index = 0;
+ for (int i = 0; i < expectedOccupiedSlotsCounts2.length; i++) {
+ for (int k = 0; k < expectedOccupiedSlotsCounts2[i]; k++) {
+ String firstTimeCol = (String) gapFillRows2.get(index)[0];
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(gapFillRows2.get(index)[3], 1);
+ index++;
+ }
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ Assert.assertEquals(gapFillRows2.size(), index);
+ }
+
+ @Test
+ public void datetimeconvertGapfillTestGapfillAggregateOrderBy() {
+ DateTimeFormatSpec dateTimeFormatter =
+ new DateTimeFormatSpec("1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String dataTimeConvertQuery = "SELECT "
+ + "DATETIMECONVERT(eventTime, '1:MILLISECONDS:EPOCH', "
+ + "'1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', '1:HOURS') AS time_col, "
+ + "SUM(isOccupied) "
+ + "FROM parkingData "
+ + "WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + "GROUP BY 1 "
+ + "ORDER BY 1 "
+ + "LIMIT 200";
+
+ BrokerResponseNative dateTimeConvertBrokerResponse = getBrokerResponseForSqlQuery(dataTimeConvertQuery);
+
+ ResultTable dateTimeConvertResultTable = dateTimeConvertBrokerResponse.getResultTable();
+ Assert.assertEquals(dateTimeConvertResultTable.getRows().size(), 8);
+
+ String gapfillQuery1 = "SELECT "
+ + "time_col, SUM(isOccupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(DATETIMECONVERT(eventTime, '1:MILLISECONDS:EPOCH', "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', '1:HOURS'), "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', "
+ + " '2021-11-07 4:00:00.000', '2021-11-07 12:00:00.000', '1:HOURS', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " ORDER BY time_col, levelId "
+ + " LIMIT 200 "
+ + ") "
+ + " GROUP BY time_col "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ double[] expectedOccupiedSlotsCounts1 = new double[]{6, 8, 10, 12, 6, 4, 2, 0};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ Assert.assertEquals(gapFillRows1.size(), expectedOccupiedSlotsCounts1.length);
+ start = dateTimeFormatter.fromFormatToMillis("2021-11-07 04:00:00.000");
+ for (int i = 0; i < expectedOccupiedSlotsCounts1.length; i++) {
+ String firstTimeCol = (String) gapFillRows1.get(i)[0];
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts1[i], gapFillRows1.get(i)[1]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+
+ String gapfillQuery2 = "SELECT "
+ + "time_col, SUM(isOccupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(DATETIMECONVERT(eventTime, '1:MILLISECONDS:EPOCH', "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', '1:HOURS'), "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', "
+ + " '2021-11-07 4:00:00.000', '2021-11-07 12:00:00.000', '1:HOURS', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " ORDER BY time_col, levelId "
+ + " LIMIT 200 "
+ + ") "
+ + " WHERE isOccupied = 1 "
+ + " GROUP BY time_col "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ double[] expectedOccupiedSlotsCounts2 = new double[]{6, 8, 10, 12, 6, 4, 2};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ Assert.assertEquals(gapFillRows2.size(), expectedOccupiedSlotsCounts2.length);
+ start = dateTimeFormatter.fromFormatToMillis("2021-11-07 04:00:00.000");
+ for (int i = 0; i < expectedOccupiedSlotsCounts2.length; i++) {
+ String firstTimeCol = (String) gapFillRows2.get(i)[0];
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts2[i], gapFillRows2.get(i)[1]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ }
+
+ @Test
+ public void datetimeconvertGapfillTestAggregateAggregateOrderBy() {
+ DateTimeFormatSpec dateTimeFormatter =
+ new DateTimeFormatSpec("1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + "time_col, SUM(occupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(time_col, "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', "
+ + " '2021-11-07 4:00:00.000', '2021-11-07 12:00:00.000', '1:HOURS', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " occupied, lotId, levelId"
+ + " FROM ("
+ + " SELECT DATETIMECONVERT(eventTime, '1:MILLISECONDS:EPOCH', "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', '1:HOURS') AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " ORDER BY time_col, levelId "
+ + " LIMIT 200 "
+ + " ) "
+ + " LIMIT 200 "
+ + ") "
+ + " GROUP BY time_col "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ double[] expectedOccupiedSlotsCounts1 = new double[]{2, 4, 6, 8, 6, 4, 2, 0};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ Assert.assertEquals(gapFillRows1.size(), expectedOccupiedSlotsCounts1.length);
+ start = dateTimeFormatter.fromFormatToMillis("2021-11-07 04:00:00.000");
+ for (int i = 0; i < expectedOccupiedSlotsCounts1.length; i++) {
+ String firstTimeCol = (String) gapFillRows1.get(i)[0];
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts1[i], gapFillRows1.get(i)[1]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+
+ String gapfillQuery2 = "SELECT "
+ + "time_col, SUM(occupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(time_col, "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', "
+ + " '2021-11-07 4:00:00.000', '2021-11-07 12:00:00.000', '1:HOURS', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " occupied, lotId, levelId"
+ + " FROM ("
+ + " SELECT DATETIMECONVERT(eventTime, '1:MILLISECONDS:EPOCH', "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', '1:HOURS') AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " ORDER BY time_col, levelId "
+ + " LIMIT 200 "
+ + " ) "
+ + " LIMIT 200 "
+ + ") "
+ + " WHERE occupied = 1 "
+ + " GROUP BY time_col "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ double[] expectedOccupiedSlotsCounts2 = new double[]{2, 4, 6, 8, 6, 4, 2};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ Assert.assertEquals(gapFillRows2.size(), expectedOccupiedSlotsCounts2.length);
+ start = dateTimeFormatter.fromFormatToMillis("2021-11-07 04:00:00.000");
+ for (int i = 0; i < expectedOccupiedSlotsCounts2.length; i++) {
+ String firstTimeCol = (String) gapFillRows2.get(i)[0];
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts2[i], gapFillRows2.get(i)[1]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ }
+
+ @Test
+ public void datetimeconvertGapfillTestAggregateAggregateWithTimeBucketAggregation() {
+ DateTimeFormatSpec dateTimeFormatter =
+ new DateTimeFormatSpec("1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + "time_col, COUNT(occupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(time_col, "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', "
+ + " '2021-11-07 4:00:00.000', '2021-11-07 12:00:00.000', '5:MINUTES', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " occupied, lotId, levelId"
+ + " FROM ("
+ + " SELECT DATETIMECONVERT(eventTime, '1:MILLISECONDS:EPOCH', "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', '5:MINUTES') AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 2000 "
+ + " ) "
+ + " LIMIT 2000 "
+ + ") "
+ + " WHERE occupied = 1 "
+ + " GROUP BY time_col "
+ + " LIMIT 2000 ";
+
+ System.out.println(gapfillQuery1);
Review comment:
Remove the console print
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: commits-unsubscribe@pinot.apache.org
For queries about this service, please contact Infrastructure at:
users@infra.apache.org
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r831471890
##########
File path: pinot-core/src/test/java/org/apache/pinot/queries/GapfillQueriesTest.java
##########
@@ -0,0 +1,3701 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.queries;
+
+import java.io.File;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+import org.apache.commons.io.FileUtils;
+import org.apache.pinot.common.response.broker.BrokerResponseNative;
+import org.apache.pinot.common.response.broker.ResultTable;
+import org.apache.pinot.segment.local.indexsegment.immutable.ImmutableSegmentLoader;
+import org.apache.pinot.segment.local.segment.creator.impl.SegmentIndexCreationDriverImpl;
+import org.apache.pinot.segment.local.segment.readers.GenericRowRecordReader;
+import org.apache.pinot.segment.spi.ImmutableSegment;
+import org.apache.pinot.segment.spi.IndexSegment;
+import org.apache.pinot.segment.spi.creator.SegmentGeneratorConfig;
+import org.apache.pinot.spi.config.table.TableConfig;
+import org.apache.pinot.spi.config.table.TableType;
+import org.apache.pinot.spi.data.DateTimeFormatSpec;
+import org.apache.pinot.spi.data.DateTimeGranularitySpec;
+import org.apache.pinot.spi.data.FieldSpec.DataType;
+import org.apache.pinot.spi.data.Schema;
+import org.apache.pinot.spi.data.readers.GenericRow;
+import org.apache.pinot.spi.utils.ReadMode;
+import org.apache.pinot.spi.utils.builder.TableConfigBuilder;
+import org.testng.Assert;
+import org.testng.annotations.AfterClass;
+import org.testng.annotations.BeforeClass;
+import org.testng.annotations.Test;
+
+
+/**
+ * Queries test for Gapfill queries.
+ */
+// TODO: Item 1. table alias for subquery in next PR
+// TODO: Item 2. Deprecate PostAggregateGapfill implementation in next PR
+@SuppressWarnings("rawtypes")
+public class GapfillQueriesTest extends BaseQueriesTest {
+ private static final File INDEX_DIR = new File(FileUtils.getTempDirectory(), "PostAggregationGapfillQueriesTest");
+ private static final String RAW_TABLE_NAME = "parkingData";
+ private static final String SEGMENT_NAME = "testSegment";
+
+ private static final int NUM_LOTS = 4;
+
+ private static final String IS_OCCUPIED_COLUMN = "isOccupied";
+ private static final String LEVEL_ID_COLUMN = "levelId";
+ private static final String LOT_ID_COLUMN = "lotId";
+ private static final String EVENT_TIME_COLUMN = "eventTime";
+ private static final Schema SCHEMA =
+ new Schema.SchemaBuilder().addSingleValueDimension(IS_OCCUPIED_COLUMN, DataType.INT)
+ .addSingleValueDimension(LOT_ID_COLUMN, DataType.STRING)
+ .addSingleValueDimension(LEVEL_ID_COLUMN, DataType.STRING)
+ .addSingleValueDimension(EVENT_TIME_COLUMN, DataType.LONG)
+ .setPrimaryKeyColumns(Arrays.asList(LOT_ID_COLUMN, EVENT_TIME_COLUMN)).build();
+ private static final TableConfig TABLE_CONFIG =
+ new TableConfigBuilder(TableType.OFFLINE).setTableName(RAW_TABLE_NAME).build();
+
+ private IndexSegment _indexSegment;
+ private List<IndexSegment> _indexSegments;
+
+ @Override
+ protected String getFilter() {
+ // NOTE: Use a match all filter to switch between DictionaryBasedAggregationOperator and AggregationOperator
+ return " WHERE eventTime >= 0";
+ }
+
+ @Override
+ protected IndexSegment getIndexSegment() {
+ return _indexSegment;
+ }
+
+ @Override
+ protected List<IndexSegment> getIndexSegments() {
+ return _indexSegments;
+ }
+
+ GenericRow createRow(String time, int levelId, int lotId, boolean isOccupied) {
+ DateTimeFormatSpec dateTimeFormatter =
+ new DateTimeFormatSpec("1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS");
+ GenericRow parkingRow = new GenericRow();
+ parkingRow.putValue(EVENT_TIME_COLUMN, dateTimeFormatter.fromFormatToMillis(time));
+ parkingRow.putValue(LEVEL_ID_COLUMN, "Level_" + levelId);
+ parkingRow.putValue(LOT_ID_COLUMN, "LotId_" + lotId);
+ parkingRow.putValue(IS_OCCUPIED_COLUMN, isOccupied);
+ return parkingRow;
+ }
+
+ @BeforeClass
+ public void setUp()
+ throws Exception {
+ FileUtils.deleteDirectory(INDEX_DIR);
+
+ List<GenericRow> records = new ArrayList<>(NUM_LOTS * 2);
+ records.add(createRow("2021-11-07 04:11:00.000", 0, 0, true));
+ records.add(createRow("2021-11-07 04:21:00.000", 0, 0, true));
+ records.add(createRow("2021-11-07 04:31:00.000", 1, 0, true));
+ records.add(createRow("2021-11-07 05:17:00.000", 0, 1, true));
+ records.add(createRow("2021-11-07 05:37:00.000", 0, 1, true));
+ records.add(createRow("2021-11-07 05:47:00.000", 1, 2, true));
+ records.add(createRow("2021-11-07 06:25:00.000", 0, 2, true));
+ records.add(createRow("2021-11-07 06:35:00.000", 0, 2, true));
+ records.add(createRow("2021-11-07 06:36:00.000", 1, 1, true));
+ records.add(createRow("2021-11-07 07:44:00.000", 0, 3, true));
+ records.add(createRow("2021-11-07 07:46:00.000", 1, 3, true));
+ records.add(createRow("2021-11-07 07:54:00.000", 1, 3, true));
+ records.add(createRow("2021-11-07 08:44:00.000", 0, 2, false));
+ records.add(createRow("2021-11-07 08:44:00.000", 1, 2, false));
+ records.add(createRow("2021-11-07 09:31:00.000", 0, 3, false));
+ records.add(createRow("2021-11-07 09:31:00.000", 1, 3, false));
+ records.add(createRow("2021-11-07 10:17:00.000", 0, 0, false));
+ records.add(createRow("2021-11-07 10:33:00.000", 0, 0, false));
+ records.add(createRow("2021-11-07 10:33:00.000", 1, 0, false));
+ records.add(createRow("2021-11-07 11:54:00.000", 0, 1, false));
+ records.add(createRow("2021-11-07 11:57:00.000", 1, 1, false));
+
+ SegmentGeneratorConfig segmentGeneratorConfig = new SegmentGeneratorConfig(TABLE_CONFIG, SCHEMA);
+ segmentGeneratorConfig.setTableName(RAW_TABLE_NAME);
+ segmentGeneratorConfig.setSegmentName(SEGMENT_NAME);
+ segmentGeneratorConfig.setOutDir(INDEX_DIR.getPath());
+
+ SegmentIndexCreationDriverImpl driver = new SegmentIndexCreationDriverImpl();
+ driver.init(segmentGeneratorConfig, new GenericRowRecordReader(records));
+ driver.build();
+
+ ImmutableSegment immutableSegment = ImmutableSegmentLoader.load(new File(INDEX_DIR, SEGMENT_NAME), ReadMode.mmap);
+ _indexSegment = immutableSegment;
+ _indexSegments = Arrays.asList(immutableSegment);
+ }
+
+ @Test
+ public void datetimeconvertGapfillTestSelectSelect() {
+ DateTimeFormatSpec dateTimeFormatter =
+ new DateTimeFormatSpec("1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT GapFill(DATETIMECONVERT(eventTime, '1:MILLISECONDS:EPOCH', "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', '1:HOURS'), "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', "
+ + " '2021-11-07 4:00:00.000', '2021-11-07 12:00:00.000', '1:HOURS', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " levelId, lotId, isOccupied "
+ + "FROM parkingData "
+ + "WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + "LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ int[][] expectedOccupiedSlotsCounts1 =
+ new int[][]{{6, 6}, {8, 4}, {10, 2}, {12, 0}, {6, 4}, {4, 6}, {2, 10}, {0, 10}};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ start = dateTimeFormatter.fromFormatToMillis("2021-11-07 04:00:00.000");
+ int index = 0;
+ for (int i = 0; i < expectedOccupiedSlotsCounts1.length; i++) {
+ int ones = expectedOccupiedSlotsCounts1[i][0];
+ int zeros = expectedOccupiedSlotsCounts1[i][1];
+ int total = ones + zeros;
+ for (int k = 0; k < total; k++) {
+ String firstTimeCol = (String) gapFillRows1.get(index)[0];
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if (gapFillRows1.get(index)[3].equals(1)) {
+ ones--;
+ } else {
+ zeros--;
+ }
+ index++;
+ }
+ Assert.assertEquals(ones, 0);
+ Assert.assertEquals(zeros, 0);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ Assert.assertEquals(gapFillRows1.size(), index);
+
+ String gapfillQuery2 = "SELECT "
+ + "time_col, levelId, lotId, isOccupied, time_col "
+ + "FROM ("
+ + " SELECT GapFill(DATETIMECONVERT(eventTime, '1:MILLISECONDS:EPOCH', "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', '1:HOURS'), "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', "
+ + " '2021-11-07 4:00:00.000', '2021-11-07 12:00:00.000', '1:HOURS', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
+ + " WHERE isOccupied = 1 "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ int[] expectedOccupiedSlotsCounts2 = new int[]{6, 8, 10, 12, 6, 4, 2, 0};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ start = dateTimeFormatter.fromFormatToMillis("2021-11-07 04:00:00.000");
+ index = 0;
+ for (int i = 0; i < expectedOccupiedSlotsCounts2.length; i++) {
+ for (int k = 0; k < expectedOccupiedSlotsCounts2[i]; k++) {
+ String firstTimeCol = (String) gapFillRows2.get(index)[0];
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(gapFillRows2.get(index)[3], 1);
+ index++;
+ }
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ Assert.assertEquals(gapFillRows2.size(), index);
+ }
+
+ @Test
+ public void datetimeconvertGapfillTestAggregateSelect() {
+ DateTimeFormatSpec dateTimeFormatter =
+ new DateTimeFormatSpec("1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + "GapFill(time_col, "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', "
+ + " '2021-11-07 4:00:00.000', '2021-11-07 12:00:00.000', '1:HOURS', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)), levelId, lotId, occupied "
+ + "FROM ("
+ + " SELECT DATETIMECONVERT(eventTime, '1:MILLISECONDS:EPOCH', "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', '1:HOURS') AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + ") "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ int[][] expectedOccupiedSlotsCounts1 = new int[][]{{2, 6}, {4, 4}, {6, 2}, {8, 0}, {6, 2}, {4, 4}, {2, 6}, {0, 8}};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ start = dateTimeFormatter.fromFormatToMillis("2021-11-07 04:00:00.000");
+ int index = 0;
+ for (int i = 0; i < expectedOccupiedSlotsCounts1.length; i++) {
+ int ones = expectedOccupiedSlotsCounts1[i][0];
+ int zeros = expectedOccupiedSlotsCounts1[i][1];
+ int total = ones + zeros;
+ for (int k = 0; k < total; k++) {
+ String firstTimeCol = (String) gapFillRows1.get(index)[0];
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if (gapFillRows1.get(index)[3].equals(1)) {
+ ones--;
+ } else {
+ zeros--;
+ }
+ index++;
+ }
+ Assert.assertEquals(ones, 0);
+ Assert.assertEquals(zeros, 0);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ Assert.assertEquals(gapFillRows1.size(), index);
+
+ String gapfillQuery2 = "SELECT "
+ + "GapFill(time_col, "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', "
+ + " '2021-11-07 4:00:00.000', '2021-11-07 12:00:00.000', '1:HOURS', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)), levelId, lotId, occupied "
+ + "FROM ("
+ + " SELECT DATETIMECONVERT(eventTime, '1:MILLISECONDS:EPOCH', "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', '1:HOURS') AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + ") "
+ + " WHERE occupied = 1 "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ int[] expectedOccupiedSlotsCounts2 = new int[]{2, 4, 6, 8, 6, 4, 2};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ start = dateTimeFormatter.fromFormatToMillis("2021-11-07 04:00:00.000");
+ index = 0;
+ for (int i = 0; i < expectedOccupiedSlotsCounts2.length; i++) {
+ for (int k = 0; k < expectedOccupiedSlotsCounts2[i]; k++) {
+ String firstTimeCol = (String) gapFillRows2.get(index)[0];
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(gapFillRows2.get(index)[3], 1);
+ index++;
+ }
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ Assert.assertEquals(gapFillRows2.size(), index);
+ }
+
+ @Test
+ public void datetimeconvertGapfillTestGapfillAggregate() {
+ DateTimeFormatSpec dateTimeFormatter =
+ new DateTimeFormatSpec("1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String dataTimeConvertQuery = "SELECT "
+ + "DATETIMECONVERT(eventTime, '1:MILLISECONDS:EPOCH', "
+ + "'1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', '1:HOURS') AS time_col, "
+ + "SUM(isOccupied) "
+ + "FROM parkingData "
+ + "WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + "GROUP BY 1 "
+ + "ORDER BY 1 "
+ + "LIMIT 200";
+
+ BrokerResponseNative dateTimeConvertBrokerResponse = getBrokerResponseForSqlQuery(dataTimeConvertQuery);
+
+ ResultTable dateTimeConvertResultTable = dateTimeConvertBrokerResponse.getResultTable();
+ Assert.assertEquals(dateTimeConvertResultTable.getRows().size(), 8);
+
+ String gapfillQuery1 = "SELECT "
+ + "time_col, SUM(isOccupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(DATETIMECONVERT(eventTime, '1:MILLISECONDS:EPOCH', "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', '1:HOURS'), "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', "
+ + " '2021-11-07 4:00:00.000', '2021-11-07 12:00:00.000', '1:HOURS', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
+ + " GROUP BY time_col "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ double[] expectedOccupiedSlotsCounts1 = new double[]{6, 8, 10, 12, 6, 4, 2, 0};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ Assert.assertEquals(gapFillRows1.size(), expectedOccupiedSlotsCounts1.length);
+ start = dateTimeFormatter.fromFormatToMillis("2021-11-07 04:00:00.000");
+ for (int i = 0; i < expectedOccupiedSlotsCounts1.length; i++) {
+ String firstTimeCol = (String) gapFillRows1.get(i)[0];
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts1[i], gapFillRows1.get(i)[1]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+
+ String gapfillQuery2 = "SELECT "
+ + "time_col, SUM(isOccupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(DATETIMECONVERT(eventTime, '1:MILLISECONDS:EPOCH', "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', '1:HOURS'), "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', "
+ + " '2021-11-07 4:00:00.000', '2021-11-07 12:00:00.000', '1:HOURS', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
+ + " WHERE isOccupied = 1 "
+ + " GROUP BY time_col "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ double[] expectedOccupiedSlotsCounts2 = new double[]{6, 8, 10, 12, 6, 4, 2};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ Assert.assertEquals(gapFillRows2.size(), expectedOccupiedSlotsCounts2.length);
+ start = dateTimeFormatter.fromFormatToMillis("2021-11-07 04:00:00.000");
+ for (int i = 0; i < expectedOccupiedSlotsCounts2.length; i++) {
+ String firstTimeCol = (String) gapFillRows2.get(i)[0];
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts2[i], gapFillRows2.get(i)[1]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ }
+
+ @Test
+ public void datetimeconvertGapfillTestGapfillAggregateWithOptionalGroupBy() {
+ DateTimeFormatSpec dateTimeFormatter =
+ new DateTimeFormatSpec("1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + "time_col, levelId, SUM(isOccupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(DATETIMECONVERT(eventTime, '1:MILLISECONDS:EPOCH', "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', '1:HOURS'), "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', "
+ + " '2021-11-07 4:00:00.000', '2021-11-07 12:00:00.000', '1:HOURS', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
+ + " GROUP BY time_col, levelId "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ double[] expectedOccupiedSlotsCountsForLevel11 = new double[]{4, 5, 6, 5, 3, 2, 1, 0};
+ double[] expectedOccupiedSlotsCountsForLevel21 = new double[]{2, 3, 4, 7, 3, 2, 1, 0};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ Assert.assertEquals(gapFillRows1.size(), expectedOccupiedSlotsCountsForLevel11.length * 2);
+ start = dateTimeFormatter.fromFormatToMillis("2021-11-07 04:00:00.000");
+ for (int i = 0; i < expectedOccupiedSlotsCountsForLevel11.length * 2; i += 2) {
+ String firstTimeCol = (String) gapFillRows1.get(i)[0];
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if ("Level_0".equals(gapFillRows1.get(i)[1])) {
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel11[i / 2], gapFillRows1.get(i)[2]);
+ } else {
+ Assert.assertEquals("Level_1", gapFillRows1.get(i)[1]);
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel21[i / 2], gapFillRows1.get(i)[2]);
+ }
+ firstTimeCol = (String) gapFillRows1.get(i + 1)[0];
+ timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if ("Level_0".equals(gapFillRows1.get(i + 1)[1])) {
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel11[i / 2], gapFillRows1.get(i + 1)[2]);
+ } else {
+ Assert.assertEquals("Level_1", gapFillRows1.get(i + 1)[1]);
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel21[i / 2], gapFillRows1.get(i + 1)[2]);
+ }
+ start += dateTimeGranularity.granularityToMillis();
+ }
+
+ String gapfillQuery2 = "SELECT "
+ + "time_col, levelId, SUM(isOccupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(DATETIMECONVERT(eventTime, '1:MILLISECONDS:EPOCH', "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', '1:HOURS'), "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', "
+ + " '2021-11-07 4:00:00.000', '2021-11-07 12:00:00.000', '1:HOURS', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
+ + " WHERE isOccupied = 1 "
+ + " GROUP BY time_col, levelId "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ double[] expectedOccupiedSlotsCountsForLevel12 = new double[]{4, 5, 6, 5, 3, 2, 1};
+ double[] expectedOccupiedSlotsCountsForLevel22 = new double[]{2, 3, 4, 7, 3, 2, 1};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ Assert.assertEquals(gapFillRows2.size(), expectedOccupiedSlotsCountsForLevel12.length * 2);
+ start = dateTimeFormatter.fromFormatToMillis("2021-11-07 04:00:00.000");
+ for (int i = 0; i < expectedOccupiedSlotsCountsForLevel12.length * 2; i += 2) {
+ String firstTimeCol = (String) gapFillRows2.get(i)[0];
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if ("Level_0".equals(gapFillRows2.get(i)[1])) {
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel12[i / 2], gapFillRows2.get(i)[2]);
+ } else {
+ Assert.assertEquals("Level_1", gapFillRows2.get(i)[1]);
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel22[i / 2], gapFillRows2.get(i)[2]);
+ }
+ firstTimeCol = (String) gapFillRows2.get(i + 1)[0];
+ timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if ("Level_0".equals(gapFillRows2.get(i + 1)[1])) {
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel12[i / 2], gapFillRows2.get(i + 1)[2]);
+ } else {
+ Assert.assertEquals("Level_1", gapFillRows2.get(i + 1)[1]);
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel22[i / 2], gapFillRows2.get(i + 1)[2]);
+ }
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ }
+
+ @Test
+ public void datetimeconvertGapfillTestGapfillAggregateWithHavingClause() {
+ DateTimeFormatSpec dateTimeFormatter =
+ new DateTimeFormatSpec("1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + "time_col, levelId, SUM(isOccupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(DATETIMECONVERT(eventTime, '1:MILLISECONDS:EPOCH', "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', '1:HOURS'), "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', "
+ + " '2021-11-07 4:00:00.000', '2021-11-07 12:00:00.000', '1:HOURS', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
+ + " GROUP BY time_col, levelId "
+ + " HAVING occupied_slots_count > 0"
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ double[] expectedOccupiedSlotsCountsForLevel11 = new double[]{4, 5, 6, 5, 3, 2, 1};
+ double[] expectedOccupiedSlotsCountsForLevel21 = new double[]{2, 3, 4, 7, 3, 2, 1};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ Assert.assertEquals(gapFillRows1.size(), expectedOccupiedSlotsCountsForLevel11.length * 2);
+ start = dateTimeFormatter.fromFormatToMillis("2021-11-07 04:00:00.000");
+ for (int i = 0; i < expectedOccupiedSlotsCountsForLevel11.length * 2; i += 2) {
+ String firstTimeCol = (String) gapFillRows1.get(i)[0];
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if ("Level_0".equals(gapFillRows1.get(i)[1])) {
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel11[i / 2], gapFillRows1.get(i)[2]);
+ } else {
+ Assert.assertEquals("Level_1", gapFillRows1.get(i)[1]);
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel21[i / 2], gapFillRows1.get(i)[2]);
+ }
+ firstTimeCol = (String) gapFillRows1.get(i + 1)[0];
+ timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if ("Level_0".equals(gapFillRows1.get(i + 1)[1])) {
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel11[i / 2], gapFillRows1.get(i + 1)[2]);
+ } else {
+ Assert.assertEquals("Level_1", gapFillRows1.get(i + 1)[1]);
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel21[i / 2], gapFillRows1.get(i + 1)[2]);
+ }
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ }
+
+ @Test
+ public void datetimeconvertGapfillTestAggregateAggregate() {
+ DateTimeFormatSpec dateTimeFormatter =
+ new DateTimeFormatSpec("1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + "time_col, SUM(occupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(time_col, "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', "
+ + " '2021-11-07 4:00:00.000', '2021-11-07 12:00:00.000', '1:HOURS', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " occupied, lotId, levelId"
+ + " FROM ("
+ + " SELECT DATETIMECONVERT(eventTime, '1:MILLISECONDS:EPOCH', "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', '1:HOURS') AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + " ) "
+ + " LIMIT 200 "
+ + ") "
+ + " GROUP BY time_col "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ double[] expectedOccupiedSlotsCounts1 = new double[]{2, 4, 6, 8, 6, 4, 2, 0};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ Assert.assertEquals(gapFillRows1.size(), expectedOccupiedSlotsCounts1.length);
+ start = dateTimeFormatter.fromFormatToMillis("2021-11-07 04:00:00.000");
+ for (int i = 0; i < expectedOccupiedSlotsCounts1.length; i++) {
+ String firstTimeCol = (String) gapFillRows1.get(i)[0];
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts1[i], gapFillRows1.get(i)[1]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+
+ String gapfillQuery2 = "SELECT "
+ + "time_col, SUM(occupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(time_col, "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', "
+ + " '2021-11-07 4:00:00.000', '2021-11-07 12:00:00.000', '1:HOURS', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " occupied, lotId, levelId"
+ + " FROM ("
+ + " SELECT DATETIMECONVERT(eventTime, '1:MILLISECONDS:EPOCH', "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', '1:HOURS') AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + " ) "
+ + " LIMIT 200 "
+ + ") "
+ + " WHERE occupied = 1 "
+ + " GROUP BY time_col "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ double[] expectedOccupiedSlotsCounts2 = new double[]{2, 4, 6, 8, 6, 4, 2};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ Assert.assertEquals(gapFillRows2.size(), expectedOccupiedSlotsCounts2.length);
+ start = dateTimeFormatter.fromFormatToMillis("2021-11-07 04:00:00.000");
+ for (int i = 0; i < expectedOccupiedSlotsCounts2.length; i++) {
+ String firstTimeCol = (String) gapFillRows2.get(i)[0];
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts2[i], gapFillRows2.get(i)[1]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ }
+
+ @Test
+ public void datetimeconvertGapfillTestAggregateAggregateWithOptionalGroupBy() {
+ DateTimeFormatSpec dateTimeFormatter =
+ new DateTimeFormatSpec("1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + "time_col, levelId, SUM(occupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(time_col, "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', "
+ + " '2021-11-07 4:00:00.000', '2021-11-07 12:00:00.000', '1:HOURS', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " occupied, lotId, levelId"
+ + " FROM ("
+ + " SELECT DATETIMECONVERT(eventTime, '1:MILLISECONDS:EPOCH', "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', '1:HOURS') AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + " ) "
+ + " LIMIT 200 "
+ + ") "
+ + " GROUP BY time_col, levelId "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ double[] expectedOccupiedSlotsCounts1 = new double[]{1, 2, 3, 4, 3, 2, 1, 0};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ Assert.assertEquals(gapFillRows1.size(), expectedOccupiedSlotsCounts1.length * 2);
+ start = dateTimeFormatter.fromFormatToMillis("2021-11-07 04:00:00.000");
+ for (int i = 0; i < expectedOccupiedSlotsCounts1.length * 2; i += 2) {
+ String firstTimeCol = (String) gapFillRows1.get(i)[0];
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts1[i / 2], gapFillRows1.get(i)[2]);
+ firstTimeCol = (String) gapFillRows1.get(i + 1)[0];
+ timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts1[i / 2], gapFillRows1.get(i + 1)[2]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+
+ String gapfillQuery2 = "SELECT "
+ + "time_col, levelId, SUM(occupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(time_col, "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', "
+ + " '2021-11-07 4:00:00.000', '2021-11-07 12:00:00.000', '1:HOURS', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " occupied, lotId, levelId"
+ + " FROM ("
+ + " SELECT DATETIMECONVERT(eventTime, '1:MILLISECONDS:EPOCH', "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', '1:HOURS') AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + " ) "
+ + " LIMIT 200 "
+ + ") "
+ + " WHERE occupied = 1 "
+ + " GROUP BY time_col, levelId "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ double[] expectedOccupiedSlotsCounts2 = new double[]{1, 2, 3, 4, 3, 2, 1};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ Assert.assertEquals(gapFillRows2.size(), expectedOccupiedSlotsCounts2.length * 2);
+ start = dateTimeFormatter.fromFormatToMillis("2021-11-07 04:00:00.000");
+ for (int i = 0; i < expectedOccupiedSlotsCounts2.length * 2; i += 2) {
+ String firstTimeCol = (String) gapFillRows2.get(i)[0];
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts2[i / 2], gapFillRows2.get(i)[2]);
+ firstTimeCol = (String) gapFillRows2.get(i + 1)[0];
+ timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts2[i / 2], gapFillRows2.get(i + 1)[2]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ }
+
+ @Test
+ public void datetimeconvertGapfillTestAggregateAggregateWithHavingClause() {
+ String gapfillQuery1 = "SELECT "
+ + "time_col, levelId, SUM(occupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(time_col, "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', "
+ + " '2021-11-07 4:00:00.000', '2021-11-07 12:00:00.000', '1:HOURS', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)),"
+ + " occupied, lotId, levelId"
+ + " FROM ("
+ + " SELECT DATETIMECONVERT(eventTime, '1:MILLISECONDS:EPOCH', "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', '1:HOURS') AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + " ) "
+ + " LIMIT 200 "
+ + ") "
+ + " GROUP BY time_col, levelId "
+ + " HAVING occupied_slots_count > 0"
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ double[] expectedOccupiedSlotsCounts1 = new double[]{1, 2, 3, 4, 3, 2, 1};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ Assert.assertEquals(gapFillRows1.size(), expectedOccupiedSlotsCounts1.length * 2);
+ DateTimeFormatSpec dateTimeFormatter =
+ new DateTimeFormatSpec("1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+
+ long start = dateTimeFormatter.fromFormatToMillis("2021-11-07 04:00:00.000");
+ for (int i = 0; i < expectedOccupiedSlotsCounts1.length * 2; i += 2) {
+ String firstTimeCol = (String) gapFillRows1.get(i)[0];
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts1[i / 2], gapFillRows1.get(i)[2]);
+ firstTimeCol = (String) gapFillRows1.get(i + 1)[0];
+ timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts1[i / 2], gapFillRows1.get(i + 1)[2]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ }
+
+ @Test
+ public void toEpochHoursGapfillTestSelectSelect() {
+ DateTimeFormatSpec dateTimeFormatter = new DateTimeFormatSpec("1:HOURS:EPOCH");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + " GapFill(ToEpochHours(eventTime), '1:HOURS:EPOCH', "
+ + " '454516', '454524', '1:HOURS', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId "
+ + "FROM parkingData "
+ + "WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + "LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ int[][] expectedOccupiedSlotsCounts1 =
+ new int[][]{{6, 6}, {8, 4}, {10, 2}, {12, 0}, {6, 4}, {4, 6}, {2, 10}, {0, 10}};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ start = dateTimeFormatter.fromFormatToMillis("454516");
+ int index = 0;
+ for (int i = 0; i < expectedOccupiedSlotsCounts1.length; i++) {
+ int ones = expectedOccupiedSlotsCounts1[i][0];
+ int zeros = expectedOccupiedSlotsCounts1[i][1];
+ int total = ones + zeros;
+ for (int k = 0; k < total; k++) {
+ String firstTimeCol = ((Long) gapFillRows1.get(index)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if (gapFillRows1.get(index)[1].equals(1)) {
+ ones--;
+ } else {
+ zeros--;
+ }
+ index++;
+ }
+ Assert.assertEquals(ones, 0);
+ Assert.assertEquals(zeros, 0);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ Assert.assertEquals(gapFillRows1.size(), index);
+
+ String gapfillQuery2 = "SELECT "
+ + "time_col, levelId, lotId, isOccupied, time_col "
+ + "FROM ("
+ + " SELECT GapFill(ToEpochHours(eventTime), '1:HOURS:EPOCH', "
+ + " '454516', '454524', '1:HOURS', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
+ + " WHERE isOccupied = 1 "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ int[] expectedOccupiedSlotsCounts2 = new int[]{6, 8, 10, 12, 6, 4, 2, 0};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ start = dateTimeFormatter.fromFormatToMillis("454516");
+ index = 0;
+ for (int i = 0; i < expectedOccupiedSlotsCounts2.length; i++) {
+ for (int k = 0; k < expectedOccupiedSlotsCounts2[i]; k++) {
+ String firstTimeCol = ((Long) gapFillRows2.get(index)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(gapFillRows2.get(index)[3], 1);
+ index++;
+ }
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ Assert.assertEquals(gapFillRows2.size(), index);
+ }
+
+ @Test
+ public void toEpochHoursGapfillTestAggregateSelect() {
+ DateTimeFormatSpec dateTimeFormatter = new DateTimeFormatSpec("1:HOURS:EPOCH");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + " GapFill(time_col, '1:HOURS:EPOCH', "
+ + " '454516', '454524', '1:HOURS', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)),"
+ + " levelId, lotId, occupied "
+ + "FROM ("
+ + " SELECT ToEpochHours(eventTime) AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + ") "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ int[][] expectedOccupiedSlotsCounts1 = new int[][]{{2, 6}, {4, 4}, {6, 2}, {8, 0}, {6, 2}, {4, 4}, {2, 6}, {0, 8}};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ start = dateTimeFormatter.fromFormatToMillis("454516");
+ int index = 0;
+ for (int i = 0; i < expectedOccupiedSlotsCounts1.length; i++) {
+ int ones = expectedOccupiedSlotsCounts1[i][0];
+ int zeros = expectedOccupiedSlotsCounts1[i][1];
+ int total = ones + zeros;
+ for (int k = 0; k < total; k++) {
+ String firstTimeCol = ((Long) gapFillRows1.get(index)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if (gapFillRows1.get(index)[3].equals(1)) {
+ ones--;
+ } else {
+ zeros--;
+ }
+ index++;
+ }
+ Assert.assertEquals(ones, 0);
+ Assert.assertEquals(zeros, 0);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ Assert.assertEquals(gapFillRows1.size(), index);
+
+ String gapfillQuery2 = "SELECT "
+ + " GapFill(time_col, '1:HOURS:EPOCH', "
+ + " '454516', '454524', '1:HOURS', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)),"
+ + " levelId, lotId, occupied "
+ + "FROM ("
+ + " SELECT ToEpochHours(eventTime) AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + ") "
+ + " WHERE occupied = 1 "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ int[] expectedOccupiedSlotsCounts2 = new int[]{2, 4, 6, 8, 6, 4, 2};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ start = dateTimeFormatter.fromFormatToMillis("454516");
+ index = 0;
+ for (int i = 0; i < expectedOccupiedSlotsCounts2.length; i++) {
+ for (int k = 0; k < expectedOccupiedSlotsCounts2[i]; k++) {
+ String firstTimeCol = ((Long) gapFillRows2.get(index)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(gapFillRows2.get(index)[3], 1);
+ index++;
+ }
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ Assert.assertEquals(gapFillRows2.size(), index);
+ }
+
+ @Test
+ public void toEpochHoursGapfillTestGapfillAggregate() {
+ DateTimeFormatSpec dateTimeFormatter = new DateTimeFormatSpec("1:HOURS:EPOCH");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + "time_col, SUM(isOccupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(ToEpochHours(eventTime), '1:HOURS:EPOCH', "
+ + " '454516', '454524', '1:HOURS', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
+ + " GROUP BY time_col "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ double[] expectedOccupiedSlotsCounts1 = new double[]{6, 8, 10, 12, 6, 4, 2, 0};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ Assert.assertEquals(gapFillRows1.size(), expectedOccupiedSlotsCounts1.length);
+ start = dateTimeFormatter.fromFormatToMillis("454516");
+ for (int i = 0; i < expectedOccupiedSlotsCounts1.length; i++) {
+ String firstTimeCol = ((Long) gapFillRows1.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts1[i], gapFillRows1.get(i)[1]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+
+ String gapfillQuery2 = "SELECT "
+ + "time_col, SUM(isOccupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(ToEpochHours(eventTime), '1:HOURS:EPOCH', "
+ + " '454516', '454524', '1:HOURS', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
+ + " WHERE isOccupied = 1 "
+ + " GROUP BY time_col "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ double[] expectedOccupiedSlotsCounts2 = new double[]{6, 8, 10, 12, 6, 4, 2};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ Assert.assertEquals(gapFillRows2.size(), expectedOccupiedSlotsCounts2.length);
+ start = dateTimeFormatter.fromFormatToMillis("454516");
+ for (int i = 0; i < expectedOccupiedSlotsCounts2.length; i++) {
+ String firstTimeCol = ((Long) gapFillRows2.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts2[i], gapFillRows2.get(i)[1]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ }
+
+ @Test
+ public void toEpochHoursGapfillTestGapfillAggregateWithOptionalGroupBy() {
+ DateTimeFormatSpec dateTimeFormatter = new DateTimeFormatSpec("1:HOURS:EPOCH");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + "time_col, levelId, SUM(isOccupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(ToEpochHours(eventTime), '1:HOURS:EPOCH', "
+ + " '454516', '454524', '1:HOURS', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
+ + " GROUP BY time_col, levelId "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ double[] expectedOccupiedSlotsCountsForLevel11 = new double[]{4, 5, 6, 5, 3, 2, 1, 0};
+ double[] expectedOccupiedSlotsCountsForLevel21 = new double[]{2, 3, 4, 7, 3, 2, 1, 0};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ Assert.assertEquals(gapFillRows1.size(), expectedOccupiedSlotsCountsForLevel11.length * 2);
+ start = dateTimeFormatter.fromFormatToMillis("454516");
+ for (int i = 0; i < expectedOccupiedSlotsCountsForLevel11.length * 2; i += 2) {
+ String firstTimeCol = ((Long) gapFillRows1.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if ("Level_0".equals(gapFillRows1.get(i)[1])) {
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel11[i / 2], gapFillRows1.get(i)[2]);
+ } else {
+ Assert.assertEquals("Level_1", gapFillRows1.get(i)[1]);
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel21[i / 2], gapFillRows1.get(i)[2]);
+ }
+ firstTimeCol = ((Long) gapFillRows1.get(i + 1)[0]).toString();
+ timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if ("Level_0".equals(gapFillRows1.get(i + 1)[1])) {
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel11[i / 2], gapFillRows1.get(i + 1)[2]);
+ } else {
+ Assert.assertEquals("Level_1", gapFillRows1.get(i + 1)[1]);
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel21[i / 2], gapFillRows1.get(i + 1)[2]);
+ }
+ start += dateTimeGranularity.granularityToMillis();
+ }
+
+ String gapfillQuery2 = "SELECT "
+ + "time_col, levelId, SUM(isOccupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(ToEpochHours(eventTime), '1:HOURS:EPOCH', "
+ + " '454516', '454524', '1:HOURS', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
+ + " WHERE isOccupied = 1 "
+ + " GROUP BY time_col, levelId "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ double[] expectedOccupiedSlotsCountsForLevel12 = new double[]{4, 5, 6, 5, 3, 2, 1};
+ double[] expectedOccupiedSlotsCountsForLevel22 = new double[]{2, 3, 4, 7, 3, 2, 1};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ Assert.assertEquals(gapFillRows2.size(), expectedOccupiedSlotsCountsForLevel12.length * 2);
+ start = dateTimeFormatter.fromFormatToMillis("454516");
+ for (int i = 0; i < expectedOccupiedSlotsCountsForLevel12.length * 2; i += 2) {
+ String firstTimeCol = ((Long) gapFillRows2.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if ("Level_0".equals(gapFillRows2.get(i)[1])) {
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel12[i / 2], gapFillRows2.get(i)[2]);
+ } else {
+ Assert.assertEquals("Level_1", gapFillRows2.get(i)[1]);
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel22[i / 2], gapFillRows2.get(i)[2]);
+ }
+ firstTimeCol = ((Long) gapFillRows2.get(i + 1)[0]).toString();
+ timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if ("Level_0".equals(gapFillRows2.get(i + 1)[1])) {
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel12[i / 2], gapFillRows2.get(i + 1)[2]);
+ } else {
+ Assert.assertEquals("Level_1", gapFillRows2.get(i + 1)[1]);
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel22[i / 2], gapFillRows2.get(i + 1)[2]);
+ }
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ }
+
+ @Test
+ public void toEpochHoursGapfillTestGapfillAggregateWithHavingClause() {
+ DateTimeFormatSpec dateTimeFormatter = new DateTimeFormatSpec("1:HOURS:EPOCH");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + "time_col, levelId, SUM(isOccupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(ToEpochHours(eventTime), '1:HOURS:EPOCH', "
+ + " '454516', '454524', '1:HOURS', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
+ + " GROUP BY time_col, levelId "
+ + " HAVING occupied_slots_count > 0"
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ double[] expectedOccupiedSlotsCountsForLevel11 = new double[]{4, 5, 6, 5, 3, 2, 1};
+ double[] expectedOccupiedSlotsCountsForLevel21 = new double[]{2, 3, 4, 7, 3, 2, 1};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ Assert.assertEquals(gapFillRows1.size(), expectedOccupiedSlotsCountsForLevel11.length * 2);
+ start = dateTimeFormatter.fromFormatToMillis("454516");
+ for (int i = 0; i < expectedOccupiedSlotsCountsForLevel11.length * 2; i += 2) {
+ String firstTimeCol = ((Long) gapFillRows1.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if ("Level_0".equals(gapFillRows1.get(i)[1])) {
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel11[i / 2], gapFillRows1.get(i)[2]);
+ } else {
+ Assert.assertEquals("Level_1", gapFillRows1.get(i)[1]);
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel21[i / 2], gapFillRows1.get(i)[2]);
+ }
+ firstTimeCol = ((Long) gapFillRows1.get(i + 1)[0]).toString();
+ timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if ("Level_0".equals(gapFillRows1.get(i + 1)[1])) {
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel11[i / 2], gapFillRows1.get(i + 1)[2]);
+ } else {
+ Assert.assertEquals("Level_1", gapFillRows1.get(i + 1)[1]);
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel21[i / 2], gapFillRows1.get(i + 1)[2]);
+ }
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ }
+
+ @Test
+ public void toEpochHoursGapfillTestAggregateAggregate() {
+ DateTimeFormatSpec dateTimeFormatter = new DateTimeFormatSpec("1:HOURS:EPOCH");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + "time_col, SUM(occupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(time_col, '1:HOURS:EPOCH', "
+ + " '454516', '454524', '1:HOURS', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)),"
+ + " occupied, lotId, levelId"
+ + " FROM ("
+ + " SELECT ToEpochHours(eventTime) AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + " ) "
+ + " LIMIT 200 "
+ + ") "
+ + " GROUP BY time_col "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ double[] expectedOccupiedSlotsCounts1 = new double[]{2, 4, 6, 8, 6, 4, 2, 0};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ Assert.assertEquals(gapFillRows1.size(), expectedOccupiedSlotsCounts1.length);
+ start = dateTimeFormatter.fromFormatToMillis("454516");
+ for (int i = 0; i < expectedOccupiedSlotsCounts1.length; i++) {
+ String firstTimeCol = ((Long) gapFillRows1.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts1[i], gapFillRows1.get(i)[1]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+
+ String gapfillQuery2 = "SELECT "
+ + "time_col, SUM(occupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(time_col, '1:HOURS:EPOCH', "
+ + " '454516', '454524', '1:HOURS', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " occupied, lotId, levelId"
+ + " FROM ("
+ + " SELECT ToEpochHours(eventTime) AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + " ) "
+ + " LIMIT 200 "
+ + ") "
+ + " WHERE occupied = 1 "
+ + " GROUP BY time_col "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ double[] expectedOccupiedSlotsCounts2 = new double[]{2, 4, 6, 8, 6, 4, 2};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ Assert.assertEquals(gapFillRows2.size(), expectedOccupiedSlotsCounts2.length);
+ start = dateTimeFormatter.fromFormatToMillis("454516");
+ for (int i = 0; i < expectedOccupiedSlotsCounts2.length; i++) {
+ String firstTimeCol = ((Long) gapFillRows2.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts2[i], gapFillRows2.get(i)[1]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ }
+
+ @Test
+ public void toEpochHoursGapfillTestAggregateAggregateWithOptionalGroupBy() {
+ DateTimeFormatSpec dateTimeFormatter = new DateTimeFormatSpec("1:HOURS:EPOCH");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + "time_col, levelId, SUM(occupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(time_col, '1:HOURS:EPOCH', "
+ + " '454516', '454524', '1:HOURS', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " occupied, lotId, levelId"
+ + " FROM ("
+ + " SELECT ToEpochHours(eventTime) AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + ") "
+ + " LIMIT 200 "
+ + ") "
+ + " GROUP BY time_col, levelId "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ double[] expectedOccupiedSlotsCounts1 = new double[]{1, 2, 3, 4, 3, 2, 1, 0};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ Assert.assertEquals(gapFillRows1.size(), expectedOccupiedSlotsCounts1.length * 2);
+ start = dateTimeFormatter.fromFormatToMillis("454516");
+ for (int i = 0; i < expectedOccupiedSlotsCounts1.length * 2; i += 2) {
+ String firstTimeCol = ((Long) gapFillRows1.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts1[i / 2], gapFillRows1.get(i)[2]);
+ firstTimeCol = ((Long) gapFillRows1.get(i + 1)[0]).toString();
+ timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts1[i / 2], gapFillRows1.get(i + 1)[2]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+
+ String gapfillQuery2 = "SELECT "
+ + "time_col, levelId, SUM(occupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(time_col, '1:HOURS:EPOCH', "
+ + " '454516', '454524', '1:HOURS', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " occupied, lotId, levelId"
+ + " FROM ("
+ + " SELECT ToEpochHours(eventTime) AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + " ) "
+ + " LIMIT 200 "
+ + ") "
+ + " WHERE occupied = 1 "
+ + " GROUP BY time_col, levelId "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ double[] expectedOccupiedSlotsCounts2 = new double[]{1, 2, 3, 4, 3, 2, 1};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ Assert.assertEquals(gapFillRows2.size(), expectedOccupiedSlotsCounts2.length * 2);
+ start = dateTimeFormatter.fromFormatToMillis("454516");
+ for (int i = 0; i < expectedOccupiedSlotsCounts2.length * 2; i += 2) {
+ String firstTimeCol = ((Long) gapFillRows2.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts2[i / 2], gapFillRows2.get(i)[2]);
+ firstTimeCol = ((Long) gapFillRows2.get(i + 1)[0]).toString();
+ timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts2[i / 2], gapFillRows2.get(i + 1)[2]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ }
+
+ @Test
+ public void toEpochHoursGapfillTestAggregateAggregateWithHavingClause() {
+ String gapfillQuery1 = "SELECT "
+ + "time_col, levelId, SUM(occupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(time_col, '1:HOURS:EPOCH', "
+ + " '454516', '454524', '1:HOURS', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " occupied, lotId, levelId"
+ + " FROM ("
+ + " SELECT ToEpochHours(eventTime) AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + ") "
+ + " LIMIT 200 "
+ + ") "
+ + " GROUP BY time_col, levelId "
+ + " HAVING occupied_slots_count > 0"
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ double[] expectedOccupiedSlotsCounts1 = new double[]{1, 2, 3, 4, 3, 2, 1};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ Assert.assertEquals(gapFillRows1.size(), expectedOccupiedSlotsCounts1.length * 2);
+ DateTimeFormatSpec dateTimeFormatter = new DateTimeFormatSpec("1:HOURS:EPOCH");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+
+ long start = dateTimeFormatter.fromFormatToMillis("454516");
+ for (int i = 0; i < expectedOccupiedSlotsCounts1.length * 2; i += 2) {
+ String firstTimeCol = ((Long) gapFillRows1.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts1[i / 2], gapFillRows1.get(i)[2]);
+ firstTimeCol = ((Long) gapFillRows1.get(i + 1)[0]).toString();
+ timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts1[i / 2], gapFillRows1.get(i + 1)[2]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ }
+
+ @Test
+ public void toEpochMinutesRoundedHoursGapfillTestSelectSelect() {
+ DateTimeFormatSpec dateTimeFormatter = new DateTimeFormatSpec("1:MINUTES:EPOCH");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT GapFill(ToEpochMinutesRounded(eventTime, 60), '1:MINUTES:EPOCH', "
+ + " '27270960', '27271440', '1:HOURS', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ int[][] expectedOccupiedSlotsCounts1 =
+ new int[][]{{6, 6}, {8, 4}, {10, 2}, {12, 0}, {6, 4}, {4, 6}, {2, 10}, {0, 10}};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ start = dateTimeFormatter.fromFormatToMillis("27270960");
+ int index = 0;
+ for (int i = 0; i < expectedOccupiedSlotsCounts1.length; i++) {
+ int ones = expectedOccupiedSlotsCounts1[i][0];
+ int zeros = expectedOccupiedSlotsCounts1[i][1];
+ int total = ones + zeros;
+ for (int k = 0; k < total; k++) {
+ String firstTimeCol = ((Long) (gapFillRows1.get(index)[0])).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if (gapFillRows1.get(index)[1].equals(1)) {
+ ones--;
+ } else {
+ zeros--;
+ }
+ index++;
+ }
+ Assert.assertEquals(ones, 0);
+ Assert.assertEquals(zeros, 0);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ Assert.assertEquals(gapFillRows1.size(), index);
+
+ String gapfillQuery2 = "SELECT "
+ + "time_col, levelId, lotId, isOccupied, time_col "
+ + "FROM ("
+ + " SELECT GapFill(ToEpochMinutesRounded(eventTime, 60), '1:MINUTES:EPOCH', "
+ + " '27270960', '27271440', '1:HOURS', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
+ + " WHERE isOccupied = 1 "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ int[] expectedOccupiedSlotsCounts2 = new int[]{6, 8, 10, 12, 6, 4, 2, 0};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ start = dateTimeFormatter.fromFormatToMillis("27270960");
+ index = 0;
+ for (int i = 0; i < expectedOccupiedSlotsCounts2.length; i++) {
+ for (int k = 0; k < expectedOccupiedSlotsCounts2[i]; k++) {
+ String firstTimeCol = ((Long) gapFillRows2.get(index)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(gapFillRows2.get(index)[3], 1);
+ index++;
+ }
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ Assert.assertEquals(gapFillRows2.size(), index);
+ }
+
+ @Test
+ public void toEpochMinutesRoundedHoursGapfillTestAggregateSelect() {
+ DateTimeFormatSpec dateTimeFormatter = new DateTimeFormatSpec("1:MINUTES:EPOCH");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + "GapFill(time_col, '1:MINUTES:EPOCH', "
+ + " '27270960', '27271440', '1:HOURS', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)), levelId, lotId, occupied "
+ + "FROM ("
+ + " SELECT ToEpochMinutesRounded(eventTime, 60) AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + ") "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ int[][] expectedOccupiedSlotsCounts1 = new int[][]{{2, 6}, {4, 4}, {6, 2}, {8, 0}, {6, 2}, {4, 4}, {2, 6}, {0, 8}};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ start = dateTimeFormatter.fromFormatToMillis("27270960");
+ int index = 0;
+ for (int i = 0; i < expectedOccupiedSlotsCounts1.length; i++) {
+ int ones = expectedOccupiedSlotsCounts1[i][0];
+ int zeros = expectedOccupiedSlotsCounts1[i][1];
+ int total = ones + zeros;
+ for (int k = 0; k < total; k++) {
+ String firstTimeCol = ((Long) gapFillRows1.get(index)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if (gapFillRows1.get(index)[3].equals(1)) {
+ ones--;
+ } else {
+ zeros--;
+ }
+ index++;
+ }
+ Assert.assertEquals(ones, 0);
+ Assert.assertEquals(zeros, 0);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ Assert.assertEquals(gapFillRows1.size(), index);
+
+ String gapfillQuery2 = "SELECT "
+ + "GapFill(time_col, '1:MINUTES:EPOCH', "
+ + " '27270960', '27271440', '1:HOURS', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)), levelId, lotId, occupied "
+ + "FROM ("
+ + " SELECT ToEpochMinutesRounded(eventTime, 60) AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + ") "
+ + " WHERE occupied = 1 "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ int[] expectedOccupiedSlotsCounts2 = new int[]{2, 4, 6, 8, 6, 4, 2};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ start = dateTimeFormatter.fromFormatToMillis("27270960");
+ index = 0;
+ for (int i = 0; i < expectedOccupiedSlotsCounts2.length; i++) {
+ for (int k = 0; k < expectedOccupiedSlotsCounts2[i]; k++) {
+ String firstTimeCol = ((Long) gapFillRows2.get(index)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(gapFillRows2.get(index)[3], 1);
+ index++;
+ }
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ Assert.assertEquals(gapFillRows2.size(), index);
+ }
+
+ @Test
+ public void toEpochMinutesRoundedHoursGapfillTestGapfillAggregate() {
+ DateTimeFormatSpec dateTimeFormatter = new DateTimeFormatSpec("1:MINUTES:EPOCH");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + "time_col, SUM(isOccupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(ToEpochMinutesRounded(eventTime, 60), '1:MINUTES:EPOCH', "
+ + " '27270960', '27271440', '1:HOURS', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
+ + " GROUP BY time_col "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ double[] expectedOccupiedSlotsCounts1 = new double[]{6, 8, 10, 12, 6, 4, 2, 0};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ Assert.assertEquals(gapFillRows1.size(), expectedOccupiedSlotsCounts1.length);
+ start = dateTimeFormatter.fromFormatToMillis("27270960");
+ for (int i = 0; i < expectedOccupiedSlotsCounts1.length; i++) {
+ String firstTimeCol = ((Long) gapFillRows1.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts1[i], gapFillRows1.get(i)[1]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+
+ String gapfillQuery2 = "SELECT "
+ + "time_col, SUM(isOccupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(ToEpochMinutesRounded(eventTime, 60), '1:MINUTES:EPOCH', "
+ + " '27270960', '27271440', '1:HOURS', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
+ + " WHERE isOccupied = 1 "
+ + " GROUP BY time_col "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ double[] expectedOccupiedSlotsCounts2 = new double[]{6, 8, 10, 12, 6, 4, 2};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ Assert.assertEquals(gapFillRows2.size(), expectedOccupiedSlotsCounts2.length);
+ start = dateTimeFormatter.fromFormatToMillis("27270960");
+ for (int i = 0; i < expectedOccupiedSlotsCounts2.length; i++) {
+ String firstTimeCol = ((Long) gapFillRows2.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts2[i], gapFillRows2.get(i)[1]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ }
+
+ @Test
+ public void toEpochMinutesRoundedHoursGapfillTestGapfillAggregateWithOptionalGroupBy() {
+ DateTimeFormatSpec dateTimeFormatter = new DateTimeFormatSpec("1:MINUTES:EPOCH");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + "time_col, levelId, SUM(isOccupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(ToEpochMinutesRounded(eventTime, 60), '1:MINUTES:EPOCH', "
+ + " '27270960', '27271440', '1:HOURS', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
+ + " GROUP BY time_col, levelId "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ double[] expectedOccupiedSlotsCountsForLevel11 = new double[]{4, 5, 6, 5, 3, 2, 1, 0};
+ double[] expectedOccupiedSlotsCountsForLevel21 = new double[]{2, 3, 4, 7, 3, 2, 1, 0};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ Assert.assertEquals(gapFillRows1.size(), expectedOccupiedSlotsCountsForLevel11.length * 2);
+ start = dateTimeFormatter.fromFormatToMillis("27270960");
+ for (int i = 0; i < expectedOccupiedSlotsCountsForLevel11.length * 2; i += 2) {
+ String firstTimeCol = ((Long) gapFillRows1.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if ("Level_0".equals(gapFillRows1.get(i)[1])) {
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel11[i / 2], gapFillRows1.get(i)[2]);
+ } else {
+ Assert.assertEquals("Level_1", gapFillRows1.get(i)[1]);
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel21[i / 2], gapFillRows1.get(i)[2]);
+ }
+ firstTimeCol = ((Long) gapFillRows1.get(i + 1)[0]).toString();
+ timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if ("Level_0".equals(gapFillRows1.get(i + 1)[1])) {
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel11[i / 2], gapFillRows1.get(i + 1)[2]);
+ } else {
+ Assert.assertEquals("Level_1", gapFillRows1.get(i + 1)[1]);
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel21[i / 2], gapFillRows1.get(i + 1)[2]);
+ }
+ start += dateTimeGranularity.granularityToMillis();
+ }
+
+ String gapfillQuery2 = "SELECT "
+ + "time_col, levelId, SUM(isOccupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(ToEpochMinutesRounded(eventTime, 60), '1:MINUTES:EPOCH', "
+ + " '27270960', '27271440', '1:HOURS', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
+ + " WHERE isOccupied = 1 "
+ + " GROUP BY time_col, levelId "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ double[] expectedOccupiedSlotsCountsForLevel12 = new double[]{4, 5, 6, 5, 3, 2, 1};
+ double[] expectedOccupiedSlotsCountsForLevel22 = new double[]{2, 3, 4, 7, 3, 2, 1};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ Assert.assertEquals(gapFillRows2.size(), expectedOccupiedSlotsCountsForLevel12.length * 2);
+ start = dateTimeFormatter.fromFormatToMillis("27270960");
+ for (int i = 0; i < expectedOccupiedSlotsCountsForLevel12.length * 2; i += 2) {
+ String firstTimeCol = ((Long) gapFillRows2.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if ("Level_0".equals(gapFillRows2.get(i)[1])) {
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel12[i / 2], gapFillRows2.get(i)[2]);
+ } else {
+ Assert.assertEquals("Level_1", gapFillRows2.get(i)[1]);
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel22[i / 2], gapFillRows2.get(i)[2]);
+ }
+ firstTimeCol = ((Long) gapFillRows2.get(i + 1)[0]).toString();
+ timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if ("Level_0".equals(gapFillRows2.get(i + 1)[1])) {
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel12[i / 2], gapFillRows2.get(i + 1)[2]);
+ } else {
+ Assert.assertEquals("Level_1", gapFillRows2.get(i + 1)[1]);
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel22[i / 2], gapFillRows2.get(i + 1)[2]);
+ }
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ }
+
+ @Test
+ public void toEpochMinutesRoundedHoursGapfillTestGapfillAggregateWithHavingClause() {
+ DateTimeFormatSpec dateTimeFormatter = new DateTimeFormatSpec("1:MINUTES:EPOCH");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + "time_col, levelId, SUM(isOccupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(ToEpochMinutesRounded(eventTime, 60), '1:MINUTES:EPOCH', "
+ + " '27270960', '27271440', '1:HOURS', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
+ + " GROUP BY time_col, levelId "
+ + " HAVING occupied_slots_count > 0"
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ double[] expectedOccupiedSlotsCountsForLevel11 = new double[]{4, 5, 6, 5, 3, 2, 1};
+ double[] expectedOccupiedSlotsCountsForLevel21 = new double[]{2, 3, 4, 7, 3, 2, 1};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ Assert.assertEquals(gapFillRows1.size(), expectedOccupiedSlotsCountsForLevel11.length * 2);
+ start = dateTimeFormatter.fromFormatToMillis("27270960");
+ for (int i = 0; i < expectedOccupiedSlotsCountsForLevel11.length * 2; i += 2) {
+ String firstTimeCol = ((Long) gapFillRows1.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if ("Level_0".equals(gapFillRows1.get(i)[1])) {
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel11[i / 2], gapFillRows1.get(i)[2]);
+ } else {
+ Assert.assertEquals("Level_1", gapFillRows1.get(i)[1]);
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel21[i / 2], gapFillRows1.get(i)[2]);
+ }
+ firstTimeCol = ((Long) gapFillRows1.get(i + 1)[0]).toString();
+ timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if ("Level_0".equals(gapFillRows1.get(i + 1)[1])) {
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel11[i / 2], gapFillRows1.get(i + 1)[2]);
+ } else {
+ Assert.assertEquals("Level_1", gapFillRows1.get(i + 1)[1]);
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel21[i / 2], gapFillRows1.get(i + 1)[2]);
+ }
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ }
+
+ @Test
+ public void toEpochMinutesRoundedHoursGapfillTestAggregateAggregate() {
+ DateTimeFormatSpec dateTimeFormatter = new DateTimeFormatSpec("1:MINUTES:EPOCH");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + "time_col, SUM(occupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(time_col, '1:MINUTES:EPOCH', "
+ + " '27270960', '27271440', '1:HOURS', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " occupied, lotId, levelId"
+ + " FROM ("
+ + " SELECT ToEpochMinutesRounded(eventTime, 60) AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + " ) "
+ + " LIMIT 200 "
+ + ") "
+ + " GROUP BY time_col "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ double[] expectedOccupiedSlotsCounts1 = new double[]{2, 4, 6, 8, 6, 4, 2, 0};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ Assert.assertEquals(gapFillRows1.size(), expectedOccupiedSlotsCounts1.length);
+ start = dateTimeFormatter.fromFormatToMillis("27270960");
+ for (int i = 0; i < expectedOccupiedSlotsCounts1.length; i++) {
+ String firstTimeCol = ((Long) gapFillRows1.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts1[i], gapFillRows1.get(i)[1]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+
+ String gapfillQuery2 = "SELECT "
+ + "time_col, SUM(occupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(time_col, '1:MINUTES:EPOCH', "
+ + " '27270960', '27271440', '1:HOURS', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " occupied, lotId, levelId"
+ + " FROM ("
+ + " SELECT ToEpochMinutesRounded(eventTime, 60) time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + " ) "
+ + " LIMIT 200 "
+ + ") "
+ + " WHERE occupied = 1 "
+ + " GROUP BY time_col "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ double[] expectedOccupiedSlotsCounts2 = new double[]{2, 4, 6, 8, 6, 4, 2};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ Assert.assertEquals(gapFillRows2.size(), expectedOccupiedSlotsCounts2.length);
+ start = dateTimeFormatter.fromFormatToMillis("27270960");
+ for (int i = 0; i < expectedOccupiedSlotsCounts2.length; i++) {
+ String firstTimeCol = ((Long) gapFillRows2.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts2[i], gapFillRows2.get(i)[1]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ }
+
+ @Test
+ public void toEpochMinutesRoundedHoursGapfillTestAggregateAggregateWithOptionalGroupBy() {
+ DateTimeFormatSpec dateTimeFormatter = new DateTimeFormatSpec("1:MINUTES:EPOCH");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + "time_col, levelId, SUM(occupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(time_col, '1:MINUTES:EPOCH', "
+ + " '27270960', '27271440', '1:HOURS', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " occupied, lotId, levelId"
+ + " FROM ("
+ + " SELECT ToEpochMinutesRounded(eventTime, 60) AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + " ) "
+ + " LIMIT 200 "
+ + ") "
+ + " GROUP BY time_col, levelId "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ double[] expectedOccupiedSlotsCounts1 = new double[]{1, 2, 3, 4, 3, 2, 1, 0};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ Assert.assertEquals(gapFillRows1.size(), expectedOccupiedSlotsCounts1.length * 2);
+ start = dateTimeFormatter.fromFormatToMillis("27270960");
+ for (int i = 0; i < expectedOccupiedSlotsCounts1.length * 2; i += 2) {
+ String firstTimeCol = ((Long) gapFillRows1.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts1[i / 2], gapFillRows1.get(i)[2]);
+ firstTimeCol = ((Long) gapFillRows1.get(i + 1)[0]).toString();
+ timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts1[i / 2], gapFillRows1.get(i + 1)[2]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+
+ String gapfillQuery2 = "SELECT "
+ + "time_col, SUM(occupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(time_col, '1:MINUTES:EPOCH', "
+ + " '27270960', '27271440', '1:HOURS', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " occupied, lotId, levelId"
+ + " FROM ("
+ + " SELECT ToEpochMinutesRounded(eventTime, 60) AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + " ) "
+ + " LIMIT 200 "
+ + ") "
+ + " WHERE occupied = 1 "
+ + " GROUP BY time_col, levelId "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ double[] expectedOccupiedSlotsCounts2 = new double[]{1, 2, 3, 4, 3, 2, 1};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ Assert.assertEquals(gapFillRows2.size(), expectedOccupiedSlotsCounts2.length * 2);
+ start = dateTimeFormatter.fromFormatToMillis("27270960");
+ for (int i = 0; i < expectedOccupiedSlotsCounts2.length * 2; i += 2) {
+ String firstTimeCol = ((Long) gapFillRows2.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts2[i / 2], gapFillRows2.get(i)[1]);
+ firstTimeCol = ((Long) gapFillRows2.get(i + 1)[0]).toString();
+ timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts2[i / 2], gapFillRows2.get(i + 1)[1]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ }
+
+ @Test
+ public void toEpochMinutesRoundedHoursGapfillTestAggregateAggregateWithHavingClause() {
+ String gapfillQuery1 = "SELECT "
+ + "time_col, levelId, SUM(occupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(time_col, '1:MINUTES:EPOCH', "
+ + " '27270960', '27271440', '1:HOURS', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " occupied, lotId, levelId"
+ + " FROM ("
+ + " SELECT ToEpochMinutesRounded(eventTime, 60) time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + " ) "
+ + " LIMIT 200 "
+ + ") "
+ + " GROUP BY time_col, levelId "
+ + " HAVING occupied_slots_count > 0"
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ double[] expectedOccupiedSlotsCounts1 = new double[]{1, 2, 3, 4, 3, 2, 1};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ Assert.assertEquals(gapFillRows1.size(), expectedOccupiedSlotsCounts1.length * 2);
+ DateTimeFormatSpec dateTimeFormatter = new DateTimeFormatSpec("1:MINUTES:EPOCH");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+
+ long start = dateTimeFormatter.fromFormatToMillis("27270960");
+ for (int i = 0; i < expectedOccupiedSlotsCounts1.length * 2; i += 2) {
+ String firstTimeCol = ((Long) gapFillRows1.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts1[i / 2], gapFillRows1.get(i)[2]);
+ firstTimeCol = ((Long) gapFillRows1.get(i + 1)[0]).toString();
+ timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts1[i / 2], gapFillRows1.get(i + 1)[2]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ }
+
+ @Test
+ public void toEpochMinutesBucketHoursGapfillTestSelectSelect() {
+ DateTimeFormatSpec dateTimeFormatter = new DateTimeFormatSpec("1:HOURS:EPOCH");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + " GapFill(ToEpochMinutesBucket(eventTime, 60), '1:HOURS:EPOCH', "
+ + " '454516', '454524', '1:HOURS', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ int[][] expectedOccupiedSlotsCounts1 =
+ new int[][]{{6, 6}, {8, 4}, {10, 2}, {12, 0}, {6, 4}, {4, 6}, {2, 10}, {0, 10}};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ start = dateTimeFormatter.fromFormatToMillis("454516");
+ int index = 0;
+ for (int i = 0; i < expectedOccupiedSlotsCounts1.length; i++) {
+ int ones = expectedOccupiedSlotsCounts1[i][0];
+ int zeros = expectedOccupiedSlotsCounts1[i][1];
+ int total = ones + zeros;
+ for (int k = 0; k < total; k++) {
+ String firstTimeCol = ((Long) (gapFillRows1.get(index)[0])).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if (gapFillRows1.get(index)[1].equals(1)) {
+ ones--;
+ } else {
+ zeros--;
+ }
+ index++;
+ }
+ Assert.assertEquals(ones, 0);
+ Assert.assertEquals(zeros, 0);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ Assert.assertEquals(gapFillRows1.size(), index);
+
+ String gapfillQuery2 = "SELECT "
+ + "time_col, levelId, lotId, isOccupied, time_col "
+ + "FROM ("
+ + " SELECT GapFill(ToEpochMinutesBucket(eventTime, 60), '1:HOURS:EPOCH', "
+ + " '454516', '454524', '1:HOURS', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
+ + " WHERE isOccupied = 1 "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ int[] expectedOccupiedSlotsCounts2 = new int[]{6, 8, 10, 12, 6, 4, 2, 0};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ start = dateTimeFormatter.fromFormatToMillis("454516");
+ index = 0;
+ for (int i = 0; i < expectedOccupiedSlotsCounts2.length; i++) {
+ for (int k = 0; k < expectedOccupiedSlotsCounts2[i]; k++) {
+ String firstTimeCol = ((Long) gapFillRows2.get(index)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(gapFillRows2.get(index)[3], 1);
+ index++;
+ }
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ Assert.assertEquals(gapFillRows2.size(), index);
+ }
+
+ @Test
+ public void toEpochMinutesBucketHoursGapfillTestAggregateSelect() {
+ DateTimeFormatSpec dateTimeFormatter = new DateTimeFormatSpec("1:HOURS:EPOCH");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + " GapFill(time_col, '1:HOURS:EPOCH', "
+ + " '454516', '454524', '1:HOURS', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)),"
+ + " levelId, lotId, occupied "
+ + "FROM ("
+ + " SELECT ToEpochMinutesBucket(eventTime, 60) AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + ") "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ int[][] expectedOccupiedSlotsCounts1 = new int[][]{{2, 6}, {4, 4}, {6, 2}, {8, 0}, {6, 2}, {4, 4}, {2, 6}, {0, 8}};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ start = dateTimeFormatter.fromFormatToMillis("454516");
+ int index = 0;
+ for (int i = 0; i < expectedOccupiedSlotsCounts1.length; i++) {
+ int ones = expectedOccupiedSlotsCounts1[i][0];
+ int zeros = expectedOccupiedSlotsCounts1[i][1];
+ int total = ones + zeros;
+ for (int k = 0; k < total; k++) {
+ String firstTimeCol = ((Long) gapFillRows1.get(index)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if (gapFillRows1.get(index)[3].equals(1)) {
+ ones--;
+ } else {
+ zeros--;
+ }
+ index++;
+ }
+ Assert.assertEquals(ones, 0);
+ Assert.assertEquals(zeros, 0);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ Assert.assertEquals(gapFillRows1.size(), index);
+
+ String gapfillQuery2 = "SELECT "
+ + " GapFill(time_col, '1:HOURS:EPOCH', "
+ + " '454516', '454524', '1:HOURS', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)),"
+ + " levelId, lotId, occupied "
+ + "FROM ("
+ + " SELECT ToEpochMinutesBucket(eventTime, 60) AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + ") "
+ + " WHERE occupied = 1 "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ int[] expectedOccupiedSlotsCounts2 = new int[]{2, 4, 6, 8, 6, 4, 2};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ start = dateTimeFormatter.fromFormatToMillis("454516");
+ index = 0;
+ for (int i = 0; i < expectedOccupiedSlotsCounts2.length; i++) {
+ for (int k = 0; k < expectedOccupiedSlotsCounts2[i]; k++) {
+ String firstTimeCol = ((Long) gapFillRows2.get(index)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(gapFillRows2.get(index)[3], 1);
+ index++;
+ }
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ Assert.assertEquals(gapFillRows2.size(), index);
+ }
+
+ @Test
+ public void toEpochMinutesBucketHoursGapfillTestGapfillAggregate() {
+ DateTimeFormatSpec dateTimeFormatter = new DateTimeFormatSpec("1:HOURS:EPOCH");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + "time_col, SUM(isOccupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(ToEpochMinutesBucket(eventTime, 60), '1:HOURS:EPOCH', "
+ + " '454516', '454524', '1:HOURS', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
+ + " GROUP BY time_col "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ double[] expectedOccupiedSlotsCounts1 = new double[]{6, 8, 10, 12, 6, 4, 2, 0};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ Assert.assertEquals(gapFillRows1.size(), expectedOccupiedSlotsCounts1.length);
+ start = dateTimeFormatter.fromFormatToMillis("454516");
+ for (int i = 0; i < expectedOccupiedSlotsCounts1.length; i++) {
+ String firstTimeCol = ((Long) gapFillRows1.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts1[i], gapFillRows1.get(i)[1]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+
+ String gapfillQuery2 = "SELECT "
+ + "time_col, SUM(isOccupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(ToEpochMinutesBucket(eventTime, 60), '1:HOURS:EPOCH', "
+ + " '454516', '454524', '1:HOURS', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
+ + " WHERE isOccupied = 1 "
+ + " GROUP BY time_col "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ double[] expectedOccupiedSlotsCounts2 = new double[]{6, 8, 10, 12, 6, 4, 2};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ Assert.assertEquals(gapFillRows2.size(), expectedOccupiedSlotsCounts2.length);
+ start = dateTimeFormatter.fromFormatToMillis("454516");
+ for (int i = 0; i < expectedOccupiedSlotsCounts2.length; i++) {
+ String firstTimeCol = ((Long) gapFillRows2.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts2[i], gapFillRows2.get(i)[1]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ }
+
+ @Test
+ public void toEpochMinutesBucketHoursGapfillTestGapfillAggregateWithOptionalGroupBy() {
+ DateTimeFormatSpec dateTimeFormatter = new DateTimeFormatSpec("1:HOURS:EPOCH");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + "time_col, levelId, SUM(isOccupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(ToEpochMinutesBucket(eventTime, 60), '1:HOURS:EPOCH', "
+ + " '454516', '454524', '1:HOURS', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
+ + " GROUP BY time_col, levelId "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ double[] expectedOccupiedSlotsCountsForLevel11 = new double[]{4, 5, 6, 5, 3, 2, 1, 0};
+ double[] expectedOccupiedSlotsCountsForLevel21 = new double[]{2, 3, 4, 7, 3, 2, 1, 0};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ Assert.assertEquals(gapFillRows1.size(), expectedOccupiedSlotsCountsForLevel11.length * 2);
+ start = dateTimeFormatter.fromFormatToMillis("454516");
+ for (int i = 0; i < expectedOccupiedSlotsCountsForLevel11.length * 2; i += 2) {
+ String firstTimeCol = ((Long) gapFillRows1.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if ("Level_0".equals(gapFillRows1.get(i)[1])) {
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel11[i / 2], gapFillRows1.get(i)[2]);
+ } else {
+ Assert.assertEquals("Level_1", gapFillRows1.get(i)[1]);
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel21[i / 2], gapFillRows1.get(i)[2]);
+ }
+ firstTimeCol = ((Long) gapFillRows1.get(i + 1)[0]).toString();
+ timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if ("Level_0".equals(gapFillRows1.get(i + 1)[1])) {
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel11[i / 2], gapFillRows1.get(i + 1)[2]);
+ } else {
+ Assert.assertEquals("Level_1", gapFillRows1.get(i + 1)[1]);
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel21[i / 2], gapFillRows1.get(i + 1)[2]);
+ }
+ start += dateTimeGranularity.granularityToMillis();
+ }
+
+ String gapfillQuery2 = "SELECT "
+ + "time_col, levelId, SUM(isOccupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(ToEpochMinutesBucket(eventTime, 60), '1:HOURS:EPOCH', "
+ + " '454516', '454524', '1:HOURS', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
+ + " WHERE isOccupied = 1 "
+ + " GROUP BY time_col, levelId "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ double[] expectedOccupiedSlotsCountsForLevel12 = new double[]{4, 5, 6, 5, 3, 2, 1};
+ double[] expectedOccupiedSlotsCountsForLevel22 = new double[]{2, 3, 4, 7, 3, 2, 1};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ Assert.assertEquals(gapFillRows2.size(), expectedOccupiedSlotsCountsForLevel12.length * 2);
+ start = dateTimeFormatter.fromFormatToMillis("454516");
+ for (int i = 0; i < expectedOccupiedSlotsCountsForLevel12.length * 2; i += 2) {
+ String firstTimeCol = ((Long) gapFillRows2.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if ("Level_0".equals(gapFillRows2.get(i)[1])) {
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel12[i / 2], gapFillRows2.get(i)[2]);
+ } else {
+ Assert.assertEquals("Level_1", gapFillRows2.get(i)[1]);
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel22[i / 2], gapFillRows2.get(i)[2]);
+ }
+ firstTimeCol = ((Long) gapFillRows2.get(i + 1)[0]).toString();
+ timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if ("Level_0".equals(gapFillRows2.get(i + 1)[1])) {
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel12[i / 2], gapFillRows2.get(i + 1)[2]);
+ } else {
+ Assert.assertEquals("Level_1", gapFillRows2.get(i + 1)[1]);
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel22[i / 2], gapFillRows2.get(i + 1)[2]);
+ }
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ }
+
+ @Test
+ public void toEpochMinutesBucketHoursGapfillTestGapfillAggregateWithHavingClause() {
+ DateTimeFormatSpec dateTimeFormatter = new DateTimeFormatSpec("1:HOURS:EPOCH");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + "time_col, levelId, SUM(isOccupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(ToEpochMinutesBucket(eventTime, 60), '1:HOURS:EPOCH', "
+ + " '454516', '454524', '1:HOURS', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
+ + " GROUP BY time_col, levelId "
+ + " HAVING occupied_slots_count > 0"
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ double[] expectedOccupiedSlotsCountsForLevel11 = new double[]{4, 5, 6, 5, 3, 2, 1};
+ double[] expectedOccupiedSlotsCountsForLevel21 = new double[]{2, 3, 4, 7, 3, 2, 1};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ Assert.assertEquals(gapFillRows1.size(), expectedOccupiedSlotsCountsForLevel11.length * 2);
+ start = dateTimeFormatter.fromFormatToMillis("454516");
+ for (int i = 0; i < expectedOccupiedSlotsCountsForLevel11.length * 2; i += 2) {
+ String firstTimeCol = ((Long) gapFillRows1.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if ("Level_0".equals(gapFillRows1.get(i)[1])) {
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel11[i / 2], gapFillRows1.get(i)[2]);
+ } else {
+ Assert.assertEquals("Level_1", gapFillRows1.get(i)[1]);
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel21[i / 2], gapFillRows1.get(i)[2]);
+ }
+ firstTimeCol = ((Long) gapFillRows1.get(i + 1)[0]).toString();
+ timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if ("Level_0".equals(gapFillRows1.get(i + 1)[1])) {
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel11[i / 2], gapFillRows1.get(i + 1)[2]);
+ } else {
+ Assert.assertEquals("Level_1", gapFillRows1.get(i + 1)[1]);
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel21[i / 2], gapFillRows1.get(i + 1)[2]);
+ }
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ }
+
+ @Test
+ public void toEpochMinutesBucketHoursGapfillTestAggregateAggregate() {
+ DateTimeFormatSpec dateTimeFormatter = new DateTimeFormatSpec("1:HOURS:EPOCH");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + "time_col, SUM(occupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(time_col, '1:HOURS:EPOCH', "
+ + " '454516', '454524', '1:HOURS', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " occupied, lotId, levelId"
+ + " FROM ("
+ + " SELECT ToEpochMinutesBucket(eventTime, 60) AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + " ) "
+ + " LIMIT 200 "
+ + ") "
+ + " GROUP BY time_col "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ double[] expectedOccupiedSlotsCounts1 = new double[]{2, 4, 6, 8, 6, 4, 2, 0};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ Assert.assertEquals(gapFillRows1.size(), expectedOccupiedSlotsCounts1.length);
+ start = dateTimeFormatter.fromFormatToMillis("454516");
+ for (int i = 0; i < expectedOccupiedSlotsCounts1.length; i++) {
+ String firstTimeCol = ((Long) gapFillRows1.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts1[i], gapFillRows1.get(i)[1]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+
+ String gapfillQuery2 = "SELECT "
+ + "time_col, SUM(occupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(time_col, '1:HOURS:EPOCH', "
+ + " '454516', '454524', '1:HOURS', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " occupied, lotId, levelId"
+ + " FROM ("
+ + " SELECT ToEpochMinutesBucket(eventTime, 60) AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + " ) "
+ + " LIMIT 200 "
+ + ") "
+ + " WHERE occupied = 1 "
+ + " GROUP BY time_col "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ double[] expectedOccupiedSlotsCounts2 = new double[]{2, 4, 6, 8, 6, 4, 2};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ Assert.assertEquals(gapFillRows2.size(), expectedOccupiedSlotsCounts2.length);
+ start = dateTimeFormatter.fromFormatToMillis("454516");
+ for (int i = 0; i < expectedOccupiedSlotsCounts2.length; i++) {
+ String firstTimeCol = ((Long) gapFillRows2.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts2[i], gapFillRows2.get(i)[1]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ }
+
+ @Test
+ public void toEpochMinutesBucketHoursGapfillTestAggregateAggregateWithOptionalGroupBy() {
+ DateTimeFormatSpec dateTimeFormatter = new DateTimeFormatSpec("1:HOURS:EPOCH");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + "time_col, levelId, SUM(occupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(time_col, '1:HOURS:EPOCH', "
+ + " '454516', '454524', '1:HOURS', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " occupied, lotId, levelId"
+ + " FROM ("
+ + " SELECT ToEpochMinutesBucket(eventTime, 60) AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + " ) "
+ + " LIMIT 200 "
+ + ") "
+ + " GROUP BY time_col, levelId "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ double[] expectedOccupiedSlotsCounts1 = new double[]{1, 2, 3, 4, 3, 2, 1, 0};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ Assert.assertEquals(gapFillRows1.size(), expectedOccupiedSlotsCounts1.length * 2);
+ start = dateTimeFormatter.fromFormatToMillis("454516");
+ for (int i = 0; i < expectedOccupiedSlotsCounts1.length * 2; i += 2) {
+ String firstTimeCol = ((Long) gapFillRows1.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts1[i / 2], gapFillRows1.get(i)[2]);
+ firstTimeCol = ((Long) gapFillRows1.get(i + 1)[0]).toString();
+ timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts1[i / 2], gapFillRows1.get(i + 1)[2]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+
+ String gapfillQuery2 = "SELECT "
+ + "time_col, SUM(occupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(time_col, '1:HOURS:EPOCH', "
+ + " '454516', '454524', '1:HOURS', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " occupied, lotId, levelId"
+ + " FROM ("
+ + " SELECT ToEpochMinutesBucket(eventTime, 60) AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + " ) "
+ + " LIMIT 200 "
+ + ") "
+ + " WHERE occupied = 1 "
+ + " GROUP BY time_col, levelId "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ double[] expectedOccupiedSlotsCounts2 = new double[]{1, 2, 3, 4, 3, 2, 1};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ Assert.assertEquals(gapFillRows2.size(), expectedOccupiedSlotsCounts2.length * 2);
+ start = dateTimeFormatter.fromFormatToMillis("454516");
+ for (int i = 0; i < expectedOccupiedSlotsCounts2.length * 2; i += 2) {
+ String firstTimeCol = ((Long) gapFillRows2.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts2[i / 2], gapFillRows2.get(i)[2]);
+ firstTimeCol = ((Long) gapFillRows2.get(i + 1)[0]).toString();
+ timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts2[i / 2], gapFillRows2.get(i + 1)[2]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ }
+
+ @Test
+ public void toEpochMinutesBucketHoursGapfillTestAggregateAggregateWithHavingClause() {
+ String gapfillQuery1 = "SELECT "
+ + "time_col, levelId, SUM(occupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(time_col, '1:HOURS:EPOCH', "
+ + " '454516', '454524', '1:HOURS', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " occupied, lotId, levelId"
+ + " FROM ("
+ + " SELECT ToEpochMinutesBucket(eventTime, 60) AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + " ) "
+ + " LIMIT 200 "
+ + ") "
+ + " GROUP BY time_col, levelId "
+ + " HAVING occupied_slots_count > 0"
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ double[] expectedOccupiedSlotsCounts1 = new double[]{1, 2, 3, 4, 3, 2, 1};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ Assert.assertEquals(gapFillRows1.size(), expectedOccupiedSlotsCounts1.length * 2);
+ DateTimeFormatSpec dateTimeFormatter = new DateTimeFormatSpec("1:HOURS:EPOCH");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+
+ long start = dateTimeFormatter.fromFormatToMillis("454516");
+ for (int i = 0; i < expectedOccupiedSlotsCounts1.length * 2; i += 2) {
+ String firstTimeCol = ((Long) gapFillRows1.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts1[i / 2], gapFillRows1.get(i)[2]);
+ firstTimeCol = ((Long) gapFillRows1.get(i + 1)[0]).toString();
+ timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts1[i / 2], gapFillRows1.get(i + 1)[2]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ }
+
+ @Test
+ public void dateTruncHoursGapfillTestSelectSelect() {
+ DateTimeFormatSpec dateTimeFormatter = new DateTimeFormatSpec("1:MILLISECONDS:EPOCH");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + " GapFill(DATETRUNC('hour', eventTime, 'milliseconds'), '1:MILLISECONDS:EPOCH', "
+ + " '1636257600000', '1636286400000', '1:HOURS', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ int[][] expectedOccupiedSlotsCounts1 =
+ new int[][]{{6, 6}, {8, 4}, {10, 2}, {12, 0}, {6, 4}, {4, 6}, {2, 10}, {0, 10}};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ start = dateTimeFormatter.fromFormatToMillis("1636257600000");
+ int index = 0;
+ for (int i = 0; i < expectedOccupiedSlotsCounts1.length; i++) {
+ int ones = expectedOccupiedSlotsCounts1[i][0];
+ int zeros = expectedOccupiedSlotsCounts1[i][1];
+ int total = ones + zeros;
+ for (int k = 0; k < total; k++) {
+ String firstTimeCol = ((Long) (gapFillRows1.get(index)[0])).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if (gapFillRows1.get(index)[1].equals(1)) {
+ ones--;
+ } else {
+ zeros--;
+ }
+ index++;
+ }
+ Assert.assertEquals(ones, 0);
+ Assert.assertEquals(zeros, 0);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ Assert.assertEquals(gapFillRows1.size(), index);
+
+ String gapfillQuery2 = "SELECT "
+ + "time_col, levelId, lotId, isOccupied, time_col "
+ + "FROM ("
+ + " SELECT GapFill(DATETRUNC('hour', eventTime, 'milliseconds'), '1:MILLISECONDS:EPOCH', "
+ + " '1636257600000', '1636286400000', '1:HOURS', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
+ + " WHERE isOccupied = 1 "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ int[] expectedOccupiedSlotsCounts2 = new int[]{6, 8, 10, 12, 6, 4, 2, 0};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ start = dateTimeFormatter.fromFormatToMillis("1636257600000");
+ index = 0;
+ for (int i = 0; i < expectedOccupiedSlotsCounts2.length; i++) {
+ for (int k = 0; k < expectedOccupiedSlotsCounts2[i]; k++) {
+ String firstTimeCol = ((Long) gapFillRows2.get(index)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(gapFillRows2.get(index)[3], 1);
+ index++;
+ }
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ Assert.assertEquals(gapFillRows2.size(), index);
+ }
+
+ @Test
+ public void dateTruncHoursGapfillTestAggregateSelect() {
+ DateTimeFormatSpec dateTimeFormatter = new DateTimeFormatSpec("1:MILLISECONDS:EPOCH");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + " GapFill(time_col, '1:MILLISECONDS:EPOCH', "
+ + " '1636257600000', '1636286400000', '1:HOURS', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)),"
+ + " levelId, lotId, occupied "
+ + "FROM ("
+ + " SELECT DATETRUNC('hour', eventTime, 'milliseconds') AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + ") "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ int[][] expectedOccupiedSlotsCounts1 = new int[][]{{2, 6}, {4, 4}, {6, 2}, {8, 0}, {6, 2}, {4, 4}, {2, 6}, {0, 8}};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ start = dateTimeFormatter.fromFormatToMillis("1636257600000");
+ int index = 0;
+ for (int i = 0; i < expectedOccupiedSlotsCounts1.length; i++) {
+ int ones = expectedOccupiedSlotsCounts1[i][0];
+ int zeros = expectedOccupiedSlotsCounts1[i][1];
+ int total = ones + zeros;
+ for (int k = 0; k < total; k++) {
+ String firstTimeCol = ((Long) gapFillRows1.get(index)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if (gapFillRows1.get(index)[3].equals(1)) {
+ ones--;
+ } else {
+ zeros--;
+ }
+ index++;
+ }
+ Assert.assertEquals(ones, 0);
+ Assert.assertEquals(zeros, 0);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ Assert.assertEquals(gapFillRows1.size(), index);
+
+ String gapfillQuery2 = "SELECT "
+ + " GapFill(time_col, '1:MILLISECONDS:EPOCH', "
+ + " '1636257600000', '1636286400000', '1:HOURS', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)),"
+ + " levelId, lotId, occupied "
+ + "FROM ("
+ + " SELECT DATETRUNC('hour', eventTime, 'milliseconds') AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + ") "
+ + " WHERE occupied = 1 "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ int[] expectedOccupiedSlotsCounts2 = new int[]{2, 4, 6, 8, 6, 4, 2};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ start = dateTimeFormatter.fromFormatToMillis("1636257600000");
+ index = 0;
+ for (int i = 0; i < expectedOccupiedSlotsCounts2.length; i++) {
+ for (int k = 0; k < expectedOccupiedSlotsCounts2[i]; k++) {
+ String firstTimeCol = ((Long) gapFillRows2.get(index)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(gapFillRows2.get(index)[3], 1);
+ index++;
+ }
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ Assert.assertEquals(gapFillRows2.size(), index);
+ }
+
+ @Test
+ public void dateTruncHoursGapfillTestGapfillAggregate() {
+ DateTimeFormatSpec dateTimeFormatter = new DateTimeFormatSpec("1:MILLISECONDS:EPOCH");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + "time_col, SUM(isOccupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(DATETRUNC('hour', eventTime, 'milliseconds'), '1:MILLISECONDS:EPOCH', "
+ + " '1636257600000', '1636286400000', '1:HOURS', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
+ + " GROUP BY time_col "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ double[] expectedOccupiedSlotsCounts1 = new double[]{6, 8, 10, 12, 6, 4, 2, 0};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ Assert.assertEquals(gapFillRows1.size(), expectedOccupiedSlotsCounts1.length);
+ start = dateTimeFormatter.fromFormatToMillis("1636257600000");
+ for (int i = 0; i < expectedOccupiedSlotsCounts1.length; i++) {
+ String firstTimeCol = ((Long) gapFillRows1.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts1[i], gapFillRows1.get(i)[1]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+
+ String gapfillQuery2 = "SELECT "
+ + "time_col, SUM(isOccupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(DATETRUNC('hour', eventTime, 'milliseconds'), '1:MILLISECONDS:EPOCH', "
+ + " '1636257600000', '1636286400000', '1:HOURS', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
+ + " WHERE isOccupied = 1 "
+ + " GROUP BY time_col "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ double[] expectedOccupiedSlotsCounts2 = new double[]{6, 8, 10, 12, 6, 4, 2};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ Assert.assertEquals(gapFillRows2.size(), expectedOccupiedSlotsCounts2.length);
+ start = dateTimeFormatter.fromFormatToMillis("1636257600000");
+ for (int i = 0; i < expectedOccupiedSlotsCounts2.length; i++) {
+ String firstTimeCol = ((Long) gapFillRows2.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts2[i], gapFillRows2.get(i)[1]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ }
+
+ @Test
+ public void dateTruncHoursGapfillTestGapfillAggregateWithOptionalGroupBy() {
+ DateTimeFormatSpec dateTimeFormatter = new DateTimeFormatSpec("1:MILLISECONDS:EPOCH");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + "time_col, levelId, SUM(isOccupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(DATETRUNC('hour', eventTime, 'milliseconds'), '1:MILLISECONDS:EPOCH', "
+ + " '1636257600000', '1636286400000', '1:HOURS', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
+ + " GROUP BY time_col, levelId "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ double[] expectedOccupiedSlotsCountsForLevel11 = new double[]{4, 5, 6, 5, 3, 2, 1, 0};
+ double[] expectedOccupiedSlotsCountsForLevel21 = new double[]{2, 3, 4, 7, 3, 2, 1, 0};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ Assert.assertEquals(gapFillRows1.size(), expectedOccupiedSlotsCountsForLevel11.length * 2);
+ start = dateTimeFormatter.fromFormatToMillis("1636257600000");
+ for (int i = 0; i < expectedOccupiedSlotsCountsForLevel11.length * 2; i += 2) {
+ String firstTimeCol = ((Long) gapFillRows1.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if ("Level_0".equals(gapFillRows1.get(i)[1])) {
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel11[i / 2], gapFillRows1.get(i)[2]);
+ } else {
+ Assert.assertEquals("Level_1", gapFillRows1.get(i)[1]);
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel21[i / 2], gapFillRows1.get(i)[2]);
+ }
+ firstTimeCol = ((Long) gapFillRows1.get(i + 1)[0]).toString();
+ timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if ("Level_0".equals(gapFillRows1.get(i + 1)[1])) {
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel11[i / 2], gapFillRows1.get(i + 1)[2]);
+ } else {
+ Assert.assertEquals("Level_1", gapFillRows1.get(i + 1)[1]);
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel21[i / 2], gapFillRows1.get(i + 1)[2]);
+ }
+ start += dateTimeGranularity.granularityToMillis();
+ }
+
+ String gapfillQuery2 = "SELECT "
+ + "time_col, levelId, SUM(isOccupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(DATETRUNC('hour', eventTime, 'milliseconds'), '1:MILLISECONDS:EPOCH', "
+ + " '1636257600000', '1636286400000', '1:HOURS', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
+ + " WHERE isOccupied = 1 "
+ + " GROUP BY time_col, levelId "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ double[] expectedOccupiedSlotsCountsForLevel12 = new double[]{4, 5, 6, 5, 3, 2, 1};
+ double[] expectedOccupiedSlotsCountsForLevel22 = new double[]{2, 3, 4, 7, 3, 2, 1};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ Assert.assertEquals(gapFillRows2.size(), expectedOccupiedSlotsCountsForLevel12.length * 2);
+ start = dateTimeFormatter.fromFormatToMillis("1636257600000");
+ for (int i = 0; i < expectedOccupiedSlotsCountsForLevel12.length * 2; i += 2) {
+ String firstTimeCol = ((Long) gapFillRows2.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if ("Level_0".equals(gapFillRows2.get(i)[1])) {
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel12[i / 2], gapFillRows2.get(i)[2]);
+ } else {
+ Assert.assertEquals("Level_1", gapFillRows2.get(i)[1]);
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel22[i / 2], gapFillRows2.get(i)[2]);
+ }
+ firstTimeCol = ((Long) gapFillRows2.get(i + 1)[0]).toString();
+ timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if ("Level_0".equals(gapFillRows2.get(i + 1)[1])) {
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel12[i / 2], gapFillRows2.get(i + 1)[2]);
+ } else {
+ Assert.assertEquals("Level_1", gapFillRows2.get(i + 1)[1]);
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel22[i / 2], gapFillRows2.get(i + 1)[2]);
+ }
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ }
+
+ @Test
+ public void dateTruncHoursGapfillTestGapfillAggregateWithHavingClause() {
+ DateTimeFormatSpec dateTimeFormatter = new DateTimeFormatSpec("1:MILLISECONDS:EPOCH");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT time_col, levelId, SUM(isOccupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(DATETRUNC('hour', eventTime, 'milliseconds'), '1:MILLISECONDS:EPOCH', "
+ + " '1636257600000', '1636286400000', '1:HOURS', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
+ + " GROUP BY time_col, levelId "
+ + " HAVING occupied_slots_count > 0"
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ double[] expectedOccupiedSlotsCountsForLevel11 = new double[]{4, 5, 6, 5, 3, 2, 1};
+ double[] expectedOccupiedSlotsCountsForLevel21 = new double[]{2, 3, 4, 7, 3, 2, 1};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ Assert.assertEquals(gapFillRows1.size(), expectedOccupiedSlotsCountsForLevel11.length * 2);
+ start = dateTimeFormatter.fromFormatToMillis("1636257600000");
+ for (int i = 0; i < expectedOccupiedSlotsCountsForLevel11.length * 2; i += 2) {
+ String firstTimeCol = ((Long) gapFillRows1.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if ("Level_0".equals(gapFillRows1.get(i)[1])) {
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel11[i / 2], gapFillRows1.get(i)[2]);
+ } else {
+ Assert.assertEquals("Level_1", gapFillRows1.get(i)[1]);
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel21[i / 2], gapFillRows1.get(i)[2]);
+ }
+ firstTimeCol = ((Long) gapFillRows1.get(i + 1)[0]).toString();
+ timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if ("Level_0".equals(gapFillRows1.get(i + 1)[1])) {
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel11[i / 2], gapFillRows1.get(i + 1)[2]);
+ } else {
+ Assert.assertEquals("Level_1", gapFillRows1.get(i + 1)[1]);
+ Assert.assertEquals(expectedOccupiedSlotsCountsForLevel21[i / 2], gapFillRows1.get(i + 1)[2]);
+ }
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ }
+
+ @Test
+ public void dateTruncHoursGapfillTestAggregateAggregate() {
+ DateTimeFormatSpec dateTimeFormatter = new DateTimeFormatSpec("1:MILLISECONDS:EPOCH");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + "SUM(occupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(time_col, '1:MILLISECONDS:EPOCH', "
+ + " '1636257600000', '1636286400000', '1:HOURS', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)), "
+ + " occupied, lotId, levelId"
+ + " FROM ("
+ + " SELECT DATETRUNC('hour', eventTime, 'milliseconds') AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + " ) "
+ + " LIMIT 200 "
+ + ") "
+ + " GROUP BY time_col "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ double[] expectedOccupiedSlotsCounts1 = new double[]{2, 4, 6, 8, 6, 4, 2, 0};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ Assert.assertEquals(gapFillRows1.size(), expectedOccupiedSlotsCounts1.length);
+ start = dateTimeFormatter.fromFormatToMillis("1636257600000");
+ for (int i = 0; i < expectedOccupiedSlotsCounts1.length; i++) {
+ String firstTimeCol = ((Long) gapFillRows1.get(i)[1]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts1[i], gapFillRows1.get(i)[0]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+
+ String gapfillQuery2 = "SELECT "
+ + "time_col, SUM(occupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(time_col, '1:MILLISECONDS:EPOCH', "
+ + " '1636257600000', '1636286400000', '1:HOURS', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " occupied, lotId, levelId"
+ + " FROM ("
+ + " SELECT DATETRUNC('hour', eventTime, 'milliseconds') AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + " ) "
+ + " LIMIT 200 "
+ + ") "
+ + " WHERE occupied = 1 "
+ + " GROUP BY time_col "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ double[] expectedOccupiedSlotsCounts2 = new double[]{2, 4, 6, 8, 6, 4, 2};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ Assert.assertEquals(gapFillRows2.size(), expectedOccupiedSlotsCounts2.length);
+ start = dateTimeFormatter.fromFormatToMillis("1636257600000");
+ for (int i = 0; i < expectedOccupiedSlotsCounts2.length; i++) {
+ String firstTimeCol = ((Long) gapFillRows2.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts2[i], gapFillRows2.get(i)[1]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ }
+
+ @Test
+ public void dateTruncHoursGapfillTestAggregateAggregateWithOptionalGroupBy() {
+ DateTimeFormatSpec dateTimeFormatter = new DateTimeFormatSpec("1:MILLISECONDS:EPOCH");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + "time_col, levelId, SUM(occupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(time_col, '1:MILLISECONDS:EPOCH', "
+ + " '1636257600000', '1636286400000', '1:HOURS', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)), "
+ + " occupied, lotId, levelId"
+ + " FROM ("
+ + " SELECT DATETRUNC('hour', eventTime, 'milliseconds') AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + " ) "
+ + " LIMIT 200 "
+ + ") "
+ + " GROUP BY time_col, levelId "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ double[] expectedOccupiedSlotsCounts1 = new double[]{1, 2, 3, 4, 3, 2, 1, 0};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ Assert.assertEquals(gapFillRows1.size(), expectedOccupiedSlotsCounts1.length * 2);
+ start = dateTimeFormatter.fromFormatToMillis("1636257600000");
+ for (int i = 0; i < expectedOccupiedSlotsCounts1.length * 2; i += 2) {
+ String firstTimeCol = ((Long) gapFillRows1.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts1[i / 2], gapFillRows1.get(i)[2]);
+ firstTimeCol = ((Long) gapFillRows1.get(i + 1)[0]).toString();
+ timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts1[i / 2], gapFillRows1.get(i + 1)[2]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+
+ String gapfillQuery2 = "SELECT "
+ + "time_col, levelId, SUM(occupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(time_col, '1:MILLISECONDS:EPOCH', "
+ + " '1636257600000', '1636286400000', '1:HOURS', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)), "
+ + " occupied, lotId, levelId"
+ + " FROM ("
+ + " SELECT DATETRUNC('hour', eventTime, 'milliseconds') AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + " ) "
+ + " LIMIT 200 "
+ + ") "
+ + " WHERE occupied = 1 "
+ + " GROUP BY time_col, levelId "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ double[] expectedOccupiedSlotsCounts2 = new double[]{1, 2, 3, 4, 3, 2, 1};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ Assert.assertEquals(gapFillRows2.size(), expectedOccupiedSlotsCounts2.length * 2);
+ start = dateTimeFormatter.fromFormatToMillis("1636257600000");
+ for (int i = 0; i < expectedOccupiedSlotsCounts2.length * 2; i += 2) {
+ String firstTimeCol = ((Long) gapFillRows2.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts2[i / 2], gapFillRows2.get(i)[2]);
+ firstTimeCol = ((Long) gapFillRows2.get(i + 1)[0]).toString();
+ timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts2[i / 2], gapFillRows2.get(i + 1)[2]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ }
+
+ @Test
+ public void dateTruncHoursGapfillTestAggregateAggregateWithHavingClause() {
+ String gapfillQuery1 = "SELECT "
+ + "time_col, levelId, SUM(occupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(time_col, '1:MILLISECONDS:EPOCH', "
+ + " '1636257600000', '1636286400000', '1:HOURS', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)),"
+ + " occupied, lotId, levelId"
+ + " FROM ("
+ + " SELECT DATETRUNC('hour', eventTime, 'milliseconds') AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + " ) "
+ + " LIMIT 200 "
+ + ") "
+ + " GROUP BY time_col, levelId "
+ + " HAVING occupied_slots_count > 0"
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ double[] expectedOccupiedSlotsCounts1 = new double[]{1, 2, 3, 4, 3, 2, 1};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ Assert.assertEquals(gapFillRows1.size(), expectedOccupiedSlotsCounts1.length * 2);
+ DateTimeFormatSpec dateTimeFormatter = new DateTimeFormatSpec("1:MILLISECONDS:EPOCH");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+
+ long start = dateTimeFormatter.fromFormatToMillis("1636257600000");
+ for (int i = 0; i < expectedOccupiedSlotsCounts1.length * 2; i += 2) {
+ String firstTimeCol = ((Long) gapFillRows1.get(i)[0]).toString();
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts1[i / 2], gapFillRows1.get(i)[2]);
+ firstTimeCol = ((Long) gapFillRows1.get(i + 1)[0]).toString();
+ timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts1[i / 2], gapFillRows1.get(i + 1)[2]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ }
+
+ @Test
+ public void datetimeconvertGapfillTestWithMissingTimeSeries() {
+ String gapfillQuery = "SELECT "
+ + "time_col, levelId, lotId, isOccupied, time_col "
+ + "FROM ("
+ + " SELECT GapFill(DATETIMECONVERT(eventTime, '1:MILLISECONDS:EPOCH', "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', '1:HOURS'), "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', "
+ + " '2021-11-07 4:00:00.000', '2021-11-07 12:00:00.000', '1:HOURS', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE')) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
+ + " LIMIT 200 ";
+
+ try {
+ getBrokerResponseForSqlQuery(gapfillQuery);
+ Assert.fail();
+ } catch (Exception ex) {
+ Assert.assertEquals(ex.getClass().getSimpleName(), "IllegalArgumentException");
+ }
+ }
+
+ @Test
+ public void datetimeconvertGapfillTestWithMissingGroupByTimeBucket() {
+ String gapfillQuery = "SELECT "
+ + "levelId, SUM(isOccupied) "
+ + "FROM ("
+ + " SELECT GapFill(DATETIMECONVERT(eventTime, '1:MILLISECONDS:EPOCH', "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', '1:HOURS'), "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', "
+ + " '2021-11-07 4:00:00.000', '2021-11-07 12:00:00.000', '1:HOURS', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
+ + " GROUP BY levelId"
+ + " LIMIT 200 ";
+
+ try {
+ getBrokerResponseForSqlQuery(gapfillQuery);
+ Assert.fail();
+ } catch (Exception ex) {
+ Assert.assertEquals(ex.getClass().getSimpleName(), "IllegalArgumentException");
+ }
+ }
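The two negative tests above use the try/catch/Assert.fail idiom to verify that a malformed gapfill query is rejected. The same idea can be wrapped in a small helper; this is a standalone sketch of the pattern (newer TestNG versions ship an equivalent `Assert.expectThrows`):

```java
public class ExpectThrows {
    interface ThrowingRunnable {
        void run() throws Throwable;
    }

    // Runs the body and returns the thrown exception,
    // failing if nothing is thrown or the type does not match.
    static <T extends Throwable> T expectThrows(Class<T> type, ThrowingRunnable body) {
        try {
            body.run();
        } catch (Throwable t) {
            if (type.isInstance(t)) {
                return type.cast(t);
            }
            throw new AssertionError("Expected " + type.getSimpleName() + " but got " + t, t);
        }
        throw new AssertionError("Expected " + type.getSimpleName() + " but nothing was thrown");
    }

    public static void main(String[] args) {
        IllegalArgumentException ex = expectThrows(IllegalArgumentException.class,
            () -> {
                throw new IllegalArgumentException("missing TIMESERIESON");
            });
        System.out.println(ex.getMessage());
    }
}
```

With this helper, each negative test collapses to a single statement instead of a try/catch block, and a passing call still hands back the exception for further assertions on its message.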
+
+ @Test
+ public void datetimeconvertGapfillTestAggregateAggregateWithLimitTesting() {
+ DateTimeFormatSpec dateTimeFormatter =
+ new DateTimeFormatSpec("1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + "time_col, SUM(occupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(time_col, "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', "
+ + " '2021-11-07 4:00:00.000', '2021-11-07 12:00:00.000', '1:HOURS', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " occupied, lotId, levelId"
+ + " FROM ("
+ + " SELECT DATETIMECONVERT(eventTime, '1:MILLISECONDS:EPOCH', "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', '1:HOURS') AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + " ) "
+ + " LIMIT 40 "
+ + ") "
+ + " GROUP BY time_col "
+ + " LIMIT 40 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ double[] expectedOccupiedSlotsCounts1 = new double[]{2, 4, 6, 8, 6};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ Assert.assertEquals(gapFillRows1.size(), expectedOccupiedSlotsCounts1.length);
+ start = dateTimeFormatter.fromFormatToMillis("2021-11-07 04:00:00.000");
+ for (int i = 0; i < expectedOccupiedSlotsCounts1.length; i++) {
+ String firstTimeCol = (String) gapFillRows1.get(i)[0];
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts1[i], gapFillRows1.get(i)[1]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+
+ String gapfillQuery2 = "SELECT "
+ + "time_col, SUM(occupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(time_col, "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', "
+ + " '2021-11-07 4:00:00.000', '2021-11-07 12:00:00.000', '1:HOURS', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " occupied, lotId, levelId"
+ + " FROM ("
+ + " SELECT DATETIMECONVERT(eventTime, '1:MILLISECONDS:EPOCH', "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', '1:HOURS') AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + " ) "
+ + " LIMIT 56 "
+ + ") "
+ + " GROUP BY time_col "
+ + " LIMIT 6 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ double[] expectedOccupiedSlotsCounts2 = new double[]{2, 4, 6, 8, 6, 4};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ Assert.assertEquals(gapFillRows2.size(), expectedOccupiedSlotsCounts2.length);
+ start = dateTimeFormatter.fromFormatToMillis("2021-11-07 04:00:00.000");
+ for (int i = 0; i < expectedOccupiedSlotsCounts2.length; i++) {
+ String firstTimeCol = (String) gapFillRows2.get(i)[0];
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts2[i], gapFillRows2.get(i)[1]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ }
+
+ @Test
+ public void datetimeconvertGapfillTestSelectSelectOrderBy() {
+ DateTimeFormatSpec dateTimeFormatter =
+ new DateTimeFormatSpec("1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + " GapFill(DATETIMECONVERT(eventTime, '1:MILLISECONDS:EPOCH', "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', '1:HOURS'), "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', "
+ + " '2021-11-07 4:00:00.000', '2021-11-07 12:00:00.000', '1:HOURS', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " levelId, lotId, isOccupied "
+ + "FROM parkingData "
+ + "WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + "ORDER BY time_col, levelId "
+ + "LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ int[][] expectedOccupiedSlotsCounts1 =
+ new int[][]{{6, 6}, {8, 4}, {10, 2}, {12, 0}, {6, 4}, {4, 6}, {2, 10}, {0, 10}};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ start = dateTimeFormatter.fromFormatToMillis("2021-11-07 04:00:00.000");
+ int index = 0;
+ for (int i = 0; i < expectedOccupiedSlotsCounts1.length; i++) {
+ int ones = expectedOccupiedSlotsCounts1[i][0];
+ int zeros = expectedOccupiedSlotsCounts1[i][1];
+ int total = ones + zeros;
+ for (int k = 0; k < total; k++) {
+ String firstTimeCol = (String) gapFillRows1.get(index)[0];
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if (gapFillRows1.get(index)[3].equals(1)) {
+ ones--;
+ } else {
+ zeros--;
+ }
+ index++;
+ }
+ Assert.assertEquals(ones, 0);
+ Assert.assertEquals(zeros, 0);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ Assert.assertEquals(gapFillRows1.size(), index);
+
+ String gapfillQuery2 = "SELECT "
+ + "time_col, levelId, lotId, isOccupied, time_col "
+ + "FROM ("
+ + " SELECT GapFill(DATETIMECONVERT(eventTime, '1:MILLISECONDS:EPOCH', "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', '1:HOURS'), "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', "
+ + " '2021-11-07 4:00:00.000', '2021-11-07 12:00:00.000', '1:HOURS', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " ORDER BY time_col, levelId DESC"
+ + " LIMIT 200 "
+ + ") "
+ + " WHERE isOccupied = 1 "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ int[] expectedOccupiedSlotsCounts2 = new int[]{6, 8, 10, 12, 6, 4, 2, 0};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ start = dateTimeFormatter.fromFormatToMillis("2021-11-07 04:00:00.000");
+ index = 0;
+ for (int i = 0; i < expectedOccupiedSlotsCounts2.length; i++) {
+ for (int k = 0; k < expectedOccupiedSlotsCounts2[i]; k++) {
+ String firstTimeCol = (String) gapFillRows2.get(index)[0];
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(gapFillRows2.get(index)[3], 1);
+ index++;
+ }
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ Assert.assertEquals(gapFillRows2.size(), index);
+ }
+
+ @Test
+ public void datetimeconvertGapfillTestAggregateSelectOrderBy() {
+ DateTimeFormatSpec dateTimeFormatter =
+ new DateTimeFormatSpec("1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + "GapFill(time_col, "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', "
+ + " '2021-11-07 4:00:00.000', '2021-11-07 12:00:00.000', '1:HOURS', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)), levelId, lotId, occupied "
+ + "FROM ("
+ + " SELECT DATETIMECONVERT(eventTime, '1:MILLISECONDS:EPOCH', "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', '1:HOURS') AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " ORDER BY time_col, levelId "
+ + " LIMIT 200 "
+ + ") "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ int[][] expectedOccupiedSlotsCounts1 = new int[][]{{2, 6}, {4, 4}, {6, 2}, {8, 0}, {6, 2}, {4, 4}, {2, 6}, {0, 8}};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ start = dateTimeFormatter.fromFormatToMillis("2021-11-07 04:00:00.000");
+ int index = 0;
+ for (int i = 0; i < expectedOccupiedSlotsCounts1.length; i++) {
+ int ones = expectedOccupiedSlotsCounts1[i][0];
+ int zeros = expectedOccupiedSlotsCounts1[i][1];
+ int total = ones + zeros;
+ for (int k = 0; k < total; k++) {
+ String firstTimeCol = (String) gapFillRows1.get(index)[0];
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if (gapFillRows1.get(index)[3].equals(1)) {
+ ones--;
+ } else {
+ zeros--;
+ }
+ index++;
+ }
+ Assert.assertEquals(ones, 0);
+ Assert.assertEquals(zeros, 0);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ Assert.assertEquals(gapFillRows1.size(), index);
+
+ String gapfillQuery2 =
+ "SELECT "
+ + "GapFill(time_col, "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', "
+ + " '2021-11-07 4:00:00.000', '2021-11-07 12:00:00.000', '1:HOURS', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)), levelId, lotId, occupied "
+ + "FROM ("
+ + " SELECT DATETIMECONVERT(eventTime, '1:MILLISECONDS:EPOCH', "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', '1:HOURS') AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " ORDER BY time_col, levelId "
+ + " LIMIT 200 "
+ + ") "
+ + " WHERE occupied = 1 "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ int[] expectedOccupiedSlotsCounts2 = new int[]{2, 4, 6, 8, 6, 4, 2};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ start = dateTimeFormatter.fromFormatToMillis("2021-11-07 04:00:00.000");
+ index = 0;
+ for (int i = 0; i < expectedOccupiedSlotsCounts2.length; i++) {
+ for (int k = 0; k < expectedOccupiedSlotsCounts2[i]; k++) {
+ String firstTimeCol = (String) gapFillRows2.get(index)[0];
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(gapFillRows2.get(index)[3], 1);
+ index++;
+ }
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ Assert.assertEquals(gapFillRows2.size(), index);
+ }
+
+ @Test
+ public void datetimeconvertGapfillTestGapfillAggregateOrderBy() {
+ DateTimeFormatSpec dateTimeFormatter =
+ new DateTimeFormatSpec("1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String dataTimeConvertQuery = "SELECT "
+ + "DATETIMECONVERT(eventTime, '1:MILLISECONDS:EPOCH', "
+ + "'1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', '1:HOURS') AS time_col, "
+ + "SUM(isOccupied) "
+ + "FROM parkingData "
+ + "WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + "GROUP BY 1 "
+ + "ORDER BY 1 "
+ + "LIMIT 200";
+
+ BrokerResponseNative dateTimeConvertBrokerResponse = getBrokerResponseForSqlQuery(dataTimeConvertQuery);
+
+ ResultTable dateTimeConvertResultTable = dateTimeConvertBrokerResponse.getResultTable();
+ Assert.assertEquals(dateTimeConvertResultTable.getRows().size(), 8);
+
+ String gapfillQuery1 = "SELECT "
+ + "time_col, SUM(isOccupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(DATETIMECONVERT(eventTime, '1:MILLISECONDS:EPOCH', "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', '1:HOURS'), "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', "
+ + " '2021-11-07 4:00:00.000', '2021-11-07 12:00:00.000', '1:HOURS', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " ORDER BY time_col, levelId "
+ + " LIMIT 200 "
+ + ") "
+ + " GROUP BY time_col "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ double[] expectedOccupiedSlotsCounts1 = new double[]{6, 8, 10, 12, 6, 4, 2, 0};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ Assert.assertEquals(gapFillRows1.size(), expectedOccupiedSlotsCounts1.length);
+ start = dateTimeFormatter.fromFormatToMillis("2021-11-07 04:00:00.000");
+ for (int i = 0; i < expectedOccupiedSlotsCounts1.length; i++) {
+ String firstTimeCol = (String) gapFillRows1.get(i)[0];
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts1[i], gapFillRows1.get(i)[1]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+
+ String gapfillQuery2 = "SELECT "
+ + "time_col, SUM(isOccupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(DATETIMECONVERT(eventTime, '1:MILLISECONDS:EPOCH', "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', '1:HOURS'), "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', "
+ + " '2021-11-07 4:00:00.000', '2021-11-07 12:00:00.000', '1:HOURS', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " ORDER BY time_col, levelId "
+ + " LIMIT 200 "
+ + ") "
+ + " WHERE isOccupied = 1 "
+ + " GROUP BY time_col "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ double[] expectedOccupiedSlotsCounts2 = new double[]{6, 8, 10, 12, 6, 4, 2};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ Assert.assertEquals(gapFillRows2.size(), expectedOccupiedSlotsCounts2.length);
+ start = dateTimeFormatter.fromFormatToMillis("2021-11-07 04:00:00.000");
+ for (int i = 0; i < expectedOccupiedSlotsCounts2.length; i++) {
+ String firstTimeCol = (String) gapFillRows2.get(i)[0];
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts2[i], gapFillRows2.get(i)[1]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ }
+
+ @Test
+ public void datetimeconvertGapfillTestAggregateAggregateOrderBy() {
+ DateTimeFormatSpec dateTimeFormatter =
+ new DateTimeFormatSpec("1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + "time_col, SUM(occupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(time_col, "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', "
+ + " '2021-11-07 4:00:00.000', '2021-11-07 12:00:00.000', '1:HOURS', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " occupied, lotId, levelId"
+ + " FROM ("
+ + " SELECT DATETIMECONVERT(eventTime, '1:MILLISECONDS:EPOCH', "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', '1:HOURS') AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " ORDER BY time_col, levelId "
+ + " LIMIT 200 "
+ + " ) "
+ + " LIMIT 200 "
+ + ") "
+ + " GROUP BY time_col "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ double[] expectedOccupiedSlotsCounts1 = new double[]{2, 4, 6, 8, 6, 4, 2, 0};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ Assert.assertEquals(gapFillRows1.size(), expectedOccupiedSlotsCounts1.length);
+ start = dateTimeFormatter.fromFormatToMillis("2021-11-07 04:00:00.000");
+ for (int i = 0; i < expectedOccupiedSlotsCounts1.length; i++) {
+ String firstTimeCol = (String) gapFillRows1.get(i)[0];
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts1[i], gapFillRows1.get(i)[1]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+
+ String gapfillQuery2 = "SELECT "
+ + "time_col, SUM(occupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(time_col, "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', "
+ + " '2021-11-07 4:00:00.000', '2021-11-07 12:00:00.000', '1:HOURS', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " occupied, lotId, levelId"
+ + " FROM ("
+ + " SELECT DATETIMECONVERT(eventTime, '1:MILLISECONDS:EPOCH', "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', '1:HOURS') AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " ORDER BY time_col, levelId "
+ + " LIMIT 200 "
+ + " ) "
+ + " LIMIT 200 "
+ + ") "
+ + " WHERE occupied = 1 "
+ + " GROUP BY time_col "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ double[] expectedOccupiedSlotsCounts2 = new double[]{2, 4, 6, 8, 6, 4, 2};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ Assert.assertEquals(gapFillRows2.size(), expectedOccupiedSlotsCounts2.length);
+ start = dateTimeFormatter.fromFormatToMillis("2021-11-07 04:00:00.000");
+ for (int i = 0; i < expectedOccupiedSlotsCounts2.length; i++) {
+ String firstTimeCol = (String) gapFillRows2.get(i)[0];
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(expectedOccupiedSlotsCounts2[i], gapFillRows2.get(i)[1]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ }
+
+ @Test
+ public void datetimeconvertGapfillTestAggregateAggregateWithTimeBucketAggregation() {
+ DateTimeFormatSpec dateTimeFormatter =
+ new DateTimeFormatSpec("1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + "time_col, COUNT(occupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(time_col, "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', "
+ + " '2021-11-07 4:00:00.000', '2021-11-07 12:00:00.000', '5:MINUTES', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " occupied, lotId, levelId"
+ + " FROM ("
+ + " SELECT DATETIMECONVERT(eventTime, '1:MILLISECONDS:EPOCH', "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', '5:MINUTES') AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 2000 "
+ + " ) "
+ + " LIMIT 2000 "
+ + ") "
+ + " WHERE occupied = 1 "
+ + " GROUP BY time_col "
+ + " LIMIT 2000 ";
+
+ System.out.println(gapfillQuery1);
Review comment:
Fixed
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: commits-unsubscribe@pinot.apache.org
For queries about this service, please contact Infrastructure at:
users@infra.apache.org
---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@pinot.apache.org
For additional commands, e-mail: commits-help@pinot.apache.org
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r819260278
##########
File path: pinot-core/src/test/java/org/apache/pinot/queries/GapfillQueriesTest.java
##########
@@ -0,0 +1,3615 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.queries;
+
+import java.io.File;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+import org.apache.commons.io.FileUtils;
+import org.apache.pinot.common.response.broker.BrokerResponseNative;
+import org.apache.pinot.common.response.broker.ResultTable;
+import org.apache.pinot.segment.local.indexsegment.immutable.ImmutableSegmentLoader;
+import org.apache.pinot.segment.local.segment.creator.impl.SegmentIndexCreationDriverImpl;
+import org.apache.pinot.segment.local.segment.readers.GenericRowRecordReader;
+import org.apache.pinot.segment.spi.ImmutableSegment;
+import org.apache.pinot.segment.spi.IndexSegment;
+import org.apache.pinot.segment.spi.creator.SegmentGeneratorConfig;
+import org.apache.pinot.spi.config.table.TableConfig;
+import org.apache.pinot.spi.config.table.TableType;
+import org.apache.pinot.spi.data.DateTimeFormatSpec;
+import org.apache.pinot.spi.data.DateTimeGranularitySpec;
+import org.apache.pinot.spi.data.FieldSpec.DataType;
+import org.apache.pinot.spi.data.Schema;
+import org.apache.pinot.spi.data.readers.GenericRow;
+import org.apache.pinot.spi.utils.ReadMode;
+import org.apache.pinot.spi.utils.builder.TableConfigBuilder;
+import org.testng.Assert;
+import org.testng.annotations.AfterClass;
+import org.testng.annotations.BeforeClass;
+import org.testng.annotations.Test;
+
+
+/**
+ * Queries test for Gapfill queries.
+ */
+// TODO: Item 1. table alias for subquery in next PR
+// TODO: Item 2. Deprecate PostAggregateGapfill implementation in next PR
+@SuppressWarnings("rawtypes")
+public class GapfillQueriesTest extends BaseQueriesTest {
+ private static final File INDEX_DIR = new File(FileUtils.getTempDirectory(), "PostAggregationGapfillQueriesTest");
Review comment:
PostAggregationGapfillQueriesTest is following the different query design. It will be deprecated later. I prefer to keep them as they are.
[GitHub] [pinot] amrishlal commented on a change in pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
amrishlal commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r819826217
##########
File path: pinot-core/src/test/java/org/apache/pinot/queries/GapfillQueriesTest.java
##########
@@ -0,0 +1,3615 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.queries;
+
+import java.io.File;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+import org.apache.commons.io.FileUtils;
+import org.apache.pinot.common.response.broker.BrokerResponseNative;
+import org.apache.pinot.common.response.broker.ResultTable;
+import org.apache.pinot.segment.local.indexsegment.immutable.ImmutableSegmentLoader;
+import org.apache.pinot.segment.local.segment.creator.impl.SegmentIndexCreationDriverImpl;
+import org.apache.pinot.segment.local.segment.readers.GenericRowRecordReader;
+import org.apache.pinot.segment.spi.ImmutableSegment;
+import org.apache.pinot.segment.spi.IndexSegment;
+import org.apache.pinot.segment.spi.creator.SegmentGeneratorConfig;
+import org.apache.pinot.spi.config.table.TableConfig;
+import org.apache.pinot.spi.config.table.TableType;
+import org.apache.pinot.spi.data.DateTimeFormatSpec;
+import org.apache.pinot.spi.data.DateTimeGranularitySpec;
+import org.apache.pinot.spi.data.FieldSpec.DataType;
+import org.apache.pinot.spi.data.Schema;
+import org.apache.pinot.spi.data.readers.GenericRow;
+import org.apache.pinot.spi.utils.ReadMode;
+import org.apache.pinot.spi.utils.builder.TableConfigBuilder;
+import org.testng.Assert;
+import org.testng.annotations.AfterClass;
+import org.testng.annotations.BeforeClass;
+import org.testng.annotations.Test;
+
+
+/**
+ * Queries test for Gapfill queries.
+ */
+// TODO: Item 1. table alias for subquery in next PR
+// TODO: Item 2. Deprecate PostAggregateGapfill implementation in next PR
+@SuppressWarnings("rawtypes")
+public class GapfillQueriesTest extends BaseQueriesTest {
+ private static final File INDEX_DIR = new File(FileUtils.getTempDirectory(), "PostAggregationGapfillQueriesTest");
Review comment:
ok
[GitHub] [pinot] codecov-commenter edited a comment on pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#issuecomment-1013577523
# [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) Report
> Merging [#8029](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (f07e90b) into [master](https://codecov.io/gh/apache/pinot/commit/262dc50e236ed2af25a0cf8c67658a48731ce573?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (262dc50) will **decrease** coverage by `34.61%`.
> The diff coverage is `16.64%`.
[![Impacted file tree graph](https://codecov.io/gh/apache/pinot/pull/8029/graphs/tree.svg?width=650&height=150&src=pr&token=4ibza2ugkz&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
```diff
@@              Coverage Diff               @@
##             master    #8029       +/-   ##
=============================================
- Coverage     70.83%   36.21%   -34.62%
+ Complexity     4258       81     -4177
=============================================
  Files          1636     1645        +9
  Lines         85804    86422      +618
  Branches      12920    13075      +155
=============================================
- Hits          60779    31302    -29477
- Misses        20836    52599    +31763
+ Partials       4189     2521     -1668
```
| Flag | Coverage Δ | |
|---|---|---|
| integration1 | `28.73% <16.64%> (-0.23%)` | :arrow_down: |
| integration2 | `?` | |
| unittests1 | `?` | |
| unittests2 | `14.09% <0.27%> (-0.09%)` | :arrow_down: |
Flags with carried forward coverage won't be shown. [Click here](https://docs.codecov.io/docs/carryforward-flags?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#carryforward-flags-in-the-pull-request-comment) to find out more.
| [Impacted Files](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | Coverage Δ | |
|---|---|---|
| [...t/controller/api/resources/PinotQueryResource.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29udHJvbGxlci9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29udHJvbGxlci9hcGkvcmVzb3VyY2VzL1Bpbm90UXVlcnlSZXNvdXJjZS5qYXZh) | `0.00% <0.00%> (-50.35%)` | :arrow_down: |
| [...query/reduce/ColumnDataToBlockValSetConverter.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvQ29sdW1uRGF0YVRvQmxvY2tWYWxTZXRDb252ZXJ0ZXIuamF2YQ==) | `0.00% <0.00%> (ø)` | |
| [...not/core/query/reduce/GapFillDataTableReducer.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvR2FwRmlsbERhdGFUYWJsZVJlZHVjZXIuamF2YQ==) | `0.00% <0.00%> (ø)` | |
| [.../pinot/core/query/reduce/GapfillFilterHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvR2FwZmlsbEZpbHRlckhhbmRsZXIuamF2YQ==) | `0.00% <0.00%> (ø)` | |
| [...t/core/query/selection/SelectionOperatorUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9zZWxlY3Rpb24vU2VsZWN0aW9uT3BlcmF0b3JVdGlscy5qYXZh) | `69.25% <0.00%> (-22.93%)` | :arrow_down: |
| [.../java/org/apache/pinot/core/util/GapfillUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS91dGlsL0dhcGZpbGxVdGlscy5qYXZh) | `16.93% <12.42%> (-46.71%)` | :arrow_down: |
| [.../pinot/core/query/reduce/ResultReducerFactory.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUmVzdWx0UmVkdWNlckZhY3RvcnkuamF2YQ==) | `55.55% <33.33%> (-25.70%)` | :arrow_down: |
| [...query/request/context/utils/QueryContextUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZXF1ZXN0L2NvbnRleHQvdXRpbHMvUXVlcnlDb250ZXh0VXRpbHMuamF2YQ==) | `44.23% <36.36%> (-33.05%)` | :arrow_down: |
| [...inot/core/query/reduce/PostAggregationHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUG9zdEFnZ3JlZ2F0aW9uSGFuZGxlci5qYXZh) | `83.78% <50.00%> (-7.99%)` | :arrow_down: |
| [.../core/query/reduce/filter/PredicateRowMatcher.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvZmlsdGVyL1ByZWRpY2F0ZVJvd01hdGNoZXIuamF2YQ==) | `50.00% <50.00%> (ø)` | |
| ... and [1008 more](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Last update [262dc50...f07e90b](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
[GitHub] [pinot] codecov-commenter edited a comment on pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#issuecomment-1013577523
# [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) Report
> Merging [#8029](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (df0b289) into [master](https://codecov.io/gh/apache/pinot/commit/f67b5aacc5b494a2f5d78e87d67a84ea3aadc99a?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (f67b5aa) will **decrease** coverage by `56.69%`.
> The diff coverage is `0.00%`.
[![Impacted file tree graph](https://codecov.io/gh/apache/pinot/pull/8029/graphs/tree.svg?width=650&height=150&src=pr&token=4ibza2ugkz&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
```diff
@@              Coverage Diff               @@
##             master    #8029       +/-   ##
=============================================
- Coverage     70.76%   14.07%   -56.70%
+ Complexity     4244       81     -4163
=============================================
  Files          1631     1596       -35
  Lines         85490    84232     -1258
  Branches      12878    12830       -48
=============================================
- Hits          60499    11856    -48643
- Misses        20819    71485    +50666
+ Partials       4172      891     -3281
```
| Flag | Coverage Δ | |
|---|---|---|
| integration1 | `?` | |
| integration2 | `?` | |
| unittests1 | `?` | |
| unittests2 | `14.07% <0.00%> (-0.14%)` | :arrow_down: |
| [Impacted Files](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | Coverage Δ | |
|---|---|---|
| [...roker/requesthandler/BaseBrokerRequestHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtYnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9icm9rZXIvcmVxdWVzdGhhbmRsZXIvQmFzZUJyb2tlclJlcXVlc3RIYW5kbGVyLmphdmE=) | `24.41% <0.00%> (-47.44%)` | :arrow_down: |
| [...org/apache/pinot/sql/parsers/CalciteSqlParser.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9zcWwvcGFyc2Vycy9DYWxjaXRlU3FsUGFyc2VyLmphdmE=) | `0.00% <0.00%> (-86.56%)` | :arrow_down: |
| [...va/org/apache/pinot/core/plan/CombinePlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL0NvbWJpbmVQbGFuTm9kZS5qYXZh) | `0.00% <0.00%> (-87.50%)` | :arrow_down: |
| [...ache/pinot/core/plan/GapfillSelectionPlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL0dhcGZpbGxTZWxlY3Rpb25QbGFuTm9kZS5qYXZh) | `0.00% <0.00%> (ø)` | |
| [...pinot/core/plan/maker/InstancePlanMakerImplV2.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL21ha2VyL0luc3RhbmNlUGxhbk1ha2VySW1wbFYyLmphdmE=) | `0.00% <0.00%> (-76.77%)` | :arrow_down: |
| [...query/reduce/ColumnDataToBlockValSetConverter.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvQ29sdW1uRGF0YVRvQmxvY2tWYWxTZXRDb252ZXJ0ZXIuamF2YQ==) | `0.00% <0.00%> (ø)` | |
| [...not/core/query/reduce/GapFillDataTableReducer.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvR2FwRmlsbERhdGFUYWJsZVJlZHVjZXIuamF2YQ==) | `0.00% <0.00%> (ø)` | |
| [.../pinot/core/query/reduce/GapfillFilterHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvR2FwZmlsbEZpbHRlckhhbmRsZXIuamF2YQ==) | `0.00% <0.00%> (ø)` | |
| [...e/pinot/core/query/reduce/HavingFilterHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvSGF2aW5nRmlsdGVySGFuZGxlci5qYXZh) | `0.00% <0.00%> (-91.31%)` | :arrow_down: |
| [...inot/core/query/reduce/PostAggregationHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUG9zdEFnZ3JlZ2F0aW9uSGFuZGxlci5qYXZh) | `0.00% <0.00%> (-91.77%)` | :arrow_down: |
| ... and [1320 more](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | |
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r787215100
##########
File path: pinot-common/src/thrift/query.thrift
##########
@@ -20,6 +20,7 @@ namespace java org.apache.pinot.common.request
 struct DataSource {
   1: optional string tableName;
+  2: optional PinotQuery preAggregateGapfillQuery;
Review comment:
Done
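For context, the hunk under review nests the gapfill input query inside the Thrift DataSource. A minimal sketch of the resulting struct, reconstructed from the diff above (field names and ids come from the diff; the comments are illustrative, not from the source file):

```thrift
// Sketch of the DataSource struct in pinot-common/src/thrift/query.thrift
// after this change (comments added here for illustration).
struct DataSource {
  // The table the outer query reads from.
  1: optional string tableName;
  // Nested PinotQuery whose result is gapfilled before aggregation,
  // allowing a subquery to act as the data source.
  2: optional PinotQuery preAggregateGapfillQuery;
}
```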
[GitHub] [pinot] codecov-commenter edited a comment on pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#issuecomment-1013577523
# [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) Report
> Merging [#8029](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (a9f2578) into [master](https://codecov.io/gh/apache/pinot/commit/1d1a7d34709b6a89985a610f46dd1c97d6c9271a?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (1d1a7d3) will **decrease** coverage by `0.11%`.
> The diff coverage is `74.08%`.
[![Impacted file tree graph](https://codecov.io/gh/apache/pinot/pull/8029/graphs/tree.svg?width=650&height=150&src=pr&token=4ibza2ugkz&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
```diff
@@             Coverage Diff              @@
##             master    #8029      +/-   ##
============================================
- Coverage     71.40%   71.29%   -0.12%
- Complexity     4223     4224       +1
============================================
  Files          1597     1610      +13
  Lines         82903    83317     +414
  Branches      12369    12449      +80
============================================
+ Hits          59201    59403     +202
- Misses        19689    19874     +185
- Partials       4013     4040      +27
```
| Flag | Coverage Δ | |
|---|---|---|
| integration1 | `28.86% <16.05%> (-0.14%)` | :arrow_down: |
| integration2 | `27.56% <16.51%> (-0.15%)` | :arrow_down: |
| unittests1 | `68.13% <73.62%> (-0.01%)` | :arrow_down: |
| unittests2 | `14.24% <0.00%> (-0.13%)` | :arrow_down: |
| [Impacted Files](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | Coverage Δ | |
|---|---|---|
| [...query/reduce/ColumnDataToBlockValSetConverter.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvQ29sdW1uRGF0YVRvQmxvY2tWYWxTZXRDb252ZXJ0ZXIuamF2YQ==) | `17.30% <17.30%> (ø)` | |
| [.../java/org/apache/pinot/core/util/GapfillUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS91dGlsL0dhcGZpbGxVdGlscy5qYXZh) | `58.73% <54.83%> (-4.91%)` | :arrow_down: |
| [...query/request/context/utils/QueryContextUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZXF1ZXN0L2NvbnRleHQvdXRpbHMvUXVlcnlDb250ZXh0VXRpbHMuamF2YQ==) | `73.91% <60.00%> (-3.36%)` | :arrow_down: |
| [.../reduce/PreAggregationGapFillDataTableReducer.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUHJlQWdncmVnYXRpb25HYXBGaWxsRGF0YVRhYmxlUmVkdWNlci5qYXZh) | `63.88% <63.88%> (ø)` | |
| [...inot/core/query/reduce/PostAggregationHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUG9zdEFnZ3JlZ2F0aW9uSGFuZGxlci5qYXZh) | `92.42% <66.66%> (+0.11%)` | :arrow_up: |
| [...che/pinot/core/query/reduce/filter/RowMatcher.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvZmlsdGVyL1Jvd01hdGNoZXIuamF2YQ==) | `66.66% <66.66%> (ø)` | |
| [...ore/query/reduce/filter/LiteralValueExtractor.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvZmlsdGVyL0xpdGVyYWxWYWx1ZUV4dHJhY3Rvci5qYXZh) | `83.33% <83.33%> (ø)` | |
| [...PreAggregationGapFillSelectionOperatorService.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUHJlQWdncmVnYXRpb25HYXBGaWxsU2VsZWN0aW9uT3BlcmF0b3JTZXJ2aWNlLmphdmE=) | `83.51% <83.51%> (ø)` | |
| [...query/reduce/PreAggregateGapfillFilterHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUHJlQWdncmVnYXRlR2FwZmlsbEZpbHRlckhhbmRsZXIuamF2YQ==) | `85.71% <85.71%> (ø)` | |
| [.../core/query/reduce/filter/PredicateRowMatcher.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvZmlsdGVyL1ByZWRpY2F0ZVJvd01hdGNoZXIuamF2YQ==) | `87.50% <87.50%> (ø)` | |
| ... and [62 more](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | |
[GitHub] [pinot] codecov-commenter edited a comment on pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#issuecomment-1013577523
# [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) Report
> Merging [#8029](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (bbde0cc) into [master](https://codecov.io/gh/apache/pinot/commit/cc2f3fe196d29a0d716bfee07add9b761e8fa98e?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (cc2f3fe) will **decrease** coverage by `50.44%`.
> The diff coverage is `0.00%`.
[![Impacted file tree graph](https://codecov.io/gh/apache/pinot/pull/8029/graphs/tree.svg?width=650&height=150&src=pr&token=4ibza2ugkz&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
```diff
@@              Coverage Diff               @@
##             master    #8029       +/-   ##
=============================================
- Coverage     64.63%   14.19%   -50.45%
+ Complexity     4260       81     -4179
=============================================
  Files          1562     1572       +10
  Lines         81525    81856      +331
  Branches      12252    12325       +73
=============================================
- Hits          52695    11620    -41075
- Misses        25072    69369    +44297
+ Partials       3758      867     -2891
```
| Flag | Coverage Δ | |
|---|---|---|
| unittests1 | `?` | |
| unittests2 | `14.19% <0.00%> (-0.03%)` | :arrow_down: |
Flags with carried forward coverage won't be shown. [Click here](https://docs.codecov.io/docs/carryforward-flags?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#carryforward-flags-in-the-pull-request-comment) to find out more.
| [Impacted Files](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | Coverage Δ | |
|---|---|---|
| [...org/apache/pinot/sql/parsers/CalciteSqlParser.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9zcWwvcGFyc2Vycy9DYWxjaXRlU3FsUGFyc2VyLmphdmE=) | `0.00% <0.00%> (-86.80%)` | :arrow_down: |
| [...va/org/apache/pinot/core/plan/CombinePlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL0NvbWJpbmVQbGFuTm9kZS5qYXZh) | `0.00% <0.00%> (-83.93%)` | :arrow_down: |
| [...inot/core/plan/PreAggGapFillSelectionPlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL1ByZUFnZ0dhcEZpbGxTZWxlY3Rpb25QbGFuTm9kZS5qYXZh) | `0.00% <0.00%> (ø)` | |
| [...pinot/core/plan/maker/InstancePlanMakerImplV2.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL21ha2VyL0luc3RhbmNlUGxhbk1ha2VySW1wbFYyLmphdmE=) | `0.00% <0.00%> (-62.63%)` | :arrow_down: |
| [...query/reduce/ColumnDataToBlockValSetConverter.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvQ29sdW1uRGF0YVRvQmxvY2tWYWxTZXRDb252ZXJ0ZXIuamF2YQ==) | `0.00% <0.00%> (ø)` | |
| [...e/pinot/core/query/reduce/HavingFilterHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvSGF2aW5nRmlsdGVySGFuZGxlci5qYXZh) | `0.00% <0.00%> (-91.31%)` | :arrow_down: |
| [...inot/core/query/reduce/PostAggregationHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUG9zdEFnZ3JlZ2F0aW9uSGFuZGxlci5qYXZh) | `0.00% <0.00%> (-92.31%)` | :arrow_down: |
| [...query/reduce/PreAggregateGapfillFilterHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUHJlQWdncmVnYXRlR2FwZmlsbEZpbHRlckhhbmRsZXIuamF2YQ==) | `0.00% <0.00%> (ø)` | |
| [.../reduce/PreAggregationGapFillDataTableReducer.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUHJlQWdncmVnYXRpb25HYXBGaWxsRGF0YVRhYmxlUmVkdWNlci5qYXZh) | `0.00% <0.00%> (ø)` | |
| [.../pinot/core/query/reduce/ResultReducerFactory.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUmVzdWx0UmVkdWNlckZhY3RvcnkuamF2YQ==) | `0.00% <0.00%> (-75.00%)` | :arrow_down: |
| ... and [1092 more](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Last update [cc2f3fe...bbde0cc](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: commits-unsubscribe@pinot.apache.org
For queries about this service, please contact Infrastructure at:
users@infra.apache.org
---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@pinot.apache.org
For additional commands, e-mail: commits-help@pinot.apache.org
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r829440629
##########
File path: pinot-common/src/main/java/org/apache/pinot/sql/parsers/CalciteSqlParser.java
##########
@@ -100,6 +95,9 @@ private CalciteSqlParser() {
private static final Pattern OPTIONS_REGEX_PATTEN =
Pattern.compile("option\\s*\\(([^\\)]+)\\)", Pattern.CASE_INSENSITIVE);
+ private CalciteSqlParser() {
Review comment:
Fixed
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r829450123
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/reduce/ColumnDataToBlockValSetConverter.java
##########
@@ -0,0 +1,181 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.core.query.reduce;
+
+import java.util.List;
+import javax.annotation.Nullable;
+import org.apache.pinot.common.utils.DataSchema;
+import org.apache.pinot.common.utils.DataTable;
+import org.apache.pinot.core.common.BlockValSet;
+import org.apache.pinot.segment.spi.index.reader.Dictionary;
+import org.apache.pinot.spi.data.FieldSpec;
+
+
+/**
+ * For the gapfilling function, all raw data are retrieved from the pinot
+ * servers and merged on the pinot broker. The data will be in {@link DataTable}
+ * format.
+ * As part of the gapfilling execution plan, the aggregation function works on
+ * the merged data on the pinot broker. The aggregation function only takes
+ * the {@link BlockValSet} format.
+ * This is the helper class that converts the data from {@link DataTable} to the
+ * block of values {@link BlockValSet} which is used as input to the aggregation
+ * function.
+ */
+@SuppressWarnings({"rawtypes", "unchecked"})
+public class ColumnDataToBlockValSetConverter implements BlockValSet {
Review comment:
Fixed
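The javadoc in the diff above describes adapting merged row-major data into the columnar view that an aggregation function consumes. A minimal standalone sketch of that idea (all names here are hypothetical; this is not Pinot's actual DataTable/BlockValSet API):

```java
import java.util.List;

/**
 * Sketch of exposing one column of row-major merged rows as a block of
 * values, so an aggregation function can read it column-wise.
 * DoubleBlock is a stand-in for a BlockValSet-style interface.
 */
public class ColumnAdapterSketch {
    interface DoubleBlock {
        double[] getDoubleValues();
    }

    // Wrap the given column index of the row-major data as a value block.
    static DoubleBlock fromRows(List<Object[]> rows, int columnIndex) {
        return () -> {
            double[] values = new double[rows.size()];
            for (int i = 0; i < rows.size(); i++) {
                values[i] = ((Number) rows.get(i)[columnIndex]).doubleValue();
            }
            return values;
        };
    }
}
```

The adapter defers materialization until the aggregation actually asks for the values, which is the same lazy shape a per-type converter over a DataTable would take.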
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r829580728
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/reduce/GapFillProcessor.java
##########
@@ -0,0 +1,455 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.core.query.reduce;
+
+import com.google.common.base.Preconditions;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import org.apache.pinot.common.request.context.ExpressionContext;
+import org.apache.pinot.common.request.context.FunctionContext;
+import org.apache.pinot.common.response.broker.BrokerResponseNative;
+import org.apache.pinot.common.response.broker.ResultTable;
+import org.apache.pinot.common.utils.DataSchema;
+import org.apache.pinot.common.utils.DataSchema.ColumnDataType;
+import org.apache.pinot.core.common.BlockValSet;
+import org.apache.pinot.core.data.table.Key;
+import org.apache.pinot.core.query.aggregation.function.AggregationFunction;
+import org.apache.pinot.core.query.aggregation.function.AggregationFunctionFactory;
+import org.apache.pinot.core.query.aggregation.groupby.GroupByResultHolder;
+import org.apache.pinot.core.query.request.context.QueryContext;
+import org.apache.pinot.core.util.GapfillUtils;
+import org.apache.pinot.spi.data.DateTimeFormatSpec;
+import org.apache.pinot.spi.data.DateTimeGranularitySpec;
+
+
+/**
+ * Helper class to reduce and set gap fill results into the BrokerResponseNative
+ */
+@SuppressWarnings({"rawtypes", "unchecked"})
+public class GapFillProcessor {
+ private final QueryContext _queryContext;
+
+ private final int _limitForAggregatedResult;
+ private final DateTimeGranularitySpec _dateTimeGranularity;
+ private final DateTimeFormatSpec _dateTimeFormatter;
+ private final long _startMs;
+ private final long _endMs;
+ private final long _timeBucketSize;
+ private final int _numOfTimeBuckets;
+ private final List<Integer> _groupByKeyIndexes;
+ private final Set<Key> _groupByKeys;
+ private final Map<Key, Object[]> _previousByGroupKey;
+ private final Map<String, ExpressionContext> _fillExpressions;
+ private final List<ExpressionContext> _timeSeries;
+ private final GapfillUtils.GapfillType _gapfillType;
+ private int _limitForGapfilledResult;
+ private boolean[] _isGroupBySelections;
+ private final int _timeBucketColumnIndex;
+ private int[] _sourceColumnIndexForResultSchema = null;
+
+ GapFillProcessor(QueryContext queryContext) {
+ _queryContext = queryContext;
+ _gapfillType = queryContext.getGapfillType();
+ _limitForAggregatedResult = queryContext.getLimit();
+ if (_gapfillType == GapfillUtils.GapfillType.AGGREGATE_GAP_FILL
+ || _gapfillType == GapfillUtils.GapfillType.GAP_FILL) {
+ _limitForGapfilledResult = queryContext.getLimit();
+ } else {
+ _limitForGapfilledResult = queryContext.getSubQueryContext().getLimit();
+ }
+
+ ExpressionContext gapFillSelection = GapfillUtils.getGapfillExpressionContext(queryContext);
+ _timeBucketColumnIndex = GapfillUtils.findTimeBucketColumnIndex(queryContext);
+
+ List<ExpressionContext> args = gapFillSelection.getFunction().getArguments();
+
+ _dateTimeFormatter = new DateTimeFormatSpec(args.get(1).getLiteral());
+ _dateTimeGranularity = new DateTimeGranularitySpec(args.get(4).getLiteral());
+ String start = args.get(2).getLiteral();
+ _startMs = truncate(_dateTimeFormatter.fromFormatToMillis(start));
+ String end = args.get(3).getLiteral();
+ _endMs = truncate(_dateTimeFormatter.fromFormatToMillis(end));
+ _timeBucketSize = _dateTimeGranularity.granularityToMillis();
+ _numOfTimeBuckets = (int) ((_endMs - _startMs) / _timeBucketSize);
+
+ _fillExpressions = GapfillUtils.getFillExpressions(gapFillSelection);
+
+ _previousByGroupKey = new HashMap<>();
+ _groupByKeyIndexes = new ArrayList<>();
+ _groupByKeys = new HashSet<>();
+
+ ExpressionContext timeseriesOn = GapfillUtils.getTimeSeriesOnExpressionContext(gapFillSelection);
+ _timeSeries = timeseriesOn.getFunction().getArguments();
+ }
+
+ private int findBucketIndex(long time) {
+ return (int) ((time - _startMs) / _timeBucketSize);
+ }
+
+ private void replaceColumnNameWithAlias(DataSchema dataSchema) {
+ QueryContext queryContext;
+ if (_gapfillType == GapfillUtils.GapfillType.AGGREGATE_GAP_FILL_AGGREGATE) {
+ queryContext = _queryContext.getSubQueryContext().getSubQueryContext();
+ } else if (_gapfillType == GapfillUtils.GapfillType.GAP_FILL) {
+ queryContext = _queryContext;
+ } else {
+ queryContext = _queryContext.getSubQueryContext();
+ }
+ List<String> aliasList = queryContext.getAliasList();
+ Map<String, String> columnNameToAliasMap = new HashMap<>();
+ for (int i = 0; i < aliasList.size(); i++) {
+ if (aliasList.get(i) != null) {
+ ExpressionContext selection = queryContext.getSelectExpressions().get(i);
+ if (GapfillUtils.isGapfill(selection)) {
+ selection = selection.getFunction().getArguments().get(0);
+ }
+ columnNameToAliasMap.put(selection.toString(), aliasList.get(i));
+ }
+ }
+ for (int i = 0; i < dataSchema.getColumnNames().length; i++) {
+ if (columnNameToAliasMap.containsKey(dataSchema.getColumnNames()[i])) {
+ dataSchema.getColumnNames()[i] = columnNameToAliasMap.get(dataSchema.getColumnNames()[i]);
+ }
+ }
+ }
+
+ /**
+ * Three things happen here:
+ * 1. Sort the result sets from all pinot servers based on timestamp
+ * 2. Gapfill the data for missing entities per time bucket
+ * 3. Aggregate the dataset per time bucket
+ */
+ public void process(BrokerResponseNative brokerResponseNative) {
+ DataSchema dataSchema = brokerResponseNative.getResultTable().getDataSchema();
+ DataSchema resultTableSchema = getResultTableDataSchema(dataSchema);
+ if (brokerResponseNative.getResultTable().getRows().isEmpty()) {
+ brokerResponseNative.setResultTable(new ResultTable(resultTableSchema, Collections.emptyList()));
+ return;
+ }
+
+ String[] columns = dataSchema.getColumnNames();
+
+ Map<String, Integer> indexes = new HashMap<>();
+ for (int i = 0; i < columns.length; i++) {
+ indexes.put(columns[i], i);
+ }
+
+ _isGroupBySelections = new boolean[dataSchema.getColumnDataTypes().length];
+
+ // The first argument of timeSeries is the time column. The remaining arguments define the entity.
+ for (ExpressionContext entityColum : _timeSeries) {
+ int index = indexes.get(entityColum.getIdentifier());
+ _isGroupBySelections[index] = true;
+ }
+
+ for (int i = 0; i < _isGroupBySelections.length; i++) {
+ if (_isGroupBySelections[i]) {
+ _groupByKeyIndexes.add(i);
+ }
+ }
+
+ List<Object[]>[] timeBucketedRawRows = putRawRowsIntoTimeBucket(brokerResponseNative.getResultTable().getRows());
+
+ List<Object[]> resultRows;
+ replaceColumnNameWithAlias(dataSchema);
+
+ if (_queryContext.getAggregationFunctions() == null) {
+
+ Map<String, Integer> sourceColumnsIndexes = new HashMap<>();
+ for (int i = 0; i < dataSchema.getColumnNames().length; i++) {
+ sourceColumnsIndexes.put(dataSchema.getColumnName(i), i);
+ }
+ _sourceColumnIndexForResultSchema = new int[resultTableSchema.getColumnNames().length];
+ for (int i = 0; i < _sourceColumnIndexForResultSchema.length; i++) {
+ _sourceColumnIndexForResultSchema[i] = sourceColumnsIndexes.get(resultTableSchema.getColumnName(i));
+ }
+ }
+
+ if (_gapfillType == GapfillUtils.GapfillType.GAP_FILL_AGGREGATE || _gapfillType == GapfillUtils.GapfillType.GAP_FILL
+ || _gapfillType == GapfillUtils.GapfillType.GAP_FILL_SELECT) {
+ List<Object[]> gapfilledRows = gapFillAndAggregate(timeBucketedRawRows, resultTableSchema, dataSchema);
+ if (_gapfillType == GapfillUtils.GapfillType.GAP_FILL_SELECT) {
+ resultRows = new ArrayList<>(gapfilledRows.size());
+ resultRows.addAll(gapfilledRows);
+ } else {
+ resultRows = gapfilledRows;
+ }
+ } else {
+ resultRows = gapFillAndAggregate(timeBucketedRawRows, resultTableSchema, dataSchema);
+ }
Review comment:
Good catch, Fixed.
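The GapFillProcessor diff above buckets rows by timestamp, fills a missing entity's value in a bucket from its previous bucket, then aggregates per bucket. A minimal self-contained sketch of those three steps, using SUM as the aggregation (all names and the row layout `[timestampMs, entity, value]` are hypothetical simplifications, not Pinot's actual classes):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class GapFillSketch {
    // Each row: [timestampMs, entity, value]
    public static Map<Long, Double> gapFillAndSum(List<Object[]> rows, long startMs, long endMs,
            long bucketSizeMs, Set<String> entities) {
        int numBuckets = (int) ((endMs - startMs) / bucketSizeMs);
        // 1. Place raw rows into their time bucket, keyed by entity.
        List<Map<String, Double>> buckets = new ArrayList<>();
        for (int i = 0; i < numBuckets; i++) {
            buckets.add(new HashMap<>());
        }
        for (Object[] row : rows) {
            int idx = (int) ((((Number) row[0]).longValue() - startMs) / bucketSizeMs);
            if (idx >= 0 && idx < numBuckets) {
                buckets.get(idx).put((String) row[1], (Double) row[2]);
            }
        }
        // 2. Gapfill: carry each entity's last observed value into buckets where it is missing.
        Map<String, Double> previous = new HashMap<>();
        for (Map<String, Double> bucket : buckets) {
            for (String entity : entities) {
                if (bucket.containsKey(entity)) {
                    previous.put(entity, bucket.get(entity));
                } else if (previous.containsKey(entity)) {
                    bucket.put(entity, previous.get(entity));
                }
            }
        }
        // 3. Aggregate (SUM) per bucket.
        Map<Long, Double> result = new LinkedHashMap<>();
        for (int i = 0; i < numBuckets; i++) {
            double sum = 0;
            for (double v : buckets.get(i).values()) {
                sum += v;
            }
            result.put(startMs + i * bucketSizeMs, sum);
        }
        return result;
    }
}
```

The actual reducer additionally handles result-schema remapping, group-by keys beyond a single entity column, and the different gapfill query shapes, but the bucket/fill/aggregate order is the same.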
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r829582532
##########
File path: pinot-controller/src/main/java/org/apache/pinot/controller/api/resources/PinotQueryResource.java
##########
@@ -162,8 +163,7 @@ public String getQueryResponse(String query, String traceEnabled, String queryOp
String inputTableName;
switch (querySyntax) {
case CommonConstants.Broker.Request.SQL:
- inputTableName =
- SQL_QUERY_COMPILER.compileToBrokerRequest(query).getPinotQuery().getDataSource().getTableName();
+ inputTableName = GapfillUtils.getTableName(SQL_QUERY_COMPILER.compileToBrokerRequest(query).getPinotQuery());
Review comment:
Fixed.
[GitHub] [pinot] codecov-commenter edited a comment on pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#issuecomment-1013577523
# [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) Report
> Merging [#8029](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (dc923b1) into [master](https://codecov.io/gh/apache/pinot/commit/21632dadb8cd2d8b77aec523a758d73a64f70b07?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (21632da) will **decrease** coverage by `6.50%`.
> The diff coverage is `82.00%`.
[![Impacted file tree graph](https://codecov.io/gh/apache/pinot/pull/8029/graphs/tree.svg?width=650&height=150&src=pr&token=4ibza2ugkz&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
```diff
@@             Coverage Diff             @@
##             master    #8029      +/-   ##
============================================
- Coverage     71.00%   64.49%    -6.51%
- Complexity     4320     4322       +2
============================================
  Files          1629     1594      -35
  Lines         85132    83884    -1248
  Branches      12812    12766      -46
============================================
- Hits          60445    54099    -6346
- Misses        20526    25896    +5370
+ Partials       4161     3889     -272
```
| Flag | Coverage Δ | |
|---|---|---|
| integration1 | `?` | |
| integration2 | `?` | |
| unittests1 | `67.55% <82.23%> (+0.17%)` | :arrow_up: |
| unittests2 | `14.01% <0.00%> (-0.14%)` | :arrow_down: |
Flags with carried forward coverage won't be shown. [Click here](https://docs.codecov.io/docs/carryforward-flags?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#carryforward-flags-in-the-pull-request-comment) to find out more.
| [Impacted Files](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | Coverage Δ | |
|---|---|---|
| [...roker/requesthandler/BaseBrokerRequestHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtYnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9icm9rZXIvcmVxdWVzdGhhbmRsZXIvQmFzZUJyb2tlclJlcXVlc3RIYW5kbGVyLmphdmE=) | `24.41% <0.00%> (-47.26%)` | :arrow_down: |
| [...t/core/query/selection/SelectionOperatorUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9zZWxlY3Rpb24vU2VsZWN0aW9uT3BlcmF0b3JVdGlscy5qYXZh) | `86.08% <0.00%> (-6.10%)` | :arrow_down: |
| [...query/reduce/ColumnDataToBlockValSetConverter.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvQ29sdW1uRGF0YVRvQmxvY2tWYWxTZXRDb252ZXJ0ZXIuamF2YQ==) | `17.30% <17.30%> (ø)` | |
| [...che/pinot/core/query/reduce/filter/RowMatcher.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvZmlsdGVyL1Jvd01hdGNoZXIuamF2YQ==) | `66.66% <66.66%> (ø)` | |
| [.../java/org/apache/pinot/core/util/GapfillUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS91dGlsL0dhcGZpbGxVdGlscy5qYXZh) | `75.24% <81.42%> (+11.61%)` | :arrow_up: |
| [...va/org/apache/pinot/core/plan/CombinePlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL0NvbWJpbmVQbGFuTm9kZS5qYXZh) | `83.33% <81.81%> (-4.17%)` | :arrow_down: |
| [...inot/core/query/reduce/PostAggregationHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUG9zdEFnZ3JlZ2F0aW9uSGFuZGxlci5qYXZh) | `92.00% <83.33%> (+0.23%)` | :arrow_up: |
| [...ore/query/reduce/filter/LiteralValueExtractor.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvZmlsdGVyL0xpdGVyYWxWYWx1ZUV4dHJhY3Rvci5qYXZh) | `83.33% <83.33%> (ø)` | |
| [...query/reduce/PreAggregateGapfillFilterHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUHJlQWdncmVnYXRlR2FwZmlsbEZpbHRlckhhbmRsZXIuamF2YQ==) | `85.71% <85.71%> (ø)` | |
| [.../reduce/PreAggregationGapFillDataTableReducer.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUHJlQWdncmVnYXRpb25HYXBGaWxsRGF0YVRhYmxlUmVkdWNlci5qYXZh) | `86.73% <86.73%> (ø)` | |
| ... and [396 more](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Last update [21632da...dc923b1](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
[GitHub] [pinot] weixiangsun commented on pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#issuecomment-1053738973
> High level question: Since the gap-fill is always processed on the broker side, will it be simpler if we always rewrite the query on the broker side to trim off the gap-fill part, so that no change is required on the server side? Another benefit of this approach is that this can eliminate the overhead of parsing sub-queries and processing gap-fill again on the servers.
If the gap-fill part is trimmed off, it is hard to tell whether a query is a gapfill query for the Gapfill query type. We do not parse the sub-queries or process gap-fill again on the servers.
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r815480509
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/plan/CombinePlanNode.java
##########
@@ -161,8 +162,18 @@ public BaseCombineOperator run() {
       // Streaming query (only support selection only)
       return new StreamingSelectionOnlyCombineOperator(operators, _queryContext, _executorService, _streamObserver);
     }
+    GapfillUtils.GapfillType gapfillType = _queryContext.getGapfillType();
     if (QueryContextUtils.isAggregationQuery(_queryContext)) {
-      if (_queryContext.getGroupByExpressions() == null) {
+      if (gapfillType == GapfillUtils.GapfillType.AGGREGATE_GAP_FILL_AGGREGATE) {
+        _queryContext.getSubQueryContext().getSubQueryContext().setEndTimeMs(_queryContext.getEndTimeMs());
+        return new GroupByOrderByCombineOperator(
Review comment:
Fixed
[GitHub] [pinot] codecov-commenter edited a comment on pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#issuecomment-1013577523
# [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) Report
> Merging [#8029](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (a5316f7) into [master](https://codecov.io/gh/apache/pinot/commit/3f98ce37fdaef0335fcd82e621489d65751b1f55?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (3f98ce3) will **increase** coverage by `0.12%`.
> The diff coverage is `82.59%`.
[![Impacted file tree graph](https://codecov.io/gh/apache/pinot/pull/8029/graphs/tree.svg?width=650&height=150&src=pr&token=4ibza2ugkz&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
```diff
@@             Coverage Diff              @@
##             master    #8029      +/-   ##
============================================
+ Coverage     70.72%   70.85%   +0.12%
+ Complexity     4242     4241       -1
============================================
  Files          1631     1641      +10
  Lines         85279    85951     +672
  Branches      12844    13012     +168
============================================
+ Hits          60316    60901     +585
- Misses        20799    20845      +46
- Partials       4164     4205      +41
```
| Flag | Coverage Δ | |
|---|---|---|
| integration1 | `28.64% <15.63%> (-0.05%)` | :arrow_down: |
| integration2 | `27.33% <16.14%> (-0.17%)` | :arrow_down: |
| unittests1 | `67.14% <82.30%> (+0.15%)` | :arrow_up: |
| unittests2 | `13.99% <0.00%> (-0.10%)` | :arrow_down: |
Flags with carried forward coverage won't be shown. [Click here](https://docs.codecov.io/docs/carryforward-flags?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#carryforward-flags-in-the-pull-request-comment) to find out more.
| [Impacted Files](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | Coverage Δ | |
|---|---|---|
| [...roker/requesthandler/BaseBrokerRequestHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtYnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9icm9rZXIvcmVxdWVzdGhhbmRsZXIvQmFzZUJyb2tlclJlcXVlc3RIYW5kbGVyLmphdmE=) | `71.72% <0.00%> (-0.13%)` | :arrow_down: |
| [...t/core/query/selection/SelectionOperatorUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9zZWxlY3Rpb24vU2VsZWN0aW9uT3BlcmF0b3JVdGlscy5qYXZh) | `91.58% <0.00%> (-0.60%)` | :arrow_down: |
| [...query/reduce/ColumnDataToBlockValSetConverter.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvQ29sdW1uRGF0YVRvQmxvY2tWYWxTZXRDb252ZXJ0ZXIuamF2YQ==) | `17.30% <17.30%> (ø)` | |
| [...ot/core/query/reduce/filter/RowMatcherFactory.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvZmlsdGVyL1Jvd01hdGNoZXJGYWN0b3J5LmphdmE=) | `66.66% <66.66%> (ø)` | |
| [...inot/core/query/reduce/PostAggregationHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUG9zdEFnZ3JlZ2F0aW9uSGFuZGxlci5qYXZh) | `92.10% <75.00%> (+0.34%)` | :arrow_up: |
| [...xt/utils/BrokerRequestToQueryContextConverter.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZXF1ZXN0L2NvbnRleHQvdXRpbHMvQnJva2VyUmVxdWVzdFRvUXVlcnlDb250ZXh0Q29udmVydGVyLmphdmE=) | `92.68% <76.74%> (-5.71%)` | :arrow_down: |
| [.../java/org/apache/pinot/core/util/GapfillUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS91dGlsL0dhcGZpbGxVdGlscy5qYXZh) | `76.53% <82.85%> (+12.89%)` | :arrow_up: |
| [...va/org/apache/pinot/core/plan/CombinePlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL0NvbWJpbmVQbGFuTm9kZS5qYXZh) | `86.36% <83.33%> (-1.14%)` | :arrow_down: |
| [...ore/query/reduce/filter/LiteralValueExtractor.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvZmlsdGVyL0xpdGVyYWxWYWx1ZUV4dHJhY3Rvci5qYXZh) | `83.33% <83.33%> (ø)` | |
| [...ache/pinot/core/plan/GapfillSelectionPlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL0dhcGZpbGxTZWxlY3Rpb25QbGFuTm9kZS5qYXZh) | `84.21% <84.21%> (ø)` | |
| ... and [42 more](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Last update [3f98ce3...a5316f7](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r814352908
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/reduce/filter/RowMatcher.java
##########
@@ -0,0 +1,49 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.core.query.reduce.filter;
+
+import org.apache.pinot.common.request.context.FilterContext;
+
+
+/**
+ * Filter matcher for the rows.
+ */
+public interface RowMatcher {
+  /**
+   * Returns {@code true} if the given row matches the filter, {@code false} otherwise.
+   */
+  boolean isMatch(Object[] row);
+
+  /**
+   * Helper method to construct a RowMatcher based on the given filter.
+   */
+  public static RowMatcher getRowMatcher(FilterContext filter, ValueExtractorFactory valueExtractorFactory) {
Review comment:
Fixed
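For readers following along, the RowMatcher interface above is a single-method predicate over Object[] rows. A minimal, self-contained sketch of how such matchers compose (the combining in Pinot is done by RowMatcher.getRowMatcher from a FilterContext; this standalone stand-in, including the helper names, is an assumption for illustration):

```java
// Hedged sketch of RowMatcher-style composition over Object[] rows.
// greaterOrEqual/and are hypothetical helpers, not part of the PR.
public class RowMatcherSketch {
  interface RowMatcher {
    boolean isMatch(Object[] row);
  }

  // Matches rows whose value at columnIndex is >= threshold.
  static RowMatcher greaterOrEqual(int columnIndex, double threshold) {
    return row -> ((Number) row[columnIndex]).doubleValue() >= threshold;
  }

  // AND-combination, analogous to how an AND node in a FilterContext
  // combines its child filters.
  static RowMatcher and(RowMatcher a, RowMatcher b) {
    return row -> a.isMatch(row) && b.isMatch(row);
  }
}
```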
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r815193524
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/reduce/PreAggregateGapfillFilterHandler.java
##########
@@ -0,0 +1,74 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.core.query.reduce;
+
+import java.util.HashMap;
+import java.util.Map;
+import org.apache.pinot.common.request.context.ExpressionContext;
+import org.apache.pinot.common.request.context.FilterContext;
+import org.apache.pinot.common.utils.DataSchema;
+import org.apache.pinot.core.query.reduce.filter.ColumnValueExtractor;
+import org.apache.pinot.core.query.reduce.filter.LiteralValueExtractor;
+import org.apache.pinot.core.query.reduce.filter.RowMatcher;
+import org.apache.pinot.core.query.reduce.filter.ValueExtractor;
+import org.apache.pinot.core.query.reduce.filter.ValueExtractorFactory;
+import org.apache.pinot.core.util.GapfillUtils;
+
+/**
+ * Handler for Filter clause of PreAggregateGapFill.
+ */
+public class PreAggregateGapfillFilterHandler implements ValueExtractorFactory {
+  private final RowMatcher _rowMatcher;
+  private final DataSchema _dataSchema;
+  private final Map<String, Integer> _indexes;
+
+  public PreAggregateGapfillFilterHandler(FilterContext filter, DataSchema dataSchema) {
+    _dataSchema = dataSchema;
+    _indexes = new HashMap<>();
+    for (int i = 0; i < _dataSchema.size(); i++) {
+      _indexes.put(_dataSchema.getColumnName(i), i);
+    }
+    _rowMatcher = RowMatcher.getRowMatcher(filter, this);
+  }
+
+  /**
+   * Returns {@code true} if the given row matches the HAVING clause, {@code false} otherwise.
+   */
+  public boolean isMatch(Object[] row) {
+    return _rowMatcher.isMatch(row);
+  }
+
+  /**
+   * Returns a ValueExtractor based on the given expression.
+   */
+  @Override
+  public ValueExtractor getValueExtractor(ExpressionContext expression) {
+    expression = GapfillUtils.stripGapfill(expression);
+    if (expression.getType() == ExpressionContext.Type.LITERAL) {
+      // Literal
+      return new LiteralValueExtractor(expression.getLiteral());
+    }
+
+    if (expression.getType() == ExpressionContext.Type.IDENTIFIER) {
+      return new ColumnValueExtractor(_indexes.get(expression.getIdentifier()), _dataSchema);
+    } else {
+      return new ColumnValueExtractor(_indexes.get(expression.getFunction().toString()), _dataSchema);
Review comment:
The data schema here is generated by PreAggregationGapFillDataTableReducer, so FunctionContext.toString() is used as the column name in this case.
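The lookup being discussed resolves a row index from either a plain column identifier or an aggregation expression whose column name is the function's string form. A minimal sketch under those assumptions (the class and method names below are hypothetical; the real lookup lives in PreAggregateGapfillFilterHandler):

```java
import java.util.HashMap;
import java.util.Map;

// Hedged sketch: a column-name -> index map where aggregation columns are
// keyed by the function's string form, e.g. "sum(metric)", mirroring the
// FunctionContext.toString() lookup described above.
public class ColumnIndexSketch {
  private final Map<String, Integer> _indexes = new HashMap<>();

  public ColumnIndexSketch(String... columnNames) {
    for (int i = 0; i < columnNames.length; i++) {
      _indexes.put(columnNames[i], i);
    }
  }

  // Works for both identifiers ("ts") and function strings ("sum(metric)").
  public int indexOf(String expression) {
    Integer index = _indexes.get(expression);
    if (index == null) {
      throw new IllegalArgumentException("Unknown column: " + expression);
    }
    return index;
  }
}
```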
[GitHub] [pinot] codecov-commenter edited a comment on pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#issuecomment-1013577523
# [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) Report
> Merging [#8029](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (2e4a1ca) into [master](https://codecov.io/gh/apache/pinot/commit/21632dadb8cd2d8b77aec523a758d73a64f70b07?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (21632da) will **decrease** coverage by `9.78%`.
> The diff coverage is `78.25%`.
> :exclamation: Current head 2e4a1ca differs from pull request most recent head 69b8c46. Consider uploading reports for the commit 69b8c46 to get more accurate results
[![Impacted file tree graph](https://codecov.io/gh/apache/pinot/pull/8029/graphs/tree.svg?width=650&height=150&src=pr&token=4ibza2ugkz&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
```diff
@@             Coverage Diff              @@
##             master    #8029      +/-   ##
============================================
- Coverage     71.00%   61.21%   -9.79%
+ Complexity     4320     4158     -162
============================================
  Files          1629     1627       -2
  Lines         85132    85481     +349
  Branches      12812    12946     +134
============================================
- Hits          60445    52326    -8119
- Misses        20526    29300    +8774
+ Partials       4161     3855     -306
```
| Flag | Coverage Δ | |
|---|---|---|
| integration1 | `?` | |
| integration2 | `27.31% <19.28%> (-0.31%)` | :arrow_down: |
| unittests1 | `67.08% <81.45%> (-0.29%)` | :arrow_down: |
| unittests2 | `?` | |
Flags with carried forward coverage won't be shown. [Click here](https://docs.codecov.io/docs/carryforward-flags?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#carryforward-flags-in-the-pull-request-comment) to find out more.
| [Impacted Files](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | Coverage Δ | |
|---|---|---|
| [...roker/requesthandler/BaseBrokerRequestHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtYnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9icm9rZXIvcmVxdWVzdGhhbmRsZXIvQmFzZUJyb2tlclJlcXVlc3RIYW5kbGVyLmphdmE=) | `62.11% <0.00%> (-9.56%)` | :arrow_down: |
| [...on/src/main/java/org/apache/pinot/serde/SerDe.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9zZXJkZS9TZXJEZS5qYXZh) | `0.00% <0.00%> (ø)` | |
| [.../controller/api/resources/PinotTableInstances.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29udHJvbGxlci9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29udHJvbGxlci9hcGkvcmVzb3VyY2VzL1Bpbm90VGFibGVJbnN0YW5jZXMuamF2YQ==) | `0.00% <0.00%> (ø)` | |
| [...ntroller/helix/core/PinotHelixResourceManager.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29udHJvbGxlci9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29udHJvbGxlci9oZWxpeC9jb3JlL1Bpbm90SGVsaXhSZXNvdXJjZU1hbmFnZXIuamF2YQ==) | `32.82% <0.00%> (-33.22%)` | :arrow_down: |
| [...t/core/query/selection/SelectionOperatorUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9zZWxlY3Rpb24vU2VsZWN0aW9uT3BlcmF0b3JVdGlscy5qYXZh) | `91.58% <0.00%> (-0.60%)` | :arrow_down: |
| [.../plugin/inputformat/thrift/ThriftRecordReader.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtcGx1Z2lucy9waW5vdC1pbnB1dC1mb3JtYXQvcGlub3QtdGhyaWZ0L3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9wbHVnaW4vaW5wdXRmb3JtYXQvdGhyaWZ0L1RocmlmdFJlY29yZFJlYWRlci5qYXZh) | `0.00% <0.00%> (-90.70%)` | :arrow_down: |
| [...ot/plugin/minion/tasks/SegmentConversionUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtcGx1Z2lucy9waW5vdC1taW5pb24tdGFza3MvcGlub3QtbWluaW9uLWJ1aWx0aW4tdGFza3Mvc3JjL21haW4vamF2YS9vcmcvYXBhY2hlL3Bpbm90L3BsdWdpbi9taW5pb24vdGFza3MvU2VnbWVudENvbnZlcnNpb25VdGlscy5qYXZh) | `38.33% <0.00%> (-37.94%)` | :arrow_down: |
| [...query/reduce/ColumnDataToBlockValSetConverter.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvQ29sdW1uRGF0YVRvQmxvY2tWYWxTZXRDb252ZXJ0ZXIuamF2YQ==) | `17.30% <17.30%> (ø)` | |
| [...e/pinot/core/transport/InstanceRequestHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS90cmFuc3BvcnQvSW5zdGFuY2VSZXF1ZXN0SGFuZGxlci5qYXZh) | `59.75% <50.00%> (-1.01%)` | :arrow_down: |
| [...rg/apache/pinot/core/transport/ServerChannels.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS90cmFuc3BvcnQvU2VydmVyQ2hhbm5lbHMuamF2YQ==) | `86.88% <50.00%> (-2.95%)` | :arrow_down: |
| ... and [328 more](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
> Powered by [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Last update [21632da...69b8c46](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
[GitHub] [pinot] siddharthteotia commented on pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
siddharthteotia commented on pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#issuecomment-1030254695
@weixiangsun is this no longer being worked on?
[GitHub] [pinot] codecov-commenter edited a comment on pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#issuecomment-1013577523
# [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) Report
> Merging [#8029](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (25cb3ef) into [master](https://codecov.io/gh/apache/pinot/commit/f67b5aacc5b494a2f5d78e87d67a84ea3aadc99a?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (f67b5aa) will **decrease** coverage by `43.45%`.
> The diff coverage is `17.24%`.
> :exclamation: Current head 25cb3ef differs from pull request most recent head df0b289. Consider uploading reports for the commit df0b289 to get more accurate results
[![Impacted file tree graph](https://codecov.io/gh/apache/pinot/pull/8029/graphs/tree.svg?width=650&height=150&src=pr&token=4ibza2ugkz&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
```diff
@@              Coverage Diff              @@
##             master    #8029       +/-   ##
=============================================
- Coverage     70.76%   27.30%   -43.46%
=============================================
  Files          1631     1629        -2
  Lines         85490    85760      +270
  Branches      12878    12996      +118
=============================================
- Hits          60499    23419    -37080
- Misses        20819    60167    +39348
+ Partials       4172     2174     -1998
```
| Flag | Coverage Δ | |
|---|---|---|
| integration1 | `?` | |
| integration2 | `27.30% <17.24%> (-0.21%)` | :arrow_down: |
| unittests1 | `?` | |
| unittests2 | `?` | |
Flags with carried forward coverage won't be shown. [Click here](https://docs.codecov.io/docs/carryforward-flags?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#carryforward-flags-in-the-pull-request-comment) to find out more.
| [Impacted Files](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | Coverage Δ | |
|---|---|---|
| [...roker/requesthandler/BaseBrokerRequestHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtYnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9icm9rZXIvcmVxdWVzdGhhbmRsZXIvQmFzZUJyb2tlclJlcXVlc3RIYW5kbGVyLmphdmE=) | `62.11% <0.00%> (-9.74%)` | :arrow_down: |
| [...ache/pinot/core/plan/GapfillSelectionPlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL0dhcGZpbGxTZWxlY3Rpb25QbGFuTm9kZS5qYXZh) | `0.00% <0.00%> (ø)` | |
| [...query/reduce/ColumnDataToBlockValSetConverter.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvQ29sdW1uRGF0YVRvQmxvY2tWYWxTZXRDb252ZXJ0ZXIuamF2YQ==) | `0.00% <0.00%> (ø)` | |
| [...not/core/query/reduce/GapFillDataTableReducer.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvR2FwRmlsbERhdGFUYWJsZVJlZHVjZXIuamF2YQ==) | `0.00% <0.00%> (ø)` | |
| [.../pinot/core/query/reduce/GapfillFilterHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvR2FwZmlsbEZpbHRlckhhbmRsZXIuamF2YQ==) | `0.00% <0.00%> (ø)` | |
| [...t/core/query/selection/SelectionOperatorUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9zZWxlY3Rpb24vU2VsZWN0aW9uT3BlcmF0b3JVdGlscy5qYXZh) | `69.25% <0.00%> (-22.93%)` | :arrow_down: |
| [.../java/org/apache/pinot/core/util/GapfillUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS91dGlsL0dhcGZpbGxVdGlscy5qYXZh) | `19.14% <13.27%> (-44.49%)` | :arrow_down: |
| [...va/org/apache/pinot/core/plan/CombinePlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL0NvbWJpbmVQbGFuTm9kZS5qYXZh) | `68.05% <30.00%> (-19.45%)` | :arrow_down: |
| [.../pinot/core/query/reduce/ResultReducerFactory.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUmVzdWx0UmVkdWNlckZhY3RvcnkuamF2YQ==) | `61.11% <33.33%> (-20.14%)` | :arrow_down: |
| [...query/request/context/utils/QueryContextUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZXF1ZXN0L2NvbnRleHQvdXRpbHMvUXVlcnlDb250ZXh0VXRpbHMuamF2YQ==) | `46.15% <36.36%> (-31.12%)` | :arrow_down: |
| ... and [1200 more](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Last update [f67b5aa...df0b289](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: commits-unsubscribe@pinot.apache.org
For queries about this service, please contact Infrastructure at:
users@infra.apache.org
---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@pinot.apache.org
For additional commands, e-mail: commits-help@pinot.apache.org
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r820373189
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/request/context/utils/BrokerRequestToQueryContextConverter.java
##########
@@ -53,12 +55,81 @@ private BrokerRequestToQueryContextConverter() {
* Converts the given {@link BrokerRequest} into a {@link QueryContext}.
*/
public static QueryContext convert(BrokerRequest brokerRequest) {
- return brokerRequest.getPinotQuery() != null ? convertSQL(brokerRequest) : convertPQL(brokerRequest);
+ if (brokerRequest.getPinotQuery() != null) {
+ QueryContext queryContext = convertSQL(brokerRequest.getPinotQuery(), brokerRequest);
+ queryContext.setGapfillType(GapfillUtils.getGapfillType(queryContext));
+ validateForGapfillQuery(queryContext);
Review comment:
Fixed
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r806333226
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/request/context/utils/QueryContextUtils.java
##########
@@ -38,7 +39,11 @@ private QueryContextUtils() {
* Returns {@code true} if the given query is a selection query, {@code false} otherwise.
*/
public static boolean isSelectionQuery(QueryContext query) {
- return query.getAggregationFunctions() == null;
+ if (GapfillUtils.isGapfill(query)) {
+ return isSelectionOnlyQuery(query.getSubQueryContext());
+ } else {
+ return query.getAggregationFunctions() == null;
+ }
Review comment:
We cannot assume that a query containing a subquery is automatically a gapfill query, since that assumption would block the subquery feature.
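The reasoning above can be sketched with a small, self-contained model. This is a hypothetical simplification, not Pinot's actual `QueryContext`/`GapfillUtils` API: `QueryShape`, its fields, and `isSelectionQuery` are invented names used only to illustrate why classification must key off an explicit gapfill marker rather than the mere presence of a subquery.

```java
// Hypothetical, simplified model of the dispatch discussed above.
// The gapfill check is an explicit marker on the query, NOT merely the
// presence of a subquery, so plain subqueries are still classified by
// their own aggregation functions.
final class QueryShape {
    final boolean hasAggregations;
    final boolean isGapfill;   // explicit gapfill marker
    final QueryShape subQuery; // null when there is no subquery

    QueryShape(boolean hasAggregations, boolean isGapfill, QueryShape subQuery) {
        this.hasAggregations = hasAggregations;
        this.isGapfill = isGapfill;
        this.subQuery = subQuery;
    }

    static boolean isSelectionQuery(QueryShape q) {
        if (q.isGapfill && q.subQuery != null) {
            // Only an explicit gapfill delegates classification to its inner query.
            return isSelectionQuery(q.subQuery);
        }
        // A non-gapfill query (even one with a subquery) is classified on its own.
        return !q.hasAggregations;
    }
}
```

Under this model, an ordinary query that happens to wrap an aggregating subquery is still an aggregation query; only the gapfill marker triggers delegation to the inner query.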
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/request/context/utils/QueryContextUtils.java
##########
@@ -47,16 +52,22 @@ public static boolean isSelectionQuery(QueryContext query) {
* Selection-only query at this moment means selection query without order-by.
*/
public static boolean isSelectionOnlyQuery(QueryContext query) {
- return query.getAggregationFunctions() == null && query.getOrderByExpressions() == null;
+ return query.getAggregationFunctions() == null
+ && query.getOrderByExpressions() == null
+ && !GapfillUtils.isGapfill(query);
Review comment:
Done
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/request/context/utils/QueryContextUtils.java
##########
@@ -47,16 +52,22 @@ public static boolean isSelectionQuery(QueryContext query) {
* Selection-only query at this moment means selection query without order-by.
*/
public static boolean isSelectionOnlyQuery(QueryContext query) {
- return query.getAggregationFunctions() == null && query.getOrderByExpressions() == null;
+ return query.getAggregationFunctions() == null
+ && query.getOrderByExpressions() == null
+ && !GapfillUtils.isGapfill(query);
}
/**
- * Returns {@code true} if the given query is an aggregation query, {@code false} otherwise.
* Returns {@code true} if the given query is an aggregation query, {@code false} otherwise.
*/
public static boolean isAggregationQuery(QueryContext query) {
- AggregationFunction[] aggregationFunctions = query.getAggregationFunctions();
- return aggregationFunctions != null && (aggregationFunctions.length != 1
- || !(aggregationFunctions[0] instanceof DistinctAggregationFunction));
+ if (GapfillUtils.isGapfill(query)) {
Review comment:
Done
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/request/context/utils/QueryContextUtils.java
##########
@@ -38,7 +39,11 @@ private QueryContextUtils() {
* Returns {@code true} if the given query is a selection query, {@code false} otherwise.
*/
public static boolean isSelectionQuery(QueryContext query) {
- return query.getAggregationFunctions() == null;
+ if (GapfillUtils.isGapfill(query)) {
Review comment:
Done
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/operator/combine/GapfillGroupByOrderByCombineOperator.java
##########
@@ -0,0 +1,263 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.core.operator.combine;
+
+import com.google.common.base.Preconditions;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.Iterator;
+import java.util.List;
+import java.util.concurrent.ConcurrentLinkedQueue;
+import java.util.concurrent.CountDownLatch;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.TimeoutException;
+import org.apache.pinot.common.exception.QueryException;
+import org.apache.pinot.common.request.context.ExpressionContext;
+import org.apache.pinot.common.response.ProcessingException;
+import org.apache.pinot.common.utils.DataSchema;
+import org.apache.pinot.core.common.Operator;
+import org.apache.pinot.core.data.table.ConcurrentIndexedTable;
+import org.apache.pinot.core.data.table.IndexedTable;
+import org.apache.pinot.core.data.table.IntermediateRecord;
+import org.apache.pinot.core.data.table.Key;
+import org.apache.pinot.core.data.table.Record;
+import org.apache.pinot.core.data.table.UnboundedConcurrentIndexedTable;
+import org.apache.pinot.core.operator.AcquireReleaseColumnsSegmentOperator;
+import org.apache.pinot.core.operator.blocks.IntermediateResultsBlock;
+import org.apache.pinot.core.query.aggregation.function.AggregationFunction;
+import org.apache.pinot.core.query.aggregation.groupby.AggregationGroupByResult;
+import org.apache.pinot.core.query.aggregation.groupby.GroupKeyGenerator;
+import org.apache.pinot.core.query.request.context.QueryContext;
+import org.apache.pinot.core.util.GapfillUtils;
+import org.apache.pinot.core.util.GroupByUtils;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+
+/**
+ * Combine operator for aggregation group-by queries with SQL semantic.
+ * TODO: Use CombineOperatorUtils.getNumThreadsForQuery() to get the parallelism of the query instead of using
+ * all threads
+ */
+@SuppressWarnings("rawtypes")
+public class GapfillGroupByOrderByCombineOperator extends BaseCombineOperator {
+ public static final int MAX_TRIM_THRESHOLD = 1_000_000_000;
+ private static final Logger LOGGER = LoggerFactory.getLogger(GapfillGroupByOrderByCombineOperator.class);
+ private static final String OPERATOR_NAME = "GapfillGroupByOrderByCombineOperator";
+ private static final String EXPLAIN_NAME = "GAPFILL_COMBINE_GROUPBY_ORDERBY";
Review comment:
Done
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/plan/GapfillAggregationGroupByOrderByPlanNode.java
##########
@@ -0,0 +1,110 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.core.plan;
+
+import com.google.common.base.Preconditions;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import org.apache.pinot.common.request.context.ExpressionContext;
+import org.apache.pinot.core.operator.filter.BaseFilterOperator;
+import org.apache.pinot.core.operator.query.AggregationGroupByOrderByOperator;
+import org.apache.pinot.core.operator.transform.TransformOperator;
+import org.apache.pinot.core.query.aggregation.function.AggregationFunction;
+import org.apache.pinot.core.query.aggregation.function.AggregationFunctionUtils;
+import org.apache.pinot.core.query.request.context.QueryContext;
+import org.apache.pinot.core.startree.CompositePredicateEvaluator;
+import org.apache.pinot.core.startree.StarTreeUtils;
+import org.apache.pinot.core.startree.plan.StarTreeTransformPlanNode;
+import org.apache.pinot.core.util.GapfillUtils;
+import org.apache.pinot.segment.spi.IndexSegment;
+import org.apache.pinot.segment.spi.index.startree.AggregationFunctionColumnPair;
+import org.apache.pinot.segment.spi.index.startree.StarTreeV2;
+
+
+/**
+ * The <code>GapfillAggregationGroupByOrderByPlanNode</code> class provides the execution plan for gapfill aggregation
+ * group-by order-by query on a single segment.
+ */
+@SuppressWarnings("rawtypes")
+public class GapfillAggregationGroupByOrderByPlanNode implements PlanNode {
Review comment:
This class was removed due to the query syntax change. Done
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/plan/GapfillAggregationPlanNode.java
##########
@@ -0,0 +1,175 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.core.plan;
+
+import com.google.common.base.Preconditions;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import org.apache.pinot.common.request.context.ExpressionContext;
+import org.apache.pinot.core.common.Operator;
+import org.apache.pinot.core.operator.blocks.IntermediateResultsBlock;
+import org.apache.pinot.core.operator.filter.BaseFilterOperator;
+import org.apache.pinot.core.operator.query.AggregationOperator;
+import org.apache.pinot.core.operator.query.DictionaryBasedAggregationOperator;
+import org.apache.pinot.core.operator.query.MetadataBasedAggregationOperator;
+import org.apache.pinot.core.operator.transform.TransformOperator;
+import org.apache.pinot.core.query.aggregation.function.AggregationFunction;
+import org.apache.pinot.core.query.aggregation.function.AggregationFunctionUtils;
+import org.apache.pinot.core.query.request.context.QueryContext;
+import org.apache.pinot.core.startree.CompositePredicateEvaluator;
+import org.apache.pinot.core.startree.StarTreeUtils;
+import org.apache.pinot.core.startree.plan.StarTreeTransformPlanNode;
+import org.apache.pinot.core.util.GapfillUtils;
+import org.apache.pinot.segment.spi.AggregationFunctionType;
+import org.apache.pinot.segment.spi.IndexSegment;
+import org.apache.pinot.segment.spi.index.reader.Dictionary;
+import org.apache.pinot.segment.spi.index.startree.AggregationFunctionColumnPair;
+import org.apache.pinot.segment.spi.index.startree.StarTreeV2;
+
+
+/**
+ * The <code>GapfillAggregationPlanNode</code> class provides the execution plan for gapfill aggregation only query on
+ * a single segment.
+ */
+@SuppressWarnings("rawtypes")
+public class GapfillAggregationPlanNode implements PlanNode {
Review comment:
Removed
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/operator/combine/GapfillGroupByOrderByCombineOperator.java
##########
@@ -0,0 +1,263 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.core.operator.combine;
+
+import com.google.common.base.Preconditions;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.Iterator;
+import java.util.List;
+import java.util.concurrent.ConcurrentLinkedQueue;
+import java.util.concurrent.CountDownLatch;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.TimeoutException;
+import org.apache.pinot.common.exception.QueryException;
+import org.apache.pinot.common.request.context.ExpressionContext;
+import org.apache.pinot.common.response.ProcessingException;
+import org.apache.pinot.common.utils.DataSchema;
+import org.apache.pinot.core.common.Operator;
+import org.apache.pinot.core.data.table.ConcurrentIndexedTable;
+import org.apache.pinot.core.data.table.IndexedTable;
+import org.apache.pinot.core.data.table.IntermediateRecord;
+import org.apache.pinot.core.data.table.Key;
+import org.apache.pinot.core.data.table.Record;
+import org.apache.pinot.core.data.table.UnboundedConcurrentIndexedTable;
+import org.apache.pinot.core.operator.AcquireReleaseColumnsSegmentOperator;
+import org.apache.pinot.core.operator.blocks.IntermediateResultsBlock;
+import org.apache.pinot.core.query.aggregation.function.AggregationFunction;
+import org.apache.pinot.core.query.aggregation.groupby.AggregationGroupByResult;
+import org.apache.pinot.core.query.aggregation.groupby.GroupKeyGenerator;
+import org.apache.pinot.core.query.request.context.QueryContext;
+import org.apache.pinot.core.util.GapfillUtils;
+import org.apache.pinot.core.util.GroupByUtils;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+
+/**
+ * Combine operator for aggregation group-by queries with SQL semantic.
+ * TODO: Use CombineOperatorUtils.getNumThreadsForQuery() to get the parallelism of the query instead of using
+ * all threads
+ */
+@SuppressWarnings("rawtypes")
+public class GapfillGroupByOrderByCombineOperator extends BaseCombineOperator {
Review comment:
Removed
[GitHub] [pinot] codecov-commenter edited a comment on pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#issuecomment-1013577523
# [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) Report
> Merging [#8029](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (572ab88) into [master](https://codecov.io/gh/apache/pinot/commit/df39bdacf09dff5a00f5180a5d1ce838710b45a4?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (df39bda) will **decrease** coverage by `6.49%`.
> The diff coverage is `81.17%`.
[![Impacted file tree graph](https://codecov.io/gh/apache/pinot/pull/8029/graphs/tree.svg?width=650&height=150&src=pr&token=4ibza2ugkz&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
```diff
@@             Coverage Diff              @@
##             master    #8029      +/-   ##
============================================
- Coverage     71.01%   64.51%    -6.50%
+ Complexity     4314     4313        -1
============================================
  Files          1624     1589       -35
  Lines         84873    83621     -1252
  Branches      12791    12743       -48
============================================
- Hits          60273    53950     -6323
- Misses        20453    25794     +5341
+ Partials       4147     3877      -270
```
| Flag | Coverage Δ | |
|---|---|---|
| integration1 | `?` | |
| integration2 | `?` | |
| unittests1 | `67.57% <81.17%> (+0.12%)` | :arrow_up: |
| unittests2 | `14.01% <0.00%> (-0.10%)` | :arrow_down: |
Flags with carried forward coverage won't be shown. [Click here](https://docs.codecov.io/docs/carryforward-flags?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#carryforward-flags-in-the-pull-request-comment) to find out more.
| [Impacted Files](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | Coverage Δ | |
|---|---|---|
| [...t/core/query/selection/SelectionOperatorUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9zZWxlY3Rpb24vU2VsZWN0aW9uT3BlcmF0b3JVdGlscy5qYXZh) | `86.08% <0.00%> (-6.10%)` | :arrow_down: |
| [...query/reduce/ColumnDataToBlockValSetConverter.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvQ29sdW1uRGF0YVRvQmxvY2tWYWxTZXRDb252ZXJ0ZXIuamF2YQ==) | `17.30% <17.30%> (ø)` | |
| [...che/pinot/core/query/reduce/filter/RowMatcher.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvZmlsdGVyL1Jvd01hdGNoZXIuamF2YQ==) | `66.66% <66.66%> (ø)` | |
| [.../java/org/apache/pinot/core/util/GapfillUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS91dGlsL0dhcGZpbGxVdGlscy5qYXZh) | `75.24% <81.42%> (+11.61%)` | :arrow_up: |
| [...va/org/apache/pinot/core/plan/CombinePlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL0NvbWJpbmVQbGFuTm9kZS5qYXZh) | `83.33% <81.81%> (-4.17%)` | :arrow_down: |
| [...inot/core/query/reduce/PostAggregationHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUG9zdEFnZ3JlZ2F0aW9uSGFuZGxlci5qYXZh) | `91.89% <83.33%> (+0.22%)` | :arrow_up: |
| [...ore/query/reduce/filter/LiteralValueExtractor.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvZmlsdGVyL0xpdGVyYWxWYWx1ZUV4dHJhY3Rvci5qYXZh) | `83.33% <83.33%> (ø)` | |
| [.../reduce/PreAggregationGapFillDataTableReducer.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUHJlQWdncmVnYXRpb25HYXBGaWxsRGF0YVRhYmxlUmVkdWNlci5qYXZh) | `84.90% <84.90%> (ø)` | |
| [...query/reduce/PreAggregateGapfillFilterHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUHJlQWdncmVnYXRlR2FwZmlsbEZpbHRlckhhbmRsZXIuamF2YQ==) | `85.71% <85.71%> (ø)` | |
| [.../core/query/reduce/filter/PredicateRowMatcher.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvZmlsdGVyL1ByZWRpY2F0ZVJvd01hdGNoZXIuamF2YQ==) | `87.50% <87.50%> (ø)` | |
| ... and [389 more](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Last update [df39bda...572ab88](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: commits-unsubscribe@pinot.apache.org
For queries about this service, please contact Infrastructure at:
users@infra.apache.org
---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@pinot.apache.org
For additional commands, e-mail: commits-help@pinot.apache.org
[GitHub] [pinot] codecov-commenter edited a comment on pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#issuecomment-1013577523
# [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) Report
> Merging [#8029](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (f0544c7) into [master](https://codecov.io/gh/apache/pinot/commit/916d807c8f67b32c1a430692f74134c9c976c33d?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (916d807) will **decrease** coverage by `57.00%`.
> The diff coverage is `0.00%`.
[![Impacted file tree graph](https://codecov.io/gh/apache/pinot/pull/8029/graphs/tree.svg?width=650&height=150&src=pr&token=4ibza2ugkz&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
```diff
@@ Coverage Diff @@
## master #8029 +/- ##
=============================================
- Coverage 71.02% 14.02% -57.01%
+ Complexity 4314 81 -4233
=============================================
Files 1626 1591 -35
Lines 84929 83729 -1200
Branches 12783 12744 -39
=============================================
- Hits 60325 11739 -48586
- Misses 20462 71113 +50651
+ Partials 4142 877 -3265
```
| Flag | Coverage Δ | |
|---|---|---|
| integration1 | `?` | |
| integration2 | `?` | |
| unittests1 | `?` | |
| unittests2 | `14.02% <0.00%> (-0.09%)` | :arrow_down: |
Flags with carried forward coverage won't be shown. [Click here](https://docs.codecov.io/docs/carryforward-flags?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#carryforward-flags-in-the-pull-request-comment) to find out more.
| [Impacted Files](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | Coverage Δ | |
|---|---|---|
| [...roker/requesthandler/BaseBrokerRequestHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtYnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9icm9rZXIvcmVxdWVzdGhhbmRsZXIvQmFzZUJyb2tlclJlcXVlc3RIYW5kbGVyLmphdmE=) | `24.70% <0.00%> (-46.82%)` | :arrow_down: |
| [...org/apache/pinot/sql/parsers/CalciteSqlParser.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9zcWwvcGFyc2Vycy9DYWxjaXRlU3FsUGFyc2VyLmphdmE=) | `0.00% <0.00%> (-87.93%)` | :arrow_down: |
| [...org/apache/pinot/core/data/table/IndexedTable.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9kYXRhL3RhYmxlL0luZGV4ZWRUYWJsZS5qYXZh) | `0.00% <0.00%> (-84.75%)` | :arrow_down: |
| [...va/org/apache/pinot/core/plan/CombinePlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL0NvbWJpbmVQbGFuTm9kZS5qYXZh) | `0.00% <0.00%> (-87.50%)` | :arrow_down: |
| [...ache/pinot/core/plan/GapfillSelectionPlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL0dhcGZpbGxTZWxlY3Rpb25QbGFuTm9kZS5qYXZh) | `0.00% <0.00%> (ø)` | |
| [...pinot/core/plan/maker/InstancePlanMakerImplV2.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL21ha2VyL0luc3RhbmNlUGxhbk1ha2VySW1wbFYyLmphdmE=) | `0.00% <0.00%> (-76.77%)` | :arrow_down: |
| [...query/reduce/ColumnDataToBlockValSetConverter.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvQ29sdW1uRGF0YVRvQmxvY2tWYWxTZXRDb252ZXJ0ZXIuamF2YQ==) | `0.00% <0.00%> (ø)` | |
| [...e/pinot/core/query/reduce/HavingFilterHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvSGF2aW5nRmlsdGVySGFuZGxlci5qYXZh) | `0.00% <0.00%> (-91.31%)` | :arrow_down: |
| [...inot/core/query/reduce/PostAggregationHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUG9zdEFnZ3JlZ2F0aW9uSGFuZGxlci5qYXZh) | `0.00% <0.00%> (-91.77%)` | :arrow_down: |
| [...query/reduce/PreAggregateGapfillFilterHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUHJlQWdncmVnYXRlR2FwZmlsbEZpbHRlckhhbmRsZXIuamF2YQ==) | `0.00% <0.00%> (ø)` | |
| ... and [1320 more](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Last update [916d807...f0544c7](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
[GitHub] [pinot] weixiangsun commented on pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#issuecomment-1067037083
> Discussed offline. We can simplify the logic by handling the whole gapfill processing on the broker side in the `BrokerRequestHandler`:
>
> 1. When getting a gapfill query, rewrite it to a regular non-gapfill query (if the leaf subquery is a gap-fill query, trim off the gap-fill part and select all the required columns)
> 2. Send the non-gapfill query as a regular query
> 3. After reducing the server responses to `BrokerResponse`, apply gapfill to the `ResultTable` within the `BrokerResponse` and set the new `ResultTable`
>
> With this logic, all the gapfill handling logic is handled at the broker side in the same place without introducing unnecessary overhead to the server
@Jackie-Jiang I have already addressed your comments. Can you take another look?
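The three broker-side steps quoted above can be sketched roughly as follows. This is a hypothetical illustration, not Pinot's actual API: the class, the `Row` record, and the helper names `executeNonGapfillQuery`/`applyGapfill` are all made up, and the fill strategy shown is a simple previous-value carry-forward applied to the reduced result rows.

```java
// Illustrative sketch of the broker-side gapfill flow (hypothetical names,
// not the real Pinot BrokerRequestHandler/ResultTable APIs).
import java.util.ArrayList;
import java.util.List;

public class GapfillBrokerFlowSketch {

  // Stand-in for a row of Pinot's ResultTable: (timeBucket, value).
  record Row(long timeBucket, double value) {}

  // Steps 1+2: pretend the gapfill query was rewritten to a regular query,
  // sent to the servers, and reduced into sparse rows.
  static List<Row> executeNonGapfillQuery() {
    List<Row> rows = new ArrayList<>();
    rows.add(new Row(0L, 1.0));
    rows.add(new Row(2L, 3.0)); // time bucket 1 is missing
    return rows;
  }

  // Step 3: fill missing time buckets by carrying the previous value forward.
  static List<Row> applyGapfill(List<Row> rows, long startBucket, long endBucket) {
    List<Row> filled = new ArrayList<>();
    double previous = 0.0;
    int i = 0;
    for (long bucket = startBucket; bucket < endBucket; bucket++) {
      if (i < rows.size() && rows.get(i).timeBucket() == bucket) {
        previous = rows.get(i).value();
        i++;
      }
      filled.add(new Row(bucket, previous));
    }
    return filled;
  }

  public static void main(String[] args) {
    // prints 0=1.0, 1=1.0, 2=3.0 (bucket 1 filled with the previous value)
    for (Row r : applyGapfill(executeNonGapfillQuery(), 0L, 3L)) {
      System.out.println(r.timeBucket() + "=" + r.value());
    }
  }
}
```

The point of the design is visible even in this toy version: the servers never see the gapfill semantics; only the final reduced table is post-processed in one place.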
[GitHub] [pinot] codecov-commenter edited a comment on pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#issuecomment-1013577523
# [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) Report
> Merging [#8029](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (57e966b) into [master](https://codecov.io/gh/apache/pinot/commit/fb572bd0aba20d2b8a83320df6dd24cb0c654b30?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (fb572bd) will **decrease** coverage by `5.62%`.
> The diff coverage is `69.39%`.
[![Impacted file tree graph](https://codecov.io/gh/apache/pinot/pull/8029/graphs/tree.svg?width=650&height=150&src=pr&token=4ibza2ugkz&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
```diff
@@ Coverage Diff @@
## master #8029 +/- ##
============================================
- Coverage 70.39% 64.76% -5.63%
Complexity 4308 4308
============================================
Files 1623 1591 -32
Lines 84365 83239 -1126
Branches 12657 12635 -22
============================================
- Hits 59386 53912 -5474
- Misses 20876 25432 +4556
+ Partials 4103 3895 -208
```
| Flag | Coverage Δ | |
|---|---|---|
| integration1 | `?` | |
| unittests1 | `67.88% <69.39%> (-0.02%)` | :arrow_down: |
| unittests2 | `14.11% <0.00%> (-0.10%)` | :arrow_down: |
Flags with carried forward coverage won't be shown. [Click here](https://docs.codecov.io/docs/carryforward-flags?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#carryforward-flags-in-the-pull-request-comment) to find out more.
| [Impacted Files](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | Coverage Δ | |
|---|---|---|
| [...he/pinot/core/plan/GapfillAggregationPlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL0dhcGZpbGxBZ2dyZWdhdGlvblBsYW5Ob2RlLmphdmE=) | `0.00% <0.00%> (ø)` | |
| [...t/core/query/selection/SelectionOperatorUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9zZWxlY3Rpb24vU2VsZWN0aW9uT3BlcmF0b3JVdGlscy5qYXZh) | `86.08% <0.00%> (-6.10%)` | :arrow_down: |
| [...query/reduce/ColumnDataToBlockValSetConverter.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvQ29sdW1uRGF0YVRvQmxvY2tWYWxTZXRDb252ZXJ0ZXIuamF2YQ==) | `17.30% <17.30%> (ø)` | |
| [...plan/GapfillAggregationGroupByOrderByPlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL0dhcGZpbGxBZ2dyZWdhdGlvbkdyb3VwQnlPcmRlckJ5UGxhbk5vZGUuamF2YQ==) | `51.21% <51.21%> (ø)` | |
| [.../combine/GapfillGroupByOrderByCombineOperator.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9vcGVyYXRvci9jb21iaW5lL0dhcGZpbGxHcm91cEJ5T3JkZXJCeUNvbWJpbmVPcGVyYXRvci5qYXZh) | `58.88% <58.88%> (ø)` | |
| [...va/org/apache/pinot/core/plan/CombinePlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL0NvbWJpbmVQbGFuTm9kZS5qYXZh) | `81.66% <60.00%> (-2.27%)` | :arrow_down: |
| [...che/pinot/core/query/reduce/filter/RowMatcher.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvZmlsdGVyL1Jvd01hdGNoZXIuamF2YQ==) | `66.66% <66.66%> (ø)` | |
| [...query/request/context/utils/QueryContextUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZXF1ZXN0L2NvbnRleHQvdXRpbHMvUXVlcnlDb250ZXh0VXRpbHMuamF2YQ==) | `74.00% <70.00%> (-1.00%)` | :arrow_down: |
| [.../java/org/apache/pinot/core/util/GapfillUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS91dGlsL0dhcGZpbGxVdGlscy5qYXZh) | `73.61% <82.92%> (+9.97%)` | :arrow_up: |
| [...inot/core/query/reduce/PostAggregationHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUG9zdEFnZ3JlZ2F0aW9uSGFuZGxlci5qYXZh) | `91.89% <83.33%> (+0.22%)` | :arrow_up: |
| ... and [354 more](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Last update [fb572bd...57e966b](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
[GitHub] [pinot] codecov-commenter edited a comment on pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#issuecomment-1013577523
# [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) Report
> Merging [#8029](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (f78cb1d) into [master](https://codecov.io/gh/apache/pinot/commit/916d807c8f67b32c1a430692f74134c9c976c33d?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (916d807) will **increase** coverage by `0.02%`.
> The diff coverage is `82.00%`.
[![Impacted file tree graph](https://codecov.io/gh/apache/pinot/pull/8029/graphs/tree.svg?width=650&height=150&src=pr&token=4ibza2ugkz&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
```diff
@@ Coverage Diff @@
## master #8029 +/- ##
============================================
+ Coverage 71.02% 71.05% +0.02%
Complexity 4314 4314
============================================
Files 1626 1636 +10
Lines 84929 85563 +634
Branches 12783 12941 +158
============================================
+ Hits 60325 60796 +471
- Misses 20462 20561 +99
- Partials 4142 4206 +64
```
| Flag | Coverage Δ | |
|---|---|---|
| integration1 | `28.47% <14.78%> (-0.32%)` | :arrow_down: |
| integration2 | `27.22% <14.78%> (-0.19%)` | :arrow_down: |
| unittests1 | `67.52% <82.23%> (+0.13%)` | :arrow_up: |
| unittests2 | `13.99% <0.00%> (-0.12%)` | :arrow_down: |
Flags with carried forward coverage won't be shown. [Click here](https://docs.codecov.io/docs/carryforward-flags?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#carryforward-flags-in-the-pull-request-comment) to find out more.
| [Impacted Files](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | Coverage Δ | |
|---|---|---|
| [...roker/requesthandler/BaseBrokerRequestHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtYnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9icm9rZXIvcmVxdWVzdGhhbmRsZXIvQmFzZUJyb2tlclJlcXVlc3RIYW5kbGVyLmphdmE=) | `71.57% <0.00%> (+0.05%)` | :arrow_up: |
| [...t/core/query/selection/SelectionOperatorUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9zZWxlY3Rpb24vU2VsZWN0aW9uT3BlcmF0b3JVdGlscy5qYXZh) | `91.58% <0.00%> (-0.60%)` | :arrow_down: |
| [...query/reduce/ColumnDataToBlockValSetConverter.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvQ29sdW1uRGF0YVRvQmxvY2tWYWxTZXRDb252ZXJ0ZXIuamF2YQ==) | `17.30% <17.30%> (ø)` | |
| [...che/pinot/core/query/reduce/filter/RowMatcher.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvZmlsdGVyL1Jvd01hdGNoZXIuamF2YQ==) | `66.66% <66.66%> (ø)` | |
| [.../java/org/apache/pinot/core/util/GapfillUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS91dGlsL0dhcGZpbGxVdGlscy5qYXZh) | `75.24% <81.42%> (+11.61%)` | :arrow_up: |
| [...va/org/apache/pinot/core/plan/CombinePlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL0NvbWJpbmVQbGFuTm9kZS5qYXZh) | `86.36% <81.81%> (-1.14%)` | :arrow_down: |
| [...inot/core/query/reduce/PostAggregationHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUG9zdEFnZ3JlZ2F0aW9uSGFuZGxlci5qYXZh) | `92.00% <83.33%> (+0.23%)` | :arrow_up: |
| [...ore/query/reduce/filter/LiteralValueExtractor.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvZmlsdGVyL0xpdGVyYWxWYWx1ZUV4dHJhY3Rvci5qYXZh) | `83.33% <83.33%> (ø)` | |
| [...query/reduce/PreAggregateGapfillFilterHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUHJlQWdncmVnYXRlR2FwZmlsbEZpbHRlckhhbmRsZXIuamF2YQ==) | `85.71% <85.71%> (ø)` | |
| [.../reduce/PreAggregationGapFillDataTableReducer.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUHJlQWdncmVnYXRpb25HYXBGaWxsRGF0YVRhYmxlUmVkdWNlci5qYXZh) | `86.73% <86.73%> (ø)` | |
| ... and [47 more](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Last update [916d807...f78cb1d](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r815146809
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/reduce/PreAggregationGapFillDataTableReducer.java
##########
@@ -0,0 +1,775 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.core.query.reduce;
+
+import com.google.common.base.Preconditions;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.Comparator;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.Iterator;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+import java.util.PriorityQueue;
+import java.util.Set;
+import java.util.concurrent.CountDownLatch;
+import java.util.concurrent.Future;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.TimeoutException;
+import org.apache.pinot.common.exception.QueryException;
+import org.apache.pinot.common.metrics.BrokerMetrics;
+import org.apache.pinot.common.request.context.ExpressionContext;
+import org.apache.pinot.common.request.context.FunctionContext;
+import org.apache.pinot.common.response.broker.BrokerResponseNative;
+import org.apache.pinot.common.response.broker.QueryProcessingException;
+import org.apache.pinot.common.response.broker.ResultTable;
+import org.apache.pinot.common.utils.DataSchema;
+import org.apache.pinot.common.utils.DataSchema.ColumnDataType;
+import org.apache.pinot.common.utils.DataTable;
+import org.apache.pinot.core.common.BlockValSet;
+import org.apache.pinot.core.data.table.ConcurrentIndexedTable;
+import org.apache.pinot.core.data.table.IndexedTable;
+import org.apache.pinot.core.data.table.Key;
+import org.apache.pinot.core.data.table.Record;
+import org.apache.pinot.core.data.table.SimpleIndexedTable;
+import org.apache.pinot.core.data.table.UnboundedConcurrentIndexedTable;
+import org.apache.pinot.core.operator.combine.GroupByOrderByCombineOperator;
+import org.apache.pinot.core.query.aggregation.function.AggregationFunction;
+import org.apache.pinot.core.query.aggregation.function.AggregationFunctionFactory;
+import org.apache.pinot.core.query.aggregation.groupby.GroupByResultHolder;
+import org.apache.pinot.core.query.request.context.QueryContext;
+import org.apache.pinot.core.query.selection.SelectionOperatorUtils;
+import org.apache.pinot.core.transport.ServerRoutingInstance;
+import org.apache.pinot.core.util.GapfillUtils;
+import org.apache.pinot.core.util.GroupByUtils;
+import org.apache.pinot.core.util.trace.TraceRunnable;
+import org.apache.pinot.spi.data.DateTimeFormatSpec;
+import org.apache.pinot.spi.data.DateTimeGranularitySpec;
+
+/**
+ * Helper class to reduce and set Aggregation results into the BrokerResponseNative
+ */
+@SuppressWarnings({"rawtypes", "unchecked"})
+public class PreAggregationGapFillDataTableReducer implements DataTableReducer {
+ private static final int MIN_DATA_TABLES_FOR_CONCURRENT_REDUCE = 2; // TBD, find a better value.
+
+ private final QueryContext _queryContext;
+
+ private final int _limitForAggregatedResult;
+ private int _limitForGapfilledResult;
+
+ private final DateTimeGranularitySpec _dateTimeGranularity;
+ private final DateTimeFormatSpec _dateTimeFormatter;
+ private final long _startMs;
+ private final long _endMs;
+ private final long _timeBucketSize;
+
+ private final List<Integer> _groupByKeyIndexes;
+ private boolean[] _isGroupBySelections;
+ private final Set<Key> _groupByKeys;
+ private final Map<Key, Object[]> _previousByGroupKey;
+ private final Map<String, ExpressionContext> _fillExpressions;
+ private final List<ExpressionContext> _timeSeries;
+ private final GapfillUtils.GapfillType _gapfillType;
+
+ PreAggregationGapFillDataTableReducer(QueryContext queryContext) {
+ _queryContext = queryContext;
+ _gapfillType = queryContext.getGapfillType();
+ _limitForAggregatedResult = queryContext.getLimit();
+ if (_gapfillType == GapfillUtils.GapfillType.AGGREGATE_GAP_FILL
+ || _gapfillType == GapfillUtils.GapfillType.GAP_FILL) {
+ _limitForGapfilledResult = queryContext.getLimit();
+ } else {
+ _limitForGapfilledResult = queryContext.getSubQueryContext().getLimit();
+ }
+
+ ExpressionContext gapFillSelection = GapfillUtils.getGapfillExpressionContext(queryContext);
+
+ Preconditions.checkArgument(
+ gapFillSelection != null && gapFillSelection.getFunction() != null, "The gapfill expression should be a function.");
+ List<ExpressionContext> args = gapFillSelection.getFunction().getArguments();
+ Preconditions.checkArgument(
+ args.size() > 5, "PreAggregateGapFill does not have the correct number of arguments.");
+ Preconditions.checkArgument(
+ args.get(1).getLiteral() != null, "The second argument of PreAggregateGapFill should be the time formatter.");
+ Preconditions.checkArgument(
+ args.get(2).getLiteral() != null, "The third argument of PreAggregateGapFill should be the start time.");
+ Preconditions.checkArgument(
+ args.get(3).getLiteral() != null, "The fourth argument of PreAggregateGapFill should be the end time.");
+ Preconditions.checkArgument(
+ args.get(4).getLiteral() != null, "The fifth argument of PreAggregateGapFill should be the time bucket size.");
+
+ _dateTimeFormatter = new DateTimeFormatSpec(args.get(1).getLiteral());
+ _dateTimeGranularity = new DateTimeGranularitySpec(args.get(4).getLiteral());
+ String start = args.get(2).getLiteral();
+ _startMs = truncate(_dateTimeFormatter.fromFormatToMillis(start));
+ String end = args.get(3).getLiteral();
+ _endMs = truncate(_dateTimeFormatter.fromFormatToMillis(end));
+ _timeBucketSize = _dateTimeGranularity.granularityToMillis();
+
+ _fillExpressions = GapfillUtils.getFillExpressions(gapFillSelection);
+
+ _previousByGroupKey = new HashMap<>();
+ _groupByKeyIndexes = new ArrayList<>();
+ _groupByKeys = new HashSet<>();
+
+ ExpressionContext timeseriesOn = GapfillUtils.getTimeSeriesOnExpressionContext(gapFillSelection);
+ Preconditions.checkArgument(timeseriesOn != null, "The TimeSeriesOn expressions should be specified.");
+ _timeSeries = timeseriesOn.getFunction().getArguments();
+ }
+
+ private void replaceColumnNameWithAlias(DataSchema dataSchema) {
+ QueryContext queryContext;
+ if (_gapfillType == GapfillUtils.GapfillType.AGGREGATE_GAP_FILL_AGGREGATE) {
+ queryContext = _queryContext.getSubQueryContext().getSubQueryContext();
+ } else if (_gapfillType == GapfillUtils.GapfillType.GAP_FILL) {
+ queryContext = _queryContext;
+ } else {
+ queryContext = _queryContext.getSubQueryContext();
+ }
+ List<String> aliasList = queryContext.getAliasList();
+ Map<String, String> columnNameToAliasMap = new HashMap<>();
+ for (int i = 0; i < aliasList.size(); i++) {
+ if (aliasList.get(i) != null) {
+ ExpressionContext selection = queryContext.getSelectExpressions().get(i);
+ if (GapfillUtils.isGapfill(selection)) {
+ selection = selection.getFunction().getArguments().get(0);
+ }
+ columnNameToAliasMap.put(selection.toString(), aliasList.get(i));
+ }
+ }
+ for (int i = 0; i < dataSchema.getColumnNames().length; i++) {
+ if (columnNameToAliasMap.containsKey(dataSchema.getColumnNames()[i])) {
+ dataSchema.getColumnNames()[i] = columnNameToAliasMap.get(dataSchema.getColumnNames()[i]);
+ }
+ }
+ }
+
+ /**
+ * Computes the number of reduce threads to use per query.
+ * <ul>
+ * <li> Use single thread if number of data tables to reduce is less than
+ * {@value #MIN_DATA_TABLES_FOR_CONCURRENT_REDUCE}.</li>
+ * <li> Else, use min of max allowed reduce threads per query, and number of data tables.</li>
+ * </ul>
+ *
+ * @param numDataTables Number of data tables to reduce
+ * @param maxReduceThreadsPerQuery Max allowed reduce threads per query
+ * @return Number of reduce threads to use for the query
+ */
+ private int getNumReduceThreadsToUse(int numDataTables, int maxReduceThreadsPerQuery) {
+ // Use single thread if number of data tables < MIN_DATA_TABLES_FOR_CONCURRENT_REDUCE.
+ if (numDataTables < MIN_DATA_TABLES_FOR_CONCURRENT_REDUCE) {
+ return Math.min(1, numDataTables); // Number of data tables can be zero.
+ }
+
+ return Math.min(maxReduceThreadsPerQuery, numDataTables);
+ }
+
+ private IndexedTable getIndexedTable(DataSchema dataSchema, Collection<DataTable> dataTablesToReduce,
+ DataTableReducerContext reducerContext)
+ throws TimeoutException {
+ long start = System.currentTimeMillis();
+ int numDataTables = dataTablesToReduce.size();
+
+ // Get the number of threads to use for reducing.
+ // In case of single reduce thread, fall back to SimpleIndexedTable to avoid redundant locking/unlocking calls.
+ int numReduceThreadsToUse = getNumReduceThreadsToUse(numDataTables, reducerContext.getMaxReduceThreadsPerQuery());
+ int limit = _queryContext.getLimit();
+ // TODO: Make minTrimSize configurable
+ int trimSize = GroupByUtils.getTableCapacity(limit);
+ // NOTE: For query with HAVING clause, use trimSize as resultSize to ensure the result accuracy.
+ // TODO: Resolve the HAVING clause within the IndexedTable before returning the result
+ int resultSize = _queryContext.getHavingFilter() != null ? trimSize : limit;
+ int trimThreshold = reducerContext.getGroupByTrimThreshold();
+ IndexedTable indexedTable;
+ if (numReduceThreadsToUse <= 1) {
+ indexedTable = new SimpleIndexedTable(dataSchema, _queryContext, resultSize, trimSize, trimThreshold);
+ } else {
+ if (trimThreshold >= GroupByOrderByCombineOperator.MAX_TRIM_THRESHOLD) {
+ // special case of trim threshold where it is set to max value.
+ // there won't be any trimming during upsert in this case.
+ // thus we can avoid the overhead of read-lock and write-lock
+ // in the upsert method.
+ indexedTable = new UnboundedConcurrentIndexedTable(dataSchema, _queryContext, resultSize);
+ } else {
+ indexedTable = new ConcurrentIndexedTable(dataSchema, _queryContext, resultSize, trimSize, trimThreshold);
+ }
+ }
+
+ Future[] futures = new Future[numDataTables];
+ CountDownLatch countDownLatch = new CountDownLatch(numDataTables);
+
+ // Create groups of data tables that each thread can process concurrently.
+ // Given that numReduceThreads is <= numDataTables, each group will have at least one data table.
+ ArrayList<DataTable> dataTables = new ArrayList<>(dataTablesToReduce);
+ List<List<DataTable>> reduceGroups = new ArrayList<>(numReduceThreadsToUse);
+
+ for (int i = 0; i < numReduceThreadsToUse; i++) {
+ reduceGroups.add(new ArrayList<>());
+ }
+ for (int i = 0; i < numDataTables; i++) {
+ reduceGroups.get(i % numReduceThreadsToUse).add(dataTables.get(i));
+ }
+
+ int cnt = 0;
+ ColumnDataType[] storedColumnDataTypes = dataSchema.getStoredColumnDataTypes();
+ int numColumns = storedColumnDataTypes.length;
+ for (List<DataTable> reduceGroup : reduceGroups) {
+ futures[cnt++] = reducerContext.getExecutorService().submit(new TraceRunnable() {
+ @Override
+ public void runJob() {
+ for (DataTable dataTable : reduceGroup) {
+ int numRows = dataTable.getNumberOfRows();
+
+ try {
+ for (int rowId = 0; rowId < numRows; rowId++) {
+ Object[] values = new Object[numColumns];
+ for (int colId = 0; colId < numColumns; colId++) {
+ switch (storedColumnDataTypes[colId]) {
+ case INT:
+ values[colId] = dataTable.getInt(rowId, colId);
+ break;
+ case LONG:
+ values[colId] = dataTable.getLong(rowId, colId);
+ break;
+ case FLOAT:
+ values[colId] = dataTable.getFloat(rowId, colId);
+ break;
+ case DOUBLE:
+ values[colId] = dataTable.getDouble(rowId, colId);
+ break;
+ case STRING:
+ values[colId] = dataTable.getString(rowId, colId);
+ break;
+ case BYTES:
+ values[colId] = dataTable.getBytes(rowId, colId);
+ break;
+ case OBJECT:
+ values[colId] = dataTable.getObject(rowId, colId);
+ break;
+ // Add other aggregation intermediate result / group-by column type supports here
+ default:
+ throw new IllegalStateException();
+ }
+ }
+ indexedTable.upsert(new Record(values));
+ }
+ } finally {
+ countDownLatch.countDown();
+ }
+ }
+ }
+ });
+ }
+
+ try {
+ long timeOutMs = reducerContext.getReduceTimeOutMs() - (System.currentTimeMillis() - start);
+ countDownLatch.await(timeOutMs, TimeUnit.MILLISECONDS);
+ } catch (InterruptedException e) {
+ for (Future future : futures) {
+ if (!future.isDone()) {
+ future.cancel(true);
+ }
+ }
+ throw new TimeoutException("Timed out in broker reduce phase.");
+ }
+
+ indexedTable.finish(true);
+ return indexedTable;
+ }
+
+ /**
+ * Three things happen here:
+ * 1. Sort the result sets from all Pinot servers based on timestamp.
+ * 2. Gap-fill the missing data per entity for each time bucket.
+ * 3. Aggregate the data set per time bucket.
+ */
+ @Override
+ public void reduceAndSetResults(String tableName, DataSchema dataSchema,
+ Map<ServerRoutingInstance, DataTable> dataTableMap, BrokerResponseNative brokerResponseNative,
+ DataTableReducerContext reducerContext, BrokerMetrics brokerMetrics) {
+ DataSchema resultTableSchema = getResultTableDataSchema(dataSchema);
+ if (dataTableMap.isEmpty()) {
+ brokerResponseNative.setResultTable(new ResultTable(resultTableSchema, Collections.emptyList()));
+ return;
+ }
+
+ String[] columns = dataSchema.getColumnNames();
+
+ Map<String, Integer> indexes = new HashMap<>();
+ for (int i = 0; i < columns.length; i++) {
+ indexes.put(columns[i], i);
+ }
+
+ _isGroupBySelections = new boolean[dataSchema.getColumnDataTypes().length];
+
+ // The first argument of timeSeries is the time column. The remaining ones define the entity.
+ for (ExpressionContext entityColumn : _timeSeries) {
+ int index = indexes.get(entityColumn.getIdentifier());
+ _isGroupBySelections[index] = true;
+ _groupByKeyIndexes.add(index);
+ }
+
+ List<Object[]> sortedRawRows;
+ if (_gapfillType == GapfillUtils.GapfillType.GAP_FILL_AGGREGATE
+ || _gapfillType == GapfillUtils.GapfillType.GAP_FILL
+ || _gapfillType == GapfillUtils.GapfillType.GAP_FILL_SELECT) {
+ sortedRawRows = mergeAndSort(dataTableMap.values(), dataSchema);
+ } else {
+ try {
+ IndexedTable indexedTable = getIndexedTable(dataSchema, dataTableMap.values(), reducerContext);
+ sortedRawRows = mergeAndSort(indexedTable, dataSchema);
+ } catch (TimeoutException e) {
+ brokerResponseNative.getProcessingExceptions()
+ .add(new QueryProcessingException(QueryException.BROKER_TIMEOUT_ERROR_CODE, e.getMessage()));
+ return;
+ }
+ }
+ List<Object[]> resultRows;
+ replaceColumnNameWithAlias(dataSchema);
+ if (_queryContext.getAggregationFunctions() != null) {
+ validateGroupByForOuterQuery();
+ }
+
+ if (_gapfillType == GapfillUtils.GapfillType.GAP_FILL_AGGREGATE
+ || _gapfillType == GapfillUtils.GapfillType.GAP_FILL
+ || _gapfillType == GapfillUtils.GapfillType.GAP_FILL_SELECT) {
+ List<Object[]> gapfilledRows = gapFillAndAggregate(sortedRawRows, resultTableSchema, dataSchema);
+ if (_gapfillType == GapfillUtils.GapfillType.GAP_FILL_SELECT) {
+ List<String> selectionColumns = SelectionOperatorUtils.getSelectionColumns(_queryContext, dataSchema);
+ resultRows = new ArrayList<>(gapfilledRows.size());
+
+ Map<String, Integer> columnNameToIndexMap = new HashMap<>(dataSchema.getColumnNames().length);
+ String[] columnNames = dataSchema.getColumnNames();
+ for (int i = 0; i < columnNames.length; i++) {
+ columnNameToIndexMap.put(columnNames[i], i);
+ }
+
+ ColumnDataType[] columnDataTypes = dataSchema.getColumnDataTypes();
+ ColumnDataType[] resultColumnDataTypes = new ColumnDataType[selectionColumns.size()];
+ for (int i = 0; i < resultColumnDataTypes.length; i++) {
+ String name = selectionColumns.get(i);
+ int index = columnNameToIndexMap.get(name);
+ resultColumnDataTypes[i] = columnDataTypes[index];
+ }
+
+ for (Object[] row : gapfilledRows) {
+ Object[] resultRow = new Object[selectionColumns.size()];
+ for (int i = 0; i < selectionColumns.size(); i++) {
+ int index = columnNameToIndexMap.get(selectionColumns.get(i));
+ resultRow[i] = resultColumnDataTypes[i].convertAndFormat(row[index]);
+ }
+ resultRows.add(resultRow);
+ }
+ } else {
+ resultRows = gapfilledRows;
+ }
+ } else {
+ this.setupColumnTypeForAggregatedColum(dataSchema.getColumnDataTypes());
+ ColumnDataType[] columnDataTypes = dataSchema.getColumnDataTypes();
+ for (Object[] row : sortedRawRows) {
+ extractFinalAggregationResults(row);
+ for (int i = 0; i < columnDataTypes.length; i++) {
+ row[i] = columnDataTypes[i].convert(row[i]);
+ }
+ }
+ resultRows = gapFillAndAggregate(sortedRawRows, resultTableSchema, dataSchema);
+ }
+ brokerResponseNative.setResultTable(new ResultTable(resultTableSchema, resultRows));
+ }
+
+ private void extractFinalAggregationResults(Object[] row) {
+ AggregationFunction[] aggregationFunctions;
+ if (_gapfillType == GapfillUtils.GapfillType.AGGREGATE_GAP_FILL) {
+ aggregationFunctions = _queryContext.getSubQueryContext().getAggregationFunctions();
+ } else {
+ aggregationFunctions = _queryContext.getSubQueryContext().getSubQueryContext().getAggregationFunctions();
+ }
+ int numAggregationFunctionsForInnerQuery = aggregationFunctions == null ? 0 : aggregationFunctions.length;
+ for (int i = 0; i < numAggregationFunctionsForInnerQuery; i++) {
+ int valueIndex = _timeSeries.size() + 1 + i;
+ row[valueIndex] = aggregationFunctions[i].extractFinalResult(row[valueIndex]);
+ }
+ }
+
+ private void setupColumnTypeForAggregatedColum(ColumnDataType[] columnDataTypes) {
+ AggregationFunction[] aggregationFunctions;
+ if (_gapfillType == GapfillUtils.GapfillType.AGGREGATE_GAP_FILL) {
+ aggregationFunctions = _queryContext.getSubQueryContext().getAggregationFunctions();
+ } else {
+ aggregationFunctions = _queryContext.getSubQueryContext().getSubQueryContext().getAggregationFunctions();
+ }
+ int numAggregationFunctionsForInnerQuery = aggregationFunctions == null ? 0 : aggregationFunctions.length;
+ for (int i = 0; i < numAggregationFunctionsForInnerQuery; i++) {
+ columnDataTypes[_timeSeries.size() + 1 + i] = aggregationFunctions[i].getFinalResultColumnType();
+ }
+ }
+
+ /**
+ * Constructs the DataSchema for the ResultTable.
+ */
+ private DataSchema getResultTableDataSchema(DataSchema dataSchema) {
+ if (_gapfillType == GapfillUtils.GapfillType.GAP_FILL) {
+ return dataSchema;
+ }
+
+ int numOfColumns = _queryContext.getSelectExpressions().size();
+ String[] columnNames = new String[numOfColumns];
+ ColumnDataType[] columnDataTypes = new ColumnDataType[numOfColumns];
+ for (int i = 0; i < numOfColumns; i++) {
+ ExpressionContext expressionContext = _queryContext.getSelectExpressions().get(i);
+ if (GapfillUtils.isGapfill(expressionContext)) {
+ expressionContext = expressionContext.getFunction().getArguments().get(0);
+ }
+ if (expressionContext.getType() != ExpressionContext.Type.FUNCTION) {
+ columnNames[i] = expressionContext.getIdentifier();
+ columnDataTypes[i] = ColumnDataType.STRING;
+ } else {
+ FunctionContext functionContext = expressionContext.getFunction();
+ AggregationFunction aggregationFunction =
+ AggregationFunctionFactory.getAggregationFunction(functionContext, _queryContext);
+ columnDataTypes[i] = aggregationFunction.getFinalResultColumnType();
+ columnNames[i] = functionContext.toString();
+ }
+ }
+ return new DataSchema(columnNames, columnDataTypes);
+ }
+
+ private Key constructGroupKeys(Object[] row) {
+ Object[] groupKeys = new Object[_groupByKeyIndexes.size()];
+ for (int i = 0; i < _groupByKeyIndexes.size(); i++) {
+ groupKeys[i] = row[_groupByKeyIndexes.get(i)];
+ }
+ return new Key(groupKeys);
+ }
+
+ private long truncate(long epoch) {
+ int sz = _dateTimeGranularity.getSize();
+ return epoch / sz * sz;
+ }
+
+ private List<Object[]> gapFillAndAggregate(List<Object[]> sortedRows,
+ DataSchema dataSchemaForAggregatedResult,
+ DataSchema dataSchema) {
+ List<Object[]> result = new ArrayList<>();
+
+ PreAggregateGapfillFilterHandler postGapfillFilterHandler = null;
+ if (_queryContext.getSubQueryContext() != null && _queryContext.getFilter() != null) {
+ postGapfillFilterHandler = new PreAggregateGapfillFilterHandler(_queryContext.getFilter(), dataSchema);
+ }
+ PreAggregateGapfillFilterHandler postAggregateHavingFilterHandler = null;
+ if (_queryContext.getHavingFilter() != null) {
+ postAggregateHavingFilterHandler = new PreAggregateGapfillFilterHandler(
+ _queryContext.getHavingFilter(), dataSchemaForAggregatedResult);
+ }
+ Object[] previous = null;
+ Iterator<Object[]> sortedIterator = sortedRows.iterator();
+ for (long time = _startMs; time < _endMs; time += _timeBucketSize) {
+ List<Object[]> bucketedResult = new ArrayList<>();
+ previous = gapfill(time, bucketedResult, sortedIterator, previous, dataSchema, postGapfillFilterHandler);
+ if (_queryContext.getAggregationFunctions() == null) {
+ result.addAll(bucketedResult);
+ } else if (bucketedResult.size() > 0) {
+ List<Object[]> aggregatedRows = aggregateGapfilledData(bucketedResult, dataSchema);
+ for (Object[] aggregatedRow : aggregatedRows) {
+ if (postAggregateHavingFilterHandler == null || postAggregateHavingFilterHandler.isMatch(aggregatedRow)) {
+ result.add(aggregatedRow);
+ }
+ if (result.size() >= _limitForAggregatedResult) {
+ return result;
+ }
+ }
+ }
+ }
+ return result;
+ }
+
+ private Object[] gapfill(long bucketTime,
+ List<Object[]> bucketedResult,
+ Iterator<Object[]> sortedIterator,
+ Object[] previous,
+ DataSchema dataSchema,
+ PreAggregateGapfillFilterHandler postGapfillFilterHandler) {
+ ColumnDataType[] resultColumnDataTypes = dataSchema.getColumnDataTypes();
+ int numResultColumns = resultColumnDataTypes.length;
+ Set<Key> keys = new HashSet<>(_groupByKeys);
+ if (previous == null && sortedIterator.hasNext()) {
+ previous = sortedIterator.next();
+ }
+
+ while (previous != null) {
+ Object[] resultRow = previous;
+ for (int i = 0; i < resultColumnDataTypes.length; i++) {
+ resultRow[i] = resultColumnDataTypes[i].format(resultRow[i]);
+ }
+
+ long timeCol = _dateTimeFormatter.fromFormatToMillis(String.valueOf(resultRow[0]));
+ if (timeCol > bucketTime) {
+ break;
+ }
+ if (timeCol == bucketTime) {
+ if (postGapfillFilterHandler == null || postGapfillFilterHandler.isMatch(previous)) {
+ if (bucketedResult.size() >= _limitForGapfilledResult) {
+ _limitForGapfilledResult = 0;
+ break;
+ } else {
+ bucketedResult.add(resultRow);
+ }
+ }
+ Key key = constructGroupKeys(resultRow);
+ keys.remove(key);
+ _previousByGroupKey.put(key, resultRow);
+ }
+ if (sortedIterator.hasNext()) {
+ previous = sortedIterator.next();
+ } else {
+ previous = null;
+ }
+ }
+
+ for (Key key : keys) {
+ Object[] gapfillRow = new Object[numResultColumns];
+ int keyIndex = 0;
+ if (resultColumnDataTypes[0] == ColumnDataType.LONG) {
+ gapfillRow[0] = Long.valueOf(_dateTimeFormatter.fromMillisToFormat(bucketTime));
+ } else {
+ gapfillRow[0] = _dateTimeFormatter.fromMillisToFormat(bucketTime);
+ }
+ for (int i = 1; i < _isGroupBySelections.length; i++) {
+ if (_isGroupBySelections[i]) {
+ gapfillRow[i] = key.getValues()[keyIndex++];
+ } else {
+ gapfillRow[i] = getFillValue(i, dataSchema.getColumnName(i), key, resultColumnDataTypes[i]);
+ }
+ }
+
+ if (postGapfillFilterHandler == null || postGapfillFilterHandler.isMatch(gapfillRow)) {
+ if (bucketedResult.size() >= _limitForGapfilledResult) {
+ break;
+ } else {
+ bucketedResult.add(gapfillRow);
+ }
+ }
+ }
+ if (_limitForGapfilledResult > _groupByKeys.size()) {
+ _limitForGapfilledResult -= _groupByKeys.size();
+ } else {
+ _limitForGapfilledResult = 0;
+ }
+ return previous;
+ }
+
+ /**
+ * Make sure that the outer query has a GROUP BY clause and that the clause contains the time bucket column.
+ */
+ private void validateGroupByForOuterQuery() {
+ List<ExpressionContext> groupbyExpressions = _queryContext.getGroupByExpressions();
+ Preconditions.checkArgument(groupbyExpressions != null, "No GroupBy Clause.");
+ List<ExpressionContext> innerSelections = _queryContext.getSubQueryContext().getSelectExpressions();
+ String timeBucketCol = null;
+ List<String> strAlias = _queryContext.getSubQueryContext().getAliasList();
+ for (int i = 0; i < innerSelections.size(); i++) {
+ ExpressionContext innerSelection = innerSelections.get(i);
+ if (GapfillUtils.isGapfill(innerSelection)) {
+ if (strAlias.get(i) != null) {
+ timeBucketCol = strAlias.get(i);
+ } else {
+ timeBucketCol = innerSelection.getFunction().getArguments().get(0).toString();
+ }
+ break;
+ }
+ }
+
+ Preconditions.checkArgument(timeBucketCol != null, "No gapfill time bucket column found in the subquery selections.");
+
+ boolean findTimeBucket = false;
+ for (ExpressionContext groupbyExp : groupbyExpressions) {
+ if (timeBucketCol.equals(groupbyExp.toString())) {
+ findTimeBucket = true;
+ break;
+ }
+ }
+
+ Preconditions.checkArgument(findTimeBucket, "The GROUP BY clause does not contain the time bucket column.");
+ }
+
+ private List<Object[]> aggregateGapfilledData(List<Object[]> bucketedRows, DataSchema dataSchema) {
Review comment:
IndexedTable is used to merge intermediate aggregate results. We do not need to merge the intermediate aggregated result inside the outer aggregation, since there are no multiple segments at that point.
If the subquery is an aggregation query, the intermediate results from different Pinot segments have already been merged before gapfill happens; IndexedTable is used to merge the intermediate results there.
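The per-bucket carry-forward that this reducer performs can be sketched in a simplified, self-contained form. This is a hypothetical illustration only, not Pinot code: the class and method names (`GapfillSketch`, `gapfill`) and the flat map-based data layout are invented for the example; the real reducer works on `Object[]` rows, group keys, and fill expressions.

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.Set;
import java.util.TreeMap;

// Simplified sketch: walk the time buckets from start to end; for each
// bucket, emit the observed value per entity, or carry forward the
// entity's most recent value when the bucket has a gap.
public class GapfillSketch {
  public static Map<Long, Map<String, Double>> gapfill(
      long startMs, long endMs, long bucketSizeMs,
      Map<Long, Map<String, Double>> observed, Set<String> entities) {
    Map<Long, Map<String, Double>> result = new TreeMap<>();
    Map<String, Double> previous = new HashMap<>(); // last seen value per entity
    for (long time = startMs; time < endMs; time += bucketSizeMs) {
      Map<String, Double> bucket = new HashMap<>();
      Map<String, Double> rows = observed.getOrDefault(time, Collections.emptyMap());
      for (String entity : entities) {
        Double value = rows.get(entity);
        if (value == null) {
          value = previous.get(entity); // fill the gap with the previous value
        }
        if (value != null) {
          bucket.put(entity, value);
          previous.put(entity, value);
        }
      }
      result.put(time, bucket);
    }
    return result;
  }

  public static void main(String[] args) {
    Map<Long, Map<String, Double>> observed = new HashMap<>();
    observed.put(0L, new HashMap<>(Map.of("a", 1.0, "b", 2.0)));
    observed.put(2000L, new HashMap<>(Map.of("a", 3.0))); // "b" is missing here
    Map<Long, Map<String, Double>> filled =
        gapfill(0L, 3000L, 1000L, observed, Set.of("a", "b"));
    System.out.println(filled.get(1000L)); // bucket 1000 is entirely gap-filled
    System.out.println(filled.get(2000L).get("b")); // prints 2.0, carried forward
  }
}
```

After this per-bucket fill, the real reducer can aggregate each bucket independently, which is why no cross-segment merge is needed at that stage.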
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: commits-unsubscribe@pinot.apache.org
For queries about this service, please contact Infrastructure at:
users@infra.apache.org
---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@pinot.apache.org
For additional commands, e-mail: commits-help@pinot.apache.org
[GitHub] [pinot] codecov-commenter edited a comment on pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#issuecomment-1013577523
# [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) Report
> Merging [#8029](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (0285775) into [master](https://codecov.io/gh/apache/pinot/commit/3f98ce37fdaef0335fcd82e621489d65751b1f55?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (3f98ce3) will **decrease** coverage by `0.93%`.
> The diff coverage is `81.66%`.
[![Impacted file tree graph](https://codecov.io/gh/apache/pinot/pull/8029/graphs/tree.svg?width=650&height=150&src=pr&token=4ibza2ugkz&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
```diff
@@             Coverage Diff              @@
##             master    #8029      +/-   ##
============================================
- Coverage     70.72%   69.79%    -0.94%
  Complexity     4242     4242
============================================
  Files          1631     1641      +10
  Lines         85279    85899     +620
  Branches      12844    12997     +153
============================================
- Hits          60316    59950     -366
- Misses        20799    21770     +971
- Partials       4164     4179      +15
```
| Flag | Coverage Δ | |
|---|---|---|
| integration1 | `28.63% <13.89%> (-0.06%)` | :arrow_down: |
| integration2 | `?` | |
| unittests1 | `67.11% <81.89%> (+0.13%)` | :arrow_up: |
| unittests2 | `14.03% <0.00%> (-0.07%)` | :arrow_down: |
Flags with carried forward coverage won't be shown. [Click here](https://docs.codecov.io/docs/carryforward-flags?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#carryforward-flags-in-the-pull-request-comment) to find out more.
| [Impacted Files](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | Coverage Δ | |
|---|---|---|
| [...roker/requesthandler/BaseBrokerRequestHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtYnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9icm9rZXIvcmVxdWVzdGhhbmRsZXIvQmFzZUJyb2tlclJlcXVlc3RIYW5kbGVyLmphdmE=) | `69.92% <0.00%> (-1.93%)` | :arrow_down: |
| [...t/core/query/selection/SelectionOperatorUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9zZWxlY3Rpb24vU2VsZWN0aW9uT3BlcmF0b3JVdGlscy5qYXZh) | `91.58% <0.00%> (-0.60%)` | :arrow_down: |
| [...query/reduce/ColumnDataToBlockValSetConverter.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvQ29sdW1uRGF0YVRvQmxvY2tWYWxTZXRDb252ZXJ0ZXIuamF2YQ==) | `17.30% <17.30%> (ø)` | |
| [...ot/core/query/reduce/filter/RowMatcherFactory.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvZmlsdGVyL1Jvd01hdGNoZXJGYWN0b3J5LmphdmE=) | `66.66% <66.66%> (ø)` | |
| [.../java/org/apache/pinot/core/util/GapfillUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS91dGlsL0dhcGZpbGxVdGlscy5qYXZh) | `75.00% <81.15%> (+11.36%)` | :arrow_up: |
| [...va/org/apache/pinot/core/plan/CombinePlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL0NvbWJpbmVQbGFuTm9kZS5qYXZh) | `83.33% <81.81%> (-4.17%)` | :arrow_down: |
| [...inot/core/query/reduce/PostAggregationHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUG9zdEFnZ3JlZ2F0aW9uSGFuZGxlci5qYXZh) | `92.00% <83.33%> (+0.23%)` | :arrow_up: |
| [...ore/query/reduce/filter/LiteralValueExtractor.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvZmlsdGVyL0xpdGVyYWxWYWx1ZUV4dHJhY3Rvci5qYXZh) | `83.33% <83.33%> (ø)` | |
| [.../pinot/core/query/reduce/GapfillFilterHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvR2FwZmlsbEZpbHRlckhhbmRsZXIuamF2YQ==) | `85.71% <85.71%> (ø)` | |
| [.../reduce/PreAggregationGapFillDataTableReducer.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUHJlQWdncmVnYXRpb25HYXBGaWxsRGF0YVRhYmxlUmVkdWNlci5qYXZh) | `86.73% <86.73%> (ø)` | |
| ... and [108 more](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Last update [3f98ce3...0285775](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
[GitHub] [pinot] codecov-commenter edited a comment on pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#issuecomment-1013577523
# [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) Report
> Merging [#8029](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (0285775) into [master](https://codecov.io/gh/apache/pinot/commit/3f98ce37fdaef0335fcd82e621489d65751b1f55?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (3f98ce3) will **decrease** coverage by `56.69%`.
> The diff coverage is `0.00%`.
[![Impacted file tree graph](https://codecov.io/gh/apache/pinot/pull/8029/graphs/tree.svg?width=650&height=150&src=pr&token=4ibza2ugkz&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
```diff
@@              Coverage Diff              @@
##             master    #8029       +/-   ##
=============================================
- Coverage     70.72%   14.03%   -56.70%
+ Complexity     4242       81     -4161
=============================================
  Files          1631     1596       -35
  Lines         85279    84017     -1262
  Branches      12844    12793       -51
=============================================
- Hits          60316    11789    -48527
- Misses        20799    71344    +50545
+ Partials       4164      884     -3280
```
| Flag | Coverage Δ | |
|---|---|---|
| integration1 | `?` | |
| integration2 | `?` | |
| unittests1 | `?` | |
| unittests2 | `14.03% <0.00%> (-0.07%)` | :arrow_down: |
Flags with carried forward coverage won't be shown. [Click here](https://docs.codecov.io/docs/carryforward-flags?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#carryforward-flags-in-the-pull-request-comment) to find out more.
| [Impacted Files](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | Coverage Δ | |
|---|---|---|
| [...roker/requesthandler/BaseBrokerRequestHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtYnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9icm9rZXIvcmVxdWVzdGhhbmRsZXIvQmFzZUJyb2tlclJlcXVlc3RIYW5kbGVyLmphdmE=) | `24.41% <0.00%> (-47.44%)` | :arrow_down: |
| [...org/apache/pinot/sql/parsers/CalciteSqlParser.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9zcWwvcGFyc2Vycy9DYWxjaXRlU3FsUGFyc2VyLmphdmE=) | `0.00% <0.00%> (-86.56%)` | :arrow_down: |
| [...org/apache/pinot/core/data/table/IndexedTable.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9kYXRhL3RhYmxlL0luZGV4ZWRUYWJsZS5qYXZh) | `0.00% <0.00%> (-84.75%)` | :arrow_down: |
| [...va/org/apache/pinot/core/plan/CombinePlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL0NvbWJpbmVQbGFuTm9kZS5qYXZh) | `0.00% <0.00%> (-87.50%)` | :arrow_down: |
| [...ache/pinot/core/plan/GapfillSelectionPlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL0dhcGZpbGxTZWxlY3Rpb25QbGFuTm9kZS5qYXZh) | `0.00% <0.00%> (ø)` | |
| [...pinot/core/plan/maker/InstancePlanMakerImplV2.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL21ha2VyL0luc3RhbmNlUGxhbk1ha2VySW1wbFYyLmphdmE=) | `0.00% <0.00%> (-76.77%)` | :arrow_down: |
| [...query/reduce/ColumnDataToBlockValSetConverter.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvQ29sdW1uRGF0YVRvQmxvY2tWYWxTZXRDb252ZXJ0ZXIuamF2YQ==) | `0.00% <0.00%> (ø)` | |
| [.../pinot/core/query/reduce/GapfillFilterHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvR2FwZmlsbEZpbHRlckhhbmRsZXIuamF2YQ==) | `0.00% <0.00%> (ø)` | |
| [...e/pinot/core/query/reduce/HavingFilterHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvSGF2aW5nRmlsdGVySGFuZGxlci5qYXZh) | `0.00% <0.00%> (-91.31%)` | :arrow_down: |
| [...inot/core/query/reduce/PostAggregationHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUG9zdEFnZ3JlZ2F0aW9uSGFuZGxlci5qYXZh) | `0.00% <0.00%> (-91.77%)` | :arrow_down: |
| ... and [1323 more](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Last update [3f98ce3...0285775](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r829487992
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/reduce/filter/ValueExtractorFactory.java
##########
@@ -0,0 +1,29 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.core.query.reduce.filter;
+
+import org.apache.pinot.common.request.context.ExpressionContext;
+
+
+/**
+ * Value extractor for the post-aggregation function or pre-aggregation gap fill.
Review comment:
Fixed
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r829518555
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/selection/SelectionOperatorUtils.java
##########
@@ -88,6 +83,8 @@ private SelectionOperatorUtils() {
ThreadLocal.withInitial(() -> new DecimalFormat(FLOAT_PATTERN, DECIMAL_FORMAT_SYMBOLS));
private static final ThreadLocal<DecimalFormat> THREAD_LOCAL_DOUBLE_FORMAT =
ThreadLocal.withInitial(() -> new DecimalFormat(DOUBLE_PATTERN, DECIMAL_FORMAT_SYMBOLS));
+ private SelectionOperatorUtils() {
Review comment:
Fixed
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r829434243
##########
File path: pinot-broker/src/main/java/org/apache/pinot/broker/requesthandler/BaseBrokerRequestHandler.java
##########
@@ -217,6 +218,10 @@ private BrokerResponseNative handleSQLRequest(long requestId, String query, Json
requestStatistics.setErrorCode(QueryException.PQL_PARSING_ERROR_CODE);
return new BrokerResponseNative(QueryException.getException(QueryException.PQL_PARSING_ERROR, e));
}
+
+ BrokerRequest originalBrokerRequest = brokerRequest;
+ brokerRequest = GapfillUtils.stripGapfill(originalBrokerRequest);
Review comment:
Fixed
##########
File path: pinot-broker/src/main/java/org/apache/pinot/broker/requesthandler/BaseBrokerRequestHandler.java
##########
@@ -2183,9 +2192,9 @@ private void attachTimeBoundary(String rawTableName, BrokerRequest brokerRequest
* Processes the optimized broker requests for both OFFLINE and REALTIME table.
*/
protected abstract BrokerResponseNative processBrokerRequest(long requestId, BrokerRequest originalBrokerRequest,
- @Nullable BrokerRequest offlineBrokerRequest, @Nullable Map<ServerInstance, List<String>> offlineRoutingTable,
- @Nullable BrokerRequest realtimeBrokerRequest, @Nullable Map<ServerInstance, List<String>> realtimeRoutingTable,
- long timeoutMs, ServerStats serverStats, RequestStatistics requestStatistics)
+ BrokerRequest brokerRequest, @Nullable BrokerRequest offlineBrokerRequest, @Nullable Map<ServerInstance,
Review comment:
Fixed
##########
File path: pinot-common/src/main/java/org/apache/pinot/sql/parsers/CalciteSqlParser.java
##########
@@ -67,9 +67,7 @@
public class CalciteSqlParser {
- private CalciteSqlParser() {
- }
-
+ public static final List<QueryRewriter> QUERY_REWRITERS = new ArrayList<>(QueryRewriterFactory.getQueryRewriters());
Review comment:
Fixed
##########
File path: pinot-common/src/main/java/org/apache/pinot/sql/parsers/CalciteSqlParser.java
##########
@@ -100,6 +95,9 @@ private CalciteSqlParser() {
private static final Pattern OPTIONS_REGEX_PATTEN =
Pattern.compile("option\\s*\\(([^\\)]+)\\)", Pattern.CASE_INSENSITIVE);
+ private CalciteSqlParser() {
Review comment:
Fixed
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/reduce/BrokerReduceService.java
##########
@@ -108,7 +108,12 @@ public BrokerResponseNative reduceOnDataTable(BrokerRequest brokerRequest,
dataTableReducer.reduceAndSetResults(rawTableName, cachedDataSchema, dataTableMap, brokerResponseNative,
new DataTableReducerContext(_reduceExecutorService, _maxReduceThreadsPerQuery, reduceTimeOutMs,
_groupByTrimThreshold), brokerMetrics);
- updateAlias(queryContext, brokerResponseNative);
+ QueryContext originalQueryContext = BrokerRequestToQueryContextConverter.convert(originalBrokerRequest);
Review comment:
Fixed
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/reduce/ColumnDataToBlockValSetConverter.java
##########
@@ -0,0 +1,181 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.core.query.reduce;
+
+import java.util.List;
+import javax.annotation.Nullable;
+import org.apache.pinot.common.utils.DataSchema;
+import org.apache.pinot.common.utils.DataTable;
+import org.apache.pinot.core.common.BlockValSet;
+import org.apache.pinot.segment.spi.index.reader.Dictionary;
+import org.apache.pinot.spi.data.FieldSpec;
+
+
+/**
+ * For the gap-filling function, all raw data is retrieved from the Pinot
+ * servers and merged on the Pinot broker, in {@link DataTable} format.
+ * As part of the gap-filling execution plan, the aggregation function then
+ * works on the merged data on the broker, but it only accepts input in
+ * {@link BlockValSet} format.
+ * This helper class converts the data from {@link DataTable} to the block of
+ * values {@link BlockValSet} that is used as input to the aggregation
+ * function.
+ */
+@SuppressWarnings({"rawtypes", "unchecked"})
Review comment:
Fixed
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/reduce/ColumnDataToBlockValSetConverter.java
##########
@@ -0,0 +1,181 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.core.query.reduce;
+
+import java.util.List;
+import javax.annotation.Nullable;
+import org.apache.pinot.common.utils.DataSchema;
+import org.apache.pinot.common.utils.DataTable;
+import org.apache.pinot.core.common.BlockValSet;
+import org.apache.pinot.segment.spi.index.reader.Dictionary;
+import org.apache.pinot.spi.data.FieldSpec;
+
+
+/**
+ * For the gap-filling function, all raw data is retrieved from the Pinot
+ * servers and merged on the Pinot broker, in {@link DataTable} format.
+ * As part of the gap-filling execution plan, the aggregation function then
+ * works on the merged data on the broker, but it only accepts input in
+ * {@link BlockValSet} format.
+ * This helper class converts the data from {@link DataTable} to the block of
+ * values {@link BlockValSet} that is used as input to the aggregation
+ * function.
+ */
+@SuppressWarnings({"rawtypes", "unchecked"})
+public class ColumnDataToBlockValSetConverter implements BlockValSet {
Review comment:
Fixed
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/reduce/GapFillProcessor.java
##########
@@ -0,0 +1,455 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.core.query.reduce;
+
+import com.google.common.base.Preconditions;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import org.apache.pinot.common.request.context.ExpressionContext;
+import org.apache.pinot.common.request.context.FunctionContext;
+import org.apache.pinot.common.response.broker.BrokerResponseNative;
+import org.apache.pinot.common.response.broker.ResultTable;
+import org.apache.pinot.common.utils.DataSchema;
+import org.apache.pinot.common.utils.DataSchema.ColumnDataType;
+import org.apache.pinot.core.common.BlockValSet;
+import org.apache.pinot.core.data.table.Key;
+import org.apache.pinot.core.query.aggregation.function.AggregationFunction;
+import org.apache.pinot.core.query.aggregation.function.AggregationFunctionFactory;
+import org.apache.pinot.core.query.aggregation.groupby.GroupByResultHolder;
+import org.apache.pinot.core.query.request.context.QueryContext;
+import org.apache.pinot.core.util.GapfillUtils;
+import org.apache.pinot.spi.data.DateTimeFormatSpec;
+import org.apache.pinot.spi.data.DateTimeGranularitySpec;
+
+
+/**
+ * Helper class to reduce and set gap fill results into the BrokerResponseNative
+ */
+@SuppressWarnings({"rawtypes", "unchecked"})
+public class GapFillProcessor {
Review comment:
Fixed
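The GapFillProcessor under review operates on Pinot's internal query types. As a language-neutral illustration of the underlying idea alone — walking the time buckets per entity and filling each missing bucket with the previously observed value — here is a minimal standalone sketch; all class and method names below are hypothetical stand-ins, not Pinot APIs:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.TreeMap;

// Minimal sketch of per-entity gap filling: for each entity, walk the
// time buckets in order and fill any missing bucket with the last
// observed value before aggregation would run over the filled series.
class GapFillSketch {
    // rows: entity -> (bucketStartMs -> value); buckets spaced by bucketMs
    static Map<String, Map<Long, Double>> gapFill(
            Map<String, Map<Long, Double>> rows, long startMs, long endMs, long bucketMs) {
        Map<String, Map<Long, Double>> filled = new HashMap<>();
        for (Map.Entry<String, Map<Long, Double>> e : rows.entrySet()) {
            Map<Long, Double> series = e.getValue();
            Map<Long, Double> out = new TreeMap<>();
            Double last = null;
            for (long t = startMs; t < endMs; t += bucketMs) {
                Double v = series.get(t);
                if (v != null) {
                    last = v;           // observed value for this bucket
                }
                if (last != null) {
                    out.put(t, last);   // gap is filled with the previous value
                }
            }
            filled.put(e.getKey(), out);
        }
        return filled;
    }
}
```

Buckets before the first observation are left empty here; a real implementation would also have to decide on a fill policy for that leading gap (e.g. a default value or omission).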
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/request/context/QueryContext.java
##########
@@ -188,6 +194,10 @@ public FilterContext getHavingFilter() {
return _orderByExpressions;
}
+ public QueryContext getSubQueryContext() {
Review comment:
Fixed
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/request/context/QueryContext.java
##########
@@ -85,6 +86,7 @@
// Keep the BrokerRequest to make incremental changes
// TODO: Remove it once the whole query engine is using the QueryContext
private final BrokerRequest _brokerRequest;
+ private final QueryContext _subQueryContext;
Review comment:
Fixed
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/request/context/QueryContext.java
##########
@@ -375,6 +393,57 @@ public String toString() {
private Map<String, String> _queryOptions;
private Map<String, String> _debugOptions;
private BrokerRequest _brokerRequest;
+ private QueryContext _subQueryContext;
Review comment:
Fixed
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/request/context/QueryContext.java
##########
@@ -436,6 +505,11 @@ public Builder setBrokerRequest(BrokerRequest brokerRequest) {
return this;
}
+ public Builder setSubqueryContext(QueryContext subQueryContext) {
Review comment:
Fixed
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/request/context/utils/BrokerRequestToQueryContextConverter.java
##########
@@ -42,23 +42,42 @@
import org.apache.pinot.common.utils.request.FilterQueryTree;
import org.apache.pinot.common.utils.request.RequestUtils;
import org.apache.pinot.core.query.request.context.QueryContext;
+import org.apache.pinot.core.util.GapfillUtils;
import org.apache.pinot.segment.spi.AggregationFunctionType;
public class BrokerRequestToQueryContextConverter {
private BrokerRequestToQueryContextConverter() {
}
+ /**
+ * Validate the gapfill query.
+ */
+ public static void validateGapfillQuery(BrokerRequest brokerRequest) {
+ if (brokerRequest.getPinotQuery() != null) {
+ QueryContext queryContext = convertSQL(brokerRequest.getPinotQuery(), brokerRequest);
+ GapfillUtils.setGapfillType(queryContext);
+ }
+ }
+
/**
* Converts the given {@link BrokerRequest} into a {@link QueryContext}.
*/
public static QueryContext convert(BrokerRequest brokerRequest) {
- return brokerRequest.getPinotQuery() != null ? convertSQL(brokerRequest) : convertPQL(brokerRequest);
+ if (brokerRequest.getPinotQuery() != null) {
+ QueryContext queryContext = convertSQL(brokerRequest.getPinotQuery(), brokerRequest);
+ GapfillUtils.setGapfillType(queryContext);
+ return queryContext;
+ } else {
+ return convertPQL(brokerRequest);
+ }
}
- private static QueryContext convertSQL(BrokerRequest brokerRequest) {
- PinotQuery pinotQuery = brokerRequest.getPinotQuery();
-
+ private static QueryContext convertSQL(PinotQuery pinotQuery, BrokerRequest brokerRequest) {
+ QueryContext subQueryContext = null;
Review comment:
Fixed
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/request/context/QueryContext.java
##########
@@ -375,6 +393,57 @@ public String toString() {
private Map<String, String> _queryOptions;
private Map<String, String> _debugOptions;
private BrokerRequest _brokerRequest;
+ private QueryContext _subQueryContext;
+
+ /**
+ * Helper method to extract AGGREGATION FunctionContexts and FILTER FilterContexts from the given expression.
+ */
+ private static void getAggregations(ExpressionContext expression,
+ List<Pair<FunctionContext, FilterContext>> filteredAggregations) {
+ FunctionContext function = expression.getFunction();
+ if (function == null) {
+ return;
+ }
+ if (function.getType() == FunctionContext.Type.AGGREGATION) {
+ // Aggregation
+ filteredAggregations.add(Pair.of(function, null));
+ } else {
+ List<ExpressionContext> arguments = function.getArguments();
+ if (function.getFunctionName().equalsIgnoreCase("filter")) {
+ // Filtered aggregation
+ Preconditions.checkState(arguments.size() == 2, "FILTER must contain 2 arguments");
+ FunctionContext aggregation = arguments.get(0).getFunction();
+ Preconditions.checkState(aggregation != null && aggregation.getType() == FunctionContext.Type.AGGREGATION,
+ "First argument of FILTER must be an aggregation function");
+ ExpressionContext filterExpression = arguments.get(1);
+ Preconditions.checkState(filterExpression.getFunction() != null
+ && filterExpression.getFunction().getType() == FunctionContext.Type.TRANSFORM,
+ "Second argument of FILTER must be a filter expression");
+ FilterContext filter = RequestContextUtils.getFilter(filterExpression);
+ filteredAggregations.add(Pair.of(aggregation, filter));
+ } else {
+ // Transform
+ for (ExpressionContext argument : arguments) {
+ getAggregations(argument, filteredAggregations);
+ }
+ }
+ }
+ }
+
+ /**
+ * Helper method to extract AGGREGATION FunctionContexts and FILTER FilterContexts from the given filter.
+ */
+ private static void getAggregations(FilterContext filter,
+ List<Pair<FunctionContext, FilterContext>> filteredAggregations) {
+ List<FilterContext> children = filter.getChildren();
+ if (children != null) {
+ for (FilterContext child : children) {
+ getAggregations(child, filteredAggregations);
+ }
+ } else {
+ getAggregations(filter.getPredicate().getLhs(), filteredAggregations);
+ }
+ }
Review comment:
Fixed
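The traversal pattern in the quoted getAggregations helper — recurse through a function-call tree, collecting AGGREGATION nodes and descending into TRANSFORM arguments — can be shown in isolation. A hedged sketch with a hypothetical Node type standing in for Pinot's ExpressionContext/FunctionContext:

```java
import java.util.Arrays;
import java.util.List;

// Sketch of recursively collecting aggregation calls from an expression
// tree. Node is a stand-in type, not Pinot's ExpressionContext.
class AggregationCollector {
    enum Type { AGGREGATION, TRANSFORM, LITERAL }

    static class Node {
        final Type type;
        final String name;
        final List<Node> args;
        Node(Type type, String name, Node... args) {
            this.type = type;
            this.name = name;
            this.args = Arrays.asList(args);
        }
    }

    static void collect(Node node, List<Node> out) {
        if (node.type == Type.AGGREGATION) {
            out.add(node);              // node of interest: an aggregation call
        } else if (node.type == Type.TRANSFORM) {
            for (Node arg : node.args) {
                collect(arg, out);      // recurse into transform arguments
            }
        }
        // literals contain no aggregations, so they are skipped
    }
}
```

The quoted Pinot code additionally special-cases a FILTER function wrapping an aggregation; this sketch omits that branch for brevity.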
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/reduce/ColumnDataToBlockValSetConverter.java
##########
@@ -0,0 +1,181 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.core.query.reduce;
+
+import java.util.List;
+import javax.annotation.Nullable;
+import org.apache.pinot.common.utils.DataSchema;
+import org.apache.pinot.common.utils.DataTable;
+import org.apache.pinot.core.common.BlockValSet;
+import org.apache.pinot.segment.spi.index.reader.Dictionary;
+import org.apache.pinot.spi.data.FieldSpec;
+
+
+/**
+ * For the gap-filling function, all raw data is retrieved from the Pinot
+ * servers and merged on the Pinot broker, in {@link DataTable} format.
+ * As part of the gap-filling execution plan, the aggregation function then
+ * works on the merged data on the broker, but it only accepts input in
+ * {@link BlockValSet} format.
+ * This helper class converts the data from {@link DataTable} to the block of
+ * values {@link BlockValSet} that is used as input to the aggregation
+ * function.
+ */
+@SuppressWarnings({"rawtypes", "unchecked"})
+public class ColumnDataToBlockValSetConverter implements BlockValSet {
+
+ private final FieldSpec.DataType _dataType;
+ private final List<Object[]> _rows;
+ private final int _columnIndex;
+
+ public ColumnDataToBlockValSetConverter(DataSchema.ColumnDataType columnDataType, List<Object[]> rows,
+ int columnIndex) {
+ _dataType = columnDataType.toDataType();
+ _rows = rows;
+ _columnIndex = columnIndex;
+ }
+
+ @Override
+ public FieldSpec.DataType getValueType() {
+ return _dataType;
+ }
+
+ @Override
+ public boolean isSingleValue() {
+ return true;
+ }
+
+ @Nullable
+ @Override
+ public Dictionary getDictionary() {
+ throw new UnsupportedOperationException("Not supported");
Review comment:
Fixed
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/reduce/ColumnDataToBlockValSetConverter.java
##########
@@ -0,0 +1,181 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.core.query.reduce;
+
+import java.util.List;
+import javax.annotation.Nullable;
+import org.apache.pinot.common.utils.DataSchema;
+import org.apache.pinot.common.utils.DataTable;
+import org.apache.pinot.core.common.BlockValSet;
+import org.apache.pinot.segment.spi.index.reader.Dictionary;
+import org.apache.pinot.spi.data.FieldSpec;
+
+
+/**
+ * For the gap-filling function, all raw data is retrieved from the Pinot
+ * servers and merged on the Pinot broker, in {@link DataTable} format.
+ * As part of the gap-filling execution plan, the aggregation function then
+ * works on the merged data on the broker, but it only accepts input in
+ * {@link BlockValSet} format.
+ * This helper class converts the data from {@link DataTable} to the block of
+ * values {@link BlockValSet} that is used as input to the aggregation
+ * function.
+ */
+@SuppressWarnings({"rawtypes", "unchecked"})
+public class ColumnDataToBlockValSetConverter implements BlockValSet {
+
+ private final FieldSpec.DataType _dataType;
+ private final List<Object[]> _rows;
+ private final int _columnIndex;
+
+ public ColumnDataToBlockValSetConverter(DataSchema.ColumnDataType columnDataType, List<Object[]> rows,
+ int columnIndex) {
+ _dataType = columnDataType.toDataType();
+ _rows = rows;
+ _columnIndex = columnIndex;
+ }
+
+ @Override
+ public FieldSpec.DataType getValueType() {
+ return _dataType;
+ }
+
+ @Override
+ public boolean isSingleValue() {
+ return true;
+ }
+
+ @Nullable
+ @Override
+ public Dictionary getDictionary() {
+ throw new UnsupportedOperationException("Not supported");
+ }
+
+ @Override
+ public int[] getDictionaryIdsSV() {
+ throw new UnsupportedOperationException("Not supported");
Review comment:
Fixed
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/reduce/ColumnDataToBlockValSetConverter.java
##########
@@ -0,0 +1,181 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.core.query.reduce;
+
+import java.util.List;
+import javax.annotation.Nullable;
+import org.apache.pinot.common.utils.DataSchema;
+import org.apache.pinot.common.utils.DataTable;
+import org.apache.pinot.core.common.BlockValSet;
+import org.apache.pinot.segment.spi.index.reader.Dictionary;
+import org.apache.pinot.spi.data.FieldSpec;
+
+
+/**
+ * For the gap-filling function, all raw data is retrieved from the Pinot
+ * servers and merged on the Pinot broker, in {@link DataTable} format.
+ * As part of the gap-filling execution plan, the aggregation function then
+ * works on the merged data on the broker, but it only accepts input in
+ * {@link BlockValSet} format.
+ * This helper class converts the data from {@link DataTable} to the block of
+ * values {@link BlockValSet} that is used as input to the aggregation
+ * function.
+ */
+@SuppressWarnings({"rawtypes", "unchecked"})
+public class ColumnDataToBlockValSetConverter implements BlockValSet {
+
+ private final FieldSpec.DataType _dataType;
+ private final List<Object[]> _rows;
+ private final int _columnIndex;
+
+ public ColumnDataToBlockValSetConverter(DataSchema.ColumnDataType columnDataType, List<Object[]> rows,
+ int columnIndex) {
+ _dataType = columnDataType.toDataType();
+ _rows = rows;
+ _columnIndex = columnIndex;
+ }
+
+ @Override
+ public FieldSpec.DataType getValueType() {
+ return _dataType;
+ }
+
+ @Override
+ public boolean isSingleValue() {
+ return true;
+ }
+
+ @Nullable
+ @Override
+ public Dictionary getDictionary() {
+ throw new UnsupportedOperationException("Not supported");
+ }
+
+ @Override
+ public int[] getDictionaryIdsSV() {
+ throw new UnsupportedOperationException("Not supported");
+ }
+
+ @Override
+ public int[] getIntValuesSV() {
+ if (_dataType == FieldSpec.DataType.INT) {
+ int[] result = new int[_rows.size()];
+ for (int i = 0; i < result.length; i++) {
+ result[i] = (Integer) _rows.get(i)[_columnIndex];
+ }
+ return result;
+ }
+ throw new UnsupportedOperationException("Not supported");
Review comment:
Fixed
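For readers following the diff, the converter's core idea can be shown with a minimal, self-contained sketch: the broker holds merged results as row-major `List<Object[]>`, and one column is copied out into a typed primitive array on demand. The class and method names below are illustrative stand-ins, not Pinot's actual `BlockValSet` API.

```java
import java.util.ArrayList;
import java.util.List;

public class ColumnExtractorSketch {
    // Copy one column of row-major data into a double[], widening INT to
    // double the same way the diff's getDoubleValuesSV does.
    public static double[] extractDoubleColumn(List<Object[]> rows, int columnIndex) {
        double[] result = new double[rows.size()];
        for (int i = 0; i < rows.size(); i++) {
            Object v = rows.get(i)[columnIndex];
            result[i] = (v instanceof Integer) ? ((Integer) v).doubleValue() : (Double) v;
        }
        return result;
    }

    public static void main(String[] args) {
        List<Object[]> rows = new ArrayList<>();
        rows.add(new Object[]{"k1", 1});
        rows.add(new Object[]{"k2", 2.5});
        double[] col = extractDoubleColumn(rows, 1);
        System.out.println(col[0] + " " + col[1]); // 1.0 2.5
    }
}
```

The real class throws `UnsupportedOperationException` for any type it does not handle, which is why each getter guards on `_dataType` first.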
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/reduce/ColumnDataToBlockValSetConverter.java
##########
@@ -0,0 +1,181 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.core.query.reduce;
+
+import java.util.List;
+import javax.annotation.Nullable;
+import org.apache.pinot.common.utils.DataSchema;
+import org.apache.pinot.common.utils.DataTable;
+import org.apache.pinot.core.common.BlockValSet;
+import org.apache.pinot.segment.spi.index.reader.Dictionary;
+import org.apache.pinot.spi.data.FieldSpec;
+
+
+/**
+ * As for Gapfilling Function, all raw data will be retrieved from the pinot
Review comment:
Fixed
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/reduce/ColumnDataToBlockValSetConverter.java
##########
@@ -0,0 +1,181 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.core.query.reduce;
+
+import java.util.List;
+import javax.annotation.Nullable;
+import org.apache.pinot.common.utils.DataSchema;
+import org.apache.pinot.common.utils.DataTable;
+import org.apache.pinot.core.common.BlockValSet;
+import org.apache.pinot.segment.spi.index.reader.Dictionary;
+import org.apache.pinot.spi.data.FieldSpec;
+
+
+/**
+ * For the Gapfilling function, all raw data is retrieved from the Pinot
+ * servers and merged on the Pinot broker, where it is held in {@link DataTable}
+ * format.
+ * As part of the Gapfilling function execution plan, the aggregation function
+ * operates on the merged data on the broker, but it only accepts input in
+ * {@link BlockValSet} format.
+ * This is the helper class that converts the data from {@link DataTable} to a
+ * block of values {@link BlockValSet}, which is used as input to the
+ * aggregation function.
+ */
+@SuppressWarnings({"rawtypes", "unchecked"})
+public class ColumnDataToBlockValSetConverter implements BlockValSet {
+
+ private final FieldSpec.DataType _dataType;
+ private final List<Object[]> _rows;
+ private final int _columnIndex;
+
+ public ColumnDataToBlockValSetConverter(DataSchema.ColumnDataType columnDataType, List<Object[]> rows,
+ int columnIndex) {
+ _dataType = columnDataType.toDataType();
+ _rows = rows;
+ _columnIndex = columnIndex;
+ }
+
+ @Override
+ public FieldSpec.DataType getValueType() {
+ return _dataType;
+ }
+
+ @Override
+ public boolean isSingleValue() {
+ return true;
+ }
+
+ @Nullable
+ @Override
+ public Dictionary getDictionary() {
+ throw new UnsupportedOperationException("Not supported");
+ }
+
+ @Override
+ public int[] getDictionaryIdsSV() {
+ throw new UnsupportedOperationException("Not supported");
+ }
+
+ @Override
+ public int[] getIntValuesSV() {
+ if (_dataType == FieldSpec.DataType.INT) {
+ int[] result = new int[_rows.size()];
+ for (int i = 0; i < result.length; i++) {
+ result[i] = (Integer) _rows.get(i)[_columnIndex];
+ }
+ return result;
+ }
+ throw new UnsupportedOperationException("Not supported");
+ }
+
+ @Override
+ public long[] getLongValuesSV() {
+ if (_dataType == FieldSpec.DataType.LONG) {
+ long[] result = new long[_rows.size()];
+ for (int i = 0; i < result.length; i++) {
+ result[i] = (Long) _rows.get(i)[_columnIndex];
+ }
+ return result;
+ }
+ throw new UnsupportedOperationException("Not supported");
+ }
+
+ @Override
+ public float[] getFloatValuesSV() {
+ if (_dataType == FieldSpec.DataType.FLOAT) {
+ float[] result = new float[_rows.size()];
+ for (int i = 0; i < result.length; i++) {
+ result[i] = (Float) _rows.get(i)[_columnIndex];
+ }
+ return result;
+ }
+ throw new UnsupportedOperationException("Not supported");
+ }
+
+ @Override
+ public double[] getDoubleValuesSV() {
+ if (_dataType == FieldSpec.DataType.DOUBLE) {
+ double[] result = new double[_rows.size()];
+ for (int i = 0; i < result.length; i++) {
+ result[i] = (Double) _rows.get(i)[_columnIndex];
+ }
+ return result;
+ } else if (_dataType == FieldSpec.DataType.INT) {
+ double[] result = new double[_rows.size()];
+ for (int i = 0; i < result.length; i++) {
+ result[i] = ((Integer) _rows.get(i)[_columnIndex]).doubleValue();
+ }
+ return result;
+ }
+ throw new UnsupportedOperationException("Not supported");
+ }
+
+ @Override
+ public String[] getStringValuesSV() {
+ if (_dataType == FieldSpec.DataType.STRING) {
+ String[] result = new String[_rows.size()];
+ for (int i = 0; i < result.length; i++) {
+ result[i] = (String) _rows.get(i)[_columnIndex];
+ }
+ return result;
+ }
+ throw new UnsupportedOperationException("Not supported");
Review comment:
Fixed
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/reduce/filter/ValueExtractorFactory.java
##########
@@ -0,0 +1,29 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.core.query.reduce.filter;
+
+import org.apache.pinot.common.request.context.ExpressionContext;
+
+
+/**
+ * Value extractor for the post-aggregation function or pre-aggregation gap fill.
Review comment:
Fixed
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/reduce/filter/RowMatcherFactory.java
##########
@@ -0,0 +1,44 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.core.query.reduce.filter;
+
+import org.apache.pinot.common.request.context.FilterContext;
+
+
+/**
+ * Factory for RowMatcher.
+ */
+public interface RowMatcherFactory {
Review comment:
Fixed
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/reduce/filter/ValueExtractorFactory.java
##########
@@ -0,0 +1,29 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.core.query.reduce.filter;
+
+import org.apache.pinot.common.request.context.ExpressionContext;
+
+
+/**
+ * Value extractor for the post-aggregation function or pre-aggregation gap fill.
+ */
+public interface ValueExtractorFactory {
+ ValueExtractor getValueExtractor(ExpressionContext expression);
Review comment:
Fixed
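The `ValueExtractor` contract that this factory produces can be sketched without Pinot's types: given a parsed expression, the factory hands back either a constant-returning extractor (for literals) or a column-index-based one (for identifiers). The interface and helpers below are simplified stand-ins for `ExpressionContext`/`ValueExtractor`, assumed from the diff, not the real signatures.

```java
public class ValueExtractorSketch {
    // Single-method interface, so lambdas can implement it.
    public interface ValueExtractor {
        Object extract(Object[] row);
    }

    // Literal expressions always yield the same constant.
    public static ValueExtractor literal(Object value) {
        return row -> value;
    }

    // Identifier expressions read one column of the row.
    public static ValueExtractor column(int index) {
        return row -> row[index];
    }

    public static void main(String[] args) {
        Object[] row = {"2021-01-01", 42L};
        System.out.println(literal("fill").extract(row)); // fill
        System.out.println(column(1).extract(row));       // 42
    }
}
```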
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/reduce/GapfillFilterHandler.java
##########
@@ -0,0 +1,76 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.core.query.reduce;
+
+import java.util.HashMap;
+import java.util.Map;
+import org.apache.pinot.common.request.context.ExpressionContext;
+import org.apache.pinot.common.request.context.FilterContext;
+import org.apache.pinot.common.utils.DataSchema;
+import org.apache.pinot.core.query.reduce.filter.ColumnValueExtractor;
+import org.apache.pinot.core.query.reduce.filter.LiteralValueExtractor;
+import org.apache.pinot.core.query.reduce.filter.RowMatcher;
+import org.apache.pinot.core.query.reduce.filter.RowMatcherFactory;
+import org.apache.pinot.core.query.reduce.filter.ValueExtractor;
+import org.apache.pinot.core.query.reduce.filter.ValueExtractorFactory;
+import org.apache.pinot.core.util.GapfillUtils;
+
+
+/**
+ * Handler for Filter clause of GapFill.
+ */
+public class GapfillFilterHandler implements ValueExtractorFactory {
+ private final RowMatcher _rowMatcher;
+ private final DataSchema _dataSchema;
+ private final Map<String, Integer> _indexes;
+
+ public GapfillFilterHandler(FilterContext filter, DataSchema dataSchema) {
+ _dataSchema = dataSchema;
+ _indexes = new HashMap<>();
+ for (int i = 0; i < _dataSchema.size(); i++) {
+ _indexes.put(_dataSchema.getColumnName(i), i);
+ }
+ _rowMatcher = RowMatcherFactory.getRowMatcher(filter, this);
+ }
+
+ /**
+ * Returns {@code true} if the given row matches the HAVING clause, {@code false} otherwise.
+ */
+ public boolean isMatch(Object[] row) {
+ return _rowMatcher.isMatch(row);
+ }
+
+ /**
+ * Returns a ValueExtractor based on the given expression.
+ */
+ @Override
+ public ValueExtractor getValueExtractor(ExpressionContext expression) {
+ expression = GapfillUtils.stripGapfill(expression);
+ if (expression.getType() == ExpressionContext.Type.LITERAL) {
+ // Literal
+ return new LiteralValueExtractor(expression.getLiteral());
+ }
+
+ if (expression.getType() == ExpressionContext.Type.IDENTIFIER) {
+ return new ColumnValueExtractor(_indexes.get(expression.getIdentifier()), _dataSchema);
+ } else {
+ return new ColumnValueExtractor(_indexes.get(expression.getFunction().toString()), _dataSchema);
Review comment:
Add TODO
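The setup the constructor performs — build a column-name-to-index map once, then evaluate the filter per row — can be illustrated with a self-contained sketch. The hard-coded greater-than predicate below stands in for the `RowMatcher` that Pinot builds from a `FilterContext`; the filter shape is a hypothetical example, not the PR's implementation.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Predicate;

public class GapfillFilterSketch {
    private final Map<String, Integer> indexes = new HashMap<>();
    private final Predicate<Object[]> rowMatcher;

    public GapfillFilterSketch(String[] columnNames, String filterColumn, long threshold) {
        // Map each column name to its position in the row, as the diff does
        // with DataSchema.getColumnName(i).
        for (int i = 0; i < columnNames.length; i++) {
            indexes.put(columnNames[i], i);
        }
        int idx = indexes.get(filterColumn);
        // Hypothetical filter: column value > threshold.
        rowMatcher = row -> ((Number) row[idx]).longValue() > threshold;
    }

    public boolean isMatch(Object[] row) {
        return rowMatcher.test(row);
    }

    public static void main(String[] args) {
        GapfillFilterSketch h =
            new GapfillFilterSketch(new String[]{"ts", "occupied"}, "occupied", 0);
        System.out.println(h.isMatch(new Object[]{1L, 1L})); // true
        System.out.println(h.isMatch(new Object[]{2L, 0L})); // false
    }
}
```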
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/reduce/GapfillFilterHandler.java
##########
@@ -0,0 +1,76 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.core.query.reduce;
+
+import java.util.HashMap;
+import java.util.Map;
+import org.apache.pinot.common.request.context.ExpressionContext;
+import org.apache.pinot.common.request.context.FilterContext;
+import org.apache.pinot.common.utils.DataSchema;
+import org.apache.pinot.core.query.reduce.filter.ColumnValueExtractor;
+import org.apache.pinot.core.query.reduce.filter.LiteralValueExtractor;
+import org.apache.pinot.core.query.reduce.filter.RowMatcher;
+import org.apache.pinot.core.query.reduce.filter.RowMatcherFactory;
+import org.apache.pinot.core.query.reduce.filter.ValueExtractor;
+import org.apache.pinot.core.query.reduce.filter.ValueExtractorFactory;
+import org.apache.pinot.core.util.GapfillUtils;
+
+
+/**
+ * Handler for Filter clause of GapFill.
+ */
+public class GapfillFilterHandler implements ValueExtractorFactory {
+ private final RowMatcher _rowMatcher;
+ private final DataSchema _dataSchema;
+ private final Map<String, Integer> _indexes;
+
+ public GapfillFilterHandler(FilterContext filter, DataSchema dataSchema) {
+ _dataSchema = dataSchema;
+ _indexes = new HashMap<>();
+ for (int i = 0; i < _dataSchema.size(); i++) {
+ _indexes.put(_dataSchema.getColumnName(i), i);
Review comment:
Add TODO
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/selection/SelectionOperatorUtils.java
##########
@@ -88,6 +83,8 @@ private SelectionOperatorUtils() {
ThreadLocal.withInitial(() -> new DecimalFormat(FLOAT_PATTERN, DECIMAL_FORMAT_SYMBOLS));
private static final ThreadLocal<DecimalFormat> THREAD_LOCAL_DOUBLE_FORMAT =
ThreadLocal.withInitial(() -> new DecimalFormat(DOUBLE_PATTERN, DECIMAL_FORMAT_SYMBOLS));
+ private SelectionOperatorUtils() {
Review comment:
Fixed
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/selection/SelectionOperatorUtils.java
##########
@@ -391,6 +388,9 @@ public static DataTable getDataTableFromRows(Collection<Object[]> rows, DataSche
row[i] = dataTable.getStringArray(rowId, i);
break;
+ case OBJECT:
Review comment:
Revert it.
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/transport/QueryRouter.java
##########
@@ -114,6 +115,7 @@ public AsyncQueryResponse submitQuery(long requestId, String rawTableName,
Map<ServerRoutingInstance, InstanceRequest> requestMap = new HashMap<>();
if (offlineBrokerRequest != null) {
assert offlineRoutingTable != null;
+ BrokerRequestToQueryContextConverter.validateGapfillQuery(offlineBrokerRequest);
Review comment:
Fixed
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/request/context/utils/BrokerRequestToQueryContextConverter.java
##########
@@ -42,23 +42,42 @@
import org.apache.pinot.common.utils.request.FilterQueryTree;
import org.apache.pinot.common.utils.request.RequestUtils;
import org.apache.pinot.core.query.request.context.QueryContext;
+import org.apache.pinot.core.util.GapfillUtils;
import org.apache.pinot.segment.spi.AggregationFunctionType;
public class BrokerRequestToQueryContextConverter {
private BrokerRequestToQueryContextConverter() {
}
+ /**
+ * Validate the gapfill query.
+ */
+ public static void validateGapfillQuery(BrokerRequest brokerRequest) {
Review comment:
Fixed
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/util/GapfillUtils.java
##########
@@ -71,12 +86,15 @@ public static boolean isFill(ExpressionContext expressionContext) {
return false;
}
- return FILL.equals(canonicalizeFunctionName(expressionContext.getFunction().getFunctionName()));
+ return FILL.equalsIgnoreCase(canonicalizeFunctionName(expressionContext.getFunction().getFunctionName()));
Review comment:
Fixed
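The reason the comparison needed attention: SQL function names can arrive as `gap_fill`, `GapFill`, or `GAPFILL`, and `canonicalizeFunctionName` maps them all to one form by stripping underscores and lowercasing, so a plain `equals()` on the canonical form would already match. A minimal sketch of that canonicalization (mirroring the diff's helper, using `String.replace` instead of `StringUtils.remove` to stay dependency-free):

```java
public class FunctionNameSketch {
    // Strip underscores and lowercase, so all spellings of a function name
    // collapse to one canonical token.
    public static String canonicalize(String functionName) {
        return functionName.replace("_", "").toLowerCase();
    }

    public static void main(String[] args) {
        System.out.println(canonicalize("Gap_Fill")); // gapfill
        System.out.println(canonicalize("GAPFILL"));  // gapfill
    }
}
```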
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/util/GapfillUtils.java
##########
@@ -119,4 +137,265 @@ static public Serializable getDefaultValue(DataSchema.ColumnDataType dataType) {
private static String canonicalizeFunctionName(String functionName) {
return StringUtils.remove(functionName, '_').toLowerCase();
}
+
+ public static boolean isGapfill(ExpressionContext expressionContext) {
+ if (expressionContext.getType() != ExpressionContext.Type.FUNCTION) {
+ return false;
+ }
+
+ return GAP_FILL.equals(canonicalizeFunctionName(expressionContext.getFunction().getFunctionName()));
+ }
+
+ private static boolean isGapfill(QueryContext queryContext) {
+ for (ExpressionContext expressionContext : queryContext.getSelectExpressions()) {
+ if (isGapfill(expressionContext)) {
+ return true;
+ }
+ }
+ return false;
+ }
+
+ /**
+ * Determine and set the gapfill type for the given query context, and
+ * validate the gapfill request.
+ * @param queryContext the query context to classify and validate
+ */
+ public static void setGapfillType(QueryContext queryContext) {
+ GapfillType gapfillType = null;
+ if (queryContext.getSubQueryContext() == null) {
+ if (isGapfill(queryContext)) {
+ Preconditions.checkArgument(queryContext.getAggregationFunctions() == null,
+ "Aggregation and Gapfill can not be in the same sql statement.");
+ gapfillType = GapfillType.GAP_FILL;
+ }
+ } else if (isGapfill(queryContext)) {
+ Preconditions.checkArgument(queryContext.getSubQueryContext().getAggregationFunctions() != null,
+ "The subquery of a gapfill query should be an aggregation query.");
+ Preconditions.checkArgument(queryContext.getSubQueryContext().getSubQueryContext() == null,
+ "Three levels of query nesting are not supported when the outer query is gapfill.");
+ gapfillType = GapfillType.AGGREGATE_GAP_FILL;
+ } else if (isGapfill(queryContext.getSubQueryContext())) {
+ if (queryContext.getAggregationFunctions() == null) {
+ gapfillType = GapfillType.GAP_FILL_SELECT;
+ } else if (queryContext.getSubQueryContext().getSubQueryContext() == null) {
+ gapfillType = GapfillType.GAP_FILL_AGGREGATE;
+ } else {
+ Preconditions
+ .checkArgument(queryContext.getSubQueryContext().getSubQueryContext().getAggregationFunctions() != null,
+ "Select cannot happen before gapfill.");
+ gapfillType = GapfillType.AGGREGATE_GAP_FILL_AGGREGATE;
+ }
+ }
+
+ queryContext.setGapfillType(gapfillType);
+ if (gapfillType == null) {
+ return;
+ }
+
+ ExpressionContext gapFillSelection = GapfillUtils.getGapfillExpressionContext(queryContext);
+
+ Preconditions.checkArgument(gapFillSelection != null && gapFillSelection.getFunction() != null,
+ "The gapfill expression should be a function.");
+ List<ExpressionContext> args = gapFillSelection.getFunction().getArguments();
+ Preconditions.checkArgument(args.size() > 5, "PreAggregateGapFill does not have the correct number of arguments.");
+ Preconditions.checkArgument(args.get(1).getLiteral() != null,
+ "The second argument of PreAggregateGapFill should be the time formatter.");
+ Preconditions.checkArgument(args.get(2).getLiteral() != null,
+ "The third argument of PreAggregateGapFill should be the start time.");
+ Preconditions.checkArgument(args.get(3).getLiteral() != null,
+ "The fourth argument of PreAggregateGapFill should be the end time.");
+ Preconditions.checkArgument(args.get(4).getLiteral() != null,
+ "The fifth argument of PreAggregateGapFill should be the time bucket size.");
+
+ ExpressionContext timeseriesOn = GapfillUtils.getTimeSeriesOnExpressionContext(gapFillSelection);
+ Preconditions.checkArgument(timeseriesOn != null, "The TimeSeriesOn expression should be specified.");
+
+ if (queryContext.getAggregationFunctions() == null) {
+ return;
+ }
+
+ List<ExpressionContext> groupbyExpressions = queryContext.getGroupByExpressions();
+ Preconditions.checkArgument(groupbyExpressions != null, "No GroupBy Clause.");
+ List<ExpressionContext> innerSelections = queryContext.getSubQueryContext().getSelectExpressions();
+ String timeBucketCol = null;
+ List<String> strAlias = queryContext.getSubQueryContext().getAliasList();
+ for (int i = 0; i < innerSelections.size(); i++) {
+ ExpressionContext innerSelection = innerSelections.get(i);
+ if (GapfillUtils.isGapfill(innerSelection)) {
+ if (strAlias.get(i) != null) {
+ timeBucketCol = strAlias.get(i);
+ } else {
+ timeBucketCol = innerSelection.getFunction().getArguments().get(0).toString();
+ }
+ break;
+ }
+ }
+
+ Preconditions.checkArgument(timeBucketCol != null, "No Group By time bucket.");
+
+ boolean findTimeBucket = false;
+ for (ExpressionContext groupbyExp : groupbyExpressions) {
+ if (timeBucketCol.equals(groupbyExp.toString())) {
+ findTimeBucket = true;
+ break;
+ }
+ }
+
+ Preconditions.checkArgument(findTimeBucket, "No Group By time bucket.");
+ }
+
+ private static ExpressionContext findGapfillExpressionContext(QueryContext queryContext) {
+ for (ExpressionContext expressionContext : queryContext.getSelectExpressions()) {
+ if (isGapfill(expressionContext)) {
+ return expressionContext;
+ }
+ }
+ return null;
+ }
+
+ public static ExpressionContext getGapfillExpressionContext(QueryContext queryContext) {
+ GapfillType gapfillType = queryContext.getGapfillType();
+ if (gapfillType == GapfillType.AGGREGATE_GAP_FILL || gapfillType == GapfillType.GAP_FILL) {
+ return findGapfillExpressionContext(queryContext);
+ } else if (gapfillType == GapfillType.GAP_FILL_AGGREGATE || gapfillType == GapfillType.AGGREGATE_GAP_FILL_AGGREGATE
+ || gapfillType == GapfillType.GAP_FILL_SELECT) {
+ return findGapfillExpressionContext(queryContext.getSubQueryContext());
+ } else {
+ return null;
+ }
+ }
+
+ public static int findTimeBucketColumnIndex(QueryContext queryContext) {
+ GapfillType gapfillType = queryContext.getGapfillType();
+ if (gapfillType == GapfillType.GAP_FILL_AGGREGATE
+ || gapfillType == GapfillType.GAP_FILL_SELECT
+ || gapfillType == GapfillType.AGGREGATE_GAP_FILL_AGGREGATE) {
+ queryContext = queryContext.getSubQueryContext();
+ }
+ List<ExpressionContext> expressionContexts = queryContext.getSelectExpressions();
+ for (int i = 0; i < expressionContexts.size(); i++) {
+ if (isGapfill(expressionContexts.get(i))) {
+ return i;
+ }
+ }
+ return -1;
+ }
+
+ public static ExpressionContext getTimeSeriesOnExpressionContext(ExpressionContext gapFillSelection) {
+ List<ExpressionContext> args = gapFillSelection.getFunction().getArguments();
+ for (int i = STARTING_INDEX_OF_OPTIONAL_ARGS_FOR_PRE_AGGREGATE_GAP_FILL; i < args.size(); i++) {
+ if (GapfillUtils.isTimeSeriesOn(args.get(i))) {
+ return args.get(i);
+ }
+ }
+ return null;
+ }
+
+ public static Map<String, ExpressionContext> getFillExpressions(ExpressionContext gapFillSelection) {
+ Map<String, ExpressionContext> fillExpressions = new HashMap<>();
+ List<ExpressionContext> args = gapFillSelection.getFunction().getArguments();
+ for (int i = STARTING_INDEX_OF_OPTIONAL_ARGS_FOR_PRE_AGGREGATE_GAP_FILL; i < args.size(); i++) {
+ if (GapfillUtils.isFill(args.get(i))) {
+ ExpressionContext fillExpression = args.get(i);
+ fillExpressions.put(fillExpression.getFunction().getArguments().get(0).getIdentifier(), fillExpression);
+ }
+ }
+ return fillExpressions;
+ }
+
+ public static String getTableName(PinotQuery pinotQuery) {
+ while (pinotQuery.getDataSource().getSubquery() != null) {
+ pinotQuery = pinotQuery.getDataSource().getSubquery();
+ }
+ return pinotQuery.getDataSource().getTableName();
+ }
+
+ public static BrokerRequest stripGapfill(BrokerRequest brokerRequest) {
+ if (brokerRequest.getPinotQuery().getDataSource() == null) {
+ return brokerRequest;
+ }
+ QueryContext queryContext = BrokerRequestToQueryContextConverter.convert(brokerRequest);
+ GapfillUtils.GapfillType gapfillType = queryContext.getGapfillType();
+ if (gapfillType == null) {
+ return brokerRequest;
+ }
+ switch (gapfillType) {
+ // one sql query with gapfill only
+ case GAP_FILL:
+ return stripGapfill(brokerRequest.getPinotQuery());
+ // gapfill as subquery, the outer query may have the filter
+ case GAP_FILL_SELECT:
+ // gapfill as subquery, the outer query has the aggregation
+ case GAP_FILL_AGGREGATE:
+ // aggregation as subquery, the outer query is gapfill
+ case AGGREGATE_GAP_FILL:
+ return stripGapfill(brokerRequest.getPinotQuery().getDataSource().getSubquery());
+ // aggregation as the second-level subquery, gapfill as the first-level subquery, a different aggregation as the outer query
+ case AGGREGATE_GAP_FILL_AGGREGATE:
+ return stripGapfill(brokerRequest.getPinotQuery().getDataSource().getSubquery().getDataSource().getSubquery());
+ default:
+ return brokerRequest;
+ }
+ }
+
+ private static BrokerRequest stripGapfill(PinotQuery pinotQuery) {
+ PinotQuery copy = new PinotQuery(pinotQuery);
+ BrokerRequest brokerRequest = new BrokerRequest();
+ brokerRequest.setPinotQuery(copy);
+ // Set table name in broker request because it is used for access control, query routing etc.
+ DataSource dataSource = copy.getDataSource();
+ if (dataSource != null) {
+ QuerySource querySource = new QuerySource();
+ querySource.setTableName(dataSource.getTableName());
+ brokerRequest.setQuerySource(querySource);
+ }
+ List<Expression> selectList = copy.getSelectList();
+ for (int i = 0; i < selectList.size(); i++) {
+ Expression select = selectList.get(i);
+ if (select.getType() != ExpressionType.FUNCTION) {
+ continue;
+ }
+ if (GAP_FILL.equalsIgnoreCase(select.getFunctionCall().getOperator())) {
Review comment:
Fixed
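The nesting-based classification that `setGapfillType` performs can be summarized in a small decision function: which of the outer query and subquery contains the gapfill call, and whether the outer query aggregates, determines the `GapfillType`. The sketch below uses booleans in place of the `QueryContext` inspection and omits the three-level `AGGREGATE_GAP_FILL_AGGREGATE` case for brevity; it is a simplified illustration of the diff's logic, not the PR's code.

```java
public class GapfillTypeSketch {
    public enum GapfillType {
        GAP_FILL, AGGREGATE_GAP_FILL, GAP_FILL_SELECT,
        GAP_FILL_AGGREGATE, AGGREGATE_GAP_FILL_AGGREGATE
    }

    // Classify based on where the gapfill call sits; returns null for a
    // query with no gapfill at all.
    public static GapfillType classify(boolean outerIsGapfill, boolean outerHasAgg,
                                       boolean hasSubquery, boolean subIsGapfill) {
        if (!hasSubquery) {
            return outerIsGapfill ? GapfillType.GAP_FILL : null;
        }
        if (outerIsGapfill) {
            // The subquery is required to be an aggregation query.
            return GapfillType.AGGREGATE_GAP_FILL;
        }
        if (subIsGapfill) {
            return outerHasAgg ? GapfillType.GAP_FILL_AGGREGATE : GapfillType.GAP_FILL_SELECT;
        }
        return null;
    }

    public static void main(String[] args) {
        System.out.println(classify(true, false, false, false)); // GAP_FILL
        System.out.println(classify(false, true, true, true));   // GAP_FILL_AGGREGATE
    }
}
```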
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/util/GapfillUtils.java
##########
@@ -119,4 +137,265 @@ static public Serializable getDefaultValue(DataSchema.ColumnDataType dataType) {
private static String canonicalizeFunctionName(String functionName) {
return StringUtils.remove(functionName, '_').toLowerCase();
}
+
+ public static boolean isGapfill(ExpressionContext expressionContext) {
+ if (expressionContext.getType() != ExpressionContext.Type.FUNCTION) {
+ return false;
+ }
+
+ return GAP_FILL.equals(canonicalizeFunctionName(expressionContext.getFunction().getFunctionName()));
+ }
+
+ private static boolean isGapfill(QueryContext queryContext) {
+ for (ExpressionContext expressionContext : queryContext.getSelectExpressions()) {
+ if (isGapfill(expressionContext)) {
+ return true;
+ }
+ }
+ return false;
+ }
+
+ /**
+ * Determine and set the gapfill type for the given query context, and
+ * validate the gapfill request.
+ * @param queryContext the query context to classify and validate
+ */
+ public static void setGapfillType(QueryContext queryContext) {
+ GapfillType gapfillType = null;
+ if (queryContext.getSubQueryContext() == null) {
+ if (isGapfill(queryContext)) {
+ Preconditions.checkArgument(queryContext.getAggregationFunctions() == null,
+ "Aggregation and Gapfill can not be in the same sql statement.");
+ gapfillType = GapfillType.GAP_FILL;
+ }
+ } else if (isGapfill(queryContext)) {
+ Preconditions.checkArgument(queryContext.getSubQueryContext().getAggregationFunctions() != null,
+ "The subquery of a gapfill query should be an aggregation query.");
+ Preconditions.checkArgument(queryContext.getSubQueryContext().getSubQueryContext() == null,
+ "Three levels of query nesting are not supported when the outer query is gapfill.");
+ gapfillType = GapfillType.AGGREGATE_GAP_FILL;
+ } else if (isGapfill(queryContext.getSubQueryContext())) {
+ if (queryContext.getAggregationFunctions() == null) {
+ gapfillType = GapfillType.GAP_FILL_SELECT;
+ } else if (queryContext.getSubQueryContext().getSubQueryContext() == null) {
+ gapfillType = GapfillType.GAP_FILL_AGGREGATE;
+ } else {
+ Preconditions
+ .checkArgument(queryContext.getSubQueryContext().getSubQueryContext().getAggregationFunctions() != null,
+ "Select cannot happen before gapfill.");
+ gapfillType = GapfillType.AGGREGATE_GAP_FILL_AGGREGATE;
+ }
+ }
+
+ queryContext.setGapfillType(gapfillType);
+ if (gapfillType == null) {
+ return;
+ }
+
+ ExpressionContext gapFillSelection = GapfillUtils.getGapfillExpressionContext(queryContext);
+
+ Preconditions.checkArgument(gapFillSelection != null && gapFillSelection.getFunction() != null,
+ "The gapfill expression should be a function.");
+ List<ExpressionContext> args = gapFillSelection.getFunction().getArguments();
+ Preconditions.checkArgument(args.size() > 5, "PreAggregateGapFill does not have the correct number of arguments.");
+ Preconditions.checkArgument(args.get(1).getLiteral() != null,
+ "The second argument of PreAggregateGapFill should be the time formatter.");
+ Preconditions.checkArgument(args.get(2).getLiteral() != null,
+ "The third argument of PreAggregateGapFill should be the start time.");
+ Preconditions.checkArgument(args.get(3).getLiteral() != null,
+ "The fourth argument of PreAggregateGapFill should be the end time.");
+ Preconditions.checkArgument(args.get(4).getLiteral() != null,
+ "The fifth argument of PreAggregateGapFill should be the time bucket size.");
+
+ ExpressionContext timeseriesOn = GapfillUtils.getTimeSeriesOnExpressionContext(gapFillSelection);
+ Preconditions.checkArgument(timeseriesOn != null, "The TimeSeriesOn expression should be specified.");
+
+ if (queryContext.getAggregationFunctions() == null) {
+ return;
+ }
+
+ List<ExpressionContext> groupbyExpressions = queryContext.getGroupByExpressions();
+ Preconditions.checkArgument(groupbyExpressions != null, "No GroupBy Clause.");
+ List<ExpressionContext> innerSelections = queryContext.getSubQueryContext().getSelectExpressions();
+ String timeBucketCol = null;
+ List<String> strAlias = queryContext.getSubQueryContext().getAliasList();
+ for (int i = 0; i < innerSelections.size(); i++) {
+ ExpressionContext innerSelection = innerSelections.get(i);
+ if (GapfillUtils.isGapfill(innerSelection)) {
+ if (strAlias.get(i) != null) {
+ timeBucketCol = strAlias.get(i);
+ } else {
+ timeBucketCol = innerSelection.getFunction().getArguments().get(0).toString();
+ }
+ break;
+ }
+ }
+
+ Preconditions.checkArgument(timeBucketCol != null, "No Group By timebucket.");
+
+ boolean findTimeBucket = false;
+ for (ExpressionContext groupbyExp : groupbyExpressions) {
+ if (timeBucketCol.equals(groupbyExp.toString())) {
+ findTimeBucket = true;
+ break;
+ }
+ }
+
+ Preconditions.checkArgument(findTimeBucket, "No Group By timebucket.");
+ }
+
+ private static ExpressionContext findGapfillExpressionContext(QueryContext queryContext) {
+ for (ExpressionContext expressionContext : queryContext.getSelectExpressions()) {
+ if (isGapfill(expressionContext)) {
+ return expressionContext;
+ }
+ }
+ return null;
+ }
+
+ public static ExpressionContext getGapfillExpressionContext(QueryContext queryContext) {
+ GapfillType gapfillType = queryContext.getGapfillType();
+ if (gapfillType == GapfillType.AGGREGATE_GAP_FILL || gapfillType == GapfillType.GAP_FILL) {
+ return findGapfillExpressionContext(queryContext);
+ } else if (gapfillType == GapfillType.GAP_FILL_AGGREGATE || gapfillType == GapfillType.AGGREGATE_GAP_FILL_AGGREGATE
+ || gapfillType == GapfillType.GAP_FILL_SELECT) {
+ return findGapfillExpressionContext(queryContext.getSubQueryContext());
+ } else {
+ return null;
+ }
+ }
+
+ public static int findTimeBucketColumnIndex(QueryContext queryContext) {
+ GapfillType gapfillType = queryContext.getGapfillType();
+ if (gapfillType == GapfillType.GAP_FILL_AGGREGATE
+ || gapfillType == GapfillType.GAP_FILL_SELECT
+ || gapfillType == GapfillType.AGGREGATE_GAP_FILL_AGGREGATE) {
+ queryContext = queryContext.getSubQueryContext();
+ }
+ List<ExpressionContext> expressionContexts = queryContext.getSelectExpressions();
+ for (int i = 0; i < expressionContexts.size(); i++) {
+ if (isGapfill(expressionContexts.get(i))) {
+ return i;
+ }
+ }
+ return -1;
+ }
+
+ public static ExpressionContext getTimeSeriesOnExpressionContext(ExpressionContext gapFillSelection) {
+ List<ExpressionContext> args = gapFillSelection.getFunction().getArguments();
+ for (int i = STARTING_INDEX_OF_OPTIONAL_ARGS_FOR_PRE_AGGREGATE_GAP_FILL; i < args.size(); i++) {
+ if (GapfillUtils.isTimeSeriesOn(args.get(i))) {
+ return args.get(i);
+ }
+ }
+ return null;
+ }
+
+ public static Map<String, ExpressionContext> getFillExpressions(ExpressionContext gapFillSelection) {
+ Map<String, ExpressionContext> fillExpressions = new HashMap<>();
+ List<ExpressionContext> args = gapFillSelection.getFunction().getArguments();
+ for (int i = STARTING_INDEX_OF_OPTIONAL_ARGS_FOR_PRE_AGGREGATE_GAP_FILL; i < args.size(); i++) {
+ if (GapfillUtils.isFill(args.get(i))) {
+ ExpressionContext fillExpression = args.get(i);
+ fillExpressions.put(fillExpression.getFunction().getArguments().get(0).getIdentifier(), fillExpression);
+ }
+ }
+ return fillExpressions;
+ }
+
+ public static String getTableName(PinotQuery pinotQuery) {
+ while (pinotQuery.getDataSource().getSubquery() != null) {
+ pinotQuery = pinotQuery.getDataSource().getSubquery();
+ }
+ return pinotQuery.getDataSource().getTableName();
+ }
+
+ public static BrokerRequest stripGapfill(BrokerRequest brokerRequest) {
+ if (brokerRequest.getPinotQuery().getDataSource() == null) {
+ return brokerRequest;
+ }
+ QueryContext queryContext = BrokerRequestToQueryContextConverter.convert(brokerRequest);
Review comment:
Fixed
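As an aside for readers tracing the GapfillType branches in the diff above: the nesting structure of the query determines the type. The sketch below is illustrative only; `Query` and `GapfillTypeResolver` are hypothetical stand-ins for Pinot's `QueryContext` and `GapfillUtils.setGapfillType`, and the Preconditions-based validation is intentionally omitted.

```java
// Hypothetical stand-ins (Query, GapfillTypeResolver) for Pinot's QueryContext
// and GapfillUtils.setGapfillType; validation is intentionally omitted.
enum GapfillType { GAP_FILL, AGGREGATE_GAP_FILL, GAP_FILL_SELECT, GAP_FILL_AGGREGATE, AGGREGATE_GAP_FILL_AGGREGATE }

class Query {
    final boolean hasGapfill;      // select list contains gapfill(...)
    final boolean hasAggregation;  // query has aggregation functions
    final Query sub;               // nested subquery, or null

    Query(boolean hasGapfill, boolean hasAggregation, Query sub) {
        this.hasGapfill = hasGapfill;
        this.hasAggregation = hasAggregation;
        this.sub = sub;
    }
}

class GapfillTypeResolver {
    // Mirrors the branch order of setGapfillType; null means "no gapfill involved".
    static GapfillType resolve(Query q) {
        if (q.sub == null) {
            return q.hasGapfill ? GapfillType.GAP_FILL : null;
        }
        if (q.hasGapfill) {
            return GapfillType.AGGREGATE_GAP_FILL;          // gapfill over an aggregation subquery
        }
        if (q.sub.hasGapfill) {
            if (!q.hasAggregation) {
                return GapfillType.GAP_FILL_SELECT;         // plain select over a gapfill subquery
            }
            return q.sub.sub == null
                ? GapfillType.GAP_FILL_AGGREGATE            // aggregation over a gapfill subquery
                : GapfillType.AGGREGATE_GAP_FILL_AGGREGATE; // aggregation over gapfill over aggregation
        }
        return null;
    }
}
```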
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/request/context/QueryContext.java
##########
@@ -133,6 +137,8 @@ private QueryContext(String tableName, List<ExpressionContext> selectExpressions
_queryOptions = queryOptions;
_debugOptions = debugOptions;
_brokerRequest = brokerRequest;
+ _gapfillType = null;
Review comment:
Fixed
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/util/GapfillUtils.java
##########
@@ -119,4 +137,265 @@ static public Serializable getDefaultValue(DataSchema.ColumnDataType dataType) {
private static String canonicalizeFunctionName(String functionName) {
return StringUtils.remove(functionName, '_').toLowerCase();
}
+
+ public static boolean isGapfill(ExpressionContext expressionContext) {
+ if (expressionContext.getType() != ExpressionContext.Type.FUNCTION) {
+ return false;
+ }
+
+ return GAP_FILL.equals(canonicalizeFunctionName(expressionContext.getFunction().getFunctionName()));
+ }
+
+ private static boolean isGapfill(QueryContext queryContext) {
+ for (ExpressionContext expressionContext : queryContext.getSelectExpressions()) {
+ if (isGapfill(expressionContext)) {
+ return true;
+ }
+ }
+ return false;
+ }
+
+ /**
+ * Determine the gapfill type for the queryContext and validate the gapfill request.
+ * @param queryContext the query context to inspect and annotate
+ */
+ public static void setGapfillType(QueryContext queryContext) {
+ GapfillType gapfillType = null;
+ if (queryContext.getSubQueryContext() == null) {
+ if (isGapfill(queryContext)) {
+ Preconditions.checkArgument(queryContext.getAggregationFunctions() == null,
+ "Aggregation and Gapfill cannot be in the same sql statement.");
+ gapfillType = GapfillType.GAP_FILL;
+ }
+ } else if (isGapfill(queryContext)) {
+ Preconditions.checkArgument(queryContext.getSubQueryContext().getAggregationFunctions() != null,
+ "Select and Gapfill should be in the same sql statement.");
+ Preconditions.checkArgument(queryContext.getSubQueryContext().getSubQueryContext() == null,
+ "Three levels of nested sql are not supported when the outer query is gapfill.");
+ gapfillType = GapfillType.AGGREGATE_GAP_FILL;
+ } else if (isGapfill(queryContext.getSubQueryContext())) {
+ if (queryContext.getAggregationFunctions() == null) {
+ gapfillType = GapfillType.GAP_FILL_SELECT;
+ } else if (queryContext.getSubQueryContext().getSubQueryContext() == null) {
+ gapfillType = GapfillType.GAP_FILL_AGGREGATE;
+ } else {
+ Preconditions
+ .checkArgument(queryContext.getSubQueryContext().getSubQueryContext().getAggregationFunctions() != null,
+ "Select cannot happen before gapfill.");
+ gapfillType = GapfillType.AGGREGATE_GAP_FILL_AGGREGATE;
+ }
+ }
+
+ queryContext.setGapfillType(gapfillType);
+ if (gapfillType == null) {
+ return;
+ }
+
+ ExpressionContext gapFillSelection = GapfillUtils.getGapfillExpressionContext(queryContext);
+
+ Preconditions.checkArgument(gapFillSelection != null && gapFillSelection.getFunction() != null,
+ "Gapfill Expression should be a function.");
+ List<ExpressionContext> args = gapFillSelection.getFunction().getArguments();
+ Preconditions.checkArgument(args.size() > 5, "Gapfill does not have the correct number of arguments.");
+ Preconditions.checkArgument(args.get(1).getLiteral() != null,
+ "The second argument of Gapfill should be TimeFormatter.");
+ Preconditions.checkArgument(args.get(2).getLiteral() != null,
+ "The third argument of Gapfill should be start time.");
+ Preconditions.checkArgument(args.get(3).getLiteral() != null,
+ "The fourth argument of Gapfill should be end time.");
+ Preconditions.checkArgument(args.get(4).getLiteral() != null,
+ "The fifth argument of Gapfill should be time bucket size.");
+
+ ExpressionContext timeseriesOn = GapfillUtils.getTimeSeriesOnExpressionContext(gapFillSelection);
+ Preconditions.checkArgument(timeseriesOn != null, "The TimeSeriesOn expressions should be specified.");
+
+ if (queryContext.getAggregationFunctions() == null) {
+ return;
+ }
+
+ List<ExpressionContext> groupbyExpressions = queryContext.getGroupByExpressions();
+ Preconditions.checkArgument(groupbyExpressions != null, "No GroupBy Clause.");
+ List<ExpressionContext> innerSelections = queryContext.getSubQueryContext().getSelectExpressions();
+ String timeBucketCol = null;
+ List<String> strAlias = queryContext.getSubQueryContext().getAliasList();
+ for (int i = 0; i < innerSelections.size(); i++) {
+ ExpressionContext innerSelection = innerSelections.get(i);
+ if (GapfillUtils.isGapfill(innerSelection)) {
+ if (strAlias.get(i) != null) {
+ timeBucketCol = strAlias.get(i);
+ } else {
+ timeBucketCol = innerSelection.getFunction().getArguments().get(0).toString();
+ }
+ break;
+ }
+ }
+
+ Preconditions.checkArgument(timeBucketCol != null, "No Group By timebucket.");
+
+ boolean findTimeBucket = false;
+ for (ExpressionContext groupbyExp : groupbyExpressions) {
+ if (timeBucketCol.equals(groupbyExp.toString())) {
+ findTimeBucket = true;
+ break;
+ }
+ }
+
+ Preconditions.checkArgument(findTimeBucket, "No Group By timebucket.");
+ }
+
+ private static ExpressionContext findGapfillExpressionContext(QueryContext queryContext) {
+ for (ExpressionContext expressionContext : queryContext.getSelectExpressions()) {
+ if (isGapfill(expressionContext)) {
+ return expressionContext;
+ }
+ }
+ return null;
+ }
+
+ public static ExpressionContext getGapfillExpressionContext(QueryContext queryContext) {
+ GapfillType gapfillType = queryContext.getGapfillType();
+ if (gapfillType == GapfillType.AGGREGATE_GAP_FILL || gapfillType == GapfillType.GAP_FILL) {
+ return findGapfillExpressionContext(queryContext);
+ } else if (gapfillType == GapfillType.GAP_FILL_AGGREGATE || gapfillType == GapfillType.AGGREGATE_GAP_FILL_AGGREGATE
+ || gapfillType == GapfillType.GAP_FILL_SELECT) {
+ return findGapfillExpressionContext(queryContext.getSubQueryContext());
+ } else {
+ return null;
+ }
+ }
+
+ public static int findTimeBucketColumnIndex(QueryContext queryContext) {
+ GapfillType gapfillType = queryContext.getGapfillType();
+ if (gapfillType == GapfillType.GAP_FILL_AGGREGATE
+ || gapfillType == GapfillType.GAP_FILL_SELECT
+ || gapfillType == GapfillType.AGGREGATE_GAP_FILL_AGGREGATE) {
+ queryContext = queryContext.getSubQueryContext();
+ }
+ List<ExpressionContext> expressionContexts = queryContext.getSelectExpressions();
+ for (int i = 0; i < expressionContexts.size(); i++) {
+ if (isGapfill(expressionContexts.get(i))) {
+ return i;
+ }
+ }
+ return -1;
+ }
+
+ public static ExpressionContext getTimeSeriesOnExpressionContext(ExpressionContext gapFillSelection) {
+ List<ExpressionContext> args = gapFillSelection.getFunction().getArguments();
+ for (int i = STARTING_INDEX_OF_OPTIONAL_ARGS_FOR_PRE_AGGREGATE_GAP_FILL; i < args.size(); i++) {
+ if (GapfillUtils.isTimeSeriesOn(args.get(i))) {
+ return args.get(i);
+ }
+ }
+ return null;
+ }
+
+ public static Map<String, ExpressionContext> getFillExpressions(ExpressionContext gapFillSelection) {
+ Map<String, ExpressionContext> fillExpressions = new HashMap<>();
+ List<ExpressionContext> args = gapFillSelection.getFunction().getArguments();
+ for (int i = STARTING_INDEX_OF_OPTIONAL_ARGS_FOR_PRE_AGGREGATE_GAP_FILL; i < args.size(); i++) {
+ if (GapfillUtils.isFill(args.get(i))) {
+ ExpressionContext fillExpression = args.get(i);
+ fillExpressions.put(fillExpression.getFunction().getArguments().get(0).getIdentifier(), fillExpression);
+ }
+ }
+ return fillExpressions;
+ }
+
+ public static String getTableName(PinotQuery pinotQuery) {
+ while (pinotQuery.getDataSource().getSubquery() != null) {
+ pinotQuery = pinotQuery.getDataSource().getSubquery();
+ }
+ return pinotQuery.getDataSource().getTableName();
+ }
+
+ public static BrokerRequest stripGapfill(BrokerRequest brokerRequest) {
+ if (brokerRequest.getPinotQuery().getDataSource() == null) {
+ return brokerRequest;
+ }
+ QueryContext queryContext = BrokerRequestToQueryContextConverter.convert(brokerRequest);
+ GapfillUtils.GapfillType gapfillType = queryContext.getGapfillType();
+ if (gapfillType == null) {
+ return brokerRequest;
+ }
+ switch (gapfillType) {
+ // one sql query with gapfill only
+ case GAP_FILL:
+ return stripGapfill(brokerRequest.getPinotQuery());
+ // gapfill as subquery, the outer query may have the filter
+ case GAP_FILL_SELECT:
+ // gapfill as subquery, the outer query has the aggregation
+ case GAP_FILL_AGGREGATE:
+ // aggregation as subquery, the outer query is gapfill
+ case AGGREGATE_GAP_FILL:
+ return stripGapfill(brokerRequest.getPinotQuery().getDataSource().getSubquery());
+ // aggregation as the second-level subquery, gapfill as the first-level subquery, a different aggregation as the outer query
+ case AGGREGATE_GAP_FILL_AGGREGATE:
+ return stripGapfill(brokerRequest.getPinotQuery().getDataSource().getSubquery().getDataSource().getSubquery());
+ default:
+ return brokerRequest;
+ }
+ }
+
+ private static BrokerRequest stripGapfill(PinotQuery pinotQuery) {
+ PinotQuery copy = new PinotQuery(pinotQuery);
+ BrokerRequest brokerRequest = new BrokerRequest();
+ brokerRequest.setPinotQuery(copy);
+ // Set table name in broker request because it is used for access control, query routing etc.
+ DataSource dataSource = copy.getDataSource();
+ if (dataSource != null) {
+ QuerySource querySource = new QuerySource();
+ querySource.setTableName(dataSource.getTableName());
+ brokerRequest.setQuerySource(querySource);
+ }
+ List<Expression> selectList = copy.getSelectList();
+ for (int i = 0; i < selectList.size(); i++) {
+ Expression select = selectList.get(i);
+ if (select.getType() != ExpressionType.FUNCTION) {
+ continue;
+ }
+ if (GAP_FILL.equalsIgnoreCase(select.getFunctionCall().getOperator())) {
+ selectList.set(i, select.getFunctionCall().getOperands().get(0));
+ break;
+ }
+ if (AS.equalsIgnoreCase(select.getFunctionCall().getOperator())
+ && select.getFunctionCall().getOperands().get(0).getType() == ExpressionType.FUNCTION
+ && GAP_FILL.equalsIgnoreCase(select.getFunctionCall().getOperands().get(0).getFunctionCall().getOperator())) {
+ select.getFunctionCall().getOperands().set(0,
+ select.getFunctionCall().getOperands().get(0).getFunctionCall().getOperands().get(0));
+ break;
+ }
+ }
+
+ for (Expression orderBy : copy.getOrderByList()) {
+ if (orderBy.getType() != ExpressionType.FUNCTION) {
+ continue;
+ }
+ if (orderBy.getFunctionCall().getOperands().get(0).getType() == ExpressionType.FUNCTION
+ && GAP_FILL.equalsIgnoreCase(
+ orderBy.getFunctionCall().getOperands().get(0).getFunctionCall().getOperator())) {
+ orderBy.getFunctionCall().getOperands().set(0,
+ orderBy.getFunctionCall().getOperands().get(0).getFunctionCall().getOperands().get(0));
+ break;
+ }
+ }
+ return brokerRequest;
+ }
+
+ public enum GapfillType {
Review comment:
Fixed.
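To make the select-list rewrite in `stripGapfill(PinotQuery)` above concrete, here is a sketch over a made-up mini expression tree. `Expr` and `StripGapfillSketch` are hypothetical names, not Pinot classes; the real code operates on Thrift `Expression` objects and also handles the `AS` alias case.

```java
import java.util.*;

// Hypothetical mini expression tree illustrating the select-list rewrite that
// stripGapfill performs: gapfill(col, ...) is replaced by its first operand so
// that servers execute a plain selection.
class Expr {
    final String op;             // function name, or null for an identifier
    final List<Expr> operands;
    final String id;             // identifier name, or null for a function

    Expr(String id) { this.op = null; this.operands = List.of(); this.id = id; }
    Expr(String op, List<Expr> operands) { this.op = op; this.operands = operands; this.id = null; }
}

class StripGapfillSketch {
    static void strip(List<Expr> selectList) {
        for (int i = 0; i < selectList.size(); i++) {
            Expr e = selectList.get(i);
            if (e.op != null && "gapfill".equalsIgnoreCase(e.op)) {
                selectList.set(i, e.operands.get(0));  // keep only the wrapped expression
                break;                                 // at most one gapfill in the select list
            }
        }
    }
}
```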
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/reduce/GapFillProcessor.java
##########
@@ -0,0 +1,455 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.core.query.reduce;
+
+import com.google.common.base.Preconditions;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import org.apache.pinot.common.request.context.ExpressionContext;
+import org.apache.pinot.common.request.context.FunctionContext;
+import org.apache.pinot.common.response.broker.BrokerResponseNative;
+import org.apache.pinot.common.response.broker.ResultTable;
+import org.apache.pinot.common.utils.DataSchema;
+import org.apache.pinot.common.utils.DataSchema.ColumnDataType;
+import org.apache.pinot.core.common.BlockValSet;
+import org.apache.pinot.core.data.table.Key;
+import org.apache.pinot.core.query.aggregation.function.AggregationFunction;
+import org.apache.pinot.core.query.aggregation.function.AggregationFunctionFactory;
+import org.apache.pinot.core.query.aggregation.groupby.GroupByResultHolder;
+import org.apache.pinot.core.query.request.context.QueryContext;
+import org.apache.pinot.core.util.GapfillUtils;
+import org.apache.pinot.spi.data.DateTimeFormatSpec;
+import org.apache.pinot.spi.data.DateTimeGranularitySpec;
+
+
+/**
+ * Helper class to reduce and set gap fill results into the BrokerResponseNative
+ */
+@SuppressWarnings({"rawtypes", "unchecked"})
+public class GapFillProcessor {
+ private final QueryContext _queryContext;
+
+ private final int _limitForAggregatedResult;
+ private final DateTimeGranularitySpec _dateTimeGranularity;
+ private final DateTimeFormatSpec _dateTimeFormatter;
+ private final long _startMs;
+ private final long _endMs;
+ private final long _timeBucketSize;
+ private final int _numOfTimeBuckets;
+ private final List<Integer> _groupByKeyIndexes;
+ private final Set<Key> _groupByKeys;
+ private final Map<Key, Object[]> _previousByGroupKey;
+ private final Map<String, ExpressionContext> _fillExpressions;
+ private final List<ExpressionContext> _timeSeries;
+ private final GapfillUtils.GapfillType _gapfillType;
+ private int _limitForGapfilledResult;
+ private boolean[] _isGroupBySelections;
+ private final int _timeBucketColumnIndex;
+ private int[] _sourceColumnIndexForResultSchema = null;
+
+ GapFillProcessor(QueryContext queryContext) {
+ _queryContext = queryContext;
+ _gapfillType = queryContext.getGapfillType();
+ _limitForAggregatedResult = queryContext.getLimit();
+ if (_gapfillType == GapfillUtils.GapfillType.AGGREGATE_GAP_FILL
+ || _gapfillType == GapfillUtils.GapfillType.GAP_FILL) {
+ _limitForGapfilledResult = queryContext.getLimit();
+ } else {
+ _limitForGapfilledResult = queryContext.getSubQueryContext().getLimit();
+ }
+
+ ExpressionContext gapFillSelection = GapfillUtils.getGapfillExpressionContext(queryContext);
+ _timeBucketColumnIndex = GapfillUtils.findTimeBucketColumnIndex(queryContext);
+
+ List<ExpressionContext> args = gapFillSelection.getFunction().getArguments();
+
+ _dateTimeFormatter = new DateTimeFormatSpec(args.get(1).getLiteral());
+ _dateTimeGranularity = new DateTimeGranularitySpec(args.get(4).getLiteral());
+ String start = args.get(2).getLiteral();
+ _startMs = truncate(_dateTimeFormatter.fromFormatToMillis(start));
+ String end = args.get(3).getLiteral();
+ _endMs = truncate(_dateTimeFormatter.fromFormatToMillis(end));
+ _timeBucketSize = _dateTimeGranularity.granularityToMillis();
+ _numOfTimeBuckets = (int) ((_endMs - _startMs) / _timeBucketSize);
+
+ _fillExpressions = GapfillUtils.getFillExpressions(gapFillSelection);
+
+ _previousByGroupKey = new HashMap<>();
+ _groupByKeyIndexes = new ArrayList<>();
+ _groupByKeys = new HashSet<>();
+
+ ExpressionContext timeseriesOn = GapfillUtils.getTimeSeriesOnExpressionContext(gapFillSelection);
+ _timeSeries = timeseriesOn.getFunction().getArguments();
+ }
+
+ private int findBucketIndex(long time) {
+ return (int) ((time - _startMs) / _timeBucketSize);
+ }
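The bucket arithmetic in the constructor and `findBucketIndex` above can be sketched stand-alone. `TimeBuckets` is a made-up name; the end timestamp is treated as exclusive, matching `_numOfTimeBuckets = (endMs - startMs) / timeBucketSize`, and the timestamps in the usage below are chosen only for illustration.

```java
// Stand-alone sketch of the time-bucket arithmetic used by GapFillProcessor;
// TimeBuckets is a hypothetical name, not a Pinot class.
class TimeBuckets {
    final long startMs;
    final long bucketSizeMs;
    final int numBuckets;

    TimeBuckets(long startMs, long endMs, long bucketSizeMs) {
        this.startMs = startMs;
        this.bucketSizeMs = bucketSizeMs;
        // end is exclusive: a one-hour window at 15-minute granularity has 4 buckets
        this.numBuckets = (int) ((endMs - startMs) / bucketSizeMs);
    }

    int findBucketIndex(long timeMs) {
        return (int) ((timeMs - startMs) / bucketSizeMs);
    }
}
```

For example, a one-hour window bucketed at 15 minutes yields 4 buckets, and a timestamp just before the end of the window falls into bucket 3.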
+
+ private void replaceColumnNameWithAlias(DataSchema dataSchema) {
+ QueryContext queryContext;
+ if (_gapfillType == GapfillUtils.GapfillType.AGGREGATE_GAP_FILL_AGGREGATE) {
+ queryContext = _queryContext.getSubQueryContext().getSubQueryContext();
+ } else if (_gapfillType == GapfillUtils.GapfillType.GAP_FILL) {
+ queryContext = _queryContext;
+ } else {
+ queryContext = _queryContext.getSubQueryContext();
+ }
+ List<String> aliasList = queryContext.getAliasList();
+ Map<String, String> columnNameToAliasMap = new HashMap<>();
+ for (int i = 0; i < aliasList.size(); i++) {
+ if (aliasList.get(i) != null) {
+ ExpressionContext selection = queryContext.getSelectExpressions().get(i);
+ if (GapfillUtils.isGapfill(selection)) {
+ selection = selection.getFunction().getArguments().get(0);
+ }
+ columnNameToAliasMap.put(selection.toString(), aliasList.get(i));
+ }
+ }
+ for (int i = 0; i < dataSchema.getColumnNames().length; i++) {
+ if (columnNameToAliasMap.containsKey(dataSchema.getColumnNames()[i])) {
+ dataSchema.getColumnNames()[i] = columnNameToAliasMap.get(dataSchema.getColumnNames()[i]);
+ }
+ }
+ }
+
+ /**
+ * Three things happen here:
+ * 1. Sort the result sets from all pinot servers based on timestamp
+ * 2. Gapfill the data for missing entities per time bucket
+ * 3. Aggregate the dataset per time bucket.
+ */
+ public void process(BrokerResponseNative brokerResponseNative) {
+ DataSchema dataSchema = brokerResponseNative.getResultTable().getDataSchema();
+ DataSchema resultTableSchema = getResultTableDataSchema(dataSchema);
+ if (brokerResponseNative.getResultTable().getRows().isEmpty()) {
+ brokerResponseNative.setResultTable(new ResultTable(resultTableSchema, Collections.emptyList()));
+ return;
+ }
+
+ String[] columns = dataSchema.getColumnNames();
+
+ Map<String, Integer> indexes = new HashMap<>();
+ for (int i = 0; i < columns.length; i++) {
+ indexes.put(columns[i], i);
+ }
+
+ _isGroupBySelections = new boolean[dataSchema.getColumnDataTypes().length];
+
+ // The first argument of timeSeries is the time column; the remaining arguments define the entity.
+ for (ExpressionContext entityColumn : _timeSeries) {
+ int index = indexes.get(entityColumn.getIdentifier());
+ _isGroupBySelections[index] = true;
+ }
+
+ for (int i = 0; i < _isGroupBySelections.length; i++) {
+ if (_isGroupBySelections[i]) {
+ _groupByKeyIndexes.add(i);
+ }
+ }
+
+ List<Object[]>[] timeBucketedRawRows = putRawRowsIntoTimeBucket(brokerResponseNative.getResultTable().getRows());
+
+ List<Object[]> resultRows;
+ replaceColumnNameWithAlias(dataSchema);
+
+ if (_queryContext.getAggregationFunctions() == null) {
+
+ Map<String, Integer> sourceColumnsIndexes = new HashMap<>();
+ for (int i = 0; i < dataSchema.getColumnNames().length; i++) {
+ sourceColumnsIndexes.put(dataSchema.getColumnName(i), i);
+ }
+ _sourceColumnIndexForResultSchema = new int[resultTableSchema.getColumnNames().length];
+ for (int i = 0; i < _sourceColumnIndexForResultSchema.length; i++) {
+ _sourceColumnIndexForResultSchema[i] = sourceColumnsIndexes.get(resultTableSchema.getColumnName(i));
+ }
+ }
+
+ if (_gapfillType == GapfillUtils.GapfillType.GAP_FILL_AGGREGATE || _gapfillType == GapfillUtils.GapfillType.GAP_FILL
+ || _gapfillType == GapfillUtils.GapfillType.GAP_FILL_SELECT) {
+ List<Object[]> gapfilledRows = gapFillAndAggregate(timeBucketedRawRows, resultTableSchema, dataSchema);
+ if (_gapfillType == GapfillUtils.GapfillType.GAP_FILL_SELECT) {
+ resultRows = new ArrayList<>(gapfilledRows.size());
+ resultRows.addAll(gapfilledRows);
+ } else {
+ resultRows = gapfilledRows;
+ }
+ } else {
+ resultRows = gapFillAndAggregate(timeBucketedRawRows, resultTableSchema, dataSchema);
+ }
Review comment:
Good catch, Fixed.
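Step 2 of the `process()` javadoc above (gapfilling missing entities per time bucket) can be sketched with one common fill strategy: carry the last observed value forward, as `_previousByGroupKey` suggests. The names below are made up, not Pinot's; the real behavior also depends on the FILL expressions and group-by key extraction.

```java
import java.util.*;

// Minimal carry-forward gapfill sketch (hypothetical class, not Pinot code):
// per time bucket, emit a value for every known entity, reusing the previous
// observed value when the entity is missing in that bucket.
class CarryForwardGapfill {
    // buckets: one map per time bucket, entity -> observed value (may be missing).
    // Returns one gapfilled map per bucket.
    static List<Map<String, Double>> gapfill(List<Map<String, Double>> buckets, Set<String> entities) {
        Map<String, Double> previous = new HashMap<>();
        List<Map<String, Double>> filled = new ArrayList<>();
        for (Map<String, Double> bucket : buckets) {
            Map<String, Double> row = new HashMap<>();
            for (String entity : entities) {
                Double v = bucket.get(entity);
                if (v == null) {
                    v = previous.get(entity);  // gap: reuse the previous value, if any
                }
                if (v != null) {
                    row.put(entity, v);
                    previous.put(entity, v);
                }
            }
            filled.add(row);
        }
        return filled;
    }
}
```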
##########
File path: pinot-controller/src/main/java/org/apache/pinot/controller/api/resources/PinotQueryResource.java
##########
@@ -162,8 +163,7 @@ public String getQueryResponse(String query, String traceEnabled, String queryOp
String inputTableName;
switch (querySyntax) {
case CommonConstants.Broker.Request.SQL:
- inputTableName =
- SQL_QUERY_COMPILER.compileToBrokerRequest(query).getPinotQuery().getDataSource().getTableName();
+ inputTableName = GapfillUtils.getTableName(SQL_QUERY_COMPILER.compileToBrokerRequest(query).getPinotQuery());
Review comment:
Fixed.
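The `GapfillUtils.getTableName` call introduced in the diff above descends through nested data sources because only the innermost data source names a physical table, which access control and query routing need. A self-contained sketch, with `DataSourceSketch` and `PinotQuerySketch` as hypothetical stand-ins for Pinot's Thrift `DataSource` and `PinotQuery`:

```java
// Hypothetical stand-ins for Pinot's DataSource/PinotQuery showing the
// subquery descent performed by GapfillUtils.getTableName.
class DataSourceSketch {
    final String tableName;           // null when this data source wraps a subquery
    final PinotQuerySketch subquery;  // null for a leaf data source

    DataSourceSketch(String tableName, PinotQuerySketch subquery) {
        this.tableName = tableName;
        this.subquery = subquery;
    }
}

class PinotQuerySketch {
    final DataSourceSketch dataSource;

    PinotQuerySketch(DataSourceSketch dataSource) { this.dataSource = dataSource; }

    static String getTableName(PinotQuerySketch query) {
        while (query.dataSource.subquery != null) {
            query = query.dataSource.subquery;  // descend to the leaf data source
        }
        return query.dataSource.tableName;
    }
}
```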
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: commits-unsubscribe@pinot.apache.org
For queries about this service, please contact Infrastructure at:
users@infra.apache.org
---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@pinot.apache.org
For additional commands, e-mail: commits-help@pinot.apache.org
[GitHub] [pinot] codecov-commenter edited a comment on pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#issuecomment-1013577523
# [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) Report
> Merging [#8029](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (3c7b767) into [master](https://codecov.io/gh/apache/pinot/commit/3f98ce37fdaef0335fcd82e621489d65751b1f55?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (3f98ce3) will **decrease** coverage by `40.28%`.
> The diff coverage is `16.58%`.
> :exclamation: Current head 3c7b767 differs from pull request most recent head 5850c3c. Consider uploading reports for the commit 5850c3c to get more accurate results
[![Impacted file tree graph](https://codecov.io/gh/apache/pinot/pull/8029/graphs/tree.svg?width=650&height=150&src=pr&token=4ibza2ugkz&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
```diff
@@             Coverage Diff              @@
##             master    #8029       +/-  ##
=============================================
- Coverage     70.72%   30.44%   -40.29%
=============================================
  Files          1631     1629        -2
  Lines         85279    85606      +327
  Branches      12844    12975      +131
=============================================
- Hits          60316    26062    -34254
- Misses        20799    57201    +36402
+ Partials       4164     2343     -1821
```
| Flag | Coverage Δ | |
|---|---|---|
| integration1 | `28.67% <16.08%> (-0.02%)` | :arrow_down: |
| integration2 | `27.33% <16.58%> (-0.16%)` | :arrow_down: |
| unittests1 | `?` | |
| unittests2 | `?` | |
Flags with carried forward coverage won't be shown. [Click here](https://docs.codecov.io/docs/carryforward-flags?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#carryforward-flags-in-the-pull-request-comment) to find out more.
| [Impacted Files](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | Coverage Δ | |
|---|---|---|
| [...roker/requesthandler/BaseBrokerRequestHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtYnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9icm9rZXIvcmVxdWVzdGhhbmRsZXIvQmFzZUJyb2tlclJlcXVlc3RIYW5kbGVyLmphdmE=) | `62.92% <0.00%> (-8.93%)` | :arrow_down: |
| [...ache/pinot/core/plan/GapfillSelectionPlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL0dhcGZpbGxTZWxlY3Rpb25QbGFuTm9kZS5qYXZh) | `0.00% <0.00%> (ø)` | |
| [...query/reduce/ColumnDataToBlockValSetConverter.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvQ29sdW1uRGF0YVRvQmxvY2tWYWxTZXRDb252ZXJ0ZXIuamF2YQ==) | `0.00% <0.00%> (ø)` | |
| [...not/core/query/reduce/GapFillDataTableReducer.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvR2FwRmlsbERhdGFUYWJsZVJlZHVjZXIuamF2YQ==) | `0.00% <0.00%> (ø)` | |
| [.../pinot/core/query/reduce/GapfillFilterHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvR2FwZmlsbEZpbHRlckhhbmRsZXIuamF2YQ==) | `0.00% <0.00%> (ø)` | |
| [...t/core/query/selection/SelectionOperatorUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9zZWxlY3Rpb24vU2VsZWN0aW9uT3BlcmF0b3JVdGlscy5qYXZh) | `69.25% <0.00%> (-22.93%)` | :arrow_down: |
| [...xt/utils/BrokerRequestToQueryContextConverter.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZXF1ZXN0L2NvbnRleHQvdXRpbHMvQnJva2VyUmVxdWVzdFRvUXVlcnlDb250ZXh0Q29udmVydGVyLmphdmE=) | `74.09% <20.00%> (-24.30%)` | :arrow_down: |
| [.../java/org/apache/pinot/core/util/GapfillUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS91dGlsL0dhcGZpbGxVdGlscy5qYXZh) | `27.35% <21.79%> (-36.28%)` | :arrow_down: |
| [...va/org/apache/pinot/core/plan/CombinePlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL0NvbWJpbmVQbGFuTm9kZS5qYXZh) | `68.05% <30.00%> (-19.45%)` | :arrow_down: |
| [...org/apache/pinot/core/data/table/IndexedTable.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9kYXRhL3RhYmxlL0luZGV4ZWRUYWJsZS5qYXZh) | `80.32% <33.33%> (-4.42%)` | :arrow_down: |
| ... and [1141 more](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Last update [3f98ce3...5850c3c](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: commits-unsubscribe@pinot.apache.org
For queries about this service, please contact Infrastructure at:
users@infra.apache.org
---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@pinot.apache.org
For additional commands, e-mail: commits-help@pinot.apache.org
[GitHub] [pinot] codecov-commenter edited a comment on pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#issuecomment-1013577523
# [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) Report
> Merging [#8029](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (7746a7a) into [master](https://codecov.io/gh/apache/pinot/commit/3f98ce37fdaef0335fcd82e621489d65751b1f55?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (3f98ce3) will **decrease** coverage by `1.22%`.
> The diff coverage is `80.91%`.
[![Impacted file tree graph](https://codecov.io/gh/apache/pinot/pull/8029/graphs/tree.svg?width=650&height=150&src=pr&token=4ibza2ugkz&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
```diff
@@ Coverage Diff @@
## master #8029 +/- ##
============================================
- Coverage 70.72% 69.50% -1.23%
Complexity 4242 4242
============================================
Files 1631 1641 +10
Lines 85279 85964 +685
Branches 12844 13020 +176
============================================
- Hits 60316 59745 -571
- Misses 20799 22049 +1250
- Partials 4164 4170 +6
```
| Flag | Coverage Δ | |
|---|---|---|
| integration1 | `?` | |
| integration2 | `27.21% <15.98%> (-0.28%)` | :arrow_down: |
| unittests1 | `67.13% <80.62%> (+0.14%)` | :arrow_up: |
| unittests2 | `13.98% <0.00%> (-0.12%)` | :arrow_down: |
Flags with carried forward coverage won't be shown. [Click here](https://docs.codecov.io/docs/carryforward-flags?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#carryforward-flags-in-the-pull-request-comment) to find out more.
| [Impacted Files](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | Coverage Δ | |
|---|---|---|
| [...roker/requesthandler/BaseBrokerRequestHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtYnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9icm9rZXIvcmVxdWVzdGhhbmRsZXIvQmFzZUJyb2tlclJlcXVlc3RIYW5kbGVyLmphdmE=) | `70.91% <0.00%> (-0.94%)` | :arrow_down: |
| [...t/core/query/selection/SelectionOperatorUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9zZWxlY3Rpb24vU2VsZWN0aW9uT3BlcmF0b3JVdGlscy5qYXZh) | `91.58% <0.00%> (-0.60%)` | :arrow_down: |
| [...query/reduce/ColumnDataToBlockValSetConverter.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvQ29sdW1uRGF0YVRvQmxvY2tWYWxTZXRDb252ZXJ0ZXIuamF2YQ==) | `17.30% <17.30%> (ø)` | |
| [...ot/core/query/reduce/filter/RowMatcherFactory.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvZmlsdGVyL1Jvd01hdGNoZXJGYWN0b3J5LmphdmE=) | `66.66% <66.66%> (ø)` | |
| [...xt/utils/BrokerRequestToQueryContextConverter.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZXF1ZXN0L2NvbnRleHQvdXRpbHMvQnJva2VyUmVxdWVzdFRvUXVlcnlDb250ZXh0Q29udmVydGVyLmphdmE=) | `90.80% <73.58%> (-7.59%)` | :arrow_down: |
| [.../java/org/apache/pinot/core/util/GapfillUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS91dGlsL0dhcGZpbGxVdGlscy5qYXZh) | `70.75% <74.35%> (+7.11%)` | :arrow_up: |
| [...va/org/apache/pinot/core/plan/CombinePlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL0NvbWJpbmVQbGFuTm9kZS5qYXZh) | `83.33% <75.00%> (-4.17%)` | :arrow_down: |
| [...inot/core/query/reduce/PostAggregationHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUG9zdEFnZ3JlZ2F0aW9uSGFuZGxlci5qYXZh) | `92.10% <75.00%> (+0.34%)` | :arrow_up: |
| [...ache/pinot/core/plan/GapfillSelectionPlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL0dhcGZpbGxTZWxlY3Rpb25QbGFuTm9kZS5qYXZh) | `83.33% <83.33%> (ø)` | |
| [...ore/query/reduce/filter/LiteralValueExtractor.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvZmlsdGVyL0xpdGVyYWxWYWx1ZUV4dHJhY3Rvci5qYXZh) | `83.33% <83.33%> (ø)` | |
| ... and [120 more](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
> Powered by [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Last update [3f98ce3...7746a7a](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r829434243
##########
File path: pinot-broker/src/main/java/org/apache/pinot/broker/requesthandler/BaseBrokerRequestHandler.java
##########
@@ -217,6 +218,10 @@ private BrokerResponseNative handleSQLRequest(long requestId, String query, Json
requestStatistics.setErrorCode(QueryException.PQL_PARSING_ERROR_CODE);
return new BrokerResponseNative(QueryException.getException(QueryException.PQL_PARSING_ERROR, e));
}
+
+ BrokerRequest originalBrokerRequest = brokerRequest;
+ brokerRequest = GapfillUtils.stripGapfill(originalBrokerRequest);
Review comment:
Fixed
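The `stripGapfill` call in the diff above removes the gapfill wrapper from the incoming request so the rest of the broker pipeline sees an ordinary query. The following toy sketch illustrates that idea on a plain query string; it is not Pinot code — the real `GapfillUtils.stripGapfill` operates on `BrokerRequest` objects, and the `GAPFILL(` token handling here is purely illustrative.

```java
// Illustrative sketch only: Pinot's GapfillUtils.stripGapfill works on
// BrokerRequest objects, not strings. This toy shows the underlying idea of
// removing the gapfill wrapper so the inner query can be planned normally,
// with the gapfill step applied later at reduce time.
public class GapfillStripSketch {

  /** Removes a leading GAPFILL token before a parenthesized subquery, if present. */
  public static String stripGapfill(String query) {
    int start = query.toUpperCase().indexOf("GAPFILL(");
    if (start < 0) {
      return query;  // no gapfill wrapper: return the request unchanged
    }
    // Drop only the GAPFILL token; the parenthesized inner query stays intact.
    return query.substring(0, start) + query.substring(start + "GAPFILL".length());
  }

  public static void main(String[] args) {
    System.out.println(stripGapfill(
        "SELECT ts, v FROM GAPFILL(SELECT ts, v FROM t) LIMIT 10"));
  }
}
```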
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r829534730
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/request/context/utils/BrokerRequestToQueryContextConverter.java
##########
@@ -42,23 +42,42 @@
import org.apache.pinot.common.utils.request.FilterQueryTree;
import org.apache.pinot.common.utils.request.RequestUtils;
import org.apache.pinot.core.query.request.context.QueryContext;
+import org.apache.pinot.core.util.GapfillUtils;
import org.apache.pinot.segment.spi.AggregationFunctionType;
public class BrokerRequestToQueryContextConverter {
private BrokerRequestToQueryContextConverter() {
}
+ /**
+ * Validate the gapfill query.
+ */
+ public static void validateGapfillQuery(BrokerRequest brokerRequest) {
Review comment:
Fixed
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r830748297
##########
File path: pinot-core/src/test/java/org/apache/pinot/queries/BaseQueriesTest.java
##########
@@ -197,21 +198,31 @@ protected BrokerResponseNative getBrokerResponseForSqlQuery(String sqlQuery, Pla
}
queryOptions.put(Request.QueryOptionKey.GROUP_BY_MODE, Request.SQL);
queryOptions.put(Request.QueryOptionKey.RESPONSE_FORMAT, Request.SQL);
+ BrokerRequest strippedBrokerRequest = GapfillUtils.stripGapfill(brokerRequest);
Review comment:
Fixed
[GitHub] [pinot] codecov-commenter edited a comment on pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#issuecomment-1013577523
# [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) Report
> Merging [#8029](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (25cb3ef) into [master](https://codecov.io/gh/apache/pinot/commit/f67b5aacc5b494a2f5d78e87d67a84ea3aadc99a?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (f67b5aa) will **decrease** coverage by `40.24%`.
> The diff coverage is `17.24%`.
> :exclamation: Current head 25cb3ef differs from pull request most recent head df0b289. Consider uploading reports for the commit df0b289 to get more accurate results
[![Impacted file tree graph](https://codecov.io/gh/apache/pinot/pull/8029/graphs/tree.svg?width=650&height=150&src=pr&token=4ibza2ugkz&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
```diff
@@ Coverage Diff @@
## master #8029 +/- ##
=============================================
- Coverage 70.76% 30.52% -40.25%
=============================================
Files 1631 1629 -2
Lines 85490 85760 +270
Branches 12878 12996 +118
=============================================
- Hits 60499 26174 -34325
- Misses 20819 57226 +36407
+ Partials 4172 2360 -1812
```
| Flag | Coverage Δ | |
|---|---|---|
| integration1 | `28.68% <16.71%> (-0.20%)` | :arrow_down: |
| integration2 | `27.30% <17.24%> (-0.21%)` | :arrow_down: |
| unittests1 | `?` | |
| unittests2 | `?` | |
Flags with carried forward coverage won't be shown. [Click here](https://docs.codecov.io/docs/carryforward-flags?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#carryforward-flags-in-the-pull-request-comment) to find out more.
| [Impacted Files](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | Coverage Δ | |
|---|---|---|
| [...roker/requesthandler/BaseBrokerRequestHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtYnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9icm9rZXIvcmVxdWVzdGhhbmRsZXIvQmFzZUJyb2tlclJlcXVlc3RIYW5kbGVyLmphdmE=) | `62.92% <0.00%> (-8.93%)` | :arrow_down: |
| [...ache/pinot/core/plan/GapfillSelectionPlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL0dhcGZpbGxTZWxlY3Rpb25QbGFuTm9kZS5qYXZh) | `0.00% <0.00%> (ø)` | |
| [...query/reduce/ColumnDataToBlockValSetConverter.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvQ29sdW1uRGF0YVRvQmxvY2tWYWxTZXRDb252ZXJ0ZXIuamF2YQ==) | `0.00% <0.00%> (ø)` | |
| [...not/core/query/reduce/GapFillDataTableReducer.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvR2FwRmlsbERhdGFUYWJsZVJlZHVjZXIuamF2YQ==) | `0.00% <0.00%> (ø)` | |
| [.../pinot/core/query/reduce/GapfillFilterHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvR2FwZmlsbEZpbHRlckhhbmRsZXIuamF2YQ==) | `0.00% <0.00%> (ø)` | |
| [...t/core/query/selection/SelectionOperatorUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9zZWxlY3Rpb24vU2VsZWN0aW9uT3BlcmF0b3JVdGlscy5qYXZh) | `69.25% <0.00%> (-22.93%)` | :arrow_down: |
| [.../java/org/apache/pinot/core/util/GapfillUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS91dGlsL0dhcGZpbGxVdGlscy5qYXZh) | `19.14% <13.27%> (-44.49%)` | :arrow_down: |
| [...va/org/apache/pinot/core/plan/CombinePlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL0NvbWJpbmVQbGFuTm9kZS5qYXZh) | `68.05% <30.00%> (-19.45%)` | :arrow_down: |
| [.../pinot/core/query/reduce/ResultReducerFactory.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUmVzdWx0UmVkdWNlckZhY3RvcnkuamF2YQ==) | `61.11% <33.33%> (-20.14%)` | :arrow_down: |
| [...query/request/context/utils/QueryContextUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZXF1ZXN0L2NvbnRleHQvdXRpbHMvUXVlcnlDb250ZXh0VXRpbHMuamF2YQ==) | `46.15% <36.36%> (-31.12%)` | :arrow_down: |
| ... and [1140 more](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
> Powered by [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Last update [f67b5aa...df0b289](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r785550667
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/reduce/BlockValSetImpl.java
##########
@@ -0,0 +1,171 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.core.query.reduce;
+
+import java.util.List;
+import javax.annotation.Nullable;
+import org.apache.pinot.common.utils.DataSchema;
+import org.apache.pinot.core.common.BlockValSet;
+import org.apache.pinot.segment.spi.index.reader.Dictionary;
+import org.apache.pinot.spi.data.FieldSpec;
+
+
+/**
+ * Helper class to convert the result rows to BlockValSet.
+ */
+@SuppressWarnings({"rawtypes", "unchecked"})
+public class BlockValSetImpl implements BlockValSet {
Review comment:
Done
[GitHub] [pinot] codecov-commenter edited a comment on pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#issuecomment-1013577523
# [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) Report
> Merging [#8029](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (b570851) into [master](https://codecov.io/gh/apache/pinot/commit/1d1a7d34709b6a89985a610f46dd1c97d6c9271a?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (1d1a7d3) will **decrease** coverage by `33.34%`.
> The diff coverage is `16.40%`.
[![Impacted file tree graph](https://codecov.io/gh/apache/pinot/pull/8029/graphs/tree.svg?width=650&height=150&src=pr&token=4ibza2ugkz&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
```diff
@@ Coverage Diff @@
## master #8029 +/- ##
=============================================
- Coverage 71.40% 38.06% -33.35%
+ Complexity 4223 81 -4142
=============================================
Files 1597 1608 +11
Lines 82903 83309 +406
Branches 12369 12452 +83
=============================================
- Hits 59201 31709 -27492
- Misses 19689 49173 +29484
+ Partials 4013 2427 -1586
```
| Flag | Coverage Δ | |
|---|---|---|
| integration1 | `29.02% <15.94%> (+0.03%)` | :arrow_up: |
| integration2 | `27.55% <16.40%> (-0.15%)` | :arrow_down: |
| unittests1 | `?` | |
| unittests2 | `14.28% <0.00%> (-0.08%)` | :arrow_down: |
Flags with carried forward coverage won't be shown. [Click here](https://docs.codecov.io/docs/carryforward-flags?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#carryforward-flags-in-the-pull-request-comment) to find out more.
| [Impacted Files](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | Coverage Δ | |
|---|---|---|
| [...va/org/apache/pinot/core/plan/CombinePlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL0NvbWJpbmVQbGFuTm9kZS5qYXZh) | `81.03% <0.00%> (-6.47%)` | :arrow_down: |
| [...inot/core/plan/PreAggGapFillSelectionPlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL1ByZUFnZ0dhcEZpbGxTZWxlY3Rpb25QbGFuTm9kZS5qYXZh) | `0.00% <0.00%> (ø)` | |
| [...pinot/core/plan/maker/InstancePlanMakerImplV2.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL21ha2VyL0luc3RhbmNlUGxhbk1ha2VySW1wbFYyLmphdmE=) | `59.40% <0.00%> (-17.37%)` | :arrow_down: |
| [...pache/pinot/core/query/reduce/BlockValSetImpl.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvQmxvY2tWYWxTZXRJbXBsLmphdmE=) | `0.00% <0.00%> (ø)` | |
| [...query/reduce/PreAggregateGapfillFilterHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUHJlQWdncmVnYXRlR2FwZmlsbEZpbHRlckhhbmRsZXIuamF2YQ==) | `0.00% <0.00%> (ø)` | |
| [.../reduce/PreAggregationGapFillDataTableReducer.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUHJlQWdncmVnYXRpb25HYXBGaWxsRGF0YVRhYmxlUmVkdWNlci5qYXZh) | `0.00% <0.00%> (ø)` | |
| [...PreAggregationGapFillSelectionOperatorService.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUHJlQWdncmVnYXRpb25HYXBGaWxsU2VsZWN0aW9uT3BlcmF0b3JTZXJ2aWNlLmphdmE=) | `0.00% <0.00%> (ø)` | |
| [.../pinot/core/query/reduce/ResultReducerFactory.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUmVzdWx0UmVkdWNlckZhY3RvcnkuamF2YQ==) | `61.11% <0.00%> (-20.14%)` | :arrow_down: |
| [.../java/org/apache/pinot/core/util/GapfillUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS91dGlsL0dhcGZpbGxVdGlscy5qYXZh) | `28.26% <7.14%> (-35.38%)` | :arrow_down: |
| [...pinot/core/query/request/context/QueryContext.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZXF1ZXN0L2NvbnRleHQvUXVlcnlDb250ZXh0LmphdmE=) | `88.14% <33.33%> (-9.77%)` | :arrow_down: |
| ... and [917 more](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
> Powered by [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Last update [1d1a7d3...b570851](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: commits-unsubscribe@pinot.apache.org
For queries about this service, please contact Infrastructure at:
users@infra.apache.org
---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@pinot.apache.org
For additional commands, e-mail: commits-help@pinot.apache.org
[GitHub] [pinot] siddharthteotia commented on a change in pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
siddharthteotia commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r787216329
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/request/context/QueryContext.java
##########
@@ -85,6 +85,7 @@
// Keep the BrokerRequest to make incremental changes
// TODO: Remove it once the whole query engine is using the QueryContext
private final BrokerRequest _brokerRequest;
+ private QueryContext _preAggregateGapFillQueryContext;
Review comment:
> even if it might impact the subquery feature, it can be fixed as part of the subquery feature development.
This ^^ is what I am concerned about. We are making changes to the wire object. We will have to live with it and if the decision taken now somehow does not hold for generic subquery then we will find ourselves retro-fitting subquery onto what we do now.
Note that generic subquery does not only include subquery in FROM clause but also in WHERE and other parts of a subquery. I haven't spend full time figuring out how to model all kinds of subquery in Pinot so just being extra careful here.
I think generic subquery will work and will be more cleaner. So may be let's go with it
[GitHub] [pinot] codecov-commenter edited a comment on pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#issuecomment-1013577523
# [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) Report
> Merging [#8029](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (e949697) into [master](https://codecov.io/gh/apache/pinot/commit/1d1a7d34709b6a89985a610f46dd1c97d6c9271a?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (1d1a7d3) will **decrease** coverage by `6.50%`.
> The diff coverage is `73.37%`.
[![Impacted file tree graph](https://codecov.io/gh/apache/pinot/pull/8029/graphs/tree.svg?width=650&height=150&src=pr&token=4ibza2ugkz&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
```diff
@@ Coverage Diff @@
## master #8029 +/- ##
============================================
- Coverage 71.40% 64.90% -6.51%
- Complexity 4223 4226 +3
============================================
Files 1597 1565 -32
Lines 82903 81446 -1457
Branches 12369 12246 -123
============================================
- Hits 59201 52864 -6337
- Misses 19689 24791 +5102
+ Partials 4013 3791 -222
```
| Flag | Coverage Δ | |
|---|---|---|
| integration1 | `?` | |
| integration2 | `?` | |
| unittests1 | `68.14% <73.37%> (+<0.01%)` | :arrow_up: |
| unittests2 | `14.29% <0.00%> (-0.08%)` | :arrow_down: |
Flags with carried forward coverage won't be shown. [Click here](https://docs.codecov.io/docs/carryforward-flags?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#carryforward-flags-in-the-pull-request-comment) to find out more.
| [Impacted Files](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | Coverage Δ | |
|---|---|---|
| [...query/reduce/ColumnDataToBlockValSetConverter.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvQ29sdW1uRGF0YVRvQmxvY2tWYWxTZXRDb252ZXJ0ZXIuamF2YQ==) | `17.30% <17.30%> (ø)` | |
| [...query/request/context/utils/QueryContextUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZXF1ZXN0L2NvbnRleHQvdXRpbHMvUXVlcnlDb250ZXh0VXRpbHMuamF2YQ==) | `69.56% <20.00%> (-7.71%)` | :arrow_down: |
| [.../java/org/apache/pinot/core/util/GapfillUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS91dGlsL0dhcGZpbGxVdGlscy5qYXZh) | `58.73% <54.83%> (-4.91%)` | :arrow_down: |
| [.../reduce/PreAggregationGapFillDataTableReducer.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUHJlQWdncmVnYXRpb25HYXBGaWxsRGF0YVRhYmxlUmVkdWNlci5qYXZh) | `63.88% <63.88%> (ø)` | |
| [...inot/core/query/reduce/PostAggregationHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUG9zdEFnZ3JlZ2F0aW9uSGFuZGxlci5qYXZh) | `92.42% <66.66%> (+0.11%)` | :arrow_up: |
| [...che/pinot/core/query/reduce/filter/RowMatcher.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvZmlsdGVyL1Jvd01hdGNoZXIuamF2YQ==) | `66.66% <66.66%> (ø)` | |
| [...ore/query/reduce/filter/LiteralValueExtractor.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvZmlsdGVyL0xpdGVyYWxWYWx1ZUV4dHJhY3Rvci5qYXZh) | `83.33% <83.33%> (ø)` | |
| [...PreAggregationGapFillSelectionOperatorService.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUHJlQWdncmVnYXRpb25HYXBGaWxsU2VsZWN0aW9uT3BlcmF0b3JTZXJ2aWNlLmphdmE=) | `83.51% <83.51%> (ø)` | |
| [...query/reduce/PreAggregateGapfillFilterHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUHJlQWdncmVnYXRlR2FwZmlsbEZpbHRlckhhbmRsZXIuamF2YQ==) | `85.71% <85.71%> (ø)` | |
| [.../core/query/reduce/filter/PredicateRowMatcher.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvZmlsdGVyL1ByZWRpY2F0ZVJvd01hdGNoZXIuamF2YQ==) | `87.50% <87.50%> (ø)` | |
| ... and [395 more](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Last update [1d1a7d3...e949697](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
[GitHub] [pinot] codecov-commenter edited a comment on pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#issuecomment-1013577523
# [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) Report
> Merging [#8029](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (e949697) into [master](https://codecov.io/gh/apache/pinot/commit/1d1a7d34709b6a89985a610f46dd1c97d6c9271a?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (1d1a7d3) will **decrease** coverage by `57.11%`.
> The diff coverage is `0.00%`.
[![Impacted file tree graph](https://codecov.io/gh/apache/pinot/pull/8029/graphs/tree.svg?width=650&height=150&src=pr&token=4ibza2ugkz&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
```diff
@@ Coverage Diff @@
## master #8029 +/- ##
=============================================
- Coverage 71.40% 14.29% -57.12%
+ Complexity 4223 81 -4142
=============================================
Files 1597 1565 -32
Lines 82903 81446 -1457
Branches 12369 12246 -123
=============================================
- Hits 59201 11641 -47560
- Misses 19689 68937 +49248
+ Partials 4013 868 -3145
```
| Flag | Coverage Δ | |
|---|---|---|
| integration1 | `?` | |
| integration2 | `?` | |
| unittests1 | `?` | |
| unittests2 | `14.29% <0.00%> (-0.08%)` | :arrow_down: |
Flags with carried forward coverage won't be shown. [Click here](https://docs.codecov.io/docs/carryforward-flags?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#carryforward-flags-in-the-pull-request-comment) to find out more.
| [Impacted Files](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | Coverage Δ | |
|---|---|---|
| [...org/apache/pinot/sql/parsers/CalciteSqlParser.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9zcWwvcGFyc2Vycy9DYWxjaXRlU3FsUGFyc2VyLmphdmE=) | `0.00% <0.00%> (-87.78%)` | :arrow_down: |
| [...va/org/apache/pinot/core/plan/CombinePlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL0NvbWJpbmVQbGFuTm9kZS5qYXZh) | `0.00% <0.00%> (-87.50%)` | :arrow_down: |
| [...inot/core/plan/PreAggGapFillSelectionPlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL1ByZUFnZ0dhcEZpbGxTZWxlY3Rpb25QbGFuTm9kZS5qYXZh) | `0.00% <0.00%> (ø)` | |
| [...pinot/core/plan/maker/InstancePlanMakerImplV2.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL21ha2VyL0luc3RhbmNlUGxhbk1ha2VySW1wbFYyLmphdmE=) | `0.00% <0.00%> (-76.77%)` | :arrow_down: |
| [...query/reduce/ColumnDataToBlockValSetConverter.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvQ29sdW1uRGF0YVRvQmxvY2tWYWxTZXRDb252ZXJ0ZXIuamF2YQ==) | `0.00% <0.00%> (ø)` | |
| [...e/pinot/core/query/reduce/HavingFilterHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvSGF2aW5nRmlsdGVySGFuZGxlci5qYXZh) | `0.00% <0.00%> (-91.31%)` | :arrow_down: |
| [...inot/core/query/reduce/PostAggregationHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUG9zdEFnZ3JlZ2F0aW9uSGFuZGxlci5qYXZh) | `0.00% <0.00%> (-92.31%)` | :arrow_down: |
| [...query/reduce/PreAggregateGapfillFilterHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUHJlQWdncmVnYXRlR2FwZmlsbEZpbHRlckhhbmRsZXIuamF2YQ==) | `0.00% <0.00%> (ø)` | |
| [.../reduce/PreAggregationGapFillDataTableReducer.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUHJlQWdncmVnYXRpb25HYXBGaWxsRGF0YVRhYmxlUmVkdWNlci5qYXZh) | `0.00% <0.00%> (ø)` | |
| [...PreAggregationGapFillSelectionOperatorService.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUHJlQWdncmVnYXRpb25HYXBGaWxsU2VsZWN0aW9uT3BlcmF0b3JTZXJ2aWNlLmphdmE=) | `0.00% <0.00%> (ø)` | |
| ... and [1296 more](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Last update [1d1a7d3...e949697](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r820366054
##########
File path: pinot-common/src/main/java/org/apache/pinot/sql/parsers/CalciteSqlParser.java
##########
@@ -369,6 +369,9 @@ private static PinotQuery compileCalciteSqlToPinotQuery(String sql) {
DataSource dataSource = new DataSource();
dataSource.setTableName(fromNode.toString());
pinotQuery.setDataSource(dataSource);
+ if (fromNode instanceof SqlSelect || fromNode instanceof SqlOrderBy) {
Review comment:
Here is the NullPointerException that I got when I made your suggested change:
java.lang.NullPointerException
at org.apache.pinot.spi.utils.builder.TableNameBuilder.tableHasTypeSuffix(TableNameBuilder.java:73)
at org.apache.pinot.spi.utils.builder.TableNameBuilder.extractRawTableName(TableNameBuilder.java:100)
at org.apache.pinot.core.query.reduce.BrokerReduceService.reduceOnDataTable(BrokerReduceService.java:95)
at org.apache.pinot.queries.BaseQueriesTest.getBrokerResponse(BaseQueriesTest.java:238)
at org.apache.pinot.queries.BaseQueriesTest.getBrokerResponseForSqlQuery(BaseQueriesTest.java:201)
at org.apache.pinot.queries.BaseQueriesTest.getBrokerResponseForSqlQuery(BaseQueriesTest.java:173)
at org.apache.pinot.queries.GapfillQueriesTest.datetimeconvertGapfillTestAggregateAggregateOrderBy(GapfillQueriesTest.java:3554)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:108)
at org.testng.internal.Invoker.invokeMethod(Invoker.java:661)
at org.testng.internal.Invoker.invokeTestMethod(Invoker.java:869)
at org.testng.internal.Invoker.invokeTestMethods(Invoker.java:1193)
at org.testng.internal.TestMethodWorker.invokeTestMethods(TestMethodWorker.java:126)
at org.testng.internal.TestMethodWorker.run(TestMethodWorker.java:109)
at org.testng.TestRunner.privateRun(TestRunner.java:744)
at org.testng.TestRunner.run(TestRunner.java:602)
at org.testng.SuiteRunner.runTest(SuiteRunner.java:380)
at org.testng.SuiteRunner.runSequentially(SuiteRunner.java:375)
at org.testng.SuiteRunner.privateRun(SuiteRunner.java:340)
at org.testng.SuiteRunner.run(SuiteRunner.java:289)
at org.testng.SuiteRunnerWorker.runSuite(SuiteRunnerWorker.java:52)
at org.testng.SuiteRunnerWorker.run(SuiteRunnerWorker.java:86)
at org.testng.TestNG.runSuitesSequentially(TestNG.java:1301)
at org.testng.TestNG.runSuitesLocally(TestNG.java:1226)
at org.testng.TestNG.runSuites(TestNG.java:1144)
at org.testng.TestNG.run(TestNG.java:1115)
at com.intellij.rt.testng.IDEARemoteTestNG.run(IDEARemoteTestNG.java:66)
at com.intellij.rt.testng.RemoteTestNGStarter.main(RemoteTestNGStarter.java:109)
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r819259743
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/request/context/QueryContext.java
##########
@@ -133,6 +137,8 @@ private QueryContext(String tableName, List<ExpressionContext> selectExpressions
_queryOptions = queryOptions;
_debugOptions = debugOptions;
_brokerRequest = brokerRequest;
+ _gapfillType = null;
Review comment:
GAP_FILL_NONE was my first version of the implementation. @Jackie-Jiang proposed the "null" option. I am fine with both solutions, but I do not want to make another change, so I prefer to keep it as it is.
@amrishlal Please let me know if you have a strong opinion about it.
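For context, the trade-off being discussed can be sketched as follows. This is an illustrative standalone example, not the actual Pinot classes: "no gapfill" can be represented either by an explicit sentinel constant such as GAP_FILL_NONE, or by a nullable field where null means the query has no gapfill.

```java
// Sketch of the design choice above. All names here (GapfillTypeChoice,
// withSentinel, withNull) are hypothetical, not Pinot's real API.
public class GapfillTypeChoice {
    enum GapfillType { GAP_FILL_NONE, GAP_FILL, AGGREGATE_GAP_FILL }

    // Option A: sentinel constant -- callers can switch on the enum
    // without a null check.
    static GapfillType withSentinel(boolean hasGapfill) {
        return hasGapfill ? GapfillType.GAP_FILL : GapfillType.GAP_FILL_NONE;
    }

    // Option B: nullable field -- null means the query has no gapfill,
    // so every caller must null-check before using the value.
    static GapfillType withNull(boolean hasGapfill) {
        return hasGapfill ? GapfillType.GAP_FILL : null;
    }

    public static void main(String[] args) {
        System.out.println(withSentinel(false)); // GAP_FILL_NONE
        System.out.println(withNull(false));     // null
    }
}
```

Option A avoids NullPointerExceptions at the cost of an extra enum constant; Option B keeps the enum minimal but pushes null handling onto every caller.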
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r819261079
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/request/context/utils/BrokerRequestToQueryContextConverter.java
##########
@@ -53,12 +55,81 @@ private BrokerRequestToQueryContextConverter() {
* Converts the given {@link BrokerRequest} into a {@link QueryContext}.
*/
public static QueryContext convert(BrokerRequest brokerRequest) {
- return brokerRequest.getPinotQuery() != null ? convertSQL(brokerRequest) : convertPQL(brokerRequest);
+ if (brokerRequest.getPinotQuery() != null) {
+ QueryContext queryContext = convertSQL(brokerRequest.getPinotQuery(), brokerRequest);
+ queryContext.setGapfillType(GapfillUtils.getGapfillType(queryContext));
Review comment:
The GapfillType is decided once all nested query context(s) have been constructed. It is hard to decide it before each individual QueryContext is constructed.
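The ordering constraint described above can be sketched as follows. This is a minimal illustrative example with assumed names (Ctx, classify, and the type strings), not Pinot's actual code: classifying the gapfill type requires seeing both the outer context and its subquery, so it can only run after the whole nested chain has been converted.

```java
// Hypothetical sketch of post-construction gapfill classification.
// Names and type strings are assumptions, not the real Pinot API.
public class GapfillClassification {
    static final class Ctx {
        final Ctx subquery;           // nested context, or null for a leaf
        final boolean selectsGapfill; // does this level select GAPFILL(...)?
        Ctx(Ctx subquery, boolean selectsGapfill) {
            this.subquery = subquery;
            this.selectsGapfill = selectsGapfill;
        }
    }

    // Runs over the fully-built chain, after conversion is complete.
    static String classify(Ctx root) {
        if (root.selectsGapfill) {
            return root.subquery == null ? "GAP_FILL" : "GAP_FILL_AGGREGATE";
        }
        if (root.subquery != null && root.subquery.selectsGapfill) {
            return "AGGREGATE_GAP_FILL";
        }
        return null; // no gapfill anywhere in this query
    }

    public static void main(String[] args) {
        Ctx inner = new Ctx(null, true);
        System.out.println(classify(new Ctx(inner, false))); // AGGREGATE_GAP_FILL
    }
}
```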
[GitHub] [pinot] weixiangsun commented on pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#issuecomment-1060199775
> It might be good to have a two or three integration test level queries as well since both broker and server are involved. Otherwise looks good to me assuming that you take care of moving the syntax checks to Broker side.
Do you have an example of how to add such an integration test? I do not see any query-related tests that run against both the broker and the server.
[GitHub] [pinot] codecov-commenter edited a comment on pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#issuecomment-1013577523
# [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) Report
> Merging [#8029](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (98cf976) into [master](https://codecov.io/gh/apache/pinot/commit/3f98ce37fdaef0335fcd82e621489d65751b1f55?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (3f98ce3) will **decrease** coverage by `56.62%`.
> The diff coverage is `0.00%`.
[![Impacted file tree graph](https://codecov.io/gh/apache/pinot/pull/8029/graphs/tree.svg?width=650&height=150&src=pr&token=4ibza2ugkz&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
```diff
@@ Coverage Diff @@
## master #8029 +/- ##
=============================================
- Coverage 70.72% 14.09% -56.63%
+ Complexity 4242 81 -4161
=============================================
Files 1631 1596 -35
Lines 85279 84221 -1058
Branches 12844 12830 -14
=============================================
- Hits 60316 11874 -48442
- Misses 20799 71456 +50657
+ Partials 4164 891 -3273
```
| Flag | Coverage Δ | |
|---|---|---|
| integration1 | `?` | |
| integration2 | `?` | |
| unittests1 | `?` | |
| unittests2 | `14.09% <0.00%> (-0.01%)` | :arrow_down: |
Flags with carried forward coverage won't be shown. [Click here](https://docs.codecov.io/docs/carryforward-flags?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#carryforward-flags-in-the-pull-request-comment) to find out more.
| [Impacted Files](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | Coverage Δ | |
|---|---|---|
| [...roker/requesthandler/BaseBrokerRequestHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtYnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9icm9rZXIvcmVxdWVzdGhhbmRsZXIvQmFzZUJyb2tlclJlcXVlc3RIYW5kbGVyLmphdmE=) | `24.41% <0.00%> (-47.44%)` | :arrow_down: |
| [...org/apache/pinot/sql/parsers/CalciteSqlParser.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9zcWwvcGFyc2Vycy9DYWxjaXRlU3FsUGFyc2VyLmphdmE=) | `0.00% <0.00%> (-86.56%)` | :arrow_down: |
| [...va/org/apache/pinot/core/plan/CombinePlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL0NvbWJpbmVQbGFuTm9kZS5qYXZh) | `0.00% <0.00%> (-87.50%)` | :arrow_down: |
| [...ache/pinot/core/plan/GapfillSelectionPlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL0dhcGZpbGxTZWxlY3Rpb25QbGFuTm9kZS5qYXZh) | `0.00% <0.00%> (ø)` | |
| [...pinot/core/plan/maker/InstancePlanMakerImplV2.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL21ha2VyL0luc3RhbmNlUGxhbk1ha2VySW1wbFYyLmphdmE=) | `0.00% <0.00%> (-76.77%)` | :arrow_down: |
| [...query/reduce/ColumnDataToBlockValSetConverter.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvQ29sdW1uRGF0YVRvQmxvY2tWYWxTZXRDb252ZXJ0ZXIuamF2YQ==) | `0.00% <0.00%> (ø)` | |
| [...not/core/query/reduce/GapFillDataTableReducer.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvR2FwRmlsbERhdGFUYWJsZVJlZHVjZXIuamF2YQ==) | `0.00% <0.00%> (ø)` | |
| [.../pinot/core/query/reduce/GapfillFilterHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvR2FwZmlsbEZpbHRlckhhbmRsZXIuamF2YQ==) | `0.00% <0.00%> (ø)` | |
| [...e/pinot/core/query/reduce/HavingFilterHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvSGF2aW5nRmlsdGVySGFuZGxlci5qYXZh) | `0.00% <0.00%> (-91.31%)` | :arrow_down: |
| [...inot/core/query/reduce/PostAggregationHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUG9zdEFnZ3JlZ2F0aW9uSGFuZGxlci5qYXZh) | `0.00% <0.00%> (-91.77%)` | :arrow_down: |
| ... and [1327 more](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Last update [3f98ce3...98cf976](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: commits-unsubscribe@pinot.apache.org
For queries about this service, please contact Infrastructure at:
users@infra.apache.org
---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@pinot.apache.org
For additional commands, e-mail: commits-help@pinot.apache.org
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r803194033
##########
File path: pinot-core/src/test/java/org/apache/pinot/queries/PreAggregationGapfillQueriesTest.java
##########
@@ -0,0 +1,3088 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.queries;
+
+import java.io.File;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+import org.apache.commons.io.FileUtils;
+import org.apache.pinot.common.response.broker.BrokerResponseNative;
+import org.apache.pinot.common.response.broker.ResultTable;
+import org.apache.pinot.segment.local.indexsegment.immutable.ImmutableSegmentLoader;
+import org.apache.pinot.segment.local.segment.creator.impl.SegmentIndexCreationDriverImpl;
+import org.apache.pinot.segment.local.segment.readers.GenericRowRecordReader;
+import org.apache.pinot.segment.spi.ImmutableSegment;
+import org.apache.pinot.segment.spi.IndexSegment;
+import org.apache.pinot.segment.spi.creator.SegmentGeneratorConfig;
+import org.apache.pinot.spi.config.table.TableConfig;
+import org.apache.pinot.spi.config.table.TableType;
+import org.apache.pinot.spi.data.DateTimeFormatSpec;
+import org.apache.pinot.spi.data.DateTimeGranularitySpec;
+import org.apache.pinot.spi.data.FieldSpec.DataType;
+import org.apache.pinot.spi.data.Schema;
+import org.apache.pinot.spi.data.readers.GenericRow;
+import org.apache.pinot.spi.utils.ReadMode;
+import org.apache.pinot.spi.utils.builder.TableConfigBuilder;
+import org.testng.Assert;
+import org.testng.annotations.AfterClass;
+import org.testng.annotations.BeforeClass;
+import org.testng.annotations.Test;
+
+
+/**
+ * Queries test for Gapfill queries.
+ */
+@SuppressWarnings("rawtypes")
+public class PreAggregationGapfillQueriesTest extends BaseQueriesTest {
+ private static final File INDEX_DIR = new File(FileUtils.getTempDirectory(), "PreAggregationGapfillQueriesTest");
+ private static final String RAW_TABLE_NAME = "parkingData";
+ private static final String SEGMENT_NAME = "testSegment";
+
+ private static final int NUM_LOTS = 4;
+
+ private static final String IS_OCCUPIED_COLUMN = "isOccupied";
+ private static final String LEVEL_ID_COLUMN = "levelId";
+ private static final String LOT_ID_COLUMN = "lotId";
+ private static final String EVENT_TIME_COLUMN = "eventTime";
+ private static final Schema SCHEMA = new Schema.SchemaBuilder()
+ .addSingleValueDimension(IS_OCCUPIED_COLUMN, DataType.INT)
+ .addSingleValueDimension(LOT_ID_COLUMN, DataType.STRING)
+ .addSingleValueDimension(LEVEL_ID_COLUMN, DataType.STRING)
+ .addSingleValueDimension(EVENT_TIME_COLUMN, DataType.LONG)
+ .setPrimaryKeyColumns(Arrays.asList(LOT_ID_COLUMN, EVENT_TIME_COLUMN))
+ .build();
+ private static final TableConfig TABLE_CONFIG = new TableConfigBuilder(TableType.OFFLINE).setTableName(RAW_TABLE_NAME)
+ .build();
+
+ private IndexSegment _indexSegment;
+ private List<IndexSegment> _indexSegments;
+
+ @Override
+ protected String getFilter() {
+ // NOTE: Use a match all filter to switch between DictionaryBasedAggregationOperator and AggregationOperator
+ return " WHERE eventTime >= 0";
+ }
+
+ @Override
+ protected IndexSegment getIndexSegment() {
+ return _indexSegment;
+ }
+
+ @Override
+ protected List<IndexSegment> getIndexSegments() {
+ return _indexSegments;
+ }
+
+ GenericRow createRow(String time, int levelId, int lotId, boolean isOccupied) {
+ DateTimeFormatSpec dateTimeFormatter
+ = new DateTimeFormatSpec("1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS");
+ GenericRow parkingRow = new GenericRow();
+ parkingRow.putValue(EVENT_TIME_COLUMN, dateTimeFormatter.fromFormatToMillis(time));
+ parkingRow.putValue(LEVEL_ID_COLUMN, "Level_" + String.valueOf(levelId));
+ parkingRow.putValue(LOT_ID_COLUMN, "LotId_" + String.valueOf(lotId));
+ parkingRow.putValue(IS_OCCUPIED_COLUMN, isOccupied);
+ return parkingRow;
+ }
+
+ @BeforeClass
+ public void setUp()
+ throws Exception {
+ FileUtils.deleteDirectory(INDEX_DIR);
+
+ List<GenericRow> records = new ArrayList<>(NUM_LOTS * 2);
+ records.add(createRow("2021-11-07 04:11:00.000", 0, 0, true));
+ records.add(createRow("2021-11-07 04:21:00.000", 0, 0, true));
+ records.add(createRow("2021-11-07 04:31:00.000", 1, 0, true));
+ records.add(createRow("2021-11-07 05:17:00.000", 0, 1, true));
+ records.add(createRow("2021-11-07 05:37:00.000", 0, 1, true));
+ records.add(createRow("2021-11-07 05:47:00.000", 1, 2, true));
+ records.add(createRow("2021-11-07 06:25:00.000", 0, 2, true));
+ records.add(createRow("2021-11-07 06:35:00.000", 0, 2, true));
+ records.add(createRow("2021-11-07 06:36:00.000", 1, 1, true));
+ records.add(createRow("2021-11-07 07:44:00.000", 0, 3, true));
+ records.add(createRow("2021-11-07 07:46:00.000", 1, 3, true));
+ records.add(createRow("2021-11-07 07:54:00.000", 1, 3, true));
+ records.add(createRow("2021-11-07 08:44:00.000", 0, 2, false));
+ records.add(createRow("2021-11-07 08:44:00.000", 1, 2, false));
+ records.add(createRow("2021-11-07 09:31:00.000", 0, 3, false));
+ records.add(createRow("2021-11-07 09:31:00.000", 1, 3, false));
+ records.add(createRow("2021-11-07 10:17:00.000", 0, 0, false));
+ records.add(createRow("2021-11-07 10:33:00.000", 0, 0, false));
+ records.add(createRow("2021-11-07 10:33:00.000", 1, 0, false));
+ records.add(createRow("2021-11-07 11:54:00.000", 0, 1, false));
+ records.add(createRow("2021-11-07 11:57:00.000", 1, 1, false));
+
+ SegmentGeneratorConfig segmentGeneratorConfig = new SegmentGeneratorConfig(TABLE_CONFIG, SCHEMA);
+ segmentGeneratorConfig.setTableName(RAW_TABLE_NAME);
+ segmentGeneratorConfig.setSegmentName(SEGMENT_NAME);
+ segmentGeneratorConfig.setOutDir(INDEX_DIR.getPath());
+
+ SegmentIndexCreationDriverImpl driver = new SegmentIndexCreationDriverImpl();
+ driver.init(segmentGeneratorConfig, new GenericRowRecordReader(records));
+ driver.build();
+
+ ImmutableSegment immutableSegment = ImmutableSegmentLoader.load(new File(INDEX_DIR, SEGMENT_NAME), ReadMode.mmap);
+ _indexSegment = immutableSegment;
+ _indexSegments = Arrays.asList(immutableSegment);
+ }
+
+ @Test
+ public void datetimeconvertGapfillTestSelectSelect() {
+ DateTimeFormatSpec dateTimeFormatter
+ = new DateTimeFormatSpec("1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + "time_col, levelId, lotId, isOccupied, time_col "
+ + "FROM ("
+ + " SELECT GapFill(DATETIMECONVERT(eventTime, '1:MILLISECONDS:EPOCH', "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', '1:HOURS'), "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', "
+ + " '2021-11-07 4:00:00.000', '2021-11-07 12:00:00.000', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
Review comment:
We should be able to support alias for the subquery.
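For context on the query under review: it combines `FILL(isOccupied, 'FILL_PREVIOUS_VALUE')` with `TIMESERIESON(levelId, lotId)`, i.e. each entity's missing time buckets are filled from that entity's last observed value before aggregation. A minimal sketch of that previous-value fill semantics follows; the helper name and record layout are illustrative only and not part of the Pinot API:

```python
from collections import defaultdict

def gapfill_previous_value(rows, buckets, key_fn, time_fn, value_fn):
    """Fill missing (bucket, key) cells with the entity's last observed value.

    rows:    list of records already mapped onto bucket start times
    buckets: ordered list of bucket start times to emit
    """
    # Index the observed value per (bucket, entity key).
    observed = defaultdict(dict)
    for row in rows:
        observed[time_fn(row)][key_fn(row)] = value_fn(row)

    keys = sorted({key_fn(r) for r in rows})
    last_seen = {}  # entity key -> most recently observed value
    filled = {}
    for bucket in buckets:
        for key in keys:
            if key in observed[bucket]:
                last_seen[key] = observed[bucket][key]
            # None means there is no previous value to fill from yet.
            filled[(bucket, key)] = last_seen.get(key)
    return filled
```

With rows for a lot at buckets 4 and 6 but not 5, bucket 5 is filled from bucket 4; buckets before an entity's first observation stay unfilled, matching the "fill previous value" behavior the test exercises.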
[GitHub] [pinot] codecov-commenter edited a comment on pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#issuecomment-1013577523
# [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) Report
> Merging [#8029](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (4a0d902) into [master](https://codecov.io/gh/apache/pinot/commit/262dc50e236ed2af25a0cf8c67658a48731ce573?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (262dc50) will **decrease** coverage by `6.76%`.
> The diff coverage is `75.54%`.
[![Impacted file tree graph](https://codecov.io/gh/apache/pinot/pull/8029/graphs/tree.svg?width=650&height=150&src=pr&token=4ibza2ugkz&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
```diff
@@ Coverage Diff @@
## master #8029 +/- ##
============================================
- Coverage 70.83% 64.06% -6.77%
+ Complexity 4258 4252 -6
============================================
Files 1636 1600 -36
Lines 85804 84442 -1362
Branches 12920 12855 -65
============================================
- Hits 60779 54101 -6678
- Misses 20836 26423 +5587
+ Partials 4189 3918 -271
```
| Flag | Coverage Δ | |
|---|---|---|
| integration1 | `?` | |
| integration2 | `?` | |
| unittests1 | `66.96% <76.19%> (+0.01%)` | :arrow_up: |
| unittests2 | `14.08% <0.31%> (-0.10%)` | :arrow_down: |
Flags with carried forward coverage won't be shown. [Click here](https://docs.codecov.io/docs/carryforward-flags?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#carryforward-flags-in-the-pull-request-comment) to find out more.
| [Impacted Files](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | Coverage Δ | |
|---|---|---|
| [...roker/requesthandler/GrpcBrokerRequestHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtYnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9icm9rZXIvcmVxdWVzdGhhbmRsZXIvR3JwY0Jyb2tlclJlcXVlc3RIYW5kbGVyLmphdmE=) | `0.00% <ø> (-78.58%)` | :arrow_down: |
| [...thandler/SingleConnectionBrokerRequestHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtYnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9icm9rZXIvcmVxdWVzdGhhbmRsZXIvU2luZ2xlQ29ubmVjdGlvbkJyb2tlclJlcXVlc3RIYW5kbGVyLmphdmE=) | `13.20% <0.00%> (-73.83%)` | :arrow_down: |
| [...t/controller/api/resources/PinotQueryResource.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29udHJvbGxlci9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29udHJvbGxlci9hcGkvcmVzb3VyY2VzL1Bpbm90UXVlcnlSZXNvdXJjZS5qYXZh) | `0.00% <0.00%> (-50.35%)` | :arrow_down: |
| [...t/core/query/selection/SelectionOperatorUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9zZWxlY3Rpb24vU2VsZWN0aW9uT3BlcmF0b3JVdGlscy5qYXZh) | `86.08% <0.00%> (-6.10%)` | :arrow_down: |
| [...query/reduce/ColumnDataToBlockValSetConverter.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvQ29sdW1uRGF0YVRvQmxvY2tWYWxTZXRDb252ZXJ0ZXIuamF2YQ==) | `17.30% <17.30%> (ø)` | |
| [...roker/requesthandler/BaseBrokerRequestHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtYnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9icm9rZXIvcmVxdWVzdGhhbmRsZXIvQmFzZUJyb2tlclJlcXVlc3RIYW5kbGVyLmphdmE=) | `23.88% <33.33%> (-47.95%)` | :arrow_down: |
| [...query/request/context/utils/QueryContextUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZXF1ZXN0L2NvbnRleHQvdXRpbHMvUXVlcnlDb250ZXh0VXRpbHMuamF2YQ==) | `65.38% <36.36%> (-11.89%)` | :arrow_down: |
| [.../java/org/apache/pinot/core/util/GapfillUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS91dGlsL0dhcGZpbGxVdGlscy5qYXZh) | `64.11% <65.19%> (+0.47%)` | :arrow_up: |
| [...inot/core/query/reduce/PostAggregationHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUG9zdEFnZ3JlZ2F0aW9uSGFuZGxlci5qYXZh) | `91.89% <66.66%> (+0.12%)` | :arrow_up: |
| [...ot/core/query/reduce/filter/RowMatcherFactory.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvZmlsdGVyL1Jvd01hdGNoZXJGYWN0b3J5LmphdmE=) | `66.66% <66.66%> (ø)` | |
| ... and [402 more](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
> Powered by [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Last update [262dc50...4a0d902](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
[GitHub] [pinot] codecov-commenter edited a comment on pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#issuecomment-1013577523
# [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) Report
> Merging [#8029](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (4a0d902) into [master](https://codecov.io/gh/apache/pinot/commit/262dc50e236ed2af25a0cf8c67658a48731ce573?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (262dc50) will **decrease** coverage by `0.00%`.
> The diff coverage is `77.11%`.
[![Impacted file tree graph](https://codecov.io/gh/apache/pinot/pull/8029/graphs/tree.svg?width=650&height=150&src=pr&token=4ibza2ugkz&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
```diff
@@ Coverage Diff @@
## master #8029 +/- ##
============================================
- Coverage 70.83% 70.82% -0.01%
+ Complexity 4258 4252 -6
============================================
Files 1636 1645 +9
Lines 85804 86326 +522
Branches 12920 13059 +139
============================================
+ Hits 60779 61143 +364
- Misses 20836 20945 +109
- Partials 4189 4238 +49
```
| Flag | Coverage Δ | |
|---|---|---|
| integration1 | `28.73% <19.43%> (-0.23%)` | :arrow_down: |
| integration2 | `27.44% <19.74%> (-0.15%)` | :arrow_down: |
| unittests1 | `66.96% <76.19%> (+0.01%)` | :arrow_up: |
| unittests2 | `14.08% <0.31%> (-0.10%)` | :arrow_down: |
| [Impacted Files](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | Coverage Δ | |
|---|---|---|
| [...roker/requesthandler/GrpcBrokerRequestHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtYnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9icm9rZXIvcmVxdWVzdGhhbmRsZXIvR3JwY0Jyb2tlclJlcXVlc3RIYW5kbGVyLmphdmE=) | `78.57% <ø> (ø)` | |
| [...t/core/query/selection/SelectionOperatorUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9zZWxlY3Rpb24vU2VsZWN0aW9uT3BlcmF0b3JVdGlscy5qYXZh) | `91.58% <0.00%> (-0.60%)` | :arrow_down: |
| [...query/reduce/ColumnDataToBlockValSetConverter.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvQ29sdW1uRGF0YVRvQmxvY2tWYWxTZXRDb252ZXJ0ZXIuamF2YQ==) | `17.30% <17.30%> (ø)` | |
| [...query/request/context/utils/QueryContextUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZXF1ZXN0L2NvbnRleHQvdXRpbHMvUXVlcnlDb250ZXh0VXRpbHMuamF2YQ==) | `67.30% <36.36%> (-9.97%)` | :arrow_down: |
| [...roker/requesthandler/BaseBrokerRequestHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtYnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9icm9rZXIvcmVxdWVzdGhhbmRsZXIvQmFzZUJyb2tlclJlcXVlc3RIYW5kbGVyLmphdmE=) | `71.57% <66.66%> (-0.26%)` | :arrow_down: |
| [...inot/core/query/reduce/PostAggregationHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUG9zdEFnZ3JlZ2F0aW9uSGFuZGxlci5qYXZh) | `91.89% <66.66%> (+0.12%)` | :arrow_up: |
| [...ot/core/query/reduce/filter/RowMatcherFactory.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvZmlsdGVyL1Jvd01hdGNoZXJGYWN0b3J5LmphdmE=) | `66.66% <66.66%> (ø)` | |
| [.../java/org/apache/pinot/core/util/GapfillUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS91dGlsL0dhcGZpbGxVdGlscy5qYXZh) | `65.55% <66.85%> (+1.91%)` | :arrow_up: |
| [...ore/query/reduce/filter/LiteralValueExtractor.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvZmlsdGVyL0xpdGVyYWxWYWx1ZUV4dHJhY3Rvci5qYXZh) | `83.33% <83.33%> (ø)` | |
| [...org/apache/pinot/sql/parsers/CalciteSqlParser.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9zcWwvcGFyc2Vycy9DYWxjaXRlU3FsUGFyc2VyLmphdmE=) | `88.01% <84.61%> (-0.18%)` | :arrow_down: |
| ... and [47 more](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
> Powered by [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Last update [262dc50...4a0d902](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
[GitHub] [pinot] codecov-commenter edited a comment on pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#issuecomment-1013577523
# [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) Report
> Merging [#8029](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (4dd7d6c) into [master](https://codecov.io/gh/apache/pinot/commit/262dc50e236ed2af25a0cf8c67658a48731ce573?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (262dc50) will **decrease** coverage by `42.00%`.
> The diff coverage is `19.56%`.
[![Impacted file tree graph](https://codecov.io/gh/apache/pinot/pull/8029/graphs/tree.svg?width=650&height=150&src=pr&token=4ibza2ugkz&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
```diff
@@ Coverage Diff @@
## master #8029 +/- ##
=============================================
- Coverage 70.83% 28.82% -42.01%
=============================================
Files 1636 1633 -3
Lines 85804 85972 +168
Branches 12920 13021 +101
=============================================
- Hits 60779 24785 -35994
- Misses 20836 58900 +38064
+ Partials 4189 2287 -1902
```
| Flag | Coverage Δ | |
|---|---|---|
| integration1 | `28.82% <19.56%> (-0.13%)` | :arrow_down: |
| integration2 | `?` | |
| unittests1 | `?` | |
| unittests2 | `?` | |
| [Impacted Files](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | Coverage Δ | |
|---|---|---|
| [...roker/requesthandler/GrpcBrokerRequestHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtYnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9icm9rZXIvcmVxdWVzdGhhbmRsZXIvR3JwY0Jyb2tlclJlcXVlc3RIYW5kbGVyLmphdmE=) | `0.00% <ø> (-78.58%)` | :arrow_down: |
| [...t/controller/api/resources/PinotQueryResource.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29udHJvbGxlci9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29udHJvbGxlci9hcGkvcmVzb3VyY2VzL1Bpbm90UXVlcnlSZXNvdXJjZS5qYXZh) | `0.00% <0.00%> (-50.35%)` | :arrow_down: |
| [...query/reduce/ColumnDataToBlockValSetConverter.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvQ29sdW1uRGF0YVRvQmxvY2tWYWxTZXRDb252ZXJ0ZXIuamF2YQ==) | `0.00% <0.00%> (ø)` | |
| [...ache/pinot/core/query/reduce/GapFillProcessor.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvR2FwRmlsbFByb2Nlc3Nvci5qYXZh) | `0.00% <0.00%> (ø)` | |
| [.../pinot/core/query/reduce/GapfillFilterHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvR2FwZmlsbEZpbHRlckhhbmRsZXIuamF2YQ==) | `0.00% <0.00%> (ø)` | |
| [...t/core/query/selection/SelectionOperatorUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9zZWxlY3Rpb24vU2VsZWN0aW9uT3BlcmF0b3JVdGlscy5qYXZh) | `69.25% <0.00%> (-22.93%)` | :arrow_down: |
| [.../java/org/apache/pinot/core/util/GapfillUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS91dGlsL0dhcGZpbGxVdGlscy5qYXZh) | `15.31% <11.04%> (-48.33%)` | :arrow_down: |
| [...query/request/context/utils/QueryContextUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZXF1ZXN0L2NvbnRleHQvdXRpbHMvUXVlcnlDb250ZXh0VXRpbHMuamF2YQ==) | `44.23% <36.36%> (-33.05%)` | :arrow_down: |
| [...e/pinot/core/query/reduce/BrokerReduceService.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvQnJva2VyUmVkdWNlU2VydmljZS5qYXZh) | `87.80% <40.00%> (-9.50%)` | :arrow_down: |
| [...inot/core/query/reduce/PostAggregationHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUG9zdEFnZ3JlZ2F0aW9uSGFuZGxlci5qYXZh) | `83.78% <50.00%> (-7.99%)` | :arrow_down: |
| ... and [1197 more](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Last update [262dc50...4dd7d6c](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: commits-unsubscribe@pinot.apache.org
For queries about this service, please contact Infrastructure at:
users@infra.apache.org
---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@pinot.apache.org
For additional commands, e-mail: commits-help@pinot.apache.org
[GitHub] [pinot] codecov-commenter edited a comment on pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#issuecomment-1013577523
# [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) Report
> Merging [#8029](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (572ab88) into [master](https://codecov.io/gh/apache/pinot/commit/df39bdacf09dff5a00f5180a5d1ce838710b45a4?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (df39bda) will **decrease** coverage by `0.95%`.
> The diff coverage is `81.17%`.
[![Impacted file tree graph](https://codecov.io/gh/apache/pinot/pull/8029/graphs/tree.svg?width=650&height=150&src=pr&token=4ibza2ugkz&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
```diff
@@             Coverage Diff             @@
##             master    #8029     +/-   ##
============================================
- Coverage     71.01%   70.05%   -0.96%
+ Complexity     4314     4313      -1
============================================
  Files          1624     1634     +10
  Lines         84873    85502    +629
  Branches      12791    12947    +156
============================================
- Hits          60273    59899    -374
- Misses        20453    21437    +984
- Partials       4147     4166    +19
```
| Flag | Coverage Δ | |
|---|---|---|
| integration1 | `28.43% <14.88%> (-0.22%)` | :arrow_down: |
| integration2 | `?` | |
| unittests1 | `67.57% <81.17%> (+0.12%)` | :arrow_up: |
| unittests2 | `14.01% <0.00%> (-0.10%)` | :arrow_down: |
Flags with carried forward coverage won't be shown. [Click here](https://docs.codecov.io/docs/carryforward-flags?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#carryforward-flags-in-the-pull-request-comment) to find out more.
| [Impacted Files](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | Coverage Δ | |
|---|---|---|
| [...t/core/query/selection/SelectionOperatorUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9zZWxlY3Rpb24vU2VsZWN0aW9uT3BlcmF0b3JVdGlscy5qYXZh) | `91.58% <0.00%> (-0.60%)` | :arrow_down: |
| [...query/reduce/ColumnDataToBlockValSetConverter.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvQ29sdW1uRGF0YVRvQmxvY2tWYWxTZXRDb252ZXJ0ZXIuamF2YQ==) | `17.30% <17.30%> (ø)` | |
| [...che/pinot/core/query/reduce/filter/RowMatcher.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvZmlsdGVyL1Jvd01hdGNoZXIuamF2YQ==) | `66.66% <66.66%> (ø)` | |
| [.../java/org/apache/pinot/core/util/GapfillUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS91dGlsL0dhcGZpbGxVdGlscy5qYXZh) | `75.24% <81.42%> (+11.61%)` | :arrow_up: |
| [...va/org/apache/pinot/core/plan/CombinePlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL0NvbWJpbmVQbGFuTm9kZS5qYXZh) | `83.33% <81.81%> (-4.17%)` | :arrow_down: |
| [...inot/core/query/reduce/PostAggregationHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUG9zdEFnZ3JlZ2F0aW9uSGFuZGxlci5qYXZh) | `91.89% <83.33%> (+0.22%)` | :arrow_up: |
| [...ore/query/reduce/filter/LiteralValueExtractor.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvZmlsdGVyL0xpdGVyYWxWYWx1ZUV4dHJhY3Rvci5qYXZh) | `83.33% <83.33%> (ø)` | |
| [.../reduce/PreAggregationGapFillDataTableReducer.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUHJlQWdncmVnYXRpb25HYXBGaWxsRGF0YVRhYmxlUmVkdWNlci5qYXZh) | `84.90% <84.90%> (ø)` | |
| [...query/reduce/PreAggregateGapfillFilterHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUHJlQWdncmVnYXRlR2FwZmlsbEZpbHRlckhhbmRsZXIuamF2YQ==) | `85.71% <85.71%> (ø)` | |
| [.../core/query/reduce/filter/PredicateRowMatcher.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvZmlsdGVyL1ByZWRpY2F0ZVJvd01hdGNoZXIuamF2YQ==) | `87.50% <87.50%> (ø)` | |
| ... and [104 more](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Last update [df39bda...572ab88](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r806445191
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/operator/combine/GapfillGroupByOrderByCombineOperator.java
##########
@@ -0,0 +1,263 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.core.operator.combine;
+
+import com.google.common.base.Preconditions;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.Iterator;
+import java.util.List;
+import java.util.concurrent.ConcurrentLinkedQueue;
+import java.util.concurrent.CountDownLatch;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.TimeoutException;
+import org.apache.pinot.common.exception.QueryException;
+import org.apache.pinot.common.request.context.ExpressionContext;
+import org.apache.pinot.common.response.ProcessingException;
+import org.apache.pinot.common.utils.DataSchema;
+import org.apache.pinot.core.common.Operator;
+import org.apache.pinot.core.data.table.ConcurrentIndexedTable;
+import org.apache.pinot.core.data.table.IndexedTable;
+import org.apache.pinot.core.data.table.IntermediateRecord;
+import org.apache.pinot.core.data.table.Key;
+import org.apache.pinot.core.data.table.Record;
+import org.apache.pinot.core.data.table.UnboundedConcurrentIndexedTable;
+import org.apache.pinot.core.operator.AcquireReleaseColumnsSegmentOperator;
+import org.apache.pinot.core.operator.blocks.IntermediateResultsBlock;
+import org.apache.pinot.core.query.aggregation.function.AggregationFunction;
+import org.apache.pinot.core.query.aggregation.groupby.AggregationGroupByResult;
+import org.apache.pinot.core.query.aggregation.groupby.GroupKeyGenerator;
+import org.apache.pinot.core.query.request.context.QueryContext;
+import org.apache.pinot.core.util.GapfillUtils;
+import org.apache.pinot.core.util.GroupByUtils;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+
+/**
+ * Combine operator for aggregation group-by queries with SQL semantics.
+ * TODO: Use CombineOperatorUtils.getNumThreadsForQuery() to get the parallelism of the query instead of using
+ * all threads
+ */
+@SuppressWarnings("rawtypes")
+public class GapfillGroupByOrderByCombineOperator extends BaseCombineOperator {
Review comment:
Removed
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r803194608
##########
File path: pinot-core/src/test/java/org/apache/pinot/queries/PreAggregationGapfillQueriesTest.java
##########
@@ -0,0 +1,3088 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.queries;
+
+import java.io.File;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+import org.apache.commons.io.FileUtils;
+import org.apache.pinot.common.response.broker.BrokerResponseNative;
+import org.apache.pinot.common.response.broker.ResultTable;
+import org.apache.pinot.segment.local.indexsegment.immutable.ImmutableSegmentLoader;
+import org.apache.pinot.segment.local.segment.creator.impl.SegmentIndexCreationDriverImpl;
+import org.apache.pinot.segment.local.segment.readers.GenericRowRecordReader;
+import org.apache.pinot.segment.spi.ImmutableSegment;
+import org.apache.pinot.segment.spi.IndexSegment;
+import org.apache.pinot.segment.spi.creator.SegmentGeneratorConfig;
+import org.apache.pinot.spi.config.table.TableConfig;
+import org.apache.pinot.spi.config.table.TableType;
+import org.apache.pinot.spi.data.DateTimeFormatSpec;
+import org.apache.pinot.spi.data.DateTimeGranularitySpec;
+import org.apache.pinot.spi.data.FieldSpec.DataType;
+import org.apache.pinot.spi.data.Schema;
+import org.apache.pinot.spi.data.readers.GenericRow;
+import org.apache.pinot.spi.utils.ReadMode;
+import org.apache.pinot.spi.utils.builder.TableConfigBuilder;
+import org.testng.Assert;
+import org.testng.annotations.AfterClass;
+import org.testng.annotations.BeforeClass;
+import org.testng.annotations.Test;
+
+
+/**
+ * Queries test for Gapfill queries.
+ */
+@SuppressWarnings("rawtypes")
+public class PreAggregationGapfillQueriesTest extends BaseQueriesTest {
+ private static final File INDEX_DIR = new File(FileUtils.getTempDirectory(), "PostAggregationGapfillQueriesTest");
+ private static final String RAW_TABLE_NAME = "parkingData";
+ private static final String SEGMENT_NAME = "testSegment";
+
+ private static final int NUM_LOTS = 4;
+
+ private static final String IS_OCCUPIED_COLUMN = "isOccupied";
+ private static final String LEVEL_ID_COLUMN = "levelId";
+ private static final String LOT_ID_COLUMN = "lotId";
+ private static final String EVENT_TIME_COLUMN = "eventTime";
+ private static final Schema SCHEMA = new Schema.SchemaBuilder()
+ .addSingleValueDimension(IS_OCCUPIED_COLUMN, DataType.INT)
+ .addSingleValueDimension(LOT_ID_COLUMN, DataType.STRING)
+ .addSingleValueDimension(LEVEL_ID_COLUMN, DataType.STRING)
+ .addSingleValueDimension(EVENT_TIME_COLUMN, DataType.LONG)
+ .setPrimaryKeyColumns(Arrays.asList(LOT_ID_COLUMN, EVENT_TIME_COLUMN))
+ .build();
+ private static final TableConfig TABLE_CONFIG = new TableConfigBuilder(TableType.OFFLINE).setTableName(RAW_TABLE_NAME)
+ .build();
+
+ private IndexSegment _indexSegment;
+ private List<IndexSegment> _indexSegments;
+
+ @Override
+ protected String getFilter() {
+ // NOTE: Use a match all filter to switch between DictionaryBasedAggregationOperator and AggregationOperator
+ return " WHERE eventTime >= 0";
+ }
+
+ @Override
+ protected IndexSegment getIndexSegment() {
+ return _indexSegment;
+ }
+
+ @Override
+ protected List<IndexSegment> getIndexSegments() {
+ return _indexSegments;
+ }
+
+ GenericRow createRow(String time, int levelId, int lotId, boolean isOccupied) {
+ DateTimeFormatSpec dateTimeFormatter
+ = new DateTimeFormatSpec("1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS");
+ GenericRow parkingRow = new GenericRow();
+ parkingRow.putValue(EVENT_TIME_COLUMN, dateTimeFormatter.fromFormatToMillis(time));
+ parkingRow.putValue(LEVEL_ID_COLUMN, "Level_" + String.valueOf(levelId));
+ parkingRow.putValue(LOT_ID_COLUMN, "LotId_" + String.valueOf(lotId));
+ parkingRow.putValue(IS_OCCUPIED_COLUMN, isOccupied);
+ return parkingRow;
+ }
+
+ @BeforeClass
+ public void setUp()
+ throws Exception {
+ FileUtils.deleteDirectory(INDEX_DIR);
+
+ List<GenericRow> records = new ArrayList<>(NUM_LOTS * 2);
+ records.add(createRow("2021-11-07 04:11:00.000", 0, 0, true));
+ records.add(createRow("2021-11-07 04:21:00.000", 0, 0, true));
+ records.add(createRow("2021-11-07 04:31:00.000", 1, 0, true));
+ records.add(createRow("2021-11-07 05:17:00.000", 0, 1, true));
+ records.add(createRow("2021-11-07 05:37:00.000", 0, 1, true));
+ records.add(createRow("2021-11-07 05:47:00.000", 1, 2, true));
+ records.add(createRow("2021-11-07 06:25:00.000", 0, 2, true));
+ records.add(createRow("2021-11-07 06:35:00.000", 0, 2, true));
+ records.add(createRow("2021-11-07 06:36:00.000", 1, 1, true));
+ records.add(createRow("2021-11-07 07:44:00.000", 0, 3, true));
+ records.add(createRow("2021-11-07 07:46:00.000", 1, 3, true));
+ records.add(createRow("2021-11-07 07:54:00.000", 1, 3, true));
+ records.add(createRow("2021-11-07 08:44:00.000", 0, 2, false));
+ records.add(createRow("2021-11-07 08:44:00.000", 1, 2, false));
+ records.add(createRow("2021-11-07 09:31:00.000", 0, 3, false));
+ records.add(createRow("2021-11-07 09:31:00.000", 1, 3, false));
+ records.add(createRow("2021-11-07 10:17:00.000", 0, 0, false));
+ records.add(createRow("2021-11-07 10:33:00.000", 0, 0, false));
+ records.add(createRow("2021-11-07 10:33:00.000", 1, 0, false));
+ records.add(createRow("2021-11-07 11:54:00.000", 0, 1, false));
+ records.add(createRow("2021-11-07 11:57:00.000", 1, 1, false));
+
+ SegmentGeneratorConfig segmentGeneratorConfig = new SegmentGeneratorConfig(TABLE_CONFIG, SCHEMA);
+ segmentGeneratorConfig.setTableName(RAW_TABLE_NAME);
+ segmentGeneratorConfig.setSegmentName(SEGMENT_NAME);
+ segmentGeneratorConfig.setOutDir(INDEX_DIR.getPath());
+
+ SegmentIndexCreationDriverImpl driver = new SegmentIndexCreationDriverImpl();
+ driver.init(segmentGeneratorConfig, new GenericRowRecordReader(records));
+ driver.build();
+
+ ImmutableSegment immutableSegment = ImmutableSegmentLoader.load(new File(INDEX_DIR, SEGMENT_NAME), ReadMode.mmap);
+ _indexSegment = immutableSegment;
+ _indexSegments = Arrays.asList(immutableSegment);
+ }
+
+ @Test
+ public void datetimeconvertGapfillTestSelectSelect() {
+ DateTimeFormatSpec dateTimeFormatter
+ = new DateTimeFormatSpec("1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + "time_col, levelId, lotId, isOccupied, time_col "
+ + "FROM ("
+ + " SELECT GapFill(DATETIMECONVERT(eventTime, '1:MILLISECONDS:EPOCH', "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', '1:HOURS'), "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', "
+ + " '2021-11-07 4:00:00.000', '2021-11-07 12:00:00.000', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
Review comment:
Will create a work item and add the TODO. Will address it in a separate PR.
[GitHub] [pinot] amrishlal commented on a change in pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
amrishlal commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r807577077
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/plan/CombinePlanNode.java
##########
@@ -161,8 +162,18 @@ public BaseCombineOperator run() {
// Streaming query (only support selection only)
return new StreamingSelectionOnlyCombineOperator(operators, _queryContext, _executorService, _streamObserver);
}
+ GapfillUtils.GapfillType gapfillType = GapfillUtils.getGapfillType(_queryContext);
Review comment:
I think this can be replaced by `_queryContext.getGapfillType()`?
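To illustrate the suggestion, here is a minimal sketch of caching the computed gapfill type on the query context so call sites don't re-derive it from `GapfillUtils` each time. The class and method names below are illustrative stand-ins, not Pinot's actual API.

```java
// Hypothetical sketch: compute the gapfill type once and cache it on the query
// context, so CombinePlanNode can call ctx.getGapfillType() directly.
// Names are illustrative, not Pinot's actual classes.
public class QueryContextSketch {
  public enum GapfillType { NONE, GAP_FILL }

  private GapfillType _gapfillType;

  // Lazily derive and cache the gapfill type on first access.
  public GapfillType getGapfillType() {
    if (_gapfillType == null) {
      _gapfillType = deriveGapfillType();
    }
    return _gapfillType;
  }

  private GapfillType deriveGapfillType() {
    // Stand-in for GapfillUtils.getGapfillType(queryContext).
    return GapfillType.NONE;
  }

  public static void main(String[] args) {
    QueryContextSketch ctx = new QueryContextSketch();
    System.out.println(ctx.getGapfillType());
  }
}
```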
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/plan/CombinePlanNode.java
##########
@@ -161,8 +162,18 @@ public BaseCombineOperator run() {
// Streaming query (only support selection only)
return new StreamingSelectionOnlyCombineOperator(operators, _queryContext, _executorService, _streamObserver);
}
+ GapfillUtils.GapfillType gapfillType = GapfillUtils.getGapfillType(_queryContext);
Review comment:
minor: if possible, the first `if` should handle the most common case, the second `if` the next most common case, and so on. There aren't many `if`s here, though, so the impact should be small.
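A small sketch of the ordering being suggested: test the most common case first so typical queries take the cheapest branch. The operator names below are illustrative stand-ins, not Pinot's actual dispatch logic.

```java
// Hypothetical sketch: order dispatch checks by expected frequency.
// Operator names are illustrative, not Pinot's actual classes.
public class DispatchOrderSketch {
  public enum GapfillType { NONE, GAP_FILL, AGGREGATE_GAP_FILL }

  public static String pickCombineOperator(GapfillType type) {
    // Most queries carry no gapfill, so check that first.
    if (type == GapfillType.NONE) {
      return "GroupByOrderByCombineOperator";
    }
    // Plain gapfill is the next most common case in this sketch.
    if (type == GapfillType.GAP_FILL) {
      return "GapfillCombineOperator";
    }
    return "AggregateGapfillCombineOperator";
  }

  public static void main(String[] args) {
    System.out.println(pickCombineOperator(GapfillType.NONE));
  }
}
```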
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/util/GapfillUtils.java
##########
@@ -31,7 +36,25 @@
*/
public class GapfillUtils {
private static final String POST_AGGREGATE_GAP_FILL = "postaggregategapfill";
+ private static final String GAP_FILL = "gapfill";
private static final String FILL = "fill";
+ private static final String TIME_SERIES_ON = "timeSeriesOn";
+ private static final int STARTING_INDEX_OF_OPTIONAL_ARGS_FOR_PRE_AGGREGATE_GAP_FILL = 5;
+
+ public enum GapfillType {
+ // one sql query with gapfill only
+ Gapfill,
+ // gapfill as subquery, the outer query may have the filter
+ GapfillSelect,
+ // gapfill as subquery, the outer query has the aggregation
+ GapfillAggregate,
+ // aggregation as subquery, the outer query is gapfill
+ AggregateGapfill,
+ // aggregation as the second nested subquery, gapfill as the first nested subquery, a different aggregation as the outer query
+ AggregateGapfillAggregate,
+ // no gapfill at all.
+ None
Review comment:
Enum constants in Pinot are usually upper case with underscores, for example GAP_FILL.
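A minimal sketch of that naming convention applied to the enum from the diff: constants become upper case with underscores. The class itself is illustrative, not the actual change in the PR.

```java
// Hypothetical sketch of the suggested renaming: Pinot enum constants are
// upper case with underscores. The constant set mirrors the diff above.
public class GapfillTypeNamingSketch {
  public enum GapfillType {
    GAP_FILL,                     // one SQL query with gapfill only
    GAP_FILL_SELECT,              // gapfill as subquery; the outer query may have a filter
    GAP_FILL_AGGREGATE,           // gapfill as subquery; the outer query has the aggregation
    AGGREGATE_GAP_FILL,           // aggregation as subquery; the outer query is gapfill
    AGGREGATE_GAP_FILL_AGGREGATE, // aggregation innermost, gapfill in between, aggregation outermost
    NONE                          // no gapfill at all
  }

  public static void main(String[] args) {
    // valueOf round-trips the underscore-delimited names.
    System.out.println(GapfillType.valueOf("AGGREGATE_GAP_FILL").name());
  }
}
```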
##########
File path: pinot-core/src/test/java/org/apache/pinot/queries/PreAggregationGapfillQueriesTest.java
##########
@@ -0,0 +1,3277 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.queries;
+
+import java.io.File;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+import org.apache.commons.io.FileUtils;
+import org.apache.pinot.common.response.broker.BrokerResponseNative;
+import org.apache.pinot.common.response.broker.ResultTable;
+import org.apache.pinot.segment.local.indexsegment.immutable.ImmutableSegmentLoader;
+import org.apache.pinot.segment.local.segment.creator.impl.SegmentIndexCreationDriverImpl;
+import org.apache.pinot.segment.local.segment.readers.GenericRowRecordReader;
+import org.apache.pinot.segment.spi.ImmutableSegment;
+import org.apache.pinot.segment.spi.IndexSegment;
+import org.apache.pinot.segment.spi.creator.SegmentGeneratorConfig;
+import org.apache.pinot.spi.config.table.TableConfig;
+import org.apache.pinot.spi.config.table.TableType;
+import org.apache.pinot.spi.data.DateTimeFormatSpec;
+import org.apache.pinot.spi.data.DateTimeGranularitySpec;
+import org.apache.pinot.spi.data.FieldSpec.DataType;
+import org.apache.pinot.spi.data.Schema;
+import org.apache.pinot.spi.data.readers.GenericRow;
+import org.apache.pinot.spi.utils.ReadMode;
+import org.apache.pinot.spi.utils.builder.TableConfigBuilder;
+import org.testng.Assert;
+import org.testng.annotations.AfterClass;
+import org.testng.annotations.BeforeClass;
+import org.testng.annotations.Test;
+
+
+/**
+ * Queries test for Gapfill queries.
+ */
+// TODO: Item 1. table alias for subquery in next PR
+// TODO: Item 2. Deprecate PostAggregateGapfill implementation in next PR
+@SuppressWarnings("rawtypes")
+public class PreAggregationGapfillQueriesTest extends BaseQueriesTest {
+ private static final File INDEX_DIR = new File(FileUtils.getTempDirectory(), "PostAggregationGapfillQueriesTest");
+ private static final String RAW_TABLE_NAME = "parkingData";
+ private static final String SEGMENT_NAME = "testSegment";
+
+ private static final int NUM_LOTS = 4;
+
+ private static final String IS_OCCUPIED_COLUMN = "isOccupied";
+ private static final String LEVEL_ID_COLUMN = "levelId";
+ private static final String LOT_ID_COLUMN = "lotId";
+ private static final String EVENT_TIME_COLUMN = "eventTime";
+ private static final Schema SCHEMA = new Schema.SchemaBuilder()
+ .addSingleValueDimension(IS_OCCUPIED_COLUMN, DataType.INT)
+ .addSingleValueDimension(LOT_ID_COLUMN, DataType.STRING)
+ .addSingleValueDimension(LEVEL_ID_COLUMN, DataType.STRING)
+ .addSingleValueDimension(EVENT_TIME_COLUMN, DataType.LONG)
+ .setPrimaryKeyColumns(Arrays.asList(LOT_ID_COLUMN, EVENT_TIME_COLUMN))
+ .build();
+ private static final TableConfig TABLE_CONFIG = new TableConfigBuilder(TableType.OFFLINE).setTableName(RAW_TABLE_NAME)
+ .build();
+
+ private IndexSegment _indexSegment;
+ private List<IndexSegment> _indexSegments;
+
+ @Override
+ protected String getFilter() {
+ // NOTE: Use a match all filter to switch between DictionaryBasedAggregationOperator and AggregationOperator
+ return " WHERE eventTime >= 0";
+ }
+
+ @Override
+ protected IndexSegment getIndexSegment() {
+ return _indexSegment;
+ }
+
+ @Override
+ protected List<IndexSegment> getIndexSegments() {
+ return _indexSegments;
+ }
+
+ GenericRow createRow(String time, int levelId, int lotId, boolean isOccupied) {
+ DateTimeFormatSpec dateTimeFormatter
+ = new DateTimeFormatSpec("1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS");
+ GenericRow parkingRow = new GenericRow();
+ parkingRow.putValue(EVENT_TIME_COLUMN, dateTimeFormatter.fromFormatToMillis(time));
+ parkingRow.putValue(LEVEL_ID_COLUMN, "Level_" + levelId);
+ parkingRow.putValue(LOT_ID_COLUMN, "LotId_" + lotId);
+ parkingRow.putValue(IS_OCCUPIED_COLUMN, isOccupied);
+ return parkingRow;
+ }
+
+ @BeforeClass
+ public void setUp()
+ throws Exception {
+ FileUtils.deleteDirectory(INDEX_DIR);
+
+ List<GenericRow> records = new ArrayList<>(NUM_LOTS * 2);
+ records.add(createRow("2021-11-07 04:11:00.000", 0, 0, true));
+ records.add(createRow("2021-11-07 04:21:00.000", 0, 0, true));
+ records.add(createRow("2021-11-07 04:31:00.000", 1, 0, true));
+ records.add(createRow("2021-11-07 05:17:00.000", 0, 1, true));
+ records.add(createRow("2021-11-07 05:37:00.000", 0, 1, true));
+ records.add(createRow("2021-11-07 05:47:00.000", 1, 2, true));
+ records.add(createRow("2021-11-07 06:25:00.000", 0, 2, true));
+ records.add(createRow("2021-11-07 06:35:00.000", 0, 2, true));
+ records.add(createRow("2021-11-07 06:36:00.000", 1, 1, true));
+ records.add(createRow("2021-11-07 07:44:00.000", 0, 3, true));
+ records.add(createRow("2021-11-07 07:46:00.000", 1, 3, true));
+ records.add(createRow("2021-11-07 07:54:00.000", 1, 3, true));
+ records.add(createRow("2021-11-07 08:44:00.000", 0, 2, false));
+ records.add(createRow("2021-11-07 08:44:00.000", 1, 2, false));
+ records.add(createRow("2021-11-07 09:31:00.000", 0, 3, false));
+ records.add(createRow("2021-11-07 09:31:00.000", 1, 3, false));
+ records.add(createRow("2021-11-07 10:17:00.000", 0, 0, false));
+ records.add(createRow("2021-11-07 10:33:00.000", 0, 0, false));
+ records.add(createRow("2021-11-07 10:33:00.000", 1, 0, false));
+ records.add(createRow("2021-11-07 11:54:00.000", 0, 1, false));
+ records.add(createRow("2021-11-07 11:57:00.000", 1, 1, false));
+
+ SegmentGeneratorConfig segmentGeneratorConfig = new SegmentGeneratorConfig(TABLE_CONFIG, SCHEMA);
+ segmentGeneratorConfig.setTableName(RAW_TABLE_NAME);
+ segmentGeneratorConfig.setSegmentName(SEGMENT_NAME);
+ segmentGeneratorConfig.setOutDir(INDEX_DIR.getPath());
+
+ SegmentIndexCreationDriverImpl driver = new SegmentIndexCreationDriverImpl();
+ driver.init(segmentGeneratorConfig, new GenericRowRecordReader(records));
+ driver.build();
+
+ ImmutableSegment immutableSegment = ImmutableSegmentLoader.load(new File(INDEX_DIR, SEGMENT_NAME), ReadMode.mmap);
+ _indexSegment = immutableSegment;
+ _indexSegments = Arrays.asList(immutableSegment);
+ }
+
+ @Test
+ public void datetimeconvertGapfillTestSelectSelect() {
+ DateTimeFormatSpec dateTimeFormatter
+ = new DateTimeFormatSpec("1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + " GapFill(DATETIMECONVERT(eventTime, '1:MILLISECONDS:EPOCH', "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', '1:HOURS'), "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', "
+ + " '2021-11-07 4:00:00.000', '2021-11-07 12:00:00.000', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " levelId, lotId, isOccupied "
+ + "FROM parkingData "
+ + "WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + "LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ int[][] expectedOccupiedSlotsCounts1 =
+ new int[][] {{6, 6}, {8, 4}, {10, 2}, {12, 0}, {6, 4}, {4, 6}, {2, 10}, {0, 10}};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ start = dateTimeFormatter.fromFormatToMillis("2021-11-07 04:00:00.000");
+ int index = 0;
+ for (int i = 0; i < expectedOccupiedSlotsCounts1.length; i++) {
+ int ones = expectedOccupiedSlotsCounts1[i][0];
+ int zeros = expectedOccupiedSlotsCounts1[i][1];
+ int total = ones + zeros;
+ for (int k = 0; k < total; k++) {
+ String firstTimeCol = (String) gapFillRows1.get(index)[0];
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if (gapFillRows1.get(index)[3].equals(1)) {
+ ones--;
+ } else {
+ zeros--;
+ }
+ index++;
+ }
+ Assert.assertEquals(ones, 0);
+ Assert.assertEquals(zeros, 0);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ Assert.assertEquals(gapFillRows1.size(), index);
+
+ String gapfillQuery2 = "SELECT "
+ + "time_col, levelId, lotId, isOccupied, time_col "
+ + "FROM ("
+ + " SELECT GapFill(DATETIMECONVERT(eventTime, '1:MILLISECONDS:EPOCH', "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', '1:HOURS'), "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', "
+ + " '2021-11-07 4:00:00.000', '2021-11-07 12:00:00.000', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
+ + " WHERE isOccupied = 1 "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ int[] expectedOccupiedSlotsCounts2 = new int[] {6, 8, 10, 12, 6, 4, 2, 0};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ start = dateTimeFormatter.fromFormatToMillis("2021-11-07 04:00:00.000");
+ index = 0;
+ for (int i = 0; i < expectedOccupiedSlotsCounts2.length; i++) {
+ for (int k = 0; k < expectedOccupiedSlotsCounts2[i]; k++) {
+ String firstTimeCol = (String) gapFillRows2.get(index)[0];
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(gapFillRows2.get(index)[3], 1);
+ index++;
+ }
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ Assert.assertEquals(gapFillRows2.size(), index);
+ }
+
+ @Test
+ public void datetimeconvertGapfillTestAggregateSelect() {
+ DateTimeFormatSpec dateTimeFormatter
+ = new DateTimeFormatSpec("1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + "GapFill(time_col, "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', "
+ + " '2021-11-07 4:00:00.000', '2021-11-07 12:00:00.000', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)), levelId, lotId, occupied "
+ + "FROM ("
+ + " SELECT DATETIMECONVERT(eventTime, '1:MILLISECONDS:EPOCH', "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', '1:HOURS') AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + ") "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ int[][] expectedOccupiedSlotsCounts1 =
+ new int[][] {{2, 6}, {4, 4}, {6, 2}, {8, 0}, {6, 2}, {4, 4}, {2, 6}, {0, 8}};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ start = dateTimeFormatter.fromFormatToMillis("2021-11-07 04:00:00.000");
+ int index = 0;
+ for (int i = 0; i < expectedOccupiedSlotsCounts1.length; i++) {
+ int ones = expectedOccupiedSlotsCounts1[i][0];
+ int zeros = expectedOccupiedSlotsCounts1[i][1];
+ int total = ones + zeros;
+ for (int k = 0; k < total; k++) {
+ String firstTimeCol = (String) gapFillRows1.get(index)[0];
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if (gapFillRows1.get(index)[3].equals(1)) {
+ ones--;
+ } else {
+ zeros--;
+ }
+ index++;
+ }
+ Assert.assertEquals(ones, 0);
+ Assert.assertEquals(zeros, 0);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ Assert.assertEquals(gapFillRows1.size(), index);
+
+ String gapfillQuery2 = "SELECT "
+ + "GapFill(time_col, "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', "
+ + " '2021-11-07 4:00:00.000', '2021-11-07 12:00:00.000', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)), levelId, lotId, occupied "
+ + "FROM ("
+ + " SELECT DATETIMECONVERT(eventTime, '1:MILLISECONDS:EPOCH', "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', '1:HOURS') AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + ") "
+ + " WHERE occupied = 1 "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ int[] expectedOccupiedSlotsCounts2 = new int[] {2, 4, 6, 8, 6, 4, 2};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ start = dateTimeFormatter.fromFormatToMillis("2021-11-07 04:00:00.000");
+ index = 0;
+ for (int i = 0; i < expectedOccupiedSlotsCounts2.length; i++) {
+ for (int k = 0; k < expectedOccupiedSlotsCounts2[i]; k++) {
+ String firstTimeCol = (String) gapFillRows2.get(index)[0];
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(gapFillRows2.get(index)[3], 1);
+ index++;
+ }
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ Assert.assertEquals(gapFillRows2.size(), index);
+ }
+
+ @Test
+ public void datetimeconvertGapfillTestGapfillAggregate() {
+ DateTimeFormatSpec dateTimeFormatter
+ = new DateTimeFormatSpec("1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String dataTimeConvertQuery = "SELECT "
+ + "DATETIMECONVERT(eventTime, '1:MILLISECONDS:EPOCH', "
+ + "'1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', '1:HOURS') AS time_col, "
+ + "SUM(isOccupied) "
+ + "FROM parkingData "
+ + "WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + "GROUP BY 1 "
+ + "ORDER BY 1 "
+ + "LIMIT 200";
+
+ BrokerResponseNative dateTimeConvertBrokerResponse = getBrokerResponseForSqlQuery(dataTimeConvertQuery);
+
+ ResultTable dateTimeConvertResultTable = dateTimeConvertBrokerResponse.getResultTable();
+ Assert.assertEquals(dateTimeConvertResultTable.getRows().size(), 8);
+
+ String gapfillQuery1 = "SELECT "
+ + "time_col, SUM(isOccupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(DATETIMECONVERT(eventTime, '1:MILLISECONDS:EPOCH', "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', '1:HOURS'), "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', "
+ + " '2021-11-07 4:00:00.000', '2021-11-07 12:00:00.000', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
+ + " GROUP BY time_col "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ double[] expectedOccupiedSlotsCounts1 = new double[] {6, 8, 10, 12, 6, 4, 2, 0};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ Assert.assertEquals(gapFillRows1.size(), expectedOccupiedSlotsCounts1.length);
+ start = dateTimeFormatter.fromFormatToMillis("2021-11-07 04:00:00.000");
+ for (int i = 0; i < expectedOccupiedSlotsCounts1.length; i++) {
+ String firstTimeCol = (String) gapFillRows1.get(i)[0];
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(gapFillRows1.get(i)[1], expectedOccupiedSlotsCounts1[i]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+
+ String gapfillQuery2 = "SELECT "
+ + "time_col, SUM(isOccupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(DATETIMECONVERT(eventTime, '1:MILLISECONDS:EPOCH', "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', '1:HOURS'), "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', "
+ + " '2021-11-07 4:00:00.000', '2021-11-07 12:00:00.000', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
+ + " WHERE isOccupied = 1 "
+ + " GROUP BY time_col "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ double[] expectedOccupiedSlotsCounts2 = new double[] {6, 8, 10, 12, 6, 4, 2};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ Assert.assertEquals(gapFillRows2.size(), expectedOccupiedSlotsCounts2.length);
+ start = dateTimeFormatter.fromFormatToMillis("2021-11-07 04:00:00.000");
+ for (int i = 0; i < expectedOccupiedSlotsCounts2.length; i++) {
+ String firstTimeCol = (String) gapFillRows2.get(i)[0];
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ Assert.assertEquals(gapFillRows2.get(i)[1], expectedOccupiedSlotsCounts2[i]);
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ }
+
+ @Test
+ public void datetimeconvertGapfillTestGapfillAggregateWithOptionalGroupBy() {
+ DateTimeFormatSpec dateTimeFormatter
+ = new DateTimeFormatSpec("1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + "time_col, levelId, SUM(isOccupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(DATETIMECONVERT(eventTime, '1:MILLISECONDS:EPOCH', "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', '1:HOURS'), "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', "
+ + " '2021-11-07 4:00:00.000', '2021-11-07 12:00:00.000', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
+ + " GROUP BY time_col, levelId "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ double[] expectedOccupiedSlotsCountsForLevel11 = new double[] {4, 5, 6, 5, 3, 2, 1, 0};
+ double[] expectedOccupiedSlotsCountsForLevel21 = new double[] {2, 3, 4, 7, 3, 2, 1, 0};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ Assert.assertEquals(gapFillRows1.size(), expectedOccupiedSlotsCountsForLevel11.length * 2);
+ start = dateTimeFormatter.fromFormatToMillis("2021-11-07 04:00:00.000");
+ for (int i = 0; i < expectedOccupiedSlotsCountsForLevel11.length * 2; i += 2) {
+ String firstTimeCol = (String) gapFillRows1.get(i)[0];
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if ("Level_0".equals(gapFillRows1.get(i)[1])) {
+ Assert.assertEquals(gapFillRows1.get(i)[2], expectedOccupiedSlotsCountsForLevel11[i / 2]);
+ } else {
+ Assert.assertEquals(gapFillRows1.get(i)[1], "Level_1");
+ Assert.assertEquals(gapFillRows1.get(i)[2], expectedOccupiedSlotsCountsForLevel21[i / 2]);
+ }
+ firstTimeCol = (String) gapFillRows1.get(i + 1)[0];
+ timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if ("Level_0".equals(gapFillRows1.get(i + 1)[1])) {
+ Assert.assertEquals(gapFillRows1.get(i + 1)[2], expectedOccupiedSlotsCountsForLevel11[i / 2]);
+ } else {
+ Assert.assertEquals(gapFillRows1.get(i + 1)[1], "Level_1");
+ Assert.assertEquals(gapFillRows1.get(i + 1)[2], expectedOccupiedSlotsCountsForLevel21[i / 2]);
+ }
+ start += dateTimeGranularity.granularityToMillis();
+ }
+
+ String gapfillQuery2 = "SELECT "
+ + "time_col, levelId, SUM(isOccupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(DATETIMECONVERT(eventTime, '1:MILLISECONDS:EPOCH', "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', '1:HOURS'), "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', "
+ + " '2021-11-07 4:00:00.000', '2021-11-07 12:00:00.000', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
+ + " WHERE isOccupied = 1 "
+ + " GROUP BY time_col, levelId "
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse2 = getBrokerResponseForSqlQuery(gapfillQuery2);
+
+ double[] expectedOccupiedSlotsCountsForLevel12 = new double[] {4, 5, 6, 5, 3, 2, 1};
+ double[] expectedOccupiedSlotsCountsForLevel22 = new double[] {2, 3, 4, 7, 3, 2, 1};
+ ResultTable gapFillResultTable2 = gapfillBrokerResponse2.getResultTable();
+ List<Object[]> gapFillRows2 = gapFillResultTable2.getRows();
+ Assert.assertEquals(gapFillRows2.size(), expectedOccupiedSlotsCountsForLevel12.length * 2);
+ start = dateTimeFormatter.fromFormatToMillis("2021-11-07 04:00:00.000");
+ for (int i = 0; i < expectedOccupiedSlotsCountsForLevel12.length * 2; i += 2) {
+ String firstTimeCol = (String) gapFillRows2.get(i)[0];
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if ("Level_0".equals(gapFillRows2.get(i)[1])) {
+ Assert.assertEquals(gapFillRows2.get(i)[2], expectedOccupiedSlotsCountsForLevel12[i / 2]);
+ } else {
+ Assert.assertEquals(gapFillRows2.get(i)[1], "Level_1");
+ Assert.assertEquals(gapFillRows2.get(i)[2], expectedOccupiedSlotsCountsForLevel22[i / 2]);
+ }
+ firstTimeCol = (String) gapFillRows2.get(i + 1)[0];
+ timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if ("Level_0".equals(gapFillRows2.get(i + 1)[1])) {
+ Assert.assertEquals(gapFillRows2.get(i + 1)[2], expectedOccupiedSlotsCountsForLevel12[i / 2]);
+ } else {
+ Assert.assertEquals(gapFillRows2.get(i + 1)[1], "Level_1");
+ Assert.assertEquals(gapFillRows2.get(i + 1)[2], expectedOccupiedSlotsCountsForLevel22[i / 2]);
+ }
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ }
+
+ @Test
+ public void datetimeconvertGapfillTestGapfillAggregateWithHavingClause() {
+ DateTimeFormatSpec dateTimeFormatter
+ = new DateTimeFormatSpec("1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + "time_col, levelId, SUM(isOccupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(DATETIMECONVERT(eventTime, '1:MILLISECONDS:EPOCH', "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', '1:HOURS'), "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', "
+ + " '2021-11-07 4:00:00.000', '2021-11-07 12:00:00.000', '1:HOURS',"
+ + " FILL(isOccupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " isOccupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " LIMIT 200 "
+ + ") "
+ + " GROUP BY time_col, levelId "
+ + " HAVING occupied_slots_count > 0"
+ + " LIMIT 200 ";
+
+ BrokerResponseNative gapfillBrokerResponse1 = getBrokerResponseForSqlQuery(gapfillQuery1);
+
+ double[] expectedOccupiedSlotsCountsForLevel11 = new double[] {4, 5, 6, 5, 3, 2, 1};
+ double[] expectedOccupiedSlotsCountsForLevel21 = new double[] {2, 3, 4, 7, 3, 2, 1};
+ ResultTable gapFillResultTable1 = gapfillBrokerResponse1.getResultTable();
+ List<Object[]> gapFillRows1 = gapFillResultTable1.getRows();
+ Assert.assertEquals(gapFillRows1.size(), expectedOccupiedSlotsCountsForLevel11.length * 2);
+ start = dateTimeFormatter.fromFormatToMillis("2021-11-07 04:00:00.000");
+ for (int i = 0; i < expectedOccupiedSlotsCountsForLevel11.length * 2; i += 2) {
+ String firstTimeCol = (String) gapFillRows1.get(i)[0];
+ long timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if ("Level_0".equals(gapFillRows1.get(i)[1])) {
+ Assert.assertEquals(gapFillRows1.get(i)[2], expectedOccupiedSlotsCountsForLevel11[i / 2]);
+ } else {
+ Assert.assertEquals(gapFillRows1.get(i)[1], "Level_1");
+ Assert.assertEquals(gapFillRows1.get(i)[2], expectedOccupiedSlotsCountsForLevel21[i / 2]);
+ }
+ firstTimeCol = (String) gapFillRows1.get(i + 1)[0];
+ timeStamp = dateTimeFormatter.fromFormatToMillis(firstTimeCol);
+ Assert.assertEquals(timeStamp, start);
+ if ("Level_0".equals(gapFillRows1.get(i + 1)[1])) {
+ Assert.assertEquals(gapFillRows1.get(i + 1)[2], expectedOccupiedSlotsCountsForLevel11[i / 2]);
+ } else {
+ Assert.assertEquals(gapFillRows1.get(i + 1)[1], "Level_1");
+ Assert.assertEquals(gapFillRows1.get(i + 1)[2], expectedOccupiedSlotsCountsForLevel21[i / 2]);
+ }
+ start += dateTimeGranularity.granularityToMillis();
+ }
+ }
+
+ @Test
+ public void datetimeconvertGapfillTestAggregateAggregate() {
+ DateTimeFormatSpec dateTimeFormatter
+ = new DateTimeFormatSpec("1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS");
+ DateTimeGranularitySpec dateTimeGranularity = new DateTimeGranularitySpec("1:HOURS");
+ long start;
+
+ String gapfillQuery1 = "SELECT "
+ + "time_col, SUM(occupied) as occupied_slots_count, time_col "
+ + "FROM ("
+ + " SELECT GapFill(time_col, "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', "
+ + " '2021-11-07 4:00:00.000', '2021-11-07 12:00:00.000', '1:HOURS',"
+ + " FILL(occupied, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(levelId, lotId)) AS time_col,"
+ + " occupied, lotId, levelId"
+ + " FROM ("
+ + " SELECT DATETIMECONVERT(eventTime, '1:MILLISECONDS:EPOCH', "
+ + " '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', '1:HOURS') AS time_col,"
+ + " lastWithTime(isOccupied, eventTime, 'INT') as occupied, lotId, levelId"
+ + " FROM parkingData "
+ + " WHERE eventTime >= 1636257600000 AND eventTime <= 1636286400000 "
+ + " GROUP BY time_col, levelId, lotId "
+ + " LIMIT 200 "
+ + " ) "
+ + " LIMIT 200 "
+ + ") "
+ + " GROUP BY time_col "
+ + " LIMIT 200 ";
Review comment:
Nice!
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: commits-unsubscribe@pinot.apache.org
For queries about this service, please contact Infrastructure at:
users@infra.apache.org
---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@pinot.apache.org
For additional commands, e-mail: commits-help@pinot.apache.org
[GitHub] [pinot] amrishlal commented on a change in pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
amrishlal commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r792343461
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/util/GapfillUtils.java
##########
@@ -31,7 +34,10 @@
*/
public class GapfillUtils {
private static final String POST_AGGREGATE_GAP_FILL = "postaggregategapfill";
+ private static final String PRE_AGGREGATE_GAP_FILL = "preaggregategapfill";
Review comment:
I am wondering if this function should simply be renamed to `gapfill`, since the "pre aggregate" part is now clearly visible in the subquery structure?
##########
File path: pinot-common/src/main/java/org/apache/pinot/sql/parsers/CalciteSqlParser.java
##########
@@ -130,8 +130,25 @@ public static PinotQuery compileToPinotQuery(String sql)
if (!options.isEmpty()) {
sql = removeOptionsFromSql(sql);
}
+
+ SqlParser sqlParser = SqlParser.create(sql, PARSER_CONFIG);
+ SqlNode sqlNode;
+ try {
+ sqlNode = sqlParser.parseQuery();
+ } catch (SqlParseException e) {
+ throw new SqlCompilationException("Caught exception while parsing query: " + sql, e);
+ }
+
// Compile Sql without OPTION statements.
- PinotQuery pinotQuery = compileCalciteSqlToPinotQuery(sql);
+ PinotQuery pinotQuery = compileSqlNodeToPinotQuery(sqlNode);
+
+ SqlSelect sqlSelect = getSelectNode(sqlNode);
+ if (sqlSelect != null) {
+ SqlNode fromNode = sqlSelect.getFrom();
+ if (fromNode != null && (fromNode instanceof SqlSelect || fromNode instanceof SqlOrderBy)) {
+ pinotQuery.getDataSource().setSubquery(compileSqlNodeToPinotQuery(fromNode));
+ }
+ }
Review comment:
Is all this change to the original code necessary? For example, the original function `compileCalciteSqlToPinotQuery` could be retained but modified to set the subquery node on `pinotQuery`.
##########
File path: pinot-common/src/main/java/org/apache/pinot/sql/parsers/CalciteSqlParser.java
##########
@@ -321,32 +338,30 @@ private static void setOptions(PinotQuery pinotQuery, List<String> optionsStatem
pinotQuery.setQueryOptions(options);
}
- private static PinotQuery compileCalciteSqlToPinotQuery(String sql) {
- SqlParser sqlParser = SqlParser.create(sql, PARSER_CONFIG);
- SqlNode sqlNode;
- try {
- sqlNode = sqlParser.parseQuery();
- } catch (SqlParseException e) {
- throw new SqlCompilationException("Caught exception while parsing query: " + sql, e);
- }
-
- PinotQuery pinotQuery = new PinotQuery();
- if (sqlNode instanceof SqlExplain) {
- // Extract sql node for the query
- sqlNode = ((SqlExplain) sqlNode).getExplicandum();
- pinotQuery.setExplain(true);
- }
Review comment:
We need this for EXPLAIN queries to work.
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/plan/CombinePlanNode.java
##########
@@ -180,6 +181,8 @@ public BaseCombineOperator run() {
// Selection order-by
return new SelectionOrderByCombineOperator(operators, _queryContext, _executorService);
}
+ } else if (GapfillUtils.isPreAggregateGapfill(_queryContext)) {
+ return new SelectionOnlyCombineOperator(operators, _queryContext, _executorService);
Review comment:
Will this still work if an ORDER BY clause is present in the gapfill query? If not, then perhaps some validation checks should be added to reject unsupported usage of gapfill queries (ORDER BY, HAVING, GROUP BY, etc.).
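For readers following the thread, the FILL_PREVIOUS_VALUE semantics that the PR's tests assert — carrying the last observed value of each (levelId, lotId) series forward into time buckets with no data — can be sketched independently of Pinot. The class, method names, and the simplified long-indexed bucket representation below are illustrative only, not part of the PR:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class GapfillSketch {
  /**
   * For each series key, emits one value per bucket in [startBucket, endBucket).
   * A bucket with no observed value reuses the previous bucket's value; leading
   * buckets with nothing to carry forward are skipped.
   */
  static Map<String, List<Integer>> gapfillPrevious(
      Map<String, Map<Long, Integer>> observedByKey, long startBucket, long endBucket) {
    Map<String, List<Integer>> filled = new LinkedHashMap<>();
    for (Map.Entry<String, Map<Long, Integer>> entry : observedByKey.entrySet()) {
      List<Integer> series = new ArrayList<>();
      Integer previous = null;
      for (long bucket = startBucket; bucket < endBucket; bucket++) {
        Integer value = entry.getValue().get(bucket);
        if (value != null) {
          previous = value;  // new observation replaces the carried value
        }
        if (previous != null) {
          series.add(previous);  // observed, or carried forward from an earlier bucket
        }
      }
      filled.put(entry.getKey(), series);
    }
    return filled;
  }

  public static void main(String[] args) {
    Map<String, Map<Long, Integer>> observed = new LinkedHashMap<>();
    Map<Long, Integer> lot0 = new LinkedHashMap<>();
    lot0.put(0L, 1);  // occupied in bucket 0
    lot0.put(3L, 0);  // freed in bucket 3; buckets 1-2 are gaps
    observed.put("Level_0,LotId_0", lot0);

    // buckets 1-2 carry forward 1; bucket 4 carries forward 0
    System.out.println(gapfillPrevious(observed, 0, 5).get("Level_0,LotId_0"));
    // prints [1, 1, 1, 0, 0]
  }
}
```

This is the per-series behavior the tests check when they count the expected ones and zeros in each hour bucket; the PR additionally handles grouping, filtering, and aggregation on top of the filled rows.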
[GitHub] [pinot] codecov-commenter edited a comment on pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#issuecomment-1013577523
# [Codecov](https://codecov.io/gh/apache/pinot/pull/8029) Report
> Merging [#8029](https://codecov.io/gh/apache/pinot/pull/8029) (b570851) into [master](https://codecov.io/gh/apache/pinot/commit/1d1a7d34709b6a89985a610f46dd1c97d6c9271a) (1d1a7d3) will **decrease** coverage by `33.42%`.
> The diff coverage is `16.40%`.
```diff
@@ Coverage Diff @@
## master #8029 +/- ##
=============================================
- Coverage 71.40% 37.98% -33.43%
+ Complexity 4223 81 -4142
=============================================
Files 1597 1608 +11
Lines 82903 83309 +406
Branches 12369 12452 +83
=============================================
- Hits 59201 31641 -27560
- Misses 19689 49237 +29548
+ Partials 4013 2431 -1582
```
| Flag | Coverage Δ | |
|---|---|---|
| integration1 | `28.92% <15.94%> (-0.07%)` | :arrow_down: |
| integration2 | `27.55% <16.40%> (-0.15%)` | :arrow_down: |
| unittests1 | `?` | |
| unittests2 | `14.28% <0.00%> (-0.08%)` | :arrow_down: |
Flags with carried forward coverage won't be shown. [Click here](https://docs.codecov.io/docs/carryforward-flags?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#carryforward-flags-in-the-pull-request-comment) to find out more.
| Impacted Files | Coverage Δ | |
|---|---|---|
| ...va/org/apache/pinot/core/plan/CombinePlanNode.java | `81.03% <0.00%> (-6.47%)` | :arrow_down: |
| ...inot/core/plan/PreAggGapFillSelectionPlanNode.java | `0.00% <0.00%> (ø)` | |
| ...pinot/core/plan/maker/InstancePlanMakerImplV2.java | `59.40% <0.00%> (-17.37%)` | :arrow_down: |
| ...pache/pinot/core/query/reduce/BlockValSetImpl.java | `0.00% <0.00%> (ø)` | |
| ...query/reduce/PreAggregateGapfillFilterHandler.java | `0.00% <0.00%> (ø)` | |
| .../reduce/PreAggregationGapFillDataTableReducer.java | `0.00% <0.00%> (ø)` | |
| [...PreAggregationGapFillSelectionOperatorService.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUHJlQWdncmVnYXRpb25HYXBGaWxsU2VsZWN0aW9uT3BlcmF0b3JTZXJ2aWNlLmphdmE=) | `0.00% <0.00%> (ø)` | |
| [.../pinot/core/query/reduce/ResultReducerFactory.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUmVzdWx0UmVkdWNlckZhY3RvcnkuamF2YQ==) | `61.11% <0.00%> (-20.14%)` | :arrow_down: |
| [.../java/org/apache/pinot/core/util/GapfillUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS91dGlsL0dhcGZpbGxVdGlscy5qYXZh) | `28.26% <7.14%> (-35.38%)` | :arrow_down: |
| [...pinot/core/query/request/context/QueryContext.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZXF1ZXN0L2NvbnRleHQvUXVlcnlDb250ZXh0LmphdmE=) | `88.14% <33.33%> (-9.77%)` | :arrow_down: |
| ... and [914 more](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Last update [1d1a7d3...b570851](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: commits-unsubscribe@pinot.apache.org
For queries about this service, please contact Infrastructure at:
users@infra.apache.org
---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@pinot.apache.org
For additional commands, e-mail: commits-help@pinot.apache.org
[GitHub] [pinot] codecov-commenter edited a comment on pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#issuecomment-1013577523
# [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) Report
> Merging [#8029](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (b570851) into [master](https://codecov.io/gh/apache/pinot/commit/1d1a7d34709b6a89985a610f46dd1c97d6c9271a?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (1d1a7d3) will **decrease** coverage by `33.43%`.
> The diff coverage is `16.40%`.
[![Impacted file tree graph](https://codecov.io/gh/apache/pinot/pull/8029/graphs/tree.svg?width=650&height=150&src=pr&token=4ibza2ugkz&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
```diff
@@             Coverage Diff              @@
##             master    #8029       +/-   ##
=============================================
- Coverage     71.40%   37.97%   -33.44%
+ Complexity     4223       81     -4142
=============================================
  Files          1597     1608       +11
  Lines         82903    83309      +406
  Branches      12369    12452       +83
=============================================
- Hits          59201    31633    -27568
- Misses        19689    49246    +29557
+ Partials       4013     2430     -1583
```
| Flag | Coverage Δ | |
|---|---|---|
| integration1 | `28.92% <15.94%> (-0.07%)` | :arrow_down: |
| integration2 | `27.54% <16.40%> (-0.17%)` | :arrow_down: |
| unittests1 | `?` | |
| unittests2 | `14.28% <0.00%> (-0.08%)` | :arrow_down: |
Flags with carried forward coverage won't be shown. [Click here](https://docs.codecov.io/docs/carryforward-flags?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#carryforward-flags-in-the-pull-request-comment) to find out more.
| [Impacted Files](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | Coverage Δ | |
|---|---|---|
| [...va/org/apache/pinot/core/plan/CombinePlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL0NvbWJpbmVQbGFuTm9kZS5qYXZh) | `81.03% <0.00%> (-6.47%)` | :arrow_down: |
| [...inot/core/plan/PreAggGapFillSelectionPlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL1ByZUFnZ0dhcEZpbGxTZWxlY3Rpb25QbGFuTm9kZS5qYXZh) | `0.00% <0.00%> (ø)` | |
| [...pinot/core/plan/maker/InstancePlanMakerImplV2.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL21ha2VyL0luc3RhbmNlUGxhbk1ha2VySW1wbFYyLmphdmE=) | `59.40% <0.00%> (-17.37%)` | :arrow_down: |
| [...pache/pinot/core/query/reduce/BlockValSetImpl.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvQmxvY2tWYWxTZXRJbXBsLmphdmE=) | `0.00% <0.00%> (ø)` | |
| [...query/reduce/PreAggregateGapfillFilterHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUHJlQWdncmVnYXRlR2FwZmlsbEZpbHRlckhhbmRsZXIuamF2YQ==) | `0.00% <0.00%> (ø)` | |
| [.../reduce/PreAggregationGapFillDataTableReducer.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUHJlQWdncmVnYXRpb25HYXBGaWxsRGF0YVRhYmxlUmVkdWNlci5qYXZh) | `0.00% <0.00%> (ø)` | |
| [...PreAggregationGapFillSelectionOperatorService.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUHJlQWdncmVnYXRpb25HYXBGaWxsU2VsZWN0aW9uT3BlcmF0b3JTZXJ2aWNlLmphdmE=) | `0.00% <0.00%> (ø)` | |
| [.../pinot/core/query/reduce/ResultReducerFactory.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUmVzdWx0UmVkdWNlckZhY3RvcnkuamF2YQ==) | `61.11% <0.00%> (-20.14%)` | :arrow_down: |
| [.../java/org/apache/pinot/core/util/GapfillUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS91dGlsL0dhcGZpbGxVdGlscy5qYXZh) | `28.26% <7.14%> (-35.38%)` | :arrow_down: |
| [...pinot/core/query/request/context/QueryContext.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZXF1ZXN0L2NvbnRleHQvUXVlcnlDb250ZXh0LmphdmE=) | `88.14% <33.33%> (-9.77%)` | :arrow_down: |
| ... and [914 more](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Last update [1d1a7d3...b570851](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
[GitHub] [pinot] codecov-commenter commented on pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
codecov-commenter commented on pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#issuecomment-1013577523
# [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) Report
> Merging [#8029](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (b570851) into [master](https://codecov.io/gh/apache/pinot/commit/1d1a7d34709b6a89985a610f46dd1c97d6c9271a?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (1d1a7d3) will **decrease** coverage by `57.13%`.
> The diff coverage is `0.00%`.
[![Impacted file tree graph](https://codecov.io/gh/apache/pinot/pull/8029/graphs/tree.svg?width=650&height=150&src=pr&token=4ibza2ugkz&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
```diff
@@             Coverage Diff              @@
##             master    #8029       +/-   ##
=============================================
- Coverage     71.40%   14.27%   -57.14%
+ Complexity     4223       81     -4142
=============================================
  Files          1597     1563       -34
  Lines         82903    81432     -1471
  Branches      12369    12248      -121
=============================================
- Hits          59201    11627    -47574
- Misses        19689    68944    +49255
+ Partials       4013      861     -3152
```
| Flag | Coverage Δ | |
|---|---|---|
| integration1 | `?` | |
| integration2 | `?` | |
| unittests1 | `?` | |
| unittests2 | `14.27% <0.00%> (-0.09%)` | :arrow_down: |
Flags with carried forward coverage won't be shown. [Click here](https://docs.codecov.io/docs/carryforward-flags?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#carryforward-flags-in-the-pull-request-comment) to find out more.
| [Impacted Files](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | Coverage Δ | |
|---|---|---|
| [...g/apache/pinot/sql/parsers/CalciteSqlCompiler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9zcWwvcGFyc2Vycy9DYWxjaXRlU3FsQ29tcGlsZXIuamF2YQ==) | `0.00% <0.00%> (-100.00%)` | :arrow_down: |
| [...org/apache/pinot/sql/parsers/CalciteSqlParser.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9zcWwvcGFyc2Vycy9DYWxjaXRlU3FsUGFyc2VyLmphdmE=) | `0.00% <0.00%> (-87.78%)` | :arrow_down: |
| [...va/org/apache/pinot/core/plan/CombinePlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL0NvbWJpbmVQbGFuTm9kZS5qYXZh) | `0.00% <0.00%> (-87.50%)` | :arrow_down: |
| [...inot/core/plan/PreAggGapFillSelectionPlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL1ByZUFnZ0dhcEZpbGxTZWxlY3Rpb25QbGFuTm9kZS5qYXZh) | `0.00% <0.00%> (ø)` | |
| [...pinot/core/plan/maker/InstancePlanMakerImplV2.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL21ha2VyL0luc3RhbmNlUGxhbk1ha2VySW1wbFYyLmphdmE=) | `0.00% <0.00%> (-76.77%)` | :arrow_down: |
| [...pache/pinot/core/query/reduce/BlockValSetImpl.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvQmxvY2tWYWxTZXRJbXBsLmphdmE=) | `0.00% <0.00%> (ø)` | |
| [...e/pinot/core/query/reduce/HavingFilterHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvSGF2aW5nRmlsdGVySGFuZGxlci5qYXZh) | `0.00% <0.00%> (-91.31%)` | :arrow_down: |
| [...inot/core/query/reduce/PostAggregationHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUG9zdEFnZ3JlZ2F0aW9uSGFuZGxlci5qYXZh) | `0.00% <0.00%> (-92.31%)` | :arrow_down: |
| [...query/reduce/PreAggregateGapfillFilterHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUHJlQWdncmVnYXRlR2FwZmlsbEZpbHRlckhhbmRsZXIuamF2YQ==) | `0.00% <0.00%> (ø)` | |
| [.../reduce/PreAggregationGapFillDataTableReducer.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUHJlQWdncmVnYXRpb25HYXBGaWxsRGF0YVRhYmxlUmVkdWNlci5qYXZh) | `0.00% <0.00%> (ø)` | |
| ... and [1291 more](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Last update [1d1a7d3...b570851](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r785603812
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/reduce/BlockValSetImpl.java
##########
@@ -0,0 +1,171 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.core.query.reduce;
+
+import java.util.List;
+import javax.annotation.Nullable;
+import org.apache.pinot.common.utils.DataSchema;
+import org.apache.pinot.core.common.BlockValSet;
+import org.apache.pinot.segment.spi.index.reader.Dictionary;
+import org.apache.pinot.spi.data.FieldSpec;
+
+
+/**
+ * Helper class to convert the result rows to BlockValSet.
+ */
+@SuppressWarnings({"rawtypes", "unchecked"})
+public class BlockValSetImpl implements BlockValSet {
+
+ private final FieldSpec.DataType _dataType;
+ private final List<Object[]> _rows;
+ private final int _columnIndex;
+
+ public BlockValSetImpl(DataSchema.ColumnDataType columnDataType, List<Object[]> rows, int columnIndex) {
+ _dataType = columnDataType.toDataType();
+ _rows = rows;
+ _columnIndex = columnIndex;
+ }
+
+ @Override
+ public FieldSpec.DataType getValueType() {
+ return _dataType;
+ }
+
+ @Override
+ public boolean isSingleValue() {
+ return true;
+ }
+
+ @Nullable
+ @Override
+ public Dictionary getDictionary() {
+ throw new UnsupportedOperationException("Not supported");
+ }
+
+ @Override
+ public int[] getDictionaryIdsSV() {
+ throw new UnsupportedOperationException("Not supported");
+ }
+
+ @Override
+ public int[] getIntValuesSV() {
+ if (_dataType == FieldSpec.DataType.INT) {
+ int[] result = new int[_rows.size()];
+ for (int i = 0; i < result.length; i++) {
+ result[i] = (Integer) _rows.get(i)[_columnIndex];
+ }
+ return result;
+ }
+ throw new UnsupportedOperationException("Not supported");
+ }
+
+ @Override
+ public long[] getLongValuesSV() {
+ if (_dataType == FieldSpec.DataType.LONG) {
+ long[] result = new long[_rows.size()];
+ for (int i = 0; i < result.length; i++) {
+ result[i] = (Long) _rows.get(i)[_columnIndex];
+ }
+ return result;
+ }
+ throw new UnsupportedOperationException("Not supported");
Review comment:
Should we allow casting Integer to Long here, or not?
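For context, a widening variant is possible here: since both `Integer` and `Long` are `Number`s, the value can be widened via `Number.longValue()`. The standalone sketch below is a hypothetical illustration of that option (class and method names are invented for the example), not the code actually merged in this PR:

```java
import java.util.ArrayList;
import java.util.List;

public class WideningSketch {
  // Hypothetical variant of getLongValuesSV that widens INT values to long
  // via Number, instead of throwing UnsupportedOperationException.
  static long[] getLongValues(List<Object[]> rows, int columnIndex) {
    long[] result = new long[rows.size()];
    for (int i = 0; i < result.length; i++) {
      // Number covers both Integer and Long, so an INT column widens safely.
      result[i] = ((Number) rows.get(i)[columnIndex]).longValue();
    }
    return result;
  }

  public static void main(String[] args) {
    List<Object[]> rows = new ArrayList<>();
    rows.add(new Object[]{42});  // Integer value
    rows.add(new Object[]{7L});  // Long value
    long[] values = getLongValues(rows, 0);
    System.out.println(values[0] + "," + values[1]); // 42,7
  }
}
```

The trade-off is that silently widening hides type mismatches between the declared data type and the stored values, which is presumably why the question was raised.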
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r787319394
##########
File path: pinot-common/src/main/java/org/apache/pinot/sql/parsers/CalciteSqlParser.java
##########
@@ -117,21 +117,50 @@ private static String removeTerminatingSemicolon(String sql) {
return sql;
}
+ private static SqlNode parse(String sql) {
+ SqlParser sqlParser = SqlParser.create(sql, PARSER_CONFIG);
+ try {
+ return sqlParser.parseQuery();
+ } catch (SqlParseException e) {
+ throw new SqlCompilationException("Caught exception while parsing query: " + sql, e);
+ }
+ }
+
+ public static PinotQuery compileToPinotQueryWithSubquery(String sql)
+ throws SqlCompilationException {
+ return compileToPinotQuery(sql, true);
+ }
+
public static PinotQuery compileToPinotQuery(String sql)
throws SqlCompilationException {
- // Remove the comments from the query
- sql = removeComments(sql);
Review comment:
This is from conflict resolution. Fixed.
##########
File path: pinot-common/src/main/java/org/apache/pinot/sql/parsers/CalciteSqlParser.java
##########
@@ -117,21 +117,50 @@ private static String removeTerminatingSemicolon(String sql) {
return sql;
}
+ private static SqlNode parse(String sql) {
+ SqlParser sqlParser = SqlParser.create(sql, PARSER_CONFIG);
+ try {
+ return sqlParser.parseQuery();
+ } catch (SqlParseException e) {
+ throw new SqlCompilationException("Caught exception while parsing query: " + sql, e);
+ }
+ }
+
+ public static PinotQuery compileToPinotQueryWithSubquery(String sql)
+ throws SqlCompilationException {
+ return compileToPinotQuery(sql, true);
+ }
+
public static PinotQuery compileToPinotQuery(String sql)
throws SqlCompilationException {
- // Remove the comments from the query
- sql = removeComments(sql);
+ return compileToPinotQuery(sql, false);
Review comment:
Done
[GitHub] [pinot] siddharthteotia commented on a change in pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
siddharthteotia commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r787173187
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/request/context/QueryContext.java
##########
@@ -85,6 +85,7 @@
// Keep the BrokerRequest to make incremental changes
// TODO: Remove it once the whole query engine is using the QueryContext
private final BrokerRequest _brokerRequest;
+ private QueryContext _preAggregateGapFillQueryContext;
Review comment:
@Jackie-Jiang
So this was one of the things that was discussed a lot. My concern was that changing PinotQuery now to accommodate a "**generic**" subquery has to be done carefully, accounting for standard SQL subquery syntax and semantics, and we should be confident that it will hold in the future.
If we are really touching the FROM clause, my suggestion would be to make sure we understand Calcite's treatment of a simple FROM clause (a table name, as today) versus a complex FROM clause (sub-queries). Whatever we do today in the FROM clause to make this particular gapfill sub-query work should not interfere in the future when we leverage / extend Calcite's generic subquery planner to support all kinds of subqueries.
So the path of least resistance could be to not touch PinotQuery for generic subquery support, since that may require a lot of design thinking before agreeing on how to model any subquery in Pinot. So maybe making it specific, like in the above code, just for gapfill is the way to go?
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: commits-unsubscribe@pinot.apache.org
For queries about this service, please contact Infrastructure at:
users@infra.apache.org
---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@pinot.apache.org
For additional commands, e-mail: commits-help@pinot.apache.org
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r819278835
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/util/GapfillUtils.java
##########
@@ -119,4 +130,138 @@ static public Serializable getDefaultValue(DataSchema.ColumnDataType dataType) {
private static String canonicalizeFunctionName(String functionName) {
return StringUtils.remove(functionName, '_').toLowerCase();
}
+
+ public static boolean isGapfill(ExpressionContext expressionContext) {
+ if (expressionContext.getType() != ExpressionContext.Type.FUNCTION) {
+ return false;
+ }
+
+ return GAP_FILL.equals(canonicalizeFunctionName(expressionContext.getFunction().getFunctionName()));
+ }
+
+ private static boolean isGapfill(QueryContext queryContext) {
+ for (ExpressionContext expressionContext : queryContext.getSelectExpressions()) {
+ if (isGapfill(expressionContext)) {
+ return true;
+ }
+ }
+ return false;
+ }
+
+ public static GapfillType getGapfillType(QueryContext queryContext) {
+ if (queryContext.getSubQueryContext() == null) {
+ if (isGapfill(queryContext)) {
+ Preconditions.checkArgument(queryContext.getAggregationFunctions() == null,
+ "Aggregation and Gapfill can not be in the same sql statement.");
Review comment:
Fixed.
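For context, the canonicalization that the gapfill check in the quoted diff relies on can be sketched as follows. This is a standalone re-implementation for illustration, not the actual GapfillUtils code.

```java
import java.util.Locale;

public class CanonicalizeSketch {
    // Mirrors the quoted helper: strip underscores, then lowercase,
    // so GAP_FILL, gapFill and gap_fill all compare equal.
    static String canonicalizeFunctionName(String functionName) {
        return functionName.replace("_", "").toLowerCase(Locale.ROOT);
    }

    public static void main(String[] args) {
        System.out.println(canonicalizeFunctionName("GAP_FILL")); // prints: gapfill
        System.out.println(canonicalizeFunctionName("gapFill"));  // prints: gapfill
    }
}
```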
[GitHub] [pinot] codecov-commenter edited a comment on pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#issuecomment-1013577523
# [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) Report
> Merging [#8029](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (cd0cbb2) into [master](https://codecov.io/gh/apache/pinot/commit/262dc50e236ed2af25a0cf8c67658a48731ce573?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (262dc50) will **decrease** coverage by `56.73%`.
> The diff coverage is `0.27%`.
> :exclamation: Current head cd0cbb2 differs from pull request most recent head 882d579. Consider uploading reports for the commit 882d579 to get more accurate results
[![Impacted file tree graph](https://codecov.io/gh/apache/pinot/pull/8029/graphs/tree.svg?width=650&height=150&src=pr&token=4ibza2ugkz&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
```diff
@@             Coverage Diff              @@
##             master    #8029       +/-   ##
=============================================
- Coverage     70.83%   14.09%   -56.74%
+ Complexity     4258       81     -4177
=============================================
  Files          1636     1600       -36
  Lines         85804    84538     -1266
  Branches      12920    12871       -49
=============================================
- Hits          60779    11919    -48860
- Misses        20836    71726    +50890
+ Partials       4189      893     -3296
```
| Flag | Coverage Δ | |
|---|---|---|
| integration1 | `?` | |
| integration2 | `?` | |
| unittests1 | `?` | |
| unittests2 | `14.09% <0.27%> (-0.08%)` | :arrow_down: |
Flags with carried forward coverage won't be shown. [Click here](https://docs.codecov.io/docs/carryforward-flags?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#carryforward-flags-in-the-pull-request-comment) to find out more.
| [Impacted Files](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | Coverage Δ | |
|---|---|---|
| [...org/apache/pinot/sql/parsers/CalciteSqlParser.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9zcWwvcGFyc2Vycy9DYWxjaXRlU3FsUGFyc2VyLmphdmE=) | `0.00% <0.00%> (-88.20%)` | :arrow_down: |
| [...t/controller/api/resources/PinotQueryResource.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29udHJvbGxlci9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29udHJvbGxlci9hcGkvcmVzb3VyY2VzL1Bpbm90UXVlcnlSZXNvdXJjZS5qYXZh) | `0.00% <0.00%> (-50.35%)` | :arrow_down: |
| [...query/reduce/ColumnDataToBlockValSetConverter.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvQ29sdW1uRGF0YVRvQmxvY2tWYWxTZXRDb252ZXJ0ZXIuamF2YQ==) | `0.00% <0.00%> (ø)` | |
| [...not/core/query/reduce/GapFillDataTableReducer.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvR2FwRmlsbERhdGFUYWJsZVJlZHVjZXIuamF2YQ==) | `0.00% <0.00%> (ø)` | |
| [.../pinot/core/query/reduce/GapfillFilterHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvR2FwZmlsbEZpbHRlckhhbmRsZXIuamF2YQ==) | `0.00% <0.00%> (ø)` | |
| [...e/pinot/core/query/reduce/HavingFilterHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvSGF2aW5nRmlsdGVySGFuZGxlci5qYXZh) | `0.00% <0.00%> (-91.31%)` | :arrow_down: |
| [...inot/core/query/reduce/PostAggregationHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUG9zdEFnZ3JlZ2F0aW9uSGFuZGxlci5qYXZh) | `0.00% <0.00%> (-91.77%)` | :arrow_down: |
| [.../pinot/core/query/reduce/ResultReducerFactory.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUmVzdWx0UmVkdWNlckZhY3RvcnkuamF2YQ==) | `0.00% <0.00%> (-81.25%)` | :arrow_down: |
| [.../pinot/core/query/reduce/filter/AndRowMatcher.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvZmlsdGVyL0FuZFJvd01hdGNoZXIuamF2YQ==) | `0.00% <0.00%> (ø)` | |
| [...core/query/reduce/filter/ColumnValueExtractor.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvZmlsdGVyL0NvbHVtblZhbHVlRXh0cmFjdG9yLmphdmE=) | `0.00% <0.00%> (ø)` | |
| ... and [1325 more](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Last update [262dc50...882d579](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r829440530
##########
File path: pinot-common/src/main/java/org/apache/pinot/sql/parsers/CalciteSqlParser.java
##########
@@ -67,9 +67,7 @@
public class CalciteSqlParser {
- private CalciteSqlParser() {
- }
-
+ public static final List<QueryRewriter> QUERY_REWRITERS = new ArrayList<>(QueryRewriterFactory.getQueryRewriters());
Review comment:
Fixed
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r829462749
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/request/context/QueryContext.java
##########
@@ -375,6 +393,57 @@ public String toString() {
private Map<String, String> _queryOptions;
private Map<String, String> _debugOptions;
private BrokerRequest _brokerRequest;
+ private QueryContext _subQueryContext;
Review comment:
Fixed
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r829463175
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/request/context/QueryContext.java
##########
@@ -436,6 +505,11 @@ public Builder setBrokerRequest(BrokerRequest brokerRequest) {
return this;
}
+ public Builder setSubqueryContext(QueryContext subQueryContext) {
Review comment:
Fixed
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r829496631
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/reduce/filter/ValueExtractorFactory.java
##########
@@ -0,0 +1,29 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.core.query.reduce.filter;
+
+import org.apache.pinot.common.request.context.ExpressionContext;
+
+
+/**
+ * Value extractor for the post-aggregation function or pre-aggregation gap fill.
+ */
+public interface ValueExtractorFactory {
+ ValueExtractor getValueExtractor(ExpressionContext expression);
Review comment:
Fixed
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r829483999
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/reduce/ColumnDataToBlockValSetConverter.java
##########
@@ -0,0 +1,181 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.core.query.reduce;
+
+import java.util.List;
+import javax.annotation.Nullable;
+import org.apache.pinot.common.utils.DataSchema;
+import org.apache.pinot.common.utils.DataTable;
+import org.apache.pinot.core.common.BlockValSet;
+import org.apache.pinot.segment.spi.index.reader.Dictionary;
+import org.apache.pinot.spi.data.FieldSpec;
+
+
+/**
+ * As for Gapfilling Function, all raw data will be retrieved from the pinot
Review comment:
Fixed
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r829520375
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/selection/SelectionOperatorUtils.java
##########
@@ -391,6 +388,9 @@ public static DataTable getDataTableFromRows(Collection<Object[]> rows, DataSche
row[i] = dataTable.getStringArray(rowId, i);
break;
+ case OBJECT:
Review comment:
Revert it.
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r814081650
##########
File path: pinot-common/src/main/java/org/apache/pinot/sql/parsers/CalciteSqlParser.java
##########
@@ -129,8 +129,25 @@ public static PinotQuery compileToPinotQuery(String sql)
if (!options.isEmpty()) {
sql = removeOptionsFromSql(sql);
}
+
+ SqlParser sqlParser = SqlParser.create(sql, PARSER_CONFIG);
+ SqlNode sqlNode;
+ try {
+ sqlNode = sqlParser.parseQuery();
+ } catch (SqlParseException e) {
+ throw new SqlCompilationException("Caught exception while parsing query: " + sql, e);
+ }
+
// Compile Sql without OPTION statements.
- PinotQuery pinotQuery = compileCalciteSqlToPinotQuery(sql);
+ PinotQuery pinotQuery = compileSqlNodeToPinotQuery(sqlNode);
+
+ SqlSelect sqlSelect = getSelectNode(sqlNode);
+ if (sqlSelect != null) {
+ SqlNode fromNode = sqlSelect.getFrom();
+ if (fromNode != null && (fromNode instanceof SqlSelect || fromNode instanceof SqlOrderBy)) {
+ pinotQuery.getDataSource().setSubquery(compileSqlNodeToPinotQuery(fromNode));
+ }
+ }
Review comment:
Good catch! Fixed
[GitHub] [pinot] codecov-commenter edited a comment on pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#issuecomment-1013577523
# [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) Report
> Merging [#8029](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (a8bf363) into [master](https://codecov.io/gh/apache/pinot/commit/b05a5419c88fd61450156189c8754d8c10614423?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (b05a541) will **increase** coverage by `0.02%`.
> The diff coverage is `77.24%`.
```diff
@@             Coverage Diff             @@
##             master    #8029      +/-   ##
============================================
+ Coverage     64.10%   64.12%   +0.02%
+ Complexity     4267     4264       -3
============================================
  Files          1594     1603       +9
  Lines         84040    84543     +503
  Branches      12719    12861     +142
============================================
+ Hits          53870    54210     +340
- Misses        26291    26403     +112
- Partials       3879     3930      +51
```
| Flag | Coverage Δ | |
|---|---|---|
| unittests1 | `67.01% <79.01%> (+<0.01%)` | :arrow_up: |
| unittests2 | `14.14% <0.33%> (-0.09%)` | :arrow_down: |
Flags with carried forward coverage won't be shown. [Click here](https://docs.codecov.io/docs/carryforward-flags?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#carryforward-flags-in-the-pull-request-comment) to find out more.
| [Impacted Files](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | Coverage Δ | |
|---|---|---|
| [...roker/requesthandler/GrpcBrokerRequestHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtYnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9icm9rZXIvcmVxdWVzdGhhbmRsZXIvR3JwY0Jyb2tlclJlcXVlc3RIYW5kbGVyLmphdmE=) | `0.00% <ø> (ø)` | |
| [...thandler/SingleConnectionBrokerRequestHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtYnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9icm9rZXIvcmVxdWVzdGhhbmRsZXIvU2luZ2xlQ29ubmVjdGlvbkJyb2tlclJlcXVlc3RIYW5kbGVyLmphdmE=) | `13.20% <0.00%> (+0.24%)` | :arrow_up: |
| [...pache/pinot/common/utils/request/RequestUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9jb21tb24vdXRpbHMvcmVxdWVzdC9SZXF1ZXN0VXRpbHMuamF2YQ==) | `85.71% <0.00%> (-1.79%)` | :arrow_down: |
| [...t/controller/api/resources/PinotQueryResource.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29udHJvbGxlci9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29udHJvbGxlci9hcGkvcmVzb3VyY2VzL1Bpbm90UXVlcnlSZXNvdXJjZS5qYXZh) | `0.00% <0.00%> (ø)` | |
| [...roker/requesthandler/BaseBrokerRequestHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtYnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9icm9rZXIvcmVxdWVzdGhhbmRsZXIvQmFzZUJyb2tlclJlcXVlc3RIYW5kbGVyLmphdmE=) | `23.68% <14.28%> (+0.02%)` | :arrow_up: |
| [...e/pinot/core/query/reduce/RowBasedBlockValSet.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUm93QmFzZWRCbG9ja1ZhbFNldC5qYXZh) | `16.12% <16.12%> (ø)` | |
| [...inot/core/query/reduce/PostAggregationHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUG9zdEFnZ3JlZ2F0aW9uSGFuZGxlci5qYXZh) | `91.89% <66.66%> (+0.12%)` | :arrow_up: |
| [...ot/core/query/reduce/filter/RowMatcherFactory.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvZmlsdGVyL1Jvd01hdGNoZXJGYWN0b3J5LmphdmE=) | `66.66% <66.66%> (ø)` | |
| [.../java/org/apache/pinot/core/util/GapfillUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS91dGlsL0dhcGZpbGxVdGlscy5qYXZh) | `72.28% <74.21%> (+8.64%)` | :arrow_up: |
| [...ore/query/reduce/filter/LiteralValueExtractor.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvZmlsdGVyL0xpdGVyYWxWYWx1ZUV4dHJhY3Rvci5qYXZh) | `83.33% <83.33%> (ø)` | |
| ... and [28 more](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Last update [b05a541...a8bf363](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
[GitHub] [pinot] Jackie-Jiang merged pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
Jackie-Jiang merged pull request #8029:
URL: https://github.com/apache/pinot/pull/8029
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r830719075
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/reduce/GapfillProcessor.java
##########
@@ -0,0 +1,471 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.core.query.reduce;
+
+import com.google.common.base.Preconditions;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import org.apache.pinot.common.request.context.ExpressionContext;
+import org.apache.pinot.common.request.context.FunctionContext;
+import org.apache.pinot.common.response.broker.BrokerResponseNative;
+import org.apache.pinot.common.response.broker.ResultTable;
+import org.apache.pinot.common.utils.DataSchema;
+import org.apache.pinot.common.utils.DataSchema.ColumnDataType;
+import org.apache.pinot.core.common.BlockValSet;
+import org.apache.pinot.core.data.table.Key;
+import org.apache.pinot.core.query.aggregation.function.AggregationFunction;
+import org.apache.pinot.core.query.aggregation.function.AggregationFunctionFactory;
+import org.apache.pinot.core.query.aggregation.function.CountAggregationFunction;
+import org.apache.pinot.core.query.aggregation.groupby.GroupByResultHolder;
+import org.apache.pinot.core.query.request.context.QueryContext;
+import org.apache.pinot.core.util.GapfillUtils;
+import org.apache.pinot.spi.data.DateTimeFormatSpec;
+import org.apache.pinot.spi.data.DateTimeGranularitySpec;
+
+
+/**
+ * Helper class to reduce and set gap fill results into the BrokerResponseNative
+ */
+@SuppressWarnings({"rawtypes", "unchecked"})
+public class GapfillProcessor {
+ private final QueryContext _queryContext;
+
+ private final int _limitForAggregatedResult;
+ private final DateTimeGranularitySpec _gapfillDateTimeGranularity;
+ private final DateTimeGranularitySpec _postGapfillDateTimeGranularity;
+ private final DateTimeFormatSpec _dateTimeFormatter;
+ private final long _startMs;
+ private final long _endMs;
+ private final long _gapfillTimeBucketSize;
+ private final long _postGapfillTimeBucketSize;
+ private final int _numOfTimeBuckets;
+ private final List<Integer> _groupByKeyIndexes;
+ private final Set<Key> _groupByKeys;
+ private final Map<Key, Object[]> _previousByGroupKey;
+ private final Map<String, ExpressionContext> _fillExpressions;
+ private final List<ExpressionContext> _timeSeries;
+ private final GapfillUtils.GapfillType _gapfillType;
+ private int _limitForGapfilledResult;
+ private boolean[] _isGroupBySelections;
+ private final int _timeBucketColumnIndex;
+ private int[] _sourceColumnIndexForResultSchema = null;
+ private final int _aggregationSize;
+
+ GapfillProcessor(QueryContext queryContext, GapfillUtils.GapfillType gapfillType) {
+ _queryContext = queryContext;
+ _gapfillType = gapfillType;
+ _limitForAggregatedResult = queryContext.getLimit();
+ if (_gapfillType == GapfillUtils.GapfillType.AGGREGATE_GAP_FILL
+ || _gapfillType == GapfillUtils.GapfillType.GAP_FILL) {
+ _limitForGapfilledResult = queryContext.getLimit();
+ } else {
+ _limitForGapfilledResult = queryContext.getSubquery().getLimit();
+ }
+
+ ExpressionContext gapFillSelection = GapfillUtils.getGapfillExpressionContext(queryContext, _gapfillType);
+ _timeBucketColumnIndex = GapfillUtils.findTimeBucketColumnIndex(queryContext, _gapfillType);
+
+ List<ExpressionContext> args = gapFillSelection.getFunction().getArguments();
+
+ _dateTimeFormatter = new DateTimeFormatSpec(args.get(1).getLiteral());
+ _gapfillDateTimeGranularity = new DateTimeGranularitySpec(args.get(4).getLiteral());
+ _postGapfillDateTimeGranularity = new DateTimeGranularitySpec(args.get(5).getLiteral());
+ String start = args.get(2).getLiteral();
+ _startMs = truncate(_dateTimeFormatter.fromFormatToMillis(start));
+ String end = args.get(3).getLiteral();
+ _endMs = truncate(_dateTimeFormatter.fromFormatToMillis(end));
+ _gapfillTimeBucketSize = _gapfillDateTimeGranularity.granularityToMillis();
+ _postGapfillTimeBucketSize = _postGapfillDateTimeGranularity.granularityToMillis();
+ _numOfTimeBuckets = (int) ((_endMs - _startMs) / _gapfillTimeBucketSize);
+
+ _fillExpressions = GapfillUtils.getFillExpressions(gapFillSelection);
+
+ _aggregationSize = (int) (_postGapfillTimeBucketSize / _gapfillTimeBucketSize);
+
+ _previousByGroupKey = new HashMap<>();
+ _groupByKeyIndexes = new ArrayList<>();
+ _groupByKeys = new HashSet<>();
+
+ ExpressionContext timeseriesOn = GapfillUtils.getTimeSeriesOnExpressionContext(gapFillSelection);
+ _timeSeries = timeseriesOn.getFunction().getArguments();
+ }
+
+ private int findGapfillBucketIndex(long time) {
+ return (int) ((time - _startMs) / _gapfillTimeBucketSize);
+ }
+
+ private void replaceColumnNameWithAlias(DataSchema dataSchema) {
+ QueryContext queryContext;
+ if (_gapfillType == GapfillUtils.GapfillType.AGGREGATE_GAP_FILL_AGGREGATE) {
+ queryContext = _queryContext.getSubquery().getSubquery();
+ } else if (_gapfillType == GapfillUtils.GapfillType.GAP_FILL) {
+ queryContext = _queryContext;
+ } else {
+ queryContext = _queryContext.getSubquery();
+ }
+ List<String> aliasList = queryContext.getAliasList();
+ Map<String, String> columnNameToAliasMap = new HashMap<>();
+ for (int i = 0; i < aliasList.size(); i++) {
+ if (aliasList.get(i) != null) {
+ ExpressionContext selection = queryContext.getSelectExpressions().get(i);
+ if (GapfillUtils.isGapfill(selection)) {
+ selection = selection.getFunction().getArguments().get(0);
+ }
+ columnNameToAliasMap.put(selection.toString(), aliasList.get(i));
+ }
+ }
+ for (int i = 0; i < dataSchema.getColumnNames().length; i++) {
+ if (columnNameToAliasMap.containsKey(dataSchema.getColumnNames()[i])) {
+ dataSchema.getColumnNames()[i] = columnNameToAliasMap.get(dataSchema.getColumnNames()[i]);
+ }
+ }
+ }
+
+ /**
+ * Three things happen here:
+ * 1. Sort the result sets from all Pinot servers based on timestamp.
+ * 2. Gap-fill the data for missing entities per time bucket.
+ * 3. Aggregate the dataset per time bucket.
+ */
+ public void process(BrokerResponseNative brokerResponseNative) {
+ DataSchema dataSchema = brokerResponseNative.getResultTable().getDataSchema();
+ DataSchema resultTableSchema = getResultTableDataSchema(dataSchema);
+ if (brokerResponseNative.getResultTable().getRows().isEmpty()) {
+ brokerResponseNative.setResultTable(new ResultTable(resultTableSchema, Collections.emptyList()));
+ return;
+ }
+
+ String[] columns = dataSchema.getColumnNames();
+
+ Map<String, Integer> indexes = new HashMap<>();
+ for (int i = 0; i < columns.length; i++) {
+ indexes.put(columns[i], i);
+ }
+
+ _isGroupBySelections = new boolean[dataSchema.getColumnDataTypes().length];
+
+ // The first argument of timeSeries is the time column. The remaining ones define the entity.
+ for (ExpressionContext entityColumn : _timeSeries) {
+ int index = indexes.get(entityColumn.getIdentifier());
+ _isGroupBySelections[index] = true;
+ }
+
+ for (int i = 0; i < _isGroupBySelections.length; i++) {
+ if (_isGroupBySelections[i]) {
+ _groupByKeyIndexes.add(i);
+ }
+ }
+
+ List<Object[]>[] timeBucketedRawRows = putRawRowsIntoTimeBucket(brokerResponseNative.getResultTable().getRows());
+
+ replaceColumnNameWithAlias(dataSchema);
+
+ if (_queryContext.getAggregationFunctions() == null) {
+
+ Map<String, Integer> sourceColumnsIndexes = new HashMap<>();
+ for (int i = 0; i < dataSchema.getColumnNames().length; i++) {
+ sourceColumnsIndexes.put(dataSchema.getColumnName(i), i);
+ }
+ _sourceColumnIndexForResultSchema = new int[resultTableSchema.getColumnNames().length];
+ for (int i = 0; i < _sourceColumnIndexForResultSchema.length; i++) {
+ _sourceColumnIndexForResultSchema[i] = sourceColumnsIndexes.get(resultTableSchema.getColumnName(i));
+ }
+ }
+
+ List<Object[]> resultRows = gapFillAndAggregate(timeBucketedRawRows, resultTableSchema, dataSchema);
+ brokerResponseNative.setResultTable(new ResultTable(resultTableSchema, resultRows));
+ }
+
+ /**
+ * Constructs the DataSchema for the ResultTable.
+ */
+ private DataSchema getResultTableDataSchema(DataSchema dataSchema) {
+ if (_gapfillType == GapfillUtils.GapfillType.GAP_FILL) {
+ return dataSchema;
+ }
+
+ int numOfColumns = _queryContext.getSelectExpressions().size();
+ String[] columnNames = new String[numOfColumns];
+ ColumnDataType[] columnDataTypes = new ColumnDataType[numOfColumns];
+ for (int i = 0; i < numOfColumns; i++) {
+ ExpressionContext expressionContext = _queryContext.getSelectExpressions().get(i);
+ if (GapfillUtils.isGapfill(expressionContext)) {
+ expressionContext = expressionContext.getFunction().getArguments().get(0);
+ }
+ if (expressionContext.getType() != ExpressionContext.Type.FUNCTION) {
+ columnNames[i] = expressionContext.getIdentifier();
+ columnDataTypes[i] = ColumnDataType.STRING;
+ } else {
+ FunctionContext functionContext = expressionContext.getFunction();
+ AggregationFunction aggregationFunction =
+ AggregationFunctionFactory.getAggregationFunction(functionContext, _queryContext);
+ columnDataTypes[i] = aggregationFunction.getFinalResultColumnType();
+ columnNames[i] = functionContext.toString();
+ }
+ }
+ return new DataSchema(columnNames, columnDataTypes);
+ }
+
+ private Key constructGroupKeys(Object[] row) {
+ Object[] groupKeys = new Object[_groupByKeyIndexes.size()];
+ for (int i = 0; i < _groupByKeyIndexes.size(); i++) {
+ groupKeys[i] = row[_groupByKeyIndexes.get(i)];
+ }
+ return new Key(groupKeys);
+ }
+
+ private long truncate(long epoch) {
+ int sz = _gapfillDateTimeGranularity.getSize();
+ return epoch / sz * sz;
+ }
+
+ private List<Object[]> gapFillAndAggregate(List<Object[]>[] timeBucketedRawRows,
+ DataSchema dataSchemaForAggregatedResult, DataSchema dataSchema) {
+ List<Object[]> result = new ArrayList<>();
+
+ GapfillFilterHandler postGapfillFilterHandler = null;
+ if (_queryContext.getSubquery() != null && _queryContext.getFilter() != null) {
+ postGapfillFilterHandler = new GapfillFilterHandler(_queryContext.getFilter(), dataSchema);
+ }
+ GapfillFilterHandler postAggregateHavingFilterHandler = null;
+ if (_queryContext.getHavingFilter() != null) {
+ postAggregateHavingFilterHandler =
+ new GapfillFilterHandler(_queryContext.getHavingFilter(), dataSchemaForAggregatedResult);
+ }
+ long start = _startMs;
+ ColumnDataType[] resultColumnDataTypes = dataSchema.getColumnDataTypes();
+ List<Object[]> bucketedResult = new ArrayList<>();
+ for (long time = _startMs; time < _endMs; time += _gapfillTimeBucketSize) {
+ int index = findGapfillBucketIndex(time);
+ gapfill(time, bucketedResult, timeBucketedRawRows[index], dataSchema, postGapfillFilterHandler);
+ if (_queryContext.getAggregationFunctions() == null) {
+ for (Object[] row : bucketedResult) {
+ Object[] resultRow = new Object[_sourceColumnIndexForResultSchema.length];
+ for (int i = 0; i < _sourceColumnIndexForResultSchema.length; i++) {
+ resultRow[i] = row[_sourceColumnIndexForResultSchema[i]];
+ }
+ result.add(resultRow);
+ }
+ bucketedResult = new ArrayList<>();
+ } else if (index % _aggregationSize == _aggregationSize - 1 && bucketedResult.size() > 0) {
Review comment:
Yes, we need to update the start if it is empty.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: commits-unsubscribe@pinot.apache.org
For queries about this service, please contact Infrastructure at:
users@infra.apache.org
---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@pinot.apache.org
For additional commands, e-mail: commits-help@pinot.apache.org
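The GapfillProcessor diff above derives its bucketing from three pieces of arithmetic: truncating a timestamp to its granularity, mapping a timestamp to a bucket index relative to the range start, and counting buckets in [start, end). A minimal standalone sketch of that arithmetic follows; the class and method names are illustrative only, not Pinot's actual API.

```java
// Hypothetical sketch of the time-bucket arithmetic used by GapfillProcessor.
// All values are epoch milliseconds; bucketSizeMs corresponds to the gapfill
// granularity converted to millis (granularityToMillis() in the real code).
public class GapfillBucketMath {
  // Truncate a timestamp down to the start of its bucket.
  static long truncate(long epochMs, long bucketSizeMs) {
    return epochMs / bucketSizeMs * bucketSizeMs;
  }

  // Index of the gapfill bucket containing timeMs, relative to startMs.
  static int bucketIndex(long timeMs, long startMs, long bucketSizeMs) {
    return (int) ((timeMs - startMs) / bucketSizeMs);
  }

  // Total number of buckets in the half-open range [startMs, endMs).
  static int numBuckets(long startMs, long endMs, long bucketSizeMs) {
    return (int) ((endMs - startMs) / bucketSizeMs);
  }

  public static void main(String[] args) {
    long bucket = 60_000L;                                    // 1-minute granularity
    long start = truncate(90_500L, bucket);                   // 60000
    System.out.println(start);
    System.out.println(bucketIndex(185_000L, start, bucket)); // 2
    System.out.println(numBuckets(start, 300_000L, bucket));  // 4
  }
}
```

With a 1-minute bucket, a raw timestamp of 90.5s truncates to 60s, 185s falls in bucket 2, and [60s, 300s) spans four buckets, matching the `_numOfTimeBuckets` computation in the constructor.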
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r831528011
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/reduce/GapfillProcessor.java
##########
@@ -0,0 +1,477 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.core.query.reduce;
+
+import com.google.common.base.Preconditions;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import org.apache.pinot.common.request.context.ExpressionContext;
+import org.apache.pinot.common.request.context.FunctionContext;
+import org.apache.pinot.common.response.broker.BrokerResponseNative;
+import org.apache.pinot.common.response.broker.ResultTable;
+import org.apache.pinot.common.utils.DataSchema;
+import org.apache.pinot.common.utils.DataSchema.ColumnDataType;
+import org.apache.pinot.core.common.BlockValSet;
+import org.apache.pinot.core.data.table.Key;
+import org.apache.pinot.core.query.aggregation.function.AggregationFunction;
+import org.apache.pinot.core.query.aggregation.function.AggregationFunctionFactory;
+import org.apache.pinot.core.query.aggregation.function.CountAggregationFunction;
+import org.apache.pinot.core.query.aggregation.groupby.GroupByResultHolder;
+import org.apache.pinot.core.query.request.context.QueryContext;
+import org.apache.pinot.core.util.GapfillUtils;
+import org.apache.pinot.spi.data.DateTimeFormatSpec;
+import org.apache.pinot.spi.data.DateTimeGranularitySpec;
+
+
+/**
+ * Helper class to reduce and set gap fill results into the BrokerResponseNative
+ */
+@SuppressWarnings({"rawtypes", "unchecked"})
+public class GapfillProcessor {
+ private final QueryContext _queryContext;
+
+ private final int _limitForAggregatedResult;
+ private final DateTimeGranularitySpec _gapfillDateTimeGranularity;
+ private final DateTimeGranularitySpec _postGapfillDateTimeGranularity;
+ private final DateTimeFormatSpec _dateTimeFormatter;
+ private final long _startMs;
+ private final long _endMs;
+ private final long _gapfillTimeBucketSize;
+ private final long _postGapfillTimeBucketSize;
+ private final int _numOfTimeBuckets;
+ private final List<Integer> _groupByKeyIndexes;
+ private final Set<Key> _groupByKeys;
+ private final Map<Key, Object[]> _previousByGroupKey;
+ private final Map<String, ExpressionContext> _fillExpressions;
+ private final List<ExpressionContext> _timeSeries;
+ private final GapfillUtils.GapfillType _gapfillType;
+ private int _limitForGapfilledResult;
+ private boolean[] _isGroupBySelections;
+ private final int _timeBucketColumnIndex;
+ private int[] _sourceColumnIndexForResultSchema = null;
+ private final int _aggregationSize;
+
+ GapfillProcessor(QueryContext queryContext, GapfillUtils.GapfillType gapfillType) {
+ _queryContext = queryContext;
+ _gapfillType = gapfillType;
+ _limitForAggregatedResult = queryContext.getLimit();
+ if (_gapfillType == GapfillUtils.GapfillType.AGGREGATE_GAP_FILL
+ || _gapfillType == GapfillUtils.GapfillType.GAP_FILL) {
+ _limitForGapfilledResult = queryContext.getLimit();
+ } else {
+ _limitForGapfilledResult = queryContext.getSubquery().getLimit();
+ }
+
+ ExpressionContext gapFillSelection = GapfillUtils.getGapfillExpressionContext(queryContext, _gapfillType);
+ _timeBucketColumnIndex = GapfillUtils.findTimeBucketColumnIndex(queryContext, _gapfillType);
+
+ List<ExpressionContext> args = gapFillSelection.getFunction().getArguments();
+
+ _dateTimeFormatter = new DateTimeFormatSpec(args.get(1).getLiteral());
+ _gapfillDateTimeGranularity = new DateTimeGranularitySpec(args.get(4).getLiteral());
+ if (args.get(5).getLiteral() == null) {
+ _postGapfillDateTimeGranularity = _gapfillDateTimeGranularity;
+ } else {
+ _postGapfillDateTimeGranularity = new DateTimeGranularitySpec(args.get(5).getLiteral());
+ }
+ String start = args.get(2).getLiteral();
+ _startMs = truncate(_dateTimeFormatter.fromFormatToMillis(start));
+ String end = args.get(3).getLiteral();
+ _endMs = truncate(_dateTimeFormatter.fromFormatToMillis(end));
+ _gapfillTimeBucketSize = _gapfillDateTimeGranularity.granularityToMillis();
+ _postGapfillTimeBucketSize = _postGapfillDateTimeGranularity.granularityToMillis();
+ _numOfTimeBuckets = (int) ((_endMs - _startMs) / _gapfillTimeBucketSize);
+
+ _fillExpressions = GapfillUtils.getFillExpressions(gapFillSelection);
+
+ _aggregationSize = (int) (_postGapfillTimeBucketSize / _gapfillTimeBucketSize);
+
+ _previousByGroupKey = new HashMap<>();
+ _groupByKeyIndexes = new ArrayList<>();
+ _groupByKeys = new HashSet<>();
+
+ ExpressionContext timeseriesOn = GapfillUtils.getTimeSeriesOnExpressionContext(gapFillSelection);
+ _timeSeries = timeseriesOn.getFunction().getArguments();
+ }
+
+ private int findGapfillBucketIndex(long time) {
+ return (int) ((time - _startMs) / _gapfillTimeBucketSize);
+ }
+
+ private void replaceColumnNameWithAlias(DataSchema dataSchema) {
+ QueryContext queryContext;
+ if (_gapfillType == GapfillUtils.GapfillType.AGGREGATE_GAP_FILL_AGGREGATE) {
+ queryContext = _queryContext.getSubquery().getSubquery();
+ } else if (_gapfillType == GapfillUtils.GapfillType.GAP_FILL) {
+ queryContext = _queryContext;
+ } else {
+ queryContext = _queryContext.getSubquery();
+ }
+ List<String> aliasList = queryContext.getAliasList();
+ Map<String, String> columnNameToAliasMap = new HashMap<>();
+ for (int i = 0; i < aliasList.size(); i++) {
+ if (aliasList.get(i) != null) {
+ ExpressionContext selection = queryContext.getSelectExpressions().get(i);
+ if (GapfillUtils.isGapfill(selection)) {
+ selection = selection.getFunction().getArguments().get(0);
+ }
+ columnNameToAliasMap.put(selection.toString(), aliasList.get(i));
+ }
+ }
+ for (int i = 0; i < dataSchema.getColumnNames().length; i++) {
+ if (columnNameToAliasMap.containsKey(dataSchema.getColumnNames()[i])) {
+ dataSchema.getColumnNames()[i] = columnNameToAliasMap.get(dataSchema.getColumnNames()[i]);
+ }
+ }
+ }
+
+ /**
+ * Three things happen here:
+ * 1. Sort the result sets from all Pinot servers based on timestamp.
+ * 2. Gap-fill the data for missing entities per time bucket.
+ * 3. Aggregate the dataset per time bucket.
+ */
+ public void process(BrokerResponseNative brokerResponseNative) {
+ DataSchema dataSchema = brokerResponseNative.getResultTable().getDataSchema();
+ DataSchema resultTableSchema = getResultTableDataSchema(dataSchema);
+ if (brokerResponseNative.getResultTable().getRows().isEmpty()) {
+ brokerResponseNative.setResultTable(new ResultTable(resultTableSchema, Collections.emptyList()));
+ return;
+ }
+
+ String[] columns = dataSchema.getColumnNames();
+
+ Map<String, Integer> indexes = new HashMap<>();
+ for (int i = 0; i < columns.length; i++) {
+ indexes.put(columns[i], i);
+ }
+
+ _isGroupBySelections = new boolean[dataSchema.getColumnDataTypes().length];
+
+ // The first argument of timeSeries is the time column. The remaining ones define the entity.
+ for (ExpressionContext entityColumn : _timeSeries) {
+ int index = indexes.get(entityColumn.getIdentifier());
+ _isGroupBySelections[index] = true;
+ }
+
+ for (int i = 0; i < _isGroupBySelections.length; i++) {
+ if (_isGroupBySelections[i]) {
+ _groupByKeyIndexes.add(i);
+ }
+ }
+
+ List<Object[]>[] timeBucketedRawRows = putRawRowsIntoTimeBucket(brokerResponseNative.getResultTable().getRows());
+
+ replaceColumnNameWithAlias(dataSchema);
+
+ if (_queryContext.getAggregationFunctions() == null) {
+
+ Map<String, Integer> sourceColumnsIndexes = new HashMap<>();
+ for (int i = 0; i < dataSchema.getColumnNames().length; i++) {
+ sourceColumnsIndexes.put(dataSchema.getColumnName(i), i);
+ }
+ _sourceColumnIndexForResultSchema = new int[resultTableSchema.getColumnNames().length];
+ for (int i = 0; i < _sourceColumnIndexForResultSchema.length; i++) {
+ _sourceColumnIndexForResultSchema[i] = sourceColumnsIndexes.get(resultTableSchema.getColumnName(i));
+ }
+ }
+
+ List<Object[]> resultRows = gapFillAndAggregate(timeBucketedRawRows, resultTableSchema, dataSchema);
+ brokerResponseNative.setResultTable(new ResultTable(resultTableSchema, resultRows));
+ }
+
+ /**
+ * Constructs the DataSchema for the ResultTable.
+ */
+ private DataSchema getResultTableDataSchema(DataSchema dataSchema) {
+ if (_gapfillType == GapfillUtils.GapfillType.GAP_FILL) {
+ return dataSchema;
+ }
+
+ int numOfColumns = _queryContext.getSelectExpressions().size();
+ String[] columnNames = new String[numOfColumns];
+ ColumnDataType[] columnDataTypes = new ColumnDataType[numOfColumns];
+ for (int i = 0; i < numOfColumns; i++) {
+ ExpressionContext expressionContext = _queryContext.getSelectExpressions().get(i);
+ if (GapfillUtils.isGapfill(expressionContext)) {
+ expressionContext = expressionContext.getFunction().getArguments().get(0);
+ }
+ if (expressionContext.getType() != ExpressionContext.Type.FUNCTION) {
+ columnNames[i] = expressionContext.getIdentifier();
+ columnDataTypes[i] = ColumnDataType.STRING;
+ } else {
+ FunctionContext functionContext = expressionContext.getFunction();
+ AggregationFunction aggregationFunction =
+ AggregationFunctionFactory.getAggregationFunction(functionContext, _queryContext);
+ columnDataTypes[i] = aggregationFunction.getFinalResultColumnType();
+ columnNames[i] = functionContext.toString();
+ }
+ }
+ return new DataSchema(columnNames, columnDataTypes);
+ }
+
+ private Key constructGroupKeys(Object[] row) {
+ Object[] groupKeys = new Object[_groupByKeyIndexes.size()];
+ for (int i = 0; i < _groupByKeyIndexes.size(); i++) {
+ groupKeys[i] = row[_groupByKeyIndexes.get(i)];
+ }
+ return new Key(groupKeys);
+ }
+
+ private long truncate(long epoch) {
+ int sz = _gapfillDateTimeGranularity.getSize();
+ return epoch / sz * sz;
+ }
+
+ private List<Object[]> gapFillAndAggregate(List<Object[]>[] timeBucketedRawRows,
+ DataSchema dataSchemaForAggregatedResult, DataSchema dataSchema) {
+ List<Object[]> result = new ArrayList<>();
+
+ GapfillFilterHandler postGapfillFilterHandler = null;
+ if (_queryContext.getSubquery() != null && _queryContext.getFilter() != null) {
+ postGapfillFilterHandler = new GapfillFilterHandler(_queryContext.getFilter(), dataSchema);
+ }
+ GapfillFilterHandler postAggregateHavingFilterHandler = null;
+ if (_queryContext.getHavingFilter() != null) {
+ postAggregateHavingFilterHandler =
+ new GapfillFilterHandler(_queryContext.getHavingFilter(), dataSchemaForAggregatedResult);
+ }
+ long start = _startMs;
+ ColumnDataType[] resultColumnDataTypes = dataSchema.getColumnDataTypes();
+ List<Object[]> bucketedResult = new ArrayList<>();
+ for (long time = _startMs; time < _endMs; time += _gapfillTimeBucketSize) {
+ int index = findGapfillBucketIndex(time);
+ gapfill(time, bucketedResult, timeBucketedRawRows[index], dataSchema, postGapfillFilterHandler);
+ if (_queryContext.getAggregationFunctions() == null) {
+ for (Object[] row : bucketedResult) {
Review comment:
Fixed
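The quoted GapfillProcessor code partitions raw rows into time buckets (putRawRowsIntoTimeBucket) and tracks the entities seen per bucket via group keys (constructGroupKeys), so gaps can later be filled for entities missing from a bucket. The following is a rough, self-contained illustration of that partitioning step; the class name, row layout, and method are hypothetical, not Pinot's actual classes.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch: partition raw rows into time buckets, grouped by
// entity key within each bucket, mirroring the intent of
// putRawRowsIntoTimeBucket/constructGroupKeys in GapfillProcessor.
public class TimeBucketGrouping {
  // Assumed row layout for this sketch: [timestampMs (Long), entityId (String), value]
  static Map<Integer, Map<String, List<Object[]>>> bucketize(
      List<Object[]> rows, long startMs, long bucketSizeMs) {
    Map<Integer, Map<String, List<Object[]>>> buckets = new HashMap<>();
    for (Object[] row : rows) {
      // Bucket index relative to the range start.
      int index = (int) (((long) row[0] - startMs) / bucketSizeMs);
      buckets.computeIfAbsent(index, k -> new HashMap<>())
          .computeIfAbsent((String) row[1], k -> new ArrayList<>())
          .add(row);
    }
    return buckets;
  }

  public static void main(String[] args) {
    List<Object[]> rows = Arrays.asList(
        new Object[]{0L, "a", 1.0},
        new Object[]{65_000L, "a", 2.0},
        new Object[]{70_000L, "b", 3.0});
    Map<Integer, Map<String, List<Object[]>>> buckets = bucketize(rows, 0L, 60_000L);
    System.out.println(buckets.get(1).keySet()); // entities seen in bucket 1
  }
}
```

An entity present in bucket 0 but absent from bucket 1 is exactly what the gapfill pass detects: it consults the previous value for that group key (the `_previousByGroupKey` map in the real code) to synthesize the missing row.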
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r819934610
##########
File path: pinot-common/src/main/java/org/apache/pinot/sql/parsers/CalciteSqlParser.java
##########
@@ -369,6 +369,9 @@ private static PinotQuery compileCalciteSqlToPinotQuery(String sql) {
DataSource dataSource = new DataSource();
dataSource.setTableName(fromNode.toString());
pinotQuery.setDataSource(dataSource);
+ if (fromNode instanceof SqlSelect || fromNode instanceof SqlOrderBy) {
Review comment:
We cannot leave TableName as null. I will accept your other suggestion.
[GitHub] [pinot] codecov-commenter edited a comment on pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#issuecomment-1013577523
# [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) Report
> Merging [#8029](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (3c27643) into [master](https://codecov.io/gh/apache/pinot/commit/3f98ce37fdaef0335fcd82e621489d65751b1f55?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (3f98ce3) will **decrease** coverage by `6.44%`.
> The diff coverage is `82.08%`.
[![Impacted file tree graph](https://codecov.io/gh/apache/pinot/pull/8029/graphs/tree.svg?width=650&height=150&src=pr&token=4ibza2ugkz&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
```diff
@@ Coverage Diff @@
## master #8029 +/- ##
============================================
- Coverage 70.72% 64.27% -6.45%
+ Complexity 4242 4240 -2
============================================
Files 1631 1596 -35
Lines 85279 84078 -1201
Branches 12844 12809 -35
============================================
- Hits 60316 54045 -6271
- Misses 20799 26135 +5336
+ Partials 4164 3898 -266
```
| Flag | Coverage Δ | |
|---|---|---|
| integration1 | `?` | |
| integration2 | `?` | |
| unittests1 | `67.14% <82.29%> (+0.15%)` | :arrow_up: |
| unittests2 | `14.08% <0.00%> (-0.02%)` | :arrow_down: |
Flags with carried forward coverage won't be shown. [Click here](https://docs.codecov.io/docs/carryforward-flags?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#carryforward-flags-in-the-pull-request-comment) to find out more.
| [Impacted Files](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | Coverage Δ | |
|---|---|---|
| [...roker/requesthandler/BaseBrokerRequestHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtYnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9icm9rZXIvcmVxdWVzdGhhbmRsZXIvQmFzZUJyb2tlclJlcXVlc3RIYW5kbGVyLmphdmE=) | `24.41% <0.00%> (-47.44%)` | :arrow_down: |
| [...t/core/query/selection/SelectionOperatorUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9zZWxlY3Rpb24vU2VsZWN0aW9uT3BlcmF0b3JVdGlscy5qYXZh) | `86.08% <0.00%> (-6.10%)` | :arrow_down: |
| [...query/reduce/ColumnDataToBlockValSetConverter.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvQ29sdW1uRGF0YVRvQmxvY2tWYWxTZXRDb252ZXJ0ZXIuamF2YQ==) | `17.30% <17.30%> (ø)` | |
| [...ot/core/query/reduce/filter/RowMatcherFactory.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvZmlsdGVyL1Jvd01hdGNoZXJGYWN0b3J5LmphdmE=) | `66.66% <66.66%> (ø)` | |
| [...pinot/core/plan/maker/InstancePlanMakerImplV2.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL21ha2VyL0luc3RhbmNlUGxhbk1ha2VySW1wbFYyLmphdmE=) | `66.36% <73.68%> (-10.41%)` | :arrow_down: |
| [...va/org/apache/pinot/core/plan/CombinePlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL0NvbWJpbmVQbGFuTm9kZS5qYXZh) | `80.55% <75.00%> (-6.95%)` | :arrow_down: |
| [...inot/core/query/reduce/PostAggregationHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUG9zdEFnZ3JlZ2F0aW9uSGFuZGxlci5qYXZh) | `92.10% <75.00%> (+0.34%)` | :arrow_up: |
| [...xt/utils/BrokerRequestToQueryContextConverter.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZXF1ZXN0L2NvbnRleHQvdXRpbHMvQnJva2VyUmVxdWVzdFRvUXVlcnlDb250ZXh0Q29udmVydGVyLmphdmE=) | `90.96% <77.77%> (-7.43%)` | :arrow_down: |
| [.../java/org/apache/pinot/core/util/GapfillUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS91dGlsL0dhcGZpbGxVdGlscy5qYXZh) | `76.41% <82.05%> (+12.77%)` | :arrow_up: |
| [...ore/query/reduce/filter/LiteralValueExtractor.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvZmlsdGVyL0xpdGVyYWxWYWx1ZUV4dHJhY3Rvci5qYXZh) | `83.33% <83.33%> (ø)` | |
| ... and [391 more](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Last update [3f98ce3...3c27643](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
[GitHub] [pinot] codecov-commenter edited a comment on pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#issuecomment-1013577523
# [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) Report
> Merging [#8029](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (d69d109) into [master](https://codecov.io/gh/apache/pinot/commit/3f98ce37fdaef0335fcd82e621489d65751b1f55?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (3f98ce3) will **decrease** coverage by `42.21%`.
> The diff coverage is `15.90%`.
> :exclamation: Current head d69d109 differs from pull request most recent head 7746a7a. Consider uploading reports for the commit 7746a7a to get more accurate results.
[![Impacted file tree graph](https://codecov.io/gh/apache/pinot/pull/8029/graphs/tree.svg?width=650&height=150&src=pr&token=4ibza2ugkz&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
```diff
@@              Coverage Diff              @@
##             master    #8029       +/-   ##
=============================================
- Coverage     70.72%   28.51%   -42.22%
=============================================
  Files          1631     1629        -2
  Lines         85279    85610      +331
  Branches      12844    12982      +138
=============================================
- Hits          60316    24411    -35905
- Misses        20799    58945    +38146
+ Partials       4164     2254     -1910
```
| Flag | Coverage Δ | |
|---|---|---|
| integration1 | `28.51% <15.90%> (-0.18%)` | :arrow_down: |
| integration2 | `?` | |
| unittests1 | `?` | |
| unittests2 | `?` | |
Flags with carried forward coverage won't be shown. [Click here](https://docs.codecov.io/docs/carryforward-flags?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#carryforward-flags-in-the-pull-request-comment) to find out more.
| [Impacted Files](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | Coverage Δ | |
|---|---|---|
| [...roker/requesthandler/BaseBrokerRequestHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtYnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9icm9rZXIvcmVxdWVzdGhhbmRsZXIvQmFzZUJyb2tlclJlcXVlc3RIYW5kbGVyLmphdmE=) | `61.13% <0.00%> (-10.73%)` | :arrow_down: |
| [...ache/pinot/core/plan/GapfillSelectionPlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL0dhcGZpbGxTZWxlY3Rpb25QbGFuTm9kZS5qYXZh) | `0.00% <0.00%> (ø)` | |
| [...query/reduce/ColumnDataToBlockValSetConverter.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvQ29sdW1uRGF0YVRvQmxvY2tWYWxTZXRDb252ZXJ0ZXIuamF2YQ==) | `0.00% <0.00%> (ø)` | |
| [...not/core/query/reduce/GapFillDataTableReducer.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvR2FwRmlsbERhdGFUYWJsZVJlZHVjZXIuamF2YQ==) | `0.00% <0.00%> (ø)` | |
| [.../pinot/core/query/reduce/GapfillFilterHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvR2FwZmlsbEZpbHRlckhhbmRsZXIuamF2YQ==) | `0.00% <0.00%> (ø)` | |
| [...t/core/query/selection/SelectionOperatorUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9zZWxlY3Rpb24vU2VsZWN0aW9uT3BlcmF0b3JVdGlscy5qYXZh) | `69.25% <0.00%> (-22.93%)` | :arrow_down: |
| [...xt/utils/BrokerRequestToQueryContextConverter.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZXF1ZXN0L2NvbnRleHQvdXRpbHMvQnJva2VyUmVxdWVzdFRvUXVlcnlDb250ZXh0Q29udmVydGVyLmphdmE=) | `70.68% <16.98%> (-27.70%)` | :arrow_down: |
| [.../java/org/apache/pinot/core/util/GapfillUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS91dGlsL0dhcGZpbGxVdGlscy5qYXZh) | `27.35% <21.79%> (-36.28%)` | :arrow_down: |
| [...pinot/core/plan/maker/InstancePlanMakerImplV2.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL21ha2VyL0luc3RhbmNlUGxhbk1ha2VySW1wbFYyLmphdmE=) | `43.63% <26.31%> (-33.14%)` | :arrow_down: |
| [...va/org/apache/pinot/core/plan/CombinePlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL0NvbWJpbmVQbGFuTm9kZS5qYXZh) | `61.11% <30.00%> (-26.39%)` | :arrow_down: |
| ... and [1193 more](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Last update [3f98ce3...7746a7a](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r793133539
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/util/GapfillUtils.java
##########
@@ -31,7 +34,10 @@
*/
public class GapfillUtils {
private static final String POST_AGGREGATE_GAP_FILL = "postaggregategapfill";
+ private static final String PRE_AGGREGATE_GAP_FILL = "preaggregategapfill";
Review comment:
Done
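For context on the constants discussed above: the all-lowercase values suggest the requested function name is canonicalized before being compared. The canonicalization rule and class below are assumptions for illustration, not the PR's actual code:

```java
public class GapfillNameCheck {
    // Constants as shown in the quoted GapfillUtils diff.
    private static final String POST_AGGREGATE_GAP_FILL = "postaggregategapfill";
    private static final String PRE_AGGREGATE_GAP_FILL = "preaggregategapfill";

    // Hypothetical canonicalization: strip underscores and lowercase, so
    // "PreAggregateGapFill" and "pre_aggregate_gap_fill" both match.
    static String canonicalize(String name) {
        return name.replace("_", "").toLowerCase();
    }

    // Returns true if the function name refers to either gapfill variant.
    static boolean isGapfill(String functionName) {
        String canonical = canonicalize(functionName);
        return POST_AGGREGATE_GAP_FILL.equals(canonical)
            || PRE_AGGREGATE_GAP_FILL.equals(canonical);
    }

    public static void main(String[] args) {
        System.out.println(isGapfill("PreAggregateGapFill"));     // true
        System.out.println(isGapfill("post_aggregate_gap_fill")); // true
        System.out.println(isGapfill("sum"));                     // false
    }
}
```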
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r793120029
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/reduce/PreAggregationGapFillSelectionOperatorService.java
##########
@@ -0,0 +1,388 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.core.query.reduce;
+
+import com.google.common.base.Preconditions;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.Comparator;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.Iterator;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+import java.util.PriorityQueue;
+import java.util.Set;
+import org.apache.pinot.common.request.context.ExpressionContext;
+import org.apache.pinot.common.request.context.FilterContext;
+import org.apache.pinot.common.request.context.FunctionContext;
+import org.apache.pinot.common.utils.DataSchema;
+import org.apache.pinot.common.utils.DataSchema.ColumnDataType;
+import org.apache.pinot.common.utils.DataTable;
+import org.apache.pinot.core.common.BlockValSet;
+import org.apache.pinot.core.data.table.Key;
+import org.apache.pinot.core.query.aggregation.function.AggregationFunction;
+import org.apache.pinot.core.query.aggregation.function.AggregationFunctionFactory;
+import org.apache.pinot.core.query.aggregation.groupby.GroupByResultHolder;
+import org.apache.pinot.core.query.request.context.QueryContext;
+import org.apache.pinot.core.query.selection.SelectionOperatorUtils;
+import org.apache.pinot.core.util.GapfillUtils;
+import org.apache.pinot.spi.data.DateTimeFormatSpec;
+import org.apache.pinot.spi.data.DateTimeGranularitySpec;
+
+
+/**
+ * The <code>PreAggregationGapFillSelectionOperatorService</code> class provides the services for selection queries with
+ * <code>ORDER BY</code>.
+ * <p>Expected behavior:
+ * <ul>
+ * <li>
+ * Return selection results with the same order of columns as user passed in.
Review comment:
Done
[GitHub] [pinot] codecov-commenter edited a comment on pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#issuecomment-1013577523
# [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) Report
> Merging [#8029](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (bbde0cc) into [master](https://codecov.io/gh/apache/pinot/commit/cc2f3fe196d29a0d716bfee07add9b761e8fa98e?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (cc2f3fe) will **increase** coverage by `0.06%`.
> The diff coverage is `74.56%`.
[![Impacted file tree graph](https://codecov.io/gh/apache/pinot/pull/8029/graphs/tree.svg?width=650&height=150&src=pr&token=4ibza2ugkz&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
```diff
@@             Coverage Diff              @@
##             master    #8029      +/-   ##
============================================
+ Coverage     64.63%   64.69%   +0.06%
- Complexity     4260     4261       +1
============================================
  Files          1562     1572      +10
  Lines         81525    81856     +331
  Branches      12252    12325      +73
============================================
+ Hits          52695    52959     +264
- Misses        25072    25093      +21
- Partials       3758     3804      +46
```
| Flag | Coverage Δ | |
|---|---|---|
| unittests1 | `67.88% <74.56%> (+0.02%)` | :arrow_up: |
| unittests2 | `14.19% <0.00%> (-0.03%)` | :arrow_down: |
Flags with carried forward coverage won't be shown. [Click here](https://docs.codecov.io/docs/carryforward-flags?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#carryforward-flags-in-the-pull-request-comment) to find out more.
| [Impacted Files](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | Coverage Δ | |
|---|---|---|
| [...query/reduce/ColumnDataToBlockValSetConverter.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvQ29sdW1uRGF0YVRvQmxvY2tWYWxTZXRDb252ZXJ0ZXIuamF2YQ==) | `17.30% <17.30%> (ø)` | |
| [...query/request/context/utils/QueryContextUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZXF1ZXN0L2NvbnRleHQvdXRpbHMvUXVlcnlDb250ZXh0VXRpbHMuamF2YQ==) | `69.56% <20.00%> (-5.44%)` | :arrow_down: |
| [.../java/org/apache/pinot/core/util/GapfillUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS91dGlsL0dhcGZpbGxVdGlscy5qYXZh) | `58.73% <54.83%> (-4.91%)` | :arrow_down: |
| [...inot/core/query/reduce/PostAggregationHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUG9zdEFnZ3JlZ2F0aW9uSGFuZGxlci5qYXZh) | `92.42% <66.66%> (+0.11%)` | :arrow_up: |
| [...che/pinot/core/query/reduce/filter/RowMatcher.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvZmlsdGVyL1Jvd01hdGNoZXIuamF2YQ==) | `66.66% <66.66%> (ø)` | |
| [...ore/query/reduce/filter/LiteralValueExtractor.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvZmlsdGVyL0xpdGVyYWxWYWx1ZUV4dHJhY3Rvci5qYXZh) | `83.33% <83.33%> (ø)` | |
| [.../reduce/PreAggregationGapFillDataTableReducer.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUHJlQWdncmVnYXRpb25HYXBGaWxsRGF0YVRhYmxlUmVkdWNlci5qYXZh) | `83.76% <83.76%> (ø)` | |
| [...query/reduce/PreAggregateGapfillFilterHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUHJlQWdncmVnYXRlR2FwZmlsbEZpbHRlckhhbmRsZXIuamF2YQ==) | `85.71% <85.71%> (ø)` | |
| [.../core/query/reduce/filter/PredicateRowMatcher.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvZmlsdGVyL1ByZWRpY2F0ZVJvd01hdGNoZXIuamF2YQ==) | `87.50% <87.50%> (ø)` | |
| [...inot/core/plan/PreAggGapFillSelectionPlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL1ByZUFnZ0dhcEZpbGxTZWxlY3Rpb25QbGFuTm9kZS5qYXZh) | `90.00% <90.00%> (ø)` | |
| ... and [22 more](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Last update [cc2f3fe...bbde0cc](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
[GitHub] [pinot] codecov-commenter edited a comment on pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#issuecomment-1013577523
# [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) Report
> Merging [#8029](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (f0544c7) into [master](https://codecov.io/gh/apache/pinot/commit/916d807c8f67b32c1a430692f74134c9c976c33d?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (916d807) will **decrease** coverage by `6.58%`.
> The diff coverage is `82.00%`.
[![Impacted file tree graph](https://codecov.io/gh/apache/pinot/pull/8029/graphs/tree.svg?width=650&height=150&src=pr&token=4ibza2ugkz&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
```diff
@@             Coverage Diff             @@
##             master    #8029      +/-   ##
============================================
- Coverage     71.02%   64.44%   -6.59%
  Complexity     4314     4314
============================================
  Files          1626     1591      -35
  Lines         84929    83729    -1200
  Branches      12783    12744      -39
============================================
- Hits          60325    53963    -6362
- Misses        20462    25880    +5418
+ Partials       4142     3886     -256
```
| Flag | Coverage Δ | |
|---|---|---|
| integration1 | `?` | |
| integration2 | `?` | |
| unittests1 | `67.47% <82.23%> (+0.08%)` | :arrow_up: |
| unittests2 | `14.02% <0.00%> (-0.09%)` | :arrow_down: |
Flags with carried forward coverage won't be shown. [Click here](https://docs.codecov.io/docs/carryforward-flags?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#carryforward-flags-in-the-pull-request-comment) to find out more.
| [Impacted Files](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | Coverage Δ | |
|---|---|---|
| [...roker/requesthandler/BaseBrokerRequestHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtYnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9icm9rZXIvcmVxdWVzdGhhbmRsZXIvQmFzZUJyb2tlclJlcXVlc3RIYW5kbGVyLmphdmE=) | `24.70% <0.00%> (-46.82%)` | :arrow_down: |
| [...t/core/query/selection/SelectionOperatorUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9zZWxlY3Rpb24vU2VsZWN0aW9uT3BlcmF0b3JVdGlscy5qYXZh) | `86.08% <0.00%> (-6.10%)` | :arrow_down: |
| [...query/reduce/ColumnDataToBlockValSetConverter.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvQ29sdW1uRGF0YVRvQmxvY2tWYWxTZXRDb252ZXJ0ZXIuamF2YQ==) | `17.30% <17.30%> (ø)` | |
| [...che/pinot/core/query/reduce/filter/RowMatcher.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvZmlsdGVyL1Jvd01hdGNoZXIuamF2YQ==) | `66.66% <66.66%> (ø)` | |
| [.../java/org/apache/pinot/core/util/GapfillUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS91dGlsL0dhcGZpbGxVdGlscy5qYXZh) | `75.24% <81.42%> (+11.61%)` | :arrow_up: |
| [...va/org/apache/pinot/core/plan/CombinePlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL0NvbWJpbmVQbGFuTm9kZS5qYXZh) | `83.33% <81.81%> (-4.17%)` | :arrow_down: |
| [...inot/core/query/reduce/PostAggregationHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUG9zdEFnZ3JlZ2F0aW9uSGFuZGxlci5qYXZh) | `92.00% <83.33%> (+0.23%)` | :arrow_up: |
| [...ore/query/reduce/filter/LiteralValueExtractor.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvZmlsdGVyL0xpdGVyYWxWYWx1ZUV4dHJhY3Rvci5qYXZh) | `83.33% <83.33%> (ø)` | |
| [...query/reduce/PreAggregateGapfillFilterHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUHJlQWdncmVnYXRlR2FwZmlsbEZpbHRlckhhbmRsZXIuamF2YQ==) | `85.71% <85.71%> (ø)` | |
| [.../reduce/PreAggregationGapFillDataTableReducer.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUHJlQWdncmVnYXRpb25HYXBGaWxsRGF0YVRhYmxlUmVkdWNlci5qYXZh) | `86.73% <86.73%> (ø)` | |
| ... and [398 more](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Last update [916d807...f0544c7](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
[GitHub] [pinot] weixiangsun commented on pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#issuecomment-1030278033
@siddharthteotia I am actively working on it.
[GitHub] [pinot] codecov-commenter edited a comment on pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#issuecomment-1013577523
# [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) Report
> Merging [#8029](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (e4043fe) into [master](https://codecov.io/gh/apache/pinot/commit/af742f7d7e1dbe4325c982f3a0164927d8a0037f?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (af742f7) will **decrease** coverage by `39.90%`.
> The diff coverage is `11.94%`.
[![Impacted file tree graph](https://codecov.io/gh/apache/pinot/pull/8029/graphs/tree.svg?width=650&height=150&src=pr&token=4ibza2ugkz&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
```diff
@@             Coverage Diff              @@
##             master    #8029       +/-   ##
=============================================
- Coverage     70.32%   30.42%   -39.91%
=============================================
  Files          1624     1626        +2
  Lines         84176    84522      +346
  Branches      12600    12731      +131
=============================================
- Hits          59196    25714    -33482
- Misses        20906    56511    +35605
+ Partials       4074     2297     -1777
```
| Flag | Coverage Δ | |
|---|---|---|
| integration1 | `28.65% <11.67%> (-0.15%)` | :arrow_down: |
| integration2 | `27.37% <11.94%> (?)` | |
| unittests1 | `?` | |
| unittests2 | `?` | |
Flags with carried forward coverage won't be shown. [Click here](https://docs.codecov.io/docs/carryforward-flags?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#carryforward-flags-in-the-pull-request-comment) to find out more.
| [Impacted Files](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | Coverage Δ | |
|---|---|---|
| [.../combine/GapfillGroupByOrderByCombineOperator.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9vcGVyYXRvci9jb21iaW5lL0dhcGZpbGxHcm91cEJ5T3JkZXJCeUNvbWJpbmVPcGVyYXRvci5qYXZh) | `0.00% <0.00%> (ø)` | |
| [...plan/GapfillAggregationGroupByOrderByPlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL0dhcGZpbGxBZ2dyZWdhdGlvbkdyb3VwQnlPcmRlckJ5UGxhbk5vZGUuamF2YQ==) | `0.00% <0.00%> (ø)` | |
| [...he/pinot/core/plan/GapfillAggregationPlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL0dhcGZpbGxBZ2dyZWdhdGlvblBsYW5Ob2RlLmphdmE=) | `0.00% <0.00%> (ø)` | |
| [...ache/pinot/core/plan/GapfillSelectionPlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL0dhcGZpbGxTZWxlY3Rpb25QbGFuTm9kZS5qYXZh) | `0.00% <0.00%> (ø)` | |
| [...query/reduce/ColumnDataToBlockValSetConverter.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvQ29sdW1uRGF0YVRvQmxvY2tWYWxTZXRDb252ZXJ0ZXIuamF2YQ==) | `0.00% <0.00%> (ø)` | |
| [...query/reduce/PreAggregateGapfillFilterHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUHJlQWdncmVnYXRlR2FwZmlsbEZpbHRlckhhbmRsZXIuamF2YQ==) | `0.00% <0.00%> (ø)` | |
| [.../reduce/PreAggregationGapFillDataTableReducer.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUHJlQWdncmVnYXRpb25HYXBGaWxsRGF0YVRhYmxlUmVkdWNlci5qYXZh) | `0.00% <0.00%> (ø)` | |
| [...t/core/query/selection/SelectionOperatorUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9zZWxlY3Rpb24vU2VsZWN0aW9uT3BlcmF0b3JVdGlscy5qYXZh) | `69.25% <0.00%> (-22.93%)` | :arrow_down: |
| [.../java/org/apache/pinot/core/util/GapfillUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS91dGlsL0dhcGZpbGxVdGlscy5qYXZh) | `27.02% <18.60%> (-36.61%)` | :arrow_down: |
| [...va/org/apache/pinot/core/plan/CombinePlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL0NvbWJpbmVQbGFuTm9kZS5qYXZh) | `78.33% <20.00%> (-5.60%)` | :arrow_down: |
| ... and [1180 more](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Last update [af742f7...e4043fe](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r806333514
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/request/context/utils/QueryContextUtils.java
##########
@@ -47,16 +52,22 @@ public static boolean isSelectionQuery(QueryContext query) {
* Selection-only query at this moment means selection query without order-by.
*/
public static boolean isSelectionOnlyQuery(QueryContext query) {
- return query.getAggregationFunctions() == null && query.getOrderByExpressions() == null;
+ return query.getAggregationFunctions() == null
+ && query.getOrderByExpressions() == null
+ && !GapfillUtils.isGapfill(query);
Review comment:
Done
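For readers following the thread, the predicate change under review can be sketched as a minimal, self-contained paraphrase of the diff above. The `QueryContext` class below is a hand-rolled stub for illustration only, not Pinot's actual `QueryContext` API; only the three-clause boolean logic mirrors the diff.

```java
import java.util.List;

public class SelectionOnlyCheckSketch {
    // Illustrative stub; Pinot's real QueryContext is far richer.
    static class QueryContext {
        final List<String> aggregationFunctions; // null when no aggregations
        final List<String> orderByExpressions;   // null when no ORDER BY
        final boolean gapfill;                   // true for gapfill queries

        QueryContext(List<String> aggs, List<String> orderBy, boolean gapfill) {
            this.aggregationFunctions = aggs;
            this.orderByExpressions = orderBy;
            this.gapfill = gapfill;
        }
    }

    // Mirrors the updated predicate in the diff: a gapfill query is never
    // treated as "selection-only", even without aggregations or ORDER BY,
    // because gapfill needs its own reduce path.
    static boolean isSelectionOnlyQuery(QueryContext query) {
        return query.aggregationFunctions == null
            && query.orderByExpressions == null
            && !query.gapfill;
    }

    public static void main(String[] args) {
        System.out.println(isSelectionOnlyQuery(new QueryContext(null, null, false))); // true
        System.out.println(isSelectionOnlyQuery(new QueryContext(null, null, true)));  // false
    }
}
```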
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r806376002
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/plan/GapfillAggregationGroupByOrderByPlanNode.java
##########
@@ -0,0 +1,110 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.core.plan;
+
+import com.google.common.base.Preconditions;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import org.apache.pinot.common.request.context.ExpressionContext;
+import org.apache.pinot.core.operator.filter.BaseFilterOperator;
+import org.apache.pinot.core.operator.query.AggregationGroupByOrderByOperator;
+import org.apache.pinot.core.operator.transform.TransformOperator;
+import org.apache.pinot.core.query.aggregation.function.AggregationFunction;
+import org.apache.pinot.core.query.aggregation.function.AggregationFunctionUtils;
+import org.apache.pinot.core.query.request.context.QueryContext;
+import org.apache.pinot.core.startree.CompositePredicateEvaluator;
+import org.apache.pinot.core.startree.StarTreeUtils;
+import org.apache.pinot.core.startree.plan.StarTreeTransformPlanNode;
+import org.apache.pinot.core.util.GapfillUtils;
+import org.apache.pinot.segment.spi.IndexSegment;
+import org.apache.pinot.segment.spi.index.startree.AggregationFunctionColumnPair;
+import org.apache.pinot.segment.spi.index.startree.StarTreeV2;
+
+
+/**
+ * The <code>GapfillAggregationGroupByOrderByPlanNode</code> class provides the execution plan for gapfill aggregation
+ * group-by order-by query on a single segment.
+ */
+@SuppressWarnings("rawtypes")
+public class GapfillAggregationGroupByOrderByPlanNode implements PlanNode {
Review comment:
This class was removed due to the query syntax change. Done
[GitHub] [pinot] codecov-commenter edited a comment on pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#issuecomment-1013577523
# [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) Report
> Merging [#8029](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (572ab88) into [master](https://codecov.io/gh/apache/pinot/commit/df39bdacf09dff5a00f5180a5d1ce838710b45a4?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (df39bda) will **decrease** coverage by `57.00%`.
> The diff coverage is `0.00%`.
[![Impacted file tree graph](https://codecov.io/gh/apache/pinot/pull/8029/graphs/tree.svg?width=650&height=150&src=pr&token=4ibza2ugkz&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
```diff
@@             Coverage Diff              @@
##             master    #8029       +/-   ##
=============================================
- Coverage     71.01%   14.01%   -57.01%
+ Complexity     4314       81     -4233
=============================================
  Files          1624     1589       -35
  Lines         84873    83621     -1252
  Branches      12791    12743       -48
=============================================
- Hits          60273    11718    -48555
- Misses        20453    71032    +50579
+ Partials       4147      871     -3276
```
| Flag | Coverage Δ | |
|---|---|---|
| integration1 | `?` | |
| integration2 | `?` | |
| unittests1 | `?` | |
| unittests2 | `14.01% <0.00%> (-0.10%)` | :arrow_down: |
Flags with carried forward coverage won't be shown. [Click here](https://docs.codecov.io/docs/carryforward-flags?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#carryforward-flags-in-the-pull-request-comment) to find out more.
| [Impacted Files](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | Coverage Δ | |
|---|---|---|
| [...org/apache/pinot/sql/parsers/CalciteSqlParser.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9zcWwvcGFyc2Vycy9DYWxjaXRlU3FsUGFyc2VyLmphdmE=) | `0.00% <0.00%> (-87.93%)` | :arrow_down: |
| [...org/apache/pinot/core/data/table/IndexedTable.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9kYXRhL3RhYmxlL0luZGV4ZWRUYWJsZS5qYXZh) | `0.00% <0.00%> (-84.75%)` | :arrow_down: |
| [...va/org/apache/pinot/core/plan/CombinePlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL0NvbWJpbmVQbGFuTm9kZS5qYXZh) | `0.00% <0.00%> (-87.50%)` | :arrow_down: |
| [...ache/pinot/core/plan/GapfillSelectionPlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL0dhcGZpbGxTZWxlY3Rpb25QbGFuTm9kZS5qYXZh) | `0.00% <0.00%> (ø)` | |
| [...pinot/core/plan/maker/InstancePlanMakerImplV2.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL21ha2VyL0luc3RhbmNlUGxhbk1ha2VySW1wbFYyLmphdmE=) | `0.00% <0.00%> (-76.77%)` | :arrow_down: |
| [...query/reduce/ColumnDataToBlockValSetConverter.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvQ29sdW1uRGF0YVRvQmxvY2tWYWxTZXRDb252ZXJ0ZXIuamF2YQ==) | `0.00% <0.00%> (ø)` | |
| [...e/pinot/core/query/reduce/HavingFilterHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvSGF2aW5nRmlsdGVySGFuZGxlci5qYXZh) | `0.00% <0.00%> (-91.31%)` | :arrow_down: |
| [...inot/core/query/reduce/PostAggregationHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUG9zdEFnZ3JlZ2F0aW9uSGFuZGxlci5qYXZh) | `0.00% <0.00%> (-91.67%)` | :arrow_down: |
| [...query/reduce/PreAggregateGapfillFilterHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUHJlQWdncmVnYXRlR2FwZmlsbEZpbHRlckhhbmRsZXIuamF2YQ==) | `0.00% <0.00%> (ø)` | |
| [.../reduce/PreAggregationGapFillDataTableReducer.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUHJlQWdncmVnYXRpb25HYXBGaWxsRGF0YVRhYmxlUmVkdWNlci5qYXZh) | `0.00% <0.00%> (ø)` | |
| ... and [1315 more](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Last update [df39bda...572ab88](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r814092555
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/data/table/IndexedTable.java
##########
@@ -65,10 +66,15 @@ protected IndexedTable(DataSchema dataSchema, QueryContext queryContext, int res
_lookupMap = lookupMap;
_resultSize = resultSize;
- List<ExpressionContext> groupByExpressions = queryContext.getGroupByExpressions();
+ List<ExpressionContext> groupByExpressions;
+ if (queryContext.getGapfillType() != GapfillUtils.GapfillType.NONE) {
+ groupByExpressions = GapfillUtils.getGroupByExpressions(queryContext);
+ } else {
+ groupByExpressions = queryContext.getGroupByExpressions();
+ }
+ _aggregationFunctions = queryContext.getAggregationFunctions();
Review comment:
Fixed.
[GitHub] [pinot] codecov-commenter edited a comment on pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#issuecomment-1013577523
# [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) Report
> Merging [#8029](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (5e68133) into [master](https://codecov.io/gh/apache/pinot/commit/21632dadb8cd2d8b77aec523a758d73a64f70b07?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (21632da) will **decrease** coverage by `3.91%`.
> The diff coverage is `81.49%`.
> :exclamation: Current head 5e68133 differs from pull request most recent head 3d822b7. Consider uploading reports for the commit 3d822b7 to get more accurate results
[![Impacted file tree graph](https://codecov.io/gh/apache/pinot/pull/8029/graphs/tree.svg?width=650&height=150&src=pr&token=4ibza2ugkz&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
```diff
@@            Coverage Diff             @@
##            master    #8029      +/-   ##
============================================
- Coverage    71.00%   67.08%    -3.92%
+ Complexity    4320     4158      -162
============================================
  Files         1629     1239      -390
  Lines        85132    62734    -22398
  Branches     12812     9816     -2996
============================================
- Hits         60445    42084    -18361
+ Misses       20526    17637     -2889
+ Partials      4161     3013     -1148
```
| Flag | Coverage Δ | |
|---|---|---|
| integration1 | `?` | |
| integration2 | `?` | |
| unittests1 | `67.08% <81.49%> (-0.30%)` | :arrow_down: |
| unittests2 | `?` | |
Flags with carried forward coverage won't be shown. [Click here](https://docs.codecov.io/docs/carryforward-flags?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#carryforward-flags-in-the-pull-request-comment) to find out more.
| [Impacted Files](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | Coverage Δ | |
|---|---|---|
| [...on/src/main/java/org/apache/pinot/serde/SerDe.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9zZXJkZS9TZXJEZS5qYXZh) | `0.00% <0.00%> (ø)` | |
| [...t/core/query/selection/SelectionOperatorUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9zZWxlY3Rpb24vU2VsZWN0aW9uT3BlcmF0b3JVdGlscy5qYXZh) | `86.08% <0.00%> (-6.10%)` | :arrow_down: |
| [...a/org/apache/pinot/spi/filesystem/BasePinotFS.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3Qtc3BpL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9zcGkvZmlsZXN5c3RlbS9CYXNlUGlub3RGUy5qYXZh) | `70.58% <ø> (ø)` | |
| [...query/reduce/ColumnDataToBlockValSetConverter.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvQ29sdW1uRGF0YVRvQmxvY2tWYWxTZXRDb252ZXJ0ZXIuamF2YQ==) | `17.30% <17.30%> (ø)` | |
| [...e/pinot/core/transport/InstanceRequestHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS90cmFuc3BvcnQvSW5zdGFuY2VSZXF1ZXN0SGFuZGxlci5qYXZh) | `52.43% <50.00%> (-8.33%)` | :arrow_down: |
| [...rg/apache/pinot/core/transport/ServerChannels.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS90cmFuc3BvcnQvU2VydmVyQ2hhbm5lbHMuamF2YQ==) | `73.77% <50.00%> (-16.07%)` | :arrow_down: |
| [.../org/apache/pinot/spi/filesystem/LocalPinotFS.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3Qtc3BpL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9zcGkvZmlsZXN5c3RlbS9Mb2NhbFBpbm90RlMuamF2YQ==) | `80.00% <50.00%> (+7.27%)` | :arrow_up: |
| [...org/apache/pinot/sql/parsers/CalciteSqlParser.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9zcWwvcGFyc2Vycy9DYWxjaXRlU3FsUGFyc2VyLmphdmE=) | `85.44% <65.51%> (-2.48%)` | :arrow_down: |
| [...ot/core/query/reduce/filter/RowMatcherFactory.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvZmlsdGVyL1Jvd01hdGNoZXJGYWN0b3J5LmphdmE=) | `66.66% <66.66%> (ø)` | |
| [.../java/org/apache/pinot/core/util/GapfillUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS91dGlsL0dhcGZpbGxVdGlscy5qYXZh) | `75.00% <81.15%> (+11.36%)` | :arrow_up: |
| ... and [657 more](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Last update [21632da...3d822b7](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r815388611
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/plan/GapfillSelectionPlanNode.java
##########
@@ -0,0 +1,90 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.core.plan;
+
+import com.google.common.base.Preconditions;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Set;
+import org.apache.pinot.common.request.context.ExpressionContext;
+import org.apache.pinot.core.common.Operator;
+import org.apache.pinot.core.operator.blocks.IntermediateResultsBlock;
+import org.apache.pinot.core.operator.query.SelectionOnlyOperator;
+import org.apache.pinot.core.operator.transform.TransformOperator;
+import org.apache.pinot.core.query.request.context.QueryContext;
+import org.apache.pinot.core.query.selection.SelectionOperatorUtils;
+import org.apache.pinot.core.util.GapfillUtils;
+import org.apache.pinot.segment.spi.IndexSegment;
+
+
+/**
+ * The <code>GapfillSelectionPlanNode</code> class provides the execution
+ * plan for the pre-aggregation gapfill query on a single segment.
+ */
+public class GapfillSelectionPlanNode implements PlanNode {
+ private final IndexSegment _indexSegment;
+ private final QueryContext _queryContext;
+
+ public GapfillSelectionPlanNode(IndexSegment indexSegment, QueryContext queryContext) {
+ _indexSegment = indexSegment;
+ _queryContext = queryContext;
+ }
+
+ @Override
+ public Operator<IntermediateResultsBlock> run() {
+ int limit = _queryContext.getLimit();
+
+ QueryContext queryContext = getSelectQueryContext();
+ Preconditions.checkArgument(queryContext.getOrderByExpressions() == null,
"The gapfill query should not have an order-by expression.");
Review comment:
Fixed
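The `Preconditions` check discussed in the diff above follows a simple fail-fast validation pattern. A minimal standalone sketch of that pattern in plain Java (no Guava); `GapfillValidation` and `validateNoOrderBy` are hypothetical names for illustration, not the actual Pinot API:

```java
import java.util.List;

// Simplified illustration of the fail-fast validation style used in the
// quoted plan node; class and method names here are hypothetical.
class GapfillValidation {
  // Stand-in for Guava's Preconditions.checkArgument.
  static void checkArgument(boolean condition, String message) {
    if (!condition) {
      throw new IllegalArgumentException(message);
    }
  }

  // Mirrors the quoted rule: a gapfill selection query must not carry
  // its own order-by expressions.
  static void validateNoOrderBy(List<String> orderByExpressions) {
    checkArgument(orderByExpressions == null || orderByExpressions.isEmpty(),
        "The gapfill query should not have an order-by expression.");
  }
}
```

Failing fast at plan time, as the quoted code does, surfaces an unsupported query shape to the caller immediately instead of producing a confusing result later in execution.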
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r829451951
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/reduce/GapFillProcessor.java
##########
@@ -0,0 +1,455 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.core.query.reduce;
+
+import com.google.common.base.Preconditions;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import org.apache.pinot.common.request.context.ExpressionContext;
+import org.apache.pinot.common.request.context.FunctionContext;
+import org.apache.pinot.common.response.broker.BrokerResponseNative;
+import org.apache.pinot.common.response.broker.ResultTable;
+import org.apache.pinot.common.utils.DataSchema;
+import org.apache.pinot.common.utils.DataSchema.ColumnDataType;
+import org.apache.pinot.core.common.BlockValSet;
+import org.apache.pinot.core.data.table.Key;
+import org.apache.pinot.core.query.aggregation.function.AggregationFunction;
+import org.apache.pinot.core.query.aggregation.function.AggregationFunctionFactory;
+import org.apache.pinot.core.query.aggregation.groupby.GroupByResultHolder;
+import org.apache.pinot.core.query.request.context.QueryContext;
+import org.apache.pinot.core.util.GapfillUtils;
+import org.apache.pinot.spi.data.DateTimeFormatSpec;
+import org.apache.pinot.spi.data.DateTimeGranularitySpec;
+
+
+/**
+ * Helper class to reduce and set gap fill results into the BrokerResponseNative
+ */
+@SuppressWarnings({"rawtypes", "unchecked"})
+public class GapFillProcessor {
Review comment:
Fixed
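For context on the helper quoted above: the core of a gapfill reducer is mapping raw timestamps into fixed-size time buckets. A simplified, self-contained sketch of that arithmetic, mirroring `findGapfillBucketIndex` and the `_numOfTimeBuckets` computation that appear later in this thread (class and field names here are illustrative stand-ins, not the actual Pinot implementation):

```java
// Hypothetical, simplified analogue of the time-bucket arithmetic in
// GapfillProcessor (_startMs, _gapfillTimeBucketSize, _numOfTimeBuckets).
class GapfillBuckets {
  private final long _startMs;
  private final long _bucketSizeMs;
  private final int _numBuckets;

  GapfillBuckets(long startMs, long endMs, long bucketSizeMs) {
    _startMs = startMs;
    _bucketSizeMs = bucketSizeMs;
    // Mirrors: _numOfTimeBuckets = (int) ((_endMs - _startMs) / _gapfillTimeBucketSize)
    _numBuckets = (int) ((endMs - startMs) / bucketSizeMs);
  }

  // Mirrors: findGapfillBucketIndex(long time)
  int bucketIndex(long timeMs) {
    return (int) ((timeMs - _startMs) / _bucketSizeMs);
  }

  int getNumBuckets() {
    return _numBuckets;
  }
}
```

Every raw row then lands in exactly one bucket, and buckets with no row for a given entity are the gaps to be filled.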
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r829438895
##########
File path: pinot-broker/src/main/java/org/apache/pinot/broker/requesthandler/BaseBrokerRequestHandler.java
##########
@@ -2183,9 +2192,9 @@ private void attachTimeBoundary(String rawTableName, BrokerRequest brokerRequest
* Processes the optimized broker requests for both OFFLINE and REALTIME table.
*/
protected abstract BrokerResponseNative processBrokerRequest(long requestId, BrokerRequest originalBrokerRequest,
- @Nullable BrokerRequest offlineBrokerRequest, @Nullable Map<ServerInstance, List<String>> offlineRoutingTable,
- @Nullable BrokerRequest realtimeBrokerRequest, @Nullable Map<ServerInstance, List<String>> realtimeRoutingTable,
- long timeoutMs, ServerStats serverStats, RequestStatistics requestStatistics)
+ BrokerRequest brokerRequest, @Nullable BrokerRequest offlineBrokerRequest, @Nullable Map<ServerInstance,
Review comment:
Fixed
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r829527472
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/transport/QueryRouter.java
##########
@@ -114,6 +115,7 @@ public AsyncQueryResponse submitQuery(long requestId, String rawTableName,
Map<ServerRoutingInstance, InstanceRequest> requestMap = new HashMap<>();
if (offlineBrokerRequest != null) {
assert offlineRoutingTable != null;
+ BrokerRequestToQueryContextConverter.validateGapfillQuery(offlineBrokerRequest);
Review comment:
Fixed
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r829494003
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/reduce/filter/RowMatcherFactory.java
##########
@@ -0,0 +1,44 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.core.query.reduce.filter;
+
+import org.apache.pinot.common.request.context.FilterContext;
+
+
+/**
+ * Factory for RowMatcher.
+ */
+public interface RowMatcherFactory {
Review comment:
Fixed
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r829469382
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/reduce/ColumnDataToBlockValSetConverter.java
##########
@@ -0,0 +1,181 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.core.query.reduce;
+
+import java.util.List;
+import javax.annotation.Nullable;
+import org.apache.pinot.common.utils.DataSchema;
+import org.apache.pinot.common.utils.DataTable;
+import org.apache.pinot.core.common.BlockValSet;
+import org.apache.pinot.segment.spi.index.reader.Dictionary;
+import org.apache.pinot.spi.data.FieldSpec;
+
+
+/**
+ * For the gapfill function, all raw data is retrieved from the Pinot servers
+ * and merged on the Pinot broker. The merged data is in {@link DataTable}
+ * format.
+ * As part of the gapfill execution plan, the aggregation functions operate on
+ * the merged data on the Pinot broker, but they only accept input in
+ * {@link BlockValSet} format.
+ * This helper class converts the data from {@link DataTable} rows into the
+ * block of values {@link BlockValSet} that is used as input to the aggregation
+ * functions.
+ */
+@SuppressWarnings({"rawtypes", "unchecked"})
+public class ColumnDataToBlockValSetConverter implements BlockValSet {
+
+ private final FieldSpec.DataType _dataType;
+ private final List<Object[]> _rows;
+ private final int _columnIndex;
+
+ public ColumnDataToBlockValSetConverter(DataSchema.ColumnDataType columnDataType, List<Object[]> rows,
+ int columnIndex) {
+ _dataType = columnDataType.toDataType();
+ _rows = rows;
+ _columnIndex = columnIndex;
+ }
+
+ @Override
+ public FieldSpec.DataType getValueType() {
+ return _dataType;
+ }
+
+ @Override
+ public boolean isSingleValue() {
+ return true;
+ }
+
+ @Nullable
+ @Override
+ public Dictionary getDictionary() {
+ throw new UnsupportedOperationException("Not supported");
Review comment:
Fixed
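The converter quoted above adapts row-oriented broker data to the column-oriented `BlockValSet` interface that aggregation functions consume. A much-simplified sketch of the idea (a hypothetical `ColumnView`, not the real class, which must implement the full `BlockValSet` contract):

```java
import java.util.List;

// Hypothetical, much-simplified analogue of ColumnDataToBlockValSetConverter:
// exposes one column of row-oriented data (List<Object[]>) as a typed array,
// the way BlockValSet exposes single-value columns to aggregation functions.
class ColumnView {
  private final List<Object[]> _rows;
  private final int _columnIndex;

  ColumnView(List<Object[]> rows, int columnIndex) {
    _rows = rows;
    _columnIndex = columnIndex;
  }

  // Loosely analogous to BlockValSet#getDoubleValuesSV(): materialize the
  // selected column as a double array.
  double[] getDoubleValues() {
    double[] values = new double[_rows.size()];
    for (int i = 0; i < _rows.size(); i++) {
      values[i] = ((Number) _rows.get(i)[_columnIndex]).doubleValue();
    }
    return values;
  }
}
```

The adapter lets existing column-oriented aggregation code run unchanged on the broker's merged row data, at the cost of one materialization pass per column.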
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r829468126
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/request/context/QueryContext.java
##########
@@ -375,6 +393,57 @@ public String toString() {
private Map<String, String> _queryOptions;
private Map<String, String> _debugOptions;
private BrokerRequest _brokerRequest;
+ private QueryContext _subQueryContext;
+
+ /**
+ * Helper method to extract AGGREGATION FunctionContexts and FILTER FilterContexts from the given expression.
+ */
+ private static void getAggregations(ExpressionContext expression,
+ List<Pair<FunctionContext, FilterContext>> filteredAggregations) {
+ FunctionContext function = expression.getFunction();
+ if (function == null) {
+ return;
+ }
+ if (function.getType() == FunctionContext.Type.AGGREGATION) {
+ // Aggregation
+ filteredAggregations.add(Pair.of(function, null));
+ } else {
+ List<ExpressionContext> arguments = function.getArguments();
+ if (function.getFunctionName().equalsIgnoreCase("filter")) {
+ // Filtered aggregation
+ Preconditions.checkState(arguments.size() == 2, "FILTER must contain 2 arguments");
+ FunctionContext aggregation = arguments.get(0).getFunction();
+ Preconditions.checkState(aggregation != null && aggregation.getType() == FunctionContext.Type.AGGREGATION,
+ "First argument of FILTER must be an aggregation function");
+ ExpressionContext filterExpression = arguments.get(1);
+ Preconditions.checkState(filterExpression.getFunction() != null
+ && filterExpression.getFunction().getType() == FunctionContext.Type.TRANSFORM,
+ "Second argument of FILTER must be a filter expression");
+ FilterContext filter = RequestContextUtils.getFilter(filterExpression);
+ filteredAggregations.add(Pair.of(aggregation, filter));
+ } else {
+ // Transform
+ for (ExpressionContext argument : arguments) {
+ getAggregations(argument, filteredAggregations);
+ }
+ }
+ }
+ }
+
+ /**
+ * Helper method to extract AGGREGATION FunctionContexts and FILTER FilterContexts from the given filter.
+ */
+ private static void getAggregations(FilterContext filter,
+ List<Pair<FunctionContext, FilterContext>> filteredAggregations) {
+ List<FilterContext> children = filter.getChildren();
+ if (children != null) {
+ for (FilterContext child : children) {
+ getAggregations(child, filteredAggregations);
+ }
+ } else {
+ getAggregations(filter.getPredicate().getLhs(), filteredAggregations);
+ }
+ }
Review comment:
Fixed
[GitHub] [pinot] codecov-commenter edited a comment on pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#issuecomment-1013577523
# [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) Report
> Merging [#8029](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (1bdd2f3) into [master](https://codecov.io/gh/apache/pinot/commit/360a2051c1eb20af552b8222eda97eb4a3e95387?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (360a205) will **decrease** coverage by `3.74%`.
> The diff coverage is `78.36%`.
> :exclamation: Current head 1bdd2f3 differs from pull request most recent head 8741eec. Consider uploading reports for the commit 8741eec to get more accurate results
```diff
@@ Coverage Diff @@
## master #8029 +/- ##
============================================
- Coverage 70.81% 67.06% -3.75%
+ Complexity 4264 4185 -79
============================================
Files 1639 1246 -393
Lines 85919 63031 -22888
Branches 12921 9864 -3057
============================================
- Hits 60840 42271 -18569
+ Misses 20873 17738 -3135
+ Partials 4206 3022 -1184
```
| Flag | Coverage Δ | |
|---|---|---|
| integration1 | `?` | |
| integration2 | `?` | |
| unittests1 | `67.06% <78.36%> (+0.10%)` | :arrow_up: |
| unittests2 | `?` | |
Flags with carried forward coverage won't be shown. [Click here](https://docs.codecov.io/docs/carryforward-flags?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#carryforward-flags-in-the-pull-request-comment) to find out more.
| [Impacted Files](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | Coverage Δ | |
|---|---|---|
| [...pache/pinot/common/utils/request/RequestUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9jb21tb24vdXRpbHMvcmVxdWVzdC9SZXF1ZXN0VXRpbHMuamF2YQ==) | `85.71% <0.00%> (-1.79%)` | :arrow_down: |
| [...t/core/query/selection/SelectionOperatorUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9zZWxlY3Rpb24vU2VsZWN0aW9uT3BlcmF0b3JVdGlscy5qYXZh) | `86.64% <ø> (-5.54%)` | :arrow_down: |
| [...e/pinot/core/query/reduce/RowBasedBlockValSet.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUm93QmFzZWRCbG9ja1ZhbFNldC5qYXZh) | `16.12% <16.12%> (ø)` | |
| [...inot/core/query/reduce/PostAggregationHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUG9zdEFnZ3JlZ2F0aW9uSGFuZGxlci5qYXZh) | `91.89% <66.66%> (+0.12%)` | :arrow_up: |
| [...ot/core/query/reduce/filter/RowMatcherFactory.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvZmlsdGVyL1Jvd01hdGNoZXJGYWN0b3J5LmphdmE=) | `66.66% <66.66%> (ø)` | |
| [.../java/org/apache/pinot/core/util/GapfillUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS91dGlsL0dhcGZpbGxVdGlscy5qYXZh) | `71.27% <74.35%> (+7.63%)` | :arrow_up: |
| [...ore/query/reduce/filter/LiteralValueExtractor.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvZmlsdGVyL0xpdGVyYWxWYWx1ZUV4dHJhY3Rvci5qYXZh) | `83.33% <83.33%> (ø)` | |
| [.../pinot/core/query/reduce/GapfillFilterHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvR2FwZmlsbEZpbHRlckhhbmRsZXIuamF2YQ==) | `85.71% <85.71%> (ø)` | |
| [.../core/query/reduce/filter/PredicateRowMatcher.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvZmlsdGVyL1ByZWRpY2F0ZVJvd01hdGNoZXIuamF2YQ==) | `87.50% <87.50%> (ø)` | |
| [...org/apache/pinot/sql/parsers/CalciteSqlParser.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9zcWwvcGFyc2Vycy9DYWxjaXRlU3FsUGFyc2VyLmphdmE=) | `86.60% <88.88%> (-1.10%)` | :arrow_down: |
| ... and [642 more](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Last update [360a205...8741eec](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
[GitHub] [pinot] Jackie-Jiang commented on a change in pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
Jackie-Jiang commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r831517930
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/reduce/GapfillProcessor.java
##########
@@ -0,0 +1,477 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.core.query.reduce;
+
+import com.google.common.base.Preconditions;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import org.apache.pinot.common.request.context.ExpressionContext;
+import org.apache.pinot.common.request.context.FunctionContext;
+import org.apache.pinot.common.response.broker.BrokerResponseNative;
+import org.apache.pinot.common.response.broker.ResultTable;
+import org.apache.pinot.common.utils.DataSchema;
+import org.apache.pinot.common.utils.DataSchema.ColumnDataType;
+import org.apache.pinot.core.common.BlockValSet;
+import org.apache.pinot.core.data.table.Key;
+import org.apache.pinot.core.query.aggregation.function.AggregationFunction;
+import org.apache.pinot.core.query.aggregation.function.AggregationFunctionFactory;
+import org.apache.pinot.core.query.aggregation.function.CountAggregationFunction;
+import org.apache.pinot.core.query.aggregation.groupby.GroupByResultHolder;
+import org.apache.pinot.core.query.request.context.QueryContext;
+import org.apache.pinot.core.util.GapfillUtils;
+import org.apache.pinot.spi.data.DateTimeFormatSpec;
+import org.apache.pinot.spi.data.DateTimeGranularitySpec;
+
+
+/**
+ * Helper class to reduce and set gap fill results into the BrokerResponseNative
+ */
+@SuppressWarnings({"rawtypes", "unchecked"})
+public class GapfillProcessor {
+ private final QueryContext _queryContext;
+
+ private final int _limitForAggregatedResult;
+ private final DateTimeGranularitySpec _gapfillDateTimeGranularity;
+ private final DateTimeGranularitySpec _postGapfillDateTimeGranularity;
+ private final DateTimeFormatSpec _dateTimeFormatter;
+ private final long _startMs;
+ private final long _endMs;
+ private final long _gapfillTimeBucketSize;
+ private final long _postGapfillTimeBucketSize;
+ private final int _numOfTimeBuckets;
+ private final List<Integer> _groupByKeyIndexes;
+ private final Set<Key> _groupByKeys;
+ private final Map<Key, Object[]> _previousByGroupKey;
+ private final Map<String, ExpressionContext> _fillExpressions;
+ private final List<ExpressionContext> _timeSeries;
+ private final GapfillUtils.GapfillType _gapfillType;
+ private int _limitForGapfilledResult;
+ private boolean[] _isGroupBySelections;
+ private final int _timeBucketColumnIndex;
+ private int[] _sourceColumnIndexForResultSchema = null;
+ private final int _aggregationSize;
+
+ GapfillProcessor(QueryContext queryContext, GapfillUtils.GapfillType gapfillType) {
+ _queryContext = queryContext;
+ _gapfillType = gapfillType;
+ _limitForAggregatedResult = queryContext.getLimit();
+ if (_gapfillType == GapfillUtils.GapfillType.AGGREGATE_GAP_FILL
+ || _gapfillType == GapfillUtils.GapfillType.GAP_FILL) {
+ _limitForGapfilledResult = queryContext.getLimit();
+ } else {
+ _limitForGapfilledResult = queryContext.getSubquery().getLimit();
+ }
+
+ ExpressionContext gapFillSelection = GapfillUtils.getGapfillExpressionContext(queryContext, _gapfillType);
+ _timeBucketColumnIndex = GapfillUtils.findTimeBucketColumnIndex(queryContext, _gapfillType);
+
+ List<ExpressionContext> args = gapFillSelection.getFunction().getArguments();
+
+ _dateTimeFormatter = new DateTimeFormatSpec(args.get(1).getLiteral());
+ _gapfillDateTimeGranularity = new DateTimeGranularitySpec(args.get(4).getLiteral());
+ if (args.get(5).getLiteral() == null) {
+ _postGapfillDateTimeGranularity = _gapfillDateTimeGranularity;
+ } else {
+ _postGapfillDateTimeGranularity = new DateTimeGranularitySpec(args.get(5).getLiteral());
+ }
+ String start = args.get(2).getLiteral();
+ _startMs = truncate(_dateTimeFormatter.fromFormatToMillis(start));
+ String end = args.get(3).getLiteral();
+ _endMs = truncate(_dateTimeFormatter.fromFormatToMillis(end));
+ _gapfillTimeBucketSize = _gapfillDateTimeGranularity.granularityToMillis();
+ _postGapfillTimeBucketSize = _postGapfillDateTimeGranularity.granularityToMillis();
+ _numOfTimeBuckets = (int) ((_endMs - _startMs) / _gapfillTimeBucketSize);
+
+ _fillExpressions = GapfillUtils.getFillExpressions(gapFillSelection);
+
+ _aggregationSize = (int) (_postGapfillTimeBucketSize / _gapfillTimeBucketSize);
+
+ _previousByGroupKey = new HashMap<>();
+ _groupByKeyIndexes = new ArrayList<>();
+ _groupByKeys = new HashSet<>();
+
+ ExpressionContext timeseriesOn = GapfillUtils.getTimeSeriesOnExpressionContext(gapFillSelection);
+ _timeSeries = timeseriesOn.getFunction().getArguments();
+ }
+
+ private int findGapfillBucketIndex(long time) {
+ return (int) ((time - _startMs) / _gapfillTimeBucketSize);
+ }
+
+ private void replaceColumnNameWithAlias(DataSchema dataSchema) {
+ QueryContext queryContext;
+ if (_gapfillType == GapfillUtils.GapfillType.AGGREGATE_GAP_FILL_AGGREGATE) {
+ queryContext = _queryContext.getSubquery().getSubquery();
+ } else if (_gapfillType == GapfillUtils.GapfillType.GAP_FILL) {
+ queryContext = _queryContext;
+ } else {
+ queryContext = _queryContext.getSubquery();
+ }
+ List<String> aliasList = queryContext.getAliasList();
+ Map<String, String> columnNameToAliasMap = new HashMap<>();
+ for (int i = 0; i < aliasList.size(); i++) {
+ if (aliasList.get(i) != null) {
+ ExpressionContext selection = queryContext.getSelectExpressions().get(i);
+ if (GapfillUtils.isGapfill(selection)) {
+ selection = selection.getFunction().getArguments().get(0);
+ }
+ columnNameToAliasMap.put(selection.toString(), aliasList.get(i));
+ }
+ }
+ for (int i = 0; i < dataSchema.getColumnNames().length; i++) {
+ if (columnNameToAliasMap.containsKey(dataSchema.getColumnNames()[i])) {
+ dataSchema.getColumnNames()[i] = columnNameToAliasMap.get(dataSchema.getColumnNames()[i]);
+ }
+ }
+ }
+
+ /**
+ * Three things happen here:
+ * 1. Sort the result sets from all Pinot servers based on timestamp.
+ * 2. Gapfill the data for missing entities per time bucket.
+ * 3. Aggregate the dataset per time bucket.
+ */
+ public void process(BrokerResponseNative brokerResponseNative) {
+ DataSchema dataSchema = brokerResponseNative.getResultTable().getDataSchema();
+ DataSchema resultTableSchema = getResultTableDataSchema(dataSchema);
+ if (brokerResponseNative.getResultTable().getRows().isEmpty()) {
+ brokerResponseNative.setResultTable(new ResultTable(resultTableSchema, Collections.emptyList()));
+ return;
+ }
+
+ String[] columns = dataSchema.getColumnNames();
+
+ Map<String, Integer> indexes = new HashMap<>();
+ for (int i = 0; i < columns.length; i++) {
+ indexes.put(columns[i], i);
+ }
+
+ _isGroupBySelections = new boolean[dataSchema.getColumnDataTypes().length];
+
+ // The first argument of timeSeries is the time column. The remaining ones define the entity.
+ for (ExpressionContext entityColumn : _timeSeries) {
+ int index = indexes.get(entityColumn.getIdentifier());
+ _isGroupBySelections[index] = true;
+ }
+
+ for (int i = 0; i < _isGroupBySelections.length; i++) {
+ if (_isGroupBySelections[i]) {
+ _groupByKeyIndexes.add(i);
+ }
+ }
+
+ List<Object[]>[] timeBucketedRawRows = putRawRowsIntoTimeBucket(brokerResponseNative.getResultTable().getRows());
+
+ replaceColumnNameWithAlias(dataSchema);
+
+ if (_queryContext.getAggregationFunctions() == null) {
+
+ Map<String, Integer> sourceColumnsIndexes = new HashMap<>();
+ for (int i = 0; i < dataSchema.getColumnNames().length; i++) {
+ sourceColumnsIndexes.put(dataSchema.getColumnName(i), i);
+ }
+ _sourceColumnIndexForResultSchema = new int[resultTableSchema.getColumnNames().length];
+ for (int i = 0; i < _sourceColumnIndexForResultSchema.length; i++) {
+ _sourceColumnIndexForResultSchema[i] = sourceColumnsIndexes.get(resultTableSchema.getColumnName(i));
+ }
+ }
+
+ List<Object[]> resultRows = gapFillAndAggregate(timeBucketedRawRows, resultTableSchema, dataSchema);
+ brokerResponseNative.setResultTable(new ResultTable(resultTableSchema, resultRows));
+ }
+
+ /**
+ * Constructs the DataSchema for the ResultTable.
+ */
+ private DataSchema getResultTableDataSchema(DataSchema dataSchema) {
+ if (_gapfillType == GapfillUtils.GapfillType.GAP_FILL) {
+ return dataSchema;
+ }
+
+ int numOfColumns = _queryContext.getSelectExpressions().size();
+ String[] columnNames = new String[numOfColumns];
+ ColumnDataType[] columnDataTypes = new ColumnDataType[numOfColumns];
+ for (int i = 0; i < numOfColumns; i++) {
+ ExpressionContext expressionContext = _queryContext.getSelectExpressions().get(i);
+ if (GapfillUtils.isGapfill(expressionContext)) {
+ expressionContext = expressionContext.getFunction().getArguments().get(0);
+ }
+ if (expressionContext.getType() != ExpressionContext.Type.FUNCTION) {
+ columnNames[i] = expressionContext.getIdentifier();
+ columnDataTypes[i] = ColumnDataType.STRING;
+ } else {
+ FunctionContext functionContext = expressionContext.getFunction();
+ AggregationFunction aggregationFunction =
+ AggregationFunctionFactory.getAggregationFunction(functionContext, _queryContext);
+ columnDataTypes[i] = aggregationFunction.getFinalResultColumnType();
+ columnNames[i] = functionContext.toString();
+ }
+ }
+ return new DataSchema(columnNames, columnDataTypes);
+ }
+
+ private Key constructGroupKeys(Object[] row) {
+ Object[] groupKeys = new Object[_groupByKeyIndexes.size()];
+ for (int i = 0; i < _groupByKeyIndexes.size(); i++) {
+ groupKeys[i] = row[_groupByKeyIndexes.get(i)];
+ }
+ return new Key(groupKeys);
+ }
+
+ private long truncate(long epoch) {
+ int sz = _gapfillDateTimeGranularity.getSize();
+ return epoch / sz * sz;
+ }
+
+ private List<Object[]> gapFillAndAggregate(List<Object[]>[] timeBucketedRawRows,
+ DataSchema dataSchemaForAggregatedResult, DataSchema dataSchema) {
+ List<Object[]> result = new ArrayList<>();
+
+ GapfillFilterHandler postGapfillFilterHandler = null;
+ if (_queryContext.getSubquery() != null && _queryContext.getFilter() != null) {
+ postGapfillFilterHandler = new GapfillFilterHandler(_queryContext.getFilter(), dataSchema);
+ }
+ GapfillFilterHandler postAggregateHavingFilterHandler = null;
+ if (_queryContext.getHavingFilter() != null) {
+ postAggregateHavingFilterHandler =
+ new GapfillFilterHandler(_queryContext.getHavingFilter(), dataSchemaForAggregatedResult);
+ }
+ long start = _startMs;
+ ColumnDataType[] resultColumnDataTypes = dataSchema.getColumnDataTypes();
+ List<Object[]> bucketedResult = new ArrayList<>();
+ for (long time = _startMs; time < _endMs; time += _gapfillTimeBucketSize) {
+ int index = findGapfillBucketIndex(time);
+ gapfill(time, bucketedResult, timeBucketedRawRows[index], dataSchema, postGapfillFilterHandler);
+ if (_queryContext.getAggregationFunctions() == null) {
+ for (Object[] row : bucketedResult) {
Review comment:
(code format) reformat this file
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/reduce/GapfillProcessor.java
##########
@@ -0,0 +1,477 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.core.query.reduce;
+
+import com.google.common.base.Preconditions;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import org.apache.pinot.common.request.context.ExpressionContext;
+import org.apache.pinot.common.request.context.FunctionContext;
+import org.apache.pinot.common.response.broker.BrokerResponseNative;
+import org.apache.pinot.common.response.broker.ResultTable;
+import org.apache.pinot.common.utils.DataSchema;
+import org.apache.pinot.common.utils.DataSchema.ColumnDataType;
+import org.apache.pinot.core.common.BlockValSet;
+import org.apache.pinot.core.data.table.Key;
+import org.apache.pinot.core.query.aggregation.function.AggregationFunction;
+import org.apache.pinot.core.query.aggregation.function.AggregationFunctionFactory;
+import org.apache.pinot.core.query.aggregation.function.CountAggregationFunction;
+import org.apache.pinot.core.query.aggregation.groupby.GroupByResultHolder;
+import org.apache.pinot.core.query.request.context.QueryContext;
+import org.apache.pinot.core.util.GapfillUtils;
+import org.apache.pinot.spi.data.DateTimeFormatSpec;
+import org.apache.pinot.spi.data.DateTimeGranularitySpec;
+
+
+/**
+ * Helper class to reduce and set gap fill results into the BrokerResponseNative
+ */
+@SuppressWarnings({"rawtypes", "unchecked"})
+public class GapfillProcessor {
+ private final QueryContext _queryContext;
+
+ private final int _limitForAggregatedResult;
+ private final DateTimeGranularitySpec _gapfillDateTimeGranularity;
+ private final DateTimeGranularitySpec _postGapfillDateTimeGranularity;
+ private final DateTimeFormatSpec _dateTimeFormatter;
+ private final long _startMs;
+ private final long _endMs;
+ private final long _gapfillTimeBucketSize;
+ private final long _postGapfillTimeBucketSize;
+ private final int _numOfTimeBuckets;
+ private final List<Integer> _groupByKeyIndexes;
+ private final Set<Key> _groupByKeys;
+ private final Map<Key, Object[]> _previousByGroupKey;
+ private final Map<String, ExpressionContext> _fillExpressions;
+ private final List<ExpressionContext> _timeSeries;
+ private final GapfillUtils.GapfillType _gapfillType;
+ private int _limitForGapfilledResult;
+ private boolean[] _isGroupBySelections;
+ private final int _timeBucketColumnIndex;
+ private int[] _sourceColumnIndexForResultSchema = null;
+ private final int _aggregationSize;
+
+ GapfillProcessor(QueryContext queryContext, GapfillUtils.GapfillType gapfillType) {
+ _queryContext = queryContext;
+ _gapfillType = gapfillType;
+ _limitForAggregatedResult = queryContext.getLimit();
+ if (_gapfillType == GapfillUtils.GapfillType.AGGREGATE_GAP_FILL
+ || _gapfillType == GapfillUtils.GapfillType.GAP_FILL) {
+ _limitForGapfilledResult = queryContext.getLimit();
+ } else {
+ _limitForGapfilledResult = queryContext.getSubquery().getLimit();
+ }
+
+ ExpressionContext gapFillSelection = GapfillUtils.getGapfillExpressionContext(queryContext, _gapfillType);
+ _timeBucketColumnIndex = GapfillUtils.findTimeBucketColumnIndex(queryContext, _gapfillType);
+
+ List<ExpressionContext> args = gapFillSelection.getFunction().getArguments();
+
+ _dateTimeFormatter = new DateTimeFormatSpec(args.get(1).getLiteral());
+ _gapfillDateTimeGranularity = new DateTimeGranularitySpec(args.get(4).getLiteral());
+ if (args.get(5).getLiteral() == null) {
+ _postGapfillDateTimeGranularity = _gapfillDateTimeGranularity;
+ } else {
+ _postGapfillDateTimeGranularity = new DateTimeGranularitySpec(args.get(5).getLiteral());
+ }
+ String start = args.get(2).getLiteral();
+ _startMs = truncate(_dateTimeFormatter.fromFormatToMillis(start));
+ String end = args.get(3).getLiteral();
+ _endMs = truncate(_dateTimeFormatter.fromFormatToMillis(end));
+ _gapfillTimeBucketSize = _gapfillDateTimeGranularity.granularityToMillis();
+ _postGapfillTimeBucketSize = _postGapfillDateTimeGranularity.granularityToMillis();
+ _numOfTimeBuckets = (int) ((_endMs - _startMs) / _gapfillTimeBucketSize);
+
+ _fillExpressions = GapfillUtils.getFillExpressions(gapFillSelection);
+
+ _aggregationSize = (int) (_postGapfillTimeBucketSize / _gapfillTimeBucketSize);
+
+ _previousByGroupKey = new HashMap<>();
+ _groupByKeyIndexes = new ArrayList<>();
+ _groupByKeys = new HashSet<>();
+
+ ExpressionContext timeseriesOn = GapfillUtils.getTimeSeriesOnExpressionContext(gapFillSelection);
+ _timeSeries = timeseriesOn.getFunction().getArguments();
+ }
+
+ private int findGapfillBucketIndex(long time) {
+ return (int) ((time - _startMs) / _gapfillTimeBucketSize);
+ }
+
+ private void replaceColumnNameWithAlias(DataSchema dataSchema) {
+ QueryContext queryContext;
+ if (_gapfillType == GapfillUtils.GapfillType.AGGREGATE_GAP_FILL_AGGREGATE) {
+ queryContext = _queryContext.getSubquery().getSubquery();
+ } else if (_gapfillType == GapfillUtils.GapfillType.GAP_FILL) {
+ queryContext = _queryContext;
+ } else {
+ queryContext = _queryContext.getSubquery();
+ }
+ List<String> aliasList = queryContext.getAliasList();
+ Map<String, String> columnNameToAliasMap = new HashMap<>();
+ for (int i = 0; i < aliasList.size(); i++) {
+ if (aliasList.get(i) != null) {
+ ExpressionContext selection = queryContext.getSelectExpressions().get(i);
+ if (GapfillUtils.isGapfill(selection)) {
+ selection = selection.getFunction().getArguments().get(0);
+ }
+ columnNameToAliasMap.put(selection.toString(), aliasList.get(i));
+ }
+ }
+ for (int i = 0; i < dataSchema.getColumnNames().length; i++) {
+ if (columnNameToAliasMap.containsKey(dataSchema.getColumnNames()[i])) {
+ dataSchema.getColumnNames()[i] = columnNameToAliasMap.get(dataSchema.getColumnNames()[i]);
+ }
+ }
+ }
+
+ /**
+ * Three things happen here:
+ * 1. Sort the result sets from all Pinot servers based on timestamp.
+ * 2. Gap-fill the data for missing entities per time bucket.
+ * 3. Aggregate the dataset per time bucket.
+ */
+ public void process(BrokerResponseNative brokerResponseNative) {
+ DataSchema dataSchema = brokerResponseNative.getResultTable().getDataSchema();
+ DataSchema resultTableSchema = getResultTableDataSchema(dataSchema);
+ if (brokerResponseNative.getResultTable().getRows().isEmpty()) {
+ brokerResponseNative.setResultTable(new ResultTable(resultTableSchema, Collections.emptyList()));
+ return;
+ }
+
+ String[] columns = dataSchema.getColumnNames();
+
+ Map<String, Integer> indexes = new HashMap<>();
+ for (int i = 0; i < columns.length; i++) {
+ indexes.put(columns[i], i);
+ }
+
+ _isGroupBySelections = new boolean[dataSchema.getColumnDataTypes().length];
+
+ // The first argument of timeSeries is the time column. The remaining ones define the entity.
+ for (ExpressionContext entityColumn : _timeSeries) {
+ int index = indexes.get(entityColumn.getIdentifier());
+ _isGroupBySelections[index] = true;
+ }
+
+ for (int i = 0; i < _isGroupBySelections.length; i++) {
+ if (_isGroupBySelections[i]) {
+ _groupByKeyIndexes.add(i);
+ }
+ }
+
+ List<Object[]>[] timeBucketedRawRows = putRawRowsIntoTimeBucket(brokerResponseNative.getResultTable().getRows());
+
+ replaceColumnNameWithAlias(dataSchema);
+
+ if (_queryContext.getAggregationFunctions() == null) {
+
+ Map<String, Integer> sourceColumnsIndexes = new HashMap<>();
+ for (int i = 0; i < dataSchema.getColumnNames().length; i++) {
+ sourceColumnsIndexes.put(dataSchema.getColumnName(i), i);
+ }
+ _sourceColumnIndexForResultSchema = new int[resultTableSchema.getColumnNames().length];
+ for (int i = 0; i < _sourceColumnIndexForResultSchema.length; i++) {
+ _sourceColumnIndexForResultSchema[i] = sourceColumnsIndexes.get(resultTableSchema.getColumnName(i));
+ }
+ }
+
+ List<Object[]> resultRows = gapFillAndAggregate(timeBucketedRawRows, resultTableSchema, dataSchema);
+ brokerResponseNative.setResultTable(new ResultTable(resultTableSchema, resultRows));
+ }
+
+ /**
+ * Constructs the DataSchema for the ResultTable.
+ */
+ private DataSchema getResultTableDataSchema(DataSchema dataSchema) {
+ if (_gapfillType == GapfillUtils.GapfillType.GAP_FILL) {
+ return dataSchema;
+ }
+
+ int numOfColumns = _queryContext.getSelectExpressions().size();
+ String[] columnNames = new String[numOfColumns];
+ ColumnDataType[] columnDataTypes = new ColumnDataType[numOfColumns];
+ for (int i = 0; i < numOfColumns; i++) {
+ ExpressionContext expressionContext = _queryContext.getSelectExpressions().get(i);
+ if (GapfillUtils.isGapfill(expressionContext)) {
+ expressionContext = expressionContext.getFunction().getArguments().get(0);
+ }
+ if (expressionContext.getType() != ExpressionContext.Type.FUNCTION) {
+ columnNames[i] = expressionContext.getIdentifier();
+ columnDataTypes[i] = ColumnDataType.STRING;
+ } else {
+ FunctionContext functionContext = expressionContext.getFunction();
+ AggregationFunction aggregationFunction =
+ AggregationFunctionFactory.getAggregationFunction(functionContext, _queryContext);
+ columnDataTypes[i] = aggregationFunction.getFinalResultColumnType();
+ columnNames[i] = functionContext.toString();
+ }
+ }
+ return new DataSchema(columnNames, columnDataTypes);
+ }
+
+ private Key constructGroupKeys(Object[] row) {
+ Object[] groupKeys = new Object[_groupByKeyIndexes.size()];
+ for (int i = 0; i < _groupByKeyIndexes.size(); i++) {
+ groupKeys[i] = row[_groupByKeyIndexes.get(i)];
+ }
+ return new Key(groupKeys);
+ }
+
+ private long truncate(long epoch) {
+ int sz = _gapfillDateTimeGranularity.getSize();
+ return epoch / sz * sz;
+ }
+
+ private List<Object[]> gapFillAndAggregate(List<Object[]>[] timeBucketedRawRows,
+ DataSchema dataSchemaForAggregatedResult, DataSchema dataSchema) {
+ List<Object[]> result = new ArrayList<>();
+
+ GapfillFilterHandler postGapfillFilterHandler = null;
+ if (_queryContext.getSubquery() != null && _queryContext.getFilter() != null) {
+ postGapfillFilterHandler = new GapfillFilterHandler(_queryContext.getFilter(), dataSchema);
+ }
+ GapfillFilterHandler postAggregateHavingFilterHandler = null;
+ if (_queryContext.getHavingFilter() != null) {
+ postAggregateHavingFilterHandler =
+ new GapfillFilterHandler(_queryContext.getHavingFilter(), dataSchemaForAggregatedResult);
+ }
+ long start = _startMs;
+ ColumnDataType[] resultColumnDataTypes = dataSchema.getColumnDataTypes();
+ List<Object[]> bucketedResult = new ArrayList<>();
+ for (long time = _startMs; time < _endMs; time += _gapfillTimeBucketSize) {
+ int index = findGapfillBucketIndex(time);
+ gapfill(time, bucketedResult, timeBucketedRawRows[index], dataSchema, postGapfillFilterHandler);
+ if (_queryContext.getAggregationFunctions() == null) {
Review comment:
We may simplify the same-granularity case:
```suggestion
if (_queryContext.getAggregationFunctions() == null || _aggregationSize == 1) {
```
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: commits-unsubscribe@pinot.apache.org
For queries about this service, please contact Infrastructure at:
users@infra.apache.org
---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@pinot.apache.org
For additional commands, e-mail: commits-help@pinot.apache.org
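The bucket arithmetic in the GapfillProcessor constructor quoted above (findGapfillBucketIndex and truncate) reduces to plain integer division. A minimal standalone sketch, with illustrative constants that are not taken from the PR (a window bucketed at 5-minute granularity):

```java
public class GapfillBucketMathSketch {
    // Illustrative constants: window start and a 5-minute gapfill bucket size.
    static final long START_MS = 1_600_000_800_000L;
    static final long BUCKET_SIZE_MS = 5 * 60 * 1000L;

    // Mirrors GapfillProcessor.findGapfillBucketIndex: offset from the window
    // start, floored to a bucket ordinal by integer division.
    static int findGapfillBucketIndex(long timeMs) {
        return (int) ((timeMs - START_MS) / BUCKET_SIZE_MS);
    }

    // Mirrors GapfillProcessor.truncate: rounds an epoch value down to a
    // multiple of the granularity size (integer division, then multiplication).
    static long truncate(long epoch, int granularitySize) {
        return epoch / granularitySize * granularitySize;
    }

    public static void main(String[] args) {
        // A timestamp 12 minutes into the window falls in bucket 2.
        long t = START_MS + 12 * 60 * 1000L;
        System.out.println(findGapfillBucketIndex(t)); // 2
        System.out.println(truncate(17L, 5));          // 15
    }
}
```

This is why _startMs and _endMs are truncated first: every raw timestamp then maps to exactly one of the _numOfTimeBuckets indexes.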
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r830415123
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/reduce/GapfillProcessor.java
##########
@@ -0,0 +1,471 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.core.query.reduce;
+
+import com.google.common.base.Preconditions;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import org.apache.pinot.common.request.context.ExpressionContext;
+import org.apache.pinot.common.request.context.FunctionContext;
+import org.apache.pinot.common.response.broker.BrokerResponseNative;
+import org.apache.pinot.common.response.broker.ResultTable;
+import org.apache.pinot.common.utils.DataSchema;
+import org.apache.pinot.common.utils.DataSchema.ColumnDataType;
+import org.apache.pinot.core.common.BlockValSet;
+import org.apache.pinot.core.data.table.Key;
+import org.apache.pinot.core.query.aggregation.function.AggregationFunction;
+import org.apache.pinot.core.query.aggregation.function.AggregationFunctionFactory;
+import org.apache.pinot.core.query.aggregation.function.CountAggregationFunction;
+import org.apache.pinot.core.query.aggregation.groupby.GroupByResultHolder;
+import org.apache.pinot.core.query.request.context.QueryContext;
+import org.apache.pinot.core.util.GapfillUtils;
+import org.apache.pinot.spi.data.DateTimeFormatSpec;
+import org.apache.pinot.spi.data.DateTimeGranularitySpec;
+
+
+/**
+ * Helper class to reduce and set gap fill results into the BrokerResponseNative
+ */
+@SuppressWarnings({"rawtypes", "unchecked"})
+public class GapfillProcessor {
+ private final QueryContext _queryContext;
+
+ private final int _limitForAggregatedResult;
+ private final DateTimeGranularitySpec _gapfillDateTimeGranularity;
+ private final DateTimeGranularitySpec _postGapfillDateTimeGranularity;
+ private final DateTimeFormatSpec _dateTimeFormatter;
+ private final long _startMs;
+ private final long _endMs;
+ private final long _gapfillTimeBucketSize;
+ private final long _postGapfillTimeBucketSize;
+ private final int _numOfTimeBuckets;
+ private final List<Integer> _groupByKeyIndexes;
+ private final Set<Key> _groupByKeys;
+ private final Map<Key, Object[]> _previousByGroupKey;
+ private final Map<String, ExpressionContext> _fillExpressions;
+ private final List<ExpressionContext> _timeSeries;
+ private final GapfillUtils.GapfillType _gapfillType;
+ private int _limitForGapfilledResult;
+ private boolean[] _isGroupBySelections;
+ private final int _timeBucketColumnIndex;
+ private int[] _sourceColumnIndexForResultSchema = null;
+ private final int _aggregationSize;
+
+ GapfillProcessor(QueryContext queryContext, GapfillUtils.GapfillType gapfillType) {
+ _queryContext = queryContext;
+ _gapfillType = gapfillType;
+ _limitForAggregatedResult = queryContext.getLimit();
+ if (_gapfillType == GapfillUtils.GapfillType.AGGREGATE_GAP_FILL
+ || _gapfillType == GapfillUtils.GapfillType.GAP_FILL) {
+ _limitForGapfilledResult = queryContext.getLimit();
+ } else {
+ _limitForGapfilledResult = queryContext.getSubquery().getLimit();
+ }
+
+ ExpressionContext gapFillSelection = GapfillUtils.getGapfillExpressionContext(queryContext, _gapfillType);
+ _timeBucketColumnIndex = GapfillUtils.findTimeBucketColumnIndex(queryContext, _gapfillType);
+
+ List<ExpressionContext> args = gapFillSelection.getFunction().getArguments();
+
+ _dateTimeFormatter = new DateTimeFormatSpec(args.get(1).getLiteral());
+ _gapfillDateTimeGranularity = new DateTimeGranularitySpec(args.get(4).getLiteral());
+ _postGapfillDateTimeGranularity = new DateTimeGranularitySpec(args.get(5).getLiteral());
+ String start = args.get(2).getLiteral();
+ _startMs = truncate(_dateTimeFormatter.fromFormatToMillis(start));
+ String end = args.get(3).getLiteral();
+ _endMs = truncate(_dateTimeFormatter.fromFormatToMillis(end));
+ _gapfillTimeBucketSize = _gapfillDateTimeGranularity.granularityToMillis();
+ _postGapfillTimeBucketSize = _postGapfillDateTimeGranularity.granularityToMillis();
+ _numOfTimeBuckets = (int) ((_endMs - _startMs) / _gapfillTimeBucketSize);
+
+ _fillExpressions = GapfillUtils.getFillExpressions(gapFillSelection);
+
+ _aggregationSize = (int) (_postGapfillTimeBucketSize / _gapfillTimeBucketSize);
+
+ _previousByGroupKey = new HashMap<>();
+ _groupByKeyIndexes = new ArrayList<>();
+ _groupByKeys = new HashSet<>();
+
+ ExpressionContext timeseriesOn = GapfillUtils.getTimeSeriesOnExpressionContext(gapFillSelection);
+ _timeSeries = timeseriesOn.getFunction().getArguments();
+ }
+
+ private int findGapfillBucketIndex(long time) {
+ return (int) ((time - _startMs) / _gapfillTimeBucketSize);
+ }
+
+ private void replaceColumnNameWithAlias(DataSchema dataSchema) {
+ QueryContext queryContext;
+ if (_gapfillType == GapfillUtils.GapfillType.AGGREGATE_GAP_FILL_AGGREGATE) {
+ queryContext = _queryContext.getSubquery().getSubquery();
+ } else if (_gapfillType == GapfillUtils.GapfillType.GAP_FILL) {
+ queryContext = _queryContext;
+ } else {
+ queryContext = _queryContext.getSubquery();
+ }
+ List<String> aliasList = queryContext.getAliasList();
+ Map<String, String> columnNameToAliasMap = new HashMap<>();
+ for (int i = 0; i < aliasList.size(); i++) {
+ if (aliasList.get(i) != null) {
+ ExpressionContext selection = queryContext.getSelectExpressions().get(i);
+ if (GapfillUtils.isGapfill(selection)) {
+ selection = selection.getFunction().getArguments().get(0);
+ }
+ columnNameToAliasMap.put(selection.toString(), aliasList.get(i));
+ }
+ }
+ for (int i = 0; i < dataSchema.getColumnNames().length; i++) {
+ if (columnNameToAliasMap.containsKey(dataSchema.getColumnNames()[i])) {
+ dataSchema.getColumnNames()[i] = columnNameToAliasMap.get(dataSchema.getColumnNames()[i]);
+ }
+ }
+ }
+
+ /**
+ * Three things happen here:
+ * 1. Sort the result sets from all Pinot servers based on timestamp.
+ * 2. Gap-fill the data for missing entities per time bucket.
+ * 3. Aggregate the dataset per time bucket.
+ */
+ public void process(BrokerResponseNative brokerResponseNative) {
+ DataSchema dataSchema = brokerResponseNative.getResultTable().getDataSchema();
+ DataSchema resultTableSchema = getResultTableDataSchema(dataSchema);
+ if (brokerResponseNative.getResultTable().getRows().isEmpty()) {
+ brokerResponseNative.setResultTable(new ResultTable(resultTableSchema, Collections.emptyList()));
+ return;
+ }
+
+ String[] columns = dataSchema.getColumnNames();
+
+ Map<String, Integer> indexes = new HashMap<>();
+ for (int i = 0; i < columns.length; i++) {
+ indexes.put(columns[i], i);
+ }
+
+ _isGroupBySelections = new boolean[dataSchema.getColumnDataTypes().length];
+
+ // The first argument of timeSeries is the time column. The remaining ones define the entity.
+ for (ExpressionContext entityColumn : _timeSeries) {
+ int index = indexes.get(entityColumn.getIdentifier());
+ _isGroupBySelections[index] = true;
+ }
+
+ for (int i = 0; i < _isGroupBySelections.length; i++) {
+ if (_isGroupBySelections[i]) {
+ _groupByKeyIndexes.add(i);
+ }
+ }
+
+ List<Object[]>[] timeBucketedRawRows = putRawRowsIntoTimeBucket(brokerResponseNative.getResultTable().getRows());
+
+ replaceColumnNameWithAlias(dataSchema);
+
+ if (_queryContext.getAggregationFunctions() == null) {
+
+ Map<String, Integer> sourceColumnsIndexes = new HashMap<>();
+ for (int i = 0; i < dataSchema.getColumnNames().length; i++) {
+ sourceColumnsIndexes.put(dataSchema.getColumnName(i), i);
+ }
+ _sourceColumnIndexForResultSchema = new int[resultTableSchema.getColumnNames().length];
+ for (int i = 0; i < _sourceColumnIndexForResultSchema.length; i++) {
+ _sourceColumnIndexForResultSchema[i] = sourceColumnsIndexes.get(resultTableSchema.getColumnName(i));
+ }
+ }
+
+ List<Object[]> resultRows = gapFillAndAggregate(timeBucketedRawRows, resultTableSchema, dataSchema);
+ brokerResponseNative.setResultTable(new ResultTable(resultTableSchema, resultRows));
+ }
+
+ /**
+ * Constructs the DataSchema for the ResultTable.
+ */
+ private DataSchema getResultTableDataSchema(DataSchema dataSchema) {
+ if (_gapfillType == GapfillUtils.GapfillType.GAP_FILL) {
+ return dataSchema;
+ }
+
+ int numOfColumns = _queryContext.getSelectExpressions().size();
+ String[] columnNames = new String[numOfColumns];
+ ColumnDataType[] columnDataTypes = new ColumnDataType[numOfColumns];
+ for (int i = 0; i < numOfColumns; i++) {
+ ExpressionContext expressionContext = _queryContext.getSelectExpressions().get(i);
+ if (GapfillUtils.isGapfill(expressionContext)) {
+ expressionContext = expressionContext.getFunction().getArguments().get(0);
+ }
+ if (expressionContext.getType() != ExpressionContext.Type.FUNCTION) {
+ columnNames[i] = expressionContext.getIdentifier();
+ columnDataTypes[i] = ColumnDataType.STRING;
+ } else {
+ FunctionContext functionContext = expressionContext.getFunction();
+ AggregationFunction aggregationFunction =
+ AggregationFunctionFactory.getAggregationFunction(functionContext, _queryContext);
+ columnDataTypes[i] = aggregationFunction.getFinalResultColumnType();
+ columnNames[i] = functionContext.toString();
+ }
+ }
+ return new DataSchema(columnNames, columnDataTypes);
+ }
+
+ private Key constructGroupKeys(Object[] row) {
+ Object[] groupKeys = new Object[_groupByKeyIndexes.size()];
+ for (int i = 0; i < _groupByKeyIndexes.size(); i++) {
+ groupKeys[i] = row[_groupByKeyIndexes.get(i)];
+ }
+ return new Key(groupKeys);
+ }
+
+ private long truncate(long epoch) {
+ int sz = _gapfillDateTimeGranularity.getSize();
+ return epoch / sz * sz;
+ }
+
+ private List<Object[]> gapFillAndAggregate(List<Object[]>[] timeBucketedRawRows,
+ DataSchema dataSchemaForAggregatedResult, DataSchema dataSchema) {
+ List<Object[]> result = new ArrayList<>();
+
+ GapfillFilterHandler postGapfillFilterHandler = null;
+ if (_queryContext.getSubquery() != null && _queryContext.getFilter() != null) {
+ postGapfillFilterHandler = new GapfillFilterHandler(_queryContext.getFilter(), dataSchema);
+ }
+ GapfillFilterHandler postAggregateHavingFilterHandler = null;
+ if (_queryContext.getHavingFilter() != null) {
+ postAggregateHavingFilterHandler =
+ new GapfillFilterHandler(_queryContext.getHavingFilter(), dataSchemaForAggregatedResult);
+ }
+ long start = _startMs;
+ ColumnDataType[] resultColumnDataTypes = dataSchema.getColumnDataTypes();
+ List<Object[]> bucketedResult = new ArrayList<>();
+ for (long time = _startMs; time < _endMs; time += _gapfillTimeBucketSize) {
+ int index = findGapfillBucketIndex(time);
+ gapfill(time, bucketedResult, timeBucketedRawRows[index], dataSchema, postGapfillFilterHandler);
+ if (_queryContext.getAggregationFunctions() == null) {
+ for (Object[] row : bucketedResult) {
+ Object[] resultRow = new Object[_sourceColumnIndexForResultSchema.length];
+ for (int i = 0; i < _sourceColumnIndexForResultSchema.length; i++) {
+ resultRow[i] = row[_sourceColumnIndexForResultSchema[i]];
+ }
+ result.add(resultRow);
+ }
+ bucketedResult = new ArrayList<>();
+ } else if (index % _aggregationSize == _aggregationSize - 1 && bucketedResult.size() > 0) {
+ Object timeCol;
+ if (resultColumnDataTypes[_timeBucketColumnIndex] == ColumnDataType.LONG) {
+ timeCol = Long.valueOf(_dateTimeFormatter.fromMillisToFormat(start));
+ } else {
+ timeCol = _dateTimeFormatter.fromMillisToFormat(start);
+ }
+ List<Object[]> aggregatedRows = aggregateGapfilledData(timeCol, bucketedResult, dataSchema);
+ for (Object[] aggregatedRow : aggregatedRows) {
+ if (postAggregateHavingFilterHandler == null || postAggregateHavingFilterHandler.isMatch(aggregatedRow)) {
+ result.add(aggregatedRow);
+ }
+ if (result.size() >= _limitForAggregatedResult) {
+ return result;
+ }
+ }
+ bucketedResult = new ArrayList<>();
Review comment:
Fixed
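The flush condition discussed in the review above, `index % _aggregationSize == _aggregationSize - 1`, groups consecutive gapfill buckets into post-gapfill aggregation windows. A standalone sketch of just that grouping logic, with illustrative sizes not taken from the PR (note the real loop also handles a trailing partial window, which this sketch omits):

```java
import java.util.ArrayList;
import java.util.List;

public class AggregationWindowSketch {
    // Groups gapfill bucket indexes into post-gapfill aggregation windows using
    // the same flush test as the quoted loop: the last gapfill bucket of each
    // window (index % aggregationSize == aggregationSize - 1) triggers a flush.
    static List<List<Integer>> flushWindows(int numBuckets, int aggregationSize) {
        List<List<Integer>> flushed = new ArrayList<>();
        List<Integer> window = new ArrayList<>();
        for (int index = 0; index < numBuckets; index++) {
            window.add(index);
            if (index % aggregationSize == aggregationSize - 1 && !window.isEmpty()) {
                flushed.add(window);
                window = new ArrayList<>();
            }
        }
        return flushed;
    }

    public static void main(String[] args) {
        // Illustrative: 6 gapfill buckets, aggregated every 3 buckets
        // (aggregationSize = postGapfillBucketSize / gapfillBucketSize).
        System.out.println(flushWindows(6, 3)); // [[0, 1, 2], [3, 4, 5]]
    }
}
```

When aggregationSize is 1 the condition holds for every index, so every bucket flushes immediately, which is why the suggested `_aggregationSize == 1` shortcut in the review is equivalent to the no-aggregation path.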
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r830414833
##########
File path: pinot-core/src/test/java/org/apache/pinot/queries/BaseQueriesTest.java
##########
@@ -197,21 +198,31 @@ protected BrokerResponseNative getBrokerResponseForSqlQuery(String sqlQuery, Pla
}
queryOptions.put(Request.QueryOptionKey.GROUP_BY_MODE, Request.SQL);
queryOptions.put(Request.QueryOptionKey.RESPONSE_FORMAT, Request.SQL);
+ BrokerRequest strippedBrokerRequest = GapfillUtils.stripGapfill(brokerRequest);
+ queryOptions = strippedBrokerRequest.getPinotQuery().getQueryOptions();
+ if (queryOptions == null) {
+ queryOptions = new HashMap<>();
+ strippedBrokerRequest.getPinotQuery().setQueryOptions(queryOptions);
+ }
+ queryOptions.put(Request.QueryOptionKey.GROUP_BY_MODE, Request.SQL);
+ queryOptions.put(Request.QueryOptionKey.RESPONSE_FORMAT, Request.SQL);
QueryContext queryContext = BrokerRequestToQueryContextConverter.convert(brokerRequest);
- return getBrokerResponse(queryContext, planMaker);
+ QueryContext strippedQueryContext = BrokerRequestToQueryContextConverter.convert(strippedBrokerRequest);
+ return getBrokerResponse(queryContext, strippedQueryContext, planMaker);
}
/**
* Run query on multiple index segments with custom plan maker.
* <p>Use this to test the whole flow from server to broker.
* <p>The result should be equivalent to querying 4 identical index segments.
*/
- private BrokerResponseNative getBrokerResponse(QueryContext queryContext, PlanMaker planMaker) {
+ private BrokerResponseNative getBrokerResponse(
+ QueryContext queryContext, QueryContext strippedQueryContext, PlanMaker planMaker) {
Review comment:
Fixed
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: commits-unsubscribe@pinot.apache.org
For queries about this service, please contact Infrastructure at:
users@infra.apache.org
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r831475849
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/reduce/GapfillProcessor.java
##########
@@ -0,0 +1,473 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.core.query.reduce;
+
+import com.google.common.base.Preconditions;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import org.apache.pinot.common.request.context.ExpressionContext;
+import org.apache.pinot.common.request.context.FunctionContext;
+import org.apache.pinot.common.response.broker.BrokerResponseNative;
+import org.apache.pinot.common.response.broker.ResultTable;
+import org.apache.pinot.common.utils.DataSchema;
+import org.apache.pinot.common.utils.DataSchema.ColumnDataType;
+import org.apache.pinot.core.common.BlockValSet;
+import org.apache.pinot.core.data.table.Key;
+import org.apache.pinot.core.query.aggregation.function.AggregationFunction;
+import org.apache.pinot.core.query.aggregation.function.AggregationFunctionFactory;
+import org.apache.pinot.core.query.aggregation.function.CountAggregationFunction;
+import org.apache.pinot.core.query.aggregation.groupby.GroupByResultHolder;
+import org.apache.pinot.core.query.request.context.QueryContext;
+import org.apache.pinot.core.util.GapfillUtils;
+import org.apache.pinot.spi.data.DateTimeFormatSpec;
+import org.apache.pinot.spi.data.DateTimeGranularitySpec;
+
+
+/**
+ * Helper class to reduce and set gap fill results into the BrokerResponseNative
+ */
+@SuppressWarnings({"rawtypes", "unchecked"})
+public class GapfillProcessor {
+ private final QueryContext _queryContext;
+
+ private final int _limitForAggregatedResult;
+ private final DateTimeGranularitySpec _gapfillDateTimeGranularity;
+ private final DateTimeGranularitySpec _postGapfillDateTimeGranularity;
+ private final DateTimeFormatSpec _dateTimeFormatter;
+ private final long _startMs;
+ private final long _endMs;
+ private final long _gapfillTimeBucketSize;
+ private final long _postGapfillTimeBucketSize;
+ private final int _numOfTimeBuckets;
+ private final List<Integer> _groupByKeyIndexes;
+ private final Set<Key> _groupByKeys;
+ private final Map<Key, Object[]> _previousByGroupKey;
+ private final Map<String, ExpressionContext> _fillExpressions;
+ private final List<ExpressionContext> _timeSeries;
+ private final GapfillUtils.GapfillType _gapfillType;
+ private int _limitForGapfilledResult;
+ private boolean[] _isGroupBySelections;
+ private final int _timeBucketColumnIndex;
+ private int[] _sourceColumnIndexForResultSchema = null;
+ private final int _aggregationSize;
+
+ GapfillProcessor(QueryContext queryContext, GapfillUtils.GapfillType gapfillType) {
+ _queryContext = queryContext;
+ _gapfillType = gapfillType;
+ _limitForAggregatedResult = queryContext.getLimit();
+ if (_gapfillType == GapfillUtils.GapfillType.AGGREGATE_GAP_FILL
+ || _gapfillType == GapfillUtils.GapfillType.GAP_FILL) {
+ _limitForGapfilledResult = queryContext.getLimit();
+ } else {
+ _limitForGapfilledResult = queryContext.getSubquery().getLimit();
+ }
+
+ ExpressionContext gapFillSelection = GapfillUtils.getGapfillExpressionContext(queryContext, _gapfillType);
+ _timeBucketColumnIndex = GapfillUtils.findTimeBucketColumnIndex(queryContext, _gapfillType);
+
+ List<ExpressionContext> args = gapFillSelection.getFunction().getArguments();
+
+ _dateTimeFormatter = new DateTimeFormatSpec(args.get(1).getLiteral());
+ _gapfillDateTimeGranularity = new DateTimeGranularitySpec(args.get(4).getLiteral());
+ _postGapfillDateTimeGranularity = new DateTimeGranularitySpec(args.get(5).getLiteral());
+ String start = args.get(2).getLiteral();
+ _startMs = truncate(_dateTimeFormatter.fromFormatToMillis(start));
+ String end = args.get(3).getLiteral();
+ _endMs = truncate(_dateTimeFormatter.fromFormatToMillis(end));
+ _gapfillTimeBucketSize = _gapfillDateTimeGranularity.granularityToMillis();
+ _postGapfillTimeBucketSize = _postGapfillDateTimeGranularity.granularityToMillis();
+ _numOfTimeBuckets = (int) ((_endMs - _startMs) / _gapfillTimeBucketSize);
+
+ _fillExpressions = GapfillUtils.getFillExpressions(gapFillSelection);
+
+ _aggregationSize = (int) (_postGapfillTimeBucketSize / _gapfillTimeBucketSize);
+
+ _previousByGroupKey = new HashMap<>();
+ _groupByKeyIndexes = new ArrayList<>();
+ _groupByKeys = new HashSet<>();
+
+ ExpressionContext timeseriesOn = GapfillUtils.getTimeSeriesOnExpressionContext(gapFillSelection);
+ _timeSeries = timeseriesOn.getFunction().getArguments();
+ }
+
+ private int findGapfillBucketIndex(long time) {
+ return (int) ((time - _startMs) / _gapfillTimeBucketSize);
+ }
+
+ private void replaceColumnNameWithAlias(DataSchema dataSchema) {
+ QueryContext queryContext;
+ if (_gapfillType == GapfillUtils.GapfillType.AGGREGATE_GAP_FILL_AGGREGATE) {
+ queryContext = _queryContext.getSubquery().getSubquery();
+ } else if (_gapfillType == GapfillUtils.GapfillType.GAP_FILL) {
+ queryContext = _queryContext;
+ } else {
+ queryContext = _queryContext.getSubquery();
+ }
+ List<String> aliasList = queryContext.getAliasList();
+ Map<String, String> columnNameToAliasMap = new HashMap<>();
+ for (int i = 0; i < aliasList.size(); i++) {
+ if (aliasList.get(i) != null) {
+ ExpressionContext selection = queryContext.getSelectExpressions().get(i);
+ if (GapfillUtils.isGapfill(selection)) {
+ selection = selection.getFunction().getArguments().get(0);
+ }
+ columnNameToAliasMap.put(selection.toString(), aliasList.get(i));
+ }
+ }
+ for (int i = 0; i < dataSchema.getColumnNames().length; i++) {
+ if (columnNameToAliasMap.containsKey(dataSchema.getColumnNames()[i])) {
+ dataSchema.getColumnNames()[i] = columnNameToAliasMap.get(dataSchema.getColumnNames()[i]);
+ }
+ }
+ }
+
+ /**
+ * Three things happen here:
+ * 1. Sort the result sets from all pinot servers based on timestamp.
+ * 2. Gapfill the data for missing entities per time bucket.
+ * 3. Aggregate the dataset per time bucket.
+ */
+ public void process(BrokerResponseNative brokerResponseNative) {
+ DataSchema dataSchema = brokerResponseNative.getResultTable().getDataSchema();
+ DataSchema resultTableSchema = getResultTableDataSchema(dataSchema);
+ if (brokerResponseNative.getResultTable().getRows().isEmpty()) {
+ brokerResponseNative.setResultTable(new ResultTable(resultTableSchema, Collections.emptyList()));
+ return;
+ }
+
+ String[] columns = dataSchema.getColumnNames();
+
+ Map<String, Integer> indexes = new HashMap<>();
+ for (int i = 0; i < columns.length; i++) {
+ indexes.put(columns[i], i);
+ }
+
+ _isGroupBySelections = new boolean[dataSchema.getColumnDataTypes().length];
+
+ // The first argument of timeSeries is the time column; the remaining ones define the entity.
+ for (ExpressionContext entityColumn : _timeSeries) {
+ int index = indexes.get(entityColumn.getIdentifier());
+ _isGroupBySelections[index] = true;
+ }
+
+ for (int i = 0; i < _isGroupBySelections.length; i++) {
+ if (_isGroupBySelections[i]) {
+ _groupByKeyIndexes.add(i);
+ }
+ }
+
+ List<Object[]>[] timeBucketedRawRows = putRawRowsIntoTimeBucket(brokerResponseNative.getResultTable().getRows());
+
+ replaceColumnNameWithAlias(dataSchema);
+
+ if (_queryContext.getAggregationFunctions() == null) {
+
+ Map<String, Integer> sourceColumnsIndexes = new HashMap<>();
+ for (int i = 0; i < dataSchema.getColumnNames().length; i++) {
+ sourceColumnsIndexes.put(dataSchema.getColumnName(i), i);
+ }
+ _sourceColumnIndexForResultSchema = new int[resultTableSchema.getColumnNames().length];
+ for (int i = 0; i < _sourceColumnIndexForResultSchema.length; i++) {
+ _sourceColumnIndexForResultSchema[i] = sourceColumnsIndexes.get(resultTableSchema.getColumnName(i));
+ }
+ }
+
+ List<Object[]> resultRows = gapFillAndAggregate(timeBucketedRawRows, resultTableSchema, dataSchema);
+ brokerResponseNative.setResultTable(new ResultTable(resultTableSchema, resultRows));
+ }
+
+ /**
+ * Constructs the DataSchema for the ResultTable.
+ */
+ private DataSchema getResultTableDataSchema(DataSchema dataSchema) {
+ if (_gapfillType == GapfillUtils.GapfillType.GAP_FILL) {
+ return dataSchema;
+ }
+
+ int numOfColumns = _queryContext.getSelectExpressions().size();
+ String[] columnNames = new String[numOfColumns];
+ ColumnDataType[] columnDataTypes = new ColumnDataType[numOfColumns];
+ for (int i = 0; i < numOfColumns; i++) {
+ ExpressionContext expressionContext = _queryContext.getSelectExpressions().get(i);
+ if (GapfillUtils.isGapfill(expressionContext)) {
+ expressionContext = expressionContext.getFunction().getArguments().get(0);
+ }
+ if (expressionContext.getType() != ExpressionContext.Type.FUNCTION) {
+ columnNames[i] = expressionContext.getIdentifier();
+ columnDataTypes[i] = ColumnDataType.STRING;
+ } else {
+ FunctionContext functionContext = expressionContext.getFunction();
+ AggregationFunction aggregationFunction =
+ AggregationFunctionFactory.getAggregationFunction(functionContext, _queryContext);
+ columnDataTypes[i] = aggregationFunction.getFinalResultColumnType();
+ columnNames[i] = functionContext.toString();
+ }
+ }
+ return new DataSchema(columnNames, columnDataTypes);
+ }
+
+ private Key constructGroupKeys(Object[] row) {
+ Object[] groupKeys = new Object[_groupByKeyIndexes.size()];
+ for (int i = 0; i < _groupByKeyIndexes.size(); i++) {
+ groupKeys[i] = row[_groupByKeyIndexes.get(i)];
+ }
+ return new Key(groupKeys);
+ }
+
+ private long truncate(long epoch) {
+ int sz = _gapfillDateTimeGranularity.getSize();
+ return epoch / sz * sz;
+ }
+
+ private List<Object[]> gapFillAndAggregate(List<Object[]>[] timeBucketedRawRows,
+ DataSchema dataSchemaForAggregatedResult, DataSchema dataSchema) {
+ List<Object[]> result = new ArrayList<>();
+
+ GapfillFilterHandler postGapfillFilterHandler = null;
+ if (_queryContext.getSubquery() != null && _queryContext.getFilter() != null) {
+ postGapfillFilterHandler = new GapfillFilterHandler(_queryContext.getFilter(), dataSchema);
+ }
+ GapfillFilterHandler postAggregateHavingFilterHandler = null;
+ if (_queryContext.getHavingFilter() != null) {
+ postAggregateHavingFilterHandler =
+ new GapfillFilterHandler(_queryContext.getHavingFilter(), dataSchemaForAggregatedResult);
+ }
+ long start = _startMs;
+ ColumnDataType[] resultColumnDataTypes = dataSchema.getColumnDataTypes();
+ List<Object[]> bucketedResult = new ArrayList<>();
+ for (long time = _startMs; time < _endMs; time += _gapfillTimeBucketSize) {
+ int index = findGapfillBucketIndex(time);
+ gapfill(time, bucketedResult, timeBucketedRawRows[index], dataSchema, postGapfillFilterHandler);
+ if (_queryContext.getAggregationFunctions() == null) {
+ for (Object [] row : bucketedResult) {
+ Object[] resultRow = new Object[_sourceColumnIndexForResultSchema.length];
+ for (int i = 0; i < _sourceColumnIndexForResultSchema.length; i++) {
+ resultRow[i] = row[_sourceColumnIndexForResultSchema[i]];
+ }
+ result.add(resultRow);
+ }
+ bucketedResult.clear();
+ } else if (index % _aggregationSize == _aggregationSize - 1) {
+ if (bucketedResult.size() > 0) {
+ Object timeCol;
+ if (resultColumnDataTypes[_timeBucketColumnIndex] == ColumnDataType.LONG) {
+ timeCol = Long.valueOf(_dateTimeFormatter.fromMillisToFormat(start));
+ } else {
+ timeCol = _dateTimeFormatter.fromMillisToFormat(start);
+ }
+ List<Object[]> aggregatedRows = aggregateGapfilledData(timeCol, bucketedResult, dataSchema);
+ for (Object[] aggregatedRow : aggregatedRows) {
+ if (postAggregateHavingFilterHandler == null || postAggregateHavingFilterHandler.isMatch(aggregatedRow)) {
+ result.add(aggregatedRow);
+ }
+ if (result.size() >= _limitForAggregatedResult) {
+ return result;
+ }
+ }
+ bucketedResult.clear();
+ }
+ start = time + _gapfillTimeBucketSize;
+ }
+ }
+ return result;
+ }
+
+ private void gapfill(long bucketTime, List<Object[]> bucketedResult, List<Object[]> rawRowsForBucket,
+ DataSchema dataSchema, GapfillFilterHandler postGapfillFilterHandler) {
+ ColumnDataType[] resultColumnDataTypes = dataSchema.getColumnDataTypes();
+ int numResultColumns = resultColumnDataTypes.length;
+ Set<Key> keys = new HashSet<>(_groupByKeys);
+
+ if (rawRowsForBucket != null) {
+ for (Object[] resultRow : rawRowsForBucket) {
+ for (int i = 0; i < resultColumnDataTypes.length; i++) {
+ resultRow[i] = resultColumnDataTypes[i].format(resultRow[i]);
+ }
+
+ long timeCol = _dateTimeFormatter.fromFormatToMillis(String.valueOf(resultRow[0]));
+ if (timeCol > bucketTime) {
+ break;
+ }
+ if (timeCol == bucketTime) {
+ if (postGapfillFilterHandler == null || postGapfillFilterHandler.isMatch(resultRow)) {
+ if (bucketedResult.size() >= _limitForGapfilledResult) {
+ _limitForGapfilledResult = 0;
+ break;
+ } else {
+ bucketedResult.add(resultRow);
+ }
+ }
+ Key key = constructGroupKeys(resultRow);
+ keys.remove(key);
+ _previousByGroupKey.put(key, resultRow);
+ }
+ }
+ }
+
+ for (Key key : keys) {
+ Object[] gapfillRow = new Object[numResultColumns];
+ int keyIndex = 0;
+ if (resultColumnDataTypes[_timeBucketColumnIndex] == ColumnDataType.LONG) {
+ gapfillRow[0] = Long.valueOf(_dateTimeFormatter.fromMillisToFormat(bucketTime));
+ } else {
+ gapfillRow[0] = _dateTimeFormatter.fromMillisToFormat(bucketTime);
+ }
+ for (int i = 1; i < _isGroupBySelections.length; i++) {
+ if (_isGroupBySelections[i]) {
+ gapfillRow[i] = key.getValues()[keyIndex++];
+ } else {
+ gapfillRow[i] = getFillValue(i, dataSchema.getColumnName(i), key, resultColumnDataTypes[i]);
+ }
+ }
+
+ if (postGapfillFilterHandler == null || postGapfillFilterHandler.isMatch(gapfillRow)) {
+ if (bucketedResult.size() >= _limitForGapfilledResult) {
+ break;
+ } else {
+ bucketedResult.add(gapfillRow);
+ }
+ }
+ }
+ if (_limitForGapfilledResult > _groupByKeys.size()) {
+ _limitForGapfilledResult -= _groupByKeys.size();
+ } else {
+ _limitForGapfilledResult = 0;
+ }
+ }
+
+ private List<Object[]> aggregateGapfilledData(Object timeCol, List<Object[]> bucketedRows, DataSchema dataSchema) {
+ List<ExpressionContext> groupbyExpressions = _queryContext.getGroupByExpressions();
+ Preconditions.checkArgument(groupbyExpressions != null, "No GroupBy Clause.");
+ Map<String, Integer> indexes = new HashMap<>();
+ for (int i = 0; i < dataSchema.getColumnNames().length; i++) {
+ indexes.put(dataSchema.getColumnName(i), i);
+ }
+
+ for (Object [] bucketedRow : bucketedRows) {
+ bucketedRow[_timeBucketColumnIndex] = timeCol;
+ }
+
+ Map<List<Object>, Integer> groupKeyIndexes = new HashMap<>();
+ int[] groupKeyArray = new int[bucketedRows.size()];
+ List<Object[]> aggregatedResult = new ArrayList<>();
+ for (int i = 0; i < bucketedRows.size(); i++) {
+ Object[] bucketedRow = bucketedRows.get(i);
+ List<Object> groupKey = new ArrayList<>(groupbyExpressions.size());
+ for (ExpressionContext groupbyExpression : groupbyExpressions) {
+ int columnIndex = indexes.get(groupbyExpression.toString());
+ groupKey.add(bucketedRow[columnIndex]);
+ }
+ if (groupKeyIndexes.containsKey(groupKey)) {
+ groupKeyArray[i] = groupKeyIndexes.get(groupKey);
+ } else {
+ // create the new groupBy Result row and fill the group by key
+ groupKeyArray[i] = groupKeyIndexes.size();
+ groupKeyIndexes.put(groupKey, groupKeyIndexes.size());
+ Object[] row = new Object[_queryContext.getSelectExpressions().size()];
+ for (int j = 0; j < _queryContext.getSelectExpressions().size(); j++) {
+ ExpressionContext expressionContext = _queryContext.getSelectExpressions().get(j);
+ if (expressionContext.getType() != ExpressionContext.Type.FUNCTION) {
+ row[j] = bucketedRow[indexes.get(expressionContext.toString())];
+ }
+ }
+ aggregatedResult.add(row);
+ }
+ }
+
+ Map<ExpressionContext, BlockValSet> blockValSetMap = new HashMap<>();
+ for (int i = 1; i < dataSchema.getColumnNames().length; i++) {
+ blockValSetMap.put(ExpressionContext.forIdentifier(dataSchema.getColumnName(i)),
+ new RowBasedBlockValSet(dataSchema.getColumnDataType(i), bucketedRows, i));
+ }
+
+ for (int i = 0; i < _queryContext.getSelectExpressions().size(); i++) {
Review comment:
I need the selection index of aggregationFunction.
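The fill step the GapfillProcessor code above implements — bucket raw rows by time, then fill each bucket's missing entities from the last value seen for that entity — can be sketched standalone. This is a minimal illustration, not Pinot code; the class name, row layout, and data are hypothetical, and the real processor additionally handles filters, limits, and per-bucket aggregation:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class GapfillSketch {

    // rows: {timeMs (Long), entity (String), value (Double)}, assumed sorted by time.
    // Emits one row per (bucket, entity), carrying the last observed value of an
    // entity forward into buckets where it has no raw row (cf. _previousByGroupKey).
    static List<Object[]> gapfill(List<Object[]> rows, List<String> entities,
                                  long startMs, long endMs, long bucketMs) {
        Map<Integer, Map<String, Double>> byBucket = new HashMap<>();
        for (Object[] r : rows) {
            // Same idea as findGapfillBucketIndex(time).
            int bucket = (int) (((Long) r[0] - startMs) / bucketMs);
            byBucket.computeIfAbsent(bucket, k -> new HashMap<>()).put((String) r[1], (Double) r[2]);
        }
        Map<String, Double> previous = new HashMap<>();
        List<Object[]> result = new ArrayList<>();
        int numBuckets = (int) ((endMs - startMs) / bucketMs);
        for (int bucket = 0; bucket < numBuckets; bucket++) {
            long bucketTime = startMs + bucket * bucketMs;
            Map<String, Double> raw = byBucket.getOrDefault(bucket, new HashMap<>());
            for (String entity : entities) {
                Double value = raw.containsKey(entity) ? raw.get(entity) : previous.get(entity);
                if (value != null) {  // nothing to fill until the entity is first seen
                    previous.put(entity, value);
                    result.add(new Object[]{bucketTime, entity, value});
                }
            }
        }
        return result;
    }

    public static void main(String[] args) {
        List<Object[]> rows = Arrays.asList(
            new Object[]{0L, "a", 1.0}, new Object[]{0L, "b", 2.0}, new Object[]{10L, "a", 3.0});
        // Bucket [10, 20) has no raw row for "b", so its value from bucket [0, 10) is carried forward.
        for (Object[] r : gapfill(rows, Arrays.asList("a", "b"), 0L, 20L, 10L)) {
            System.out.println(r[0] + "," + r[1] + "," + r[2]);
        }
    }
}
```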
[GitHub] [pinot] codecov-commenter edited a comment on pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#issuecomment-1013577523
# Codecov Report
> Merging #8029 (9b4ea2f) into master (24d4fd2) will **decrease** coverage by `0.01%`.
> The diff coverage is `69.82%`.
> :exclamation: Current head 9b4ea2f differs from pull request most recent head 99bb25d. Consider uploading reports for the commit 99bb25d to get more accurate results.
```diff
@@ Coverage Diff @@
## master #8029 +/- ##
============================================
- Coverage 70.79% 70.78% -0.02%
Complexity 4264 4264
============================================
Files 1640 1651 +11
Lines 85931 86550 +619
Branches 12922 13076 +154
============================================
+ Hits 60837 61261 +424
- Misses 20899 21047 +148
- Partials 4195 4242 +47
```
| Flag | Coverage Δ | |
|---|---|---|
| integration1 | `28.70% <23.29%> (-0.20%)` | :arrow_down: |
| integration2 | `27.42% <23.74%> (-0.11%)` | :arrow_down: |
| unittests1 | `66.99% <61.08%> (+0.04%)` | :arrow_up: |
| unittests2 | `14.11% <1.11%> (-0.11%)` | :arrow_down: |
Flags with carried forward coverage won't be shown.
| Impacted Files | Coverage Δ | |
|---|---|---|
| ...roker/requesthandler/GrpcBrokerRequestHandler.java | `75.67% <ø> (ø)` | |
| ...not/ingestion/common/DefaultControllerRestApi.java | `0.00% <0.00%> (ø)` | |
| ...ot/plugin/minion/tasks/SegmentConversionUtils.java | `76.66% <ø> (ø)` | |
| ...he/pinot/segment/local/utils/SegmentPushUtils.java | `13.48% <0.00%> (ø)` | |
| ...e/pinot/core/query/reduce/RowBasedBlockValSet.java | `16.12% <16.12%> (ø)` | |
| ...inot/controller/helix/ControllerRequestClient.java | `17.77% <17.77%> (ø)` | |
| ...pache/pinot/common/utils/request/RequestUtils.java | `86.39% <33.33%> (-1.11%)` | :arrow_down: |
| ...org/apache/pinot/common/utils/http/HttpClient.java | `44.65% <44.65%> (ø)` | |
| ...inot/core/query/reduce/PostAggregationHandler.java | `91.89% <66.66%> (+0.12%)` | :arrow_up: |
| ...ot/core/query/reduce/filter/RowMatcherFactory.java | `66.66% <66.66%> (ø)` | |
| ... and 52 more | | |
------
Continue to review the full report at Codecov.
> **Legend** - `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by Codecov. Last update 24d4fd2...99bb25d.
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r830718303
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/reduce/BrokerReduceService.java
##########
@@ -103,11 +104,23 @@ public BrokerResponseNative reduceOnDataTable(BrokerRequest brokerRequest,
return brokerResponseNative;
}
- QueryContext queryContext = BrokerRequestToQueryContextConverter.convert(brokerRequest);
- DataTableReducer dataTableReducer = ResultReducerFactory.getResultReducer(queryContext);
+ QueryContext serverQueryContext = BrokerRequestToQueryContextConverter.convert(serverBrokerRequest);
+ DataTableReducer dataTableReducer = ResultReducerFactory.getResultReducer(serverQueryContext);
dataTableReducer.reduceAndSetResults(rawTableName, cachedDataSchema, dataTableMap, brokerResponseNative,
new DataTableReducerContext(_reduceExecutorService, _maxReduceThreadsPerQuery, reduceTimeOutMs,
_groupByTrimThreshold), brokerMetrics);
+ QueryContext queryContext;
+ if (brokerRequest == serverBrokerRequest) {
+ queryContext = serverQueryContext;
+ } else {
+ queryContext = BrokerRequestToQueryContextConverter.convert(brokerRequest);
+ }
+
+ GapfillUtils.GapfillType gapfillType = GapfillUtils.getGapfillType(queryContext);
+ if (gapfillType != null) {
+ GapfillProcessor gapfillProcessor = new GapfillProcessor(queryContext, gapfillType);
+ gapfillProcessor.process(brokerResponseNative);
+ }
Review comment:
Fixed
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r788269852
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/reduce/PreAggregationGapFillSelectionOperatorService.java
##########
@@ -0,0 +1,388 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.core.query.reduce;
+
+import com.google.common.base.Preconditions;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.Comparator;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.Iterator;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+import java.util.PriorityQueue;
+import java.util.Set;
+import org.apache.pinot.common.request.context.ExpressionContext;
+import org.apache.pinot.common.request.context.FilterContext;
+import org.apache.pinot.common.request.context.FunctionContext;
+import org.apache.pinot.common.utils.DataSchema;
+import org.apache.pinot.common.utils.DataSchema.ColumnDataType;
+import org.apache.pinot.common.utils.DataTable;
+import org.apache.pinot.core.common.BlockValSet;
+import org.apache.pinot.core.data.table.Key;
+import org.apache.pinot.core.query.aggregation.function.AggregationFunction;
+import org.apache.pinot.core.query.aggregation.function.AggregationFunctionFactory;
+import org.apache.pinot.core.query.aggregation.groupby.GroupByResultHolder;
+import org.apache.pinot.core.query.request.context.QueryContext;
+import org.apache.pinot.core.query.selection.SelectionOperatorUtils;
+import org.apache.pinot.core.util.GapfillUtils;
+import org.apache.pinot.spi.data.DateTimeFormatSpec;
+import org.apache.pinot.spi.data.DateTimeGranularitySpec;
+
+
+/**
+ * The <code>PreAggregationGapFillSelectionOperatorService</code> class provides the services for selection queries with
+ * <code>ORDER BY</code>.
+ * <p>Expected behavior:
+ * <ul>
+ * <li>
+ * Return selection results with the same order of columns as user passed in.
Review comment:
Made the comment match the class.
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r787189880
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/request/context/QueryContext.java
##########
@@ -85,6 +85,7 @@
// Keep the BrokerRequest to make incremental changes
// TODO: Remove it once the whole query engine is using the QueryContext
private final BrokerRequest _brokerRequest;
+ private QueryContext _preAggregateGapFillQueryContext;
Review comment:
@siddharthteotia @Jackie-Jiang
Q: If we are really touching the FROM clause, my suggestion would be to make sure we understand Calcite's treatment of a simple FROM clause (a table name, as today) versus a complex FROM clause (sub-queries).
A: From Calcite's perspective, a simple FROM clause (table name only) is compiled as a SqlIdentifier, while a complex FROM clause is compiled as a SqlOrderBy or SqlSelect object, depending on whether it contains an ORDER BY.
It may also hold in the future that sub-queries become part of DataSource in general, rather than special-casing the gapfill sub-query.
I lean towards the generic approach since:
1. it is consistent with Calcite;
2. even if it impacts the sub-query feature, that can be fixed as part of the sub-query feature development.
@siddharthteotia any other suggestions?
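The distinction above, where a simple FROM clause is just an identifier while a complex one nests a whole query, can be mirrored by letting the query context optionally hold a nested sub-query context (as the `_preAggregateGapFillQueryContext` field in the diff does). The sketch below is hypothetical and uses illustrative names, not Pinot's or Calcite's actual API:

```java
// Hypothetical sketch: a query context whose data source is either a plain
// table name (simple FROM) or a nested query context (complex FROM), echoing
// Calcite's SqlIdentifier vs. nested SqlSelect/SqlOrderBy representation.
class QueryContextSketch {
  private final String _table;                 // simple FROM: table name only
  private final QueryContextSketch _subQuery;  // complex FROM: nested query

  private QueryContextSketch(String table, QueryContextSketch subQuery) {
    _table = table;
    _subQuery = subQuery;
  }

  static QueryContextSketch forTable(String table) {
    return new QueryContextSketch(table, null);
  }

  static QueryContextSketch forSubQuery(QueryContextSketch subQuery) {
    return new QueryContextSketch(null, subQuery);
  }

  boolean hasSubQuery() {
    return _subQuery != null;
  }

  /** Walk to the innermost data source, i.e. the underlying table name. */
  String baseTable() {
    QueryContextSketch ctx = this;
    while (ctx._subQuery != null) {
      ctx = ctx._subQuery;
    }
    return ctx._table;
  }
}
```

With this shape, a gapfill query is just one instance of the generic nested case, which is why the generic approach composes with a future sub-query feature instead of conflicting with it.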
[GitHub] [pinot] codecov-commenter edited a comment on pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#issuecomment-1013577523
# [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) Report
> Merging [#8029](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (37d1bbe) into [master](https://codecov.io/gh/apache/pinot/commit/1d1a7d34709b6a89985a610f46dd1c97d6c9271a?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (1d1a7d3) will **decrease** coverage by `6.49%`.
> The diff coverage is `73.34%`.
[![Impacted file tree graph](https://codecov.io/gh/apache/pinot/pull/8029/graphs/tree.svg?width=650&height=150&src=pr&token=4ibza2ugkz&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
```diff
@@ Coverage Diff @@
## master #8029 +/- ##
============================================
- Coverage 71.40% 64.91% -6.50%
- Complexity 4223 4224 +1
============================================
Files 1597 1565 -32
Lines 82903 81443 -1460
Branches 12369 12249 -120
============================================
- Hits 59201 52865 -6336
- Misses 19689 24789 +5100
+ Partials 4013 3789 -224
```
| Flag | Coverage Δ | |
|---|---|---|
| integration1 | `?` | |
| integration2 | `?` | |
| unittests1 | `68.14% <73.34%> (+<0.01%)` | :arrow_up: |
| unittests2 | `14.29% <0.00%> (-0.07%)` | :arrow_down: |
Flags with carried forward coverage won't be shown. [Click here](https://docs.codecov.io/docs/carryforward-flags?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#carryforward-flags-in-the-pull-request-comment) to find out more.
| [Impacted Files](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | Coverage Δ | |
|---|---|---|
| [...pache/pinot/core/query/reduce/BlockValSetImpl.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvQmxvY2tWYWxTZXRJbXBsLmphdmE=) | `17.30% <17.30%> (ø)` | |
| [...query/request/context/utils/QueryContextUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZXF1ZXN0L2NvbnRleHQvdXRpbHMvUXVlcnlDb250ZXh0VXRpbHMuamF2YQ==) | `69.56% <20.00%> (-7.71%)` | :arrow_down: |
| [.../java/org/apache/pinot/core/util/GapfillUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS91dGlsL0dhcGZpbGxVdGlscy5qYXZh) | `56.52% <42.85%> (-7.12%)` | :arrow_down: |
| [.../reduce/PreAggregationGapFillDataTableReducer.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUHJlQWdncmVnYXRpb25HYXBGaWxsRGF0YVRhYmxlUmVkdWNlci5qYXZh) | `63.88% <63.88%> (ø)` | |
| [...inot/core/query/reduce/PostAggregationHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUG9zdEFnZ3JlZ2F0aW9uSGFuZGxlci5qYXZh) | `92.42% <66.66%> (+0.11%)` | :arrow_up: |
| [...che/pinot/core/query/reduce/filter/RowMatcher.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvZmlsdGVyL1Jvd01hdGNoZXIuamF2YQ==) | `66.66% <66.66%> (ø)` | |
| [...PreAggregationGapFillSelectionOperatorService.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUHJlQWdncmVnYXRpb25HYXBGaWxsU2VsZWN0aW9uT3BlcmF0b3JTZXJ2aWNlLmphdmE=) | `81.86% <81.86%> (ø)` | |
| [...inot/core/plan/PreAggGapFillSelectionPlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL1ByZUFnZ0dhcEZpbGxTZWxlY3Rpb25QbGFuTm9kZS5qYXZh) | `82.75% <82.75%> (ø)` | |
| [...ore/query/reduce/filter/LiteralValueExtractor.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvZmlsdGVyL0xpdGVyYWxWYWx1ZUV4dHJhY3Rvci5qYXZh) | `83.33% <83.33%> (ø)` | |
| [...query/reduce/PreAggregateGapfillFilterHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUHJlQWdncmVnYXRlR2FwZmlsbEZpbHRlckhhbmRsZXIuamF2YQ==) | `85.71% <85.71%> (ø)` | |
| ... and [394 more](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Last update [1d1a7d3...37d1bbe](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
[GitHub] [pinot] codecov-commenter edited a comment on pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#issuecomment-1013577523
# [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) Report
> Merging [#8029](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (37d1bbe) into [master](https://codecov.io/gh/apache/pinot/commit/1d1a7d34709b6a89985a610f46dd1c97d6c9271a?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (1d1a7d3) will **decrease** coverage by `57.11%`.
> The diff coverage is `0.00%`.
[![Impacted file tree graph](https://codecov.io/gh/apache/pinot/pull/8029/graphs/tree.svg?width=650&height=150&src=pr&token=4ibza2ugkz&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
```diff
@@ Coverage Diff @@
## master #8029 +/- ##
=============================================
- Coverage 71.40% 14.29% -57.12%
+ Complexity 4223 81 -4142
=============================================
Files 1597 1565 -32
Lines 82903 81443 -1460
Branches 12369 12249 -120
=============================================
- Hits 59201 11645 -47556
- Misses 19689 68937 +49248
+ Partials 4013 861 -3152
```
| Flag | Coverage Δ | |
|---|---|---|
| integration1 | `?` | |
| integration2 | `?` | |
| unittests1 | `?` | |
| unittests2 | `14.29% <0.00%> (-0.07%)` | :arrow_down: |
Flags with carried forward coverage won't be shown. [Click here](https://docs.codecov.io/docs/carryforward-flags?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#carryforward-flags-in-the-pull-request-comment) to find out more.
| [Impacted Files](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | Coverage Δ | |
|---|---|---|
| [...g/apache/pinot/sql/parsers/CalciteSqlCompiler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9zcWwvcGFyc2Vycy9DYWxjaXRlU3FsQ29tcGlsZXIuamF2YQ==) | `0.00% <0.00%> (-100.00%)` | :arrow_down: |
| [...org/apache/pinot/sql/parsers/CalciteSqlParser.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9zcWwvcGFyc2Vycy9DYWxjaXRlU3FsUGFyc2VyLmphdmE=) | `0.00% <0.00%> (-87.78%)` | :arrow_down: |
| [...va/org/apache/pinot/core/plan/CombinePlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL0NvbWJpbmVQbGFuTm9kZS5qYXZh) | `0.00% <0.00%> (-87.50%)` | :arrow_down: |
| [...inot/core/plan/PreAggGapFillSelectionPlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL1ByZUFnZ0dhcEZpbGxTZWxlY3Rpb25QbGFuTm9kZS5qYXZh) | `0.00% <0.00%> (ø)` | |
| [...pinot/core/plan/maker/InstancePlanMakerImplV2.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL21ha2VyL0luc3RhbmNlUGxhbk1ha2VySW1wbFYyLmphdmE=) | `0.00% <0.00%> (-76.77%)` | :arrow_down: |
| [...pache/pinot/core/query/reduce/BlockValSetImpl.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvQmxvY2tWYWxTZXRJbXBsLmphdmE=) | `0.00% <0.00%> (ø)` | |
| [...e/pinot/core/query/reduce/HavingFilterHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvSGF2aW5nRmlsdGVySGFuZGxlci5qYXZh) | `0.00% <0.00%> (-91.31%)` | :arrow_down: |
| [...inot/core/query/reduce/PostAggregationHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUG9zdEFnZ3JlZ2F0aW9uSGFuZGxlci5qYXZh) | `0.00% <0.00%> (-92.31%)` | :arrow_down: |
| [...query/reduce/PreAggregateGapfillFilterHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUHJlQWdncmVnYXRlR2FwZmlsbEZpbHRlckhhbmRsZXIuamF2YQ==) | `0.00% <0.00%> (ø)` | |
| [.../reduce/PreAggregationGapFillDataTableReducer.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUHJlQWdncmVnYXRpb25HYXBGaWxsRGF0YVRhYmxlUmVkdWNlci5qYXZh) | `0.00% <0.00%> (ø)` | |
| ... and [1297 more](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Last update [1d1a7d3...37d1bbe](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
[GitHub] [pinot] codecov-commenter edited a comment on pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#issuecomment-1013577523
# [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) Report
> Merging [#8029](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (b570851) into [master](https://codecov.io/gh/apache/pinot/commit/1d1a7d34709b6a89985a610f46dd1c97d6c9271a?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (1d1a7d3) will **decrease** coverage by `33.44%`.
> The diff coverage is `16.40%`.
[![Impacted file tree graph](https://codecov.io/gh/apache/pinot/pull/8029/graphs/tree.svg?width=650&height=150&src=pr&token=4ibza2ugkz&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
```diff
@@ Coverage Diff @@
## master #8029 +/- ##
=============================================
- Coverage 71.40% 37.96% -33.45%
+ Complexity 4223 81 -4142
=============================================
Files 1597 1608 +11
Lines 82903 83309 +406
Branches 12369 12452 +83
=============================================
- Hits 59201 31627 -27574
- Misses 19689 49251 +29562
+ Partials 4013 2431 -1582
```
| Flag | Coverage Δ | |
|---|---|---|
| integration1 | `28.92% <15.94%> (-0.07%)` | :arrow_down: |
| integration2 | `27.54% <16.40%> (-0.17%)` | :arrow_down: |
| unittests1 | `?` | |
| unittests2 | `14.27% <0.00%> (-0.09%)` | :arrow_down: |
Flags with carried forward coverage won't be shown. [Click here](https://docs.codecov.io/docs/carryforward-flags?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#carryforward-flags-in-the-pull-request-comment) to find out more.
| [Impacted Files](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | Coverage Δ | |
|---|---|---|
| [...va/org/apache/pinot/core/plan/CombinePlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL0NvbWJpbmVQbGFuTm9kZS5qYXZh) | `81.03% <0.00%> (-6.47%)` | :arrow_down: |
| [...inot/core/plan/PreAggGapFillSelectionPlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL1ByZUFnZ0dhcEZpbGxTZWxlY3Rpb25QbGFuTm9kZS5qYXZh) | `0.00% <0.00%> (ø)` | |
| [...pinot/core/plan/maker/InstancePlanMakerImplV2.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL21ha2VyL0luc3RhbmNlUGxhbk1ha2VySW1wbFYyLmphdmE=) | `59.40% <0.00%> (-17.37%)` | :arrow_down: |
| [...pache/pinot/core/query/reduce/BlockValSetImpl.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvQmxvY2tWYWxTZXRJbXBsLmphdmE=) | `0.00% <0.00%> (ø)` | |
| [...query/reduce/PreAggregateGapfillFilterHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUHJlQWdncmVnYXRlR2FwZmlsbEZpbHRlckhhbmRsZXIuamF2YQ==) | `0.00% <0.00%> (ø)` | |
| [.../reduce/PreAggregationGapFillDataTableReducer.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUHJlQWdncmVnYXRpb25HYXBGaWxsRGF0YVRhYmxlUmVkdWNlci5qYXZh) | `0.00% <0.00%> (ø)` | |
| [...PreAggregationGapFillSelectionOperatorService.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUHJlQWdncmVnYXRpb25HYXBGaWxsU2VsZWN0aW9uT3BlcmF0b3JTZXJ2aWNlLmphdmE=) | `0.00% <0.00%> (ø)` | |
| [.../pinot/core/query/reduce/ResultReducerFactory.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUmVzdWx0UmVkdWNlckZhY3RvcnkuamF2YQ==) | `61.11% <0.00%> (-20.14%)` | :arrow_down: |
| [.../java/org/apache/pinot/core/util/GapfillUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS91dGlsL0dhcGZpbGxVdGlscy5qYXZh) | `28.26% <7.14%> (-35.38%)` | :arrow_down: |
| [...pinot/core/query/request/context/QueryContext.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZXF1ZXN0L2NvbnRleHQvUXVlcnlDb250ZXh0LmphdmE=) | `88.14% <33.33%> (-9.77%)` | :arrow_down: |
| ... and [914 more](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Last update [1d1a7d3...b570851](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
[GitHub] [pinot] codecov-commenter edited a comment on pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#issuecomment-1013577523
# [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) Report
> Merging [#8029](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (b570851) into [master](https://codecov.io/gh/apache/pinot/commit/1d1a7d34709b6a89985a610f46dd1c97d6c9271a?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (1d1a7d3) will **decrease** coverage by `34.95%`.
> The diff coverage is `15.94%`.
[![Impacted file tree graph](https://codecov.io/gh/apache/pinot/pull/8029/graphs/tree.svg?width=650&height=150&src=pr&token=4ibza2ugkz&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
```diff
@@ Coverage Diff @@
## master #8029 +/- ##
=============================================
- Coverage 71.40% 36.45% -34.96%
+ Complexity 4223 81 -4142
=============================================
Files 1597 1608 +11
Lines 82903 83309 +406
Branches 12369 12452 +83
=============================================
- Hits 59201 30374 -28827
- Misses 19689 50556 +30867
+ Partials 4013 2379 -1634
```
| Flag | Coverage Δ | |
|---|---|---|
| integration1 | `28.92% <15.94%> (-0.07%)` | :arrow_down: |
| integration2 | `?` | |
| unittests1 | `?` | |
| unittests2 | `14.27% <0.00%> (-0.09%)` | :arrow_down: |
Flags with carried forward coverage won't be shown. [Click here](https://docs.codecov.io/docs/carryforward-flags?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#carryforward-flags-in-the-pull-request-comment) to find out more.
| [Impacted Files](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | Coverage Δ | |
|---|---|---|
| [...va/org/apache/pinot/core/plan/CombinePlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL0NvbWJpbmVQbGFuTm9kZS5qYXZh) | `72.41% <0.00%> (-15.09%)` | :arrow_down: |
| [...inot/core/plan/PreAggGapFillSelectionPlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL1ByZUFnZ0dhcEZpbGxTZWxlY3Rpb25QbGFuTm9kZS5qYXZh) | `0.00% <0.00%> (ø)` | |
| [...pinot/core/plan/maker/InstancePlanMakerImplV2.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL21ha2VyL0luc3RhbmNlUGxhbk1ha2VySW1wbFYyLmphdmE=) | `45.54% <0.00%> (-31.23%)` | :arrow_down: |
| [...pache/pinot/core/query/reduce/BlockValSetImpl.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvQmxvY2tWYWxTZXRJbXBsLmphdmE=) | `0.00% <0.00%> (ø)` | |
| [...query/reduce/PreAggregateGapfillFilterHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUHJlQWdncmVnYXRlR2FwZmlsbEZpbHRlckhhbmRsZXIuamF2YQ==) | `0.00% <0.00%> (ø)` | |
| [.../reduce/PreAggregationGapFillDataTableReducer.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUHJlQWdncmVnYXRpb25HYXBGaWxsRGF0YVRhYmxlUmVkdWNlci5qYXZh) | `0.00% <0.00%> (ø)` | |
| [...PreAggregationGapFillSelectionOperatorService.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUHJlQWdncmVnYXRpb25HYXBGaWxsU2VsZWN0aW9uT3BlcmF0b3JTZXJ2aWNlLmphdmE=) | `0.00% <0.00%> (ø)` | |
| [.../pinot/core/query/reduce/ResultReducerFactory.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUmVzdWx0UmVkdWNlckZhY3RvcnkuamF2YQ==) | `55.55% <0.00%> (-25.70%)` | :arrow_down: |
| [...query/request/context/utils/QueryContextUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZXF1ZXN0L2NvbnRleHQvdXRpbHMvUXVlcnlDb250ZXh0VXRpbHMuamF2YQ==) | `43.47% <0.00%> (-33.80%)` | :arrow_down: |
| [.../java/org/apache/pinot/core/util/GapfillUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS91dGlsL0dhcGZpbGxVdGlscy5qYXZh) | `28.26% <7.14%> (-35.38%)` | :arrow_down: |
| ... and [975 more](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Last update [1d1a7d3...b570851](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: commits-unsubscribe@pinot.apache.org
For queries about this service, please contact Infrastructure at:
users@infra.apache.org
---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@pinot.apache.org
For additional commands, e-mail: commits-help@pinot.apache.org
[GitHub] [pinot] codecov-commenter edited a comment on pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#issuecomment-1013577523
# [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) Report
> Merging [#8029](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (f78cb1d) into [master](https://codecov.io/gh/apache/pinot/commit/916d807c8f67b32c1a430692f74134c9c976c33d?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (916d807) will **decrease** coverage by `6.57%`.
> The diff coverage is `82.00%`.
[![Impacted file tree graph](https://codecov.io/gh/apache/pinot/pull/8029/graphs/tree.svg?width=650&height=150&src=pr&token=4ibza2ugkz&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
```diff
@@ Coverage Diff @@
## master #8029 +/- ##
============================================
- Coverage 71.02% 64.45% -6.58%
Complexity 4314 4314
============================================
Files 1626 1591 -35
Lines 84929 83682 -1247
Branches 12783 12737 -46
============================================
- Hits 60325 53936 -6389
- Misses 20462 25863 +5401
+ Partials 4142 3883 -259
```
| Flag | Coverage Δ | |
|---|---|---|
| integration1 | `?` | |
| integration2 | `?` | |
| unittests1 | `67.52% <82.23%> (+0.13%)` | :arrow_up: |
| unittests2 | `13.99% <0.00%> (-0.12%)` | :arrow_down: |
Flags with carried forward coverage won't be shown. [Click here](https://docs.codecov.io/docs/carryforward-flags?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#carryforward-flags-in-the-pull-request-comment) to find out more.
| [Impacted Files](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | Coverage Δ | |
|---|---|---|
| [...roker/requesthandler/BaseBrokerRequestHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtYnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9icm9rZXIvcmVxdWVzdGhhbmRsZXIvQmFzZUJyb2tlclJlcXVlc3RIYW5kbGVyLmphdmE=) | `24.70% <0.00%> (-46.82%)` | :arrow_down: |
| [...t/core/query/selection/SelectionOperatorUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9zZWxlY3Rpb24vU2VsZWN0aW9uT3BlcmF0b3JVdGlscy5qYXZh) | `86.08% <0.00%> (-6.10%)` | :arrow_down: |
| [...query/reduce/ColumnDataToBlockValSetConverter.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvQ29sdW1uRGF0YVRvQmxvY2tWYWxTZXRDb252ZXJ0ZXIuamF2YQ==) | `17.30% <17.30%> (ø)` | |
| [...che/pinot/core/query/reduce/filter/RowMatcher.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvZmlsdGVyL1Jvd01hdGNoZXIuamF2YQ==) | `66.66% <66.66%> (ø)` | |
| [.../java/org/apache/pinot/core/util/GapfillUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS91dGlsL0dhcGZpbGxVdGlscy5qYXZh) | `75.24% <81.42%> (+11.61%)` | :arrow_up: |
| [...va/org/apache/pinot/core/plan/CombinePlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL0NvbWJpbmVQbGFuTm9kZS5qYXZh) | `83.33% <81.81%> (-4.17%)` | :arrow_down: |
| [...inot/core/query/reduce/PostAggregationHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUG9zdEFnZ3JlZ2F0aW9uSGFuZGxlci5qYXZh) | `92.00% <83.33%> (+0.23%)` | :arrow_up: |
| [...ore/query/reduce/filter/LiteralValueExtractor.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvZmlsdGVyL0xpdGVyYWxWYWx1ZUV4dHJhY3Rvci5qYXZh) | `83.33% <83.33%> (ø)` | |
| [...query/reduce/PreAggregateGapfillFilterHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUHJlQWdncmVnYXRlR2FwZmlsbEZpbHRlckhhbmRsZXIuamF2YQ==) | `85.71% <85.71%> (ø)` | |
| [.../reduce/PreAggregationGapFillDataTableReducer.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUHJlQWdncmVnYXRpb25HYXBGaWxsRGF0YVRhYmxlUmVkdWNlci5qYXZh) | `86.73% <86.73%> (ø)` | |
| ... and [393 more](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Last update [916d807...f78cb1d](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r808724195
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/plan/CombinePlanNode.java
##########
@@ -161,8 +162,18 @@ public BaseCombineOperator run() {
// Streaming query (only support selection only)
return new StreamingSelectionOnlyCombineOperator(operators, _queryContext, _executorService, _streamObserver);
}
+ GapfillUtils.GapfillType gapfillType = GapfillUtils.getGapfillType(_queryContext);
Review comment:
I need to handle the gapfill query first because otherwise I would have to verify that the query is not a gapfill query in the last two `if` branches. I prefer to keep them as they are.
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r808721015
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/util/GapfillUtils.java
##########
@@ -31,7 +36,25 @@
*/
public class GapfillUtils {
private static final String POST_AGGREGATE_GAP_FILL = "postaggregategapfill";
+ private static final String GAP_FILL = "gapfill";
private static final String FILL = "fill";
+ private static final String TIME_SERIES_ON = "timeSeriesOn";
+ private static final int STARTING_INDEX_OF_OPTIONAL_ARGS_FOR_PRE_AGGREGATE_GAP_FILL = 5;
+
+  public enum GapfillType {
+    // a single SQL query with gapfill only
+    Gapfill,
+    // gapfill as the subquery; the outer query may have a filter
+    GapfillSelect,
+    // gapfill as the subquery; the outer query has the aggregation
+    GapfillAggregate,
+    // aggregation as the subquery; the outer query is gapfill
+    AggregateGapfill,
+    // aggregation as the innermost subquery, gapfill as the middle subquery,
+    // and a different aggregation in the outer query
+    AggregateGapfillAggregate,
+    // no gapfill at all
+    None
Review comment:
DONE
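For illustration, the `GapfillAggregate` nesting shape described by the enum above might look like the following query. This is a hedged sketch only: the table and column names (`parkingData`, `ts`, `lotId`, `status`) are hypothetical, and the exact argument order of `GapFill`/`Fill`/`TimeSeriesOn` should be checked against design #7422 rather than taken from this sketch.

```sql
-- GapfillAggregate: gapfill in the subquery, aggregation in the outer query.
-- The inner query fills the missing status per lotId for each hourly bucket;
-- the outer query then aggregates across lots per bucket.
SELECT time_bucket, SUM(status) AS occupied
FROM (
    SELECT GapFill(ts, '1:MILLISECONDS:EPOCH',
                   '1636257600000', '1636286400000', '1:HOURS',
                   Fill(status, 'FILL_PREVIOUS_VALUE'),
                   TimeSeriesOn(lotId)) AS time_bucket,
           lotId, status
    FROM parkingData
    WHERE ts >= 1636257600000 AND ts < 1636286400000
)
GROUP BY time_bucket
ORDER BY time_bucket
```

The other enum values follow the same pattern with the gapfill and aggregation layers swapped or stacked one level deeper.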
[GitHub] [pinot] codecov-commenter edited a comment on pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#issuecomment-1013577523
# [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) Report
> Merging [#8029](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (a5316f7) into [master](https://codecov.io/gh/apache/pinot/commit/3f98ce37fdaef0335fcd82e621489d65751b1f55?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (3f98ce3) will **decrease** coverage by `6.53%`.
> The diff coverage is `82.09%`.
[![Impacted file tree graph](https://codecov.io/gh/apache/pinot/pull/8029/graphs/tree.svg?width=650&height=150&src=pr&token=4ibza2ugkz&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
```diff
@@ Coverage Diff @@
## master #8029 +/- ##
============================================
- Coverage 70.72% 64.19% -6.54%
+ Complexity 4242 4241 -1
============================================
Files 1631 1596 -35
Lines 85279 84069 -1210
Branches 12844 12808 -36
============================================
- Hits 60316 53964 -6352
- Misses 20799 26214 +5415
+ Partials 4164 3891 -273
```
| Flag | Coverage Δ | |
|---|---|---|
| integration1 | `?` | |
| integration2 | `?` | |
| unittests1 | `67.14% <82.30%> (+0.15%)` | :arrow_up: |
| unittests2 | `13.99% <0.00%> (-0.10%)` | :arrow_down: |
Flags with carried forward coverage won't be shown. [Click here](https://docs.codecov.io/docs/carryforward-flags?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#carryforward-flags-in-the-pull-request-comment) to find out more.
| [Impacted Files](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | Coverage Δ | |
|---|---|---|
| [...roker/requesthandler/BaseBrokerRequestHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtYnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9icm9rZXIvcmVxdWVzdGhhbmRsZXIvQmFzZUJyb2tlclJlcXVlc3RIYW5kbGVyLmphdmE=) | `24.41% <0.00%> (-47.44%)` | :arrow_down: |
| [...t/core/query/selection/SelectionOperatorUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9zZWxlY3Rpb24vU2VsZWN0aW9uT3BlcmF0b3JVdGlscy5qYXZh) | `86.08% <0.00%> (-6.10%)` | :arrow_down: |
| [...query/reduce/ColumnDataToBlockValSetConverter.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvQ29sdW1uRGF0YVRvQmxvY2tWYWxTZXRDb252ZXJ0ZXIuamF2YQ==) | `17.30% <17.30%> (ø)` | |
| [...ot/core/query/reduce/filter/RowMatcherFactory.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvZmlsdGVyL1Jvd01hdGNoZXJGYWN0b3J5LmphdmE=) | `66.66% <66.66%> (ø)` | |
| [...pinot/core/plan/maker/InstancePlanMakerImplV2.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL21ha2VyL0luc3RhbmNlUGxhbk1ha2VySW1wbFYyLmphdmE=) | `65.74% <70.58%> (-11.03%)` | :arrow_down: |
| [...inot/core/query/reduce/PostAggregationHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUG9zdEFnZ3JlZ2F0aW9uSGFuZGxlci5qYXZh) | `92.10% <75.00%> (+0.34%)` | :arrow_up: |
| [...xt/utils/BrokerRequestToQueryContextConverter.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZXF1ZXN0L2NvbnRleHQvdXRpbHMvQnJva2VyUmVxdWVzdFRvUXVlcnlDb250ZXh0Q29udmVydGVyLmphdmE=) | `90.85% <76.74%> (-7.54%)` | :arrow_down: |
| [.../java/org/apache/pinot/core/util/GapfillUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS91dGlsL0dhcGZpbGxVdGlscy5qYXZh) | `76.53% <82.85%> (+12.89%)` | :arrow_up: |
| [...va/org/apache/pinot/core/plan/CombinePlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL0NvbWJpbmVQbGFuTm9kZS5qYXZh) | `83.33% <83.33%> (-4.17%)` | :arrow_down: |
| [...ore/query/reduce/filter/LiteralValueExtractor.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvZmlsdGVyL0xpdGVyYWxWYWx1ZUV4dHJhY3Rvci5qYXZh) | `83.33% <83.33%> (ø)` | |
| ... and [390 more](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Last update [3f98ce3...a5316f7](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r814182944
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/reduce/PreAggregateGapfillFilterHandler.java
##########
@@ -0,0 +1,74 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.core.query.reduce;
+
+import java.util.HashMap;
+import java.util.Map;
+import org.apache.pinot.common.request.context.ExpressionContext;
+import org.apache.pinot.common.request.context.FilterContext;
+import org.apache.pinot.common.utils.DataSchema;
+import org.apache.pinot.core.query.reduce.filter.ColumnValueExtractor;
+import org.apache.pinot.core.query.reduce.filter.LiteralValueExtractor;
+import org.apache.pinot.core.query.reduce.filter.RowMatcher;
+import org.apache.pinot.core.query.reduce.filter.ValueExtractor;
+import org.apache.pinot.core.query.reduce.filter.ValueExtractorFactory;
+import org.apache.pinot.core.util.GapfillUtils;
+
+/**
+ * Handler for Filter clause of PreAggregateGapFill.
+ */
+public class PreAggregateGapfillFilterHandler implements ValueExtractorFactory {
Review comment:
Fixed
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r819258819
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/util/GapfillUtils.java
##########
@@ -31,7 +36,10 @@
*/
public class GapfillUtils {
private static final String POST_AGGREGATE_GAP_FILL = "postaggregategapfill";
Review comment:
PostAggregationGapfillQueriesTest is still using this one. I will deprecate it as part of the next PR.
[GitHub] [pinot] codecov-commenter edited a comment on pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#issuecomment-1013577523
# [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) Report
> Merging [#8029](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (98cf976) into [master](https://codecov.io/gh/apache/pinot/commit/3f98ce37fdaef0335fcd82e621489d65751b1f55?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (3f98ce3) will **increase** coverage by `0.10%`.
> The diff coverage is `79.51%`.
[![Impacted file tree graph](https://codecov.io/gh/apache/pinot/pull/8029/graphs/tree.svg?width=650&height=150&src=pr&token=4ibza2ugkz&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
```diff
@@ Coverage Diff @@
## master #8029 +/- ##
============================================
+ Coverage 70.72% 70.83% +0.10%
- Complexity 4242 4248 +6
============================================
Files 1631 1641 +10
Lines 85279 86103 +824
Branches 12844 13034 +190
============================================
+ Hits 60316 60993 +677
- Misses 20799 20892 +93
- Partials 4164 4218 +54
```
| Flag | Coverage Δ | |
|---|---|---|
| integration1 | `28.69% <16.03%> (+<0.01%)` | :arrow_up: |
| integration2 | `27.31% <16.57%> (-0.19%)` | :arrow_down: |
| unittests1 | `67.13% <79.18%> (+0.14%)` | :arrow_up: |
| unittests2 | `14.09% <0.00%> (-0.01%)` | :arrow_down: |
Flags with carried forward coverage won't be shown. [Click here](https://docs.codecov.io/docs/carryforward-flags?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#carryforward-flags-in-the-pull-request-comment) to find out more.
| [Impacted Files](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | Coverage Δ | |
|---|---|---|
| [...roker/requesthandler/BaseBrokerRequestHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtYnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9icm9rZXIvcmVxdWVzdGhhbmRsZXIvQmFzZUJyb2tlclJlcXVlc3RIYW5kbGVyLmphdmE=) | `71.72% <0.00%> (-0.13%)` | :arrow_down: |
| [...t/core/query/selection/SelectionOperatorUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9zZWxlY3Rpb24vU2VsZWN0aW9uT3BlcmF0b3JVdGlscy5qYXZh) | `91.58% <0.00%> (-0.60%)` | :arrow_down: |
| [...query/reduce/ColumnDataToBlockValSetConverter.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvQ29sdW1uRGF0YVRvQmxvY2tWYWxTZXRDb252ZXJ0ZXIuamF2YQ==) | `17.30% <17.30%> (ø)` | |
| [...inot/core/query/reduce/PostAggregationHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUG9zdEFnZ3JlZ2F0aW9uSGFuZGxlci5qYXZh) | `91.89% <66.66%> (+0.12%)` | :arrow_up: |
| [...ot/core/query/reduce/filter/RowMatcherFactory.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvZmlsdGVyL1Jvd01hdGNoZXJGYWN0b3J5LmphdmE=) | `66.66% <66.66%> (ø)` | |
| [.../java/org/apache/pinot/core/util/GapfillUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS91dGlsL0dhcGZpbGxVdGlscy5qYXZh) | `68.79% <70.79%> (+5.15%)` | :arrow_up: |
| [...va/org/apache/pinot/core/plan/CombinePlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL0NvbWJpbmVQbGFuTm9kZS5qYXZh) | `83.33% <75.00%> (-4.17%)` | :arrow_down: |
| [...ore/query/reduce/filter/LiteralValueExtractor.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvZmlsdGVyL0xpdGVyYWxWYWx1ZUV4dHJhY3Rvci5qYXZh) | `83.33% <83.33%> (ø)` | |
| [...org/apache/pinot/sql/parsers/CalciteSqlParser.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9zcWwvcGFyc2Vycy9DYWxjaXRlU3FsUGFyc2VyLmphdmE=) | `86.38% <85.71%> (-0.18%)` | :arrow_down: |
| [.../pinot/core/query/reduce/GapfillFilterHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvR2FwZmlsbEZpbHRlckhhbmRsZXIuamF2YQ==) | `85.71% <85.71%> (ø)` | |
| ... and [83 more](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Last update [3f98ce3...98cf976](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: commits-unsubscribe@pinot.apache.org
For queries about this service, please contact Infrastructure at:
users@infra.apache.org
---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@pinot.apache.org
For additional commands, e-mail: commits-help@pinot.apache.org
[GitHub] [pinot] amrishlal commented on a change in pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
amrishlal commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r820149183
##########
File path: pinot-common/src/main/java/org/apache/pinot/sql/parsers/CalciteSqlParser.java
##########
@@ -369,6 +369,9 @@ private static PinotQuery compileCalciteSqlToPinotQuery(String sql) {
DataSource dataSource = new DataSource();
dataSource.setTableName(fromNode.toString());
pinotQuery.setDataSource(dataSource);
+ if (fromNode instanceof SqlSelect || fromNode instanceof SqlOrderBy) {
Review comment:
Not sure if I understand. If there is no table name in the FROM clause, then what is the use of setting the tableName, and that too to a string that actually represents another query? Note that the definition of DataSource in the query.thrift file allows tableName to be null:
```
struct DataSource {
1: optional string tableName;
2: optional PinotQuery subquery;
}
```
Also, Pinot supports queries without a FROM clause, so even in the current code the table name (and also the subquery) can both be null at the same time.
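The optional-field semantics being discussed — a DataSource where tableName, the subquery, or both may be unset — can be modeled in a small standalone sketch. This is a hypothetical simplification (the real thrift struct carries a PinotQuery as the subquery, not another DataSource):

```java
// Sketch of the optional-field semantics of the DataSource struct above:
// both tableName and subquery may be null (e.g. a query without a FROM clause),
// or exactly one of them is set.
public class DataSourceSketch {
    String tableName;          // optional: null when FROM is a subquery or absent
    DataSourceSketch subquery; // optional: null for a plain table scan (simplified)

    static DataSourceSketch forTable(String name) {
        DataSourceSketch ds = new DataSourceSketch();
        ds.tableName = name;
        return ds;
    }

    static DataSourceSketch forSubquery(DataSourceSketch inner) {
        DataSourceSketch ds = new DataSourceSketch();
        ds.subquery = inner;
        return ds;
    }

    public static void main(String[] args) {
        DataSourceSketch outer = forSubquery(forTable("myTable"));
        assert outer.tableName == null;              // no table name on the outer level
        assert outer.subquery.tableName.equals("myTable");
        System.out.println("ok");
    }
}
```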
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r829478960
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/reduce/ColumnDataToBlockValSetConverter.java
##########
@@ -0,0 +1,181 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.core.query.reduce;
+
+import java.util.List;
+import javax.annotation.Nullable;
+import org.apache.pinot.common.utils.DataSchema;
+import org.apache.pinot.common.utils.DataTable;
+import org.apache.pinot.core.common.BlockValSet;
+import org.apache.pinot.segment.spi.index.reader.Dictionary;
+import org.apache.pinot.spi.data.FieldSpec;
+
+
+/**
+ * As for Gapfilling Function, all raw data will be retrieved from the pinot
+ * server and merged on the pinot broker. The data will be in {@link DataTable}
+ * format.
+ * As part of Gapfilling Function execution plan, the aggregation function will
+ * work on the merged data on pinot broker. The aggregation function only takes
+ * the {@link BlockValSet} format.
+ * This is the helper class to convert the data from {@link DataTable} to the
+ * block of values {@link BlockValSet}, which is used as input to the aggregation
+ * function.
+ */
+@SuppressWarnings({"rawtypes", "unchecked"})
+public class ColumnDataToBlockValSetConverter implements BlockValSet {
+
+ private final FieldSpec.DataType _dataType;
+ private final List<Object[]> _rows;
+ private final int _columnIndex;
+
+ public ColumnDataToBlockValSetConverter(DataSchema.ColumnDataType columnDataType, List<Object[]> rows,
+ int columnIndex) {
+ _dataType = columnDataType.toDataType();
+ _rows = rows;
+ _columnIndex = columnIndex;
+ }
+
+ @Override
+ public FieldSpec.DataType getValueType() {
+ return _dataType;
+ }
+
+ @Override
+ public boolean isSingleValue() {
+ return true;
+ }
+
+ @Nullable
+ @Override
+ public Dictionary getDictionary() {
+ throw new UnsupportedOperationException("Not supported");
+ }
+
+ @Override
+ public int[] getDictionaryIdsSV() {
+ throw new UnsupportedOperationException("Not supported");
+ }
+
+ @Override
+ public int[] getIntValuesSV() {
+ if (_dataType == FieldSpec.DataType.INT) {
+ int[] result = new int[_rows.size()];
+ for (int i = 0; i < result.length; i++) {
+ result[i] = (Integer) _rows.get(i)[_columnIndex];
+ }
+ return result;
+ }
+ throw new UnsupportedOperationException("Not supported");
Review comment:
Fixed
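For context, the single-column extraction performed by getIntValuesSV() in the diff above can be sketched as a standalone snippet (hypothetical class and sample data, not the actual Pinot implementation):

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch: extract one column of boxed Integers from row-major
// Object[] rows into a primitive int[], mirroring getIntValuesSV() above.
public class ColumnExtractSketch {
    static int[] extractIntColumn(List<Object[]> rows, int columnIndex) {
        int[] result = new int[rows.size()];
        for (int i = 0; i < result.length; i++) {
            result[i] = (Integer) rows.get(i)[columnIndex];
        }
        return result;
    }

    public static void main(String[] args) {
        List<Object[]> rows = new ArrayList<>();
        rows.add(new Object[]{1000L, 7});   // {timestamp, value}
        rows.add(new Object[]{2000L, 9});
        int[] col = extractIntColumn(rows, 1);
        assert col.length == 2 && col[0] == 7 && col[1] == 9;
        System.out.println(col[0] + "," + col[1]);
    }
}
```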
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r830751576
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/reduce/GapfillProcessor.java
##########
@@ -0,0 +1,471 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.core.query.reduce;
+
+import com.google.common.base.Preconditions;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import org.apache.pinot.common.request.context.ExpressionContext;
+import org.apache.pinot.common.request.context.FunctionContext;
+import org.apache.pinot.common.response.broker.BrokerResponseNative;
+import org.apache.pinot.common.response.broker.ResultTable;
+import org.apache.pinot.common.utils.DataSchema;
+import org.apache.pinot.common.utils.DataSchema.ColumnDataType;
+import org.apache.pinot.core.common.BlockValSet;
+import org.apache.pinot.core.data.table.Key;
+import org.apache.pinot.core.query.aggregation.function.AggregationFunction;
+import org.apache.pinot.core.query.aggregation.function.AggregationFunctionFactory;
+import org.apache.pinot.core.query.aggregation.function.CountAggregationFunction;
+import org.apache.pinot.core.query.aggregation.groupby.GroupByResultHolder;
+import org.apache.pinot.core.query.request.context.QueryContext;
+import org.apache.pinot.core.util.GapfillUtils;
+import org.apache.pinot.spi.data.DateTimeFormatSpec;
+import org.apache.pinot.spi.data.DateTimeGranularitySpec;
+
+
+/**
+ * Helper class to reduce and set gap fill results into the BrokerResponseNative
+ */
+@SuppressWarnings({"rawtypes", "unchecked"})
+public class GapfillProcessor {
+ private final QueryContext _queryContext;
+
+ private final int _limitForAggregatedResult;
+ private final DateTimeGranularitySpec _gapfillDateTimeGranularity;
+ private final DateTimeGranularitySpec _postGapfillDateTimeGranularity;
+ private final DateTimeFormatSpec _dateTimeFormatter;
+ private final long _startMs;
+ private final long _endMs;
+ private final long _gapfillTimeBucketSize;
+ private final long _postGapfillTimeBucketSize;
+ private final int _numOfTimeBuckets;
+ private final List<Integer> _groupByKeyIndexes;
+ private final Set<Key> _groupByKeys;
+ private final Map<Key, Object[]> _previousByGroupKey;
+ private final Map<String, ExpressionContext> _fillExpressions;
+ private final List<ExpressionContext> _timeSeries;
+ private final GapfillUtils.GapfillType _gapfillType;
+ private int _limitForGapfilledResult;
+ private boolean[] _isGroupBySelections;
+ private final int _timeBucketColumnIndex;
+ private int[] _sourceColumnIndexForResultSchema = null;
+ private final int _aggregationSize;
+
+ GapfillProcessor(QueryContext queryContext, GapfillUtils.GapfillType gapfillType) {
+ _queryContext = queryContext;
+ _gapfillType = gapfillType;
+ _limitForAggregatedResult = queryContext.getLimit();
+ if (_gapfillType == GapfillUtils.GapfillType.AGGREGATE_GAP_FILL
+ || _gapfillType == GapfillUtils.GapfillType.GAP_FILL) {
+ _limitForGapfilledResult = queryContext.getLimit();
+ } else {
+ _limitForGapfilledResult = queryContext.getSubquery().getLimit();
+ }
+
+ ExpressionContext gapFillSelection = GapfillUtils.getGapfillExpressionContext(queryContext, _gapfillType);
+ _timeBucketColumnIndex = GapfillUtils.findTimeBucketColumnIndex(queryContext, _gapfillType);
+
+ List<ExpressionContext> args = gapFillSelection.getFunction().getArguments();
+
+ _dateTimeFormatter = new DateTimeFormatSpec(args.get(1).getLiteral());
+ _gapfillDateTimeGranularity = new DateTimeGranularitySpec(args.get(4).getLiteral());
+ _postGapfillDateTimeGranularity = new DateTimeGranularitySpec(args.get(5).getLiteral());
Review comment:
Here is some background information.
Currently we only have one time bucket size, and usually one entity has only one representative state per time bucket. Take a parking lot as an example: a parking lot is either occupied or not.
Suppose we want to know how long the parking lots inside a parking building are occupied per day. The time bucket size is one day, so the parking lot's state for the whole bucket would be decided by a single event within it.
Introducing different granularities for gapfill and post-gapfill makes it possible to calculate such metrics more precisely. For the parking lot, we can use 5 minutes as the gapfill bucket size and then aggregate all occupied states within the post-gapfill time bucket (1 day).
Please let me know if you have any questions about it. @Jackie-Jiang
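The two-granularity idea can be illustrated with a small standalone sketch (hypothetical names and data, not Pinot code): sparse occupancy events are first gapfilled into 5-minute buckets by carrying the last observed state forward, then the occupied buckets are aggregated into the 1-day post-gapfill bucket.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: gapfill sparse occupancy events into 5-minute buckets by carrying
// forward the last observed state, then aggregate occupied time per day.
public class GapfillSketch {
    static final long BUCKET_MS = 5 * 60 * 1000L;    // gapfill granularity
    static final long DAY_MS = 24 * 60 * 60 * 1000L; // post-gapfill granularity

    // events: bucket-start timestamp -> occupied state observed in that bucket
    static long occupiedMsForDay(Map<Long, Boolean> events, long dayStartMs) {
        boolean last = false;  // assumed initial state before the first event
        long occupiedMs = 0;
        for (long t = dayStartMs; t < dayStartMs + DAY_MS; t += BUCKET_MS) {
            Boolean observed = events.get(t);
            if (observed != null) {
                last = observed;  // real data point for this bucket
            }                     // otherwise: gap, carry the previous state forward
            if (last) {
                occupiedMs += BUCKET_MS;
            }
        }
        return occupiedMs;
    }

    public static void main(String[] args) {
        Map<Long, Boolean> events = new HashMap<>();
        events.put(0L, true);                    // occupied at midnight
        events.put(12 * 60 * 60 * 1000L, false); // freed at noon
        long ms = occupiedMsForDay(events, 0L);
        assert ms == 12 * 60 * 60 * 1000L;       // occupied exactly half the day
        System.out.println(ms / (60 * 60 * 1000L) + " hours occupied");
    }
}
```

With a single 1-day bucket, the same input would collapse to one state for the whole day; the finer gapfill bucket is what preserves the intraday transition.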
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r830748992
##########
File path: pinot-core/src/test/java/org/apache/pinot/queries/BaseQueriesTest.java
##########
@@ -197,21 +198,31 @@ protected BrokerResponseNative getBrokerResponseForSqlQuery(String sqlQuery, Pla
}
queryOptions.put(Request.QueryOptionKey.GROUP_BY_MODE, Request.SQL);
queryOptions.put(Request.QueryOptionKey.RESPONSE_FORMAT, Request.SQL);
+ BrokerRequest strippedBrokerRequest = GapfillUtils.stripGapfill(brokerRequest);
+ queryOptions = strippedBrokerRequest.getPinotQuery().getQueryOptions();
+ if (queryOptions == null) {
+ queryOptions = new HashMap<>();
+ strippedBrokerRequest.getPinotQuery().setQueryOptions(queryOptions);
+ }
+ queryOptions.put(Request.QueryOptionKey.GROUP_BY_MODE, Request.SQL);
+ queryOptions.put(Request.QueryOptionKey.RESPONSE_FORMAT, Request.SQL);
QueryContext queryContext = BrokerRequestToQueryContextConverter.convert(brokerRequest);
- return getBrokerResponse(queryContext, planMaker);
+ QueryContext strippedQueryContext = BrokerRequestToQueryContextConverter.convert(strippedBrokerRequest);
Review comment:
Fixed
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r831471705
##########
File path: pinot-broker/src/main/java/org/apache/pinot/broker/requesthandler/BaseBrokerRequestHandler.java
##########
@@ -217,7 +218,10 @@ private BrokerResponseNative handleSQLRequest(long requestId, String query, Json
requestStatistics.setErrorCode(QueryException.PQL_PARSING_ERROR_CODE);
return new BrokerResponseNative(QueryException.getException(QueryException.PQL_PARSING_ERROR, e));
}
- PinotQuery pinotQuery = brokerRequest.getPinotQuery();
+
+ BrokerRequest serverBrokerRequest = GapfillUtils.stripGapfill(brokerRequest);
+
+ PinotQuery pinotQuery = serverBrokerRequest.getPinotQuery();
Review comment:
Fixed.
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r793124452
##########
File path: pinot-common/src/main/java/org/apache/pinot/sql/parsers/CalciteSqlParser.java
##########
@@ -130,8 +130,25 @@ public static PinotQuery compileToPinotQuery(String sql)
if (!options.isEmpty()) {
sql = removeOptionsFromSql(sql);
}
+
+ SqlParser sqlParser = SqlParser.create(sql, PARSER_CONFIG);
+ SqlNode sqlNode;
+ try {
+ sqlNode = sqlParser.parseQuery();
+ } catch (SqlParseException e) {
+ throw new SqlCompilationException("Caught exception while parsing query: " + sql, e);
+ }
+
// Compile Sql without OPTION statements.
- PinotQuery pinotQuery = compileCalciteSqlToPinotQuery(sql);
+ PinotQuery pinotQuery = compileSqlNodeToPinotQuery(sqlNode);
+
+ SqlSelect sqlSelect = getSelectNode(sqlNode);
+ if (sqlSelect != null) {
+ SqlNode fromNode = sqlSelect.getFrom();
+ if (fromNode != null && (fromNode instanceof SqlSelect || fromNode instanceof SqlOrderBy)) {
+ pinotQuery.getDataSource().setSubquery(compileSqlNodeToPinotQuery(fromNode));
+ }
+ }
Review comment:
I broke compileCalciteSqlToPinotQuery down into two parts:
1. parsing the SQL statement into a SqlNode
2. constructing the PinotQuery from the SqlNode
The breakdown is needed because we reuse the second part to construct the PinotQuery for both the outer query and the inner query, so the change to the original code is necessary.
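This refactoring follows a common compiler-frontend pattern: parse once to an intermediate node, then reuse the node-to-query conversion for both the outer node and any nested subquery node. A toy sketch of the pattern (hypothetical types, not the Pinot/Calcite API):

```java
// Sketch of the parse-then-convert split: parse(sql) runs once, and
// convert(node) is reused on both the outer node and any nested subquery node.
public class TwoPhaseCompileSketch {
    // Hypothetical minimal "AST": a query body plus an optional nested FROM-subquery.
    static class Node {
        final String body;
        final Node subquery;
        Node(String body, Node subquery) { this.body = body; this.subquery = subquery; }
    }

    // Phase 1: parse SQL text into a node tree (toy parser for "SELECT ... FROM (...)").
    static Node parse(String sql) {
        int open = sql.indexOf("FROM (");
        if (open >= 0 && sql.endsWith(")")) {
            String inner = sql.substring(open + 6, sql.length() - 1);
            return new Node(sql.substring(0, open + 4), parse(inner));
        }
        return new Node(sql, null);
    }

    // Phase 2: convert a node into a "query"; the same code serves outer and inner nodes.
    static String convert(Node n) {
        return n.subquery == null ? n.body : n.body + " <subquery: " + convert(n.subquery) + ">";
    }

    public static void main(String[] args) {
        Node root = parse("SELECT a FROM (SELECT b FROM t)");
        System.out.println(convert(root));
    }
}
```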
[GitHub] [pinot] codecov-commenter edited a comment on pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#issuecomment-1013577523
# [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) Report
> Merging [#8029](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (57e966b) into [master](https://codecov.io/gh/apache/pinot/commit/fb572bd0aba20d2b8a83320df6dd24cb0c654b30?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (fb572bd) will **decrease** coverage by `0.25%`.
> The diff coverage is `69.63%`.
[![Impacted file tree graph](https://codecov.io/gh/apache/pinot/pull/8029/graphs/tree.svg?width=650&height=150&src=pr&token=4ibza2ugkz&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
```diff
@@ Coverage Diff @@
## master #8029 +/- ##
============================================
- Coverage 70.39% 70.13% -0.26%
Complexity 4308 4308
============================================
Files 1623 1636 +13
Lines 84365 85120 +755
Branches 12657 12839 +182
============================================
+ Hits 59386 59698 +312
- Misses 20876 21257 +381
- Partials 4103 4165 +62
```
| Flag | Coverage Δ | |
|---|---|---|
| integration1 | `?` | |
| integration2 | `27.38% <10.79%> (?)` | |
| unittests1 | `67.88% <69.39%> (-0.02%)` | :arrow_down: |
| unittests2 | `14.11% <0.00%> (-0.10%)` | :arrow_down: |
Flags with carried forward coverage won't be shown. [Click here](https://docs.codecov.io/docs/carryforward-flags?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#carryforward-flags-in-the-pull-request-comment) to find out more.
| [Impacted Files](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | Coverage Δ | |
|---|---|---|
| [...he/pinot/core/plan/GapfillAggregationPlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL0dhcGZpbGxBZ2dyZWdhdGlvblBsYW5Ob2RlLmphdmE=) | `0.00% <0.00%> (ø)` | |
| [...t/core/query/selection/SelectionOperatorUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9zZWxlY3Rpb24vU2VsZWN0aW9uT3BlcmF0b3JVdGlscy5qYXZh) | `91.58% <0.00%> (-0.60%)` | :arrow_down: |
| [...query/reduce/ColumnDataToBlockValSetConverter.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvQ29sdW1uRGF0YVRvQmxvY2tWYWxTZXRDb252ZXJ0ZXIuamF2YQ==) | `17.30% <17.30%> (ø)` | |
| [...plan/GapfillAggregationGroupByOrderByPlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL0dhcGZpbGxBZ2dyZWdhdGlvbkdyb3VwQnlPcmRlckJ5UGxhbk5vZGUuamF2YQ==) | `51.21% <51.21%> (ø)` | |
| [.../combine/GapfillGroupByOrderByCombineOperator.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9vcGVyYXRvci9jb21iaW5lL0dhcGZpbGxHcm91cEJ5T3JkZXJCeUNvbWJpbmVPcGVyYXRvci5qYXZh) | `58.88% <58.88%> (ø)` | |
| [...va/org/apache/pinot/core/plan/CombinePlanNode.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9wbGFuL0NvbWJpbmVQbGFuTm9kZS5qYXZh) | `85.00% <60.00%> (+1.07%)` | :arrow_up: |
| [...che/pinot/core/query/reduce/filter/RowMatcher.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvZmlsdGVyL1Jvd01hdGNoZXIuamF2YQ==) | `66.66% <66.66%> (ø)` | |
| [.../java/org/apache/pinot/core/util/GapfillUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS91dGlsL0dhcGZpbGxVdGlscy5qYXZh) | `73.61% <82.92%> (+9.97%)` | :arrow_up: |
| [...inot/core/query/reduce/PostAggregationHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUG9zdEFnZ3JlZ2F0aW9uSGFuZGxlci5qYXZh) | `91.89% <83.33%> (+0.22%)` | :arrow_up: |
| [...ore/query/reduce/filter/LiteralValueExtractor.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvZmlsdGVyL0xpdGVyYWxWYWx1ZUV4dHJhY3Rvci5qYXZh) | `83.33% <83.33%> (ø)` | |
| ... and [185 more](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Last update [fb572bd...57e966b](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
[GitHub] [pinot] amrishlal commented on a change in pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
amrishlal commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r804423493
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/request/context/utils/QueryContextUtils.java
##########
@@ -38,7 +39,11 @@ private QueryContextUtils() {
* Returns {@code true} if the given query is a selection query, {@code false} otherwise.
*/
public static boolean isSelectionQuery(QueryContext query) {
- return query.getAggregationFunctions() == null;
+ if (GapfillUtils.isGapfill(query)) {
Review comment:
This call appears to be expensive since it traverses the entire select list. Can it be made more efficient, given that isSelectionQuery will be called frequently, even by non-gapfill queries? Maybe a flag can be set at compile time to indicate whether this is a gapfill query?
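The compile-time flag suggested above could look roughly like this (hypothetical fields and detection logic, not the actual QueryContext API):

```java
import java.util.List;

// Sketch: compute the gapfill check once at construction time and cache it,
// so later per-query checks are O(1) instead of re-scanning the select list.
public class QueryContextSketch {
    private final List<String> selectExpressions;
    private final boolean hasGapfill;  // hypothetical cached flag

    QueryContextSketch(List<String> selectExpressions) {
        this.selectExpressions = selectExpressions;
        // The expensive scan happens exactly once, when the context is built.
        boolean found = false;
        for (String expr : selectExpressions) {
            if (expr.toLowerCase().startsWith("gapfill(")) {
                found = true;
                break;
            }
        }
        this.hasGapfill = found;
    }

    boolean isGapfill() {  // O(1) at query time
        return hasGapfill;
    }

    public static void main(String[] args) {
        QueryContextSketch q = new QueryContextSketch(List.of("GAPFILL(ts, ...)", "count(*)"));
        assert q.isGapfill();
        System.out.println(q.isGapfill());
    }
}
```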
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/operator/combine/GapfillGroupByOrderByCombineOperator.java
##########
@@ -0,0 +1,263 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.core.operator.combine;
+
+import com.google.common.base.Preconditions;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.Iterator;
+import java.util.List;
+import java.util.concurrent.ConcurrentLinkedQueue;
+import java.util.concurrent.CountDownLatch;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.TimeoutException;
+import org.apache.pinot.common.exception.QueryException;
+import org.apache.pinot.common.request.context.ExpressionContext;
+import org.apache.pinot.common.response.ProcessingException;
+import org.apache.pinot.common.utils.DataSchema;
+import org.apache.pinot.core.common.Operator;
+import org.apache.pinot.core.data.table.ConcurrentIndexedTable;
+import org.apache.pinot.core.data.table.IndexedTable;
+import org.apache.pinot.core.data.table.IntermediateRecord;
+import org.apache.pinot.core.data.table.Key;
+import org.apache.pinot.core.data.table.Record;
+import org.apache.pinot.core.data.table.UnboundedConcurrentIndexedTable;
+import org.apache.pinot.core.operator.AcquireReleaseColumnsSegmentOperator;
+import org.apache.pinot.core.operator.blocks.IntermediateResultsBlock;
+import org.apache.pinot.core.query.aggregation.function.AggregationFunction;
+import org.apache.pinot.core.query.aggregation.groupby.AggregationGroupByResult;
+import org.apache.pinot.core.query.aggregation.groupby.GroupKeyGenerator;
+import org.apache.pinot.core.query.request.context.QueryContext;
+import org.apache.pinot.core.util.GapfillUtils;
+import org.apache.pinot.core.util.GroupByUtils;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+
+/**
+ * Combine operator for aggregation group-by queries with SQL semantic.
+ * TODO: Use CombineOperatorUtils.getNumThreadsForQuery() to get the parallelism of the query instead of using
+ * all threads
+ */
+@SuppressWarnings("rawtypes")
+public class GapfillGroupByOrderByCombineOperator extends BaseCombineOperator {
+ public static final int MAX_TRIM_THRESHOLD = 1_000_000_000;
+ private static final Logger LOGGER = LoggerFactory.getLogger(GapfillGroupByOrderByCombineOperator.class);
+ private static final String OPERATOR_NAME = "GapfillGroupByOrderByCombineOperator";
+ private static final String EXPLAIN_NAME = "GAPFILL_COMBINE_GROUPBY_ORDERBY";
Review comment:
This name should be changed to COMBINE_GROUPBY_ORDERBY_GAPFILL to keep the naming scheme consistent with other Combine operators.
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/request/context/utils/QueryContextUtils.java
##########
@@ -47,16 +52,22 @@ public static boolean isSelectionQuery(QueryContext query) {
* Selection-only query at this moment means selection query without order-by.
*/
public static boolean isSelectionOnlyQuery(QueryContext query) {
- return query.getAggregationFunctions() == null && query.getOrderByExpressions() == null;
+ return query.getAggregationFunctions() == null
+ && query.getOrderByExpressions() == null
+ && !GapfillUtils.isGapfill(query);
Review comment:
A null check should be sufficient here to find out whether the query is a gapfill query: `query.getSubQueryContext() == null`
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/request/context/utils/QueryContextUtils.java
##########
@@ -38,7 +39,11 @@ private QueryContextUtils() {
* Returns {@code true} if the given query is a selection query, {@code false} otherwise.
*/
public static boolean isSelectionQuery(QueryContext query) {
- return query.getAggregationFunctions() == null;
+ if (GapfillUtils.isGapfill(query)) {
+ return isSelectionOnlyQuery(query.getSubQueryContext());
+ } else {
+ return query.getAggregationFunctions() == null;
+ }
Review comment:
Can this be simplified to:
`return query.getAggregationFunctions() == null && query.getSubQueryContext() == null`
because if a subquery exists, this is automatically a gapfill query?
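A sketch of the proposed simplification, again using reduced stand-in classes rather than the real `QueryContextUtils`:

```java
// Stand-ins for the Pinot query-context types; only the fields needed here.
class QueryContext {
  private final Object[] _aggregationFunctions;  // null when no aggregations
  private final QueryContext _subQueryContext;   // non-null only for gapfill queries

  QueryContext(Object[] aggregationFunctions, QueryContext subQueryContext) {
    _aggregationFunctions = aggregationFunctions;
    _subQueryContext = subQueryContext;
  }

  Object[] getAggregationFunctions() { return _aggregationFunctions; }
  QueryContext getSubQueryContext() { return _subQueryContext; }
}

class QueryContextUtils {
  // Reviewer's suggestion: a subquery implies gapfill, so no select-list
  // traversal is needed to classify the query.
  static boolean isSelectionQuery(QueryContext query) {
    return query.getAggregationFunctions() == null && query.getSubQueryContext() == null;
  }
}
```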
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/plan/GapfillAggregationPlanNode.java
##########
@@ -0,0 +1,175 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.core.plan;
+
+import com.google.common.base.Preconditions;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import org.apache.pinot.common.request.context.ExpressionContext;
+import org.apache.pinot.core.common.Operator;
+import org.apache.pinot.core.operator.blocks.IntermediateResultsBlock;
+import org.apache.pinot.core.operator.filter.BaseFilterOperator;
+import org.apache.pinot.core.operator.query.AggregationOperator;
+import org.apache.pinot.core.operator.query.DictionaryBasedAggregationOperator;
+import org.apache.pinot.core.operator.query.MetadataBasedAggregationOperator;
+import org.apache.pinot.core.operator.transform.TransformOperator;
+import org.apache.pinot.core.query.aggregation.function.AggregationFunction;
+import org.apache.pinot.core.query.aggregation.function.AggregationFunctionUtils;
+import org.apache.pinot.core.query.request.context.QueryContext;
+import org.apache.pinot.core.startree.CompositePredicateEvaluator;
+import org.apache.pinot.core.startree.StarTreeUtils;
+import org.apache.pinot.core.startree.plan.StarTreeTransformPlanNode;
+import org.apache.pinot.core.util.GapfillUtils;
+import org.apache.pinot.segment.spi.AggregationFunctionType;
+import org.apache.pinot.segment.spi.IndexSegment;
+import org.apache.pinot.segment.spi.index.reader.Dictionary;
+import org.apache.pinot.segment.spi.index.startree.AggregationFunctionColumnPair;
+import org.apache.pinot.segment.spi.index.startree.StarTreeV2;
+
+
+/**
+ * The <code>GapfillAggregationPlanNode</code> class provides the execution plan for gapfill aggregation only query on
+ * a single segment.
+ */
+@SuppressWarnings("rawtypes")
+public class GapfillAggregationPlanNode implements PlanNode {
Review comment:
Would it be possible to extend this class from `AggregationPlanNode` or do some sort of refactoring that would allow both classes to share common code?
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/operator/combine/GapfillGroupByOrderByCombineOperator.java
##########
@@ -0,0 +1,263 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.core.operator.combine;
+
+import com.google.common.base.Preconditions;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.Iterator;
+import java.util.List;
+import java.util.concurrent.ConcurrentLinkedQueue;
+import java.util.concurrent.CountDownLatch;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.TimeoutException;
+import org.apache.pinot.common.exception.QueryException;
+import org.apache.pinot.common.request.context.ExpressionContext;
+import org.apache.pinot.common.response.ProcessingException;
+import org.apache.pinot.common.utils.DataSchema;
+import org.apache.pinot.core.common.Operator;
+import org.apache.pinot.core.data.table.ConcurrentIndexedTable;
+import org.apache.pinot.core.data.table.IndexedTable;
+import org.apache.pinot.core.data.table.IntermediateRecord;
+import org.apache.pinot.core.data.table.Key;
+import org.apache.pinot.core.data.table.Record;
+import org.apache.pinot.core.data.table.UnboundedConcurrentIndexedTable;
+import org.apache.pinot.core.operator.AcquireReleaseColumnsSegmentOperator;
+import org.apache.pinot.core.operator.blocks.IntermediateResultsBlock;
+import org.apache.pinot.core.query.aggregation.function.AggregationFunction;
+import org.apache.pinot.core.query.aggregation.groupby.AggregationGroupByResult;
+import org.apache.pinot.core.query.aggregation.groupby.GroupKeyGenerator;
+import org.apache.pinot.core.query.request.context.QueryContext;
+import org.apache.pinot.core.util.GapfillUtils;
+import org.apache.pinot.core.util.GroupByUtils;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+
+/**
+ * Combine operator for aggregation group-by queries with SQL semantic.
+ * TODO: Use CombineOperatorUtils.getNumThreadsForQuery() to get the parallelism of the query instead of using
+ * all threads
+ */
+@SuppressWarnings("rawtypes")
+public class GapfillGroupByOrderByCombineOperator extends BaseCombineOperator {
Review comment:
Except for setting some variables in the constructor, this file appears to be a close copy of `GroupByOrderByCombineOperator`. I am wondering if it would be possible to extend `GapfillGroupByOrderByCombineOperator` from GroupByOrderByCombineOperator to avoid code duplication?
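One shape the suggested reuse could take, sketched with heavily reduced stand-ins (the real operators carry the full combine logic, which would stay in the base class):

```java
// Stand-in for the existing combine operator; shared logic lives here.
class GroupByOrderByCombineOperator {
  String getExplainName() {
    return "COMBINE_GROUPBY_ORDERBY";
  }

  String combine() {
    // The shared group-by/order-by combining would live here; subclasses
    // override only the small pieces that differ (names, constructor setup).
    return "combined by " + getExplainName();
  }
}

// The gapfill variant reuses everything and overrides only what differs,
// including the explain name in the scheme the reviewer asked for.
class GapfillGroupByOrderByCombineOperator extends GroupByOrderByCombineOperator {
  @Override
  String getExplainName() {
    return "COMBINE_GROUPBY_ORDERBY_GAPFILL";
  }
}
```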
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/request/context/utils/QueryContextUtils.java
##########
@@ -47,16 +52,22 @@ public static boolean isSelectionQuery(QueryContext query) {
* Selection-only query at this moment means selection query without order-by.
*/
public static boolean isSelectionOnlyQuery(QueryContext query) {
- return query.getAggregationFunctions() == null && query.getOrderByExpressions() == null;
+ return query.getAggregationFunctions() == null
+ && query.getOrderByExpressions() == null
+ && !GapfillUtils.isGapfill(query);
}
/**
- * Returns {@code true} if the given query is an aggregation query, {@code false} otherwise.
* Returns {@code true} if the given query is an aggregation query, {@code false} otherwise.
*/
public static boolean isAggregationQuery(QueryContext query) {
- AggregationFunction[] aggregationFunctions = query.getAggregationFunctions();
- return aggregationFunctions != null && (aggregationFunctions.length != 1
- || !(aggregationFunctions[0] instanceof DistinctAggregationFunction));
+ if (GapfillUtils.isGapfill(query)) {
Review comment:
Are all gapfill queries aggregate queries, or are there exceptions?
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/plan/GapfillAggregationGroupByOrderByPlanNode.java
##########
@@ -0,0 +1,110 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.core.plan;
+
+import com.google.common.base.Preconditions;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import org.apache.pinot.common.request.context.ExpressionContext;
+import org.apache.pinot.core.operator.filter.BaseFilterOperator;
+import org.apache.pinot.core.operator.query.AggregationGroupByOrderByOperator;
+import org.apache.pinot.core.operator.transform.TransformOperator;
+import org.apache.pinot.core.query.aggregation.function.AggregationFunction;
+import org.apache.pinot.core.query.aggregation.function.AggregationFunctionUtils;
+import org.apache.pinot.core.query.request.context.QueryContext;
+import org.apache.pinot.core.startree.CompositePredicateEvaluator;
+import org.apache.pinot.core.startree.StarTreeUtils;
+import org.apache.pinot.core.startree.plan.StarTreeTransformPlanNode;
+import org.apache.pinot.core.util.GapfillUtils;
+import org.apache.pinot.segment.spi.IndexSegment;
+import org.apache.pinot.segment.spi.index.startree.AggregationFunctionColumnPair;
+import org.apache.pinot.segment.spi.index.startree.StarTreeV2;
+
+
+/**
+ * The <code>GapfillAggregationGroupByOrderByPlanNode</code> class provides the execution plan for gapfill aggregation
+ * group-by order-by query on a single segment.
+ */
+@SuppressWarnings("rawtypes")
+public class GapfillAggregationGroupByOrderByPlanNode implements PlanNode {
Review comment:
Would it be possible to extend this class from `AggregationGroupByOrderByPlanNode` for code reuse or do some refactoring which would allow both classes to share common code?
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: commits-unsubscribe@pinot.apache.org
For queries about this service, please contact Infrastructure at:
users@infra.apache.org
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Pre-Aggregation Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r806340967
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/request/context/utils/QueryContextUtils.java
##########
@@ -38,7 +39,11 @@ private QueryContextUtils() {
* Returns {@code true} if the given query is a selection query, {@code false} otherwise.
*/
public static boolean isSelectionQuery(QueryContext query) {
- return query.getAggregationFunctions() == null;
+ if (GapfillUtils.isGapfill(query)) {
Review comment:
Done
[GitHub] [pinot] Jackie-Jiang commented on a change in pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
Jackie-Jiang commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r828299743
##########
File path: pinot-broker/src/main/java/org/apache/pinot/broker/requesthandler/BaseBrokerRequestHandler.java
##########
@@ -217,6 +218,10 @@ private BrokerResponseNative handleSQLRequest(long requestId, String query, Json
requestStatistics.setErrorCode(QueryException.PQL_PARSING_ERROR_CODE);
return new BrokerResponseNative(QueryException.getException(QueryException.PQL_PARSING_ERROR, e));
}
+
+ BrokerRequest originalBrokerRequest = brokerRequest;
+ brokerRequest = GapfillUtils.stripGapfill(originalBrokerRequest);
Review comment:
Let's name it `serverBrokerRequest`, which is the broker request sent to the server?
In `logBrokerResponse`, we should probably pass in the original broker request
##########
File path: pinot-broker/src/main/java/org/apache/pinot/broker/requesthandler/BaseBrokerRequestHandler.java
##########
@@ -2183,9 +2192,9 @@ private void attachTimeBoundary(String rawTableName, BrokerRequest brokerRequest
* Processes the optimized broker requests for both OFFLINE and REALTIME table.
*/
protected abstract BrokerResponseNative processBrokerRequest(long requestId, BrokerRequest originalBrokerRequest,
- @Nullable BrokerRequest offlineBrokerRequest, @Nullable Map<ServerInstance, List<String>> offlineRoutingTable,
- @Nullable BrokerRequest realtimeBrokerRequest, @Nullable Map<ServerInstance, List<String>> realtimeRoutingTable,
- long timeoutMs, ServerStats serverStats, RequestStatistics requestStatistics)
+ BrokerRequest brokerRequest, @Nullable BrokerRequest offlineBrokerRequest, @Nullable Map<ServerInstance,
Review comment:
Suggest renaming it to `serverBrokerRequest`. Same for the child classes
##########
File path: pinot-common/src/main/java/org/apache/pinot/sql/parsers/CalciteSqlParser.java
##########
@@ -67,9 +67,7 @@
public class CalciteSqlParser {
- private CalciteSqlParser() {
- }
-
+ public static final List<QueryRewriter> QUERY_REWRITERS = new ArrayList<>(QueryRewriterFactory.getQueryRewriters());
Review comment:
(minor) Revert these reordering changes
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/reduce/BrokerReduceService.java
##########
@@ -108,7 +108,12 @@ public BrokerResponseNative reduceOnDataTable(BrokerRequest brokerRequest,
dataTableReducer.reduceAndSetResults(rawTableName, cachedDataSchema, dataTableMap, brokerResponseNative,
new DataTableReducerContext(_reduceExecutorService, _maxReduceThreadsPerQuery, reduceTimeOutMs,
_groupByTrimThreshold), brokerMetrics);
- updateAlias(queryContext, brokerResponseNative);
+ QueryContext originalQueryContext = BrokerRequestToQueryContextConverter.convert(originalBrokerRequest);
Review comment:
Check if `originalBrokerRequest` and `serverBrokerRequest` are the same reference before applying this logic, to avoid the overhead for regular queries
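A minimal sketch of the guard being suggested, using the names from this thread (illustrative only, not the PR's actual code):

```java
// Hypothetical sketch: for regular queries stripGapfill() returns the same
// instance, so reuse the already-built queryContext and skip the conversion.
if (originalBrokerRequest == serverBrokerRequest) {
  updateAlias(queryContext, brokerResponseNative);
} else {
  // Gapfill query: reduce aliases against the original request's context.
  QueryContext originalQueryContext =
      BrokerRequestToQueryContextConverter.convert(originalBrokerRequest);
  updateAlias(originalQueryContext, brokerResponseNative);
}
```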
##########
File path: pinot-controller/src/main/java/org/apache/pinot/controller/api/resources/PinotQueryResource.java
##########
@@ -162,8 +163,7 @@ public String getQueryResponse(String query, String traceEnabled, String queryOp
String inputTableName;
switch (querySyntax) {
case CommonConstants.Broker.Request.SQL:
- inputTableName =
- SQL_QUERY_COMPILER.compileToBrokerRequest(query).getPinotQuery().getDataSource().getTableName();
+ inputTableName = GapfillUtils.getTableName(SQL_QUERY_COMPILER.compileToBrokerRequest(query).getPinotQuery());
Review comment:
Let's move this util into `RequestUtils` as it doesn't only apply to gapfill
##########
File path: pinot-common/src/main/java/org/apache/pinot/sql/parsers/CalciteSqlParser.java
##########
@@ -100,6 +95,9 @@ private CalciteSqlParser() {
private static final Pattern OPTIONS_REGEX_PATTEN =
Pattern.compile("option\\s*\\(([^\\)]+)\\)", Pattern.CASE_INSENSITIVE);
+ private CalciteSqlParser() {
Review comment:
(minor) Revert these reordering changes
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/reduce/ColumnDataToBlockValSetConverter.java
##########
@@ -0,0 +1,181 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.core.query.reduce;
+
+import java.util.List;
+import javax.annotation.Nullable;
+import org.apache.pinot.common.utils.DataSchema;
+import org.apache.pinot.common.utils.DataTable;
+import org.apache.pinot.core.common.BlockValSet;
+import org.apache.pinot.segment.spi.index.reader.Dictionary;
+import org.apache.pinot.spi.data.FieldSpec;
+
+
+/**
+ * As for Gapfilling Function, all raw data will be retrieved from the pinot
+ * server and merged on the pinot broker. The data will be in {@link DataTable}
+ * format.
+ * As part of Gapfilling Function execution plan, the aggregation function will
+ * work on the merged data on pinot broker. The aggregation function only takes
+ * the {@link BlockValSet} format.
+ * This is the Helper class to convert the data from {@link DataTable} to the
+ * block of values {@link BlockValSet}, which is used as input to the aggregation
+ * function.
+ */
+@SuppressWarnings({"rawtypes", "unchecked"})
Review comment:
These 2 warnings can be removed
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/reduce/ColumnDataToBlockValSetConverter.java
##########
@@ -0,0 +1,181 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.core.query.reduce;
+
+import java.util.List;
+import javax.annotation.Nullable;
+import org.apache.pinot.common.utils.DataSchema;
+import org.apache.pinot.common.utils.DataTable;
+import org.apache.pinot.core.common.BlockValSet;
+import org.apache.pinot.segment.spi.index.reader.Dictionary;
+import org.apache.pinot.spi.data.FieldSpec;
+
+
+/**
+ * As for Gapfilling Function, all raw data will be retrieved from the pinot
+ * server and merged on the pinot broker. The data will be in {@link DataTable}
+ * format.
+ * As part of Gapfilling Function execution plan, the aggregation function will
+ * work on the merged data on pinot broker. The aggregation function only takes
+ * the {@link BlockValSet} format.
+ * This is the Helper class to convert the data from {@link DataTable} to the
+ * block of values {@link BlockValSet}, which is used as input to the aggregation
+ * function.
+ */
+@SuppressWarnings({"rawtypes", "unchecked"})
+public class ColumnDataToBlockValSetConverter implements BlockValSet {
Review comment:
Rename it to `RowBasedBlockValSet`
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/reduce/ColumnDataToBlockValSetConverter.java
##########
@@ -0,0 +1,181 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.core.query.reduce;
+
+import java.util.List;
+import javax.annotation.Nullable;
+import org.apache.pinot.common.utils.DataSchema;
+import org.apache.pinot.common.utils.DataTable;
+import org.apache.pinot.core.common.BlockValSet;
+import org.apache.pinot.segment.spi.index.reader.Dictionary;
+import org.apache.pinot.spi.data.FieldSpec;
+
+
+/**
+ * As for Gapfilling Function, all raw data will be retrieved from the pinot
+ * server and merged on the pinot broker. The data will be in {@link DataTable}
+ * format.
+ * As part of Gapfilling Function execution plan, the aggregation function will
+ * work on the merged data on pinot broker. The aggregation function only takes
+ * the {@link BlockValSet} format.
+ * This is the Helper class to convert the data from {@link DataTable} to the
+ * block of values {@link BlockValSet}, which is used as input to the aggregation
+ * function.
+ */
+@SuppressWarnings({"rawtypes", "unchecked"})
+public class ColumnDataToBlockValSetConverter implements BlockValSet {
+
+ private final FieldSpec.DataType _dataType;
+ private final List<Object[]> _rows;
+ private final int _columnIndex;
+
+ public ColumnDataToBlockValSetConverter(DataSchema.ColumnDataType columnDataType, List<Object[]> rows,
+ int columnIndex) {
+ _dataType = columnDataType.toDataType();
+ _rows = rows;
+ _columnIndex = columnIndex;
+ }
+
+ @Override
+ public FieldSpec.DataType getValueType() {
+ return _dataType;
+ }
+
+ @Override
+ public boolean isSingleValue() {
+ return true;
+ }
+
+ @Nullable
+ @Override
+ public Dictionary getDictionary() {
+ throw new UnsupportedOperationException("Not supported");
Review comment:
```suggestion
return null;
```
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/reduce/ColumnDataToBlockValSetConverter.java
##########
@@ -0,0 +1,181 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.core.query.reduce;
+
+import java.util.List;
+import javax.annotation.Nullable;
+import org.apache.pinot.common.utils.DataSchema;
+import org.apache.pinot.common.utils.DataTable;
+import org.apache.pinot.core.common.BlockValSet;
+import org.apache.pinot.segment.spi.index.reader.Dictionary;
+import org.apache.pinot.spi.data.FieldSpec;
+
+
+/**
+ * As for Gapfilling Function, all raw data will be retrieved from the pinot
Review comment:
This class is common for all sub-query handling, so let's remove the gapfill part from the javadoc. We may also add a TODO here to support BYTES and MV in the future.
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/reduce/ColumnDataToBlockValSetConverter.java
##########
@@ -0,0 +1,181 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.core.query.reduce;
+
+import java.util.List;
+import javax.annotation.Nullable;
+import org.apache.pinot.common.utils.DataSchema;
+import org.apache.pinot.common.utils.DataTable;
+import org.apache.pinot.core.common.BlockValSet;
+import org.apache.pinot.segment.spi.index.reader.Dictionary;
+import org.apache.pinot.spi.data.FieldSpec;
+
+
+/**
+ * As for Gapfilling Function, all raw data will be retrieved from the pinot
+ * server and merged on the pinot broker. The data will be in {@link DataTable}
+ * format.
+ * As part of Gapfilling Function execution plan, the aggregation function will
+ * work on the merged data on pinot broker. The aggregation function only takes
+ * the {@link BlockValSet} format.
+ * This is the Helper class to convert the data from {@link DataTable} to the
+ * block of values {@link BlockValSet}, which is used as input to the aggregation
+ * function.
+ */
+@SuppressWarnings({"rawtypes", "unchecked"})
+public class ColumnDataToBlockValSetConverter implements BlockValSet {
+
+ private final FieldSpec.DataType _dataType;
+ private final List<Object[]> _rows;
+ private final int _columnIndex;
+
+ public ColumnDataToBlockValSetConverter(DataSchema.ColumnDataType columnDataType, List<Object[]> rows,
+ int columnIndex) {
+ _dataType = columnDataType.toDataType();
+ _rows = rows;
+ _columnIndex = columnIndex;
+ }
+
+ @Override
+ public FieldSpec.DataType getValueType() {
+ return _dataType;
+ }
+
+ @Override
+ public boolean isSingleValue() {
+ return true;
+ }
+
+ @Nullable
+ @Override
+ public Dictionary getDictionary() {
+ throw new UnsupportedOperationException("Not supported");
+ }
+
+ @Override
+ public int[] getDictionaryIdsSV() {
+ throw new UnsupportedOperationException("Not supported");
Review comment:
(minor) Remove the `"Not supported"` from the exception message, same for other places
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/reduce/filter/ValueExtractorFactory.java
##########
@@ -0,0 +1,29 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.core.query.reduce.filter;
+
+import org.apache.pinot.common.request.context.ExpressionContext;
+
+
+/**
+ * Value extractor for the post-aggregation function or pre-aggregation gap fill.
Review comment:
Update the javadoc
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/reduce/ColumnDataToBlockValSetConverter.java
##########
@@ -0,0 +1,181 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.core.query.reduce;
+
+import java.util.List;
+import javax.annotation.Nullable;
+import org.apache.pinot.common.utils.DataSchema;
+import org.apache.pinot.common.utils.DataTable;
+import org.apache.pinot.core.common.BlockValSet;
+import org.apache.pinot.segment.spi.index.reader.Dictionary;
+import org.apache.pinot.spi.data.FieldSpec;
+
+
+/**
+ * As for Gapfilling Function, all raw data will be retrieved from the pinot
+ * server and merged on the pinot broker. The data will be in {@link DataTable}
+ * format.
+ * As part of Gapfilling Function execution plan, the aggregation function will
+ * work on the merged data on pinot broker. The aggregation function only takes
+ * the {@link BlockValSet} format.
+ * This is the Helper class to convert the data from {@link DataTable} to the
+ * block of values {@link BlockValSet}, which is used as input to the aggregation
+ * function.
+ */
+@SuppressWarnings({"rawtypes", "unchecked"})
+public class ColumnDataToBlockValSetConverter implements BlockValSet {
+
+ private final FieldSpec.DataType _dataType;
+ private final List<Object[]> _rows;
+ private final int _columnIndex;
+
+ public ColumnDataToBlockValSetConverter(DataSchema.ColumnDataType columnDataType, List<Object[]> rows,
+ int columnIndex) {
+ _dataType = columnDataType.toDataType();
+ _rows = rows;
+ _columnIndex = columnIndex;
+ }
+
+ @Override
+ public FieldSpec.DataType getValueType() {
+ return _dataType;
+ }
+
+ @Override
+ public boolean isSingleValue() {
+ return true;
+ }
+
+ @Nullable
+ @Override
+ public Dictionary getDictionary() {
+ throw new UnsupportedOperationException("Not supported");
+ }
+
+ @Override
+ public int[] getDictionaryIdsSV() {
+ throw new UnsupportedOperationException("Not supported");
+ }
+
+ @Override
+ public int[] getIntValuesSV() {
+ if (_dataType == FieldSpec.DataType.INT) {
+ int[] result = new int[_rows.size()];
+ for (int i = 0; i < result.length; i++) {
+ result[i] = (Integer) _rows.get(i)[_columnIndex];
+ }
+ return result;
+ }
+ throw new UnsupportedOperationException("Not supported");
Review comment:
We should support reading ints from any numeric type and from strings; same for other places
```suggestion
int length = _rows.size();
int[] values = new int[length];
if (_dataType.isNumeric()) {
for (int i = 0; i < length; i++) {
values[i] = ((Number) _rows.get(i)[_columnIndex]).intValue();
}
} else if (_dataType == FieldSpec.DataType.STRING) {
for (int i = 0; i < length; i++) {
values[i] = Integer.parseInt((String) _rows.get(i)[_columnIndex]);
}
} else {
throw new IllegalStateException("Cannot read int values from data type: " + _dataType);
}
return values;
```
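A self-contained sketch of the conversion the suggestion describes (the class and method names here are hypothetical, and the real code dispatches on the column's declared data type rather than per value):

```java
import java.util.List;

public class RowIntReadDemo {
    // Any Number is narrowed via intValue() (doubles truncate toward zero),
    // and String values are parsed with Integer.parseInt().
    static int[] readInts(List<Object[]> rows, int columnIndex) {
        int length = rows.size();
        int[] values = new int[length];
        for (int i = 0; i < length; i++) {
            Object v = rows.get(i)[columnIndex];
            if (v instanceof Number) {
                values[i] = ((Number) v).intValue();
            } else if (v instanceof String) {
                values[i] = Integer.parseInt((String) v);
            } else {
                throw new IllegalStateException("Cannot read int value from: " + v.getClass());
            }
        }
        return values;
    }

    public static void main(String[] args) {
        List<Object[]> rows = List.of(
            new Object[]{3.9d},   // double truncates to 3
            new Object[]{7L},     // long narrows to 7
            new Object[]{"42"});  // string parses to 42
        int[] values = readInts(rows, 0);
        System.out.println(values[0] + " " + values[1] + " " + values[2]); // prints "3 7 42"
    }
}
```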
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/reduce/GapFillProcessor.java
##########
@@ -0,0 +1,455 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.core.query.reduce;
+
+import com.google.common.base.Preconditions;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import org.apache.pinot.common.request.context.ExpressionContext;
+import org.apache.pinot.common.request.context.FunctionContext;
+import org.apache.pinot.common.response.broker.BrokerResponseNative;
+import org.apache.pinot.common.response.broker.ResultTable;
+import org.apache.pinot.common.utils.DataSchema;
+import org.apache.pinot.common.utils.DataSchema.ColumnDataType;
+import org.apache.pinot.core.common.BlockValSet;
+import org.apache.pinot.core.data.table.Key;
+import org.apache.pinot.core.query.aggregation.function.AggregationFunction;
+import org.apache.pinot.core.query.aggregation.function.AggregationFunctionFactory;
+import org.apache.pinot.core.query.aggregation.groupby.GroupByResultHolder;
+import org.apache.pinot.core.query.request.context.QueryContext;
+import org.apache.pinot.core.util.GapfillUtils;
+import org.apache.pinot.spi.data.DateTimeFormatSpec;
+import org.apache.pinot.spi.data.DateTimeGranularitySpec;
+
+
+/**
+ * Helper class to reduce and set gap fill results into the BrokerResponseNative
+ */
+@SuppressWarnings({"rawtypes", "unchecked"})
+public class GapFillProcessor {
Review comment:
(minor) Rename to `GapfillProcessor` to be consistent
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/reduce/GapFillProcessor.java
##########
@@ -0,0 +1,455 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.core.query.reduce;
+
+import com.google.common.base.Preconditions;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import org.apache.pinot.common.request.context.ExpressionContext;
+import org.apache.pinot.common.request.context.FunctionContext;
+import org.apache.pinot.common.response.broker.BrokerResponseNative;
+import org.apache.pinot.common.response.broker.ResultTable;
+import org.apache.pinot.common.utils.DataSchema;
+import org.apache.pinot.common.utils.DataSchema.ColumnDataType;
+import org.apache.pinot.core.common.BlockValSet;
+import org.apache.pinot.core.data.table.Key;
+import org.apache.pinot.core.query.aggregation.function.AggregationFunction;
+import org.apache.pinot.core.query.aggregation.function.AggregationFunctionFactory;
+import org.apache.pinot.core.query.aggregation.groupby.GroupByResultHolder;
+import org.apache.pinot.core.query.request.context.QueryContext;
+import org.apache.pinot.core.util.GapfillUtils;
+import org.apache.pinot.spi.data.DateTimeFormatSpec;
+import org.apache.pinot.spi.data.DateTimeGranularitySpec;
+
+
+/**
+ * Helper class to reduce and set gap fill results into the BrokerResponseNative
+ */
+@SuppressWarnings({"rawtypes", "unchecked"})
+public class GapFillProcessor {
+ private final QueryContext _queryContext;
+
+ private final int _limitForAggregatedResult;
+ private final DateTimeGranularitySpec _dateTimeGranularity;
+ private final DateTimeFormatSpec _dateTimeFormatter;
+ private final long _startMs;
+ private final long _endMs;
+ private final long _timeBucketSize;
+ private final int _numOfTimeBuckets;
+ private final List<Integer> _groupByKeyIndexes;
+ private final Set<Key> _groupByKeys;
+ private final Map<Key, Object[]> _previousByGroupKey;
+ private final Map<String, ExpressionContext> _fillExpressions;
+ private final List<ExpressionContext> _timeSeries;
+ private final GapfillUtils.GapfillType _gapfillType;
+ private int _limitForGapfilledResult;
+ private boolean[] _isGroupBySelections;
+ private final int _timeBucketColumnIndex;
+ private int[] _sourceColumnIndexForResultSchema = null;
+
+ GapFillProcessor(QueryContext queryContext) {
+ _queryContext = queryContext;
+ _gapfillType = queryContext.getGapfillType();
+ _limitForAggregatedResult = queryContext.getLimit();
+ if (_gapfillType == GapfillUtils.GapfillType.AGGREGATE_GAP_FILL
+ || _gapfillType == GapfillUtils.GapfillType.GAP_FILL) {
+ _limitForGapfilledResult = queryContext.getLimit();
+ } else {
+ _limitForGapfilledResult = queryContext.getSubQueryContext().getLimit();
+ }
+
+ ExpressionContext gapFillSelection = GapfillUtils.getGapfillExpressionContext(queryContext);
+ _timeBucketColumnIndex = GapfillUtils.findTimeBucketColumnIndex(queryContext);
+
+ List<ExpressionContext> args = gapFillSelection.getFunction().getArguments();
+
+ _dateTimeFormatter = new DateTimeFormatSpec(args.get(1).getLiteral());
+ _dateTimeGranularity = new DateTimeGranularitySpec(args.get(4).getLiteral());
+ String start = args.get(2).getLiteral();
+ _startMs = truncate(_dateTimeFormatter.fromFormatToMillis(start));
+ String end = args.get(3).getLiteral();
+ _endMs = truncate(_dateTimeFormatter.fromFormatToMillis(end));
+ _timeBucketSize = _dateTimeGranularity.granularityToMillis();
+ _numOfTimeBuckets = (int) ((_endMs - _startMs) / _timeBucketSize);
+
+ _fillExpressions = GapfillUtils.getFillExpressions(gapFillSelection);
+
+ _previousByGroupKey = new HashMap<>();
+ _groupByKeyIndexes = new ArrayList<>();
+ _groupByKeys = new HashSet<>();
+
+ ExpressionContext timeseriesOn = GapfillUtils.getTimeSeriesOnExpressionContext(gapFillSelection);
+ _timeSeries = timeseriesOn.getFunction().getArguments();
+ }
+
+ private int findBucketIndex(long time) {
+ return (int) ((time - _startMs) / _timeBucketSize);
+ }
+
+ private void replaceColumnNameWithAlias(DataSchema dataSchema) {
+ QueryContext queryContext;
+ if (_gapfillType == GapfillUtils.GapfillType.AGGREGATE_GAP_FILL_AGGREGATE) {
+ queryContext = _queryContext.getSubQueryContext().getSubQueryContext();
+ } else if (_gapfillType == GapfillUtils.GapfillType.GAP_FILL) {
+ queryContext = _queryContext;
+ } else {
+ queryContext = _queryContext.getSubQueryContext();
+ }
+ List<String> aliasList = queryContext.getAliasList();
+ Map<String, String> columnNameToAliasMap = new HashMap<>();
+ for (int i = 0; i < aliasList.size(); i++) {
+ if (aliasList.get(i) != null) {
+ ExpressionContext selection = queryContext.getSelectExpressions().get(i);
+ if (GapfillUtils.isGapfill(selection)) {
+ selection = selection.getFunction().getArguments().get(0);
+ }
+ columnNameToAliasMap.put(selection.toString(), aliasList.get(i));
+ }
+ }
+ for (int i = 0; i < dataSchema.getColumnNames().length; i++) {
+ if (columnNameToAliasMap.containsKey(dataSchema.getColumnNames()[i])) {
+ dataSchema.getColumnNames()[i] = columnNameToAliasMap.get(dataSchema.getColumnNames()[i]);
+ }
+ }
+ }
+
+ /**
+ * Here are three things that happen
+ * 1. Sort the result sets from all pinot servers based on timestamp
+ * 2. Gapfill the data for missing entities per time bucket
+ * 3. Aggregate the dataset per time bucket.
+ */
+ public void process(BrokerResponseNative brokerResponseNative) {
+ DataSchema dataSchema = brokerResponseNative.getResultTable().getDataSchema();
+ DataSchema resultTableSchema = getResultTableDataSchema(dataSchema);
+ if (brokerResponseNative.getResultTable().getRows().isEmpty()) {
+ brokerResponseNative.setResultTable(new ResultTable(resultTableSchema, Collections.emptyList()));
+ return;
+ }
+
+ String[] columns = dataSchema.getColumnNames();
+
+ Map<String, Integer> indexes = new HashMap<>();
+ for (int i = 0; i < columns.length; i++) {
+ indexes.put(columns[i], i);
+ }
+
+ _isGroupBySelections = new boolean[dataSchema.getColumnDataTypes().length];
+
+ // The first argument of timeSeries is the time column; the remaining ones define the entity.
+ for (ExpressionContext entityColumn : _timeSeries) {
+ int index = indexes.get(entityColumn.getIdentifier());
+ _isGroupBySelections[index] = true;
+ }
+
+ for (int i = 0; i < _isGroupBySelections.length; i++) {
+ if (_isGroupBySelections[i]) {
+ _groupByKeyIndexes.add(i);
+ }
+ }
+
+ List<Object[]>[] timeBucketedRawRows = putRawRowsIntoTimeBucket(brokerResponseNative.getResultTable().getRows());
+
+ List<Object[]> resultRows;
+ replaceColumnNameWithAlias(dataSchema);
+
+ if (_queryContext.getAggregationFunctions() == null) {
+
+ Map<String, Integer> sourceColumnsIndexes = new HashMap<>();
+ for (int i = 0; i < dataSchema.getColumnNames().length; i++) {
+ sourceColumnsIndexes.put(dataSchema.getColumnName(i), i);
+ }
+ _sourceColumnIndexForResultSchema = new int[resultTableSchema.getColumnNames().length];
+ for (int i = 0; i < _sourceColumnIndexForResultSchema.length; i++) {
+ _sourceColumnIndexForResultSchema[i] = sourceColumnsIndexes.get(resultTableSchema.getColumnName(i));
+ }
+ }
+
+ if (_gapfillType == GapfillUtils.GapfillType.GAP_FILL_AGGREGATE || _gapfillType == GapfillUtils.GapfillType.GAP_FILL
+ || _gapfillType == GapfillUtils.GapfillType.GAP_FILL_SELECT) {
+ List<Object[]> gapfilledRows = gapFillAndAggregate(timeBucketedRawRows, resultTableSchema, dataSchema);
+ if (_gapfillType == GapfillUtils.GapfillType.GAP_FILL_SELECT) {
+ resultRows = new ArrayList<>(gapfilledRows.size());
+ resultRows.addAll(gapfilledRows);
+ } else {
+ resultRows = gapfilledRows;
+ }
+ } else {
+ resultRows = gapFillAndAggregate(timeBucketedRawRows, resultTableSchema, dataSchema);
+ }
Review comment:
I don't follow this part. Is this the same as `resultRows = gapFillAndAggregate(timeBucketedRawRows, resultTableSchema, dataSchema);`?
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/reduce/ColumnDataToBlockValSetConverter.java
##########
@@ -0,0 +1,181 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.core.query.reduce;
+
+import java.util.List;
+import javax.annotation.Nullable;
+import org.apache.pinot.common.utils.DataSchema;
+import org.apache.pinot.common.utils.DataTable;
+import org.apache.pinot.core.common.BlockValSet;
+import org.apache.pinot.segment.spi.index.reader.Dictionary;
+import org.apache.pinot.spi.data.FieldSpec;
+
+
+/**
+ * As for Gapfilling Function, all raw data will be retrieved from the pinot
+ * server and merged on the pinot broker. The data will be in {@link DataTable}
+ * format.
+ * As part of Gapfilling Function execution plan, the aggregation function will
+ * work on the merged data on pinot broker. The aggregation function only takes
+ * the {@link BlockValSet} format.
+ * This is the Helper class to convert the data from {@link DataTable} to the
+ * block of values {@link BlockValSet} which is used as input to the aggregation
+ * function.
+ */
+@SuppressWarnings({"rawtypes", "unchecked"})
+public class ColumnDataToBlockValSetConverter implements BlockValSet {
+
+ private final FieldSpec.DataType _dataType;
+ private final List<Object[]> _rows;
+ private final int _columnIndex;
+
+ public ColumnDataToBlockValSetConverter(DataSchema.ColumnDataType columnDataType, List<Object[]> rows,
+ int columnIndex) {
+ _dataType = columnDataType.toDataType();
+ _rows = rows;
+ _columnIndex = columnIndex;
+ }
+
+ @Override
+ public FieldSpec.DataType getValueType() {
+ return _dataType;
+ }
+
+ @Override
+ public boolean isSingleValue() {
+ return true;
+ }
+
+ @Nullable
+ @Override
+ public Dictionary getDictionary() {
+ throw new UnsupportedOperationException("Not supported");
+ }
+
+ @Override
+ public int[] getDictionaryIdsSV() {
+ throw new UnsupportedOperationException("Not supported");
+ }
+
+ @Override
+ public int[] getIntValuesSV() {
+ if (_dataType == FieldSpec.DataType.INT) {
+ int[] result = new int[_rows.size()];
+ for (int i = 0; i < result.length; i++) {
+ result[i] = (Integer) _rows.get(i)[_columnIndex];
+ }
+ return result;
+ }
+ throw new UnsupportedOperationException("Not supported");
+ }
+
+ @Override
+ public long[] getLongValuesSV() {
+ if (_dataType == FieldSpec.DataType.LONG) {
+ long[] result = new long[_rows.size()];
+ for (int i = 0; i < result.length; i++) {
+ result[i] = (Long) _rows.get(i)[_columnIndex];
+ }
+ return result;
+ }
+ throw new UnsupportedOperationException("Not supported");
+ }
+
+ @Override
+ public float[] getFloatValuesSV() {
+ if (_dataType == FieldSpec.DataType.FLOAT) {
+ float[] result = new float[_rows.size()];
+ for (int i = 0; i < result.length; i++) {
+ result[i] = (Float) _rows.get(i)[_columnIndex];
+ }
+ return result;
+ }
+ throw new UnsupportedOperationException("Not supported");
+ }
+
+ @Override
+ public double[] getDoubleValuesSV() {
+ if (_dataType == FieldSpec.DataType.DOUBLE) {
+ double[] result = new double[_rows.size()];
+ for (int i = 0; i < result.length; i++) {
+ result[i] = (Double) _rows.get(i)[_columnIndex];
+ }
+ return result;
+ } else if (_dataType == FieldSpec.DataType.INT) {
+ double[] result = new double[_rows.size()];
+ for (int i = 0; i < result.length; i++) {
+ result[i] = ((Integer) _rows.get(i)[_columnIndex]).doubleValue();
+ }
+ return result;
+ }
+ throw new UnsupportedOperationException("Not supported");
+ }
+
+ @Override
+ public String[] getStringValuesSV() {
+ if (_dataType == FieldSpec.DataType.STRING) {
+ String[] result = new String[_rows.size()];
+ for (int i = 0; i < result.length; i++) {
+ result[i] = (String) _rows.get(i)[_columnIndex];
+ }
+ return result;
+ }
+ throw new UnsupportedOperationException("Not supported");
Review comment:
We should support reading strings from all data types
```suggestion
int length = _rows.size();
String[] values = new String[length];
for (int i = 0; i < length; i++) {
values[i] = _rows.get(i)[_columnIndex].toString();
}
return values;
```
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/reduce/GapfillFilterHandler.java
##########
@@ -0,0 +1,76 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.core.query.reduce;
+
+import java.util.HashMap;
+import java.util.Map;
+import org.apache.pinot.common.request.context.ExpressionContext;
+import org.apache.pinot.common.request.context.FilterContext;
+import org.apache.pinot.common.utils.DataSchema;
+import org.apache.pinot.core.query.reduce.filter.ColumnValueExtractor;
+import org.apache.pinot.core.query.reduce.filter.LiteralValueExtractor;
+import org.apache.pinot.core.query.reduce.filter.RowMatcher;
+import org.apache.pinot.core.query.reduce.filter.RowMatcherFactory;
+import org.apache.pinot.core.query.reduce.filter.ValueExtractor;
+import org.apache.pinot.core.query.reduce.filter.ValueExtractorFactory;
+import org.apache.pinot.core.util.GapfillUtils;
+
+
+/**
+ * Handler for Filter clause of GapFill.
+ */
+public class GapfillFilterHandler implements ValueExtractorFactory {
+ private final RowMatcher _rowMatcher;
+ private final DataSchema _dataSchema;
+ private final Map<String, Integer> _indexes;
+
+ public GapfillFilterHandler(FilterContext filter, DataSchema dataSchema) {
+ _dataSchema = dataSchema;
+ _indexes = new HashMap<>();
+ for (int i = 0; i < _dataSchema.size(); i++) {
+ _indexes.put(_dataSchema.getColumnName(i), i);
+ }
+ _rowMatcher = RowMatcherFactory.getRowMatcher(filter, this);
+ }
+
+ /**
+ * Returns {@code true} if the given row matches the HAVING clause, {@code false} otherwise.
+ */
+ public boolean isMatch(Object[] row) {
+ return _rowMatcher.isMatch(row);
+ }
+
+ /**
+ * Returns a ValueExtractor based on the given expression.
+ */
+ @Override
+ public ValueExtractor getValueExtractor(ExpressionContext expression) {
+ expression = GapfillUtils.stripGapfill(expression);
+ if (expression.getType() == ExpressionContext.Type.LITERAL) {
+ // Literal
+ return new LiteralValueExtractor(expression.getLiteral());
+ }
+
+ if (expression.getType() == ExpressionContext.Type.IDENTIFIER) {
+ return new ColumnValueExtractor(_indexes.get(expression.getIdentifier()), _dataSchema);
+ } else {
+ return new ColumnValueExtractor(_indexes.get(expression.getFunction().toString()), _dataSchema);
Review comment:
This does not handle transform properly (e.g. `colA - colB` where the gapfill selects `colA` and `colB`). This is handled within the `PostAggregationValueExtractor`, and we may also extract that out to be shared. (Or add a TODO to fix later)
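For illustration, the recursive approach the comment describes can be sketched as below. This is a hypothetical simplification (the types are stand-ins, not Pinot's actual `ValueExtractor` API): instead of looking up the whole function string in the column-index map, the extractor for a transform is composed from extractors for its arguments.

```java
// Hypothetical sketch: build a value extractor for `colA - colB` by
// recursing into the transform's arguments, instead of requiring the
// full expression string to be present in the column-index map.
import java.util.Map;

final class TransformExtractorSketch {
  interface ValueExtractor {
    double extract(Object[] row);
  }

  // Leaf case: a plain column reference, resolved through the index map.
  static ValueExtractor forColumn(Map<String, Integer> indexes, String column) {
    int idx = indexes.get(column);
    return row -> ((Number) row[idx]).doubleValue();
  }

  // Transform case: `minus(left, right)` composed from child extractors.
  static ValueExtractor minus(ValueExtractor left, ValueExtractor right) {
    return row -> left.extract(row) - right.extract(row);
  }
}
```

With `indexes = {colA: 0, colB: 1}`, `minus(forColumn(indexes, "colA"), forColumn(indexes, "colB"))` evaluates `colA - colB` per row even though `colA - colB` itself is not a selected column.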
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/reduce/GapfillFilterHandler.java
##########
@@ -0,0 +1,76 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.core.query.reduce;
+
+import java.util.HashMap;
+import java.util.Map;
+import org.apache.pinot.common.request.context.ExpressionContext;
+import org.apache.pinot.common.request.context.FilterContext;
+import org.apache.pinot.common.utils.DataSchema;
+import org.apache.pinot.core.query.reduce.filter.ColumnValueExtractor;
+import org.apache.pinot.core.query.reduce.filter.LiteralValueExtractor;
+import org.apache.pinot.core.query.reduce.filter.RowMatcher;
+import org.apache.pinot.core.query.reduce.filter.RowMatcherFactory;
+import org.apache.pinot.core.query.reduce.filter.ValueExtractor;
+import org.apache.pinot.core.query.reduce.filter.ValueExtractorFactory;
+import org.apache.pinot.core.util.GapfillUtils;
+
+
+/**
+ * Handler for Filter clause of GapFill.
+ */
+public class GapfillFilterHandler implements ValueExtractorFactory {
+ private final RowMatcher _rowMatcher;
+ private final DataSchema _dataSchema;
+ private final Map<String, Integer> _indexes;
+
+ public GapfillFilterHandler(FilterContext filter, DataSchema dataSchema) {
+ _dataSchema = dataSchema;
+ _indexes = new HashMap<>();
+ for (int i = 0; i < _dataSchema.size(); i++) {
+ _indexes.put(_dataSchema.getColumnName(i), i);
Review comment:
This won't work for certain aggregations because the column name in the schema is not `expression.toString()`. You may refer to `PostAggregationHandler` for how to handle the index for aggregation queries. (Or add a TODO to fix later)
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/reduce/filter/RowMatcherFactory.java
##########
@@ -0,0 +1,44 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.core.query.reduce.filter;
+
+import org.apache.pinot.common.request.context.FilterContext;
+
+
+/**
+ * Factory for RowMatcher.
+ */
+public interface RowMatcherFactory {
Review comment:
This should be a concrete util class instead of an interface
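For reference, the convention being suggested is a final class with a private constructor that exposes a static `getRowMatcher` method, rather than a factory interface. A minimal self-contained sketch (the names and the trivial matching logic below are hypothetical stand-ins for the real `FilterContext` dispatch):

```java
// Hypothetical sketch of the "concrete util class" pattern: a final class
// with a private constructor and a static factory method.
final class RowMatcherFactorySketch {
  private RowMatcherFactorySketch() {
    // util class, not instantiable
  }

  // Stand-in for org.apache.pinot.core.query.reduce.filter.RowMatcher.
  interface RowMatcher {
    boolean isMatch(Object[] row);
  }

  // The real factory would dispatch on the FilterContext type
  // (AND / OR / NOT / predicate); here we only match on the first column.
  static RowMatcher getRowMatcher(Object expectedFirstColumn) {
    return row -> expectedFirstColumn.equals(row[0]);
  }
}
```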
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/request/context/QueryContext.java
##########
@@ -133,6 +137,8 @@ private QueryContext(String tableName, List<ExpressionContext> selectExpressions
_queryOptions = queryOptions;
_debugOptions = debugOptions;
_brokerRequest = brokerRequest;
+ _gapfillType = null;
Review comment:
(minor) this line is redundant
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/request/context/QueryContext.java
##########
@@ -188,6 +194,10 @@ public FilterContext getHavingFilter() {
return _orderByExpressions;
}
+ public QueryContext getSubQueryContext() {
Review comment:
```suggestion
public QueryContext getSubquery() {
```
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/request/context/QueryContext.java
##########
@@ -85,6 +86,7 @@
// Keep the BrokerRequest to make incremental changes
// TODO: Remove it once the whole query engine is using the QueryContext
private final BrokerRequest _brokerRequest;
+ private final QueryContext _subQueryContext;
Review comment:
Rename it to `_subquery` to be consistent with other variables
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/reduce/filter/ValueExtractorFactory.java
##########
@@ -0,0 +1,29 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.core.query.reduce.filter;
+
+import org.apache.pinot.common.request.context.ExpressionContext;
+
+
+/**
+ * Value extractor for the post-aggregation function or pre-aggregation gap fill.
+ */
+public interface ValueExtractorFactory {
+ ValueExtractor getValueExtractor(ExpressionContext expression);
Review comment:
Add some javadoc
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/request/context/QueryContext.java
##########
@@ -375,6 +393,57 @@ public String toString() {
private Map<String, String> _queryOptions;
private Map<String, String> _debugOptions;
private BrokerRequest _brokerRequest;
+ private QueryContext _subQueryContext;
Review comment:
```suggestion
private QueryContext _subquery;
```
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/request/context/QueryContext.java
##########
@@ -436,6 +505,11 @@ public Builder setBrokerRequest(BrokerRequest brokerRequest) {
return this;
}
+ public Builder setSubqueryContext(QueryContext subQueryContext) {
Review comment:
```suggestion
public Builder setSubquery(QueryContext subquery) {
```
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/request/context/utils/BrokerRequestToQueryContextConverter.java
##########
@@ -42,23 +42,42 @@
import org.apache.pinot.common.utils.request.FilterQueryTree;
import org.apache.pinot.common.utils.request.RequestUtils;
import org.apache.pinot.core.query.request.context.QueryContext;
+import org.apache.pinot.core.util.GapfillUtils;
import org.apache.pinot.segment.spi.AggregationFunctionType;
public class BrokerRequestToQueryContextConverter {
private BrokerRequestToQueryContextConverter() {
}
+ /**
+ * Validate the gapfill query.
+ */
+ public static void validateGapfillQuery(BrokerRequest brokerRequest) {
Review comment:
Why do we need this? It can add quite a lot of overhead to regular queries
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/util/GapfillUtils.java
##########
@@ -71,12 +86,15 @@ public static boolean isFill(ExpressionContext expressionContext) {
return false;
}
- return FILL.equals(canonicalizeFunctionName(expressionContext.getFunction().getFunctionName()));
+ return FILL.equalsIgnoreCase(canonicalizeFunctionName(expressionContext.getFunction().getFunctionName()));
Review comment:
After #8341, all function names in `FunctionContext` are already canonical, so there is no need to canonicalize again. Same for the other places
```suggestion
return expressionContext.getFunction().getFunctionName().equals(FILL);
```
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/request/context/utils/BrokerRequestToQueryContextConverter.java
##########
@@ -42,23 +42,42 @@
import org.apache.pinot.common.utils.request.FilterQueryTree;
import org.apache.pinot.common.utils.request.RequestUtils;
import org.apache.pinot.core.query.request.context.QueryContext;
+import org.apache.pinot.core.util.GapfillUtils;
import org.apache.pinot.segment.spi.AggregationFunctionType;
public class BrokerRequestToQueryContextConverter {
private BrokerRequestToQueryContextConverter() {
}
+ /**
+ * Validate the gapfill query.
+ */
+ public static void validateGapfillQuery(BrokerRequest brokerRequest) {
+ if (brokerRequest.getPinotQuery() != null) {
+ QueryContext queryContext = convertSQL(brokerRequest.getPinotQuery(), brokerRequest);
+ GapfillUtils.setGapfillType(queryContext);
+ }
+ }
+
/**
* Converts the given {@link BrokerRequest} into a {@link QueryContext}.
*/
public static QueryContext convert(BrokerRequest brokerRequest) {
- return brokerRequest.getPinotQuery() != null ? convertSQL(brokerRequest) : convertPQL(brokerRequest);
+ if (brokerRequest.getPinotQuery() != null) {
+ QueryContext queryContext = convertSQL(brokerRequest.getPinotQuery(), brokerRequest);
+ GapfillUtils.setGapfillType(queryContext);
+ return queryContext;
+ } else {
+ return convertPQL(brokerRequest);
+ }
}
- private static QueryContext convertSQL(BrokerRequest brokerRequest) {
- PinotQuery pinotQuery = brokerRequest.getPinotQuery();
-
+ private static QueryContext convertSQL(PinotQuery pinotQuery, BrokerRequest brokerRequest) {
+ QueryContext subQueryContext = null;
Review comment:
(minor)
```suggestion
QueryContext subquery = null;
```
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/request/context/QueryContext.java
##########
@@ -375,6 +393,57 @@ public String toString() {
private Map<String, String> _queryOptions;
private Map<String, String> _debugOptions;
private BrokerRequest _brokerRequest;
+ private QueryContext _subQueryContext;
+
+ /**
+ * Helper method to extract AGGREGATION FunctionContexts and FILTER FilterContexts from the given expression.
+ */
+ private static void getAggregations(ExpressionContext expression,
+ List<Pair<FunctionContext, FilterContext>> filteredAggregations) {
+ FunctionContext function = expression.getFunction();
+ if (function == null) {
+ return;
+ }
+ if (function.getType() == FunctionContext.Type.AGGREGATION) {
+ // Aggregation
+ filteredAggregations.add(Pair.of(function, null));
+ } else {
+ List<ExpressionContext> arguments = function.getArguments();
+ if (function.getFunctionName().equalsIgnoreCase("filter")) {
+ // Filtered aggregation
+ Preconditions.checkState(arguments.size() == 2, "FILTER must contain 2 arguments");
+ FunctionContext aggregation = arguments.get(0).getFunction();
+ Preconditions.checkState(aggregation != null && aggregation.getType() == FunctionContext.Type.AGGREGATION,
+ "First argument of FILTER must be an aggregation function");
+ ExpressionContext filterExpression = arguments.get(1);
+ Preconditions.checkState(filterExpression.getFunction() != null
+ && filterExpression.getFunction().getType() == FunctionContext.Type.TRANSFORM,
+ "Second argument of FILTER must be a filter expression");
+ FilterContext filter = RequestContextUtils.getFilter(filterExpression);
+ filteredAggregations.add(Pair.of(aggregation, filter));
+ } else {
+ // Transform
+ for (ExpressionContext argument : arguments) {
+ getAggregations(argument, filteredAggregations);
+ }
+ }
+ }
+ }
+
+ /**
+ * Helper method to extract AGGREGATION FunctionContexts and FILTER FilterContexts from the given filter.
+ */
+ private static void getAggregations(FilterContext filter,
+ List<Pair<FunctionContext, FilterContext>> filteredAggregations) {
+ List<FilterContext> children = filter.getChildren();
+ if (children != null) {
+ for (FilterContext child : children) {
+ getAggregations(child, filteredAggregations);
+ }
+ } else {
+ getAggregations(filter.getPredicate().getLhs(), filteredAggregations);
+ }
+ }
Review comment:
Revert the reordering change
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/selection/SelectionOperatorUtils.java
##########
@@ -88,6 +83,8 @@ private SelectionOperatorUtils() {
ThreadLocal.withInitial(() -> new DecimalFormat(FLOAT_PATTERN, DECIMAL_FORMAT_SYMBOLS));
private static final ThreadLocal<DecimalFormat> THREAD_LOCAL_DOUBLE_FORMAT =
ThreadLocal.withInitial(() -> new DecimalFormat(DOUBLE_PATTERN, DECIMAL_FORMAT_SYMBOLS));
+ private SelectionOperatorUtils() {
Review comment:
Revert the unrelated changes
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/transport/QueryRouter.java
##########
@@ -114,6 +115,7 @@ public AsyncQueryResponse submitQuery(long requestId, String rawTableName,
Map<ServerRoutingInstance, InstanceRequest> requestMap = new HashMap<>();
if (offlineBrokerRequest != null) {
assert offlineRoutingTable != null;
+ BrokerRequestToQueryContextConverter.validateGapfillQuery(offlineBrokerRequest);
Review comment:
Let's not validate gapfill here as it will add overhead to all queries (converting broker request to query context)
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/util/GapfillUtils.java
##########
@@ -119,4 +137,265 @@ static public Serializable getDefaultValue(DataSchema.ColumnDataType dataType) {
private static String canonicalizeFunctionName(String functionName) {
return StringUtils.remove(functionName, '_').toLowerCase();
}
+
+ public static boolean isGapfill(ExpressionContext expressionContext) {
+ if (expressionContext.getType() != ExpressionContext.Type.FUNCTION) {
+ return false;
+ }
+
+ return GAP_FILL.equals(canonicalizeFunctionName(expressionContext.getFunction().getFunctionName()));
+ }
+
+ private static boolean isGapfill(QueryContext queryContext) {
+ for (ExpressionContext expressionContext : queryContext.getSelectExpressions()) {
+ if (isGapfill(expressionContext)) {
+ return true;
+ }
+ }
+ return false;
+ }
+
+ /**
+ * Get the gapfill type for queryContext. Also do the validation for gapfill request.
+ * @param queryContext
+ */
+ public static void setGapfillType(QueryContext queryContext) {
+ GapfillType gapfillType = null;
+ if (queryContext.getSubQueryContext() == null) {
+ if (isGapfill(queryContext)) {
+ Preconditions.checkArgument(queryContext.getAggregationFunctions() == null,
+ "Aggregation and Gapfill can not be in the same sql statement.");
+ gapfillType = GapfillType.GAP_FILL;
+ }
+ } else if (isGapfill(queryContext)) {
+ Preconditions.checkArgument(queryContext.getSubQueryContext().getAggregationFunctions() != null,
+ "Select and Gapfill should be in the same sql statement.");
+ Preconditions.checkArgument(queryContext.getSubQueryContext().getSubQueryContext() == null,
+ "There is no three levels nesting sql when the outer query is gapfill.");
+ gapfillType = GapfillType.AGGREGATE_GAP_FILL;
+ } else if (isGapfill(queryContext.getSubQueryContext())) {
+ if (queryContext.getAggregationFunctions() == null) {
+ gapfillType = GapfillType.GAP_FILL_SELECT;
+ } else if (queryContext.getSubQueryContext().getSubQueryContext() == null) {
+ gapfillType = GapfillType.GAP_FILL_AGGREGATE;
+ } else {
+ Preconditions
+ .checkArgument(queryContext.getSubQueryContext().getSubQueryContext().getAggregationFunctions() != null,
+ "Select cannot happen before gapfill.");
+ gapfillType = GapfillType.AGGREGATE_GAP_FILL_AGGREGATE;
+ }
+ }
+
+ queryContext.setGapfillType(gapfillType);
+ if (gapfillType == null) {
+ return;
+ }
+
+ ExpressionContext gapFillSelection = GapfillUtils.getGapfillExpressionContext(queryContext);
+
+ Preconditions.checkArgument(gapFillSelection != null && gapFillSelection.getFunction() != null,
+ "Gapfill Expression should be function.");
+ List<ExpressionContext> args = gapFillSelection.getFunction().getArguments();
+ Preconditions.checkArgument(args.size() > 5, "PreAggregateGapFill does not have correct number of arguments.");
+ Preconditions.checkArgument(args.get(1).getLiteral() != null,
+ "The second argument of PostAggregateGapFill should be TimeFormatter.");
+ Preconditions.checkArgument(args.get(2).getLiteral() != null,
+ "The third argument of PostAggregateGapFill should be start time.");
+ Preconditions.checkArgument(args.get(3).getLiteral() != null,
+ "The fourth argument of PostAggregateGapFill should be end time.");
+ Preconditions.checkArgument(args.get(4).getLiteral() != null,
+ "The fifth argument of PostAggregateGapFill should be time bucket size.");
+
+ ExpressionContext timeseriesOn = GapfillUtils.getTimeSeriesOnExpressionContext(gapFillSelection);
+ Preconditions.checkArgument(timeseriesOn != null, "The TimeSeriesOn expressions should be specified.");
+
+ if (queryContext.getAggregationFunctions() == null) {
+ return;
+ }
+
+ List<ExpressionContext> groupbyExpressions = queryContext.getGroupByExpressions();
+ Preconditions.checkArgument(groupbyExpressions != null, "No GroupBy Clause.");
+ List<ExpressionContext> innerSelections = queryContext.getSubQueryContext().getSelectExpressions();
+ String timeBucketCol = null;
+ List<String> strAlias = queryContext.getSubQueryContext().getAliasList();
+ for (int i = 0; i < innerSelections.size(); i++) {
+ ExpressionContext innerSelection = innerSelections.get(i);
+ if (GapfillUtils.isGapfill(innerSelection)) {
+ if (strAlias.get(i) != null) {
+ timeBucketCol = strAlias.get(i);
+ } else {
+ timeBucketCol = innerSelection.getFunction().getArguments().get(0).toString();
+ }
+ break;
+ }
+ }
+
+ Preconditions.checkArgument(timeBucketCol != null, "No Group By timebucket.");
+
+ boolean findTimeBucket = false;
+ for (ExpressionContext groupbyExp : groupbyExpressions) {
+ if (timeBucketCol.equals(groupbyExp.toString())) {
+ findTimeBucket = true;
+ break;
+ }
+ }
+
+ Preconditions.checkArgument(findTimeBucket, "No Group By timebucket.");
+ }
+
+ private static ExpressionContext findGapfillExpressionContext(QueryContext queryContext) {
+ for (ExpressionContext expressionContext : queryContext.getSelectExpressions()) {
+ if (isGapfill(expressionContext)) {
+ return expressionContext;
+ }
+ }
+ return null;
+ }
+
+ public static ExpressionContext getGapfillExpressionContext(QueryContext queryContext) {
+ GapfillType gapfillType = queryContext.getGapfillType();
+ if (gapfillType == GapfillType.AGGREGATE_GAP_FILL || gapfillType == GapfillType.GAP_FILL) {
+ return findGapfillExpressionContext(queryContext);
+ } else if (gapfillType == GapfillType.GAP_FILL_AGGREGATE || gapfillType == GapfillType.AGGREGATE_GAP_FILL_AGGREGATE
+ || gapfillType == GapfillType.GAP_FILL_SELECT) {
+ return findGapfillExpressionContext(queryContext.getSubQueryContext());
+ } else {
+ return null;
+ }
+ }
+
+ public static int findTimeBucketColumnIndex(QueryContext queryContext) {
+ GapfillType gapfillType = queryContext.getGapfillType();
+ if (gapfillType == GapfillType.GAP_FILL_AGGREGATE
+ || gapfillType == GapfillType.GAP_FILL_SELECT
+ || gapfillType == GapfillType.AGGREGATE_GAP_FILL_AGGREGATE) {
+ queryContext = queryContext.getSubQueryContext();
+ }
+ List<ExpressionContext> expressionContexts = queryContext.getSelectExpressions();
+ for (int i = 0; i < expressionContexts.size(); i++) {
+ if (isGapfill(expressionContexts.get(i))) {
+ return i;
+ }
+ }
+ return -1;
+ }
+
+ public static ExpressionContext getTimeSeriesOnExpressionContext(ExpressionContext gapFillSelection) {
+ List<ExpressionContext> args = gapFillSelection.getFunction().getArguments();
+ for (int i = STARTING_INDEX_OF_OPTIONAL_ARGS_FOR_PRE_AGGREGATE_GAP_FILL; i < args.size(); i++) {
+ if (GapfillUtils.isTimeSeriesOn(args.get(i))) {
+ return args.get(i);
+ }
+ }
+ return null;
+ }
+
+ public static Map<String, ExpressionContext> getFillExpressions(ExpressionContext gapFillSelection) {
+ Map<String, ExpressionContext> fillExpressions = new HashMap<>();
+ List<ExpressionContext> args = gapFillSelection.getFunction().getArguments();
+ for (int i = STARTING_INDEX_OF_OPTIONAL_ARGS_FOR_PRE_AGGREGATE_GAP_FILL; i < args.size(); i++) {
+ if (GapfillUtils.isFill(args.get(i))) {
+ ExpressionContext fillExpression = args.get(i);
+ fillExpressions.put(fillExpression.getFunction().getArguments().get(0).getIdentifier(), fillExpression);
+ }
+ }
+ return fillExpressions;
+ }
+
+ public static String getTableName(PinotQuery pinotQuery) {
+ while (pinotQuery.getDataSource().getSubquery() != null) {
+ pinotQuery = pinotQuery.getDataSource().getSubquery();
+ }
+ return pinotQuery.getDataSource().getTableName();
+ }
+
+ public static BrokerRequest stripGapfill(BrokerRequest brokerRequest) {
+ if (brokerRequest.getPinotQuery().getDataSource() == null) {
+ return brokerRequest;
+ }
+ QueryContext queryContext = BrokerRequestToQueryContextConverter.convert(brokerRequest);
Review comment:
Avoid this conversion. We may loop over the select list and see if there is `gapfill`, and directly rewrite the query when `gapfill` is found
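A minimal sketch of the cheaper check being suggested: scan the select list for a `gapfill(...)` call directly, without first converting the whole `BrokerRequest` into a `QueryContext`. The `Expression` type below is a hypothetical stand-in for Pinot's thrift expression tree:

```java
// Hypothetical sketch: detect gapfill by walking the select expressions
// directly, avoiding the full BrokerRequest -> QueryContext conversion.
import java.util.Arrays;
import java.util.List;

final class GapfillScanSketch {
  static final class Expression {
    final String functionName;        // null for identifiers and literals
    final List<Expression> operands;

    Expression(String functionName, Expression... operands) {
      this.functionName = functionName;
      this.operands = Arrays.asList(operands);
    }
  }

  // Returns true if any select expression contains a gapfill() call,
  // checking nested function arguments recursively.
  static boolean hasGapfill(List<Expression> selectList) {
    for (Expression e : selectList) {
      if (e.functionName != null) {
        if ("gapfill".equalsIgnoreCase(e.functionName)) {
          return true;
        }
        if (hasGapfill(e.operands)) {
          return true;
        }
      }
    }
    return false;
  }
}
```

Only when this scan finds `gapfill` would the query be rewritten, so regular queries pay a single pass over the select list instead of a full context conversion.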
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/selection/SelectionOperatorUtils.java
##########
@@ -391,6 +388,9 @@ public static DataTable getDataTableFromRows(Collection<Object[]> rows, DataSche
row[i] = dataTable.getStringArray(rowId, i);
break;
+ case OBJECT:
Review comment:
Does this change still apply?
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/util/GapfillUtils.java
##########
@@ -119,4 +137,265 @@ static public Serializable getDefaultValue(DataSchema.ColumnDataType dataType) {
private static String canonicalizeFunctionName(String functionName) {
return StringUtils.remove(functionName, '_').toLowerCase();
}
+
+ public static boolean isGapfill(ExpressionContext expressionContext) {
+ if (expressionContext.getType() != ExpressionContext.Type.FUNCTION) {
+ return false;
+ }
+
+ return GAP_FILL.equals(canonicalizeFunctionName(expressionContext.getFunction().getFunctionName()));
+ }
+
+ private static boolean isGapfill(QueryContext queryContext) {
+ for (ExpressionContext expressionContext : queryContext.getSelectExpressions()) {
+ if (isGapfill(expressionContext)) {
+ return true;
+ }
+ }
+ return false;
+ }
+
+ /**
+ * Get the gapfill type for queryContext. Also do the validation for gapfill request.
+ * @param queryContext
+ */
+ public static void setGapfillType(QueryContext queryContext) {
+ GapfillType gapfillType = null;
+ if (queryContext.getSubQueryContext() == null) {
+ if (isGapfill(queryContext)) {
+ Preconditions.checkArgument(queryContext.getAggregationFunctions() == null,
+ "Aggregation and Gapfill can not be in the same sql statement.");
+ gapfillType = GapfillType.GAP_FILL;
+ }
+ } else if (isGapfill(queryContext)) {
+ Preconditions.checkArgument(queryContext.getSubQueryContext().getAggregationFunctions() != null,
+ "Select and Gapfill should be in the same sql statement.");
+ Preconditions.checkArgument(queryContext.getSubQueryContext().getSubQueryContext() == null,
+ "There is no three levels nesting sql when the outer query is gapfill.");
+ gapfillType = GapfillType.AGGREGATE_GAP_FILL;
+ } else if (isGapfill(queryContext.getSubQueryContext())) {
+ if (queryContext.getAggregationFunctions() == null) {
+ gapfillType = GapfillType.GAP_FILL_SELECT;
+ } else if (queryContext.getSubQueryContext().getSubQueryContext() == null) {
+ gapfillType = GapfillType.GAP_FILL_AGGREGATE;
+ } else {
+ Preconditions
+ .checkArgument(queryContext.getSubQueryContext().getSubQueryContext().getAggregationFunctions() != null,
+ "Select cannot happen before gapfill.");
+ gapfillType = GapfillType.AGGREGATE_GAP_FILL_AGGREGATE;
+ }
+ }
+
+ queryContext.setGapfillType(gapfillType);
+ if (gapfillType == null) {
+ return;
+ }
+
+ ExpressionContext gapFillSelection = GapfillUtils.getGapfillExpressionContext(queryContext);
+
+ Preconditions.checkArgument(gapFillSelection != null && gapFillSelection.getFunction() != null,
+ "Gapfill expression should be a function.");
+ List<ExpressionContext> args = gapFillSelection.getFunction().getArguments();
+ Preconditions.checkArgument(args.size() > 5, "Gapfill does not have the correct number of arguments.");
+ Preconditions.checkArgument(args.get(1).getLiteral() != null,
+ "The second argument of Gapfill should be the time formatter.");
+ Preconditions.checkArgument(args.get(2).getLiteral() != null,
+ "The third argument of Gapfill should be the start time.");
+ Preconditions.checkArgument(args.get(3).getLiteral() != null,
+ "The fourth argument of Gapfill should be the end time.");
+ Preconditions.checkArgument(args.get(4).getLiteral() != null,
+ "The fifth argument of Gapfill should be the time bucket size.");
+
+ ExpressionContext timeseriesOn = GapfillUtils.getTimeSeriesOnExpressionContext(gapFillSelection);
+ Preconditions.checkArgument(timeseriesOn != null, "The TimeSeriesOn expressions should be specified.");
+
+ if (queryContext.getAggregationFunctions() == null) {
+ return;
+ }
+
+ List<ExpressionContext> groupbyExpressions = queryContext.getGroupByExpressions();
+ Preconditions.checkArgument(groupbyExpressions != null, "No GroupBy Clause.");
+ List<ExpressionContext> innerSelections = queryContext.getSubQueryContext().getSelectExpressions();
+ String timeBucketCol = null;
+ List<String> strAlias = queryContext.getSubQueryContext().getAliasList();
+ for (int i = 0; i < innerSelections.size(); i++) {
+ ExpressionContext innerSelection = innerSelections.get(i);
+ if (GapfillUtils.isGapfill(innerSelection)) {
+ if (strAlias.get(i) != null) {
+ timeBucketCol = strAlias.get(i);
+ } else {
+ timeBucketCol = innerSelection.getFunction().getArguments().get(0).toString();
+ }
+ break;
+ }
+ }
+
+ Preconditions.checkArgument(timeBucketCol != null, "No Group By timebucket.");
+
+ boolean findTimeBucket = false;
+ for (ExpressionContext groupbyExp : groupbyExpressions) {
+ if (timeBucketCol.equals(groupbyExp.toString())) {
+ findTimeBucket = true;
+ break;
+ }
+ }
+
+ Preconditions.checkArgument(findTimeBucket, "No Group By timebucket.");
+ }
+
+ private static ExpressionContext findGapfillExpressionContext(QueryContext queryContext) {
+ for (ExpressionContext expressionContext : queryContext.getSelectExpressions()) {
+ if (isGapfill(expressionContext)) {
+ return expressionContext;
+ }
+ }
+ return null;
+ }
+
+ public static ExpressionContext getGapfillExpressionContext(QueryContext queryContext) {
+ GapfillType gapfillType = queryContext.getGapfillType();
+ if (gapfillType == GapfillType.AGGREGATE_GAP_FILL || gapfillType == GapfillType.GAP_FILL) {
+ return findGapfillExpressionContext(queryContext);
+ } else if (gapfillType == GapfillType.GAP_FILL_AGGREGATE || gapfillType == GapfillType.AGGREGATE_GAP_FILL_AGGREGATE
+ || gapfillType == GapfillType.GAP_FILL_SELECT) {
+ return findGapfillExpressionContext(queryContext.getSubQueryContext());
+ } else {
+ return null;
+ }
+ }
+
+ public static int findTimeBucketColumnIndex(QueryContext queryContext) {
+ GapfillType gapfillType = queryContext.getGapfillType();
+ if (gapfillType == GapfillType.GAP_FILL_AGGREGATE
+ || gapfillType == GapfillType.GAP_FILL_SELECT
+ || gapfillType == GapfillType.AGGREGATE_GAP_FILL_AGGREGATE) {
+ queryContext = queryContext.getSubQueryContext();
+ }
+ List<ExpressionContext> expressionContexts = queryContext.getSelectExpressions();
+ for (int i = 0; i < expressionContexts.size(); i++) {
+ if (isGapfill(expressionContexts.get(i))) {
+ return i;
+ }
+ }
+ return -1;
+ }
+
+ public static ExpressionContext getTimeSeriesOnExpressionContext(ExpressionContext gapFillSelection) {
+ List<ExpressionContext> args = gapFillSelection.getFunction().getArguments();
+ for (int i = STARTING_INDEX_OF_OPTIONAL_ARGS_FOR_PRE_AGGREGATE_GAP_FILL; i < args.size(); i++) {
+ if (GapfillUtils.isTimeSeriesOn(args.get(i))) {
+ return args.get(i);
+ }
+ }
+ return null;
+ }
+
+ public static Map<String, ExpressionContext> getFillExpressions(ExpressionContext gapFillSelection) {
+ Map<String, ExpressionContext> fillExpressions = new HashMap<>();
+ List<ExpressionContext> args = gapFillSelection.getFunction().getArguments();
+ for (int i = STARTING_INDEX_OF_OPTIONAL_ARGS_FOR_PRE_AGGREGATE_GAP_FILL; i < args.size(); i++) {
+ if (GapfillUtils.isFill(args.get(i))) {
+ ExpressionContext fillExpression = args.get(i);
+ fillExpressions.put(fillExpression.getFunction().getArguments().get(0).getIdentifier(), fillExpression);
+ }
+ }
+ return fillExpressions;
+ }
+
+ public static String getTableName(PinotQuery pinotQuery) {
+ while (pinotQuery.getDataSource().getSubquery() != null) {
+ pinotQuery = pinotQuery.getDataSource().getSubquery();
+ }
+ return pinotQuery.getDataSource().getTableName();
+ }
+
+ public static BrokerRequest stripGapfill(BrokerRequest brokerRequest) {
+ if (brokerRequest.getPinotQuery().getDataSource() == null) {
+ return brokerRequest;
+ }
+ QueryContext queryContext = BrokerRequestToQueryContextConverter.convert(brokerRequest);
+ GapfillUtils.GapfillType gapfillType = queryContext.getGapfillType();
+ if (gapfillType == null) {
+ return brokerRequest;
+ }
+ switch (gapfillType) {
+ // one sql query with gapfill only
+ case GAP_FILL:
+ return stripGapfill(brokerRequest.getPinotQuery());
+ // gapfill as subquery, the outer query may have the filter
+ case GAP_FILL_SELECT:
+ // gapfill as subquery, the outer query has the aggregation
+ case GAP_FILL_AGGREGATE:
+ // aggregation as subquery, the outer query is gapfill
+ case AGGREGATE_GAP_FILL:
+ return stripGapfill(brokerRequest.getPinotQuery().getDataSource().getSubquery());
+ // aggregation as the second-level subquery, gapfill as the first-level subquery, a different aggregation as the outer query
+ case AGGREGATE_GAP_FILL_AGGREGATE:
+ return stripGapfill(brokerRequest.getPinotQuery().getDataSource().getSubquery().getDataSource().getSubquery());
+ default:
+ return brokerRequest;
+ }
+ }
+
+ private static BrokerRequest stripGapfill(PinotQuery pinotQuery) {
+ PinotQuery copy = new PinotQuery(pinotQuery);
+ BrokerRequest brokerRequest = new BrokerRequest();
+ brokerRequest.setPinotQuery(copy);
+ // Set table name in broker request because it is used for access control, query routing etc.
+ DataSource dataSource = copy.getDataSource();
+ if (dataSource != null) {
+ QuerySource querySource = new QuerySource();
+ querySource.setTableName(dataSource.getTableName());
+ brokerRequest.setQuerySource(querySource);
+ }
+ List<Expression> selectList = copy.getSelectList();
+ for (int i = 0; i < selectList.size(); i++) {
+ Expression select = selectList.get(i);
+ if (select.getType() != ExpressionType.FUNCTION) {
+ continue;
+ }
+ if (GAP_FILL.equalsIgnoreCase(select.getFunctionCall().getOperator())) {
Review comment:
(minor) function name is canonical, so you may use `equals` here
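The comment above holds because the helper shown further down in this diff strips underscores and lower-cases the name before comparing, so every variant spelling collapses to one canonical string. A minimal standalone sketch (re-implemented with `String.replace` instead of the commons-lang `StringUtils.remove` the real code uses; class name is illustrative):

```java
// Hypothetical re-implementation of GapfillUtils.canonicalizeFunctionName,
// shown to illustrate why a plain equals() against the canonical constant
// is enough after canonicalization.
public class CanonicalizeDemo {
    public static String canonicalizeFunctionName(String functionName) {
        return functionName.replace("_", "").toLowerCase();
    }

    public static void main(String[] args) {
        // All spellings of the function name collapse to the same canonical form,
        // so a case-insensitive comparison afterwards is redundant.
        System.out.println(canonicalizeFunctionName("GAP_FILL")); // gapfill
        System.out.println(canonicalizeFunctionName("GapFill"));  // gapfill
    }
}
```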
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/util/GapfillUtils.java
##########
@@ -119,4 +137,265 @@ static public Serializable getDefaultValue(DataSchema.ColumnDataType dataType) {
private static String canonicalizeFunctionName(String functionName) {
return StringUtils.remove(functionName, '_').toLowerCase();
}
+
+ public static boolean isGapfill(ExpressionContext expressionContext) {
+ if (expressionContext.getType() != ExpressionContext.Type.FUNCTION) {
+ return false;
+ }
+
+ return GAP_FILL.equals(canonicalizeFunctionName(expressionContext.getFunction().getFunctionName()));
+ }
+
+ private static boolean isGapfill(QueryContext queryContext) {
+ for (ExpressionContext expressionContext : queryContext.getSelectExpressions()) {
+ if (isGapfill(expressionContext)) {
+ return true;
+ }
+ }
+ return false;
+ }
+
+ /**
+ * Determine the gapfill type for the given query context and validate the gapfill request.
+ * @param queryContext the query context to inspect and annotate with the gapfill type
+ */
+ public static void setGapfillType(QueryContext queryContext) {
+ GapfillType gapfillType = null;
+ if (queryContext.getSubQueryContext() == null) {
+ if (isGapfill(queryContext)) {
+ Preconditions.checkArgument(queryContext.getAggregationFunctions() == null,
+ "Aggregation and Gapfill cannot be in the same SQL statement.");
+ gapfillType = GapfillType.GAP_FILL;
+ }
+ } else if (isGapfill(queryContext)) {
+ Preconditions.checkArgument(queryContext.getSubQueryContext().getAggregationFunctions() != null,
+ "Select and Gapfill should be in the same SQL statement.");
+ Preconditions.checkArgument(queryContext.getSubQueryContext().getSubQueryContext() == null,
+ "Three levels of query nesting are not supported when the outer query is gapfill.");
+ gapfillType = GapfillType.AGGREGATE_GAP_FILL;
+ } else if (isGapfill(queryContext.getSubQueryContext())) {
+ if (queryContext.getAggregationFunctions() == null) {
+ gapfillType = GapfillType.GAP_FILL_SELECT;
+ } else if (queryContext.getSubQueryContext().getSubQueryContext() == null) {
+ gapfillType = GapfillType.GAP_FILL_AGGREGATE;
+ } else {
+ Preconditions
+ .checkArgument(queryContext.getSubQueryContext().getSubQueryContext().getAggregationFunctions() != null,
+ "Select cannot happen before gapfill.");
+ gapfillType = GapfillType.AGGREGATE_GAP_FILL_AGGREGATE;
+ }
+ }
+
+ queryContext.setGapfillType(gapfillType);
+ if (gapfillType == null) {
+ return;
+ }
+
+ ExpressionContext gapFillSelection = GapfillUtils.getGapfillExpressionContext(queryContext);
+
+ Preconditions.checkArgument(gapFillSelection != null && gapFillSelection.getFunction() != null,
+ "Gapfill expression should be a function.");
+ List<ExpressionContext> args = gapFillSelection.getFunction().getArguments();
+ Preconditions.checkArgument(args.size() > 5, "Gapfill does not have the correct number of arguments.");
+ Preconditions.checkArgument(args.get(1).getLiteral() != null,
+ "The second argument of Gapfill should be the time formatter.");
+ Preconditions.checkArgument(args.get(2).getLiteral() != null,
+ "The third argument of Gapfill should be the start time.");
+ Preconditions.checkArgument(args.get(3).getLiteral() != null,
+ "The fourth argument of Gapfill should be the end time.");
+ Preconditions.checkArgument(args.get(4).getLiteral() != null,
+ "The fifth argument of Gapfill should be the time bucket size.");
+
+ ExpressionContext timeseriesOn = GapfillUtils.getTimeSeriesOnExpressionContext(gapFillSelection);
+ Preconditions.checkArgument(timeseriesOn != null, "The TimeSeriesOn expressions should be specified.");
+
+ if (queryContext.getAggregationFunctions() == null) {
+ return;
+ }
+
+ List<ExpressionContext> groupbyExpressions = queryContext.getGroupByExpressions();
+ Preconditions.checkArgument(groupbyExpressions != null, "No GroupBy Clause.");
+ List<ExpressionContext> innerSelections = queryContext.getSubQueryContext().getSelectExpressions();
+ String timeBucketCol = null;
+ List<String> strAlias = queryContext.getSubQueryContext().getAliasList();
+ for (int i = 0; i < innerSelections.size(); i++) {
+ ExpressionContext innerSelection = innerSelections.get(i);
+ if (GapfillUtils.isGapfill(innerSelection)) {
+ if (strAlias.get(i) != null) {
+ timeBucketCol = strAlias.get(i);
+ } else {
+ timeBucketCol = innerSelection.getFunction().getArguments().get(0).toString();
+ }
+ break;
+ }
+ }
+
+ Preconditions.checkArgument(timeBucketCol != null, "No Group By timebucket.");
+
+ boolean findTimeBucket = false;
+ for (ExpressionContext groupbyExp : groupbyExpressions) {
+ if (timeBucketCol.equals(groupbyExp.toString())) {
+ findTimeBucket = true;
+ break;
+ }
+ }
+
+ Preconditions.checkArgument(findTimeBucket, "No Group By timebucket.");
+ }
+
+ private static ExpressionContext findGapfillExpressionContext(QueryContext queryContext) {
+ for (ExpressionContext expressionContext : queryContext.getSelectExpressions()) {
+ if (isGapfill(expressionContext)) {
+ return expressionContext;
+ }
+ }
+ return null;
+ }
+
+ public static ExpressionContext getGapfillExpressionContext(QueryContext queryContext) {
+ GapfillType gapfillType = queryContext.getGapfillType();
+ if (gapfillType == GapfillType.AGGREGATE_GAP_FILL || gapfillType == GapfillType.GAP_FILL) {
+ return findGapfillExpressionContext(queryContext);
+ } else if (gapfillType == GapfillType.GAP_FILL_AGGREGATE || gapfillType == GapfillType.AGGREGATE_GAP_FILL_AGGREGATE
+ || gapfillType == GapfillType.GAP_FILL_SELECT) {
+ return findGapfillExpressionContext(queryContext.getSubQueryContext());
+ } else {
+ return null;
+ }
+ }
+
+ public static int findTimeBucketColumnIndex(QueryContext queryContext) {
+ GapfillType gapfillType = queryContext.getGapfillType();
+ if (gapfillType == GapfillType.GAP_FILL_AGGREGATE
+ || gapfillType == GapfillType.GAP_FILL_SELECT
+ || gapfillType == GapfillType.AGGREGATE_GAP_FILL_AGGREGATE) {
+ queryContext = queryContext.getSubQueryContext();
+ }
+ List<ExpressionContext> expressionContexts = queryContext.getSelectExpressions();
+ for (int i = 0; i < expressionContexts.size(); i++) {
+ if (isGapfill(expressionContexts.get(i))) {
+ return i;
+ }
+ }
+ return -1;
+ }
+
+ public static ExpressionContext getTimeSeriesOnExpressionContext(ExpressionContext gapFillSelection) {
+ List<ExpressionContext> args = gapFillSelection.getFunction().getArguments();
+ for (int i = STARTING_INDEX_OF_OPTIONAL_ARGS_FOR_PRE_AGGREGATE_GAP_FILL; i < args.size(); i++) {
+ if (GapfillUtils.isTimeSeriesOn(args.get(i))) {
+ return args.get(i);
+ }
+ }
+ return null;
+ }
+
+ public static Map<String, ExpressionContext> getFillExpressions(ExpressionContext gapFillSelection) {
+ Map<String, ExpressionContext> fillExpressions = new HashMap<>();
+ List<ExpressionContext> args = gapFillSelection.getFunction().getArguments();
+ for (int i = STARTING_INDEX_OF_OPTIONAL_ARGS_FOR_PRE_AGGREGATE_GAP_FILL; i < args.size(); i++) {
+ if (GapfillUtils.isFill(args.get(i))) {
+ ExpressionContext fillExpression = args.get(i);
+ fillExpressions.put(fillExpression.getFunction().getArguments().get(0).getIdentifier(), fillExpression);
+ }
+ }
+ return fillExpressions;
+ }
+
+ public static String getTableName(PinotQuery pinotQuery) {
+ while (pinotQuery.getDataSource().getSubquery() != null) {
+ pinotQuery = pinotQuery.getDataSource().getSubquery();
+ }
+ return pinotQuery.getDataSource().getTableName();
+ }
+
+ public static BrokerRequest stripGapfill(BrokerRequest brokerRequest) {
+ if (brokerRequest.getPinotQuery().getDataSource() == null) {
+ return brokerRequest;
+ }
+ QueryContext queryContext = BrokerRequestToQueryContextConverter.convert(brokerRequest);
+ GapfillUtils.GapfillType gapfillType = queryContext.getGapfillType();
+ if (gapfillType == null) {
+ return brokerRequest;
+ }
+ switch (gapfillType) {
+ // one sql query with gapfill only
+ case GAP_FILL:
+ return stripGapfill(brokerRequest.getPinotQuery());
+ // gapfill as subquery, the outer query may have the filter
+ case GAP_FILL_SELECT:
+ // gapfill as subquery, the outer query has the aggregation
+ case GAP_FILL_AGGREGATE:
+ // aggregation as subquery, the outer query is gapfill
+ case AGGREGATE_GAP_FILL:
+ return stripGapfill(brokerRequest.getPinotQuery().getDataSource().getSubquery());
+ // aggregation as the second-level subquery, gapfill as the first-level subquery, a different aggregation as the outer query
+ case AGGREGATE_GAP_FILL_AGGREGATE:
+ return stripGapfill(brokerRequest.getPinotQuery().getDataSource().getSubquery().getDataSource().getSubquery());
+ default:
+ return brokerRequest;
+ }
+ }
+
+ private static BrokerRequest stripGapfill(PinotQuery pinotQuery) {
+ PinotQuery copy = new PinotQuery(pinotQuery);
+ BrokerRequest brokerRequest = new BrokerRequest();
+ brokerRequest.setPinotQuery(copy);
+ // Set table name in broker request because it is used for access control, query routing etc.
+ DataSource dataSource = copy.getDataSource();
+ if (dataSource != null) {
+ QuerySource querySource = new QuerySource();
+ querySource.setTableName(dataSource.getTableName());
+ brokerRequest.setQuerySource(querySource);
+ }
+ List<Expression> selectList = copy.getSelectList();
+ for (int i = 0; i < selectList.size(); i++) {
+ Expression select = selectList.get(i);
+ if (select.getType() != ExpressionType.FUNCTION) {
+ continue;
+ }
+ if (GAP_FILL.equalsIgnoreCase(select.getFunctionCall().getOperator())) {
+ selectList.set(i, select.getFunctionCall().getOperands().get(0));
+ break;
+ }
+ if (AS.equalsIgnoreCase(select.getFunctionCall().getOperator())
+ && select.getFunctionCall().getOperands().get(0).getType() == ExpressionType.FUNCTION
+ && GAP_FILL.equalsIgnoreCase(select.getFunctionCall().getOperands().get(0).getFunctionCall().getOperator())) {
+ select.getFunctionCall().getOperands().set(0,
+ select.getFunctionCall().getOperands().get(0).getFunctionCall().getOperands().get(0));
+ break;
+ }
+ }
+
+ for (Expression orderBy : copy.getOrderByList()) {
+ if (orderBy.getType() != ExpressionType.FUNCTION) {
+ continue;
+ }
+ if (orderBy.getFunctionCall().getOperands().get(0).getType() == ExpressionType.FUNCTION
+ && GAP_FILL.equalsIgnoreCase(
+ orderBy.getFunctionCall().getOperands().get(0).getFunctionCall().getOperator())) {
+ orderBy.getFunctionCall().getOperands().set(0,
+ orderBy.getFunctionCall().getOperands().get(0).getFunctionCall().getOperands().get(0));
+ break;
+ }
+ }
+ return brokerRequest;
+ }
+
+ public enum GapfillType {
Review comment:
Going over all the classes, I feel `GapfillType` might not be required at all. All the handling can be based on levels of subqueries.
If you feel `GapfillType` can make code cleaner, we may add a function `GapfillType getGapfillType(QueryContext queryContext)` and maintain it within the `GapfillProcessor`. No need to embed it into the `QueryContext` (in `QueryContext`, we try to maintain only common properties, not feature specific ones).
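A minimal sketch of the suggested `getGapfillType(QueryContext)` derivation, computing the type from subquery nesting instead of storing it on `QueryContext`. `Ctx` is a hypothetical stand-in for `QueryContext` with only the fields the derivation needs (validation preconditions are omitted), not Pinot's actual API:

```java
// Hypothetical sketch: derive the gapfill type on demand from subquery nesting.
public class GapfillTypeDemo {
    public enum GapfillType {
        GAP_FILL, GAP_FILL_SELECT, GAP_FILL_AGGREGATE, AGGREGATE_GAP_FILL, AGGREGATE_GAP_FILL_AGGREGATE
    }

    public static class Ctx {
        final boolean hasGapfill;      // gapfill() appears in the select list
        final boolean hasAggregation;  // aggregation functions are present
        final Ctx sub;                 // subquery context, or null

        public Ctx(boolean hasGapfill, boolean hasAggregation, Ctx sub) {
            this.hasGapfill = hasGapfill;
            this.hasAggregation = hasAggregation;
            this.sub = sub;
        }
    }

    public static GapfillType getGapfillType(Ctx q) {
        if (q.sub == null) {
            return q.hasGapfill ? GapfillType.GAP_FILL : null;   // single query with gapfill
        }
        if (q.hasGapfill) {
            return GapfillType.AGGREGATE_GAP_FILL;               // aggregation subquery, gapfill outer
        }
        if (q.sub.hasGapfill) {
            if (!q.hasAggregation) {
                return GapfillType.GAP_FILL_SELECT;              // gapfill subquery, plain select outer
            }
            return q.sub.sub == null
                ? GapfillType.GAP_FILL_AGGREGATE                 // gapfill subquery, aggregation outer
                : GapfillType.AGGREGATE_GAP_FILL_AGGREGATE;      // aggregate -> gapfill -> aggregate
        }
        return null;                                             // no gapfill anywhere
    }

    public static void main(String[] args) {
        Ctx gapfillInner = new Ctx(true, false, null);
        System.out.println(getGapfillType(new Ctx(false, true, gapfillInner))); // GAP_FILL_AGGREGATE
    }
}
```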
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: commits-unsubscribe@pinot.apache.org
For queries about this service, please contact Infrastructure at:
users@infra.apache.org
---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@pinot.apache.org
For additional commands, e-mail: commits-help@pinot.apache.org
[GitHub] [pinot] weixiangsun commented on a change in pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
weixiangsun commented on a change in pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#discussion_r829448967
##########
File path: pinot-core/src/main/java/org/apache/pinot/core/query/reduce/ColumnDataToBlockValSetConverter.java
##########
@@ -0,0 +1,181 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.core.query.reduce;
+
+import java.util.List;
+import javax.annotation.Nullable;
+import org.apache.pinot.common.utils.DataSchema;
+import org.apache.pinot.common.utils.DataTable;
+import org.apache.pinot.core.common.BlockValSet;
+import org.apache.pinot.segment.spi.index.reader.Dictionary;
+import org.apache.pinot.spi.data.FieldSpec;
+
+
+/**
+ * As for Gapfilling Function, all raw data will be retrieved from the pinot
+ * server and merged on the pinot broker. The data will be in {@link DataTable}
+ * format.
+ * As part of Gapfilling Function execution plan, the aggregation function will
+ * work on the merged data on pinot broker. The aggregation function only takes
+ * the {@link BlockValSet} format.
+ * This is the Helper class to convert the data from {@link DataTable} to the
+ * block of values {@link BlockValSet} which used as input to the aggregation
+ * function.
+ */
+@SuppressWarnings({"rawtypes", "unchecked"})
Review comment:
Fixed
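For context, the converter's javadoc above describes turning the row-oriented merged `DataTable` into the column-oriented `BlockValSet` inputs that aggregation functions consume. A minimal, dependency-free sketch of that row-to-column step (hypothetical types and names, not Pinot's API):

```java
import java.util.List;

// Hypothetical sketch of the row-to-column conversion: aggregation functions
// consume whole columns, while the merged broker data arrives as rows, so each
// needed column is materialized into a primitive array.
public class ColumnExtractorDemo {
    public static double[] extractDoubleColumn(List<Object[]> rows, int colIndex) {
        double[] values = new double[rows.size()];
        for (int i = 0; i < rows.size(); i++) {
            values[i] = ((Number) rows.get(i)[colIndex]).doubleValue();
        }
        return values;
    }

    public static void main(String[] args) {
        List<Object[]> rows = List.of(
            new Object[]{"series-a", 1.5},
            new Object[]{"series-b", 2.5});
        double[] col = extractDoubleColumn(rows, 1);
        System.out.println(col[0] + "," + col[1]); // prints 1.5,2.5
    }
}
```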
[GitHub] [pinot] codecov-commenter edited a comment on pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#issuecomment-1013577523
# [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) Report
> Merging [#8029](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (597b9c0) into [master](https://codecov.io/gh/apache/pinot/commit/91c2ebbf297c4bf3fecb5f98413e9f00e324e2dc?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (91c2ebb) will **increase** coverage by `36.48%`.
> The diff coverage is `75.58%`.
[![Impacted file tree graph](https://codecov.io/gh/apache/pinot/pull/8029/graphs/tree.svg?width=650&height=150&src=pr&token=4ibza2ugkz&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
```diff
@@ Coverage Diff @@
## master #8029 +/- ##
=============================================
+ Coverage 27.62% 64.10% +36.48%
- Complexity 0 4254 +4254
=============================================
Files 1624 1600 -24
Lines 85450 84442 -1008
Branches 12882 12855 -27
=============================================
+ Hits 23604 54133 +30529
+ Misses 59631 26386 -33245
- Partials 2215 3923 +1708
```
| Flag | Coverage Δ | |
|---|---|---|
| integration2 | `?` | |
| unittests1 | `66.96% <76.22%> (?)` | |
| unittests2 | `14.11% <0.31%> (?)` | |
Flags with carried forward coverage won't be shown. [Click here](https://docs.codecov.io/docs/carryforward-flags?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#carryforward-flags-in-the-pull-request-comment) to find out more.
| [Impacted Files](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | Coverage Δ | |
|---|---|---|
| [...roker/requesthandler/GrpcBrokerRequestHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtYnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9icm9rZXIvcmVxdWVzdGhhbmRsZXIvR3JwY0Jyb2tlclJlcXVlc3RIYW5kbGVyLmphdmE=) | `0.00% <ø> (-78.58%)` | :arrow_down: |
| [...thandler/SingleConnectionBrokerRequestHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtYnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9icm9rZXIvcmVxdWVzdGhhbmRsZXIvU2luZ2xlQ29ubmVjdGlvbkJyb2tlclJlcXVlc3RIYW5kbGVyLmphdmE=) | `13.20% <0.00%> (-73.83%)` | :arrow_down: |
| [...t/controller/api/resources/PinotQueryResource.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29udHJvbGxlci9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29udHJvbGxlci9hcGkvcmVzb3VyY2VzL1Bpbm90UXVlcnlSZXNvdXJjZS5qYXZh) | `0.00% <0.00%> (-50.35%)` | :arrow_down: |
| [...t/core/query/selection/SelectionOperatorUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9zZWxlY3Rpb24vU2VsZWN0aW9uT3BlcmF0b3JVdGlscy5qYXZh) | `86.08% <0.00%> (+16.37%)` | :arrow_up: |
| [...query/reduce/ColumnDataToBlockValSetConverter.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvQ29sdW1uRGF0YVRvQmxvY2tWYWxTZXRDb252ZXJ0ZXIuamF2YQ==) | `17.30% <17.30%> (ø)` | |
| [...roker/requesthandler/BaseBrokerRequestHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtYnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9icm9rZXIvcmVxdWVzdGhhbmRsZXIvQmFzZUJyb2tlclJlcXVlc3RIYW5kbGVyLmphdmE=) | `23.88% <33.33%> (-38.58%)` | :arrow_down: |
| [...query/request/context/utils/QueryContextUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZXF1ZXN0L2NvbnRleHQvdXRpbHMvUXVlcnlDb250ZXh0VXRpbHMuamF2YQ==) | `65.38% <36.36%> (+13.11%)` | :arrow_up: |
| [.../java/org/apache/pinot/core/util/GapfillUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS91dGlsL0dhcGZpbGxVdGlscy5qYXZh) | `64.11% <65.19%> (+27.75%)` | :arrow_up: |
| [...inot/core/query/reduce/PostAggregationHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUG9zdEFnZ3JlZ2F0aW9uSGFuZGxlci5qYXZh) | `91.89% <66.66%> (+7.18%)` | :arrow_up: |
| [...ot/core/query/reduce/filter/RowMatcherFactory.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvZmlsdGVyL1Jvd01hdGNoZXJGYWN0b3J5LmphdmE=) | `66.66% <66.66%> (ø)` | |
| ... and [1355 more](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Last update [91c2ebb...597b9c0](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
[GitHub] [pinot] codecov-commenter edited a comment on pull request #8029: Add Time-Series Gapfilling functionality.
Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #8029:
URL: https://github.com/apache/pinot/pull/8029#issuecomment-1013577523
# [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) Report
> Merging [#8029](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (cd0cbb2) into [master](https://codecov.io/gh/apache/pinot/commit/262dc50e236ed2af25a0cf8c67658a48731ce573?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (262dc50) will **decrease** coverage by `6.71%`.
> The diff coverage is `75.98%`.
> :exclamation: Current head cd0cbb2 differs from pull request most recent head 882d579. Consider uploading reports for the commit 882d579 to get more accurate results
[![Impacted file tree graph](https://codecov.io/gh/apache/pinot/pull/8029/graphs/tree.svg?width=650&height=150&src=pr&token=4ibza2ugkz&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
```diff
@@ Coverage Diff @@
## master #8029 +/- ##
============================================
- Coverage 70.83% 64.12% -6.72%
+ Complexity 4258 4256 -2
============================================
Files 1636 1600 -36
Lines 85804 84538 -1266
Branches 12920 12871 -49
============================================
- Hits 60779 54209 -6570
- Misses 20836 26412 +5576
+ Partials 4189 3917 -272
```
| Flag | Coverage Δ | |
|---|---|---|
| integration1 | `?` | |
| integration2 | `?` | |
| unittests1 | `66.98% <76.34%> (+0.04%)` | :arrow_up: |
| unittests2 | `14.09% <0.27%> (-0.08%)` | :arrow_down: |
Flags with carried forward coverage won't be shown. [Click here](https://docs.codecov.io/docs/carryforward-flags?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#carryforward-flags-in-the-pull-request-comment) to find out more.
| [Impacted Files](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | Coverage Δ | |
|---|---|---|
| [...t/controller/api/resources/PinotQueryResource.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29udHJvbGxlci9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29udHJvbGxlci9hcGkvcmVzb3VyY2VzL1Bpbm90UXVlcnlSZXNvdXJjZS5qYXZh) | `0.00% <0.00%> (-50.35%)` | :arrow_down: |
| [...t/core/query/selection/SelectionOperatorUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9zZWxlY3Rpb24vU2VsZWN0aW9uT3BlcmF0b3JVdGlscy5qYXZh) | `86.08% <0.00%> (-6.10%)` | :arrow_down: |
| [...query/reduce/ColumnDataToBlockValSetConverter.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvQ29sdW1uRGF0YVRvQmxvY2tWYWxTZXRDb252ZXJ0ZXIuamF2YQ==) | `17.30% <17.30%> (ø)` | |
| [...query/request/context/utils/QueryContextUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZXF1ZXN0L2NvbnRleHQvdXRpbHMvUXVlcnlDb250ZXh0VXRpbHMuamF2YQ==) | `65.38% <36.36%> (-11.89%)` | :arrow_down: |
| [...roker/requesthandler/BaseBrokerRequestHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtYnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9icm9rZXIvcmVxdWVzdGhhbmRsZXIvQmFzZUJyb2tlclJlcXVlc3RIYW5kbGVyLmphdmE=) | `23.90% <40.00%> (-47.92%)` | :arrow_down: |
| [.../java/org/apache/pinot/core/util/GapfillUtils.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS91dGlsL0dhcGZpbGxVdGlscy5qYXZh) | `64.55% <65.83%> (+0.91%)` | :arrow_up: |
| [...inot/core/query/reduce/PostAggregationHandler.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvUG9zdEFnZ3JlZ2F0aW9uSGFuZGxlci5qYXZh) | `91.89% <66.66%> (+0.12%)` | :arrow_up: |
| [...ot/core/query/reduce/filter/RowMatcherFactory.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvZmlsdGVyL1Jvd01hdGNoZXJGYWN0b3J5LmphdmE=) | `66.66% <66.66%> (ø)` | |
| [...xt/utils/BrokerRequestToQueryContextConverter.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZXF1ZXN0L2NvbnRleHQvdXRpbHMvQnJva2VyUmVxdWVzdFRvUXVlcnlDb250ZXh0Q29udmVydGVyLmphdmE=) | `94.02% <76.92%> (-4.36%)` | :arrow_down: |
| [...ore/query/reduce/filter/LiteralValueExtractor.java](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9yZWR1Y2UvZmlsdGVyL0xpdGVyYWxWYWx1ZUV4dHJhY3Rvci5qYXZh) | `83.33% <83.33%> (ø)` | |
| ... and [399 more](https://codecov.io/gh/apache/pinot/pull/8029/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Last update [262dc50...882d579](https://codecov.io/gh/apache/pinot/pull/8029?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: commits-unsubscribe@pinot.apache.org
For queries about this service, please contact Infrastructure at:
users@infra.apache.org
---------------------------------------------------------------------
For additional commands, e-mail: commits-help@pinot.apache.org