Posted to commits@doris.apache.org by GitBox <gi...@apache.org> on 2020/08/11 13:44:55 UTC

[GitHub] [incubator-doris] marising opened a new pull request #4330: [Feature][Cache] Sql cache and partition cache #2581

marising opened a new pull request #4330:
URL: https://github.com/apache/incubator-doris/pull/4330


   1. Analyze which cache mode a query can use
   2. Query the cache in StmtExecutor before executing the query (a rough sketch of this flow is shown below)
   3. Implement two cache modes: SQL cache and partition cache
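
   A rough, simplified sketch of that flow (illustrative only; the real logic lives in the handleCacheStmt changes to StmtExecutor discussed below, and the surrounding variables are assumed from that class):

   ```java
   // Sketch: consult the cache before executing the query.
   // Method names (getCacheData, getHitRange, updateCache, sendChannel) are the
   // ones added in this PR; context/parsedStmt/planner/channel come from StmtExecutor.
   CacheAnalyzer cacheAnalyzer = new CacheAnalyzer(context, parsedStmt, planner);
   CacheBeProxy.FetchCacheResult cacheResult = cacheAnalyzer.getCacheData();
   if (cacheResult != null && cacheAnalyzer.getHitRange() == Cache.HitRange.Full) {
       // Full hit: answer the query entirely from the cache.
       sendChannel(channel, cacheResult.getValueList(), true);
   } else {
       // Miss or partial hit: run the (possibly rewritten) query through the
       // Coordinator, stream rows to the client, then write the fresh result
       // back so the next identical query can hit the cache.
       cacheAnalyzer.updateCache();
   }
   ```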
   
   ## Proposed changes
   
   Describe the big picture of your changes here to communicate to the maintainers why we should accept this pull request. If it fixes a bug or resolves a feature request, be sure to link to that issue.
   
   ## Types of changes
   
   What types of changes does your code introduce to Doris?
   _Put an `x` in the boxes that apply_
   
   - [ ] Bugfix (non-breaking change which fixes an issue)
   - [x] New feature (non-breaking change which adds functionality)
   - [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
   - [ ] Documentation Update (if none of the other choices apply)
   - [ ] Code refactor (Modify the code structure, format the code, etc...)
   
   ## Checklist
   
   _Put an `x` in the boxes that apply. You can also fill these out after creating the PR. If you're unsure about any of them, don't hesitate to ask. We're here to help! This is simply a reminder of what we are going to look for before merging your code._
   
   - [x] I have created an issue (Fix #ISSUE) and have described the bug/feature there in detail
   - [x] Compiling and unit tests pass locally with my changes
   - [x] I have added tests that prove my fix is effective or that my feature works
   - [x] If this change needs a document change, I have updated the document
   - [x] Any dependent changes have been merged
   
   ## Further comments
   
   If this is a relatively large or complex change, kick off the discussion at dev@doris.apache.org by explaining why you chose the solution you did and what alternatives you considered, etc...
   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@doris.apache.org
For additional commands, e-mail: commits-help@doris.apache.org


[GitHub] [incubator-doris] marising commented on a change in pull request #4330: [Feature][Cache] Sql cache and partition cache #2581

Posted by GitBox <gi...@apache.org>.
marising commented on a change in pull request #4330:
URL: https://github.com/apache/incubator-doris/pull/4330#discussion_r472922414



##########
File path: fe/fe-core/src/main/java/org/apache/doris/qe/cache/CacheAnalyzer.java
##########
@@ -0,0 +1,450 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package org.apache.doris.qe.cache;
+
+import org.apache.doris.analysis.AggregateInfo;
+import org.apache.doris.analysis.BinaryPredicate;
+import org.apache.doris.analysis.CastExpr;
+import org.apache.doris.analysis.CompoundPredicate;
+import org.apache.doris.analysis.Expr;
+import org.apache.doris.analysis.InlineViewRef;
+import org.apache.doris.analysis.QueryStmt;
+import org.apache.doris.analysis.SelectStmt;
+import org.apache.doris.analysis.SlotRef;
+import org.apache.doris.analysis.StatementBase;
+import org.apache.doris.analysis.TableRef;
+import org.apache.doris.catalog.OlapTable;
+import org.apache.doris.catalog.RangePartitionInfo;
+import org.apache.doris.catalog.PartitionType;
+import org.apache.doris.catalog.Partition;
+import org.apache.doris.catalog.Column;
+import org.apache.doris.common.util.DebugUtil;
+import org.apache.doris.metric.MetricRepo;
+import org.apache.doris.planner.OlapScanNode;
+import org.apache.doris.planner.Planner;
+import org.apache.doris.planner.ScanNode;
+import org.apache.doris.qe.ConnectContext;
+import org.apache.doris.qe.RowBatch;
+import org.apache.doris.common.Config;
+import org.apache.doris.common.Status;
+
+import com.google.common.collect.Lists;
+import org.apache.doris.thrift.TUniqueId;
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.List;
+
+/**
+ * Analyze which caching mode a SQL is suitable for
+ * 1. T + 1 update is suitable for SQL mode
+ * 2. Partition by date, update the data of the day in near real time, which is suitable for Partition mode
+ */
+public class CacheAnalyzer {
+    private static final Logger LOG = LogManager.getLogger(CacheAnalyzer.class);
+
+    /**
+     * NoNeed : disable config or variable, not query, not scan table etc.
+     */
+    public enum CacheMode {
+        NoNeed,
+        None,
+        TTL,
+        Sql,
+        Partition
+    }
+
+    private ConnectContext context;
+    private boolean enableSqlCache = false;
+    private boolean enablePartitionCache = false;
+    private TUniqueId queryId;
+    private CacheMode cacheMode;
+    private CacheTable latestTable;
+    private StatementBase parsedStmt;
+    private SelectStmt selectStmt;
+    private List<ScanNode> scanNodes;
+    private OlapTable olapTable;
+    private RangePartitionInfo partitionInfo;
+    private Column partColumn;
+    private CompoundPredicate partitionPredicate;
+    private Cache cache;
+
+    public Cache getCache() {
+        return cache;
+    }
+
+    public CacheAnalyzer(ConnectContext context, StatementBase parsedStmt, Planner planner) {
+        this.context = context;
+        this.queryId = context.queryId();
+        this.parsedStmt = parsedStmt;
+        scanNodes = planner.getScanNodes();
+        latestTable = new CacheTable();
+        checkCacheConfig();
+    }
+
+    //for unit test
+    public CacheAnalyzer(ConnectContext context, StatementBase parsedStmt, List<ScanNode> scanNodes) {
+        this.context = context;
+        this.parsedStmt = parsedStmt;
+        this.scanNodes = scanNodes;
+        checkCacheConfig();
+    }
+
+    private void checkCacheConfig() {
+        if (Config.cache_enable_sql_mode) {
+            if (context.getSessionVariable().isEnableSqlCache()) {
+                enableSqlCache = true;
+            }
+        }
+        if (Config.cache_enable_partition_mode) {
+            if (context.getSessionVariable().isEnablePartitionCache()) {
+                enablePartitionCache = true;
+            }
+        }
+    }
+
+    public CacheMode getCacheMode() {
+        return cacheMode;
+    }
+
+    public class CacheTable implements Comparable<CacheTable> {
+        public OlapTable olapTable;
+        public long latestId;
+        public long latestVersion;
+        public long latestTime;
+
+        public CacheTable() {
+            olapTable = null;
+            latestId = 0;
+            latestVersion = 0;
+            latestTime = 0;
+        }
+
+        @Override
+        public int compareTo(CacheTable table) {
+            return (int) (table.latestTime - this.latestTime);
+        }
+
+        public void Debug() {
+            LOG.info("table {}, partition id {}, ver {}, time {}", olapTable.getName(), latestId, latestVersion, latestTime);
+        }
+    }
+
+    public boolean enableCache() {
+        return enableSqlCache || enablePartitionCache;
+    }
+
+    public boolean enableSqlCache() {
+        return enableSqlCache;
+    }
+
+    public boolean enablePartitionCache() {
+        return enablePartitionCache;
+    }
+
+    /**
+     * Check cache mode with SQL and table
+     * 1、Only Olap table
+     * 2、The update time of the table is before Config.last_version_interval_time
+     * 3、PartitionType is PartitionType.RANGE, and partition key has only one column
+     * 4、Partition key must be included in the group by clause
+     * 5、Where clause must contain only one partition key predicate
+     * CacheMode.Sql
+     * xxx FROM user_profile, updated before Config.last_version_interval_time
+     * CacheMode.Partition, partition by event_date, only the partition of today will be updated.
+     * SELECT xxx FROM app_event WHERE event_date >= 20191201 AND event_date <= 20191207 GROUP BY event_date
+     * SELECT xxx FROM app_event INNER JOIN user_Profile ON app_event.user_id = user_profile.user_id xxx
+     * SELECT xxx FROM app_event INNER JOIN user_profile ON xxx INNER JOIN site_channel ON xxx
+     */
+    public void checkCacheMode(long now) {
+        cacheMode = innerCheckCacheMode(now);
+    }
+
+    private CacheMode innerCheckCacheMode(long now) {
+        if (!enableCache()) {
+            return CacheMode.NoNeed;
+        }
+        if (!(parsedStmt instanceof SelectStmt) || scanNodes.size() == 0) {
+            return CacheMode.NoNeed;
+        }
+        MetricRepo.COUNTER_QUERY_TABLE.increase(1L);
+
+        this.selectStmt = (SelectStmt) parsedStmt;
+        //Check the last version time of the table
+        List<CacheTable> tblTimeList = Lists.newArrayList();
+        for (int i = 0; i < scanNodes.size(); i++) {
+            ScanNode node = scanNodes.get(i);
+            if (!(node instanceof OlapScanNode)) {
+                return CacheMode.None;
+            }
+            OlapScanNode oNode = (OlapScanNode) node;
+            OlapTable oTable = oNode.getOlapTable();
+            CacheTable cTable = getLastUpdateTime(oTable);
+            tblTimeList.add(cTable);
+        }
+        MetricRepo.COUNTER_QUERY_OLAP_TABLE.increase(1L);
+        Collections.sort(tblTimeList);
+        latestTable = tblTimeList.get(0);
+        latestTable.Debug();
+
+        if (now == 0) {
+            now = nowtime();
+        }
+        if (enableSqlCache() &&
+                (now - latestTable.latestTime) >= Config.cache_last_version_interval_second * 1000) {
+            LOG.info("TIME:{},{},{}", now, latestTable.latestTime, Config.cache_last_version_interval_second*1000);
+            cache = new SqlCache(this.queryId, this.selectStmt);
+            ((SqlCache) cache).setCacheInfo(this.latestTable);
+            MetricRepo.COUNTER_CACHE_MODE_SQL.increase(1L);
+            return CacheMode.Sql;
+        }
+
+        if (!enablePartitionCache()) {
+            return CacheMode.None;
+        }
+
+        //Check if selectStmt matches partition key
+        //Only one table can be updated in Config.cache_last_version_interval_second range
+        for (int i = 1; i < tblTimeList.size(); i++) {
+            if ((now - tblTimeList.get(i).latestTime) < Config.cache_last_version_interval_second * 1000) {
+                LOG.info("the time of other tables is newer than {}", Config.cache_last_version_interval_second);
+                return CacheMode.None;
+            }
+        }
+        olapTable = latestTable.olapTable;
+        if (olapTable.getPartitionInfo().getType() != PartitionType.RANGE) {
+            LOG.info("the partition of OlapTable not RANGE type");
+            return CacheMode.None;
+        }
+        partitionInfo = (RangePartitionInfo) olapTable.getPartitionInfo();
+        List<Column> columns = partitionInfo.getPartitionColumns();
+        //Partition key has only one column
+        if (columns.size() != 1) {
+            LOG.info("the size of columns for partition key is {}", columns.size());
+            return CacheMode.None;
+        }
+        partColumn = columns.get(0);
+        //Check if group expr contain partition column
+        if (!checkGroupByPartitionKey(this.selectStmt, partColumn)) {
+            LOG.info("not group by partition key, key {}", partColumn.getName());
+            return CacheMode.None;
+        }
+        //Check if whereClause have one CompoundPredicate of partition column
+        List<CompoundPredicate> compoundPredicates = Lists.newArrayList();
+        getPartitionKeyFromSelectStmt(this.selectStmt, partColumn, compoundPredicates);
+        if (compoundPredicates.size() != 1) {
+            LOG.info("the predicate size include partition key has {}", compoundPredicates.size());
+            return CacheMode.None;
+        }
+        partitionPredicate = compoundPredicates.get(0);
+        cache = new PartitionCache(this.queryId, this.selectStmt);
+        ((PartitionCache) cache).setCacheInfo(this.latestTable, this.partitionInfo, this.partColumn,
+                this.partitionPredicate);
+        MetricRepo.COUNTER_CACHE_MODE_PARTITION.increase(1L);
+        return CacheMode.Partition;
+    }
+
+    public CacheBeProxy.FetchCacheResult getCacheData() {
+        CacheProxy.FetchCacheResult cacheResult = null;
+        cacheMode = innerCheckCacheMode(0);
+        if (cacheMode == CacheMode.NoNeed) {
+            return cacheResult;
+        }
+        if (cacheMode == CacheMode.None) {
+            LOG.info("check cache mode {}, queryid {}", cacheMode, DebugUtil.printId(queryId));
+            return cacheResult;
+        }
+        Status status = new Status();
+        cacheResult = cache.getCacheData(status);
+
+        if (status.ok() && cacheResult != null) {
+            LOG.info("hit cache, mode {}, queryid {}, all count {}, value count {}, row count {}, data size {}",
+                    cacheMode, DebugUtil.printId(queryId),
+                    cacheResult.all_count, cacheResult.value_count,
+                    cacheResult.row_count, cacheResult.data_size);
+        } else {
+            LOG.info("miss cache, mode {}, queryid {}, code {}, msg {}", cacheMode,
+                    DebugUtil.printId(queryId), status.getErrorCode(), status.getErrorMsg());
+            cacheResult = null;
+        }
+        return cacheResult;
+    }
+
+    public long nowtime() {
+        return System.currentTimeMillis();
+    }
+
+    private void getPartitionKeyFromSelectStmt(SelectStmt stmt, Column partColumn,
+                                               List<CompoundPredicate> compoundPredicates) {
+        getPartitionKeyFromWhereClause(stmt.getWhereClause(), partColumn, compoundPredicates);
+        List<TableRef> tableRefs = stmt.getTableRefs();
+        for (TableRef tblRef : tableRefs) {
+            if (tblRef instanceof InlineViewRef) {
+                InlineViewRef viewRef = (InlineViewRef) tblRef;
+                QueryStmt queryStmt = viewRef.getViewStmt();
+                if (queryStmt instanceof SelectStmt) {
+                    getPartitionKeyFromSelectStmt((SelectStmt) queryStmt, partColumn, compoundPredicates);
+                }
+            }
+        }
+    }
+
+    /**
+     * Only support case 1
+     * 1.key >= a and key <= b
+     * 2.key = a or key = b

Review comment:
       The other cases are more complicated; I think it is simpler to support only this one in the first version.
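
       For reference, a minimal sketch of the predicate shape that case 1 accepts (illustrative only, assuming the usual Expr/CompoundPredicate accessors; this is not the PR's actual checking code):

    ```java
        // Accept only `key >= a AND key <= b` style predicates on the partition key;
        // `key = a OR key = b` (case 2) and nested predicates are rejected for now.
        private static boolean isSimpleRangePredicate(CompoundPredicate cp) {
            if (cp.getOp() != CompoundPredicate.Operator.AND) {
                return false;
            }
            if (cp.getChildren().size() != 2) {
                return false;
            }
            for (Expr child : cp.getChildren()) {
                // each conjunct must be a plain binary comparison, e.g. `key >= 20191201`
                if (!(child instanceof BinaryPredicate)) {
                    return false;
                }
            }
            return true;
        }
    ```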





[GitHub] [incubator-doris] kangkaisen commented on a change in pull request #4330: [Feature][Cache] Sql cache and partition cache #2581

Posted by GitBox <gi...@apache.org>.
kangkaisen commented on a change in pull request #4330:
URL: https://github.com/apache/incubator-doris/pull/4330#discussion_r473564840



##########
File path: fe/fe-core/src/main/java/org/apache/doris/qe/StmtExecutor.java
##########
@@ -575,6 +583,78 @@ private void handleSetStmt() {
         context.getState().setOk();
     }
 
+    private void sendChannel(MysqlChannel channel, List<CacheProxy.CacheValue> cacheValues, boolean hitAll)

Review comment:
       I think `sendChannel` shouldn't know about the query cache hit logic; it should only know how to send data and when to finish.
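
       For illustration, one way to split it along those lines (a hypothetical refactor of the diff above, not code from this PR):

    ```java
        // Hypothetical: only stream the cached rows; the caller decides when the
        // result set is complete and sets EOF / audit statistics itself.
        private void sendCachedRows(MysqlChannel channel, List<CacheProxy.CacheValue> cacheValues)
                throws Exception {
            for (CacheProxy.CacheValue value : cacheValues) {
                RowBatch batch = value.getRowBatch();
                for (ByteBuffer row : batch.getBatch().getRows()) {
                    channel.sendOnePacket(row);
                }
                context.updateReturnRows(batch.getBatch().getRows().size());
            }
        }
    ```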





[GitHub] [incubator-doris] marising commented on a change in pull request #4330: [Feature][Cache] Sql cache and partition cache #2581

Posted by GitBox <gi...@apache.org>.
marising commented on a change in pull request #4330:
URL: https://github.com/apache/incubator-doris/pull/4330#discussion_r472609245



##########
File path: fe/fe-core/src/main/java/org/apache/doris/qe/StmtExecutor.java
##########
@@ -575,6 +583,78 @@ private void handleSetStmt() {
         context.getState().setOk();
     }
 
+    private void sendChannel(MysqlChannel channel, List<CacheProxy.CacheValue> cacheValues, boolean hitAll)
+            throws Exception {
+        RowBatch batch = null;
+        for (CacheBeProxy.CacheValue value : cacheValues) {
+            batch = value.getRowBatch();
+            for (ByteBuffer row : batch.getBatch().getRows()) {
+                channel.sendOnePacket(row);
+            }
+            context.updateReturnRows(batch.getBatch().getRows().size());
+        }
+        if (hitAll) {
+            if (batch != null) {
+                statisticsForAuditLog = batch.getQueryStatistics();
+            }
+            context.getState().setEof();
+            return;
+        }
+    }
+
+    private boolean handleCacheStmt(CacheAnalyzer cacheAnalyzer,MysqlChannel channel) throws Exception {
+        RowBatch batch = null;
+        CacheBeProxy.FetchCacheResult cacheResult = cacheAnalyzer.getCacheData();
+        CacheMode mode = cacheAnalyzer.getCacheMode();
+        if (cacheResult != null) {
+            isCached = true;
+            if (cacheAnalyzer.getHitRange() == Cache.HitRange.Full) {
+                sendChannel(channel, cacheResult.getValueList(), true);
+                return true;
+            }
+            //rewrite sql
+            if (mode == CacheMode.Partition) {
+                if (cacheAnalyzer.getHitRange() == Cache.HitRange.Left) {
+                    sendChannel(channel, cacheResult.getValueList(), false);
+                }
+                SelectStmt newSelectStmt = cacheAnalyzer.getRewriteStmt();
+                newSelectStmt.reset();
+                analyzer = new Analyzer(context.getCatalog(), context);
+                newSelectStmt.analyze(analyzer);
+                planner = new Planner();
+                planner.plan(newSelectStmt, analyzer, context.getSessionVariable().toThrift());
+            }
+        }
+
+        coord = new Coordinator(context, analyzer, planner);
+        QeProcessorImpl.INSTANCE.registerQuery(context.queryId(),
+                new QeProcessorImpl.QueryInfo(context, originStmt.originStmt, coord));
+        coord.exec();
+
+        while (true) {
+            batch = coord.getNext();
+            if (batch.getBatch() != null) {
+                cacheAnalyzer.copyRowBatch(batch);
+                for (ByteBuffer row : batch.getBatch().getRows()) {
+                    channel.sendOnePacket(row);
+                }
+                context.updateReturnRows(batch.getBatch().getRows().size());
+            }
+            if (batch.isEos()) {
+                break;
+            }
+        }
+        
+        if (cacheResult != null && cacheAnalyzer.getHitRange() == Cache.HitRange.Right) {
+            sendChannel(channel, cacheResult.getValueList(), false);
+        }
+
+        cacheAnalyzer.updateCache();

Review comment:
       The updateCache method determines whether the background Cache needs to be updated
   ```
       public void updateCache() {
           if (cacheMode == CacheMode.None || cacheMode == CacheMode.NoNeed) {
               return;
           }
           cache.updateCache();
       }
   ```





[GitHub] [incubator-doris] kangkaisen merged pull request #4330: [Feature][Cache] Sql cache and partition cache #2581

Posted by GitBox <gi...@apache.org>.
kangkaisen merged pull request #4330:
URL: https://github.com/apache/incubator-doris/pull/4330


   



[GitHub] [incubator-doris] kangkaisen commented on a change in pull request #4330: [Feature][Cache] Sql cache and partition cache #2581

Posted by GitBox <gi...@apache.org>.
kangkaisen commented on a change in pull request #4330:
URL: https://github.com/apache/incubator-doris/pull/4330#discussion_r474720148



##########
File path: fe/fe-core/src/main/java/org/apache/doris/qe/cache/PartitionCache.java
##########
@@ -0,0 +1,215 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package org.apache.doris.qe.cache;
+
+import com.google.common.collect.Lists;
+import org.apache.doris.analysis.CompoundPredicate;
+import org.apache.doris.analysis.Expr;
+import org.apache.doris.analysis.InlineViewRef;
+import org.apache.doris.analysis.QueryStmt;
+import org.apache.doris.analysis.SelectStmt;
+import org.apache.doris.analysis.TableRef;
+import org.apache.doris.catalog.Column;
+import org.apache.doris.catalog.OlapTable;
+import org.apache.doris.catalog.RangePartitionInfo;
+import org.apache.doris.common.Status;
+import org.apache.doris.common.util.DebugUtil;
+import org.apache.doris.metric.MetricRepo;
+import org.apache.doris.qe.RowBatch;
+import org.apache.doris.thrift.TUniqueId;
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+
+import java.util.List;
+
+public class PartitionCache extends Cache {
+    private static final Logger LOG = LogManager.getLogger(PartitionCache.class);
+    private SelectStmt nokeyStmt;
+    private SelectStmt rewriteStmt;
+    private CompoundPredicate partitionPredicate;
+    private OlapTable olapTable;
+    private RangePartitionInfo partitionInfo;
+    private Column partColumn;
+
+    private PartitionRange range;
+    private List<PartitionRange.PartitionSingle> newRangeList;
+
+    public SelectStmt getRewriteStmt() {
+        return rewriteStmt;
+    }
+
+    public SelectStmt getNokeyStmt() {
+        return nokeyStmt;
+    }
+
+    public PartitionCache(TUniqueId queryId, SelectStmt selectStmt) {
+        super(queryId, selectStmt);
+    }
+
+    public void setCacheInfo(CacheAnalyzer.CacheTable latestTable, RangePartitionInfo partitionInfo, Column partColumn,
+                             CompoundPredicate partitionPredicate) {
+        this.latestTable = latestTable;
+        this.olapTable = latestTable.olapTable;
+        this.partitionInfo = partitionInfo;
+        this.partColumn = partColumn;
+        this.partitionPredicate = partitionPredicate;
+        this.newRangeList = Lists.newArrayList();
+    }
+
+    public CacheProxy.FetchCacheResult getCacheData(Status status) {
+        CacheProxy.FetchCacheRequest request;
+        rewriteSelectStmt(null);
+        request = new CacheBeProxy.FetchCacheRequest(nokeyStmt.toSql());
+        range = new PartitionRange(this.partitionPredicate, this.olapTable,
+                this.partitionInfo);
+        if (!range.analytics()) {
+            status.setStatus("analytics range error");
+            return null;
+        }
+
+        for (PartitionRange.PartitionSingle single : range.getPartitionSingleList()) {
+            request.addParam(single.getCacheKey().realValue(),
+                    single.getPartition().getVisibleVersion(),
+                    single.getPartition().getVisibleVersionTime()
+            );
+        }
+
+        CacheProxy.FetchCacheResult cacheResult = proxy.fetchCache(request, 10000, status);
+        if (status.ok() && cacheResult != null) {
+            cacheResult.all_count = range.getPartitionSingleList().size();
+            for (CacheBeProxy.CacheValue value : cacheResult.getValueList()) {
+                range.setCacheFlag(value.param.partition_key);
+            }
+            MetricRepo.COUNTER_CACHE_HIT_PARTITION.increase(1L);
+            MetricRepo.COUNTER_CACHE_PARTITION_ALL.increase((long) range.getPartitionSingleList().size());
+            MetricRepo.COUNTER_CACHE_PARTITION_HIT.increase((long) cacheResult.getValueList().size());
+        }
+
+        range.setTooNewByID(latestTable.latestPartitionId);
+        //build rewrite sql
+        this.hitRange = range.buildDiskPartitionRange(newRangeList);
+        if (newRangeList != null && newRangeList.size() > 0) {
+            rewriteSelectStmt(newRangeList);
+        }
+        return cacheResult;
+    }
+
+    public void copyRowBatch(RowBatch rowBatch) {
+        if (rowBatchBuilder == null) {
+            rowBatchBuilder = new RowBatchBuilder(CacheAnalyzer.CacheMode.Partition);
+            rowBatchBuilder.buildPartitionIndex(selectStmt.getResultExprs(), selectStmt.getColLabels(),
+                    partColumn, range.buildUpdatePartitionRange());
+        }
+        rowBatchBuilder.copyRowData(rowBatch);
+    }
+
+    public void updateCache() {
+        if (!super.checkRowLimit()) {
+            return;
+        }
+
+        CacheBeProxy.UpdateCacheRequest updateRequest = rowBatchBuilder.buildPartitionUpdateRequest(nokeyStmt.toSql());
+        if (updateRequest.value_count > 0) {
+            CacheBeProxy proxy = new CacheBeProxy();
+            Status status = new Status();
+            proxy.updateCache(updateRequest, CacheProxy.UPDATE_TIMEOUT, status);
+            LOG.info("update cache model {}, queryid {}, sqlkey {}, value count {}, row count {}, data size {}",
+                    CacheAnalyzer.CacheMode.Partition, DebugUtil.printId(queryId),
+                    DebugUtil.printId(updateRequest.sql_key),
+                    updateRequest.value_count, updateRequest.row_count, updateRequest.data_size);
+        }
+    }
+
+    /**
+     * Set the predicate containing partition key to null
+     */
+    public void rewriteSelectStmt(List<PartitionRange.PartitionSingle> newRangeList) {
+        if (newRangeList == null || newRangeList.size() == 0) {
+            this.nokeyStmt = (SelectStmt) this.selectStmt.clone();
+            rewriteSelectStmt(nokeyStmt, this.partitionPredicate, null);
+        } else {
+            this.rewriteStmt = (SelectStmt) this.selectStmt.clone();
+            rewriteSelectStmt(rewriteStmt, this.partitionPredicate, newRangeList);
+        }
+    }
+
+    private void rewriteSelectStmt(SelectStmt newStmt, CompoundPredicate predicate,
+                                   List<PartitionRange.PartitionSingle> newRangeList) {
+        newStmt.setWhereClause(
+                rewriteWhereClause(newStmt.getWhereClause(), predicate, newRangeList)
+        );
+        List<TableRef> tableRefs = newStmt.getTableRefs();
+        for (TableRef tblRef : tableRefs) {
+            if (tblRef instanceof InlineViewRef) {
+                InlineViewRef viewRef = (InlineViewRef) tblRef;
+                QueryStmt queryStmt = viewRef.getViewStmt();
+                if (queryStmt instanceof SelectStmt) {
+                    rewriteSelectStmt((SelectStmt) queryStmt, predicate, newRangeList);
+                }
+            }
+        }
+    }
+
+    /**
+     * P1 And P2 And P3 And P4

Review comment:
       Please add a comment describing which Expr format is rewritten into which other Expr format.
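
       For example, such a comment might read (wording illustrative, based on the rewrite behavior of rewriteSelectStmt above):

    ```java
        /**
         * Rewrite the where clause around the partition-key conjunct, e.g. for
         *   P1 AND (event_date >= 20191201 AND event_date <= 20191207) AND P3
         * - nokey form:   the partition-key conjunct is cleared, leaving `P1 AND P3`
         *                 (used to build the cache key)
         * - rewrite form: the conjunct is narrowed to the partitions that missed the
         *                 cache, e.g. `P1 AND (event_date >= 20191203 AND event_date <= 20191207) AND P3`
         */
    ```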

##########
File path: fe/fe-core/src/main/java/org/apache/doris/qe/cache/CacheAnalyzer.java
##########
@@ -0,0 +1,451 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package org.apache.doris.qe.cache;
+
+import org.apache.doris.analysis.AggregateInfo;
+import org.apache.doris.analysis.BinaryPredicate;
+import org.apache.doris.analysis.CastExpr;
+import org.apache.doris.analysis.CompoundPredicate;
+import org.apache.doris.analysis.Expr;
+import org.apache.doris.analysis.InlineViewRef;
+import org.apache.doris.analysis.QueryStmt;
+import org.apache.doris.analysis.SelectStmt;
+import org.apache.doris.analysis.SlotRef;
+import org.apache.doris.analysis.StatementBase;
+import org.apache.doris.analysis.TableRef;
+import org.apache.doris.catalog.OlapTable;
+import org.apache.doris.catalog.RangePartitionInfo;
+import org.apache.doris.catalog.PartitionType;
+import org.apache.doris.catalog.Partition;
+import org.apache.doris.catalog.Column;
+import org.apache.doris.common.util.DebugUtil;
+import org.apache.doris.metric.MetricRepo;
+import org.apache.doris.planner.OlapScanNode;
+import org.apache.doris.planner.Planner;
+import org.apache.doris.planner.ScanNode;
+import org.apache.doris.qe.ConnectContext;
+import org.apache.doris.qe.RowBatch;
+import org.apache.doris.common.Config;
+import org.apache.doris.common.Status;
+
+import com.google.common.collect.Lists;
+import org.apache.doris.thrift.TUniqueId;
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.List;
+
+/**
+ * Analyze which caching mode a SQL is suitable for
+ * 1. T + 1 update is suitable for SQL mode
+ * 2. Partition by date, update the data of the day in near real time, which is suitable for Partition mode
+ */
+public class CacheAnalyzer {
+    private static final Logger LOG = LogManager.getLogger(CacheAnalyzer.class);
+
+    /**
+     * NoNeed : disable config or variable, not query, not scan table etc.
+     */
+    public enum CacheMode {
+        NoNeed,
+        None,
+        TTL,
+        Sql,
+        Partition
+    }
+
+    private ConnectContext context;
+    private boolean enableSqlCache = false;
+    private boolean enablePartitionCache = false;
+    private TUniqueId queryId;
+    private CacheMode cacheMode;
+    private CacheTable latestTable;
+    private StatementBase parsedStmt;
+    private SelectStmt selectStmt;
+    private List<ScanNode> scanNodes;
+    private OlapTable olapTable;
+    private RangePartitionInfo partitionInfo;
+    private Column partColumn;
+    private CompoundPredicate partitionPredicate;
+    private Cache cache;
+
+    public Cache getCache() {
+        return cache;
+    }
+
+    public CacheAnalyzer(ConnectContext context, StatementBase parsedStmt, Planner planner) {
+        this.context = context;
+        this.queryId = context.queryId();
+        this.parsedStmt = parsedStmt;
+        scanNodes = planner.getScanNodes();
+        latestTable = new CacheTable();
+        checkCacheConfig();
+    }
+
+    //for unit test
+    public CacheAnalyzer(ConnectContext context, StatementBase parsedStmt, List<ScanNode> scanNodes) {
+        this.context = context;
+        this.parsedStmt = parsedStmt;
+        this.scanNodes = scanNodes;
+        checkCacheConfig();
+    }
+
+    private void checkCacheConfig() {
+        if (Config.cache_enable_sql_mode) {
+            if (context.getSessionVariable().isEnableSqlCache()) {
+                enableSqlCache = true;
+            }
+        }
+        if (Config.cache_enable_partition_mode) {
+            if (context.getSessionVariable().isEnablePartitionCache()) {
+                enablePartitionCache = true;
+            }
+        }
+    }
+
+    public CacheMode getCacheMode() {
+        return cacheMode;
+    }
+
+    public class CacheTable implements Comparable<CacheTable> {
+        public OlapTable olapTable;
+        public long latestPartitionId;
+        public long latestVersion;
+        public long latestTime;
+
+        public CacheTable() {
+            olapTable = null;
+            latestPartitionId = 0;
+            latestVersion = 0;
+            latestTime = 0;
+        }
+
+        @Override
+        public int compareTo(CacheTable table) {
+            return (int) (table.latestTime - this.latestTime);
+        }
+
+        public void Debug() {
+            LOG.info("table {}, partition id {}, ver {}, time {}", olapTable.getName(), latestPartitionId, latestVersion, latestTime);
+        }
+    }
+
+    public boolean enableCache() {
+        return enableSqlCache || enablePartitionCache;
+    }
+
+    public boolean enableSqlCache() {
+        return enableSqlCache;
+    }
+
+    public boolean enablePartitionCache() {
+        return enablePartitionCache;
+    }
+
+    /**
+     * Check cache mode with SQL and table
+     * 1、Only Olap table
+     * 2、The update time of the table is before Config.last_version_interval_time
+     * 3、PartitionType is PartitionType.RANGE, and partition key has only one column
+     * 4、Partition key must be included in the group by clause
+     * 5、Where clause must contain only one partition key predicate
+     * CacheMode.Sql
+     * xxx FROM user_profile, updated before Config.last_version_interval_time
+     * CacheMode.Partition, partition by event_date, only the partition of today will be updated.
+     * SELECT xxx FROM app_event WHERE event_date >= 20191201 AND event_date <= 20191207 GROUP BY event_date
+     * SELECT xxx FROM app_event INNER JOIN user_Profile ON app_event.user_id = user_profile.user_id xxx
+     * SELECT xxx FROM app_event INNER JOIN user_profile ON xxx INNER JOIN site_channel ON xxx
+     */
+    public void checkCacheMode(long now) {
+        cacheMode = innerCheckCacheMode(now);
+    }
+
+    private CacheMode innerCheckCacheMode(long now) {
+        if (!enableCache()) {
+            return CacheMode.NoNeed;
+        }
+        if (!(parsedStmt instanceof SelectStmt) || scanNodes.size() == 0) {
+            return CacheMode.NoNeed;
+        }
+        MetricRepo.COUNTER_QUERY_TABLE.increase(1L);
+
+        this.selectStmt = (SelectStmt) parsedStmt;
+        //Check the last version time of the table
+        List<CacheTable> tblTimeList = Lists.newArrayList();
+        for (int i = 0; i < scanNodes.size(); i++) {
+            ScanNode node = scanNodes.get(i);
+            if (!(node instanceof OlapScanNode)) {
+                return CacheMode.None;
+            }
+            OlapScanNode oNode = (OlapScanNode) node;
+            OlapTable oTable = oNode.getOlapTable();
+            CacheTable cTable = getLastUpdateTime(oTable);
+            tblTimeList.add(cTable);
+        }
+        MetricRepo.COUNTER_QUERY_OLAP_TABLE.increase(1L);
+        Collections.sort(tblTimeList);
+        latestTable = tblTimeList.get(0);
+        latestTable.Debug();
+
+        if (now == 0) {
+            now = nowtime();
+        }
+        if (enableSqlCache() &&
+                (now - latestTable.latestTime) >= Config.cache_last_version_interval_second * 1000) {
+            LOG.info("TIME:{},{},{}", now, latestTable.latestTime, Config.cache_last_version_interval_second*1000);
+            cache = new SqlCache(this.queryId, this.selectStmt);
+            ((SqlCache) cache).setCacheInfo(this.latestTable);
+            MetricRepo.COUNTER_CACHE_MODE_SQL.increase(1L);
+            return CacheMode.Sql;
+        }
+
+        if (!enablePartitionCache()) {
+            return CacheMode.None;
+        }
+
+        //Check if selectStmt matches partition key
+        //Only one table can be updated in Config.cache_last_version_interval_second range
+        for (int i = 1; i < tblTimeList.size(); i++) {
+            if ((now - tblTimeList.get(i).latestTime) < Config.cache_last_version_interval_second * 1000) {
+                LOG.info("the time of other tables is newer than {}", Config.cache_last_version_interval_second);
+                return CacheMode.None;
+            }
+        }
+        olapTable = latestTable.olapTable;
+        if (olapTable.getPartitionInfo().getType() != PartitionType.RANGE) {
+            LOG.info("the partition of OlapTable not RANGE type");

Review comment:
       There are too many info-level logs. Please change some of them to debug level.
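
       One illustrative way to do that (the standard log4j pattern, not code from this PR):

    ```java
        // Demote per-query diagnostics so they only appear when debug logging is enabled.
        if (LOG.isDebugEnabled()) {
            LOG.debug("the partition of OlapTable is not RANGE type");
        }
    ```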

##########
File path: fe/fe-core/src/main/java/org/apache/doris/qe/cache/RowBatchBuilder.java
##########
@@ -0,0 +1,156 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package org.apache.doris.qe.cache;
+
+import com.google.common.collect.Lists;
+import org.apache.doris.analysis.Expr;
+import org.apache.doris.catalog.Column;
+import org.apache.doris.catalog.Type;
+import org.apache.doris.qe.RowBatch;
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+
+import java.nio.ByteBuffer;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.HashMap;
+import java.util.List;
+
+public class RowBatchBuilder {

Review comment:
       Add a comment for this class.
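
       As an illustration of what such a comment could cover (wording hypothetical, inferred from how RowBatchBuilder is used in PartitionCache above):

    ```java
        /**
         * Buffers the RowBatch results produced while the query executes and repacks
         * them into cache update requests (e.g. buildPartitionUpdateRequest), keyed by
         * the partition value in Partition mode.
         */
    ```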





[GitHub] [incubator-doris] marising commented on a change in pull request #4330: [Feature][Cache] Sql cache and partition cache #2581

Posted by GitBox <gi...@apache.org>.
marising commented on a change in pull request #4330:
URL: https://github.com/apache/incubator-doris/pull/4330#discussion_r472611199



##########
File path: fe/fe-core/src/main/java/org/apache/doris/qe/cache/PartitionRange.java
##########
@@ -0,0 +1,596 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package org.apache.doris.qe.cache;
+
+import org.apache.doris.analysis.CompoundPredicate;
+import org.apache.doris.analysis.BinaryPredicate;
+import org.apache.doris.analysis.DateLiteral;
+import org.apache.doris.analysis.InPredicate;
+import org.apache.doris.analysis.PartitionValue;
+import org.apache.doris.analysis.Expr;
+import org.apache.doris.analysis.LiteralExpr;
+import org.apache.doris.analysis.IntLiteral;
+import org.apache.doris.catalog.OlapTable;
+import org.apache.doris.catalog.PrimitiveType;
+import org.apache.doris.catalog.RangePartitionInfo;
+import org.apache.doris.catalog.Column;
+import org.apache.doris.catalog.Partition;
+import org.apache.doris.catalog.PartitionKey;
+import org.apache.doris.catalog.Type;
+import org.apache.doris.common.Config;
+import org.apache.doris.planner.PartitionColumnFilter;
+
+import org.apache.doris.common.AnalysisException;
+
+import com.google.common.collect.Lists;
+import com.google.common.collect.Range;
+
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+
+import java.text.SimpleDateFormat;
+import java.util.Date;
+import java.util.List;
+import java.util.Map;
+
+/**
+ * Convert the range of the partition to the list
+ * all partition by day/week/month split to day list
+ */
+public class PartitionRange {
+    private static final Logger LOG = LogManager.getLogger(PartitionRange.class);
+
+    public class PartitionSingle {
+        private Partition partition;
+        private PartitionKey partitionKey;
+        private long partitionId;
+        private PartitionKeyType cacheKey;
+        private boolean fromCache;
+        private boolean tooNew;
+
+        public Partition getPartition() {
+            return partition;
+        }
+
+        public void setPartition(Partition partition) {
+            this.partition = partition;
+        }
+
+        public PartitionKey getPartitionKey() {
+            return partitionKey;
+        }
+
+        public void setPartitionKey(PartitionKey key) {
+            this.partitionKey = key;
+        }
+
+        public long getPartitionId() {
+            return partitionId;
+        }
+
+        public void setPartitionId(long partitionId) {
+            this.partitionId = partitionId;
+        }
+
+        public PartitionKeyType getCacheKey() {
+            return cacheKey;
+        }
+
+        public void setCacheKey(PartitionKeyType cacheKey) {
+            this.cacheKey.clone(cacheKey);
+        }
+
+        public boolean isFromCache() {
+            return fromCache;
+        }
+
+        public void setFromCache(boolean fromCache) {
+            this.fromCache = fromCache;
+        }
+
+        public boolean isTooNew() {
+            return tooNew;
+        }
+
+        public void setTooNew(boolean tooNew) {
+            this.tooNew = tooNew;
+        }
+
+        public PartitionSingle() {
+            this.partitionId = 0;
+            this.cacheKey = new PartitionKeyType();
+            this.fromCache = false;
+            this.tooNew = false;
+        }
+
+        public void Debug() {
+            if (partition != null) {
+                LOG.info("partition id {}, cacheKey {}, version {}, time {}, fromCache {}, tooNew {} ",
+                        partitionId, cacheKey.realValue(),
+                        partition.getVisibleVersion(), partition.getVisibleVersionTime(),
+                        fromCache, tooNew);
+            } else {
+                LOG.info("partition id {}, cacheKey {}, fromCache {}, tooNew {} ", partitionId,
+                        cacheKey.realValue(), fromCache, tooNew);
+            }
+        }
+    }
+
+    public enum KeyType {
+        DEFAULT,
+        LONG,
+        DATE,
+        DATETIME,
+        TIME
+    }
+
+    public static class PartitionKeyType {
+        private SimpleDateFormat df8 = new SimpleDateFormat("yyyyMMdd");
+        private SimpleDateFormat df10 = new SimpleDateFormat("yyyy-MM-dd");
+
+        public KeyType keyType = KeyType.DEFAULT;
+        public long value;
+        public Date date;
+
+        public boolean init(Type type, String str) {
+            if (type.getPrimitiveType() == PrimitiveType.DATE) {
+                try {
+                    date = df10.parse(str);
+                } catch (Exception e) {
+                    LOG.warn("parse error str{}.", str);
+                    return false;
+                }
+                keyType = KeyType.DATE;
+            } else {
+                value = Long.valueOf(str);
+                keyType = KeyType.LONG;
+            }
+            return true;
+        }
+
+        public boolean init(Type type, LiteralExpr expr) {
+            switch (type.getPrimitiveType()) {
+                case BOOLEAN:
+                case TIME:
+                case DATETIME:
+                case FLOAT:
+                case DOUBLE:
+                case DECIMAL:
+                case DECIMALV2:
+                case CHAR:
+                case VARCHAR:
+                case LARGEINT:
+                    LOG.info("PartitionCache not support such key type {}", type.toSql());
+                    return false;
+                case DATE:
+                    date = getDateValue(expr);
+                    keyType = KeyType.DATE;
+                    break;
+                case TINYINT:
+                case SMALLINT:
+                case INT:
+                case BIGINT:
+                    value = expr.getLongValue();
+                    keyType = KeyType.LONG;
+                    break;
+            }
+            return true;
+        }
+
+        public void clone(PartitionKeyType key) {
+            keyType = key.keyType;
+            value = key.value;
+            date = key.date;
+        }
+
+        public boolean equals(PartitionKeyType key) {
+            return realValue() == key.realValue();
+        }
+
+        public void add(int num) {
+            if (keyType == KeyType.DATE) {
+                date = new Date(date.getTime() + num * 3600 * 24 * 1000);
+            } else {
+                value += num;
+            }
+        }
+
+        public String toString() {
+            if (keyType == KeyType.DEFAULT) {
+                return "";
+            } else if (keyType == KeyType.DATE) {
+                return df10.format(date);
+            } else {
+                return String.valueOf(value);
+            }
+        }
+
+        public long realValue() {
+            if (keyType == KeyType.DATE) {
+                return Long.parseLong(df8.format(date));
+            } else {
+                return value;
+            }
+        }
+
+        private Date getDateValue(LiteralExpr expr) {
+            value = expr.getLongValue() / 1000000;
+            Date dt = null;
+            try {
+                dt = df8.parse(String.valueOf(value));
+            } catch (Exception e) {
+            }
+            return dt;
+        }
+    }
+
+    private CompoundPredicate partitionKeyPredicate;
+    private OlapTable olapTable;
+    private RangePartitionInfo rangePartitionInfo;
+    private Column partitionColumn;
+    private List<PartitionSingle> partitionSingleList;
+
+    public CompoundPredicate getPartitionKeyPredicate() {
+        return partitionKeyPredicate;
+    }
+
+    public void setPartitionKeyPredicate(CompoundPredicate partitionKeyPredicate) {
+        this.partitionKeyPredicate = partitionKeyPredicate;
+    }
+
+    public RangePartitionInfo getRangePartitionInfo() {
+        return rangePartitionInfo;
+    }
+
+    public void setRangePartitionInfo(RangePartitionInfo rangePartitionInfo) {
+        this.rangePartitionInfo = rangePartitionInfo;
+    }
+
+    public Column getPartitionColumn() {
+        return partitionColumn;
+    }
+
+    public void setPartitionColumn(Column partitionColumn) {
+        this.partitionColumn = partitionColumn;
+    }
+
+    public List<PartitionSingle> getPartitionSingleList() {
+        return partitionSingleList;
+    }
+
+    public PartitionRange() {
+    }
+
+    public PartitionRange(CompoundPredicate partitionKeyPredicate, OlapTable olapTable,
+                          RangePartitionInfo rangePartitionInfo) {
+        this.partitionKeyPredicate = partitionKeyPredicate;
+        this.olapTable = olapTable;
+        this.rangePartitionInfo = rangePartitionInfo;
+        this.partitionSingleList = Lists.newArrayList();
+    }
+
+    /**
+     * analytics PartitionKey and PartitionInfo
+     *
+     * @return
+     */
+    public boolean analytics() {
+        if (rangePartitionInfo.getPartitionColumns().size() != 1) {
+            return false;
+        }
+        partitionColumn = rangePartitionInfo.getPartitionColumns().get(0);
+        PartitionColumnFilter filter = createPartitionFilter(this.partitionKeyPredicate, partitionColumn);
+        try {
+            if (!buildPartitionKeyRange(filter, partitionColumn)) {
+                return false;
+            }
+            getTablePartitionList(olapTable);
+        } catch (AnalysisException e) {
+            LOG.warn("get partition range failed, because:", e);
+            return false;
+        }
+        return true;
+    }
+
+    public boolean setCacheFlag(long cacheKey) {
+        boolean find = false;
+        for (PartitionSingle single : partitionSingleList) {
+            if (single.getCacheKey().realValue() == cacheKey) {
+                single.setFromCache(true);
+                find = true;
+                break;
+            }
+        }
+        return find;
+    }
+
+    public boolean setTooNewByID(long partitionId) {
+        boolean find = false;
+        for (PartitionSingle single : partitionSingleList) {
+            if (single.getPartition().getId() == partitionId) {
+                single.setTooNew(true);
+                find = true;
+                break;
+            }
+        }
+        return find;
+    }
+
+    public boolean setTooNewByKey(long cacheKey) {
+        boolean find = false;
+        for (PartitionSingle single : partitionSingleList) {
+            if (single.getCacheKey().realValue() == cacheKey) {
+                single.setTooNew(true);
+                find = true;
+                break;
+            }
+        }
+        return find;
+    }
+
+    /**
+     * Support left or right hit cache, not support middle.
+     * 20200113-20200115, not support 20200114
+     */
+    public Cache.HitRange diskPartitionRange(List<PartitionSingle> rangeList) {

Review comment:
       buildDiskPartitionRange();
   





[GitHub] [incubator-doris] marising commented on a change in pull request #4330: [Feature][Cache] Sql cache and partition cache #2581

Posted by GitBox <gi...@apache.org>.
marising commented on a change in pull request #4330:
URL: https://github.com/apache/incubator-doris/pull/4330#discussion_r472277065



##########
File path: fe/fe-core/src/main/java/org/apache/doris/qe/cache/PartitionRange.java
##########
@@ -0,0 +1,596 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package org.apache.doris.qe.cache;
+
+import org.apache.doris.analysis.CompoundPredicate;
+import org.apache.doris.analysis.BinaryPredicate;
+import org.apache.doris.analysis.DateLiteral;
+import org.apache.doris.analysis.InPredicate;
+import org.apache.doris.analysis.PartitionValue;
+import org.apache.doris.analysis.Expr;
+import org.apache.doris.analysis.LiteralExpr;
+import org.apache.doris.analysis.IntLiteral;
+import org.apache.doris.catalog.OlapTable;
+import org.apache.doris.catalog.PrimitiveType;
+import org.apache.doris.catalog.RangePartitionInfo;
+import org.apache.doris.catalog.Column;
+import org.apache.doris.catalog.Partition;
+import org.apache.doris.catalog.PartitionKey;
+import org.apache.doris.catalog.Type;
+import org.apache.doris.common.Config;
+import org.apache.doris.planner.PartitionColumnFilter;
+
+import org.apache.doris.common.AnalysisException;
+
+import com.google.common.collect.Lists;
+import com.google.common.collect.Range;
+
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+
+import java.text.SimpleDateFormat;
+import java.util.Date;
+import java.util.List;
+import java.util.Map;
+
+/**
+ * Convert the range of the partition to the list
+ * all partition by day/week/month split to day list
+ */
+public class PartitionRange {
+    private static final Logger LOG = LogManager.getLogger(PartitionRange.class);
+
+    public class PartitionSingle {
+        private Partition partition;
+        private PartitionKey partitionKey;
+        private long partitionId;
+        private PartitionKeyType cacheKey;
+        private boolean fromCache;
+        private boolean tooNew;
+
+        public Partition getPartition() {
+            return partition;
+        }
+
+        public void setPartition(Partition partition) {
+            this.partition = partition;
+        }
+
+        public PartitionKey getPartitionKey() {
+            return partitionKey;
+        }
+
+        public void setPartitionKey(PartitionKey key) {
+            this.partitionKey = key;
+        }
+
+        public long getPartitionId() {
+            return partitionId;
+        }
+
+        public void setPartitionId(long partitionId) {
+            this.partitionId = partitionId;
+        }
+
+        public PartitionKeyType getCacheKey() {
+            return cacheKey;
+        }
+
+        public void setCacheKey(PartitionKeyType cacheKey) {
+            this.cacheKey.clone(cacheKey);
+        }
+
+        public boolean isFromCache() {
+            return fromCache;
+        }
+
+        public void setFromCache(boolean fromCache) {
+            this.fromCache = fromCache;
+        }
+
+        public boolean isTooNew() {
+            return tooNew;
+        }
+
+        public void setTooNew(boolean tooNew) {
+            this.tooNew = tooNew;
+        }
+
+        public PartitionSingle() {
+            this.partitionId = 0;
+            this.cacheKey = new PartitionKeyType();
+            this.fromCache = false;
+            this.tooNew = false;
+        }
+
+        public void Debug() {
+            if (partition != null) {
+                LOG.info("partition id {}, cacheKey {}, version {}, time {}, fromCache {}, tooNew {} ",
+                        partitionId, cacheKey.realValue(),
+                        partition.getVisibleVersion(), partition.getVisibleVersionTime(),
+                        fromCache, tooNew);
+            } else {
+                LOG.info("partition id {}, cacheKey {}, fromCache {}, tooNew {} ", partitionId,
+                        cacheKey.realValue(), fromCache, tooNew);
+            }
+        }
+    }
+
+    public enum KeyType {
+        DEFAULT,
+        LONG,
+        DATE,
+        DATETIME,
+        TIME
+    }
+
+    public static class PartitionKeyType {
+        private SimpleDateFormat df8 = new SimpleDateFormat("yyyyMMdd");
+        private SimpleDateFormat df10 = new SimpleDateFormat("yyyy-MM-dd");
+
+        public KeyType keyType = KeyType.DEFAULT;
+        public long value;
+        public Date date;
+
+        public boolean init(Type type, String str) {
+            if (type.getPrimitiveType() == PrimitiveType.DATE) {
+                try {
+                    date = df10.parse(str);
+                } catch (Exception e) {
+                    LOG.warn("parse error str{}.", str);
+                    return false;
+                }
+                keyType = KeyType.DATE;
+            } else {
+                value = Long.valueOf(str);
+                keyType = KeyType.LONG;
+            }
+            return true;
+        }
+
+        public boolean init(Type type, LiteralExpr expr) {
+            switch (type.getPrimitiveType()) {
+                case BOOLEAN:
+                case TIME:
+                case DATETIME:
+                case FLOAT:
+                case DOUBLE:
+                case DECIMAL:
+                case DECIMALV2:
+                case CHAR:
+                case VARCHAR:
+                case LARGEINT:
+                    LOG.info("PartitionCache not support such key type {}", type.toSql());
+                    return false;
+                case DATE:
+                    date = getDateValue(expr);
+                    keyType = KeyType.DATE;
+                    break;
+                case TINYINT:
+                case SMALLINT:
+                case INT:
+                case BIGINT:
+                    value = expr.getLongValue();
+                    keyType = KeyType.LONG;
+                    break;
+            }
+            return true;
+        }
+
+        public void clone(PartitionKeyType key) {
+            keyType = key.keyType;
+            value = key.value;
+            date = key.date;
+        }
+
+        public boolean equals(PartitionKeyType key) {
+            return realValue() == key.realValue();
+        }
+
+        public void add(int num) {
+            if (keyType == KeyType.DATE) {
+                date = new Date(date.getTime() + num * 3600 * 24 * 1000);
+            } else {
+                value += num;
+            }
+        }
+
+        public String toString() {
+            if (keyType == KeyType.DEFAULT) {
+                return "";
+            } else if (keyType == KeyType.DATE) {
+                return df10.format(date);
+            } else {
+                return String.valueOf(value);
+            }
+        }
+
+        public long realValue() {
+            if (keyType == KeyType.DATE) {
+                return Long.parseLong(df8.format(date));
+            } else {
+                return value;
+            }
+        }
+
+        private Date getDateValue(LiteralExpr expr) {
+            value = expr.getLongValue() / 1000000;
+            Date dt = null;
+            try {
+                dt = df8.parse(String.valueOf(value));
+            } catch (Exception e) {
+            }
+            return dt;
+        }
+    }
+
+    private CompoundPredicate partitionKeyPredicate;
+    private OlapTable olapTable;
+    private RangePartitionInfo rangePartitionInfo;
+    private Column partitionColumn;
+    private List<PartitionSingle> partitionSingleList;
+
+    public CompoundPredicate getPartitionKeyPredicate() {
+        return partitionKeyPredicate;
+    }
+
+    public void setPartitionKeyPredicate(CompoundPredicate partitionKeyPredicate) {
+        this.partitionKeyPredicate = partitionKeyPredicate;
+    }
+
+    public RangePartitionInfo getRangePartitionInfo() {
+        return rangePartitionInfo;
+    }
+
+    public void setRangePartitionInfo(RangePartitionInfo rangePartitionInfo) {
+        this.rangePartitionInfo = rangePartitionInfo;
+    }
+
+    public Column getPartitionColumn() {
+        return partitionColumn;
+    }
+
+    public void setPartitionColumn(Column partitionColumn) {
+        this.partitionColumn = partitionColumn;
+    }
+
+    public List<PartitionSingle> getPartitionSingleList() {
+        return partitionSingleList;
+    }
+
+    public PartitionRange() {
+    }
+
+    public PartitionRange(CompoundPredicate partitionKeyPredicate, OlapTable olapTable,
+                          RangePartitionInfo rangePartitionInfo) {
+        this.partitionKeyPredicate = partitionKeyPredicate;
+        this.olapTable = olapTable;
+        this.rangePartitionInfo = rangePartitionInfo;
+        this.partitionSingleList = Lists.newArrayList();
+    }
+
+    /**
+     * analytics PartitionKey and PartitionInfo
+     *
+     * @return
+     */
+    public boolean analytics() {
+        if (rangePartitionInfo.getPartitionColumns().size() != 1) {
+            return false;
+        }
+        partitionColumn = rangePartitionInfo.getPartitionColumns().get(0);
+        PartitionColumnFilter filter = createPartitionFilter(this.partitionKeyPredicate, partitionColumn);
+        try {
+            if (!buildPartitionKeyRange(filter, partitionColumn)) {
+                return false;
+            }
+            getTablePartitionList(olapTable);
+        } catch (AnalysisException e) {
+            LOG.warn("get partition range failed, because:", e);
+            return false;
+        }
+        return true;
+    }
+
+    public boolean setCacheFlag(long cacheKey) {
+        boolean find = false;
+        for (PartitionSingle single : partitionSingleList) {
+            if (single.getCacheKey().realValue() == cacheKey) {
+                single.setFromCache(true);
+                find = true;
+                break;
+            }
+        }
+        return find;
+    }
+
+    public boolean setTooNewByID(long partitionId) {
+        boolean find = false;
+        for (PartitionSingle single : partitionSingleList) {
+            if (single.getPartition().getId() == partitionId) {
+                single.setTooNew(true);
+                find = true;
+                break;
+            }
+        }
+        return find;
+    }
+
+    public boolean setTooNewByKey(long cacheKey) {
+        boolean find = false;
+        for (PartitionSingle single : partitionSingleList) {
+            if (single.getCacheKey().realValue() == cacheKey) {
+                single.setTooNew(true);
+                find = true;
+                break;
+            }
+        }
+        return find;
+    }
+
+    /**
+     * Support left or right hit cache, not support middle.
+     * 20200113-2020115, not support 20200114

Review comment:
       It's my problem. I explained it in detail in the BE code, but it's simplified here.
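       For illustration, here is a minimal sketch of the rule above (an illustration only, not the actual BE or FE implementation): cached partitions are trimmed only from the left and right edges of the queried range, so the partitions that still have to be read from disk stay contiguous; a cache hit in the middle (20200114 in the example) would split the disk range in two, so it is ignored.

    import java.util.Arrays;
    import java.util.List;

    // Minimal sketch of the "left/right only" hit rule; class and method
    // names are made up for this example.
    public class HitRangeSketch {

        static class PartitionHit {
            final long key;          // e.g. 20200113
            final boolean fromCache; // true if this partition was found in the cache
            PartitionHit(long key, boolean fromCache) {
                this.key = key;
                this.fromCache = fromCache;
            }
        }

        // Returns {diskFromIdx, diskToIdx} (inclusive), or null if everything was cached.
        static int[] diskRange(List<PartitionHit> partitions) {
            int from = 0;
            int to = partitions.size() - 1;
            while (from <= to && partitions.get(from).fromCache) {
                from++;              // trim hits on the left edge
            }
            while (to >= from && partitions.get(to).fromCache) {
                to--;                // trim hits on the right edge
            }
            return from > to ? null : new int[] {from, to};
        }

        public static void main(String[] args) {
            // 20200114 is cached but sits in the middle: it cannot shrink the disk range.
            List<PartitionHit> parts = Arrays.asList(
                    new PartitionHit(20200113L, false),
                    new PartitionHit(20200114L, true),
                    new PartitionHit(20200115L, false));
            int[] range = diskRange(parts);
            System.out.println("disk range: " + parts.get(range[0]).key
                    + " - " + parts.get(range[1]).key); // 20200113 - 20200115
        }
    }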





[GitHub] [incubator-doris] marising commented on a change in pull request #4330: [Feature][Cache] Sql cache and partition cache #2581

Posted by GitBox <gi...@apache.org>.
marising commented on a change in pull request #4330:
URL: https://github.com/apache/incubator-doris/pull/4330#discussion_r474429568



##########
File path: fe/fe-core/src/main/java/org/apache/doris/qe/cache/CacheAnalyzer.java
##########
@@ -0,0 +1,450 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package org.apache.doris.qe.cache;
+
+import org.apache.doris.analysis.AggregateInfo;
+import org.apache.doris.analysis.BinaryPredicate;
+import org.apache.doris.analysis.CastExpr;
+import org.apache.doris.analysis.CompoundPredicate;
+import org.apache.doris.analysis.Expr;
+import org.apache.doris.analysis.InlineViewRef;
+import org.apache.doris.analysis.QueryStmt;
+import org.apache.doris.analysis.SelectStmt;
+import org.apache.doris.analysis.SlotRef;
+import org.apache.doris.analysis.StatementBase;
+import org.apache.doris.analysis.TableRef;
+import org.apache.doris.catalog.OlapTable;
+import org.apache.doris.catalog.RangePartitionInfo;
+import org.apache.doris.catalog.PartitionType;
+import org.apache.doris.catalog.Partition;
+import org.apache.doris.catalog.Column;
+import org.apache.doris.common.util.DebugUtil;
+import org.apache.doris.metric.MetricRepo;
+import org.apache.doris.planner.OlapScanNode;
+import org.apache.doris.planner.Planner;
+import org.apache.doris.planner.ScanNode;
+import org.apache.doris.qe.ConnectContext;
+import org.apache.doris.qe.RowBatch;
+import org.apache.doris.common.Config;
+import org.apache.doris.common.Status;
+
+import com.google.common.collect.Lists;
+import org.apache.doris.thrift.TUniqueId;
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.List;
+
+/**
+ * Analyze which caching mode a SQL is suitable for
+ * 1. T + 1 update is suitable for SQL mode
+ * 2. Partition by date, update the data of the day in near real time, which is suitable for Partition mode
+ */
+public class CacheAnalyzer {
+    private static final Logger LOG = LogManager.getLogger(CacheAnalyzer.class);
+
+    /**
+     * NoNeed : disable config or variable, not query, not scan table etc.
+     */
+    public enum CacheMode {
+        NoNeed,
+        None,
+        TTL,
+        Sql,
+        Partition
+    }
+
+    private ConnectContext context;
+    private boolean enableSqlCache = false;
+    private boolean enablePartitionCache = false;
+    private TUniqueId queryId;
+    private CacheMode cacheMode;
+    private CacheTable latestTable;
+    private StatementBase parsedStmt;
+    private SelectStmt selectStmt;
+    private List<ScanNode> scanNodes;
+    private OlapTable olapTable;
+    private RangePartitionInfo partitionInfo;
+    private Column partColumn;
+    private CompoundPredicate partitionPredicate;
+    private Cache cache;
+
+    public Cache getCache() {
+        return cache;
+    }
+
+    public CacheAnalyzer(ConnectContext context, StatementBase parsedStmt, Planner planner) {
+        this.context = context;
+        this.queryId = context.queryId();
+        this.parsedStmt = parsedStmt;
+        scanNodes = planner.getScanNodes();
+        latestTable = new CacheTable();
+        checkCacheConfig();
+    }
+
+    //for unit test
+    public CacheAnalyzer(ConnectContext context, StatementBase parsedStmt, List<ScanNode> scanNodes) {
+        this.context = context;
+        this.parsedStmt = parsedStmt;
+        this.scanNodes = scanNodes;
+        checkCacheConfig();
+    }
+
+    private void checkCacheConfig() {
+        if (Config.cache_enable_sql_mode) {
+            if (context.getSessionVariable().isEnableSqlCache()) {
+                enableSqlCache = true;
+            }
+        }
+        if (Config.cache_enable_partition_mode) {
+            if (context.getSessionVariable().isEnablePartitionCache()) {
+                enablePartitionCache = true;
+            }
+        }
+    }
+
+    public CacheMode getCacheMode() {
+        return cacheMode;
+    }
+
+    public class CacheTable implements Comparable<CacheTable> {
+        public OlapTable olapTable;
+        public long latestId;
+        public long latestVersion;
+        public long latestTime;
+
+        public CacheTable() {
+            olapTable = null;
+            latestId = 0;
+            latestVersion = 0;
+            latestTime = 0;
+        }
+
+        @Override
+        public int compareTo(CacheTable table) {
+            return (int) (table.latestTime - this.latestTime);
+        }
+
+        public void Debug() {
+            LOG.info("table {}, partition id {}, ver {}, time {}", olapTable.getName(), latestId, latestVersion, latestTime);
+        }
+    }
+
+    public boolean enableCache() {
+        return enableSqlCache || enablePartitionCache;
+    }
+
+    public boolean enableSqlCache() {
+        return enableSqlCache;
+    }
+
+    public boolean enablePartitionCache() {
+        return enablePartitionCache;
+    }
+
+    /**
+     * Check cache mode with SQL and table
+     * 1、Only Olap table
+     * 2、The update time of the table is before Config.last_version_interval_time
+     * 2、PartitionType is PartitionType.RANGE, and partition key has only one column
+     * 4、Partition key must be included in the group by clause
+     * 5、Where clause must contain only one partition key predicate
+     * CacheMode.Sql
+     * xxx FROM user_profile, updated before Config.last_version_interval_time
+     * CacheMode.Partition, partition by event_date, only the partition of today will be updated.
+     * SELECT xxx FROM app_event WHERE event_date >= 20191201 AND event_date <= 20191207 GROUP BY event_date
+     * SELECT xxx FROM app_event INNER JOIN user_Profile ON app_event.user_id = user_profile.user_id xxx
+     * SELECT xxx FROM app_event INNER JOIN user_profile ON xxx INNER JOIN site_channel ON xxx
+     */
+    public void checkCacheMode(long now) {
+        cacheMode = innerCheckCacheMode(now);
+    }
+
+    private CacheMode innerCheckCacheMode(long now) {
+        if (!enableCache()) {
+            return CacheMode.NoNeed;
+        }
+        if (!(parsedStmt instanceof SelectStmt) || scanNodes.size() == 0) {
+            return CacheMode.NoNeed;
+        }
+        MetricRepo.COUNTER_QUERY_TABLE.increase(1L);
+
+        this.selectStmt = (SelectStmt) parsedStmt;
+        //Check the last version time of the table
+        List<CacheTable> tblTimeList = Lists.newArrayList();
+        for (int i = 0; i < scanNodes.size(); i++) {
+            ScanNode node = scanNodes.get(i);
+            if (!(node instanceof OlapScanNode)) {
+                return CacheMode.None;
+            }
+            OlapScanNode oNode = (OlapScanNode) node;
+            OlapTable oTable = oNode.getOlapTable();
+            CacheTable cTable = getLastUpdateTime(oTable);
+            tblTimeList.add(cTable);
+        }
+        MetricRepo.COUNTER_QUERY_OLAP_TABLE.increase(1L);
+        Collections.sort(tblTimeList);
+        latestTable = tblTimeList.get(0);
+        latestTable.Debug();
+
+        if (now == 0) {
+            now = nowtime();
+        }
+        if (enableSqlCache() &&
+                (now - latestTable.latestTime) >= Config.cache_last_version_interval_second * 1000) {
+            LOG.info("TIME:{},{},{}", now, latestTable.latestTime, Config.cache_last_version_interval_second*1000);
+            cache = new SqlCache(this.queryId, this.selectStmt);
+            ((SqlCache) cache).setCacheInfo(this.latestTable);
+            MetricRepo.COUNTER_CACHE_MODE_SQL.increase(1L);
+            return CacheMode.Sql;
+        }
+
+        if (!enablePartitionCache()) {
+            return CacheMode.None;
+        }
+
+        //Check if selectStmt matches partition key
+        //Only one table can be updated in Config.cache_last_version_interval_second range
+        for (int i = 1; i < tblTimeList.size(); i++) {
+            if ((now - tblTimeList.get(i).latestTime) < Config.cache_last_version_interval_second * 1000) {
+                LOG.info("the time of other tables is newer than {}", Config.cache_last_version_interval_second);
+                return CacheMode.None;
+            }
+        }
+        olapTable = latestTable.olapTable;
+        if (olapTable.getPartitionInfo().getType() != PartitionType.RANGE) {
+            LOG.info("the partition of OlapTable not RANGE type");
+            return CacheMode.None;
+        }
+        partitionInfo = (RangePartitionInfo) olapTable.getPartitionInfo();
+        List<Column> columns = partitionInfo.getPartitionColumns();
+        //Partition key has only one column
+        if (columns.size() != 1) {
+            LOG.info("the size of columns for partition key is {}", columns.size());
+            return CacheMode.None;
+        }
+        partColumn = columns.get(0);
+        //Check if group expr contain partition column
+        if (!checkGroupByPartitionKey(this.selectStmt, partColumn)) {
+            LOG.info("not group by partition key, key {}", partColumn.getName());
+            return CacheMode.None;
+        }
+        //Check if whereClause have one CompoundPredicate of partition column
+        List<CompoundPredicate> compoundPredicates = Lists.newArrayList();
+        getPartitionKeyFromSelectStmt(this.selectStmt, partColumn, compoundPredicates);
+        if (compoundPredicates.size() != 1) {
+            LOG.info("the predicate size include partition key has {}", compoundPredicates.size());
+            return CacheMode.None;
+        }
+        partitionPredicate = compoundPredicates.get(0);
+        cache = new PartitionCache(this.queryId, this.selectStmt);
+        ((PartitionCache) cache).setCacheInfo(this.latestTable, this.partitionInfo, this.partColumn,
+                this.partitionPredicate);
+        MetricRepo.COUNTER_CACHE_MODE_PARTITION.increase(1L);
+        return CacheMode.Partition;
+    }
+
+    public CacheBeProxy.FetchCacheResult getCacheData() {
+        CacheProxy.FetchCacheResult cacheResult = null;
+        cacheMode = innerCheckCacheMode(0);
+        if (cacheMode == CacheMode.NoNeed) {
+            return cacheResult;
+        }
+        if (cacheMode == CacheMode.None) {
+            LOG.info("check cache mode {}, queryid {}", cacheMode, DebugUtil.printId(queryId));
+            return cacheResult;
+        }
+        Status status = new Status();
+        cacheResult = cache.getCacheData(status);
+
+        if (status.ok() && cacheResult != null) {
+            LOG.info("hit cache, mode {}, queryid {}, all count {}, value count {}, row count {}, data size {}",
+                    cacheMode, DebugUtil.printId(queryId),
+                    cacheResult.all_count, cacheResult.value_count,
+                    cacheResult.row_count, cacheResult.data_size);
+        } else {
+            LOG.info("miss cache, mode {}, queryid {}, code {}, msg {}", cacheMode,
+                    DebugUtil.printId(queryId), status.getErrorCode(), status.getErrorMsg());
+            cacheResult = null;
+        }
+        return cacheResult;
+    }
+
+    public long nowtime() {
+        return System.currentTimeMillis();
+    }
+
+    private void getPartitionKeyFromSelectStmt(SelectStmt stmt, Column partColumn,
+                                               List<CompoundPredicate> compoundPredicates) {
+        getPartitionKeyFromWhereClause(stmt.getWhereClause(), partColumn, compoundPredicates);
+        List<TableRef> tableRefs = stmt.getTableRefs();
+        for (TableRef tblRef : tableRefs) {
+            if (tblRef instanceof InlineViewRef) {
+                InlineViewRef viewRef = (InlineViewRef) tblRef;
+                QueryStmt queryStmt = viewRef.getViewStmt();
+                if (queryStmt instanceof SelectStmt) {
+                    getPartitionKeyFromSelectStmt((SelectStmt) queryStmt, partColumn, compoundPredicates);
+                }
+            }
+        }
+    }
+
+    /**
+     * Only support case 1
+     * 1.key >= a and key <= b
+     * 2.key = a or key = b
+     * 3.key in(a,b,c)
+     */
+    private void getPartitionKeyFromWhereClause(Expr expr, Column partColumn,
+                                                List<CompoundPredicate> compoundPredicates) {
+        if (expr == null) {
+            return;
+        }
+        if (expr instanceof CompoundPredicate) {
+            CompoundPredicate cp = (CompoundPredicate) expr;
+            if (cp.getOp() == CompoundPredicate.Operator.AND) {
+                if (cp.getChildren().size() == 2 && cp.getChild(0) instanceof BinaryPredicate &&
+                        cp.getChild(1) instanceof BinaryPredicate) {
+                    BinaryPredicate leftPre = (BinaryPredicate) cp.getChild(0);
+                    BinaryPredicate rightPre = (BinaryPredicate) cp.getChild(1);
+                    String leftColumn = getColumnName(leftPre);
+                    String rightColumn = getColumnName(rightPre);
+                    if (leftColumn.equalsIgnoreCase(partColumn.getName()) &&
+                            rightColumn.equalsIgnoreCase(partColumn.getName())) {
+                        compoundPredicates.add(cp);
+                    }
+                }
+            }
+            for (Expr subExpr : expr.getChildren()) {
+                getPartitionKeyFromWhereClause(subExpr, partColumn, compoundPredicates);
+            }
+        }
+    }
+
+    private String getColumnName(BinaryPredicate predicate) {
+        SlotRef slot = null;
+        if (predicate.getChild(0) instanceof SlotRef) {
+            slot = (SlotRef) predicate.getChild(0);
+        } else if (predicate.getChild(0) instanceof CastExpr) {
+            CastExpr expr = (CastExpr) predicate.getChild(0);
+            if (expr.getChild(0) instanceof SlotRef) {
+                slot = (SlotRef) expr.getChild(0);
+            }
+        }
+
+        if (slot != null) {
+            return slot.getColumnName();
+        }
+        return "";
+    }
+
+    /**
+     * Check the selectStmt and tableRefs always group by partition key
+     * 1. At least one group by
+     * 2. group by must contain partition key
+     * 3. AggregateInfo cannot be distinct agg

Review comment:
       Yes, it supports COUNT(DISTINCT xxx).
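       As an illustration of that check (hypothetical helper, not the real checkGroupByPartitionKey, which works on analyzed Expr trees): the query must have a GROUP BY that contains the partition column, and aggregates such as COUNT(DISTINCT user_id) remain allowed because the result set is still keyed by the group-by column.

    import java.util.Arrays;
    import java.util.List;

    // Simplified, string-based sketch of the group-by rule discussed above.
    public class GroupByCheckSketch {

        static boolean groupByContainsPartitionKey(List<String> groupByColumns, String partitionColumn) {
            if (groupByColumns == null || groupByColumns.isEmpty()) {
                return false;   // rule 1: there must be at least one GROUP BY column
            }
            // rule 2: the partition key must appear among the GROUP BY columns
            return groupByColumns.stream().anyMatch(c -> c.equalsIgnoreCase(partitionColumn));
        }

        public static void main(String[] args) {
            // SELECT event_date, COUNT(DISTINCT user_id) FROM app_event GROUP BY event_date
            System.out.println(groupByContainsPartitionKey(
                    Arrays.asList("event_date"), "event_date"));  // true -> partition cache applies
            // SELECT COUNT(DISTINCT user_id) FROM app_event  (no GROUP BY)
            System.out.println(groupByContainsPartitionKey(
                    Arrays.<String>asList(), "event_date"));      // false -> partition cache skipped
        }
    }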





[GitHub] [incubator-doris] marising commented on a change in pull request #4330: [Feature][Cache] Sql cache and partition cache #2581

Posted by GitBox <gi...@apache.org>.
marising commented on a change in pull request #4330:
URL: https://github.com/apache/incubator-doris/pull/4330#discussion_r472246168



##########
File path: fe/fe-core/src/main/java/org/apache/doris/qe/cache/PartitionCache.java
##########
@@ -0,0 +1,215 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package org.apache.doris.qe.cache;
+
+import com.google.common.collect.Lists;
+import org.apache.doris.analysis.CompoundPredicate;
+import org.apache.doris.analysis.Expr;
+import org.apache.doris.analysis.InlineViewRef;
+import org.apache.doris.analysis.QueryStmt;
+import org.apache.doris.analysis.SelectStmt;
+import org.apache.doris.analysis.TableRef;
+import org.apache.doris.catalog.Column;
+import org.apache.doris.catalog.OlapTable;
+import org.apache.doris.catalog.RangePartitionInfo;
+import org.apache.doris.common.Status;
+import org.apache.doris.common.util.DebugUtil;
+import org.apache.doris.metric.MetricRepo;
+import org.apache.doris.qe.RowBatch;
+import org.apache.doris.thrift.TUniqueId;
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+
+import java.util.List;
+
+public class PartitionCache extends Cache {
+    private static final Logger LOG = LogManager.getLogger(PartitionCache.class);
+    private SelectStmt nokeyStmt;

Review comment:
       After rewriting, this is the select statement with the partition key predicate removed.
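       As a purely illustrative example (strings only, not the actual PartitionCache API), the rewrite works roughly like this: nokeyStmt is the original query with the partition-key predicate stripped, and each partition's cached result can then be looked up by that SQL plus the partition value.

    // Hypothetical illustration of nokeyStmt and the per-partition cache key.
    public class NoKeyStmtSketch {

        // What the analyzer starts from.
        static final String ORIGINAL_SQL =
                "SELECT event_date, COUNT(*) FROM app_event "
              + "WHERE event_date >= 20200101 AND event_date <= 20200107 GROUP BY event_date";

        // What nokeyStmt conceptually looks like: same query, partition predicate removed.
        static final String NO_KEY_SQL =
                "SELECT event_date, COUNT(*) FROM app_event GROUP BY event_date";

        // Cache key for one partition = no-key SQL + partition value.
        static String cacheKeyFor(long partitionValue) {
            return NO_KEY_SQL + "|" + partitionValue;
        }

        public static void main(String[] args) {
            System.out.println("original : " + ORIGINAL_SQL);
            System.out.println("nokeyStmt: " + NO_KEY_SQL);
            for (long day = 20200101L; day <= 20200103L; day++) {
                System.out.println("cache key: " + cacheKeyFor(day));
            }
        }
    }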





[GitHub] [incubator-doris] marising commented on a change in pull request #4330: [Feature][Cache] Sql cache and partition cache #2581

Posted by GitBox <gi...@apache.org>.
marising commented on a change in pull request #4330:
URL: https://github.com/apache/incubator-doris/pull/4330#discussion_r472266869



##########
File path: fe/fe-core/src/main/java/org/apache/doris/qe/StmtExecutor.java
##########
@@ -575,6 +583,78 @@ private void handleSetStmt() {
         context.getState().setOk();
     }
 
+    private void sendChannel(MysqlChannel channel, List<CacheProxy.CacheValue> cacheValues, boolean hitAll)

Review comment:
       This indicates whether all of the query's partitions were hit, so would isHitAll be a better name?





[GitHub] [incubator-doris] marising commented on a change in pull request #4330: [Feature][Cache] Sql cache and partition cache #2581

Posted by GitBox <gi...@apache.org>.
marising commented on a change in pull request #4330:
URL: https://github.com/apache/incubator-doris/pull/4330#discussion_r474392090



##########
File path: fe/fe-core/src/main/java/org/apache/doris/qe/StmtExecutor.java
##########
@@ -575,6 +583,78 @@ private void handleSetStmt() {
         context.getState().setOk();
     }
 
+    private void sendChannel(MysqlChannel channel, List<CacheProxy.CacheValue> cacheValues, boolean hitAll)

Review comment:
       OK, I changed hitAll to isEos.
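       Roughly, the renamed flag behaves as in the sketch below (the Channel interface is hypothetical; the real code writes MySQL packets to MysqlChannel): cached row packets are replayed first, and EOF is sent only when isEos is true, i.e. every partition was served from the cache; otherwise the caller continues with the rewritten query for the missed partitions.

    import java.util.Arrays;
    import java.util.List;

    // Conceptual sketch of sending cached results; names are illustrative only.
    public class SendCacheSketch {

        interface Channel {
            void sendRow(byte[] packet);
            void sendEof();
        }

        static void sendCachedValues(Channel channel, List<byte[]> cachedPackets, boolean isEos) {
            for (byte[] packet : cachedPackets) {
                channel.sendRow(packet);   // replay rows that were stored in the cache
            }
            if (isEos) {
                channel.sendEof();         // all partitions hit: the result is complete
            }
            // otherwise the rewritten query still has to fetch the missing partitions
        }

        public static void main(String[] args) {
            Channel stdout = new Channel() {
                public void sendRow(byte[] p) { System.out.println("row: " + new String(p)); }
                public void sendEof()         { System.out.println("eof"); }
            };
            sendCachedValues(stdout,
                    Arrays.asList("20200113|10".getBytes(), "20200114|12".getBytes()), true);
        }
    }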





[GitHub] [incubator-doris] marising commented on a change in pull request #4330: [Feature][Cache] Sql cache and partition cache #2581

Posted by GitBox <gi...@apache.org>.
marising commented on a change in pull request #4330:
URL: https://github.com/apache/incubator-doris/pull/4330#discussion_r472259742



##########
File path: fe/fe-core/src/main/java/org/apache/doris/qe/cache/CacheAnalyzer.java
##########
@@ -0,0 +1,450 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package org.apache.doris.qe.cache;
+
+import org.apache.doris.analysis.AggregateInfo;
+import org.apache.doris.analysis.BinaryPredicate;
+import org.apache.doris.analysis.CastExpr;
+import org.apache.doris.analysis.CompoundPredicate;
+import org.apache.doris.analysis.Expr;
+import org.apache.doris.analysis.InlineViewRef;
+import org.apache.doris.analysis.QueryStmt;
+import org.apache.doris.analysis.SelectStmt;
+import org.apache.doris.analysis.SlotRef;
+import org.apache.doris.analysis.StatementBase;
+import org.apache.doris.analysis.TableRef;
+import org.apache.doris.catalog.OlapTable;
+import org.apache.doris.catalog.RangePartitionInfo;
+import org.apache.doris.catalog.PartitionType;
+import org.apache.doris.catalog.Partition;
+import org.apache.doris.catalog.Column;
+import org.apache.doris.common.util.DebugUtil;
+import org.apache.doris.metric.MetricRepo;
+import org.apache.doris.planner.OlapScanNode;
+import org.apache.doris.planner.Planner;
+import org.apache.doris.planner.ScanNode;
+import org.apache.doris.qe.ConnectContext;
+import org.apache.doris.qe.RowBatch;
+import org.apache.doris.common.Config;
+import org.apache.doris.common.Status;
+
+import com.google.common.collect.Lists;
+import org.apache.doris.thrift.TUniqueId;
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.List;
+
+/**
+ * Analyze which caching mode a SQL is suitable for
+ * 1. T + 1 update is suitable for SQL mode
+ * 2. Partition by date, update the data of the day in near real time, which is suitable for Partition mode
+ */
+public class CacheAnalyzer {
+    private static final Logger LOG = LogManager.getLogger(CacheAnalyzer.class);
+
+    /**
+     * NoNeed : disable config or variable, not query, not scan table etc.
+     */
+    public enum CacheMode {
+        NoNeed,
+        None,
+        TTL,
+        Sql,
+        Partition
+    }
+
+    private ConnectContext context;
+    private boolean enableSqlCache = false;
+    private boolean enablePartitionCache = false;
+    private TUniqueId queryId;
+    private CacheMode cacheMode;
+    private CacheTable latestTable;
+    private StatementBase parsedStmt;
+    private SelectStmt selectStmt;
+    private List<ScanNode> scanNodes;
+    private OlapTable olapTable;
+    private RangePartitionInfo partitionInfo;
+    private Column partColumn;
+    private CompoundPredicate partitionPredicate;
+    private Cache cache;
+
+    public Cache getCache() {
+        return cache;
+    }
+
+    public CacheAnalyzer(ConnectContext context, StatementBase parsedStmt, Planner planner) {
+        this.context = context;
+        this.queryId = context.queryId();
+        this.parsedStmt = parsedStmt;
+        scanNodes = planner.getScanNodes();
+        latestTable = new CacheTable();
+        checkCacheConfig();
+    }
+
+    //for unit test
+    public CacheAnalyzer(ConnectContext context, StatementBase parsedStmt, List<ScanNode> scanNodes) {
+        this.context = context;
+        this.parsedStmt = parsedStmt;
+        this.scanNodes = scanNodes;
+        checkCacheConfig();
+    }
+
+    private void checkCacheConfig() {
+        if (Config.cache_enable_sql_mode) {
+            if (context.getSessionVariable().isEnableSqlCache()) {

Review comment:
       My understanding is that getSessionVariable() returns both session variables and global variables, and session variables take priority over global ones. Please correct me if I've misunderstood.
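       A toy illustration of that precedence (this is not Doris's VariableMgr): a value set at session scope wins over the global, persisted value, and the Config flag still gates the feature as in checkCacheConfig() above.

    import java.util.HashMap;
    import java.util.Map;

    // Illustrative only: a session-scoped value overrides the global one.
    public class VariablePrecedenceSketch {

        static final Map<String, Boolean> globalVars = new HashMap<>();
        static final Map<String, Boolean> sessionVars = new HashMap<>();

        static boolean resolve(String name, boolean defaultValue) {
            if (sessionVars.containsKey(name)) {
                return sessionVars.get(name);                    // session scope has the highest priority
            }
            return globalVars.getOrDefault(name, defaultValue);  // then the persisted global value
        }

        public static void main(String[] args) {
            globalVars.put("enable_sql_cache", true);    // like SET GLOBAL enable_sql_cache = true
            sessionVars.put("enable_sql_cache", false);  // like SET enable_sql_cache = false (this session)
            System.out.println(resolve("enable_sql_cache", false));  // false: the session value wins
        }
    }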





[GitHub] [incubator-doris] kangkaisen commented on a change in pull request #4330: [Feature][Cache] Sql cache and partition cache #2581

Posted by GitBox <gi...@apache.org>.
kangkaisen commented on a change in pull request #4330:
URL: https://github.com/apache/incubator-doris/pull/4330#discussion_r468649412



##########
File path: fe/fe-core/src/main/java/org/apache/doris/qe/cache/PartitionRange.java
##########
@@ -0,0 +1,596 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package org.apache.doris.qe.cache;
+
+import org.apache.doris.analysis.CompoundPredicate;
+import org.apache.doris.analysis.BinaryPredicate;
+import org.apache.doris.analysis.DateLiteral;
+import org.apache.doris.analysis.InPredicate;
+import org.apache.doris.analysis.PartitionValue;
+import org.apache.doris.analysis.Expr;
+import org.apache.doris.analysis.LiteralExpr;
+import org.apache.doris.analysis.IntLiteral;
+import org.apache.doris.catalog.OlapTable;
+import org.apache.doris.catalog.PrimitiveType;
+import org.apache.doris.catalog.RangePartitionInfo;
+import org.apache.doris.catalog.Column;
+import org.apache.doris.catalog.Partition;
+import org.apache.doris.catalog.PartitionKey;
+import org.apache.doris.catalog.Type;
+import org.apache.doris.common.Config;
+import org.apache.doris.planner.PartitionColumnFilter;
+
+import org.apache.doris.common.AnalysisException;
+
+import com.google.common.collect.Lists;
+import com.google.common.collect.Range;
+
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+
+import java.text.SimpleDateFormat;
+import java.util.Date;
+import java.util.List;
+import java.util.Map;
+
+/**
+ * Convert the range of the partition to the list
+ * all partition by day/week/month split to day list
+ */
+public class PartitionRange {
+    private static final Logger LOG = LogManager.getLogger(PartitionRange.class);
+
+    public class PartitionSingle {
+        private Partition partition;
+        private PartitionKey partitionKey;
+        private long partitionId;
+        private PartitionKeyType cacheKey;
+        private boolean fromCache;
+        private boolean tooNew;
+
+        public Partition getPartition() {
+            return partition;
+        }
+
+        public void setPartition(Partition partition) {
+            this.partition = partition;
+        }
+
+        public PartitionKey getPartitionKey() {
+            return partitionKey;
+        }
+
+        public void setPartitionKey(PartitionKey key) {
+            this.partitionKey = key;
+        }
+
+        public long getPartitionId() {
+            return partitionId;
+        }
+
+        public void setPartitionId(long partitionId) {
+            this.partitionId = partitionId;
+        }
+
+        public PartitionKeyType getCacheKey() {
+            return cacheKey;
+        }
+
+        public void setCacheKey(PartitionKeyType cacheKey) {
+            this.cacheKey.clone(cacheKey);
+        }
+
+        public boolean isFromCache() {
+            return fromCache;
+        }
+
+        public void setFromCache(boolean fromCache) {
+            this.fromCache = fromCache;
+        }
+
+        public boolean isTooNew() {
+            return tooNew;
+        }
+
+        public void setTooNew(boolean tooNew) {
+            this.tooNew = tooNew;
+        }
+
+        public PartitionSingle() {
+            this.partitionId = 0;
+            this.cacheKey = new PartitionKeyType();
+            this.fromCache = false;
+            this.tooNew = false;
+        }
+
+        public void Debug() {
+            if (partition != null) {
+                LOG.info("partition id {}, cacheKey {}, version {}, time {}, fromCache {}, tooNew {} ",
+                        partitionId, cacheKey.realValue(),
+                        partition.getVisibleVersion(), partition.getVisibleVersionTime(),
+                        fromCache, tooNew);
+            } else {
+                LOG.info("partition id {}, cacheKey {}, fromCache {}, tooNew {} ", partitionId,
+                        cacheKey.realValue(), fromCache, tooNew);
+            }
+        }
+    }
+
+    public enum KeyType {
+        DEFAULT,
+        LONG,
+        DATE,
+        DATETIME,
+        TIME
+    }
+
+    public static class PartitionKeyType {
+        private SimpleDateFormat df8 = new SimpleDateFormat("yyyyMMdd");
+        private SimpleDateFormat df10 = new SimpleDateFormat("yyyy-MM-dd");
+
+        public KeyType keyType = KeyType.DEFAULT;
+        public long value;
+        public Date date;
+
+        public boolean init(Type type, String str) {
+            if (type.getPrimitiveType() == PrimitiveType.DATE) {
+                try {
+                    date = df10.parse(str);
+                } catch (Exception e) {
+                    LOG.warn("parse error str{}.", str);
+                    return false;
+                }
+                keyType = KeyType.DATE;
+            } else {
+                value = Long.valueOf(str);
+                keyType = KeyType.LONG;
+            }
+            return true;
+        }
+
+        public boolean init(Type type, LiteralExpr expr) {
+            switch (type.getPrimitiveType()) {
+                case BOOLEAN:
+                case TIME:
+                case DATETIME:
+                case FLOAT:
+                case DOUBLE:
+                case DECIMAL:
+                case DECIMALV2:
+                case CHAR:
+                case VARCHAR:
+                case LARGEINT:
+                    LOG.info("PartitionCache not support such key type {}", type.toSql());
+                    return false;
+                case DATE:
+                    date = getDateValue(expr);
+                    keyType = KeyType.DATE;
+                    break;
+                case TINYINT:
+                case SMALLINT:
+                case INT:
+                case BIGINT:
+                    value = expr.getLongValue();
+                    keyType = KeyType.LONG;
+                    break;
+            }
+            return true;
+        }
+
+        public void clone(PartitionKeyType key) {
+            keyType = key.keyType;
+            value = key.value;
+            date = key.date;
+        }
+
+        public boolean equals(PartitionKeyType key) {
+            return realValue() == key.realValue();
+        }
+
+        public void add(int num) {
+            if (keyType == KeyType.DATE) {
+                date = new Date(date.getTime() + num * 3600 * 24 * 1000);
+            } else {
+                value += num;
+            }
+        }
+
+        public String toString() {
+            if (keyType == KeyType.DEFAULT) {
+                return "";
+            } else if (keyType == KeyType.DATE) {
+                return df10.format(date);
+            } else {
+                return String.valueOf(value);
+            }
+        }
+
+        public long realValue() {
+            if (keyType == KeyType.DATE) {
+                return Long.parseLong(df8.format(date));
+            } else {
+                return value;
+            }
+        }
+
+        private Date getDateValue(LiteralExpr expr) {
+            value = expr.getLongValue() / 1000000;
+            Date dt = null;
+            try {
+                dt = df8.parse(String.valueOf(value));
+            } catch (Exception e) {
+            }
+            return dt;
+        }
+    }
+
+    private CompoundPredicate partitionKeyPredicate;
+    private OlapTable olapTable;
+    private RangePartitionInfo rangePartitionInfo;
+    private Column partitionColumn;
+    private List<PartitionSingle> partitionSingleList;
+
+    public CompoundPredicate getPartitionKeyPredicate() {
+        return partitionKeyPredicate;
+    }
+
+    public void setPartitionKeyPredicate(CompoundPredicate partitionKeyPredicate) {
+        this.partitionKeyPredicate = partitionKeyPredicate;
+    }
+
+    public RangePartitionInfo getRangePartitionInfo() {
+        return rangePartitionInfo;
+    }
+
+    public void setRangePartitionInfo(RangePartitionInfo rangePartitionInfo) {
+        this.rangePartitionInfo = rangePartitionInfo;
+    }
+
+    public Column getPartitionColumn() {
+        return partitionColumn;
+    }
+
+    public void setPartitionColumn(Column partitionColumn) {
+        this.partitionColumn = partitionColumn;
+    }
+
+    public List<PartitionSingle> getPartitionSingleList() {
+        return partitionSingleList;
+    }
+
+    public PartitionRange() {
+    }
+
+    public PartitionRange(CompoundPredicate partitionKeyPredicate, OlapTable olapTable,
+                          RangePartitionInfo rangePartitionInfo) {
+        this.partitionKeyPredicate = partitionKeyPredicate;
+        this.olapTable = olapTable;
+        this.rangePartitionInfo = rangePartitionInfo;
+        this.partitionSingleList = Lists.newArrayList();
+    }
+
+    /**
+     * analytics PartitionKey and PartitionInfo
+     *
+     * @return
+     */
+    public boolean analytics() {
+        if (rangePartitionInfo.getPartitionColumns().size() != 1) {
+            return false;
+        }
+        partitionColumn = rangePartitionInfo.getPartitionColumns().get(0);
+        PartitionColumnFilter filter = createPartitionFilter(this.partitionKeyPredicate, partitionColumn);
+        try {
+            if (!buildPartitionKeyRange(filter, partitionColumn)) {
+                return false;
+            }
+            getTablePartitionList(olapTable);
+        } catch (AnalysisException e) {
+            LOG.warn("get partition range failed, because:", e);
+            return false;
+        }
+        return true;
+    }
+
+    public boolean setCacheFlag(long cacheKey) {
+        boolean find = false;
+        for (PartitionSingle single : partitionSingleList) {
+            if (single.getCacheKey().realValue() == cacheKey) {
+                single.setFromCache(true);
+                find = true;
+                break;
+            }
+        }
+        return find;
+    }
+
+    public boolean setTooNewByID(long partitionId) {
+        boolean find = false;
+        for (PartitionSingle single : partitionSingleList) {
+            if (single.getPartition().getId() == partitionId) {
+                single.setTooNew(true);
+                find = true;
+                break;
+            }
+        }
+        return find;
+    }
+
+    public boolean setTooNewByKey(long cacheKey) {
+        boolean find = false;
+        for (PartitionSingle single : partitionSingleList) {
+            if (single.getCacheKey().realValue() == cacheKey) {
+                single.setTooNew(true);
+                find = true;
+                break;
+            }
+        }
+        return find;
+    }
+
+    /**
+     * Support left or right hit cache, not support middle.
+     * 20200113-2020115, not support 20200114
+     */
+    public Cache.HitRange diskPartitionRange(List<PartitionSingle> rangeList) {

Review comment:
       `diskPartitionRange`? This needs a better name.

##########
File path: fe/fe-core/src/main/java/org/apache/doris/qe/cache/CacheAnalyzer.java
##########
@@ -0,0 +1,450 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package org.apache.doris.qe.cache;
+
+import org.apache.doris.analysis.AggregateInfo;
+import org.apache.doris.analysis.BinaryPredicate;
+import org.apache.doris.analysis.CastExpr;
+import org.apache.doris.analysis.CompoundPredicate;
+import org.apache.doris.analysis.Expr;
+import org.apache.doris.analysis.InlineViewRef;
+import org.apache.doris.analysis.QueryStmt;
+import org.apache.doris.analysis.SelectStmt;
+import org.apache.doris.analysis.SlotRef;
+import org.apache.doris.analysis.StatementBase;
+import org.apache.doris.analysis.TableRef;
+import org.apache.doris.catalog.OlapTable;
+import org.apache.doris.catalog.RangePartitionInfo;
+import org.apache.doris.catalog.PartitionType;
+import org.apache.doris.catalog.Partition;
+import org.apache.doris.catalog.Column;
+import org.apache.doris.common.util.DebugUtil;
+import org.apache.doris.metric.MetricRepo;
+import org.apache.doris.planner.OlapScanNode;
+import org.apache.doris.planner.Planner;
+import org.apache.doris.planner.ScanNode;
+import org.apache.doris.qe.ConnectContext;
+import org.apache.doris.qe.RowBatch;
+import org.apache.doris.common.Config;
+import org.apache.doris.common.Status;
+
+import com.google.common.collect.Lists;
+import org.apache.doris.thrift.TUniqueId;
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.List;
+
+/**
+ * Analyze which caching mode a SQL is suitable for
+ * 1. T + 1 update is suitable for SQL mode
+ * 2. Partition by date, update the data of the day in near real time, which is suitable for Partition mode
+ */
+public class CacheAnalyzer {
+    private static final Logger LOG = LogManager.getLogger(CacheAnalyzer.class);
+
+    /**
+     * NoNeed : disable config or variable, not query, not scan table etc.
+     */
+    public enum CacheMode {
+        NoNeed,
+        None,
+        TTL,
+        Sql,
+        Partition
+    }
+
+    private ConnectContext context;
+    private boolean enableSqlCache = false;
+    private boolean enablePartitionCache = false;
+    private TUniqueId queryId;
+    private CacheMode cacheMode;
+    private CacheTable latestTable;
+    private StatementBase parsedStmt;
+    private SelectStmt selectStmt;
+    private List<ScanNode> scanNodes;
+    private OlapTable olapTable;
+    private RangePartitionInfo partitionInfo;
+    private Column partColumn;
+    private CompoundPredicate partitionPredicate;
+    private Cache cache;
+
+    public Cache getCache() {
+        return cache;
+    }
+
+    public CacheAnalyzer(ConnectContext context, StatementBase parsedStmt, Planner planner) {
+        this.context = context;
+        this.queryId = context.queryId();
+        this.parsedStmt = parsedStmt;
+        scanNodes = planner.getScanNodes();
+        latestTable = new CacheTable();
+        checkCacheConfig();
+    }
+
+    //for unit test
+    public CacheAnalyzer(ConnectContext context, StatementBase parsedStmt, List<ScanNode> scanNodes) {
+        this.context = context;
+        this.parsedStmt = parsedStmt;
+        this.scanNodes = scanNodes;
+        checkCacheConfig();
+    }
+
+    private void checkCacheConfig() {
+        if (Config.cache_enable_sql_mode) {
+            if (context.getSessionVariable().isEnableSqlCache()) {

Review comment:
       Is the session variable alone enough? A session variable can also be set globally, and a global session variable is persisted.

##########
File path: fe/fe-core/src/main/java/org/apache/doris/qe/cache/CacheAnalyzer.java
##########
@@ -0,0 +1,450 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package org.apache.doris.qe.cache;
+
+import org.apache.doris.analysis.AggregateInfo;
+import org.apache.doris.analysis.BinaryPredicate;
+import org.apache.doris.analysis.CastExpr;
+import org.apache.doris.analysis.CompoundPredicate;
+import org.apache.doris.analysis.Expr;
+import org.apache.doris.analysis.InlineViewRef;
+import org.apache.doris.analysis.QueryStmt;
+import org.apache.doris.analysis.SelectStmt;
+import org.apache.doris.analysis.SlotRef;
+import org.apache.doris.analysis.StatementBase;
+import org.apache.doris.analysis.TableRef;
+import org.apache.doris.catalog.OlapTable;
+import org.apache.doris.catalog.RangePartitionInfo;
+import org.apache.doris.catalog.PartitionType;
+import org.apache.doris.catalog.Partition;
+import org.apache.doris.catalog.Column;
+import org.apache.doris.common.util.DebugUtil;
+import org.apache.doris.metric.MetricRepo;
+import org.apache.doris.planner.OlapScanNode;
+import org.apache.doris.planner.Planner;
+import org.apache.doris.planner.ScanNode;
+import org.apache.doris.qe.ConnectContext;
+import org.apache.doris.qe.RowBatch;
+import org.apache.doris.common.Config;
+import org.apache.doris.common.Status;
+
+import com.google.common.collect.Lists;
+import org.apache.doris.thrift.TUniqueId;
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.List;
+
+/**
+ * Analyze which caching mode a SQL is suitable for
+ * 1. T + 1 update is suitable for SQL mode
+ * 2. Partition by date, update the data of the day in near real time, which is suitable for Partition mode
+ */
+public class CacheAnalyzer {
+    private static final Logger LOG = LogManager.getLogger(CacheAnalyzer.class);
+
+    /**
+     * NoNeed : disable config or variable, not query, not scan table etc.
+     */
+    public enum CacheMode {
+        NoNeed,
+        None,
+        TTL,
+        Sql,
+        Partition
+    }
+
+    private ConnectContext context;
+    private boolean enableSqlCache = false;
+    private boolean enablePartitionCache = false;
+    private TUniqueId queryId;
+    private CacheMode cacheMode;
+    private CacheTable latestTable;
+    private StatementBase parsedStmt;
+    private SelectStmt selectStmt;
+    private List<ScanNode> scanNodes;
+    private OlapTable olapTable;
+    private RangePartitionInfo partitionInfo;
+    private Column partColumn;
+    private CompoundPredicate partitionPredicate;
+    private Cache cache;
+
+    public Cache getCache() {
+        return cache;
+    }
+
+    public CacheAnalyzer(ConnectContext context, StatementBase parsedStmt, Planner planner) {
+        this.context = context;
+        this.queryId = context.queryId();
+        this.parsedStmt = parsedStmt;
+        scanNodes = planner.getScanNodes();
+        latestTable = new CacheTable();
+        checkCacheConfig();
+    }
+
+    //for unit test
+    public CacheAnalyzer(ConnectContext context, StatementBase parsedStmt, List<ScanNode> scanNodes) {
+        this.context = context;
+        this.parsedStmt = parsedStmt;
+        this.scanNodes = scanNodes;
+        checkCacheConfig();
+    }
+
+    private void checkCacheConfig() {
+        if (Config.cache_enable_sql_mode) {
+            if (context.getSessionVariable().isEnableSqlCache()) {
+                enableSqlCache = true;
+            }
+        }
+        if (Config.cache_enable_partition_mode) {
+            if (context.getSessionVariable().isEnablePartitionCache()) {
+                enablePartitionCache = true;
+            }
+        }
+    }
+
+    public CacheMode getCacheMode() {
+        return cacheMode;
+    }
+
+    public class CacheTable implements Comparable<CacheTable> {
+        public OlapTable olapTable;
+        public long latestId;
+        public long latestVersion;
+        public long latestTime;
+
+        public CacheTable() {
+            olapTable = null;
+            latestId = 0;
+            latestVersion = 0;
+            latestTime = 0;
+        }
+
+        @Override
+        public int compareTo(CacheTable table) {
+            return (int) (table.latestTime - this.latestTime);
+        }
+
+        public void Debug() {
+            LOG.info("table {}, partition id {}, ver {}, time {}", olapTable.getName(), latestId, latestVersion, latestTime);
+        }
+    }
+
+    public boolean enableCache() {
+        return enableSqlCache || enablePartitionCache;
+    }
+
+    public boolean enableSqlCache() {
+        return enableSqlCache;
+    }
+
+    public boolean enablePartitionCache() {
+        return enablePartitionCache;
+    }
+
+    /**
+     * Check cache mode with SQL and table
+     * 1. Only OLAP tables are supported
+     * 2. The table was last updated more than Config.cache_last_version_interval_second ago
+     * 3. PartitionType is PartitionType.RANGE, and the partition key has only one column
+     * 4. The partition key must be included in the group by clause
+     * 5. The where clause must contain only one partition key predicate
+     * CacheMode.Sql:
+     * SELECT xxx FROM user_profile, where user_profile was last updated more than Config.cache_last_version_interval_second ago
+     * CacheMode.Partition: partitioned by event_date, only the partition of today is updated.
+     * SELECT xxx FROM app_event WHERE event_date >= 20191201 AND event_date <= 20191207 GROUP BY event_date
+     * SELECT xxx FROM app_event INNER JOIN user_profile ON app_event.user_id = user_profile.user_id xxx
+     * SELECT xxx FROM app_event INNER JOIN user_profile ON xxx INNER JOIN site_channel ON xxx
+     */
+    public void checkCacheMode(long now) {
+        cacheMode = innerCheckCacheMode(now);
+    }
+
+    private CacheMode innerCheckCacheMode(long now) {
+        if (!enableCache()) {
+            return CacheMode.NoNeed;
+        }
+        if (!(parsedStmt instanceof SelectStmt) || scanNodes.size() == 0) {
+            return CacheMode.NoNeed;
+        }
+        MetricRepo.COUNTER_QUERY_TABLE.increase(1L);
+
+        this.selectStmt = (SelectStmt) parsedStmt;
+        //Check the last version time of the table
+        List<CacheTable> tblTimeList = Lists.newArrayList();
+        for (int i = 0; i < scanNodes.size(); i++) {
+            ScanNode node = scanNodes.get(i);
+            if (!(node instanceof OlapScanNode)) {
+                return CacheMode.None;
+            }
+            OlapScanNode oNode = (OlapScanNode) node;
+            OlapTable oTable = oNode.getOlapTable();
+            CacheTable cTable = getLastUpdateTime(oTable);
+            tblTimeList.add(cTable);
+        }
+        MetricRepo.COUNTER_QUERY_OLAP_TABLE.increase(1L);
+        Collections.sort(tblTimeList);
+        latestTable = tblTimeList.get(0);
+        latestTable.Debug();
+
+        if (now == 0) {
+            now = nowtime();
+        }
+        if (enableSqlCache() &&
+                (now - latestTable.latestTime) >= Config.cache_last_version_interval_second * 1000) {
+            LOG.info("TIME:{},{},{}", now, latestTable.latestTime, Config.cache_last_version_interval_second*1000);
+            cache = new SqlCache(this.queryId, this.selectStmt);
+            ((SqlCache) cache).setCacheInfo(this.latestTable);
+            MetricRepo.COUNTER_CACHE_MODE_SQL.increase(1L);
+            return CacheMode.Sql;
+        }
+
+        if (!enablePartitionCache()) {
+            return CacheMode.None;
+        }
+
+        //Check if selectStmt matches partition key
+        //Only one table can be updated in Config.cache_last_version_interval_second range
+        for (int i = 1; i < tblTimeList.size(); i++) {
+            if ((now - tblTimeList.get(i).latestTime) < Config.cache_last_version_interval_second * 1000) {
+                LOG.info("the time of other tables is newer than {}", Config.cache_last_version_interval_second);
+                return CacheMode.None;
+            }
+        }
+        olapTable = latestTable.olapTable;
+        if (olapTable.getPartitionInfo().getType() != PartitionType.RANGE) {
+            LOG.info("the partition of OlapTable not RANGE type");
+            return CacheMode.None;
+        }
+        partitionInfo = (RangePartitionInfo) olapTable.getPartitionInfo();
+        List<Column> columns = partitionInfo.getPartitionColumns();
+        //Partition key has only one column
+        if (columns.size() != 1) {
+            LOG.info("the size of columns for partition key is {}", columns.size());
+            return CacheMode.None;
+        }
+        partColumn = columns.get(0);
+        //Check if group expr contain partition column
+        if (!checkGroupByPartitionKey(this.selectStmt, partColumn)) {
+            LOG.info("not group by partition key, key {}", partColumn.getName());
+            return CacheMode.None;
+        }
+        //Check if whereClause have one CompoundPredicate of partition column
+        List<CompoundPredicate> compoundPredicates = Lists.newArrayList();
+        getPartitionKeyFromSelectStmt(this.selectStmt, partColumn, compoundPredicates);
+        if (compoundPredicates.size() != 1) {
+            LOG.info("the predicate size include partition key has {}", compoundPredicates.size());
+            return CacheMode.None;
+        }
+        partitionPredicate = compoundPredicates.get(0);
+        cache = new PartitionCache(this.queryId, this.selectStmt);
+        ((PartitionCache) cache).setCacheInfo(this.latestTable, this.partitionInfo, this.partColumn,
+                this.partitionPredicate);
+        MetricRepo.COUNTER_CACHE_MODE_PARTITION.increase(1L);
+        return CacheMode.Partition;
+    }
+
+    public CacheBeProxy.FetchCacheResult getCacheData() {
+        CacheProxy.FetchCacheResult cacheResult = null;
+        cacheMode = innerCheckCacheMode(0);
+        if (cacheMode == CacheMode.NoNeed) {
+            return cacheResult;
+        }
+        if (cacheMode == CacheMode.None) {
+            LOG.info("check cache mode {}, queryid {}", cacheMode, DebugUtil.printId(queryId));
+            return cacheResult;
+        }
+        Status status = new Status();
+        cacheResult = cache.getCacheData(status);
+
+        if (status.ok() && cacheResult != null) {
+            LOG.info("hit cache, mode {}, queryid {}, all count {}, value count {}, row count {}, data size {}",
+                    cacheMode, DebugUtil.printId(queryId),
+                    cacheResult.all_count, cacheResult.value_count,
+                    cacheResult.row_count, cacheResult.data_size);
+        } else {
+            LOG.info("miss cache, mode {}, queryid {}, code {}, msg {}", cacheMode,
+                    DebugUtil.printId(queryId), status.getErrorCode(), status.getErrorMsg());
+            cacheResult = null;
+        }
+        return cacheResult;
+    }
+
+    public long nowtime() {
+        return System.currentTimeMillis();
+    }
+
+    private void getPartitionKeyFromSelectStmt(SelectStmt stmt, Column partColumn,
+                                               List<CompoundPredicate> compoundPredicates) {
+        getPartitionKeyFromWhereClause(stmt.getWhereClause(), partColumn, compoundPredicates);
+        List<TableRef> tableRefs = stmt.getTableRefs();
+        for (TableRef tblRef : tableRefs) {
+            if (tblRef instanceof InlineViewRef) {
+                InlineViewRef viewRef = (InlineViewRef) tblRef;
+                QueryStmt queryStmt = viewRef.getViewStmt();
+                if (queryStmt instanceof SelectStmt) {
+                    getPartitionKeyFromSelectStmt((SelectStmt) queryStmt, partColumn, compoundPredicates);
+                }
+            }
+        }
+    }
+
+    /**
+     * Only case 1 is currently supported:
+     * 1.key >= a and key <= b
+     * 2.key = a or key = b
+     * 3.key in(a,b,c)
+     */
+    private void getPartitionKeyFromWhereClause(Expr expr, Column partColumn,
+                                                List<CompoundPredicate> compoundPredicates) {
+        if (expr == null) {
+            return;
+        }
+        if (expr instanceof CompoundPredicate) {
+            CompoundPredicate cp = (CompoundPredicate) expr;
+            if (cp.getOp() == CompoundPredicate.Operator.AND) {
+                if (cp.getChildren().size() == 2 && cp.getChild(0) instanceof BinaryPredicate &&
+                        cp.getChild(1) instanceof BinaryPredicate) {
+                    BinaryPredicate leftPre = (BinaryPredicate) cp.getChild(0);
+                    BinaryPredicate rightPre = (BinaryPredicate) cp.getChild(1);
+                    String leftColumn = getColumnName(leftPre);
+                    String rightColumn = getColumnName(rightPre);
+                    if (leftColumn.equalsIgnoreCase(partColumn.getName()) &&
+                            rightColumn.equalsIgnoreCase(partColumn.getName())) {
+                        compoundPredicates.add(cp);
+                    }
+                }
+            }
+            for (Expr subExpr : expr.getChildren()) {
+                getPartitionKeyFromWhereClause(subExpr, partColumn, compoundPredicates);
+            }
+        }
+    }
+
+    private String getColumnName(BinaryPredicate predicate) {
+        SlotRef slot = null;
+        if (predicate.getChild(0) instanceof SlotRef) {
+            slot = (SlotRef) predicate.getChild(0);
+        } else if (predicate.getChild(0) instanceof CastExpr) {
+            CastExpr expr = (CastExpr) predicate.getChild(0);
+            if (expr.getChild(0) instanceof SlotRef) {
+                slot = (SlotRef) expr.getChild(0);
+            }
+        }
+
+        if (slot != null) {
+            return slot.getColumnName();
+        }
+        return "";
+    }
+
+    /**
+     * Check that the selectStmt and its tableRefs always group by the partition key
+     * 1. There is at least one group by
+     * 2. The group by must contain the partition key
+     * 3. The AggregateInfo cannot be a distinct agg

Review comment:
       Why can't the AggregateInfo be a distinct agg? If the distinct cache result is the final result, I think that would be OK.
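
A minimal, self-contained sketch (plain Java, made-up data, not code from this patch) of the trade-off behind this question: a COUNT(DISTINCT) computed per partition is already final when the partition key is in the GROUP BY, but distinct results from different partitions cannot simply be added together, which is presumably why the check stays conservative.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class DistinctCacheSketch {
    public static void main(String[] args) {
        // user_ids seen in two date partitions (hypothetical data)
        Set<Long> p20200113 = new HashSet<>(Arrays.asList(1L, 2L, 3L));
        Set<Long> p20200114 = new HashSet<>(Arrays.asList(2L, 3L, 4L));

        // With GROUP BY event_date, each group lies in exactly one partition,
        // so each partition's COUNT(DISTINCT user_id) is already a final row
        // and could in principle be cached per partition.
        System.out.println(p20200113.size()); // 3
        System.out.println(p20200114.size()); // 3

        // Without the partition key in GROUP BY, the true distinct count is 4,
        // not the sum of the per-partition counts (3 + 3 = 6), so cached
        // per-partition distinct results cannot be merged across partitions.
        Set<Long> all = new HashSet<>(p20200113);
        all.addAll(p20200114);
        System.out.println(all.size()); // 4
    }
}
```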

##########
File path: fe/fe-core/src/main/java/org/apache/doris/qe/StmtExecutor.java
##########
@@ -575,6 +583,78 @@ private void handleSetStmt() {
         context.getState().setOk();
     }
 
+    private void sendChannel(MysqlChannel channel, List<CacheProxy.CacheValue> cacheValues, boolean hitAll)

Review comment:
       Rename `hitAll` to `isEos` or `isFinished`?

##########
File path: fe/fe-core/src/main/java/org/apache/doris/qe/cache/CacheAnalyzer.java
##########
@@ -0,0 +1,450 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package org.apache.doris.qe.cache;
+
+import org.apache.doris.analysis.AggregateInfo;
+import org.apache.doris.analysis.BinaryPredicate;
+import org.apache.doris.analysis.CastExpr;
+import org.apache.doris.analysis.CompoundPredicate;
+import org.apache.doris.analysis.Expr;
+import org.apache.doris.analysis.InlineViewRef;
+import org.apache.doris.analysis.QueryStmt;
+import org.apache.doris.analysis.SelectStmt;
+import org.apache.doris.analysis.SlotRef;
+import org.apache.doris.analysis.StatementBase;
+import org.apache.doris.analysis.TableRef;
+import org.apache.doris.catalog.OlapTable;
+import org.apache.doris.catalog.RangePartitionInfo;
+import org.apache.doris.catalog.PartitionType;
+import org.apache.doris.catalog.Partition;
+import org.apache.doris.catalog.Column;
+import org.apache.doris.common.util.DebugUtil;
+import org.apache.doris.metric.MetricRepo;
+import org.apache.doris.planner.OlapScanNode;
+import org.apache.doris.planner.Planner;
+import org.apache.doris.planner.ScanNode;
+import org.apache.doris.qe.ConnectContext;
+import org.apache.doris.qe.RowBatch;
+import org.apache.doris.common.Config;
+import org.apache.doris.common.Status;
+
+import com.google.common.collect.Lists;
+import org.apache.doris.thrift.TUniqueId;
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.List;
+
+/**
+ * Analyze which caching mode a SQL is suitable for
+ * 1. T + 1 update is suitable for SQL mode
+ * 2. Partition by date, update the data of the day in near real time, which is suitable for Partition mode
+ */
+public class CacheAnalyzer {
+    private static final Logger LOG = LogManager.getLogger(CacheAnalyzer.class);
+
+    /**
+     * NoNeed: cache is disabled by config or session variable, the statement is not a query, no table is scanned, etc.
+     */
+    public enum CacheMode {
+        NoNeed,
+        None,
+        TTL,
+        Sql,
+        Partition
+    }
+
+    private ConnectContext context;
+    private boolean enableSqlCache = false;
+    private boolean enablePartitionCache = false;
+    private TUniqueId queryId;
+    private CacheMode cacheMode;
+    private CacheTable latestTable;
+    private StatementBase parsedStmt;
+    private SelectStmt selectStmt;
+    private List<ScanNode> scanNodes;
+    private OlapTable olapTable;
+    private RangePartitionInfo partitionInfo;
+    private Column partColumn;
+    private CompoundPredicate partitionPredicate;
+    private Cache cache;
+
+    public Cache getCache() {
+        return cache;
+    }
+
+    public CacheAnalyzer(ConnectContext context, StatementBase parsedStmt, Planner planner) {
+        this.context = context;
+        this.queryId = context.queryId();
+        this.parsedStmt = parsedStmt;
+        scanNodes = planner.getScanNodes();
+        latestTable = new CacheTable();
+        checkCacheConfig();
+    }
+
+    //for unit test
+    public CacheAnalyzer(ConnectContext context, StatementBase parsedStmt, List<ScanNode> scanNodes) {
+        this.context = context;
+        this.parsedStmt = parsedStmt;
+        this.scanNodes = scanNodes;
+        checkCacheConfig();
+    }
+
+    private void checkCacheConfig() {
+        if (Config.cache_enable_sql_mode) {
+            if (context.getSessionVariable().isEnableSqlCache()) {
+                enableSqlCache = true;
+            }
+        }
+        if (Config.cache_enable_partition_mode) {
+            if (context.getSessionVariable().isEnablePartitionCache()) {
+                enablePartitionCache = true;
+            }
+        }
+    }
+
+    public CacheMode getCacheMode() {
+        return cacheMode;
+    }
+
+    public class CacheTable implements Comparable<CacheTable> {
+        public OlapTable olapTable;
+        public long latestId;
+        public long latestVersion;
+        public long latestTime;
+
+        public CacheTable() {
+            olapTable = null;
+            latestId = 0;
+            latestVersion = 0;
+            latestTime = 0;
+        }
+
+        @Override
+        public int compareTo(CacheTable table) {
+            return (int) (table.latestTime - this.latestTime);
+        }
+
+        public void Debug() {
+            LOG.info("table {}, partition id {}, ver {}, time {}", olapTable.getName(), latestId, latestVersion, latestTime);
+        }
+    }
+
+    public boolean enableCache() {
+        return enableSqlCache || enablePartitionCache;
+    }
+
+    public boolean enableSqlCache() {
+        return enableSqlCache;
+    }
+
+    public boolean enablePartitionCache() {
+        return enablePartitionCache;
+    }
+
+    /**
+     * Check cache mode with SQL and table
+     * 1. Only OLAP tables are supported
+     * 2. The table was last updated more than Config.cache_last_version_interval_second ago
+     * 3. PartitionType is PartitionType.RANGE, and the partition key has only one column
+     * 4. The partition key must be included in the group by clause
+     * 5. The where clause must contain only one partition key predicate
+     * CacheMode.Sql:
+     * SELECT xxx FROM user_profile, where user_profile was last updated more than Config.cache_last_version_interval_second ago
+     * CacheMode.Partition: partitioned by event_date, only the partition of today is updated.
+     * SELECT xxx FROM app_event WHERE event_date >= 20191201 AND event_date <= 20191207 GROUP BY event_date
+     * SELECT xxx FROM app_event INNER JOIN user_profile ON app_event.user_id = user_profile.user_id xxx
+     * SELECT xxx FROM app_event INNER JOIN user_profile ON xxx INNER JOIN site_channel ON xxx
+     */
+    public void checkCacheMode(long now) {
+        cacheMode = innerCheckCacheMode(now);
+    }
+
+    private CacheMode innerCheckCacheMode(long now) {
+        if (!enableCache()) {
+            return CacheMode.NoNeed;
+        }
+        if (!(parsedStmt instanceof SelectStmt) || scanNodes.size() == 0) {
+            return CacheMode.NoNeed;
+        }
+        MetricRepo.COUNTER_QUERY_TABLE.increase(1L);
+
+        this.selectStmt = (SelectStmt) parsedStmt;
+        //Check the last version time of the table
+        List<CacheTable> tblTimeList = Lists.newArrayList();
+        for (int i = 0; i < scanNodes.size(); i++) {
+            ScanNode node = scanNodes.get(i);
+            if (!(node instanceof OlapScanNode)) {
+                return CacheMode.None;
+            }
+            OlapScanNode oNode = (OlapScanNode) node;
+            OlapTable oTable = oNode.getOlapTable();
+            CacheTable cTable = getLastUpdateTime(oTable);
+            tblTimeList.add(cTable);
+        }
+        MetricRepo.COUNTER_QUERY_OLAP_TABLE.increase(1L);
+        Collections.sort(tblTimeList);
+        latestTable = tblTimeList.get(0);
+        latestTable.Debug();
+
+        if (now == 0) {
+            now = nowtime();
+        }
+        if (enableSqlCache() &&
+                (now - latestTable.latestTime) >= Config.cache_last_version_interval_second * 1000) {
+            LOG.info("TIME:{},{},{}", now, latestTable.latestTime, Config.cache_last_version_interval_second*1000);
+            cache = new SqlCache(this.queryId, this.selectStmt);
+            ((SqlCache) cache).setCacheInfo(this.latestTable);
+            MetricRepo.COUNTER_CACHE_MODE_SQL.increase(1L);
+            return CacheMode.Sql;
+        }
+
+        if (!enablePartitionCache()) {
+            return CacheMode.None;
+        }
+
+        //Check if selectStmt matches partition key
+        //Only one table can be updated in Config.cache_last_version_interval_second range
+        for (int i = 1; i < tblTimeList.size(); i++) {
+            if ((now - tblTimeList.get(i).latestTime) < Config.cache_last_version_interval_second * 1000) {
+                LOG.info("the time of other tables is newer than {}", Config.cache_last_version_interval_second);
+                return CacheMode.None;
+            }
+        }
+        olapTable = latestTable.olapTable;
+        if (olapTable.getPartitionInfo().getType() != PartitionType.RANGE) {
+            LOG.info("the partition of OlapTable not RANGE type");
+            return CacheMode.None;
+        }
+        partitionInfo = (RangePartitionInfo) olapTable.getPartitionInfo();
+        List<Column> columns = partitionInfo.getPartitionColumns();
+        //Partition key has only one column
+        if (columns.size() != 1) {
+            LOG.info("the size of columns for partition key is {}", columns.size());
+            return CacheMode.None;
+        }
+        partColumn = columns.get(0);
+        //Check if group expr contain partition column
+        if (!checkGroupByPartitionKey(this.selectStmt, partColumn)) {
+            LOG.info("not group by partition key, key {}", partColumn.getName());
+            return CacheMode.None;
+        }
+        //Check if whereClause have one CompoundPredicate of partition column
+        List<CompoundPredicate> compoundPredicates = Lists.newArrayList();
+        getPartitionKeyFromSelectStmt(this.selectStmt, partColumn, compoundPredicates);
+        if (compoundPredicates.size() != 1) {
+            LOG.info("the predicate size include partition key has {}", compoundPredicates.size());
+            return CacheMode.None;
+        }
+        partitionPredicate = compoundPredicates.get(0);
+        cache = new PartitionCache(this.queryId, this.selectStmt);
+        ((PartitionCache) cache).setCacheInfo(this.latestTable, this.partitionInfo, this.partColumn,
+                this.partitionPredicate);
+        MetricRepo.COUNTER_CACHE_MODE_PARTITION.increase(1L);
+        return CacheMode.Partition;
+    }
+
+    public CacheBeProxy.FetchCacheResult getCacheData() {
+        CacheProxy.FetchCacheResult cacheResult = null;
+        cacheMode = innerCheckCacheMode(0);
+        if (cacheMode == CacheMode.NoNeed) {
+            return cacheResult;
+        }
+        if (cacheMode == CacheMode.None) {
+            LOG.info("check cache mode {}, queryid {}", cacheMode, DebugUtil.printId(queryId));
+            return cacheResult;
+        }
+        Status status = new Status();
+        cacheResult = cache.getCacheData(status);
+
+        if (status.ok() && cacheResult != null) {
+            LOG.info("hit cache, mode {}, queryid {}, all count {}, value count {}, row count {}, data size {}",
+                    cacheMode, DebugUtil.printId(queryId),
+                    cacheResult.all_count, cacheResult.value_count,
+                    cacheResult.row_count, cacheResult.data_size);
+        } else {
+            LOG.info("miss cache, mode {}, queryid {}, code {}, msg {}", cacheMode,
+                    DebugUtil.printId(queryId), status.getErrorCode(), status.getErrorMsg());
+            cacheResult = null;
+        }
+        return cacheResult;
+    }
+
+    public long nowtime() {
+        return System.currentTimeMillis();
+    }
+
+    private void getPartitionKeyFromSelectStmt(SelectStmt stmt, Column partColumn,
+                                               List<CompoundPredicate> compoundPredicates) {
+        getPartitionKeyFromWhereClause(stmt.getWhereClause(), partColumn, compoundPredicates);
+        List<TableRef> tableRefs = stmt.getTableRefs();
+        for (TableRef tblRef : tableRefs) {
+            if (tblRef instanceof InlineViewRef) {
+                InlineViewRef viewRef = (InlineViewRef) tblRef;
+                QueryStmt queryStmt = viewRef.getViewStmt();
+                if (queryStmt instanceof SelectStmt) {
+                    getPartitionKeyFromSelectStmt((SelectStmt) queryStmt, partColumn, compoundPredicates);
+                }
+            }
+        }
+    }
+
+    /**
+     * Only case 1 is currently supported:
+     * 1.key >= a and key <= b
+     * 2.key = a or key = b

Review comment:
       Why isn't `key = a` supported? That case is only one BinaryPredicate.
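
To make the question concrete, here is a small self-contained model (plain Java, not Doris classes; the names are illustrative only) of the shape check quoted above: only an AND of two binary predicates on the partition column is collected, so a bare `key = a`, being a single BinaryPredicate with no surrounding CompoundPredicate, never matches.

```java
import java.util.ArrayList;
import java.util.List;

public class ShapeCheckSketch {
    static class Pred {}
    static class Binary extends Pred {
        final String column;
        Binary(String column) { this.column = column; }
    }
    static class And extends Pred {
        final Pred left, right;
        And(Pred left, Pred right) { this.left = left; this.right = right; }
    }

    // Mirrors the walk above: only AND(binary-on-key, binary-on-key) is collected.
    static void collect(Pred p, String partCol, List<And> out) {
        if (p instanceof And) {
            And and = (And) p;
            if (and.left instanceof Binary && and.right instanceof Binary
                    && ((Binary) and.left).column.equalsIgnoreCase(partCol)
                    && ((Binary) and.right).column.equalsIgnoreCase(partCol)) {
                out.add(and);
            }
            collect(and.left, partCol, out);
            collect(and.right, partCol, out);
        }
        // A bare Binary predicate is never inspected here, so "key = a" alone is skipped.
    }

    public static void main(String[] args) {
        List<And> hits = new ArrayList<>();
        // event_date >= 20200113 AND event_date <= 20200115
        collect(new And(new Binary("event_date"), new Binary("event_date")), "event_date", hits);
        System.out.println(hits.size()); // 1 -> the supported range shape

        hits.clear();
        // event_date = 20200114 (the case the reviewer asks about)
        collect(new Binary("event_date"), "event_date", hits);
        System.out.println(hits.size()); // 0 -> not collected
    }
}
```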

##########
File path: fe/fe-core/src/main/java/org/apache/doris/qe/cache/CacheAnalyzer.java
##########
@@ -0,0 +1,450 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package org.apache.doris.qe.cache;
+
+import org.apache.doris.analysis.AggregateInfo;
+import org.apache.doris.analysis.BinaryPredicate;
+import org.apache.doris.analysis.CastExpr;
+import org.apache.doris.analysis.CompoundPredicate;
+import org.apache.doris.analysis.Expr;
+import org.apache.doris.analysis.InlineViewRef;
+import org.apache.doris.analysis.QueryStmt;
+import org.apache.doris.analysis.SelectStmt;
+import org.apache.doris.analysis.SlotRef;
+import org.apache.doris.analysis.StatementBase;
+import org.apache.doris.analysis.TableRef;
+import org.apache.doris.catalog.OlapTable;
+import org.apache.doris.catalog.RangePartitionInfo;
+import org.apache.doris.catalog.PartitionType;
+import org.apache.doris.catalog.Partition;
+import org.apache.doris.catalog.Column;
+import org.apache.doris.common.util.DebugUtil;
+import org.apache.doris.metric.MetricRepo;
+import org.apache.doris.planner.OlapScanNode;
+import org.apache.doris.planner.Planner;
+import org.apache.doris.planner.ScanNode;
+import org.apache.doris.qe.ConnectContext;
+import org.apache.doris.qe.RowBatch;
+import org.apache.doris.common.Config;
+import org.apache.doris.common.Status;
+
+import com.google.common.collect.Lists;
+import org.apache.doris.thrift.TUniqueId;
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.List;
+
+/**
+ * Analyze which caching mode a SQL is suitable for
+ * 1. T + 1 update is suitable for SQL mode
+ * 2. Partition by date, update the data of the day in near real time, which is suitable for Partition mode
+ */
+public class CacheAnalyzer {
+    private static final Logger LOG = LogManager.getLogger(CacheAnalyzer.class);
+
+    /**
+     * NoNeed: cache is disabled by config or session variable, the statement is not a query, no table is scanned, etc.
+     */
+    public enum CacheMode {
+        NoNeed,
+        None,
+        TTL,
+        Sql,
+        Partition
+    }
+
+    private ConnectContext context;
+    private boolean enableSqlCache = false;
+    private boolean enablePartitionCache = false;
+    private TUniqueId queryId;
+    private CacheMode cacheMode;
+    private CacheTable latestTable;
+    private StatementBase parsedStmt;
+    private SelectStmt selectStmt;
+    private List<ScanNode> scanNodes;
+    private OlapTable olapTable;
+    private RangePartitionInfo partitionInfo;
+    private Column partColumn;
+    private CompoundPredicate partitionPredicate;
+    private Cache cache;
+
+    public Cache getCache() {
+        return cache;
+    }
+
+    public CacheAnalyzer(ConnectContext context, StatementBase parsedStmt, Planner planner) {
+        this.context = context;
+        this.queryId = context.queryId();
+        this.parsedStmt = parsedStmt;
+        scanNodes = planner.getScanNodes();
+        latestTable = new CacheTable();
+        checkCacheConfig();
+    }
+
+    //for unit test
+    public CacheAnalyzer(ConnectContext context, StatementBase parsedStmt, List<ScanNode> scanNodes) {
+        this.context = context;
+        this.parsedStmt = parsedStmt;
+        this.scanNodes = scanNodes;
+        checkCacheConfig();
+    }
+
+    private void checkCacheConfig() {
+        if (Config.cache_enable_sql_mode) {
+            if (context.getSessionVariable().isEnableSqlCache()) {
+                enableSqlCache = true;
+            }
+        }
+        if (Config.cache_enable_partition_mode) {
+            if (context.getSessionVariable().isEnablePartitionCache()) {
+                enablePartitionCache = true;
+            }
+        }
+    }
+
+    public CacheMode getCacheMode() {
+        return cacheMode;
+    }
+
+    public class CacheTable implements Comparable<CacheTable> {
+        public OlapTable olapTable;
+        public long latestId;

Review comment:
       ```suggestion
           public long latestPartitionId;
   ```

##########
File path: fe/fe-core/src/main/java/org/apache/doris/qe/cache/PartitionRange.java
##########
@@ -0,0 +1,596 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package org.apache.doris.qe.cache;
+
+import org.apache.doris.analysis.CompoundPredicate;
+import org.apache.doris.analysis.BinaryPredicate;
+import org.apache.doris.analysis.DateLiteral;
+import org.apache.doris.analysis.InPredicate;
+import org.apache.doris.analysis.PartitionValue;
+import org.apache.doris.analysis.Expr;
+import org.apache.doris.analysis.LiteralExpr;
+import org.apache.doris.analysis.IntLiteral;
+import org.apache.doris.catalog.OlapTable;
+import org.apache.doris.catalog.PrimitiveType;
+import org.apache.doris.catalog.RangePartitionInfo;
+import org.apache.doris.catalog.Column;
+import org.apache.doris.catalog.Partition;
+import org.apache.doris.catalog.PartitionKey;
+import org.apache.doris.catalog.Type;
+import org.apache.doris.common.Config;
+import org.apache.doris.planner.PartitionColumnFilter;
+
+import org.apache.doris.common.AnalysisException;
+
+import com.google.common.collect.Lists;
+import com.google.common.collect.Range;
+
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+
+import java.text.SimpleDateFormat;
+import java.util.Date;
+import java.util.List;
+import java.util.Map;
+
+/**
+ * Convert the partition range of a query into a list:
+ * partitions by day/week/month are all split into a per-day list
+ */
+public class PartitionRange {
+    private static final Logger LOG = LogManager.getLogger(PartitionRange.class);
+
+    public class PartitionSingle {
+        private Partition partition;
+        private PartitionKey partitionKey;
+        private long partitionId;
+        private PartitionKeyType cacheKey;
+        private boolean fromCache;
+        private boolean tooNew;
+
+        public Partition getPartition() {
+            return partition;
+        }
+
+        public void setPartition(Partition partition) {
+            this.partition = partition;
+        }
+
+        public PartitionKey getPartitionKey() {
+            return partitionKey;
+        }
+
+        public void setPartitionKey(PartitionKey key) {
+            this.partitionKey = key;
+        }
+
+        public long getPartitionId() {
+            return partitionId;
+        }
+
+        public void setPartitionId(long partitionId) {
+            this.partitionId = partitionId;
+        }
+
+        public PartitionKeyType getCacheKey() {
+            return cacheKey;
+        }
+
+        public void setCacheKey(PartitionKeyType cacheKey) {
+            this.cacheKey.clone(cacheKey);
+        }
+
+        public boolean isFromCache() {
+            return fromCache;
+        }
+
+        public void setFromCache(boolean fromCache) {
+            this.fromCache = fromCache;
+        }
+
+        public boolean isTooNew() {
+            return tooNew;
+        }
+
+        public void setTooNew(boolean tooNew) {
+            this.tooNew = tooNew;
+        }
+
+        public PartitionSingle() {
+            this.partitionId = 0;
+            this.cacheKey = new PartitionKeyType();
+            this.fromCache = false;
+            this.tooNew = false;
+        }
+
+        public void Debug() {
+            if (partition != null) {
+                LOG.info("partition id {}, cacheKey {}, version {}, time {}, fromCache {}, tooNew {} ",
+                        partitionId, cacheKey.realValue(),
+                        partition.getVisibleVersion(), partition.getVisibleVersionTime(),
+                        fromCache, tooNew);
+            } else {
+                LOG.info("partition id {}, cacheKey {}, fromCache {}, tooNew {} ", partitionId,
+                        cacheKey.realValue(), fromCache, tooNew);
+            }
+        }
+    }
+
+    public enum KeyType {
+        DEFAULT,
+        LONG,
+        DATE,
+        DATETIME,
+        TIME
+    }
+
+    public static class PartitionKeyType {
+        private SimpleDateFormat df8 = new SimpleDateFormat("yyyyMMdd");
+        private SimpleDateFormat df10 = new SimpleDateFormat("yyyy-MM-dd");
+
+        public KeyType keyType = KeyType.DEFAULT;
+        public long value;
+        public Date date;
+
+        public boolean init(Type type, String str) {
+            if (type.getPrimitiveType() == PrimitiveType.DATE) {
+                try {
+                    date = df10.parse(str);
+                } catch (Exception e) {
+                    LOG.warn("parse error str{}.", str);
+                    return false;
+                }
+                keyType = KeyType.DATE;
+            } else {
+                value = Long.valueOf(str);
+                keyType = KeyType.LONG;
+            }
+            return true;
+        }
+
+        public boolean init(Type type, LiteralExpr expr) {
+            switch (type.getPrimitiveType()) {
+                case BOOLEAN:
+                case TIME:
+                case DATETIME:
+                case FLOAT:
+                case DOUBLE:
+                case DECIMAL:
+                case DECIMALV2:
+                case CHAR:
+                case VARCHAR:
+                case LARGEINT:
+                    LOG.info("PartitionCache not support such key type {}", type.toSql());
+                    return false;
+                case DATE:
+                    date = getDateValue(expr);
+                    keyType = KeyType.DATE;
+                    break;
+                case TINYINT:
+                case SMALLINT:
+                case INT:
+                case BIGINT:
+                    value = expr.getLongValue();
+                    keyType = KeyType.LONG;
+                    break;
+            }
+            return true;
+        }
+
+        public void clone(PartitionKeyType key) {
+            keyType = key.keyType;
+            value = key.value;
+            date = key.date;
+        }
+
+        public boolean equals(PartitionKeyType key) {
+            return realValue() == key.realValue();
+        }
+
+        public void add(int num) {
+            if (keyType == KeyType.DATE) {
+                date = new Date(date.getTime() + num * 3600 * 24 * 1000);
+            } else {
+                value += num;
+            }
+        }
+
+        public String toString() {
+            if (keyType == KeyType.DEFAULT) {
+                return "";
+            } else if (keyType == KeyType.DATE) {
+                return df10.format(date);
+            } else {
+                return String.valueOf(value);
+            }
+        }
+
+        public long realValue() {
+            if (keyType == KeyType.DATE) {
+                return Long.parseLong(df8.format(date));
+            } else {
+                return value;
+            }
+        }
+
+        private Date getDateValue(LiteralExpr expr) {
+            value = expr.getLongValue() / 1000000;
+            Date dt = null;
+            try {
+                dt = df8.parse(String.valueOf(value));
+            } catch (Exception e) {
+            }
+            return dt;
+        }
+    }
+
+    private CompoundPredicate partitionKeyPredicate;
+    private OlapTable olapTable;
+    private RangePartitionInfo rangePartitionInfo;
+    private Column partitionColumn;
+    private List<PartitionSingle> partitionSingleList;
+
+    public CompoundPredicate getPartitionKeyPredicate() {
+        return partitionKeyPredicate;
+    }
+
+    public void setPartitionKeyPredicate(CompoundPredicate partitionKeyPredicate) {
+        this.partitionKeyPredicate = partitionKeyPredicate;
+    }
+
+    public RangePartitionInfo getRangePartitionInfo() {
+        return rangePartitionInfo;
+    }
+
+    public void setRangePartitionInfo(RangePartitionInfo rangePartitionInfo) {
+        this.rangePartitionInfo = rangePartitionInfo;
+    }
+
+    public Column getPartitionColumn() {
+        return partitionColumn;
+    }
+
+    public void setPartitionColumn(Column partitionColumn) {
+        this.partitionColumn = partitionColumn;
+    }
+
+    public List<PartitionSingle> getPartitionSingleList() {
+        return partitionSingleList;
+    }
+
+    public PartitionRange() {
+    }
+
+    public PartitionRange(CompoundPredicate partitionKeyPredicate, OlapTable olapTable,
+                          RangePartitionInfo rangePartitionInfo) {
+        this.partitionKeyPredicate = partitionKeyPredicate;
+        this.olapTable = olapTable;
+        this.rangePartitionInfo = rangePartitionInfo;
+        this.partitionSingleList = Lists.newArrayList();
+    }
+
+    /**
+     * Analyze the PartitionKey and PartitionInfo
+     *
+     * @return true if the partition range was resolved successfully
+     */
+    public boolean analytics() {
+        if (rangePartitionInfo.getPartitionColumns().size() != 1) {
+            return false;
+        }
+        partitionColumn = rangePartitionInfo.getPartitionColumns().get(0);
+        PartitionColumnFilter filter = createPartitionFilter(this.partitionKeyPredicate, partitionColumn);
+        try {
+            if (!buildPartitionKeyRange(filter, partitionColumn)) {
+                return false;
+            }
+            getTablePartitionList(olapTable);
+        } catch (AnalysisException e) {
+            LOG.warn("get partition range failed, because:", e);
+            return false;
+        }
+        return true;
+    }
+
+    public boolean setCacheFlag(long cacheKey) {
+        boolean find = false;
+        for (PartitionSingle single : partitionSingleList) {
+            if (single.getCacheKey().realValue() == cacheKey) {
+                single.setFromCache(true);
+                find = true;
+                break;
+            }
+        }
+        return find;
+    }
+
+    public boolean setTooNewByID(long partitionId) {
+        boolean find = false;
+        for (PartitionSingle single : partitionSingleList) {
+            if (single.getPartition().getId() == partitionId) {
+                single.setTooNew(true);
+                find = true;
+                break;
+            }
+        }
+        return find;
+    }
+
+    public boolean setTooNewByKey(long cacheKey) {
+        boolean find = false;
+        for (PartitionSingle single : partitionSingleList) {
+            if (single.getCacheKey().realValue() == cacheKey) {
+                single.setTooNew(true);
+                find = true;
+                break;
+            }
+        }
+        return find;
+    }
+
+    /**
+     * Only a cache hit at the left or right end of the range is supported, not in the middle.
+     * e.g. for 20200113-20200115, a hit on only 20200114 is not supported.
+     */
+    public Cache.HitRange diskPartitionRange(List<PartitionSingle> rangeList) {
+        Cache.HitRange hitRange = Cache.HitRange.None;
+        if (partitionSingleList.size() == 0) {
+            return hitRange;
+        }
+        int begin = partitionSingleList.size() - 1;
+        int end = 0;
+        for (int i = 0; i < partitionSingleList.size(); i++) {
+            if (!partitionSingleList.get(i).isFromCache()) {
+                if (begin > i) {
+                    begin = i;
+                }
+                if (end < i) {
+                    end = i;
+                }
+            }
+        }
+        if (end < begin) {
+            hitRange = Cache.HitRange.Full;
+            return hitRange;
+        }
+
+        if (end == partitionSingleList.size() - 1) {
+            hitRange = Cache.HitRange.Left;
+        }
+        if (begin == 0) {
+            hitRange = Cache.HitRange.Right;
+        }
+
+        rangeList.add(partitionSingleList.get(begin));
+        rangeList.add(partitionSingleList.get(end));
+        LOG.info("the new range for scan be is [{},{}], hit range", rangeList.get(0).getCacheKey().realValue(),
+                rangeList.get(1).getCacheKey().realValue(), hitRange);
+        return hitRange;
+    }
+
+    public List<PartitionSingle> updatePartitionRange() {

Review comment:
       Add a comment for this method.

##########
File path: fe/fe-core/src/main/java/org/apache/doris/qe/cache/PartitionCache.java
##########
@@ -0,0 +1,215 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package org.apache.doris.qe.cache;
+
+import com.google.common.collect.Lists;
+import org.apache.doris.analysis.CompoundPredicate;
+import org.apache.doris.analysis.Expr;
+import org.apache.doris.analysis.InlineViewRef;
+import org.apache.doris.analysis.QueryStmt;
+import org.apache.doris.analysis.SelectStmt;
+import org.apache.doris.analysis.TableRef;
+import org.apache.doris.catalog.Column;
+import org.apache.doris.catalog.OlapTable;
+import org.apache.doris.catalog.RangePartitionInfo;
+import org.apache.doris.common.Status;
+import org.apache.doris.common.util.DebugUtil;
+import org.apache.doris.metric.MetricRepo;
+import org.apache.doris.qe.RowBatch;
+import org.apache.doris.thrift.TUniqueId;
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+
+import java.util.List;
+
+public class PartitionCache extends Cache {
+    private static final Logger LOG = LogManager.getLogger(PartitionCache.class);
+    private SelectStmt nokeyStmt;

Review comment:
       What's the meaning of `nokeyStmt`?

##########
File path: fe/fe-core/src/main/java/org/apache/doris/qe/StmtExecutor.java
##########
@@ -575,6 +583,78 @@ private void handleSetStmt() {
         context.getState().setOk();
     }
 
+    private void sendChannel(MysqlChannel channel, List<CacheProxy.CacheValue> cacheValues, boolean hitAll)
+            throws Exception {
+        RowBatch batch = null;
+        for (CacheBeProxy.CacheValue value : cacheValues) {
+            batch = value.getRowBatch();
+            for (ByteBuffer row : batch.getBatch().getRows()) {
+                channel.sendOnePacket(row);
+            }
+            context.updateReturnRows(batch.getBatch().getRows().size());
+        }
+        if (hitAll) {
+            if (batch != null) {
+                statisticsForAuditLog = batch.getQueryStatistics();
+            }
+            context.getState().setEof();
+            return;
+        }
+    }
+
+    private boolean handleCacheStmt(CacheAnalyzer cacheAnalyzer,MysqlChannel channel) throws Exception {
+        RowBatch batch = null;
+        CacheBeProxy.FetchCacheResult cacheResult = cacheAnalyzer.getCacheData();
+        CacheMode mode = cacheAnalyzer.getCacheMode();
+        if (cacheResult != null) {
+            isCached = true;
+            if (cacheAnalyzer.getHitRange() == Cache.HitRange.Full) {
+                sendChannel(channel, cacheResult.getValueList(), true);
+                return true;
+            }
+            //rewrite sql
+            if (mode == CacheMode.Partition) {
+                if (cacheAnalyzer.getHitRange() == Cache.HitRange.Left) {
+                    sendChannel(channel, cacheResult.getValueList(), false);
+                }
+                SelectStmt newSelectStmt = cacheAnalyzer.getRewriteStmt();
+                newSelectStmt.reset();
+                analyzer = new Analyzer(context.getCatalog(), context);
+                newSelectStmt.analyze(analyzer);
+                planner = new Planner();
+                planner.plan(newSelectStmt, analyzer, context.getSessionVariable().toThrift());
+            }
+        }
+
+        coord = new Coordinator(context, analyzer, planner);
+        QeProcessorImpl.INSTANCE.registerQuery(context.queryId(),
+                new QeProcessorImpl.QueryInfo(context, originStmt.originStmt, coord));
+        coord.exec();
+
+        while (true) {
+            batch = coord.getNext();
+            if (batch.getBatch() != null) {
+                cacheAnalyzer.copyRowBatch(batch);
+                for (ByteBuffer row : batch.getBatch().getRows()) {
+                    channel.sendOnePacket(row);
+                }
+                context.updateReturnRows(batch.getBatch().getRows().size());
+            }
+            if (batch.isEos()) {
+                break;
+            }
+        }
+        
+        if (cacheResult != null && cacheAnalyzer.getHitRange() == Cache.HitRange.Right) {
+            sendChannel(channel, cacheResult.getValueList(), false);
+        }
+
+        cacheAnalyzer.updateCache();

Review comment:
       Do we need to update the cache every time?

##########
File path: fe/fe-core/src/main/java/org/apache/doris/qe/cache/RowBatchBuilder.java
##########
@@ -0,0 +1,156 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package org.apache.doris.qe.cache;
+
+import com.google.common.collect.Lists;
+import org.apache.doris.analysis.Expr;
+import org.apache.doris.catalog.Column;
+import org.apache.doris.catalog.Type;
+import org.apache.doris.qe.RowBatch;
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+
+import java.nio.ByteBuffer;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.HashMap;
+import java.util.List;
+
+public class RowBatchBuilder {
+    private static final Logger LOG = LogManager.getLogger(RowBatchBuilder.class);
+
+    private CacheBeProxy.UpdateCacheRequest updateRequest;
+    private CacheAnalyzer.CacheMode cacheMode;
+    private int keyIndex;
+    private Type keyType;
+    private HashMap<Long, PartitionRange.PartitionSingle> cachePartMap;
+    private List<byte[]> rowList;
+    private int batchSize;
+    private int rowSize;
+    private int dataSize;
+
+    public int getRowSize() {
+        return rowSize;
+    }
+
+    public RowBatchBuilder(CacheAnalyzer.CacheMode model) {
+        cacheMode = model;
+        keyIndex = 0;
+        keyType = Type.INVALID;
+        rowList = Lists.newArrayList();
+        cachePartMap = new HashMap<>();
+        batchSize = 0;
+        rowSize = 0;
+        dataSize = 0;
+    }
+
+    public void partitionIndex(ArrayList<Expr> resultExpr,

Review comment:
       This method needs a better name.

##########
File path: fe/fe-core/src/main/java/org/apache/doris/qe/cache/PartitionRange.java
##########
@@ -0,0 +1,596 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package org.apache.doris.qe.cache;
+
+import org.apache.doris.analysis.CompoundPredicate;
+import org.apache.doris.analysis.BinaryPredicate;
+import org.apache.doris.analysis.DateLiteral;
+import org.apache.doris.analysis.InPredicate;
+import org.apache.doris.analysis.PartitionValue;
+import org.apache.doris.analysis.Expr;
+import org.apache.doris.analysis.LiteralExpr;
+import org.apache.doris.analysis.IntLiteral;
+import org.apache.doris.catalog.OlapTable;
+import org.apache.doris.catalog.PrimitiveType;
+import org.apache.doris.catalog.RangePartitionInfo;
+import org.apache.doris.catalog.Column;
+import org.apache.doris.catalog.Partition;
+import org.apache.doris.catalog.PartitionKey;
+import org.apache.doris.catalog.Type;
+import org.apache.doris.common.Config;
+import org.apache.doris.planner.PartitionColumnFilter;
+
+import org.apache.doris.common.AnalysisException;
+
+import com.google.common.collect.Lists;
+import com.google.common.collect.Range;
+
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+
+import java.text.SimpleDateFormat;
+import java.util.Date;
+import java.util.List;
+import java.util.Map;
+
+/**
+ * Convert the partition range of a query into a list:
+ * partitions by day/week/month are all split into a per-day list
+ */
+public class PartitionRange {
+    private static final Logger LOG = LogManager.getLogger(PartitionRange.class);
+
+    public class PartitionSingle {
+        private Partition partition;
+        private PartitionKey partitionKey;
+        private long partitionId;
+        private PartitionKeyType cacheKey;
+        private boolean fromCache;
+        private boolean tooNew;
+
+        public Partition getPartition() {
+            return partition;
+        }
+
+        public void setPartition(Partition partition) {
+            this.partition = partition;
+        }
+
+        public PartitionKey getPartitionKey() {
+            return partitionKey;
+        }
+
+        public void setPartitionKey(PartitionKey key) {
+            this.partitionKey = key;
+        }
+
+        public long getPartitionId() {
+            return partitionId;
+        }
+
+        public void setPartitionId(long partitionId) {
+            this.partitionId = partitionId;
+        }
+
+        public PartitionKeyType getCacheKey() {
+            return cacheKey;
+        }
+
+        public void setCacheKey(PartitionKeyType cacheKey) {
+            this.cacheKey.clone(cacheKey);
+        }
+
+        public boolean isFromCache() {
+            return fromCache;
+        }
+
+        public void setFromCache(boolean fromCache) {
+            this.fromCache = fromCache;
+        }
+
+        public boolean isTooNew() {
+            return tooNew;
+        }
+
+        public void setTooNew(boolean tooNew) {
+            this.tooNew = tooNew;
+        }
+
+        public PartitionSingle() {
+            this.partitionId = 0;
+            this.cacheKey = new PartitionKeyType();
+            this.fromCache = false;
+            this.tooNew = false;
+        }
+
+        public void Debug() {
+            if (partition != null) {
+                LOG.info("partition id {}, cacheKey {}, version {}, time {}, fromCache {}, tooNew {} ",
+                        partitionId, cacheKey.realValue(),
+                        partition.getVisibleVersion(), partition.getVisibleVersionTime(),
+                        fromCache, tooNew);
+            } else {
+                LOG.info("partition id {}, cacheKey {}, fromCache {}, tooNew {} ", partitionId,
+                        cacheKey.realValue(), fromCache, tooNew);
+            }
+        }
+    }
+
+    public enum KeyType {
+        DEFAULT,
+        LONG,
+        DATE,
+        DATETIME,
+        TIME
+    }
+
+    public static class PartitionKeyType {
+        private SimpleDateFormat df8 = new SimpleDateFormat("yyyyMMdd");
+        private SimpleDateFormat df10 = new SimpleDateFormat("yyyy-MM-dd");
+
+        public KeyType keyType = KeyType.DEFAULT;
+        public long value;
+        public Date date;
+
+        public boolean init(Type type, String str) {
+            if (type.getPrimitiveType() == PrimitiveType.DATE) {
+                try {
+                    date = df10.parse(str);
+                } catch (Exception e) {
+                    LOG.warn("parse error str{}.", str);
+                    return false;
+                }
+                keyType = KeyType.DATE;
+            } else {
+                value = Long.valueOf(str);
+                keyType = KeyType.LONG;
+            }
+            return true;
+        }
+
+        public boolean init(Type type, LiteralExpr expr) {
+            switch (type.getPrimitiveType()) {
+                case BOOLEAN:
+                case TIME:
+                case DATETIME:
+                case FLOAT:
+                case DOUBLE:
+                case DECIMAL:
+                case DECIMALV2:
+                case CHAR:
+                case VARCHAR:
+                case LARGEINT:
+                    LOG.info("PartitionCache not support such key type {}", type.toSql());
+                    return false;
+                case DATE:
+                    date = getDateValue(expr);
+                    keyType = KeyType.DATE;
+                    break;
+                case TINYINT:
+                case SMALLINT:
+                case INT:
+                case BIGINT:
+                    value = expr.getLongValue();
+                    keyType = KeyType.LONG;
+                    break;
+            }
+            return true;
+        }
+
+        public void clone(PartitionKeyType key) {
+            keyType = key.keyType;
+            value = key.value;
+            date = key.date;
+        }
+
+        public boolean equals(PartitionKeyType key) {
+            return realValue() == key.realValue();
+        }
+
+        public void add(int num) {
+            if (keyType == KeyType.DATE) {
+                date = new Date(date.getTime() + num * 3600 * 24 * 1000);
+            } else {
+                value += num;
+            }
+        }
+
+        public String toString() {
+            if (keyType == KeyType.DEFAULT) {
+                return "";
+            } else if (keyType == KeyType.DATE) {
+                return df10.format(date);
+            } else {
+                return String.valueOf(value);
+            }
+        }
+
+        public long realValue() {
+            if (keyType == KeyType.DATE) {
+                return Long.parseLong(df8.format(date));
+            } else {
+                return value;
+            }
+        }
+
+        private Date getDateValue(LiteralExpr expr) {
+            value = expr.getLongValue() / 1000000;
+            Date dt = null;
+            try {
+                dt = df8.parse(String.valueOf(value));
+            } catch (Exception e) {
+            }
+            return dt;
+        }
+    }
+
+    private CompoundPredicate partitionKeyPredicate;
+    private OlapTable olapTable;
+    private RangePartitionInfo rangePartitionInfo;
+    private Column partitionColumn;
+    private List<PartitionSingle> partitionSingleList;
+
+    public CompoundPredicate getPartitionKeyPredicate() {
+        return partitionKeyPredicate;
+    }
+
+    public void setPartitionKeyPredicate(CompoundPredicate partitionKeyPredicate) {
+        this.partitionKeyPredicate = partitionKeyPredicate;
+    }
+
+    public RangePartitionInfo getRangePartitionInfo() {
+        return rangePartitionInfo;
+    }
+
+    public void setRangePartitionInfo(RangePartitionInfo rangePartitionInfo) {
+        this.rangePartitionInfo = rangePartitionInfo;
+    }
+
+    public Column getPartitionColumn() {
+        return partitionColumn;
+    }
+
+    public void setPartitionColumn(Column partitionColumn) {
+        this.partitionColumn = partitionColumn;
+    }
+
+    public List<PartitionSingle> getPartitionSingleList() {
+        return partitionSingleList;
+    }
+
+    public PartitionRange() {
+    }
+
+    public PartitionRange(CompoundPredicate partitionKeyPredicate, OlapTable olapTable,
+                          RangePartitionInfo rangePartitionInfo) {
+        this.partitionKeyPredicate = partitionKeyPredicate;
+        this.olapTable = olapTable;
+        this.rangePartitionInfo = rangePartitionInfo;
+        this.partitionSingleList = Lists.newArrayList();
+    }
+
+    /**
+     * Analyze the partition key predicate against the table's range partition info.
+     *
+     * @return true if a usable single-column partition range was built, false otherwise
+     */
+    public boolean analytics() {
+        if (rangePartitionInfo.getPartitionColumns().size() != 1) {
+            return false;
+        }
+        partitionColumn = rangePartitionInfo.getPartitionColumns().get(0);
+        PartitionColumnFilter filter = createPartitionFilter(this.partitionKeyPredicate, partitionColumn);
+        try {
+            if (!buildPartitionKeyRange(filter, partitionColumn)) {
+                return false;
+            }
+            getTablePartitionList(olapTable);
+        } catch (AnalysisException e) {
+            LOG.warn("get partition range failed, because:", e);
+            return false;
+        }
+        return true;
+    }
+
+    public boolean setCacheFlag(long cacheKey) {
+        boolean find = false;
+        for (PartitionSingle single : partitionSingleList) {
+            if (single.getCacheKey().realValue() == cacheKey) {
+                single.setFromCache(true);
+                find = true;
+                break;
+            }
+        }
+        return find;
+    }
+
+    public boolean setTooNewByID(long partitionId) {
+        boolean find = false;
+        for (PartitionSingle single : partitionSingleList) {
+            if (single.getPartition().getId() == partitionId) {
+                single.setTooNew(true);
+                find = true;
+                break;
+            }
+        }
+        return find;
+    }
+
+    public boolean setTooNewByKey(long cacheKey) {
+        boolean find = false;
+        for (PartitionSingle single : partitionSingleList) {
+            if (single.getCacheKey().realValue() == cacheKey) {
+                single.setTooNew(true);
+                find = true;
+                break;
+            }
+        }
+        return find;
+    }
+
+    /**
+     * Only a cache hit at the left or right edge of the queried range is supported, not in the middle.
+     * e.g. for the range 20200113-20200115, a cached 20200114 alone cannot be used.

Review comment:
       I don't understand this comment.
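
For context, the comment above appears to describe a restriction on partial cache hits: only a cached prefix or suffix of the requested day range can be reused, while a day cached in the middle of the range is ignored so that the part still read from disk stays one contiguous range. Below is a minimal, self-contained sketch of that idea; it is an interpretation, not the PR's diskPartitionRange implementation, and all names in it are hypothetical.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

// Hypothetical sketch of "left or right hit cache, not middle"; not code from the PR.
public class HitRangeSketch {
    // Returns the contiguous sub-range of requested days that must still be read from disk.
    // Cached days are only peeled off from the left and right ends of the request.
    static List<Integer> daysToReadFromDisk(List<Integer> requestedDays, Set<Integer> cachedDays) {
        int left = 0;
        int right = requestedDays.size();
        while (left < right && cachedDays.contains(requestedDays.get(left))) {
            left++;                       // cached prefix, e.g. 20200113
        }
        while (right > left && cachedDays.contains(requestedDays.get(right - 1))) {
            right--;                      // cached suffix, e.g. 20200115
        }
        return new ArrayList<>(requestedDays.subList(left, right));
    }

    public static void main(String[] args) {
        List<Integer> requested = List.of(20200113, 20200114, 20200115);
        // Only the middle day is cached: it cannot be used, the whole range is read from disk.
        System.out.println(daysToReadFromDisk(requested, Set.of(20200114)));            // [20200113, 20200114, 20200115]
        // The edge days are cached: only 20200114 still has to be read from disk.
        System.out.println(daysToReadFromDisk(requested, Set.of(20200113, 20200115)));  // [20200114]
    }
}
```

Restricting hits to the edges presumably keeps stitching cached rows and freshly scanned rows simple, because the part that goes to disk is always a single interval.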






[GitHub] [incubator-doris] kangkaisen commented on a change in pull request #4330: [Feature][Cache] Sql cache and partition cache #2581

Posted by GitBox <gi...@apache.org>.
kangkaisen commented on a change in pull request #4330:
URL: https://github.com/apache/incubator-doris/pull/4330#discussion_r473567236



##########
File path: fe/fe-core/src/main/java/org/apache/doris/qe/cache/PartitionRange.java
##########
@@ -0,0 +1,596 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package org.apache.doris.qe.cache;
+
+import org.apache.doris.analysis.CompoundPredicate;
+import org.apache.doris.analysis.BinaryPredicate;
+import org.apache.doris.analysis.DateLiteral;
+import org.apache.doris.analysis.InPredicate;
+import org.apache.doris.analysis.PartitionValue;
+import org.apache.doris.analysis.Expr;
+import org.apache.doris.analysis.LiteralExpr;
+import org.apache.doris.analysis.IntLiteral;
+import org.apache.doris.catalog.OlapTable;
+import org.apache.doris.catalog.PrimitiveType;
+import org.apache.doris.catalog.RangePartitionInfo;
+import org.apache.doris.catalog.Column;
+import org.apache.doris.catalog.Partition;
+import org.apache.doris.catalog.PartitionKey;
+import org.apache.doris.catalog.Type;
+import org.apache.doris.common.Config;
+import org.apache.doris.planner.PartitionColumnFilter;
+
+import org.apache.doris.common.AnalysisException;
+
+import com.google.common.collect.Lists;
+import com.google.common.collect.Range;
+
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+
+import java.text.SimpleDateFormat;
+import java.util.Date;
+import java.util.List;
+import java.util.Map;
+
+/**
+ * Convert a partition range into a list of single partitions:
+ * partitions defined by day/week/month are all expanded into a per-day list,
+ * e.g. a monthly partition covering 2020-01 becomes the days 20200101..20200131.
+ */
+public class PartitionRange {
+    private static final Logger LOG = LogManager.getLogger(PartitionRange.class);
+
+    public class PartitionSingle {
+        private Partition partition;
+        private PartitionKey partitionKey;
+        private long partitionId;
+        private PartitionKeyType cacheKey;
+        private boolean fromCache;
+        private boolean tooNew;
+
+        public Partition getPartition() {
+            return partition;
+        }
+
+        public void setPartition(Partition partition) {
+            this.partition = partition;
+        }
+
+        public PartitionKey getPartitionKey() {
+            return partitionKey;
+        }
+
+        public void setPartitionKey(PartitionKey key) {
+            this.partitionKey = key;
+        }
+
+        public long getPartitionId() {
+            return partitionId;
+        }
+
+        public void setPartitionId(long partitionId) {
+            this.partitionId = partitionId;
+        }
+
+        public PartitionKeyType getCacheKey() {
+            return cacheKey;
+        }
+
+        public void setCacheKey(PartitionKeyType cacheKey) {
+            this.cacheKey.clone(cacheKey);
+        }
+
+        public boolean isFromCache() {
+            return fromCache;
+        }
+
+        public void setFromCache(boolean fromCache) {
+            this.fromCache = fromCache;
+        }
+
+        public boolean isTooNew() {
+            return tooNew;
+        }
+
+        public void setTooNew(boolean tooNew) {
+            this.tooNew = tooNew;
+        }
+
+        public PartitionSingle() {
+            this.partitionId = 0;
+            this.cacheKey = new PartitionKeyType();
+            this.fromCache = false;
+            this.tooNew = false;
+        }
+
+        public void Debug() {
+            if (partition != null) {
+                LOG.info("partition id {}, cacheKey {}, version {}, time {}, fromCache {}, tooNew {} ",
+                        partitionId, cacheKey.realValue(),
+                        partition.getVisibleVersion(), partition.getVisibleVersionTime(),
+                        fromCache, tooNew);
+            } else {
+                LOG.info("partition id {}, cacheKey {}, fromCache {}, tooNew {} ", partitionId,
+                        cacheKey.realValue(), fromCache, tooNew);
+            }
+        }
+    }
+
+    public enum KeyType {
+        DEFAULT,
+        LONG,
+        DATE,
+        DATETIME,
+        TIME
+    }
+
+    public static class PartitionKeyType {
+        private SimpleDateFormat df8 = new SimpleDateFormat("yyyyMMdd");
+        private SimpleDateFormat df10 = new SimpleDateFormat("yyyy-MM-dd");
+
+        public KeyType keyType = KeyType.DEFAULT;
+        public long value;
+        public Date date;
+
+        public boolean init(Type type, String str) {
+            if (type.getPrimitiveType() == PrimitiveType.DATE) {
+                try {
+                    date = df10.parse(str);
+                } catch (Exception e) {
+                    LOG.warn("failed to parse date string {}", str);
+                    return false;
+                }
+                keyType = KeyType.DATE;
+            } else {
+                value = Long.valueOf(str);
+                keyType = KeyType.LONG;
+            }
+            return true;
+        }
+
+        public boolean init(Type type, LiteralExpr expr) {
+            switch (type.getPrimitiveType()) {
+                case BOOLEAN:
+                case TIME:
+                case DATETIME:
+                case FLOAT:
+                case DOUBLE:
+                case DECIMAL:
+                case DECIMALV2:
+                case CHAR:
+                case VARCHAR:
+                case LARGEINT:
+                    LOG.info("PartitionCache does not support key type {}", type.toSql());
+                    return false;
+                case DATE:
+                    date = getDateValue(expr);
+                    keyType = KeyType.DATE;
+                    break;
+                case TINYINT:
+                case SMALLINT:
+                case INT:
+                case BIGINT:
+                    value = expr.getLongValue();
+                    keyType = KeyType.LONG;
+                    break;
+            }
+            return true;
+        }
+
+        public void clone(PartitionKeyType key) {
+            keyType = key.keyType;
+            value = key.value;
+            date = key.date;
+        }
+
+        public boolean equals(PartitionKeyType key) {
+            return realValue() == key.realValue();
+        }
+
+        public void add(int num) {
+            if (keyType == KeyType.DATE) {
+                // use long arithmetic so a large day offset does not overflow int
+                date = new Date(date.getTime() + num * 24L * 3600 * 1000);
+            } else {
+                value += num;
+            }
+        }
+
+        public String toString() {
+            if (keyType == KeyType.DEFAULT) {
+                return "";
+            } else if (keyType == KeyType.DATE) {
+                return df10.format(date);
+            } else {
+                return String.valueOf(value);
+            }
+        }
+
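+        // Normalize both key types to a long: DATE keys are formatted as yyyyMMdd, so keys can be compared numerically.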
+        public long realValue() {
+            if (keyType == KeyType.DATE) {
+                return Long.parseLong(df8.format(date));
+            } else {
+                return value;
+            }
+        }
+
+        private Date getDateValue(LiteralExpr expr) {
+            value = expr.getLongValue() / 1000000;
+            Date dt = null;
+            try {
+                dt = df8.parse(String.valueOf(value));
+            } catch (Exception e) {
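+                // a parse failure is swallowed here, so getDateValue() may return null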
+            }
+            return dt;
+        }
+    }
+
+    private CompoundPredicate partitionKeyPredicate;
+    private OlapTable olapTable;
+    private RangePartitionInfo rangePartitionInfo;
+    private Column partitionColumn;
+    private List<PartitionSingle> partitionSingleList;
+
+    public CompoundPredicate getPartitionKeyPredicate() {
+        return partitionKeyPredicate;
+    }
+
+    public void setPartitionKeyPredicate(CompoundPredicate partitionKeyPredicate) {
+        this.partitionKeyPredicate = partitionKeyPredicate;
+    }
+
+    public RangePartitionInfo getRangePartitionInfo() {
+        return rangePartitionInfo;
+    }
+
+    public void setRangePartitionInfo(RangePartitionInfo rangePartitionInfo) {
+        this.rangePartitionInfo = rangePartitionInfo;
+    }
+
+    public Column getPartitionColumn() {
+        return partitionColumn;
+    }
+
+    public void setPartitionColumn(Column partitionColumn) {
+        this.partitionColumn = partitionColumn;
+    }
+
+    public List<PartitionSingle> getPartitionSingleList() {
+        return partitionSingleList;
+    }
+
+    public PartitionRange() {
+    }
+
+    public PartitionRange(CompoundPredicate partitionKeyPredicate, OlapTable olapTable,
+                          RangePartitionInfo rangePartitionInfo) {
+        this.partitionKeyPredicate = partitionKeyPredicate;
+        this.olapTable = olapTable;
+        this.rangePartitionInfo = rangePartitionInfo;
+        this.partitionSingleList = Lists.newArrayList();
+    }
+
+    /**
+     * Analyze the partition key predicate against the table's range partition info.
+     *
+     * @return true if a usable single-column partition range was built, false otherwise
+     */
+    public boolean analytics() {
+        if (rangePartitionInfo.getPartitionColumns().size() != 1) {
+            return false;
+        }
+        partitionColumn = rangePartitionInfo.getPartitionColumns().get(0);
+        PartitionColumnFilter filter = createPartitionFilter(this.partitionKeyPredicate, partitionColumn);
+        try {
+            if (!buildPartitionKeyRange(filter, partitionColumn)) {
+                return false;
+            }
+            getTablePartitionList(olapTable);
+        } catch (AnalysisException e) {
+            LOG.warn("get partition range failed, because:", e);
+            return false;
+        }
+        return true;
+    }
+
+    public boolean setCacheFlag(long cacheKey) {
+        boolean find = false;
+        for (PartitionSingle single : partitionSingleList) {
+            if (single.getCacheKey().realValue() == cacheKey) {
+                single.setFromCache(true);
+                find = true;
+                break;
+            }
+        }
+        return find;
+    }
+
+    public boolean setTooNewByID(long partitionId) {
+        boolean find = false;
+        for (PartitionSingle single : partitionSingleList) {
+            if (single.getPartition().getId() == partitionId) {
+                single.setTooNew(true);
+                find = true;
+                break;
+            }
+        }
+        return find;
+    }
+
+    public boolean setTooNewByKey(long cacheKey) {
+        boolean find = false;
+        for (PartitionSingle single : partitionSingleList) {
+            if (single.getCacheKey().realValue() == cacheKey) {
+                single.setTooNew(true);
+                find = true;
+                break;
+            }
+        }
+        return find;
+    }
+
+    /**
+     * Only a cache hit at the left or right edge of the queried range is supported, not in the middle.
+     * e.g. for the range 20200113-20200115, a cached 20200114 alone cannot be used.
+     */
+    public Cache.HitRange diskPartitionRange(List<PartitionSingle> rangeList) {

Review comment:
       OK






[GitHub] [incubator-doris] kangkaisen commented on a change in pull request #4330: [Feature][Cache] Sql cache and partition cache #2581

Posted by GitBox <gi...@apache.org>.
kangkaisen commented on a change in pull request #4330:
URL: https://github.com/apache/incubator-doris/pull/4330#discussion_r473562239



##########
File path: fe/fe-core/src/main/java/org/apache/doris/qe/cache/CacheAnalyzer.java
##########
@@ -0,0 +1,450 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package org.apache.doris.qe.cache;
+
+import org.apache.doris.analysis.AggregateInfo;
+import org.apache.doris.analysis.BinaryPredicate;
+import org.apache.doris.analysis.CastExpr;
+import org.apache.doris.analysis.CompoundPredicate;
+import org.apache.doris.analysis.Expr;
+import org.apache.doris.analysis.InlineViewRef;
+import org.apache.doris.analysis.QueryStmt;
+import org.apache.doris.analysis.SelectStmt;
+import org.apache.doris.analysis.SlotRef;
+import org.apache.doris.analysis.StatementBase;
+import org.apache.doris.analysis.TableRef;
+import org.apache.doris.catalog.OlapTable;
+import org.apache.doris.catalog.RangePartitionInfo;
+import org.apache.doris.catalog.PartitionType;
+import org.apache.doris.catalog.Partition;
+import org.apache.doris.catalog.Column;
+import org.apache.doris.common.util.DebugUtil;
+import org.apache.doris.metric.MetricRepo;
+import org.apache.doris.planner.OlapScanNode;
+import org.apache.doris.planner.Planner;
+import org.apache.doris.planner.ScanNode;
+import org.apache.doris.qe.ConnectContext;
+import org.apache.doris.qe.RowBatch;
+import org.apache.doris.common.Config;
+import org.apache.doris.common.Status;
+
+import com.google.common.collect.Lists;
+import org.apache.doris.thrift.TUniqueId;
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.List;
+
+/**
+ * Analyze which caching mode a SQL query is suitable for:
+ * 1. Tables updated once per day (T+1) are suitable for SQL cache mode.
+ * 2. Tables partitioned by date, where only the current day's data is updated in near real time, are suitable for Partition cache mode.
+ */
+public class CacheAnalyzer {
+    private static final Logger LOG = LogManager.getLogger(CacheAnalyzer.class);
+
+    /**
+     * NoNeed: caching is disabled by config or session variable, the statement is not a query, no table is scanned, etc.
+     */
+    public enum CacheMode {
+        NoNeed,
+        None,
+        TTL,
+        Sql,
+        Partition
+    }
+
+    private ConnectContext context;
+    private boolean enableSqlCache = false;
+    private boolean enablePartitionCache = false;
+    private TUniqueId queryId;
+    private CacheMode cacheMode;
+    private CacheTable latestTable;
+    private StatementBase parsedStmt;
+    private SelectStmt selectStmt;
+    private List<ScanNode> scanNodes;
+    private OlapTable olapTable;
+    private RangePartitionInfo partitionInfo;
+    private Column partColumn;
+    private CompoundPredicate partitionPredicate;
+    private Cache cache;
+
+    public Cache getCache() {
+        return cache;
+    }
+
+    public CacheAnalyzer(ConnectContext context, StatementBase parsedStmt, Planner planner) {
+        this.context = context;
+        this.queryId = context.queryId();
+        this.parsedStmt = parsedStmt;
+        scanNodes = planner.getScanNodes();
+        latestTable = new CacheTable();
+        checkCacheConfig();
+    }
+
+    //for unit test
+    public CacheAnalyzer(ConnectContext context, StatementBase parsedStmt, List<ScanNode> scanNodes) {
+        this.context = context;
+        this.parsedStmt = parsedStmt;
+        this.scanNodes = scanNodes;
+        checkCacheConfig();
+    }
+
+    private void checkCacheConfig() {
+        if (Config.cache_enable_sql_mode) {
+            if (context.getSessionVariable().isEnableSqlCache()) {

Review comment:
       Yes. `Session variables have higher priority than global variables.`.
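
As a minimal sketch of that precedence, assuming the effective switch is simply the conjunction of the fe.conf config and the session variable (the Config and SessionVariable names follow the quoted diff; the wrapper class below is made up for the example):

```java
// Hypothetical, self-contained illustration of the nested check in checkCacheConfig(); not code from the PR.
class SqlCacheSwitchSketch {
    // cluster-wide switch, normally read from fe.conf (Config.cache_enable_sql_mode)
    static boolean cacheEnableSqlMode = true;

    // per-session switch (SessionVariable.isEnableSqlCache())
    private final boolean sessionEnableSqlCache;

    SqlCacheSwitchSketch(boolean sessionEnableSqlCache) {
        this.sessionEnableSqlCache = sessionEnableSqlCache;
    }

    boolean sqlCacheEnabled() {
        // The global config acts as a master switch; within it, the session
        // variable decides, so each session can still opt out individually.
        return cacheEnableSqlMode && sessionEnableSqlCache;
    }

    public static void main(String[] args) {
        System.out.println(new SqlCacheSwitchSketch(true).sqlCacheEnabled());   // true
        System.out.println(new SqlCacheSwitchSketch(false).sqlCacheEnabled());  // false, the session opts out
    }
}
```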



