Posted to issues@flink.apache.org by GitBox <gi...@apache.org> on 2020/07/23 07:08:57 UTC

[GitHub] [flink] fsk119 opened a new pull request #12966: [FLINK-17427][table sql/planner]Support SupportsPartitionPushDown in …

fsk119 opened a new pull request #12966:
URL: https://github.com/apache/flink/pull/12966


   …planner.
   
   <!--
   *Thank you very much for contributing to Apache Flink - we are happy that you want to help us improve Flink. To help the community review your contribution in the best possible way, please go through the checklist below, which will get the contribution into a shape in which it can be best reviewed.*
   
   *Please understand that we do not do this to make contributions to Flink a hassle. In order to uphold a high standard of quality for code contributions, while at the same time managing a large number of contributions, we need contributors to prepare the contributions well, and give reviewers enough contextual information for the review. Please also understand that contributions that do not follow this guide will take longer to review and thus typically be picked up with lower priority by the community.*
   
   ## Contribution Checklist
   
     - Make sure that the pull request corresponds to a [JIRA issue](https://issues.apache.org/jira/projects/FLINK/issues). Exceptions are made for typos in JavaDoc or documentation files, which need no JIRA issue.
     
     - Name the pull request in the form "[FLINK-XXXX] [component] Title of the pull request", where *FLINK-XXXX* should be replaced by the actual issue number. Skip *component* if you are unsure about which is the best component.
      Typo fixes that have no associated JIRA issue should be named following this pattern: `[hotfix] [docs] Fix typo in event time introduction` or `[hotfix] [javadocs] Expand JavaDoc for PunctuatedWatermarkGenerator`.
   
     - Fill out the template below to describe the changes contributed by the pull request. That will give reviewers the context they need to do the review.
     
     - Make sure that the change passes the automated tests, i.e., `mvn clean verify` passes. You can set up Azure Pipelines CI to do that following [this guide](https://cwiki.apache.org/confluence/display/FLINK/Azure+Pipelines#AzurePipelines-Tutorial:SettingupAzurePipelinesforaforkoftheFlinkrepository).
   
     - Each pull request should address only one issue, not mix up code from multiple issues.
     
     - Each commit in the pull request has a meaningful commit message (including the JIRA id)
   
     - Once all items of the checklist are addressed, remove the above text and this checklist, leaving only the filled out template below.
   
   
   **(The sections below can be removed for hotfixes of typos)**
   -->
   
   ## What is the purpose of the change
   
   *Support partition push down (the `SupportsPartitionPushDown` ability of `DynamicTableSource`) in the planner.*
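
   For reviewers' context, here is a minimal, hypothetical sketch (not code from this PR) of what a `DynamicTableSource` has to expose for the new rule to apply. The class name and the backing field are invented; the two overridden methods only mirror how the rule calls `listPartitions()` to obtain candidate partitions and `applyPartitions(...)` to hand back the pruned result.

   ```java
   import java.util.List;
   import java.util.Map;
   import java.util.Optional;

   import org.apache.flink.table.connector.source.DynamicTableSource;
   import org.apache.flink.table.connector.source.abilities.SupportsPartitionPushDown;

   /** Hypothetical partitioned source; only the push-down ability is shown. */
   public abstract class ExamplePartitionedSource implements DynamicTableSource, SupportsPartitionPushDown {

   	// Partitions this source will actually read; null until the rule applies pruning.
   	private List<Map<String, String>> remainingPartitions;

   	@Override
   	public Optional<List<Map<String, String>>> listPartitions() {
   		// Returning Optional.empty() (or throwing UnsupportedOperationException)
   		// makes the rule fall back to listing partitions from the catalog.
   		return Optional.empty();
   	}

   	@Override
   	public void applyPartitions(List<Map<String, String>> remainingPartitions) {
   		// The rule passes in only the partitions that survived pruning;
   		// the source should restrict its runtime read to these.
   		this.remainingPartitions = remainingPartitions;
   	}
   }
   ```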
   
   
   ## Brief change log
   
     - *Implement `PushPartitionIntoTableSourceScanRule` to push partition pruning into the table source scan.*
     - *Add the rule to the planner's rule set (an illustrative grouping sketch follows below).*
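
   The actual wiring happens in the planner's existing rule-set definitions; the snippet below is only an illustrative sketch, using a hypothetical helper class, of how the new rule can be grouped into a Calcite `RuleSet` next to a related transpose rule.

   ```java
   import org.apache.calcite.rel.rules.FilterProjectTransposeRule;
   import org.apache.calcite.tools.RuleSet;
   import org.apache.calcite.tools.RuleSets;

   import org.apache.flink.table.planner.plan.rules.logical.PushPartitionIntoTableSourceScanRule;

   /** Hypothetical helper that groups the new rule with a related transpose rule. */
   public final class ExampleRuleSetWiring {

   	private ExampleRuleSetWiring() {
   	}

   	public static RuleSet pushPartitionRules() {
   		return RuleSets.ofList(
   			PushPartitionIntoTableSourceScanRule.INSTANCE,
   			FilterProjectTransposeRule.INSTANCE);
   	}
   }
   ```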
   
   
   ## Verifying this change
   
   This change added tests and can be verified as follows:
   
     - *Passes the same tests as those designed for `PushPartitionIntoLegacyTableSourceScanRule`.*
   
   ## Does this pull request potentially affect one of the following parts:
   
     - Dependencies (does it add or upgrade a dependency): (yes / **no**)
     - The public API, i.e., is any changed class annotated with `@Public(Evolving)`: (yes / **no**)
     - The serializers: (yes / **no** / don't know)
     - The runtime per-record code paths (performance sensitive): (yes / **no** / don't know)
     - Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (yes / **no** / don't know)
     - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
     - Does this pull request introduce a new feature? (yes / **no**)
     - If yes, how is the feature documented? (not applicable / docs / JavaDocs / **not documented**)
   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #12966: [FLINK-17427][table sql/planner]Support SupportsPartitionPushDown in …

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #12966:
URL: https://github.com/apache/flink/pull/12966#issuecomment-662861328


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "0f19df4d1fcbb4093b69d772628b67b81ebb443a",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4759",
       "triggerID" : "0f19df4d1fcbb4093b69d772628b67b81ebb443a",
       "triggerType" : "PUSH"
     }, {
       "hash" : "6697110fb14fef778707f74b66227df2953c487c",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5107",
       "triggerID" : "6697110fb14fef778707f74b66227df2953c487c",
       "triggerType" : "PUSH"
     }, {
       "hash" : "1790d93b79cfdfc3f65a7805e444699736f80d93",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5122",
       "triggerID" : "1790d93b79cfdfc3f65a7805e444699736f80d93",
       "triggerType" : "PUSH"
     }, {
       "hash" : "9a05a030c661def4a45b36370dbcfa5e786ed8dc",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5147",
       "triggerID" : "9a05a030c661def4a45b36370dbcfa5e786ed8dc",
       "triggerType" : "PUSH"
     }, {
       "hash" : "97bb191e646a7ddc62d105e08a7473c8e5561160",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5455",
       "triggerID" : "97bb191e646a7ddc62d105e08a7473c8e5561160",
       "triggerType" : "PUSH"
     }, {
       "hash" : "eea5ebcd848bd4e554134096073cbd88d9725395",
       "status" : "FAILURE",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5458",
       "triggerID" : "eea5ebcd848bd4e554134096073cbd88d9725395",
       "triggerType" : "PUSH"
     }, {
       "hash" : "50079cc5a0bbe71de746f78a22180febf9a35e57",
       "status" : "UNKNOWN",
       "url" : "TBD",
       "triggerID" : "50079cc5a0bbe71de746f78a22180febf9a35e57",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * 1790d93b79cfdfc3f65a7805e444699736f80d93 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5122) 
   * eea5ebcd848bd4e554134096073cbd88d9725395 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5458) 
   * 50079cc5a0bbe71de746f78a22180febf9a35e57 UNKNOWN
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #12966: [FLINK-17427][table sql/planner]Support SupportsPartitionPushDown in …

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #12966:
URL: https://github.com/apache/flink/pull/12966#issuecomment-662861328


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "0f19df4d1fcbb4093b69d772628b67b81ebb443a",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4759",
       "triggerID" : "0f19df4d1fcbb4093b69d772628b67b81ebb443a",
       "triggerType" : "PUSH"
     }, {
       "hash" : "6697110fb14fef778707f74b66227df2953c487c",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5107",
       "triggerID" : "6697110fb14fef778707f74b66227df2953c487c",
       "triggerType" : "PUSH"
     }, {
       "hash" : "1790d93b79cfdfc3f65a7805e444699736f80d93",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5122",
       "triggerID" : "1790d93b79cfdfc3f65a7805e444699736f80d93",
       "triggerType" : "PUSH"
     }, {
       "hash" : "9a05a030c661def4a45b36370dbcfa5e786ed8dc",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5147",
       "triggerID" : "9a05a030c661def4a45b36370dbcfa5e786ed8dc",
       "triggerType" : "PUSH"
     }, {
       "hash" : "97bb191e646a7ddc62d105e08a7473c8e5561160",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5455",
       "triggerID" : "97bb191e646a7ddc62d105e08a7473c8e5561160",
       "triggerType" : "PUSH"
     }, {
       "hash" : "eea5ebcd848bd4e554134096073cbd88d9725395",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5458",
       "triggerID" : "eea5ebcd848bd4e554134096073cbd88d9725395",
       "triggerType" : "PUSH"
     }, {
       "hash" : "50079cc5a0bbe71de746f78a22180febf9a35e57",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5467",
       "triggerID" : "50079cc5a0bbe71de746f78a22180febf9a35e57",
       "triggerType" : "PUSH"
     }, {
       "hash" : "b1b4efe76749cd0dfa6b4aea94ca2ce9f0d15be1",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5469",
       "triggerID" : "b1b4efe76749cd0dfa6b4aea94ca2ce9f0d15be1",
       "triggerType" : "PUSH"
     }, {
       "hash" : "4642a985cde81555583a17880cd2462399338310",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5503",
       "triggerID" : "4642a985cde81555583a17880cd2462399338310",
       "triggerType" : "PUSH"
     }, {
       "hash" : "984744723761b8124aa003f23e65d4bb484a73c7",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5504",
       "triggerID" : "984744723761b8124aa003f23e65d4bb484a73c7",
       "triggerType" : "PUSH"
     }, {
       "hash" : "e91a84cc20e3655749b8cf9b69ed79d855aaedaf",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5522",
       "triggerID" : "e91a84cc20e3655749b8cf9b69ed79d855aaedaf",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d02272061d2264ee74d67da5d9f0e1524f1c7d52",
       "status" : "FAILURE",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5657",
       "triggerID" : "d02272061d2264ee74d67da5d9f0e1524f1c7d52",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d02272061d2264ee74d67da5d9f0e1524f1c7d52",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5684",
       "triggerID" : "675468974",
       "triggerType" : "MANUAL"
     }, {
       "hash" : "d02272061d2264ee74d67da5d9f0e1524f1c7d52",
       "status" : "FAILURE",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5657",
       "triggerID" : "675468974",
       "triggerType" : "MANUAL"
     } ]
   }-->
   ## CI report:
   
   * d02272061d2264ee74d67da5d9f0e1524f1c7d52 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5684) Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5657) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] fsk119 commented on a change in pull request #12966: [FLINK-17427][table sql/planner]Support SupportsPartitionPushDown in …

Posted by GitBox <gi...@apache.org>.
fsk119 commented on a change in pull request #12966:
URL: https://github.com/apache/flink/pull/12966#discussion_r467745487



##########
File path: flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/plan/rules/logical/PushPartitionIntoTableSourceScanRule.java
##########
@@ -0,0 +1,325 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.plan.rules.logical;
+
+import org.apache.flink.table.api.TableException;
+import org.apache.flink.table.catalog.Catalog;
+import org.apache.flink.table.catalog.CatalogPartitionSpec;
+import org.apache.flink.table.catalog.CatalogTable;
+import org.apache.flink.table.catalog.ObjectIdentifier;
+import org.apache.flink.table.catalog.ObjectPath;
+import org.apache.flink.table.catalog.exceptions.CatalogException;
+import org.apache.flink.table.catalog.exceptions.PartitionNotExistException;
+import org.apache.flink.table.catalog.exceptions.TableNotExistException;
+import org.apache.flink.table.catalog.exceptions.TableNotPartitionedException;
+import org.apache.flink.table.catalog.stats.CatalogColumnStatistics;
+import org.apache.flink.table.catalog.stats.CatalogTableStatistics;
+import org.apache.flink.table.connector.source.DynamicTableSource;
+import org.apache.flink.table.connector.source.abilities.SupportsPartitionPushDown;
+import org.apache.flink.table.expressions.Expression;
+import org.apache.flink.table.expressions.ResolvedExpression;
+import org.apache.flink.table.plan.stats.TableStats;
+import org.apache.flink.table.planner.calcite.FlinkContext;
+import org.apache.flink.table.planner.calcite.FlinkTypeFactory;
+import org.apache.flink.table.planner.plan.schema.TableSourceTable;
+import org.apache.flink.table.planner.plan.stats.FlinkStatistic;
+import org.apache.flink.table.planner.plan.utils.FlinkRelOptUtil;
+import org.apache.flink.table.planner.plan.utils.PartitionPruner;
+import org.apache.flink.table.planner.plan.utils.RexNodeExtractor;
+import org.apache.flink.table.planner.plan.utils.RexNodeToExpressionConverter;
+import org.apache.flink.table.planner.utils.CatalogTableStatisticsConverter;
+import org.apache.flink.table.types.logical.LogicalType;
+
+import org.apache.calcite.plan.RelOptRule;
+import org.apache.calcite.plan.RelOptRuleCall;
+import org.apache.calcite.rel.core.Filter;
+import org.apache.calcite.rel.logical.LogicalTableScan;
+import org.apache.calcite.rel.type.RelDataType;
+import org.apache.calcite.rex.RexBuilder;
+import org.apache.calcite.rex.RexInputRef;
+import org.apache.calcite.rex.RexNode;
+import org.apache.calcite.rex.RexShuttle;
+import org.apache.calcite.rex.RexUtil;
+import org.apache.calcite.tools.RelBuilder;
+
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+import java.util.Map;
+import java.util.Optional;
+import java.util.TimeZone;
+import java.util.function.Function;
+import java.util.stream.Collectors;
+
+import scala.Option;
+import scala.Tuple2;
+import scala.collection.JavaConversions;
+import scala.collection.Seq;
+
+/**
+ * Planner rule that tries to push partitions evaluated by the filter condition into a {@link LogicalTableScan}.
+ */
+public class PushPartitionIntoTableSourceScanRule extends RelOptRule {
+	public static final PushPartitionIntoTableSourceScanRule INSTANCE = new PushPartitionIntoTableSourceScanRule();
+
+	public PushPartitionIntoTableSourceScanRule(){
+		super(operand(Filter.class,
+				operand(LogicalTableScan.class, none())),
+			"PushPartitionTableSourceScanRule");
+	}
+
+	@Override
+	public boolean matches(RelOptRuleCall call) {
+		Filter filter = call.rel(0);
+		if (filter.getCondition() == null) {
+			return false;
+		}
+		TableSourceTable tableSourceTable = call.rel(1).getTable().unwrap(TableSourceTable.class);
+		if (tableSourceTable == null){
+			return false;
+		}
+		DynamicTableSource dynamicTableSource = tableSourceTable.tableSource();
+		if (!(dynamicTableSource instanceof SupportsPartitionPushDown)) {
+			return false;
+		}
+		CatalogTable catalogTable = tableSourceTable.catalogTable();
+		if (!catalogTable.isPartitioned() || catalogTable.getPartitionKeys().isEmpty()) {
+			return false;
+		}
+		return Arrays.stream(tableSourceTable.extraDigests()).noneMatch(digest -> digest.startsWith("partitions=["));
+	}
+
+	@Override
+	public void onMatch(RelOptRuleCall call) {
+		Filter filter = call.rel(0);
+		LogicalTableScan scan = call.rel(1);
+		FlinkContext context = call.getPlanner().getContext().unwrap(FlinkContext.class);
+		TableSourceTable tableSourceTable = scan.getTable().unwrap(TableSourceTable.class);
+
+		// build pruner
+		RelDataType inputFieldTypes = filter.getInput().getRowType();
+		List<String> inputFieldNames = inputFieldTypes.getFieldNames();
+		List<String> partitionFieldNames = tableSourceTable.catalogTable().getPartitionKeys();
+		RelBuilder relBuilder = call.builder();
+		RexBuilder rexBuilder = relBuilder.getRexBuilder();
+		Tuple2<Seq<RexNode>, Seq<RexNode>> allPredicates = RexNodeExtractor.extractPartitionPredicateList(
+			filter.getCondition(),
+			FlinkRelOptUtil.getMaxCnfNodeCount(scan),
+			inputFieldNames.toArray(new String[0]),
+			rexBuilder,
+			partitionFieldNames.toArray(new String[0])
+			);
+		RexNode partitionPredicate = RexUtil.composeConjunction(rexBuilder, JavaConversions.seqAsJavaList(allPredicates._1));
+
+		if (partitionPredicate.isAlwaysTrue()){
+			return;
+		}
+
+		List<LogicalType> partitionFieldTypes = partitionFieldNames.stream().map(name -> {
+			int index = inputFieldNames.indexOf(name);
+			if (index < 0) {
+				throw new TableException(String.format("Partitioned key '%s' isn't found in input columns. " +
+					"Validator should have checked that.", name));
+			}
+			return inputFieldTypes.getFieldList().get(index).getType(); })
+			.map(FlinkTypeFactory::toLogicalType).collect(Collectors.toList());
+
+		RexNode finalPartitionPredicate = adjustPartitionPredicate(inputFieldNames, partitionFieldNames, partitionPredicate);
+		Function<List<Map<String, String>>, List<Map<String, String>>> defaultPruner = partitions -> PartitionPruner.prunePartitions(
+			context.getTableConfig(),
+			partitionFieldNames.toArray(new String[0]),
+			partitionFieldTypes.toArray(new LogicalType[0]),
+			partitions,
+			finalPartitionPredicate);
+
+		// get partitions from table/catalog and prune
+		Optional<Catalog> catalogOptional = context.getCatalogManager().getCatalog(tableSourceTable.tableIdentifier().getCatalogName());
+		List<Map<String, String>> remainingPartitions = null;
+		Optional<List<Map<String, String>>> optionalPartitions;
+		// fields to read partitions from catalog and build new statistic
+		DynamicTableSource dynamicTableSource = tableSourceTable.tableSource().copy();
+		ObjectIdentifier identifier = tableSourceTable.tableIdentifier();
+		ObjectPath tablePath = identifier.toObjectPath();
+		try {
+			optionalPartitions = ((SupportsPartitionPushDown) dynamicTableSource).listPartitions();
+			if (optionalPartitions.isPresent() && !optionalPartitions.get().isEmpty()) {
+				remainingPartitions = defaultPruner.apply(optionalPartitions.get());
+			}
+		} catch (UnsupportedOperationException e) {
+			// check whether the catalog is available;
+			// we read partitions from the catalog if the table source doesn't support listPartitions.
+			if (!catalogOptional.isPresent()) {
+				throw new TableException(
+					String.format("Table %s must come from a catalog, but %s is not a catalog",
+						identifier.asSummaryString(), identifier.getCatalogName()), e);
+			}
+			try {
+				optionalPartitions = readPartitionFromCatalogAndPrune(
+					context,
+					catalogOptional.get(),
+					identifier,
+					inputFieldNames,
+					allPredicates._1(),
+					defaultPruner
+				);
+				if (optionalPartitions.isPresent() && !optionalPartitions.get().isEmpty()) {
+					remainingPartitions = optionalPartitions.get();
+				}
+			} catch (TableNotExistException tableNotExistException) {
+				throw new TableException(String.format("Table %s is not found in catalog.", identifier.asSummaryString()), e);
+			} catch (TableNotPartitionedException tableNotPartitionedException) {
+				remainingPartitions = null;
+			}
+		}
+		if (remainingPartitions != null) {
+			((SupportsPartitionPushDown) dynamicTableSource).applyPartitions(remainingPartitions);
+		}
+
+		// build new statistic
+		TableStats newTableStat = null;
+		Optional<TableStats> partitionStats;
+		if (remainingPartitions != null && catalogOptional.isPresent()) {
+			for (Map<String, String> partition: remainingPartitions) {
+				partitionStats = getPartitionStats(catalogOptional.get(), tablePath, partition);
+				if (!partitionStats.isPresent()) {
+					// clear all information before
+					newTableStat = null;
+					break;
+				} else {
+					newTableStat = newTableStat == null ? partitionStats.get() : newTableStat.merge(partitionStats.get());
+				}
+			}
+		}
+		FlinkStatistic newStatistic = FlinkStatistic.builder()
+			.statistic(tableSourceTable.getStatistic())
+			.tableStats(newTableStat)
+			.build();
+
+		String extraDigest = remainingPartitions == null ? "partitions=[]" :
+			("partitions=[" +
+				String.join(", ", remainingPartitions
+					.stream()
+					.map(Object::toString)
+					.toArray(String[]::new)) +
+				"]");
+		TableSourceTable newTableSourceTable = tableSourceTable.copy(dynamicTableSource, newStatistic, new String[]{extraDigest});
+
+		LogicalTableScan newScan = LogicalTableScan.create(scan.getCluster(), newTableSourceTable, scan.getHints());
+
+		RexNode nonPartitionPredicate = RexUtil.composeConjunction(rexBuilder, JavaConversions.seqAsJavaList(allPredicates._2()));
+		if (nonPartitionPredicate.isAlwaysTrue()) {
+			call.transformTo(newScan);
+		} else {
+			Filter newFilter = filter.copy(filter.getTraitSet(), newScan, nonPartitionPredicate);
+			call.transformTo(newFilter);
+		}
+	}
+
+	/**
+	 * Adjusts the partition field reference index so that the predicate can evaluate the partition values.
+	 * E.g. the original input fields are: a, b, c, p, where p is the partition field, and the partition values
+	 * are: List(Map("p" -> "1"), Map("p" -> "2"), Map("p" -> "3")). If the original partition
+	 * predicate is ($3 > 1), then after adjusting, the new predicate is ($0 > 1),
+	 * and ($0 > 1) is used to evaluate the partition values (row(1), row(2), row(3)).
+	 */
+	private RexNode adjustPartitionPredicate(List<String> inputFieldNames, List<String> partitionFieldNames, RexNode partitionPredicate) {
+		return partitionPredicate.accept(new RexShuttle(){
+			@Override
+			public RexNode visitInputRef(RexInputRef inputRef) {
+				int index = inputRef.getIndex();
+				String fieldName = inputFieldNames.get(index);
+				int newIndex = partitionFieldNames.indexOf(fieldName);
+				if (newIndex < 0) {
+					throw new TableException(String.format("Field name '%s' isn't found in partitioned columns." +
+						" Validator should have checked that.", fieldName));
+				}
+				if (newIndex == index){
+					return inputRef;
+				} else {
+					return new RexInputRef(newIndex, inputRef.getType());
+				}
+			}
+		});
+	}
+
+	private Optional<List<Map<String, String>>> readPartitionFromCatalogAndPrune(
+			FlinkContext context,
+			Catalog catalog,
+			ObjectIdentifier tableIdentifier,
+			List<String> allFieldNames,
+			Seq<RexNode> partitionPredicate,
+			Function<List<Map<String, String>>, List<Map<String, String>>> pruner) throws TableNotExistException, TableNotPartitionedException{
+		RexNodeToExpressionConverter converter = new RexNodeToExpressionConverter(
+			allFieldNames.toArray(new String[0]),
+			context.getFunctionCatalog(),
+			context.getCatalogManager(),
+			TimeZone.getTimeZone(context.getTableConfig().getLocalTimeZone()));
+		ArrayList<Expression> partitionFilters = new ArrayList<>();
+		Option<ResolvedExpression> subExpr;
+		for (RexNode node: JavaConversions.seqAsJavaList(partitionPredicate)) {
+			subExpr = node.accept(converter);
+			if (!subExpr.isEmpty()) {

Review comment:
       We only use `subExpr` to hold the value computed from `node`; we already check whether `subExpr` is empty at line 276.




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #12966: [FLINK-17427][table sql/planner]Support SupportsPartitionPushDown in …

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #12966:
URL: https://github.com/apache/flink/pull/12966#issuecomment-662861328


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "0f19df4d1fcbb4093b69d772628b67b81ebb443a",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4759",
       "triggerID" : "0f19df4d1fcbb4093b69d772628b67b81ebb443a",
       "triggerType" : "PUSH"
     }, {
       "hash" : "6697110fb14fef778707f74b66227df2953c487c",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5107",
       "triggerID" : "6697110fb14fef778707f74b66227df2953c487c",
       "triggerType" : "PUSH"
     }, {
       "hash" : "1790d93b79cfdfc3f65a7805e444699736f80d93",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5122",
       "triggerID" : "1790d93b79cfdfc3f65a7805e444699736f80d93",
       "triggerType" : "PUSH"
     }, {
       "hash" : "9a05a030c661def4a45b36370dbcfa5e786ed8dc",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5147",
       "triggerID" : "9a05a030c661def4a45b36370dbcfa5e786ed8dc",
       "triggerType" : "PUSH"
     }, {
       "hash" : "97bb191e646a7ddc62d105e08a7473c8e5561160",
       "status" : "FAILURE",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5455",
       "triggerID" : "97bb191e646a7ddc62d105e08a7473c8e5561160",
       "triggerType" : "PUSH"
     }, {
       "hash" : "eea5ebcd848bd4e554134096073cbd88d9725395",
       "status" : "UNKNOWN",
       "url" : "TBD",
       "triggerID" : "eea5ebcd848bd4e554134096073cbd88d9725395",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * 1790d93b79cfdfc3f65a7805e444699736f80d93 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5122) 
   * 97bb191e646a7ddc62d105e08a7473c8e5561160 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5455) 
   * eea5ebcd848bd4e554134096073cbd88d9725395 UNKNOWN
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] godfreyhe commented on a change in pull request #12966: [FLINK-17427][table sql/planner]Support SupportsPartitionPushDown in …

Posted by GitBox <gi...@apache.org>.
godfreyhe commented on a change in pull request #12966:
URL: https://github.com/apache/flink/pull/12966#discussion_r467731059



##########
File path: flink-table/flink-table-planner-blink/src/test/java/org/apache/flink/table/planner/plan/rules/logical/PushPartitionIntoTableSourceScanRuleTest.java
##########
@@ -0,0 +1,85 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.plan.rules.logical;
+
+import org.apache.flink.table.planner.calcite.CalciteConfig;
+import org.apache.flink.table.planner.plan.optimize.program.BatchOptimizeContext;
+import org.apache.flink.table.planner.plan.optimize.program.FlinkBatchProgram;
+import org.apache.flink.table.planner.plan.optimize.program.FlinkHepRuleSetProgramBuilder;
+import org.apache.flink.table.planner.plan.optimize.program.HEP_RULES_EXECUTION_TYPE;
+import org.apache.flink.table.planner.utils.TableConfigUtils;
+
+import org.apache.calcite.plan.hep.HepMatchOrder;
+import org.apache.calcite.rel.rules.FilterProjectTransposeRule;
+import org.apache.calcite.tools.RuleSets;
+
+/**
+ * Test for {@link PushPartitionIntoTableSourceScanRule}.
+ */
+public class PushPartitionIntoTableSourceScanRuleTest extends PushPartitionIntoLegacyTableSourceScanRuleTest{

Review comment:
       It seems a lot of the logic in `PushPartitionIntoTableSourceScanRule` is not covered, such as listing partitions by filter, listing partitions without a filter, etc.




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #12966: [FLINK-17427][table sql/planner]Support SupportsPartitionPushDown in …

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #12966:
URL: https://github.com/apache/flink/pull/12966#issuecomment-662861328


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "0f19df4d1fcbb4093b69d772628b67b81ebb443a",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4759",
       "triggerID" : "0f19df4d1fcbb4093b69d772628b67b81ebb443a",
       "triggerType" : "PUSH"
     }, {
       "hash" : "6697110fb14fef778707f74b66227df2953c487c",
       "status" : "SUCCESS",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5107",
       "triggerID" : "6697110fb14fef778707f74b66227df2953c487c",
       "triggerType" : "PUSH"
     }, {
       "hash" : "1790d93b79cfdfc3f65a7805e444699736f80d93",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5122",
       "triggerID" : "1790d93b79cfdfc3f65a7805e444699736f80d93",
       "triggerType" : "PUSH"
     }, {
       "hash" : "9a05a030c661def4a45b36370dbcfa5e786ed8dc",
       "status" : "UNKNOWN",
       "url" : "TBD",
       "triggerID" : "9a05a030c661def4a45b36370dbcfa5e786ed8dc",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * 6697110fb14fef778707f74b66227df2953c487c Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5107) 
   * 1790d93b79cfdfc3f65a7805e444699736f80d93 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5122) 
   * 9a05a030c661def4a45b36370dbcfa5e786ed8dc UNKNOWN
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] fsk119 commented on pull request #12966: [FLINK-17427][table sql/planner]Support SupportsPartitionPushDown in …

Posted by GitBox <gi...@apache.org>.
fsk119 commented on pull request #12966:
URL: https://github.com/apache/flink/pull/12966#issuecomment-663364240


   cc @godfreyhe 


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #12966: [FLINK-17427][table sql/planner]Support SupportsPartitionPushDown in …

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #12966:
URL: https://github.com/apache/flink/pull/12966#issuecomment-662861328


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "0f19df4d1fcbb4093b69d772628b67b81ebb443a",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4759",
       "triggerID" : "0f19df4d1fcbb4093b69d772628b67b81ebb443a",
       "triggerType" : "PUSH"
     }, {
       "hash" : "6697110fb14fef778707f74b66227df2953c487c",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5107",
       "triggerID" : "6697110fb14fef778707f74b66227df2953c487c",
       "triggerType" : "PUSH"
     }, {
       "hash" : "1790d93b79cfdfc3f65a7805e444699736f80d93",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5122",
       "triggerID" : "1790d93b79cfdfc3f65a7805e444699736f80d93",
       "triggerType" : "PUSH"
     }, {
       "hash" : "9a05a030c661def4a45b36370dbcfa5e786ed8dc",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5147",
       "triggerID" : "9a05a030c661def4a45b36370dbcfa5e786ed8dc",
       "triggerType" : "PUSH"
     }, {
       "hash" : "97bb191e646a7ddc62d105e08a7473c8e5561160",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5455",
       "triggerID" : "97bb191e646a7ddc62d105e08a7473c8e5561160",
       "triggerType" : "PUSH"
     }, {
       "hash" : "eea5ebcd848bd4e554134096073cbd88d9725395",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5458",
       "triggerID" : "eea5ebcd848bd4e554134096073cbd88d9725395",
       "triggerType" : "PUSH"
     }, {
       "hash" : "50079cc5a0bbe71de746f78a22180febf9a35e57",
       "status" : "FAILURE",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5467",
       "triggerID" : "50079cc5a0bbe71de746f78a22180febf9a35e57",
       "triggerType" : "PUSH"
     }, {
       "hash" : "b1b4efe76749cd0dfa6b4aea94ca2ce9f0d15be1",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5469",
       "triggerID" : "b1b4efe76749cd0dfa6b4aea94ca2ce9f0d15be1",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * 1790d93b79cfdfc3f65a7805e444699736f80d93 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5122) 
   * 50079cc5a0bbe71de746f78a22180febf9a35e57 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5467) 
   * b1b4efe76749cd0dfa6b4aea94ca2ce9f0d15be1 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5469) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #12966: [FLINK-17427][table sql/planner]Support SupportsPartitionPushDown in …

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #12966:
URL: https://github.com/apache/flink/pull/12966#issuecomment-662861328


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "0f19df4d1fcbb4093b69d772628b67b81ebb443a",
       "status" : "SUCCESS",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4759",
       "triggerID" : "0f19df4d1fcbb4093b69d772628b67b81ebb443a",
       "triggerType" : "PUSH"
     }, {
       "hash" : "6697110fb14fef778707f74b66227df2953c487c",
       "status" : "UNKNOWN",
       "url" : "TBD",
       "triggerID" : "6697110fb14fef778707f74b66227df2953c487c",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * 0f19df4d1fcbb4093b69d772628b67b81ebb443a Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4759) 
   * 6697110fb14fef778707f74b66227df2953c487c UNKNOWN
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] godfreyhe commented on a change in pull request #12966: [FLINK-17427][table sql/planner]Support SupportsPartitionPushDown in …

Posted by GitBox <gi...@apache.org>.
godfreyhe commented on a change in pull request #12966:
URL: https://github.com/apache/flink/pull/12966#discussion_r467752120



##########
File path: flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/plan/rules/logical/PushPartitionIntoTableSourceScanRule.java
##########
@@ -0,0 +1,325 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.plan.rules.logical;
+
+import org.apache.flink.table.api.TableException;
+import org.apache.flink.table.catalog.Catalog;
+import org.apache.flink.table.catalog.CatalogPartitionSpec;
+import org.apache.flink.table.catalog.CatalogTable;
+import org.apache.flink.table.catalog.ObjectIdentifier;
+import org.apache.flink.table.catalog.ObjectPath;
+import org.apache.flink.table.catalog.exceptions.CatalogException;
+import org.apache.flink.table.catalog.exceptions.PartitionNotExistException;
+import org.apache.flink.table.catalog.exceptions.TableNotExistException;
+import org.apache.flink.table.catalog.exceptions.TableNotPartitionedException;
+import org.apache.flink.table.catalog.stats.CatalogColumnStatistics;
+import org.apache.flink.table.catalog.stats.CatalogTableStatistics;
+import org.apache.flink.table.connector.source.DynamicTableSource;
+import org.apache.flink.table.connector.source.abilities.SupportsPartitionPushDown;
+import org.apache.flink.table.expressions.Expression;
+import org.apache.flink.table.expressions.ResolvedExpression;
+import org.apache.flink.table.plan.stats.TableStats;
+import org.apache.flink.table.planner.calcite.FlinkContext;
+import org.apache.flink.table.planner.calcite.FlinkTypeFactory;
+import org.apache.flink.table.planner.plan.schema.TableSourceTable;
+import org.apache.flink.table.planner.plan.stats.FlinkStatistic;
+import org.apache.flink.table.planner.plan.utils.FlinkRelOptUtil;
+import org.apache.flink.table.planner.plan.utils.PartitionPruner;
+import org.apache.flink.table.planner.plan.utils.RexNodeExtractor;
+import org.apache.flink.table.planner.plan.utils.RexNodeToExpressionConverter;
+import org.apache.flink.table.planner.utils.CatalogTableStatisticsConverter;
+import org.apache.flink.table.types.logical.LogicalType;
+
+import org.apache.calcite.plan.RelOptRule;
+import org.apache.calcite.plan.RelOptRuleCall;
+import org.apache.calcite.rel.core.Filter;
+import org.apache.calcite.rel.logical.LogicalTableScan;
+import org.apache.calcite.rel.type.RelDataType;
+import org.apache.calcite.rex.RexBuilder;
+import org.apache.calcite.rex.RexInputRef;
+import org.apache.calcite.rex.RexNode;
+import org.apache.calcite.rex.RexShuttle;
+import org.apache.calcite.rex.RexUtil;
+import org.apache.calcite.tools.RelBuilder;
+
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+import java.util.Map;
+import java.util.Optional;
+import java.util.TimeZone;
+import java.util.function.Function;
+import java.util.stream.Collectors;
+
+import scala.Option;
+import scala.Tuple2;
+import scala.collection.JavaConversions;
+import scala.collection.Seq;
+
+/**
+ * Planner rule that tries to push partitions evaluated by the filter condition into a {@link LogicalTableScan}.
+ */
+public class PushPartitionIntoTableSourceScanRule extends RelOptRule {
+	public static final PushPartitionIntoTableSourceScanRule INSTANCE = new PushPartitionIntoTableSourceScanRule();
+
+	public PushPartitionIntoTableSourceScanRule(){
+		super(operand(Filter.class,
+				operand(LogicalTableScan.class, none())),
+			"PushPartitionTableSourceScanRule");
+	}
+
+	@Override
+	public boolean matches(RelOptRuleCall call) {
+		Filter filter = call.rel(0);
+		if (filter.getCondition() == null) {
+			return false;
+		}
+		TableSourceTable tableSourceTable = call.rel(1).getTable().unwrap(TableSourceTable.class);
+		if (tableSourceTable == null){
+			return false;
+		}
+		DynamicTableSource dynamicTableSource = tableSourceTable.tableSource();
+		if (!(dynamicTableSource instanceof SupportsPartitionPushDown)) {
+			return false;
+		}
+		CatalogTable catalogTable = tableSourceTable.catalogTable();
+		if (!catalogTable.isPartitioned() || catalogTable.getPartitionKeys().isEmpty()) {
+			return false;
+		}
+		return Arrays.stream(tableSourceTable.extraDigests()).noneMatch(digest -> digest.startsWith("partitions=["));
+	}
+
+	@Override
+	public void onMatch(RelOptRuleCall call) {
+		Filter filter = call.rel(0);
+		LogicalTableScan scan = call.rel(1);
+		FlinkContext context = call.getPlanner().getContext().unwrap(FlinkContext.class);
+		TableSourceTable tableSourceTable = scan.getTable().unwrap(TableSourceTable.class);
+
+		// build pruner
+		RelDataType inputFieldTypes = filter.getInput().getRowType();
+		List<String> inputFieldNames = inputFieldTypes.getFieldNames();
+		List<String> partitionFieldNames = tableSourceTable.catalogTable().getPartitionKeys();
+		RelBuilder relBuilder = call.builder();
+		RexBuilder rexBuilder = relBuilder.getRexBuilder();
+		Tuple2<Seq<RexNode>, Seq<RexNode>> allPredicates = RexNodeExtractor.extractPartitionPredicateList(
+			filter.getCondition(),
+			FlinkRelOptUtil.getMaxCnfNodeCount(scan),
+			inputFieldNames.toArray(new String[0]),
+			rexBuilder,
+			partitionFieldNames.toArray(new String[0])
+			);
+		RexNode partitionPredicate = RexUtil.composeConjunction(rexBuilder, JavaConversions.seqAsJavaList(allPredicates._1));
+
+		if (partitionPredicate.isAlwaysTrue()){
+			return;
+		}
+
+		List<LogicalType> partitionFieldTypes = partitionFieldNames.stream().map(name -> {
+			int index = inputFieldNames.indexOf(name);
+			if (index < 0) {
+				throw new TableException(String.format("Partitioned key '%s' isn't found in input columns. " +
+					"Validator should have checked that.", name));
+			}
+			return inputFieldTypes.getFieldList().get(index).getType(); })
+			.map(FlinkTypeFactory::toLogicalType).collect(Collectors.toList());
+
+		RexNode finalPartitionPredicate = adjustPartitionPredicate(inputFieldNames, partitionFieldNames, partitionPredicate);
+		Function<List<Map<String, String>>, List<Map<String, String>>> defaultPruner = partitions -> PartitionPruner.prunePartitions(
+			context.getTableConfig(),
+			partitionFieldNames.toArray(new String[0]),
+			partitionFieldTypes.toArray(new LogicalType[0]),
+			partitions,
+			finalPartitionPredicate);
+
+		// get partitions from table/catalog and prune
+		Optional<Catalog> catalogOptional = context.getCatalogManager().getCatalog(tableSourceTable.tableIdentifier().getCatalogName());
+		List<Map<String, String>> remainingPartitions = null;
+		Optional<List<Map<String, String>>> optionalPartitions;
+		// fields to read partitions from catalog and build new statistic
+		DynamicTableSource dynamicTableSource = tableSourceTable.tableSource().copy();
+		ObjectIdentifier identifier = tableSourceTable.tableIdentifier();
+		ObjectPath tablePath = identifier.toObjectPath();
+		try {
+			optionalPartitions = ((SupportsPartitionPushDown) dynamicTableSource).listPartitions();
+			if (optionalPartitions.isPresent() && !optionalPartitions.get().isEmpty()) {
+				remainingPartitions = defaultPruner.apply(optionalPartitions.get());
+			}
+		} catch (UnsupportedOperationException e) {
+			// check whether the catalog is available;
+			// we read partitions from the catalog if the table source doesn't support listPartitions.
+			if (!catalogOptional.isPresent()) {
+				throw new TableException(
+					String.format("Table %s must come from a catalog, but %s is not a catalog",
+						identifier.asSummaryString(), identifier.getCatalogName()), e);
+			}
+			try {
+				optionalPartitions = readPartitionFromCatalogAndPrune(
+					context,
+					catalogOptional.get(),
+					identifier,
+					inputFieldNames,
+					allPredicates._1(),
+					defaultPruner
+				);
+				if (optionalPartitions.isPresent() && !optionalPartitions.get().isEmpty()) {
+					remainingPartitions = optionalPartitions.get();
+				}
+			} catch (TableNotExistException tableNotExistException) {
+				throw new TableException(String.format("Table %s is not found in catalog.", identifier.asSummaryString()), e);
+			} catch (TableNotPartitionedException tableNotPartitionedException) {
+				remainingPartitions = null;
+			}
+		}
+		if (remainingPartitions != null) {
+			((SupportsPartitionPushDown) dynamicTableSource).applyPartitions(remainingPartitions);
+		}
+
+		// build new statistic
+		TableStats newTableStat = null;
+		Optional<TableStats> partitionStats;
+		if (remainingPartitions != null && catalogOptional.isPresent()) {
+			for (Map<String, String> partition: remainingPartitions) {
+				partitionStats = getPartitionStats(catalogOptional.get(), tablePath, partition);
+				if (!partitionStats.isPresent()) {
+					// clear all information before
+					newTableStat = null;
+					break;
+				} else {
+					newTableStat = newTableStat == null ? partitionStats.get() : newTableStat.merge(partitionStats.get());
+				}
+			}
+		}
+		FlinkStatistic newStatistic = FlinkStatistic.builder()
+			.statistic(tableSourceTable.getStatistic())
+			.tableStats(newTableStat)
+			.build();
+
+		String extraDigest = remainingPartitions == null ? "partitions=[]" :
+			("partitions=[" +
+				String.join(", ", remainingPartitions
+					.stream()
+					.map(Object::toString)
+					.toArray(String[]::new)) +
+				"]");
+		TableSourceTable newTableSourceTable = tableSourceTable.copy(dynamicTableSource, newStatistic, new String[]{extraDigest});
+
+		LogicalTableScan newScan = LogicalTableScan.create(scan.getCluster(), newTableSourceTable, scan.getHints());
+
+		RexNode nonPartitionPredicate = RexUtil.composeConjunction(rexBuilder, JavaConversions.seqAsJavaList(allPredicates._2()));
+		if (nonPartitionPredicate.isAlwaysTrue()) {
+			call.transformTo(newScan);
+		} else {
+			Filter newFilter = filter.copy(filter.getTraitSet(), newScan, nonPartitionPredicate);
+			call.transformTo(newFilter);
+		}
+	}
+
+	/**
+	 * Adjusts the partition field reference index so that the predicate can evaluate the partition values.
+	 * E.g. the original input fields are: a, b, c, p, where p is the partition field, and the partition values
+	 * are: List(Map("p" -> "1"), Map("p" -> "2"), Map("p" -> "3")). If the original partition
+	 * predicate is ($3 > 1), then after adjusting, the new predicate is ($0 > 1),
+	 * and ($0 > 1) is used to evaluate the partition values (row(1), row(2), row(3)).
+	 */
+	private RexNode adjustPartitionPredicate(List<String> inputFieldNames, List<String> partitionFieldNames, RexNode partitionPredicate) {
+		return partitionPredicate.accept(new RexShuttle(){
+			@Override
+			public RexNode visitInputRef(RexInputRef inputRef) {
+				int index = inputRef.getIndex();
+				String fieldName = inputFieldNames.get(index);
+				int newIndex = partitionFieldNames.indexOf(fieldName);
+				if (newIndex < 0) {
+					throw new TableException(String.format("Field name '%s' isn't found in partitioned columns." +
+						" Validator should have checked that.", fieldName));
+				}
+				if (newIndex == index){
+					return inputRef;
+				} else {
+					return new RexInputRef(newIndex, inputRef.getType());
+				}
+			}
+		});
+	}
+
+	private Optional<List<Map<String, String>>> readPartitionFromCatalogAndPrune(
+			FlinkContext context,
+			Catalog catalog,
+			ObjectIdentifier tableIdentifier,
+			List<String> allFieldNames,
+			Seq<RexNode> partitionPredicate,
+			Function<List<Map<String, String>>, List<Map<String, String>>> pruner) throws TableNotExistException, TableNotPartitionedException{
+		RexNodeToExpressionConverter converter = new RexNodeToExpressionConverter(
+			allFieldNames.toArray(new String[0]),
+			context.getFunctionCatalog(),
+			context.getCatalogManager(),
+			TimeZone.getTimeZone(context.getTableConfig().getLocalTimeZone()));
+		ArrayList<Expression> partitionFilters = new ArrayList<>();
+		Option<ResolvedExpression> subExpr;
+		for (RexNode node: JavaConversions.seqAsJavaList(partitionPredicate)) {
+			subExpr = node.accept(converter);
+			if (!subExpr.isEmpty()) {

Review comment:
       If `subExpr` is empty, its corresponding sub-filter is dropped and the result is then incorrect. There is no test coverage for this case...
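
       One possible way to avoid silently dropping such a sub-filter (just a sketch against the local variables of `readPartitionFromCatalogAndPrune`, imports as already present in the file, not necessarily the right fix) would be to give up on catalog-side filtering as soon as a predicate cannot be converted, and fall back to pruning all partitions with the default pruner:

       ```java
       for (RexNode node : JavaConversions.seqAsJavaList(partitionPredicate)) {
       	Option<ResolvedExpression> subExpr = node.accept(converter);
       	if (subExpr.isEmpty()) {
       		// An unconvertible sub-predicate: instead of dropping it, list all
       		// partitions and let the RexNode-based pruner evaluate the full predicate.
       		return Optional.of(pruner.apply(
       			catalog.listPartitions(tableIdentifier.toObjectPath()).stream()
       				.map(CatalogPartitionSpec::getPartitionSpec)
       				.collect(Collectors.toList())));
       	}
       	partitionFilters.add(subExpr.get());
       }
       ```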




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #12966: [FLINK-17427][table sql/planner]Support SupportsPartitionPushDown in …

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #12966:
URL: https://github.com/apache/flink/pull/12966#issuecomment-662861328


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "0f19df4d1fcbb4093b69d772628b67b81ebb443a",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4759",
       "triggerID" : "0f19df4d1fcbb4093b69d772628b67b81ebb443a",
       "triggerType" : "PUSH"
     }, {
       "hash" : "6697110fb14fef778707f74b66227df2953c487c",
       "status" : "SUCCESS",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5107",
       "triggerID" : "6697110fb14fef778707f74b66227df2953c487c",
       "triggerType" : "PUSH"
     }, {
       "hash" : "1790d93b79cfdfc3f65a7805e444699736f80d93",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5122",
       "triggerID" : "1790d93b79cfdfc3f65a7805e444699736f80d93",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * 6697110fb14fef778707f74b66227df2953c487c Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5107) 
   * 1790d93b79cfdfc3f65a7805e444699736f80d93 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5122) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>





[GitHub] [flink] godfreyhe commented on a change in pull request #12966: [FLINK-17427][table sql/planner]Support SupportsPartitionPushDown in …

Posted by GitBox <gi...@apache.org>.
godfreyhe commented on a change in pull request #12966:
URL: https://github.com/apache/flink/pull/12966#discussion_r471957471



##########
File path: flink-table/flink-table-planner-blink/src/test/scala/org/apache/flink/table/planner/plan/rules/logical/PushPartitionIntoLegacyTableSourceScanRuleTest.scala
##########
@@ -17,26 +17,30 @@
  */
 package org.apache.flink.table.planner.plan.rules.logical
 
+import java.util

Review comment:
       nit: reorder the imports







[GitHub] [flink] flinkbot edited a comment on pull request #12966: [FLINK-17427][table sql/planner]Support SupportsPartitionPushDown in …

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #12966:
URL: https://github.com/apache/flink/pull/12966#issuecomment-662861328


   ## CI report:
   
   * 6697110fb14fef778707f74b66227df2953c487c Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5107) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>





[GitHub] [flink] flinkbot edited a comment on pull request #12966: [FLINK-17427][table sql/planner]Support SupportsPartitionPushDown in …

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #12966:
URL: https://github.com/apache/flink/pull/12966#issuecomment-662861328


   ## CI report:
   
   * 1790d93b79cfdfc3f65a7805e444699736f80d93 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5122) 
   * b1b4efe76749cd0dfa6b4aea94ca2ce9f0d15be1 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5469) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>





[GitHub] [flink] flinkbot edited a comment on pull request #12966: [FLINK-17427][table sql/planner]Support SupportsPartitionPushDown in …

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #12966:
URL: https://github.com/apache/flink/pull/12966#issuecomment-662861328


   ## CI report:
   
   * 1790d93b79cfdfc3f65a7805e444699736f80d93 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5122) 
   * 4642a985cde81555583a17880cd2462399338310 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5503) 
   * 984744723761b8124aa003f23e65d4bb484a73c7 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5504) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>





[GitHub] [flink] godfreyhe commented on a change in pull request #12966: [FLINK-17427][table sql/planner]Support SupportsPartitionPushDown in …

Posted by GitBox <gi...@apache.org>.
godfreyhe commented on a change in pull request #12966:
URL: https://github.com/apache/flink/pull/12966#discussion_r469829851



##########
File path: flink-table/flink-table-planner-blink/src/test/java/org/apache/flink/table/planner/factories/TestValuesTableFactory.java
##########
@@ -821,4 +848,125 @@ public String asSummaryString() {
 		}
 	}
 
+	// --------------------------------------------------------------------------------------------
+	// Table utils
+	// --------------------------------------------------------------------------------------------
+
+	/**
+	 * Utils for catalog and source to filter partition or row.
+	 * */
+	public static class FilterUtil {

Review comment:
       how about moving this class to `org.apache.flink.table.planner.utils`? That would make `TestValuesTableFactory` more lightweight.
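
A minimal sketch of the suggested shape, assuming the helper moves to `org.apache.flink.table.planner.utils` and exposes plain static methods; the class name `FilterUtils` and the elided method body are illustrative only:

```java
package org.apache.flink.table.planner.utils;

import org.apache.flink.table.expressions.ResolvedExpression;

import java.util.function.Function;

/** Test helper for evaluating filter predicates against partition specs or rows. */
public final class FilterUtils {

    private FilterUtils() {
        // static utility class, no instances
    }

    public static boolean isRetainedAfterApplyingFilterPredicates(
            ResolvedExpression predicate,
            Function<String, Comparable<?>> getter) {
        // same evaluation logic as FilterUtil in TestValuesTableFactory; omitted in this sketch
        throw new UnsupportedOperationException("sketch only");
    }
}
```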

##########
File path: flink-table/flink-table-planner-blink/src/test/java/org/apache/flink/table/planner/factories/TestValuesCatalog.java
##########
@@ -0,0 +1,114 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.factories;
+
+import org.apache.flink.table.api.TableException;
+import org.apache.flink.table.api.TableSchema;
+import org.apache.flink.table.catalog.CatalogBaseTable;
+import org.apache.flink.table.catalog.CatalogPartitionSpec;
+import org.apache.flink.table.catalog.GenericInMemoryCatalog;
+import org.apache.flink.table.catalog.ObjectPath;
+import org.apache.flink.table.catalog.exceptions.CatalogException;
+import org.apache.flink.table.catalog.exceptions.TableNotExistException;
+import org.apache.flink.table.catalog.exceptions.TableNotPartitionedException;
+import org.apache.flink.table.expressions.Expression;
+import org.apache.flink.table.expressions.ResolvedExpression;
+import org.apache.flink.table.types.DataType;
+import org.apache.flink.table.types.logical.BooleanType;
+import org.apache.flink.table.types.logical.CharType;
+import org.apache.flink.table.types.logical.DoubleType;
+import org.apache.flink.table.types.logical.IntType;
+import org.apache.flink.table.types.logical.LogicalType;
+import org.apache.flink.table.types.logical.VarCharType;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+import java.util.Optional;
+import java.util.function.Function;
+
+
+/**
+ * Use TestValuesCatalog to test partition push down.
+ * */
+public class TestValuesCatalog extends GenericInMemoryCatalog {
+	private boolean supportListPartitionByFilter;
+	public TestValuesCatalog(String name, String defaultDatabase, boolean supportListPartitionByFilter) {
+		super(name, defaultDatabase);
+		this.supportListPartitionByFilter = supportListPartitionByFilter;
+	}
+
+	@Override
+	public List<CatalogPartitionSpec> listPartitionsByFilter(ObjectPath tablePath, List<Expression> filters)
+			throws TableNotExistException, TableNotPartitionedException, CatalogException {
+		if (!supportListPartitionByFilter) {
+			throw new UnsupportedOperationException("TestValuesCatalog doesn't support list partition by filters");
+		}
+
+		List<CatalogPartitionSpec> partitions = listPartitions(tablePath);
+		if (partitions.isEmpty()) {
+			return partitions;
+		}
+
+		CatalogBaseTable table = this.getTable(tablePath);
+		TableSchema schema = table.getSchema();
+		TestValuesTableFactory.FilterUtil util = TestValuesTableFactory.FilterUtil.INSTANCE;
+		List<CatalogPartitionSpec> remainingPartitions = new ArrayList<>();
+		for (CatalogPartitionSpec partition : partitions) {
+			boolean isRetained = true;
+			Function<String, Comparable<?>> gettter = getGetter(partition.getPartitionSpec(), schema);
+			for (Expression predicate : filters) {
+				isRetained = util.isRetainedAfterApplyingFilterPredicates((ResolvedExpression) predicate, gettter);
+				if (!isRetained) {
+					break;
+				}
+			}
+			if (isRetained) {
+				remainingPartitions.add(partition);
+			}
+		}
+		return remainingPartitions;
+	}
+
+	private Function<String, Comparable<?>> getGetter(Map<String, String> spec, TableSchema schema) {

Review comment:
       rename it to `getValueGetter`?

##########
File path: flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/plan/rules/logical/PushPartitionIntoTableSourceScanRule.java
##########
@@ -0,0 +1,343 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.plan.rules.logical;
+
+import org.apache.flink.table.api.TableException;
+import org.apache.flink.table.catalog.Catalog;
+import org.apache.flink.table.catalog.CatalogPartitionSpec;
+import org.apache.flink.table.catalog.CatalogTable;
+import org.apache.flink.table.catalog.ObjectIdentifier;
+import org.apache.flink.table.catalog.ObjectPath;
+import org.apache.flink.table.catalog.exceptions.CatalogException;
+import org.apache.flink.table.catalog.exceptions.PartitionNotExistException;
+import org.apache.flink.table.catalog.exceptions.TableNotExistException;
+import org.apache.flink.table.catalog.exceptions.TableNotPartitionedException;
+import org.apache.flink.table.catalog.stats.CatalogColumnStatistics;
+import org.apache.flink.table.catalog.stats.CatalogTableStatistics;
+import org.apache.flink.table.connector.source.DynamicTableSource;
+import org.apache.flink.table.connector.source.abilities.SupportsPartitionPushDown;
+import org.apache.flink.table.expressions.Expression;
+import org.apache.flink.table.expressions.ResolvedExpression;
+import org.apache.flink.table.plan.stats.TableStats;
+import org.apache.flink.table.planner.calcite.FlinkContext;
+import org.apache.flink.table.planner.calcite.FlinkTypeFactory;
+import org.apache.flink.table.planner.plan.schema.TableSourceTable;
+import org.apache.flink.table.planner.plan.stats.FlinkStatistic;
+import org.apache.flink.table.planner.plan.utils.FlinkRelOptUtil;
+import org.apache.flink.table.planner.plan.utils.PartitionPruner;
+import org.apache.flink.table.planner.plan.utils.RexNodeExtractor;
+import org.apache.flink.table.planner.plan.utils.RexNodeToExpressionConverter;
+import org.apache.flink.table.planner.utils.CatalogTableStatisticsConverter;
+import org.apache.flink.table.types.logical.LogicalType;
+
+import org.apache.calcite.plan.RelOptRule;
+import org.apache.calcite.plan.RelOptRuleCall;
+import org.apache.calcite.rel.core.Filter;
+import org.apache.calcite.rel.logical.LogicalTableScan;
+import org.apache.calcite.rel.type.RelDataType;
+import org.apache.calcite.rex.RexBuilder;
+import org.apache.calcite.rex.RexInputRef;
+import org.apache.calcite.rex.RexNode;
+import org.apache.calcite.rex.RexShuttle;
+import org.apache.calcite.rex.RexUtil;
+import org.apache.calcite.tools.RelBuilder;
+
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+import java.util.Map;
+import java.util.Optional;
+import java.util.TimeZone;
+import java.util.function.Function;
+import java.util.stream.Collectors;
+
+import scala.Option;
+import scala.Tuple2;
+import scala.collection.JavaConversions;
+import scala.collection.Seq;
+
+/**
+ * Planner rule that tries to push partition evaluated by filter condition into a {@link LogicalTableScan}.
+*/
+public class PushPartitionIntoTableSourceScanRule extends RelOptRule {
+	public static final PushPartitionIntoTableSourceScanRule INSTANCE = new PushPartitionIntoTableSourceScanRule();
+
+	public PushPartitionIntoTableSourceScanRule() {
+		super(operand(Filter.class,
+				operand(LogicalTableScan.class, none())),
+			"PushPartitionTableSourceScanRule");

Review comment:
       "PushPartitionTableSourceScanRule" -> "PushPartitionIntoTableSourceScanRule"

##########
File path: flink-table/flink-table-planner-blink/src/test/java/org/apache/flink/table/planner/factories/TestValuesTableFactory.java
##########
@@ -629,35 +618,73 @@ public String asSummaryString() {
 		}
 
 		private Collection<RowData> convertToRowData(
-				Collection<Row> data,
+				Map<Map<String, String>, Collection<Row>> data,
 				int[] projectedFields,
 				DataStructureConverter converter) {
 			List<RowData> result = new ArrayList<>();
-			for (Row value : data) {
-				if (result.size() >= limit) {
-					return result;
-				}
-				if (isRetainedAfterApplyingFilterPredicates(value)) {
-					Row projectedRow;
-					if (projectedFields == null) {
-						projectedRow = value;
-					} else {
-						Object[] newValues = new Object[projectedFields.length];
-						for (int i = 0; i < projectedFields.length; ++i) {
-							newValues[i] = value.getField(projectedFields[i]);
+			List<Map<String, String>> keys = allPartitions.isEmpty() ?
+				Collections.singletonList(Collections.EMPTY_MAP) :
+				allPartitions;
+			FilterUtil util = FilterUtil.INSTANCE;
+			boolean isRetained = true;
+			for (Map<String, String> partition: keys) {
+				for (Row value : data.get(partition)) {
+					if (result.size() >= limit) {
+						return result;
+					}
+					if (filterPredicates != null && !filterPredicates.isEmpty()) {
+						for (ResolvedExpression predicate: filterPredicates) {
+							isRetained = util.isRetainedAfterApplyingFilterPredicates(predicate, getGetter(value));
+							if (!isRetained) {
+								break;
+							}
 						}
-						projectedRow = Row.of(newValues);
 					}
-					RowData rowData = (RowData) converter.toInternal(projectedRow);
-					if (rowData != null) {
-						rowData.setRowKind(value.getKind());
-						result.add(rowData);
+					if (isRetained) {
+						Row projectedRow;
+						if (projectedFields == null) {
+							projectedRow = value;
+						} else {
+							Object[] newValues = new Object[projectedFields.length];
+							for (int i = 0; i < projectedFields.length; ++i) {
+								newValues[i] = value.getField(projectedFields[i]);
+							}
+							projectedRow = Row.of(newValues);
+						}
+						RowData rowData = (RowData) converter.toInternal(projectedRow);
+						if (rowData != null) {
+							rowData.setRowKind(value.getKind());
+							result.add(rowData);
+						}
 					}
 				}
 			}
 			return result;
 		}
 
+		@Override
+		public Optional<List<Map<String, String>>> listPartitions() {
+			if (allPartitions.isEmpty()) {
+				throw new UnsupportedOperationException("Please use catalog to read partitions");
+			}
+			return Optional.of(allPartitions);
+		}
+
+		@Override
+		public void applyPartitions(List<Map<String, String>> remainingPartitions) {
+			// remainingPartition is non-nullable.
+			if (allPartitions.isEmpty()) {
+				// read partitions from catalog
+				if (!remainingPartitions.isEmpty()) {

Review comment:
       what if `remainingPartitions` is empty?
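
A minimal sketch of one way the test source could distinguish "pruned to nothing" from "not partitioned", using a hypothetical `prunedToEmpty` flag that is not part of the current PR; the read path would then short-circuit on this flag before iterating over `allPartitions`:

```java
// Hypothetical flag, for illustration only: marks that the planner pruned away
// every partition, which is different from "the table is not partitioned".
private boolean prunedToEmpty = false;

@Override
public void applyPartitions(List<Map<String, String>> remainingPartitions) {
    if (remainingPartitions.isEmpty()) {
        // No partition survived pruning: the scan must emit no rows at all,
        // instead of falling back to "allPartitions is empty, read everything".
        prunedToEmpty = true;
    } else {
        allPartitions = remainingPartitions;
    }
}
```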

##########
File path: flink-table/flink-table-planner-blink/src/test/java/org/apache/flink/table/planner/factories/TestValuesCatalog.java
##########
@@ -0,0 +1,114 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.factories;
+
+import org.apache.flink.table.api.TableException;
+import org.apache.flink.table.api.TableSchema;
+import org.apache.flink.table.catalog.CatalogBaseTable;
+import org.apache.flink.table.catalog.CatalogPartitionSpec;
+import org.apache.flink.table.catalog.GenericInMemoryCatalog;
+import org.apache.flink.table.catalog.ObjectPath;
+import org.apache.flink.table.catalog.exceptions.CatalogException;
+import org.apache.flink.table.catalog.exceptions.TableNotExistException;
+import org.apache.flink.table.catalog.exceptions.TableNotPartitionedException;
+import org.apache.flink.table.expressions.Expression;
+import org.apache.flink.table.expressions.ResolvedExpression;
+import org.apache.flink.table.types.DataType;
+import org.apache.flink.table.types.logical.BooleanType;
+import org.apache.flink.table.types.logical.CharType;
+import org.apache.flink.table.types.logical.DoubleType;
+import org.apache.flink.table.types.logical.IntType;
+import org.apache.flink.table.types.logical.LogicalType;
+import org.apache.flink.table.types.logical.VarCharType;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+import java.util.Optional;
+import java.util.function.Function;
+
+
+/**
+ * Use TestValuesCatalog to test partition push down.
+ * */
+public class TestValuesCatalog extends GenericInMemoryCatalog {
+	private boolean supportListPartitionByFilter;

Review comment:
       nit: make `supportListPartitionByFilter` final?

##########
File path: flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/plan/rules/logical/PushPartitionIntoTableSourceScanRule.java
##########
@@ -0,0 +1,343 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.plan.rules.logical;
+
+import org.apache.flink.table.api.TableException;
+import org.apache.flink.table.catalog.Catalog;
+import org.apache.flink.table.catalog.CatalogPartitionSpec;
+import org.apache.flink.table.catalog.CatalogTable;
+import org.apache.flink.table.catalog.ObjectIdentifier;
+import org.apache.flink.table.catalog.ObjectPath;
+import org.apache.flink.table.catalog.exceptions.CatalogException;
+import org.apache.flink.table.catalog.exceptions.PartitionNotExistException;
+import org.apache.flink.table.catalog.exceptions.TableNotExistException;
+import org.apache.flink.table.catalog.exceptions.TableNotPartitionedException;
+import org.apache.flink.table.catalog.stats.CatalogColumnStatistics;
+import org.apache.flink.table.catalog.stats.CatalogTableStatistics;
+import org.apache.flink.table.connector.source.DynamicTableSource;
+import org.apache.flink.table.connector.source.abilities.SupportsPartitionPushDown;
+import org.apache.flink.table.expressions.Expression;
+import org.apache.flink.table.expressions.ResolvedExpression;
+import org.apache.flink.table.plan.stats.TableStats;
+import org.apache.flink.table.planner.calcite.FlinkContext;
+import org.apache.flink.table.planner.calcite.FlinkTypeFactory;
+import org.apache.flink.table.planner.plan.schema.TableSourceTable;
+import org.apache.flink.table.planner.plan.stats.FlinkStatistic;
+import org.apache.flink.table.planner.plan.utils.FlinkRelOptUtil;
+import org.apache.flink.table.planner.plan.utils.PartitionPruner;
+import org.apache.flink.table.planner.plan.utils.RexNodeExtractor;
+import org.apache.flink.table.planner.plan.utils.RexNodeToExpressionConverter;
+import org.apache.flink.table.planner.utils.CatalogTableStatisticsConverter;
+import org.apache.flink.table.types.logical.LogicalType;
+
+import org.apache.calcite.plan.RelOptRule;
+import org.apache.calcite.plan.RelOptRuleCall;
+import org.apache.calcite.rel.core.Filter;
+import org.apache.calcite.rel.logical.LogicalTableScan;
+import org.apache.calcite.rel.type.RelDataType;
+import org.apache.calcite.rex.RexBuilder;
+import org.apache.calcite.rex.RexInputRef;
+import org.apache.calcite.rex.RexNode;
+import org.apache.calcite.rex.RexShuttle;
+import org.apache.calcite.rex.RexUtil;
+import org.apache.calcite.tools.RelBuilder;
+
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+import java.util.Map;
+import java.util.Optional;
+import java.util.TimeZone;
+import java.util.function.Function;
+import java.util.stream.Collectors;
+
+import scala.Option;
+import scala.Tuple2;
+import scala.collection.JavaConversions;
+import scala.collection.Seq;
+
+/**
+ * Planner rule that tries to push partition evaluated by filter condition into a {@link LogicalTableScan}.
+*/
+public class PushPartitionIntoTableSourceScanRule extends RelOptRule {
+	public static final PushPartitionIntoTableSourceScanRule INSTANCE = new PushPartitionIntoTableSourceScanRule();
+
+	public PushPartitionIntoTableSourceScanRule() {
+		super(operand(Filter.class,
+				operand(LogicalTableScan.class, none())),
+			"PushPartitionTableSourceScanRule");
+	}
+
+	@Override
+	public boolean matches(RelOptRuleCall call) {
+		Filter filter = call.rel(0);
+		if (filter.getCondition() == null) {
+			return false;
+		}
+		TableSourceTable tableSourceTable = call.rel(1).getTable().unwrap(TableSourceTable.class);
+		if (tableSourceTable == null) {
+			return false;
+		}
+		DynamicTableSource dynamicTableSource = tableSourceTable.tableSource();
+		if (!(dynamicTableSource instanceof SupportsPartitionPushDown)) {
+			return false;
+		}
+		CatalogTable catalogTable = tableSourceTable.catalogTable();
+		if (!catalogTable.isPartitioned() || catalogTable.getPartitionKeys().isEmpty()) {
+			return false;
+		}
+		return Arrays.stream(tableSourceTable.extraDigests()).noneMatch(digest -> digest.startsWith("partitions=["));
+	}
+
+	@Override
+	public void onMatch(RelOptRuleCall call) {
+		Filter filter = call.rel(0);
+		LogicalTableScan scan = call.rel(1);
+		TableSourceTable tableSourceTable = scan.getTable().unwrap(TableSourceTable.class);
+
+		RelDataType inputFieldTypes = filter.getInput().getRowType();
+		List<String> inputFieldNames = inputFieldTypes.getFieldNames();
+		List<String> partitionFieldNames = tableSourceTable.catalogTable().getPartitionKeys();
+		// extract partition predicates
+		RelBuilder relBuilder = call.builder();
+		RexBuilder rexBuilder = relBuilder.getRexBuilder();
+		Tuple2<Seq<RexNode>, Seq<RexNode>> allPredicates = RexNodeExtractor.extractPartitionPredicateList(
+			filter.getCondition(),
+			FlinkRelOptUtil.getMaxCnfNodeCount(scan),
+			inputFieldNames.toArray(new String[0]),
+			rexBuilder,
+			partitionFieldNames.toArray(new String[0]));
+		RexNode partitionPredicate = RexUtil.composeConjunction(rexBuilder, JavaConversions.seqAsJavaList(allPredicates._1));
+
+		if (partitionPredicate.isAlwaysTrue()) {
+			return;
+		}
+		// build pruner
+		LogicalType[] partitionFieldTypes = partitionFieldNames.stream()
+			.map(name -> {
+				int index  = inputFieldNames.indexOf(name);
+				if (index < 0) {
+					throw new TableException(String.format("Partitioned key '%s' isn't found in input columns. " +
+						"Validator should have checked that.", name));
+				}
+				return inputFieldTypes.getFieldList().get(index).getType(); })
+			.map(FlinkTypeFactory::toLogicalType)
+			.toArray(LogicalType[]::new);
+		RexNode finalPartitionPredicate = adjustPartitionPredicate(inputFieldNames, partitionFieldNames, partitionPredicate);
+		FlinkContext context = call.getPlanner().getContext().unwrap(FlinkContext.class);
+		Function<List<Map<String, String>>, List<Map<String, String>>> defaultPruner = partitions -> PartitionPruner.prunePartitions(
+			context.getTableConfig(),
+			partitionFieldNames.toArray(new String[0]),
+			partitionFieldTypes,
+			partitions,
+			finalPartitionPredicate);
+		// prune partitions
+		Optional<List<Map<String, String>>> remainingPartitions =
+			readPartitionsAndPrune(context, tableSourceTable, defaultPruner, allPredicates._1(), inputFieldNames);
+		// apply push down
+		DynamicTableSource dynamicTableSource = tableSourceTable.tableSource().copy();
+		remainingPartitions.ifPresent(((SupportsPartitionPushDown) dynamicTableSource)::applyPartitions);
+
+		// build new statistic
+		TableStats newTableStat = null;
+		ObjectIdentifier identifier = tableSourceTable.tableIdentifier();
+		ObjectPath tablePath = identifier.toObjectPath();
+		Optional<Catalog> catalogOptional = context.getCatalogManager().getCatalog(identifier.getCatalogName());
+		Optional<TableStats> partitionStats;
+		if (remainingPartitions.isPresent() && catalogOptional.isPresent()) {
+			for (Map<String, String> partition: remainingPartitions.get()) {
+				partitionStats = getPartitionStats(catalogOptional.get(), tablePath, partition);
+				if (!partitionStats.isPresent()) {
+					// clear all information before
+					newTableStat = null;
+					break;
+				} else {
+					newTableStat = newTableStat == null ? partitionStats.get() : newTableStat.merge(partitionStats.get());
+				}
+			}
+		}
+		FlinkStatistic newStatistic = FlinkStatistic.builder()
+			.statistic(tableSourceTable.getStatistic())
+			.tableStats(newTableStat)
+			.build();
+
+		String extraDigest = remainingPartitions.map(partition -> ("partitions=[" +
+			String.join(", ", partition
+				.stream()
+				.map(Object::toString)
+				.toArray(String[]::new)) +
+			"]")).orElse("partitions=[]");
+		TableSourceTable newTableSourceTable = tableSourceTable.copy(dynamicTableSource, newStatistic, new String[]{extraDigest});
+		LogicalTableScan newScan = LogicalTableScan.create(scan.getCluster(), newTableSourceTable, scan.getHints());
+
+		// transform to new node
+		RexNode nonPartitionPredicate = RexUtil.composeConjunction(rexBuilder, JavaConversions.seqAsJavaList(allPredicates._2()));
+		if (nonPartitionPredicate.isAlwaysTrue()) {
+			call.transformTo(newScan);
+		} else {
+			Filter newFilter = filter.copy(filter.getTraitSet(), newScan, nonPartitionPredicate);
+			call.transformTo(newFilter);
+		}
+	}
+
+	/**
+	 * adjust the partition field reference index to evaluate the partition values.
+	 * e.g. the original input fields is: a, b, c, p, and p is partition field. the partition values
+	 * are: List(Map("p"->"1"), Map("p" -> "2"), Map("p" -> "3")). If the original partition
+	 * predicate is $3 > 1. after adjusting, the new predicate is ($0 > 1).
+	 * and use ($0 > 1) to evaluate partition values (row(1), row(2), row(3)).
+	 */
+	private RexNode adjustPartitionPredicate(List<String> inputFieldNames, List<String> partitionFieldNames, RexNode partitionPredicate) {
+		return partitionPredicate.accept(new RexShuttle() {
+			@Override
+			public RexNode visitInputRef(RexInputRef inputRef) {
+				int index = inputRef.getIndex();
+				String fieldName = inputFieldNames.get(index);
+				int newIndex = partitionFieldNames.indexOf(fieldName);
+				if (newIndex < 0) {
+					throw new TableException(String.format("Field name '%s' isn't found in partitioned columns." +
+						" Validator should have checked that.", fieldName));
+				}
+				if (newIndex == index) {
+					return inputRef;
+				} else {
+					return new RexInputRef(newIndex, inputRef.getType());
+				}
+			}
+		});
+	}
+
+	private Optional<List<Map<String, String>>> readPartitionsAndPrune(
+			FlinkContext context,
+			TableSourceTable tableSourceTable,
+			Function<List<Map<String, String>>, List<Map<String, String>>> pruner,
+			Seq<RexNode> partitionPredicate,
+			List<String> inputFieldNames) {
+		// get partitions from table/catalog and prune
+		Optional<Catalog> catalogOptional = context.getCatalogManager().getCatalog(tableSourceTable.tableIdentifier().getCatalogName());
+		List<Map<String, String>> remainingPartitions;
+		Optional<List<Map<String, String>>> optionalPartitions;
+
+		DynamicTableSource dynamicTableSource = tableSourceTable.tableSource();
+		ObjectIdentifier identifier = tableSourceTable.tableIdentifier();
+		try {
+			optionalPartitions = ((SupportsPartitionPushDown) dynamicTableSource).listPartitions();
+			if (optionalPartitions.isPresent() && !optionalPartitions.get().isEmpty()) {
+				remainingPartitions = pruner.apply(optionalPartitions.get());
+				return remainingPartitions != null ? Optional.of(remainingPartitions) : Optional.empty();
+			} else {
+				return Optional.empty();
+			}
+		} catch (UnsupportedOperationException e) {
+			// check catalog whether is available
+			// we will read partitions from catalog if table doesn't support listPartitions.
+			if (!catalogOptional.isPresent()) {
+				throw new TableException(
+					String.format("Table %s must from a catalog, but %s is not a catalog",
+						identifier.asSummaryString(), identifier.getCatalogName()), e);
+			}
+			try {
+				return readPartitionFromCatalogAndPrune(
+					context,
+					catalogOptional.get(),
+					identifier,
+					inputFieldNames,
+					partitionPredicate,
+					pruner);
+			} catch (TableNotExistException tableNotExistException) {
+				throw new TableException(String.format("Table %s is not found in catalog.", identifier.asSummaryString()), e);
+			} catch (TableNotPartitionedException tableNotPartitionedException) {
+				throw new TableException(
+					String.format("Table %s is not a partitionable source. Validator should have checked it.", identifier.asSummaryString()),
+					tableNotPartitionedException);
+			}
+		}
+	}
+
+	private Optional<List<Map<String, String>>> readPartitionFromCatalogAndPrune(
+			FlinkContext context,
+			Catalog catalog,
+			ObjectIdentifier tableIdentifier,
+			List<String> allFieldNames,
+			Seq<RexNode> partitionPredicate,
+			Function<List<Map<String, String>>, List<Map<String, String>>> pruner)
+			throws TableNotExistException, TableNotPartitionedException {
+		ObjectPath tablePath = tableIdentifier.toObjectPath();
+		// build filters
+		RexNodeToExpressionConverter converter = new RexNodeToExpressionConverter(
+			allFieldNames.toArray(new String[0]),
+			context.getFunctionCatalog(),
+			context.getCatalogManager(),
+			TimeZone.getTimeZone(context.getTableConfig().getLocalTimeZone()));
+		ArrayList<Expression> partitionFilters = new ArrayList<>();
+		Option<ResolvedExpression> subExpr;
+		for (RexNode node: JavaConversions.seqAsJavaList(partitionPredicate)) {
+			subExpr = node.accept(converter);
+			if (!subExpr.isEmpty()) {
+				partitionFilters.add(subExpr.get());
+			} else {
+				// if part of expr is unresolved, we read all partitions and prune.
+				return readPartitionFromCatalogWithoutFilterAndPrune(catalog, tablePath, pruner);
+			}
+		}
+		if (partitionFilters.size() > 0) {

Review comment:
       `partitionFilters` should not be empty here, because line#128 has already checked that
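
If the intent is to assert that invariant instead of keeping the `else` branch, a minimal sketch using `org.apache.flink.util.Preconditions` (assuming that utility is acceptable in this rule):

```java
// partitionFilters cannot be empty here: an unconvertible sub-filter has already
// returned via the readPartitionFromCatalogWithoutFilterAndPrune fallback above,
// and an always-true partition predicate never reaches this method.
Preconditions.checkState(
    !partitionFilters.isEmpty(),
    "Partition filters should not be empty after converting the partition predicate.");
```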

##########
File path: flink-table/flink-table-planner-blink/src/test/java/org/apache/flink/table/planner/factories/TestValuesTableFactory.java
##########
@@ -629,35 +618,73 @@ public String asSummaryString() {
 		}
 
 		private Collection<RowData> convertToRowData(
-				Collection<Row> data,
+				Map<Map<String, String>, Collection<Row>> data,
 				int[] projectedFields,
 				DataStructureConverter converter) {
 			List<RowData> result = new ArrayList<>();
-			for (Row value : data) {
-				if (result.size() >= limit) {
-					return result;
-				}
-				if (isRetainedAfterApplyingFilterPredicates(value)) {
-					Row projectedRow;
-					if (projectedFields == null) {
-						projectedRow = value;
-					} else {
-						Object[] newValues = new Object[projectedFields.length];
-						for (int i = 0; i < projectedFields.length; ++i) {
-							newValues[i] = value.getField(projectedFields[i]);
+			List<Map<String, String>> keys = allPartitions.isEmpty() ?
+				Collections.singletonList(Collections.EMPTY_MAP) :
+				allPartitions;
+			FilterUtil util = FilterUtil.INSTANCE;
+			boolean isRetained = true;

Review comment:
       nit: move this variable into the `for` loop?
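
For reference, a minimal sketch of the suggested scoping, mirroring the surrounding code so that each row starts from a fresh `true` value:

```java
for (Map<String, String> partition : keys) {
    for (Row value : data.get(partition)) {
        boolean isRetained = true; // declared per row, inside the loop
        if (filterPredicates != null && !filterPredicates.isEmpty()) {
            for (ResolvedExpression predicate : filterPredicates) {
                isRetained = util.isRetainedAfterApplyingFilterPredicates(predicate, getGetter(value));
                if (!isRetained) {
                    break;
                }
            }
        }
        // ... projection and conversion continue as before, guarded by isRetained
    }
}
```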

##########
File path: flink-table/flink-table-planner-blink/src/test/java/org/apache/flink/table/planner/factories/TestValuesTableFactory.java
##########
@@ -629,35 +618,73 @@ public String asSummaryString() {
 		}
 
 		private Collection<RowData> convertToRowData(
-				Collection<Row> data,
+				Map<Map<String, String>, Collection<Row>> data,
 				int[] projectedFields,
 				DataStructureConverter converter) {
 			List<RowData> result = new ArrayList<>();
-			for (Row value : data) {
-				if (result.size() >= limit) {
-					return result;
-				}
-				if (isRetainedAfterApplyingFilterPredicates(value)) {
-					Row projectedRow;
-					if (projectedFields == null) {
-						projectedRow = value;
-					} else {
-						Object[] newValues = new Object[projectedFields.length];
-						for (int i = 0; i < projectedFields.length; ++i) {
-							newValues[i] = value.getField(projectedFields[i]);
+			List<Map<String, String>> keys = allPartitions.isEmpty() ?
+				Collections.singletonList(Collections.EMPTY_MAP) :

Review comment:
       nit: use `Collections.emptyMap()` instead of the raw `Collections.EMPTY_MAP` to avoid the unchecked warning?
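
For reference, a minimal sketch of the suggested change, with an explicit type witness so the singleton list still types as `List<Map<String, String>>`:

```java
List<Map<String, String>> keys = allPartitions.isEmpty()
    ? Collections.singletonList(Collections.<String, String>emptyMap())
    : allPartitions;
```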

##########
File path: flink-table/flink-table-planner-blink/src/test/java/org/apache/flink/table/planner/factories/TestValuesTableFactory.java
##########
@@ -821,4 +848,125 @@ public String asSummaryString() {
 		}
 	}
 
+	// --------------------------------------------------------------------------------------------
+	// Table utils
+	// --------------------------------------------------------------------------------------------
+
+	/**
+	 * Utils for catalog and source to filter partition or row.
+	 * */
+	public static class FilterUtil {
+		public static final FilterUtil INSTANCE = new FilterUtil();

Review comment:
       how about just making these utility methods `static`? Then we can remove this field.
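
A minimal sketch of the resulting call site once the helpers are static, so the `INSTANCE` field (and the local `util` variable) can go away:

```java
// call the helper directly on the class instead of through a singleton field
boolean isRetained =
    FilterUtil.isRetainedAfterApplyingFilterPredicates(predicate, getGetter(value));
```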

##########
File path: flink-table/flink-table-planner-blink/src/test/java/org/apache/flink/table/planner/factories/TestValuesCatalog.java
##########
@@ -0,0 +1,114 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.factories;
+
+import org.apache.flink.table.api.TableException;
+import org.apache.flink.table.api.TableSchema;
+import org.apache.flink.table.catalog.CatalogBaseTable;
+import org.apache.flink.table.catalog.CatalogPartitionSpec;
+import org.apache.flink.table.catalog.GenericInMemoryCatalog;
+import org.apache.flink.table.catalog.ObjectPath;
+import org.apache.flink.table.catalog.exceptions.CatalogException;
+import org.apache.flink.table.catalog.exceptions.TableNotExistException;
+import org.apache.flink.table.catalog.exceptions.TableNotPartitionedException;
+import org.apache.flink.table.expressions.Expression;
+import org.apache.flink.table.expressions.ResolvedExpression;
+import org.apache.flink.table.types.DataType;
+import org.apache.flink.table.types.logical.BooleanType;
+import org.apache.flink.table.types.logical.CharType;
+import org.apache.flink.table.types.logical.DoubleType;
+import org.apache.flink.table.types.logical.IntType;
+import org.apache.flink.table.types.logical.LogicalType;
+import org.apache.flink.table.types.logical.VarCharType;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+import java.util.Optional;
+import java.util.function.Function;
+
+
+/**
+ * Use TestValuesCatalog to test partition push down.
+ * */
+public class TestValuesCatalog extends GenericInMemoryCatalog {
+	private boolean supportListPartitionByFilter;
+	public TestValuesCatalog(String name, String defaultDatabase, boolean supportListPartitionByFilter) {
+		super(name, defaultDatabase);
+		this.supportListPartitionByFilter = supportListPartitionByFilter;
+	}
+
+	@Override
+	public List<CatalogPartitionSpec> listPartitionsByFilter(ObjectPath tablePath, List<Expression> filters)
+			throws TableNotExistException, TableNotPartitionedException, CatalogException {
+		if (!supportListPartitionByFilter) {
+			throw new UnsupportedOperationException("TestValuesCatalog doesn't support list partition by filters");
+		}
+
+		List<CatalogPartitionSpec> partitions = listPartitions(tablePath);
+		if (partitions.isEmpty()) {
+			return partitions;
+		}
+
+		CatalogBaseTable table = this.getTable(tablePath);
+		TableSchema schema = table.getSchema();
+		TestValuesTableFactory.FilterUtil util = TestValuesTableFactory.FilterUtil.INSTANCE;
+		List<CatalogPartitionSpec> remainingPartitions = new ArrayList<>();
+		for (CatalogPartitionSpec partition : partitions) {
+			boolean isRetained = true;
+			Function<String, Comparable<?>> gettter = getGetter(partition.getPartitionSpec(), schema);

Review comment:
       typo: `gettter` -> `getter`

##########
File path: flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/plan/rules/logical/PushPartitionIntoTableSourceScanRule.java
##########
@@ -0,0 +1,343 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.plan.rules.logical;
+
+import org.apache.flink.table.api.TableException;
+import org.apache.flink.table.catalog.Catalog;
+import org.apache.flink.table.catalog.CatalogPartitionSpec;
+import org.apache.flink.table.catalog.CatalogTable;
+import org.apache.flink.table.catalog.ObjectIdentifier;
+import org.apache.flink.table.catalog.ObjectPath;
+import org.apache.flink.table.catalog.exceptions.CatalogException;
+import org.apache.flink.table.catalog.exceptions.PartitionNotExistException;
+import org.apache.flink.table.catalog.exceptions.TableNotExistException;
+import org.apache.flink.table.catalog.exceptions.TableNotPartitionedException;
+import org.apache.flink.table.catalog.stats.CatalogColumnStatistics;
+import org.apache.flink.table.catalog.stats.CatalogTableStatistics;
+import org.apache.flink.table.connector.source.DynamicTableSource;
+import org.apache.flink.table.connector.source.abilities.SupportsPartitionPushDown;
+import org.apache.flink.table.expressions.Expression;
+import org.apache.flink.table.expressions.ResolvedExpression;
+import org.apache.flink.table.plan.stats.TableStats;
+import org.apache.flink.table.planner.calcite.FlinkContext;
+import org.apache.flink.table.planner.calcite.FlinkTypeFactory;
+import org.apache.flink.table.planner.plan.schema.TableSourceTable;
+import org.apache.flink.table.planner.plan.stats.FlinkStatistic;
+import org.apache.flink.table.planner.plan.utils.FlinkRelOptUtil;
+import org.apache.flink.table.planner.plan.utils.PartitionPruner;
+import org.apache.flink.table.planner.plan.utils.RexNodeExtractor;
+import org.apache.flink.table.planner.plan.utils.RexNodeToExpressionConverter;
+import org.apache.flink.table.planner.utils.CatalogTableStatisticsConverter;
+import org.apache.flink.table.types.logical.LogicalType;
+
+import org.apache.calcite.plan.RelOptRule;
+import org.apache.calcite.plan.RelOptRuleCall;
+import org.apache.calcite.rel.core.Filter;
+import org.apache.calcite.rel.logical.LogicalTableScan;
+import org.apache.calcite.rel.type.RelDataType;
+import org.apache.calcite.rex.RexBuilder;
+import org.apache.calcite.rex.RexInputRef;
+import org.apache.calcite.rex.RexNode;
+import org.apache.calcite.rex.RexShuttle;
+import org.apache.calcite.rex.RexUtil;
+import org.apache.calcite.tools.RelBuilder;
+
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+import java.util.Map;
+import java.util.Optional;
+import java.util.TimeZone;
+import java.util.function.Function;
+import java.util.stream.Collectors;
+
+import scala.Option;
+import scala.Tuple2;
+import scala.collection.JavaConversions;
+import scala.collection.Seq;
+
+/**
+ * Planner rule that tries to push partition evaluated by filter condition into a {@link LogicalTableScan}.
+*/
+public class PushPartitionIntoTableSourceScanRule extends RelOptRule {
+	public static final PushPartitionIntoTableSourceScanRule INSTANCE = new PushPartitionIntoTableSourceScanRule();
+
+	public PushPartitionIntoTableSourceScanRule() {
+		super(operand(Filter.class,
+				operand(LogicalTableScan.class, none())),
+			"PushPartitionTableSourceScanRule");
+	}
+
+	@Override
+	public boolean matches(RelOptRuleCall call) {
+		Filter filter = call.rel(0);
+		if (filter.getCondition() == null) {
+			return false;
+		}
+		TableSourceTable tableSourceTable = call.rel(1).getTable().unwrap(TableSourceTable.class);
+		if (tableSourceTable == null) {
+			return false;
+		}
+		DynamicTableSource dynamicTableSource = tableSourceTable.tableSource();
+		if (!(dynamicTableSource instanceof SupportsPartitionPushDown)) {
+			return false;
+		}
+		CatalogTable catalogTable = tableSourceTable.catalogTable();
+		if (!catalogTable.isPartitioned() || catalogTable.getPartitionKeys().isEmpty()) {
+			return false;
+		}
+		return Arrays.stream(tableSourceTable.extraDigests()).noneMatch(digest -> digest.startsWith("partitions=["));
+	}
+
+	@Override
+	public void onMatch(RelOptRuleCall call) {
+		Filter filter = call.rel(0);
+		LogicalTableScan scan = call.rel(1);
+		TableSourceTable tableSourceTable = scan.getTable().unwrap(TableSourceTable.class);
+
+		RelDataType inputFieldTypes = filter.getInput().getRowType();
+		List<String> inputFieldNames = inputFieldTypes.getFieldNames();
+		List<String> partitionFieldNames = tableSourceTable.catalogTable().getPartitionKeys();
+		// extract partition predicates
+		RelBuilder relBuilder = call.builder();
+		RexBuilder rexBuilder = relBuilder.getRexBuilder();
+		Tuple2<Seq<RexNode>, Seq<RexNode>> allPredicates = RexNodeExtractor.extractPartitionPredicateList(
+			filter.getCondition(),
+			FlinkRelOptUtil.getMaxCnfNodeCount(scan),
+			inputFieldNames.toArray(new String[0]),
+			rexBuilder,
+			partitionFieldNames.toArray(new String[0]));
+		RexNode partitionPredicate = RexUtil.composeConjunction(rexBuilder, JavaConversions.seqAsJavaList(allPredicates._1));
+
+		if (partitionPredicate.isAlwaysTrue()) {
+			return;
+		}
+		// build pruner
+		LogicalType[] partitionFieldTypes = partitionFieldNames.stream()
+			.map(name -> {
+				int index  = inputFieldNames.indexOf(name);
+				if (index < 0) {
+					throw new TableException(String.format("Partitioned key '%s' isn't found in input columns. " +
+						"Validator should have checked that.", name));
+				}
+				return inputFieldTypes.getFieldList().get(index).getType(); })
+			.map(FlinkTypeFactory::toLogicalType)
+			.toArray(LogicalType[]::new);
+		RexNode finalPartitionPredicate = adjustPartitionPredicate(inputFieldNames, partitionFieldNames, partitionPredicate);
+		FlinkContext context = call.getPlanner().getContext().unwrap(FlinkContext.class);
+		Function<List<Map<String, String>>, List<Map<String, String>>> defaultPruner = partitions -> PartitionPruner.prunePartitions(
+			context.getTableConfig(),
+			partitionFieldNames.toArray(new String[0]),
+			partitionFieldTypes,
+			partitions,
+			finalPartitionPredicate);
+		// prune partitions
+		Optional<List<Map<String, String>>> remainingPartitions =
+			readPartitionsAndPrune(context, tableSourceTable, defaultPruner, allPredicates._1(), inputFieldNames);
+		// apply push down
+		DynamicTableSource dynamicTableSource = tableSourceTable.tableSource().copy();
+		remainingPartitions.ifPresent(((SupportsPartitionPushDown) dynamicTableSource)::applyPartitions);
+
+		// build new statistic
+		TableStats newTableStat = null;
+		ObjectIdentifier identifier = tableSourceTable.tableIdentifier();
+		ObjectPath tablePath = identifier.toObjectPath();
+		Optional<Catalog> catalogOptional = context.getCatalogManager().getCatalog(identifier.getCatalogName());
+		Optional<TableStats> partitionStats;
+		if (remainingPartitions.isPresent() && catalogOptional.isPresent()) {
+			for (Map<String, String> partition: remainingPartitions.get()) {
+				partitionStats = getPartitionStats(catalogOptional.get(), tablePath, partition);
+				if (!partitionStats.isPresent()) {
+					// clear all information before
+					newTableStat = null;
+					break;
+				} else {
+					newTableStat = newTableStat == null ? partitionStats.get() : newTableStat.merge(partitionStats.get());
+				}
+			}
+		}
+		FlinkStatistic newStatistic = FlinkStatistic.builder()
+			.statistic(tableSourceTable.getStatistic())
+			.tableStats(newTableStat)
+			.build();
+
+		String extraDigest = remainingPartitions.map(partition -> ("partitions=[" +
+			String.join(", ", partition
+				.stream()
+				.map(Object::toString)
+				.toArray(String[]::new)) +
+			"]")).orElse("partitions=[]");
+		TableSourceTable newTableSourceTable = tableSourceTable.copy(dynamicTableSource, newStatistic, new String[]{extraDigest});
+		LogicalTableScan newScan = LogicalTableScan.create(scan.getCluster(), newTableSourceTable, scan.getHints());
+
+		// transform to new node
+		RexNode nonPartitionPredicate = RexUtil.composeConjunction(rexBuilder, JavaConversions.seqAsJavaList(allPredicates._2()));
+		if (nonPartitionPredicate.isAlwaysTrue()) {
+			call.transformTo(newScan);
+		} else {
+			Filter newFilter = filter.copy(filter.getTraitSet(), newScan, nonPartitionPredicate);
+			call.transformTo(newFilter);
+		}
+	}
+
+	/**
+	 * adjust the partition field reference index to evaluate the partition values.
+	 * e.g. the original input fields is: a, b, c, p, and p is partition field. the partition values
+	 * are: List(Map("p"->"1"), Map("p" -> "2"), Map("p" -> "3")). If the original partition
+	 * predicate is $3 > 1. after adjusting, the new predicate is ($0 > 1).
+	 * and use ($0 > 1) to evaluate partition values (row(1), row(2), row(3)).
+	 */
+	private RexNode adjustPartitionPredicate(List<String> inputFieldNames, List<String> partitionFieldNames, RexNode partitionPredicate) {
+		return partitionPredicate.accept(new RexShuttle() {
+			@Override
+			public RexNode visitInputRef(RexInputRef inputRef) {
+				int index = inputRef.getIndex();
+				String fieldName = inputFieldNames.get(index);
+				int newIndex = partitionFieldNames.indexOf(fieldName);
+				if (newIndex < 0) {
+					throw new TableException(String.format("Field name '%s' isn't found in partitioned columns." +
+						" Validator should have checked that.", fieldName));
+				}
+				if (newIndex == index) {
+					return inputRef;
+				} else {
+					return new RexInputRef(newIndex, inputRef.getType());
+				}
+			}
+		});
+	}
+
+	private Optional<List<Map<String, String>>> readPartitionsAndPrune(
+			FlinkContext context,
+			TableSourceTable tableSourceTable,
+			Function<List<Map<String, String>>, List<Map<String, String>>> pruner,
+			Seq<RexNode> partitionPredicate,
+			List<String> inputFieldNames) {
+		// get partitions from table/catalog and prune
+		Optional<Catalog> catalogOptional = context.getCatalogManager().getCatalog(tableSourceTable.tableIdentifier().getCatalogName());
+		List<Map<String, String>> remainingPartitions;
+		Optional<List<Map<String, String>>> optionalPartitions;
+
+		DynamicTableSource dynamicTableSource = tableSourceTable.tableSource();
+		ObjectIdentifier identifier = tableSourceTable.tableIdentifier();
+		try {
+			optionalPartitions = ((SupportsPartitionPushDown) dynamicTableSource).listPartitions();
+			if (optionalPartitions.isPresent() && !optionalPartitions.get().isEmpty()) {
+				remainingPartitions = pruner.apply(optionalPartitions.get());
+				return remainingPartitions != null ? Optional.of(remainingPartitions) : Optional.empty();
+			} else {
+				return Optional.empty();
+			}
+		} catch (UnsupportedOperationException e) {
+			// check whether the catalog is available;
+			// we read partitions from the catalog if the table source doesn't support listPartitions.
+			if (!catalogOptional.isPresent()) {
+				throw new TableException(
+					String.format("Table %s must come from a catalog, but %s is not a catalog",
+						identifier.asSummaryString(), identifier.getCatalogName()), e);
+			}
+			try {
+				return readPartitionFromCatalogAndPrune(
+					context,
+					catalogOptional.get(),
+					identifier,
+					inputFieldNames,
+					partitionPredicate,
+					pruner);
+			} catch (TableNotExistException tableNotExistException) {
+				throw new TableException(String.format("Table %s is not found in catalog.", identifier.asSummaryString()), e);
+			} catch (TableNotPartitionedException tableNotPartitionedException) {
+				throw new TableException(
+					String.format("Table %s is not a partitionable source. Validator should have checked it.", identifier.asSummaryString()),
+					tableNotPartitionedException);
+			}
+		}
+	}
+
+	private Optional<List<Map<String, String>>> readPartitionFromCatalogAndPrune(
+			FlinkContext context,
+			Catalog catalog,
+			ObjectIdentifier tableIdentifier,
+			List<String> allFieldNames,
+			Seq<RexNode> partitionPredicate,
+			Function<List<Map<String, String>>, List<Map<String, String>>> pruner)
+			throws TableNotExistException, TableNotPartitionedException {
+		ObjectPath tablePath = tableIdentifier.toObjectPath();
+		// build filters
+		RexNodeToExpressionConverter converter = new RexNodeToExpressionConverter(
+			allFieldNames.toArray(new String[0]),
+			context.getFunctionCatalog(),
+			context.getCatalogManager(),
+			TimeZone.getTimeZone(context.getTableConfig().getLocalTimeZone()));
+		ArrayList<Expression> partitionFilters = new ArrayList<>();
+		Option<ResolvedExpression> subExpr;
+		for (RexNode node: JavaConversions.seqAsJavaList(partitionPredicate)) {
+			subExpr = node.accept(converter);
+			if (!subExpr.isEmpty()) {
+				partitionFilters.add(subExpr.get());
+			} else {
+				// if part of expr is unresolved, we read all partitions and prune.
+				return readPartitionFromCatalogWithoutFilterAndPrune(catalog, tablePath, pruner);
+			}
+		}
+		if (partitionFilters.size() > 0) {
+			try {
+				List<Map<String, String>> remainingPartitions = catalog.listPartitionsByFilter(tablePath, partitionFilters)
+					.stream()
+					.map(CatalogPartitionSpec::getPartitionSpec)
+					.collect(Collectors.toList());
+				return Optional.of(remainingPartitions);
+			} catch (UnsupportedOperationException e) {
+				return readPartitionFromCatalogWithoutFilterAndPrune(catalog, tablePath, pruner);
+			}
+		} else {
+			return readPartitionFromCatalogWithoutFilterAndPrune(catalog, tablePath, pruner);
+		}
+	}
+
+	private Optional<List<Map<String, String>>> readPartitionFromCatalogWithoutFilterAndPrune(
+			Catalog catalog,
+			ObjectPath tablePath,
+			Function<List<Map<String, String>>, List<Map<String, String>>> pruner) throws TableNotExistException, CatalogException, TableNotPartitionedException {

Review comment:
       nit: wrap the line at `throws `?
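
       For illustration, the wrapped signature might look like this (a formatting sketch only; the method body is unchanged):

   ```java
   private Optional<List<Map<String, String>>> readPartitionFromCatalogWithoutFilterAndPrune(
           Catalog catalog,
           ObjectPath tablePath,
           Function<List<Map<String, String>>, List<Map<String, String>>> pruner)
           throws TableNotExistException, CatalogException, TableNotPartitionedException {
       // method body unchanged
   }
   ```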

##########
File path: flink-table/flink-table-planner-blink/src/test/java/org/apache/flink/table/planner/factories/TestValuesTableFactory.java
##########
@@ -629,35 +618,73 @@ public String asSummaryString() {
 		}
 
 		private Collection<RowData> convertToRowData(
-				Collection<Row> data,
+				Map<Map<String, String>, Collection<Row>> data,
 				int[] projectedFields,
 				DataStructureConverter converter) {
 			List<RowData> result = new ArrayList<>();
-			for (Row value : data) {
-				if (result.size() >= limit) {
-					return result;
-				}
-				if (isRetainedAfterApplyingFilterPredicates(value)) {
-					Row projectedRow;
-					if (projectedFields == null) {
-						projectedRow = value;
-					} else {
-						Object[] newValues = new Object[projectedFields.length];
-						for (int i = 0; i < projectedFields.length; ++i) {
-							newValues[i] = value.getField(projectedFields[i]);
+			List<Map<String, String>> keys = allPartitions.isEmpty() ?
+				Collections.singletonList(Collections.EMPTY_MAP) :
+				allPartitions;
+			FilterUtil util = FilterUtil.INSTANCE;
+			boolean isRetained = true;
+			for (Map<String, String> partition: keys) {
+				for (Row value : data.get(partition)) {
+					if (result.size() >= limit) {
+						return result;
+					}
+					if (filterPredicates != null && !filterPredicates.isEmpty()) {
+						for (ResolvedExpression predicate: filterPredicates) {
+							isRetained = util.isRetainedAfterApplyingFilterPredicates(predicate, getGetter(value));

Review comment:
       it would be better if `isRetainedAfterApplyingFilterPredicates` could accept multiple predicates as a parameter, because both callers of this method work with a list of predicates
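
       A minimal sketch of that suggestion (assuming the surrounding FilterUtil class keeps its existing single-predicate method): an overload that takes the whole predicate list and retains a row only if every predicate holds.

   ```java
   // Hypothetical overload; delegates to the existing single-predicate check.
   public boolean isRetainedAfterApplyingFilterPredicates(
           List<ResolvedExpression> predicates,
           Function<String, Comparable<?>> getter) {
       for (ResolvedExpression predicate : predicates) {
           if (!isRetainedAfterApplyingFilterPredicates(predicate, getter)) {
               return false;
           }
       }
       return true;
   }
   ```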

##########
File path: flink-table/flink-table-planner-blink/src/test/java/org/apache/flink/table/planner/factories/TestValuesTableFactory.java
##########
@@ -821,4 +848,125 @@ public String asSummaryString() {
 		}
 	}
 
+	// --------------------------------------------------------------------------------------------
+	// Table utils
+	// --------------------------------------------------------------------------------------------
+
+	/**
+	 * Utils for catalog and source to filter partition or row.
+	 * */
+	public static class FilterUtil {
+		public static final FilterUtil INSTANCE = new FilterUtil();
+
+		private FilterUtil() {}
+
+		public boolean shouldPushDown(ResolvedExpression expr, Set<String> filterableFields) {
+			if (expr instanceof CallExpression && expr.getChildren().size() == 2) {
+				return shouldPushDownUnaryExpression(expr.getResolvedChildren().get(0), filterableFields)
+					&& shouldPushDownUnaryExpression(expr.getResolvedChildren().get(1), filterableFields);
+			}
+			return false;
+		}
+
+		public boolean isRetainedAfterApplyingFilterPredicates(ResolvedExpression predicate, Function<String, Comparable<?>> getter) {
+			if (predicate instanceof CallExpression) {
+				FunctionDefinition definition = ((CallExpression) predicate).getFunctionDefinition();
+				if (definition.equals(BuiltInFunctionDefinitions.OR)) {
+					// nested filter, such as (key1 > 2 or key2 > 3)
+					boolean result = false;
+					for (Expression expr: predicate.getChildren()) {
+						if (!(expr instanceof CallExpression && expr.getChildren().size() == 2)) {
+							throw new TableException(expr + " not supported!");
+						}
+						result |= binaryFilterApplies((CallExpression) expr, getter);
+						if (result) {
+							return result;
+						}

Review comment:
       these lines could be simplified
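
       For example, the mutable flag and early return could be replaced with a short-circuiting stream (a sketch only, assuming the surrounding FilterUtil helpers such as `binaryFilterApplies` stay as they are):

   ```java
   if (definition.equals(BuiltInFunctionDefinitions.OR)) {
       // nested filter, such as (key1 > 2 or key2 > 3)
       return predicate.getChildren().stream()
           .map(expr -> {
               if (!(expr instanceof CallExpression && expr.getChildren().size() == 2)) {
                   throw new TableException(expr + " not supported!");
               }
               return (CallExpression) expr;
           })
           .anyMatch(expr -> binaryFilterApplies(expr, getter));
   }
   ```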

##########
File path: flink-table/flink-table-planner-blink/src/test/java/org/apache/flink/table/planner/plan/rules/logical/PushPartitionIntoTableSourceScanRuleTest.java
##########
@@ -0,0 +1,142 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.plan.rules.logical;
+
+import org.apache.flink.table.catalog.CatalogPartition;
+import org.apache.flink.table.catalog.CatalogPartitionImpl;
+import org.apache.flink.table.catalog.CatalogPartitionSpec;
+import org.apache.flink.table.catalog.ObjectPath;
+import org.apache.flink.table.planner.calcite.CalciteConfig;
+import org.apache.flink.table.planner.factories.TestValuesCatalog;
+import org.apache.flink.table.planner.plan.optimize.program.BatchOptimizeContext;
+import org.apache.flink.table.planner.plan.optimize.program.FlinkBatchProgram;
+import org.apache.flink.table.planner.plan.optimize.program.FlinkHepRuleSetProgramBuilder;
+import org.apache.flink.table.planner.plan.optimize.program.HEP_RULES_EXECUTION_TYPE;
+import org.apache.flink.table.planner.utils.TableConfigUtils;
+
+import org.apache.calcite.plan.hep.HepMatchOrder;
+import org.apache.calcite.rel.rules.FilterProjectTransposeRule;
+import org.apache.calcite.tools.RuleSets;
+import org.junit.Test;
+
+import java.util.Arrays;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+/**
+ * Test for {@link PushPartitionIntoTableSourceScanRule}.
+ */
+public class PushPartitionIntoTableSourceScanRuleTest extends PushPartitionIntoLegacyTableSourceScanRuleTest{
+	public PushPartitionIntoTableSourceScanRuleTest(boolean sourceFetchPartitions, boolean useFilter) {
+		super(sourceFetchPartitions, useFilter);
+	}
+
+	@Override
+	public void setup() throws Exception {
+		util().buildBatchProgram(FlinkBatchProgram.DEFAULT_REWRITE());
+		CalciteConfig calciteConfig = TableConfigUtils.getCalciteConfig(util().tableEnv().getConfig());
+		calciteConfig.getBatchProgram().get().addLast(
+			"rules",
+			FlinkHepRuleSetProgramBuilder.<BatchOptimizeContext>newBuilder()
+				.setHepRulesExecutionType(HEP_RULES_EXECUTION_TYPE.RULE_SEQUENCE())
+				.setHepMatchOrder(HepMatchOrder.BOTTOM_UP)
+				.add(RuleSets.ofList(FilterProjectTransposeRule.INSTANCE,
+					PushPartitionIntoTableSourceScanRule.INSTANCE))
+				.build());
+
+		// define ddl
+		String ddlTemp =
+			"CREATE TABLE MyTable (\n" +
+				"  id int,\n" +
+				"  name string,\n" +
+				"  part1 string,\n" +
+				"  part2 int)\n" +
+				"  partitioned by (part1, part2)\n" +
+				"  WITH (\n" +
+				" 'connector' = 'values',\n" +
+				" 'bounded' = 'true',\n" +
+				" 'partition-list' = '%s'" +
+				")";
+
+		String ddlTempWithVirtualColumn =
+			"CREATE TABLE VirtualTable (\n" +
+				"  id int,\n" +
+				"  name string,\n" +
+				"  part1 string,\n" +
+				"  part2 int,\n" +
+				"  virtualField AS part2 + 1)\n" +
+				"  partitioned by (part1, part2)\n" +
+				"  WITH (\n" +
+				" 'connector' = 'values',\n" +
+				" 'bounded' = 'true',\n" +
+				" 'partition-list' = '%s'" +
+				")";
+
+		if (sourceFetchPartitions()) {
+			String partitionString = "part1:A,part2:1;part1:A,part2:2;part1:B,part2:3;part1:C,part2:1";
+			util().tableEnv().executeSql(String.format(ddlTemp, partitionString));
+			util().tableEnv().executeSql(String.format(ddlTempWithVirtualColumn, partitionString));
+		} else {
+			// replace catalog with TestValuesCatalog
+			util().tableEnv().executeSql("drop catalog default_catalog");
+			TestValuesCatalog catalog =
+				new TestValuesCatalog("default_catalog", "default_database", useCatalogFilter());
+			util().tableEnv().registerCatalog("default_catalog", catalog);
+			util().tableEnv().useCatalog("default_catalog");

Review comment:
       could we just register a new catalog and make it the default, instead of dropping the existing `default_catalog`? (see the sketch below)
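
       A sketch of that alternative (the catalog name "test_catalog" is only an illustrative choice):

   ```java
   // Register the test catalog under its own name and switch to it,
   // instead of dropping and re-creating default_catalog.
   TestValuesCatalog catalog =
       new TestValuesCatalog("test_catalog", "default_database", useCatalogFilter());
   util().tableEnv().registerCatalog("test_catalog", catalog);
   util().tableEnv().useCatalog("test_catalog");
   ```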




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #12966: [FLINK-17427][table sql/planner]Support SupportsPartitionPushDown in …

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #12966:
URL: https://github.com/apache/flink/pull/12966#issuecomment-662861328


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "0f19df4d1fcbb4093b69d772628b67b81ebb443a",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4759",
       "triggerID" : "0f19df4d1fcbb4093b69d772628b67b81ebb443a",
       "triggerType" : "PUSH"
     }, {
       "hash" : "6697110fb14fef778707f74b66227df2953c487c",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5107",
       "triggerID" : "6697110fb14fef778707f74b66227df2953c487c",
       "triggerType" : "PUSH"
     }, {
       "hash" : "1790d93b79cfdfc3f65a7805e444699736f80d93",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5122",
       "triggerID" : "1790d93b79cfdfc3f65a7805e444699736f80d93",
       "triggerType" : "PUSH"
     }, {
       "hash" : "9a05a030c661def4a45b36370dbcfa5e786ed8dc",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5147",
       "triggerID" : "9a05a030c661def4a45b36370dbcfa5e786ed8dc",
       "triggerType" : "PUSH"
     }, {
       "hash" : "97bb191e646a7ddc62d105e08a7473c8e5561160",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5455",
       "triggerID" : "97bb191e646a7ddc62d105e08a7473c8e5561160",
       "triggerType" : "PUSH"
     }, {
       "hash" : "eea5ebcd848bd4e554134096073cbd88d9725395",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5458",
       "triggerID" : "eea5ebcd848bd4e554134096073cbd88d9725395",
       "triggerType" : "PUSH"
     }, {
       "hash" : "50079cc5a0bbe71de746f78a22180febf9a35e57",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5467",
       "triggerID" : "50079cc5a0bbe71de746f78a22180febf9a35e57",
       "triggerType" : "PUSH"
     }, {
       "hash" : "b1b4efe76749cd0dfa6b4aea94ca2ce9f0d15be1",
       "status" : "SUCCESS",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5469",
       "triggerID" : "b1b4efe76749cd0dfa6b4aea94ca2ce9f0d15be1",
       "triggerType" : "PUSH"
     }, {
       "hash" : "4642a985cde81555583a17880cd2462399338310",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5503",
       "triggerID" : "4642a985cde81555583a17880cd2462399338310",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * 1790d93b79cfdfc3f65a7805e444699736f80d93 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5122) 
   * b1b4efe76749cd0dfa6b4aea94ca2ce9f0d15be1 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5469) 
   * 4642a985cde81555583a17880cd2462399338310 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5503) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #12966: [FLINK-17427][table sql/planner]Support SupportsPartitionPushDown in …

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #12966:
URL: https://github.com/apache/flink/pull/12966#issuecomment-662861328


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "0f19df4d1fcbb4093b69d772628b67b81ebb443a",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4759",
       "triggerID" : "0f19df4d1fcbb4093b69d772628b67b81ebb443a",
       "triggerType" : "PUSH"
     }, {
       "hash" : "6697110fb14fef778707f74b66227df2953c487c",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5107",
       "triggerID" : "6697110fb14fef778707f74b66227df2953c487c",
       "triggerType" : "PUSH"
     }, {
       "hash" : "1790d93b79cfdfc3f65a7805e444699736f80d93",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5122",
       "triggerID" : "1790d93b79cfdfc3f65a7805e444699736f80d93",
       "triggerType" : "PUSH"
     }, {
       "hash" : "9a05a030c661def4a45b36370dbcfa5e786ed8dc",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5147",
       "triggerID" : "9a05a030c661def4a45b36370dbcfa5e786ed8dc",
       "triggerType" : "PUSH"
     }, {
       "hash" : "97bb191e646a7ddc62d105e08a7473c8e5561160",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5455",
       "triggerID" : "97bb191e646a7ddc62d105e08a7473c8e5561160",
       "triggerType" : "PUSH"
     }, {
       "hash" : "eea5ebcd848bd4e554134096073cbd88d9725395",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5458",
       "triggerID" : "eea5ebcd848bd4e554134096073cbd88d9725395",
       "triggerType" : "PUSH"
     }, {
       "hash" : "50079cc5a0bbe71de746f78a22180febf9a35e57",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5467",
       "triggerID" : "50079cc5a0bbe71de746f78a22180febf9a35e57",
       "triggerType" : "PUSH"
     }, {
       "hash" : "b1b4efe76749cd0dfa6b4aea94ca2ce9f0d15be1",
       "status" : "SUCCESS",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5469",
       "triggerID" : "b1b4efe76749cd0dfa6b4aea94ca2ce9f0d15be1",
       "triggerType" : "PUSH"
     }, {
       "hash" : "4642a985cde81555583a17880cd2462399338310",
       "status" : "UNKNOWN",
       "url" : "TBD",
       "triggerID" : "4642a985cde81555583a17880cd2462399338310",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * 1790d93b79cfdfc3f65a7805e444699736f80d93 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5122) 
   * b1b4efe76749cd0dfa6b4aea94ca2ce9f0d15be1 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5469) 
   * 4642a985cde81555583a17880cd2462399338310 UNKNOWN
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #12966: [FLINK-17427][table sql/planner]Support SupportsPartitionPushDown in …

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #12966:
URL: https://github.com/apache/flink/pull/12966#issuecomment-662861328


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "0f19df4d1fcbb4093b69d772628b67b81ebb443a",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4759",
       "triggerID" : "0f19df4d1fcbb4093b69d772628b67b81ebb443a",
       "triggerType" : "PUSH"
     }, {
       "hash" : "6697110fb14fef778707f74b66227df2953c487c",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5107",
       "triggerID" : "6697110fb14fef778707f74b66227df2953c487c",
       "triggerType" : "PUSH"
     }, {
       "hash" : "1790d93b79cfdfc3f65a7805e444699736f80d93",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5122",
       "triggerID" : "1790d93b79cfdfc3f65a7805e444699736f80d93",
       "triggerType" : "PUSH"
     }, {
       "hash" : "9a05a030c661def4a45b36370dbcfa5e786ed8dc",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5147",
       "triggerID" : "9a05a030c661def4a45b36370dbcfa5e786ed8dc",
       "triggerType" : "PUSH"
     }, {
       "hash" : "97bb191e646a7ddc62d105e08a7473c8e5561160",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5455",
       "triggerID" : "97bb191e646a7ddc62d105e08a7473c8e5561160",
       "triggerType" : "PUSH"
     }, {
       "hash" : "eea5ebcd848bd4e554134096073cbd88d9725395",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5458",
       "triggerID" : "eea5ebcd848bd4e554134096073cbd88d9725395",
       "triggerType" : "PUSH"
     }, {
       "hash" : "50079cc5a0bbe71de746f78a22180febf9a35e57",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5467",
       "triggerID" : "50079cc5a0bbe71de746f78a22180febf9a35e57",
       "triggerType" : "PUSH"
     }, {
       "hash" : "b1b4efe76749cd0dfa6b4aea94ca2ce9f0d15be1",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5469",
       "triggerID" : "b1b4efe76749cd0dfa6b4aea94ca2ce9f0d15be1",
       "triggerType" : "PUSH"
     }, {
       "hash" : "4642a985cde81555583a17880cd2462399338310",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5503",
       "triggerID" : "4642a985cde81555583a17880cd2462399338310",
       "triggerType" : "PUSH"
     }, {
       "hash" : "984744723761b8124aa003f23e65d4bb484a73c7",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5504",
       "triggerID" : "984744723761b8124aa003f23e65d4bb484a73c7",
       "triggerType" : "PUSH"
     }, {
       "hash" : "e91a84cc20e3655749b8cf9b69ed79d855aaedaf",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5522",
       "triggerID" : "e91a84cc20e3655749b8cf9b69ed79d855aaedaf",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d02272061d2264ee74d67da5d9f0e1524f1c7d52",
       "status" : "FAILURE",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5657",
       "triggerID" : "d02272061d2264ee74d67da5d9f0e1524f1c7d52",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d02272061d2264ee74d67da5d9f0e1524f1c7d52",
       "status" : "SUCCESS",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5684",
       "triggerID" : "675468974",
       "triggerType" : "MANUAL"
     }, {
       "hash" : "d02272061d2264ee74d67da5d9f0e1524f1c7d52",
       "status" : "FAILURE",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5657",
       "triggerID" : "675468974",
       "triggerType" : "MANUAL"
     } ]
   }-->
   ## CI report:
   
   * d02272061d2264ee74d67da5d9f0e1524f1c7d52 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5684) Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5657) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #12966: [FLINK-17427][table sql/planner]Support SupportsPartitionPushDown in …

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #12966:
URL: https://github.com/apache/flink/pull/12966#issuecomment-662861328


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "0f19df4d1fcbb4093b69d772628b67b81ebb443a",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4759",
       "triggerID" : "0f19df4d1fcbb4093b69d772628b67b81ebb443a",
       "triggerType" : "PUSH"
     }, {
       "hash" : "6697110fb14fef778707f74b66227df2953c487c",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5107",
       "triggerID" : "6697110fb14fef778707f74b66227df2953c487c",
       "triggerType" : "PUSH"
     }, {
       "hash" : "1790d93b79cfdfc3f65a7805e444699736f80d93",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5122",
       "triggerID" : "1790d93b79cfdfc3f65a7805e444699736f80d93",
       "triggerType" : "PUSH"
     }, {
       "hash" : "9a05a030c661def4a45b36370dbcfa5e786ed8dc",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5147",
       "triggerID" : "9a05a030c661def4a45b36370dbcfa5e786ed8dc",
       "triggerType" : "PUSH"
     }, {
       "hash" : "97bb191e646a7ddc62d105e08a7473c8e5561160",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5455",
       "triggerID" : "97bb191e646a7ddc62d105e08a7473c8e5561160",
       "triggerType" : "PUSH"
     }, {
       "hash" : "eea5ebcd848bd4e554134096073cbd88d9725395",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5458",
       "triggerID" : "eea5ebcd848bd4e554134096073cbd88d9725395",
       "triggerType" : "PUSH"
     }, {
       "hash" : "50079cc5a0bbe71de746f78a22180febf9a35e57",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5467",
       "triggerID" : "50079cc5a0bbe71de746f78a22180febf9a35e57",
       "triggerType" : "PUSH"
     }, {
       "hash" : "b1b4efe76749cd0dfa6b4aea94ca2ce9f0d15be1",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5469",
       "triggerID" : "b1b4efe76749cd0dfa6b4aea94ca2ce9f0d15be1",
       "triggerType" : "PUSH"
     }, {
       "hash" : "4642a985cde81555583a17880cd2462399338310",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5503",
       "triggerID" : "4642a985cde81555583a17880cd2462399338310",
       "triggerType" : "PUSH"
     }, {
       "hash" : "984744723761b8124aa003f23e65d4bb484a73c7",
       "status" : "SUCCESS",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5504",
       "triggerID" : "984744723761b8124aa003f23e65d4bb484a73c7",
       "triggerType" : "PUSH"
     }, {
       "hash" : "e91a84cc20e3655749b8cf9b69ed79d855aaedaf",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5522",
       "triggerID" : "e91a84cc20e3655749b8cf9b69ed79d855aaedaf",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * 1790d93b79cfdfc3f65a7805e444699736f80d93 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5122) 
   * 984744723761b8124aa003f23e65d4bb484a73c7 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5504) 
   * e91a84cc20e3655749b8cf9b69ed79d855aaedaf Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5522) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #12966: [FLINK-17427][table sql/planner]Support SupportsPartitionPushDown in …

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #12966:
URL: https://github.com/apache/flink/pull/12966#issuecomment-662861328


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "0f19df4d1fcbb4093b69d772628b67b81ebb443a",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4759",
       "triggerID" : "0f19df4d1fcbb4093b69d772628b67b81ebb443a",
       "triggerType" : "PUSH"
     }, {
       "hash" : "6697110fb14fef778707f74b66227df2953c487c",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5107",
       "triggerID" : "6697110fb14fef778707f74b66227df2953c487c",
       "triggerType" : "PUSH"
     }, {
       "hash" : "1790d93b79cfdfc3f65a7805e444699736f80d93",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5122",
       "triggerID" : "1790d93b79cfdfc3f65a7805e444699736f80d93",
       "triggerType" : "PUSH"
     }, {
       "hash" : "9a05a030c661def4a45b36370dbcfa5e786ed8dc",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5147",
       "triggerID" : "9a05a030c661def4a45b36370dbcfa5e786ed8dc",
       "triggerType" : "PUSH"
     }, {
       "hash" : "97bb191e646a7ddc62d105e08a7473c8e5561160",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5455",
       "triggerID" : "97bb191e646a7ddc62d105e08a7473c8e5561160",
       "triggerType" : "PUSH"
     }, {
       "hash" : "eea5ebcd848bd4e554134096073cbd88d9725395",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5458",
       "triggerID" : "eea5ebcd848bd4e554134096073cbd88d9725395",
       "triggerType" : "PUSH"
     }, {
       "hash" : "50079cc5a0bbe71de746f78a22180febf9a35e57",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5467",
       "triggerID" : "50079cc5a0bbe71de746f78a22180febf9a35e57",
       "triggerType" : "PUSH"
     }, {
       "hash" : "b1b4efe76749cd0dfa6b4aea94ca2ce9f0d15be1",
       "status" : "SUCCESS",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5469",
       "triggerID" : "b1b4efe76749cd0dfa6b4aea94ca2ce9f0d15be1",
       "triggerType" : "PUSH"
     }, {
       "hash" : "4642a985cde81555583a17880cd2462399338310",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5503",
       "triggerID" : "4642a985cde81555583a17880cd2462399338310",
       "triggerType" : "PUSH"
     }, {
       "hash" : "984744723761b8124aa003f23e65d4bb484a73c7",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5504",
       "triggerID" : "984744723761b8124aa003f23e65d4bb484a73c7",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * 1790d93b79cfdfc3f65a7805e444699736f80d93 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5122) 
   * b1b4efe76749cd0dfa6b4aea94ca2ce9f0d15be1 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5469) 
   * 4642a985cde81555583a17880cd2462399338310 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5503) 
   * 984744723761b8124aa003f23e65d4bb484a73c7 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5504) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] fsk119 commented on pull request #12966: [FLINK-17427][table sql/planner]Support SupportsPartitionPushDown in …

Posted by GitBox <gi...@apache.org>.
fsk119 commented on pull request #12966:
URL: https://github.com/apache/flink/pull/12966#issuecomment-675468974


   @flinkbot run azure


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #12966: [FLINK-17427][table sql/planner]Support SupportsPartitionPushDown in …

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #12966:
URL: https://github.com/apache/flink/pull/12966#issuecomment-662861328


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "0f19df4d1fcbb4093b69d772628b67b81ebb443a",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4759",
       "triggerID" : "0f19df4d1fcbb4093b69d772628b67b81ebb443a",
       "triggerType" : "PUSH"
     }, {
       "hash" : "6697110fb14fef778707f74b66227df2953c487c",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5107",
       "triggerID" : "6697110fb14fef778707f74b66227df2953c487c",
       "triggerType" : "PUSH"
     }, {
       "hash" : "1790d93b79cfdfc3f65a7805e444699736f80d93",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5122",
       "triggerID" : "1790d93b79cfdfc3f65a7805e444699736f80d93",
       "triggerType" : "PUSH"
     }, {
       "hash" : "9a05a030c661def4a45b36370dbcfa5e786ed8dc",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5147",
       "triggerID" : "9a05a030c661def4a45b36370dbcfa5e786ed8dc",
       "triggerType" : "PUSH"
     }, {
       "hash" : "97bb191e646a7ddc62d105e08a7473c8e5561160",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5455",
       "triggerID" : "97bb191e646a7ddc62d105e08a7473c8e5561160",
       "triggerType" : "PUSH"
     }, {
       "hash" : "eea5ebcd848bd4e554134096073cbd88d9725395",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5458",
       "triggerID" : "eea5ebcd848bd4e554134096073cbd88d9725395",
       "triggerType" : "PUSH"
     }, {
       "hash" : "50079cc5a0bbe71de746f78a22180febf9a35e57",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5467",
       "triggerID" : "50079cc5a0bbe71de746f78a22180febf9a35e57",
       "triggerType" : "PUSH"
     }, {
       "hash" : "b1b4efe76749cd0dfa6b4aea94ca2ce9f0d15be1",
       "status" : "SUCCESS",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5469",
       "triggerID" : "b1b4efe76749cd0dfa6b4aea94ca2ce9f0d15be1",
       "triggerType" : "PUSH"
     }, {
       "hash" : "4642a985cde81555583a17880cd2462399338310",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5503",
       "triggerID" : "4642a985cde81555583a17880cd2462399338310",
       "triggerType" : "PUSH"
     }, {
       "hash" : "984744723761b8124aa003f23e65d4bb484a73c7",
       "status" : "UNKNOWN",
       "url" : "TBD",
       "triggerID" : "984744723761b8124aa003f23e65d4bb484a73c7",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * 1790d93b79cfdfc3f65a7805e444699736f80d93 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5122) 
   * b1b4efe76749cd0dfa6b4aea94ca2ce9f0d15be1 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5469) 
   * 4642a985cde81555583a17880cd2462399338310 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5503) 
   * 984744723761b8124aa003f23e65d4bb484a73c7 UNKNOWN
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] wuchong merged pull request #12966: [FLINK-17427][table sql/planner]Support SupportsPartitionPushDown in …

Posted by GitBox <gi...@apache.org>.
wuchong merged pull request #12966:
URL: https://github.com/apache/flink/pull/12966


   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #12966: [FLINK-17427][table sql/planner]Support SupportsPartitionPushDown in …

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #12966:
URL: https://github.com/apache/flink/pull/12966#issuecomment-662861328


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "0f19df4d1fcbb4093b69d772628b67b81ebb443a",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4759",
       "triggerID" : "0f19df4d1fcbb4093b69d772628b67b81ebb443a",
       "triggerType" : "PUSH"
     }, {
       "hash" : "6697110fb14fef778707f74b66227df2953c487c",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5107",
       "triggerID" : "6697110fb14fef778707f74b66227df2953c487c",
       "triggerType" : "PUSH"
     }, {
       "hash" : "1790d93b79cfdfc3f65a7805e444699736f80d93",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5122",
       "triggerID" : "1790d93b79cfdfc3f65a7805e444699736f80d93",
       "triggerType" : "PUSH"
     }, {
       "hash" : "9a05a030c661def4a45b36370dbcfa5e786ed8dc",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5147",
       "triggerID" : "9a05a030c661def4a45b36370dbcfa5e786ed8dc",
       "triggerType" : "PUSH"
     }, {
       "hash" : "97bb191e646a7ddc62d105e08a7473c8e5561160",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5455",
       "triggerID" : "97bb191e646a7ddc62d105e08a7473c8e5561160",
       "triggerType" : "PUSH"
     }, {
       "hash" : "eea5ebcd848bd4e554134096073cbd88d9725395",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5458",
       "triggerID" : "eea5ebcd848bd4e554134096073cbd88d9725395",
       "triggerType" : "PUSH"
     }, {
       "hash" : "50079cc5a0bbe71de746f78a22180febf9a35e57",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5467",
       "triggerID" : "50079cc5a0bbe71de746f78a22180febf9a35e57",
       "triggerType" : "PUSH"
     }, {
       "hash" : "b1b4efe76749cd0dfa6b4aea94ca2ce9f0d15be1",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5469",
       "triggerID" : "b1b4efe76749cd0dfa6b4aea94ca2ce9f0d15be1",
       "triggerType" : "PUSH"
     }, {
       "hash" : "4642a985cde81555583a17880cd2462399338310",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5503",
       "triggerID" : "4642a985cde81555583a17880cd2462399338310",
       "triggerType" : "PUSH"
     }, {
       "hash" : "984744723761b8124aa003f23e65d4bb484a73c7",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5504",
       "triggerID" : "984744723761b8124aa003f23e65d4bb484a73c7",
       "triggerType" : "PUSH"
     }, {
       "hash" : "e91a84cc20e3655749b8cf9b69ed79d855aaedaf",
       "status" : "SUCCESS",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5522",
       "triggerID" : "e91a84cc20e3655749b8cf9b69ed79d855aaedaf",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * e91a84cc20e3655749b8cf9b69ed79d855aaedaf Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5522) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] godfreyhe commented on a change in pull request #12966: [FLINK-17427][table sql/planner]Support SupportsPartitionPushDown in …

Posted by GitBox <gi...@apache.org>.
godfreyhe commented on a change in pull request #12966:
URL: https://github.com/apache/flink/pull/12966#discussion_r464250373



##########
File path: flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/plan/rules/logical/PushPartitionIntoTableSourceScanRule.java
##########
@@ -0,0 +1,320 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.plan.rules.logical;
+
+import org.apache.flink.table.api.TableConfig;
+import org.apache.flink.table.api.TableException;
+import org.apache.flink.table.catalog.Catalog;
+import org.apache.flink.table.catalog.CatalogPartitionSpec;
+import org.apache.flink.table.catalog.CatalogTable;
+import org.apache.flink.table.catalog.ObjectIdentifier;
+import org.apache.flink.table.catalog.ObjectPath;
+import org.apache.flink.table.catalog.exceptions.PartitionNotExistException;
+import org.apache.flink.table.catalog.exceptions.TableNotExistException;
+import org.apache.flink.table.catalog.exceptions.TableNotPartitionedException;
+import org.apache.flink.table.catalog.stats.CatalogColumnStatistics;
+import org.apache.flink.table.catalog.stats.CatalogTableStatistics;
+import org.apache.flink.table.connector.source.DynamicTableSource;
+import org.apache.flink.table.connector.source.abilities.SupportsPartitionPushDown;
+import org.apache.flink.table.expressions.Expression;
+import org.apache.flink.table.expressions.ResolvedExpression;
+import org.apache.flink.table.plan.stats.TableStats;
+import org.apache.flink.table.planner.calcite.FlinkContext;
+import org.apache.flink.table.planner.calcite.FlinkTypeFactory;
+import org.apache.flink.table.planner.plan.schema.TableSourceTable;
+import org.apache.flink.table.planner.plan.stats.FlinkStatistic;
+import org.apache.flink.table.planner.plan.utils.FlinkRelOptUtil;
+import org.apache.flink.table.planner.plan.utils.PartitionPruner;
+import org.apache.flink.table.planner.plan.utils.RexNodeExtractor;
+import org.apache.flink.table.planner.plan.utils.RexNodeToExpressionConverter;
+import org.apache.flink.table.planner.utils.CatalogTableStatisticsConverter;
+import org.apache.flink.table.types.logical.LogicalType;
+
+import org.apache.calcite.plan.RelOptRule;
+import org.apache.calcite.plan.RelOptRuleCall;
+import org.apache.calcite.rel.core.Filter;
+import org.apache.calcite.rel.logical.LogicalTableScan;
+import org.apache.calcite.rel.type.RelDataType;
+import org.apache.calcite.rex.RexBuilder;
+import org.apache.calcite.rex.RexInputRef;
+import org.apache.calcite.rex.RexNode;
+import org.apache.calcite.rex.RexShuttle;
+import org.apache.calcite.rex.RexUtil;
+import org.apache.calcite.tools.RelBuilder;
+
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+import java.util.Map;
+import java.util.Optional;
+import java.util.TimeZone;
+import java.util.stream.Collectors;
+
+import scala.Option;
+import scala.Tuple2;
+import scala.collection.JavaConversions;
+import scala.collection.Seq;
+
+/**
+ * Planner rule that tries to push the partitions evaluated from the filter condition into a
+ * {@link LogicalTableScan}.
+ */
+public class PushPartitionIntoTableSourceScanRule extends RelOptRule {
+	public static final PushPartitionIntoTableSourceScanRule INSTANCE = new PushPartitionIntoTableSourceScanRule();
+
+	public PushPartitionIntoTableSourceScanRule(){
+		super(operand(Filter.class,
+				operand(LogicalTableScan.class, none())),
+			"PushPartitionTableSourceScanRule");
+	}
+
+	@Override
+	public boolean matches(RelOptRuleCall call) {
+		Filter filter = call.rel(0);
+		if (filter.getCondition() == null) {
+			return false;
+		}
+		TableSourceTable tableSourceTable = call.rel(1).getTable().unwrap(TableSourceTable.class);
+		if (tableSourceTable == null){
+			return false;
+		}
+		DynamicTableSource dynamicTableSource = tableSourceTable.tableSource();
+		if (!(dynamicTableSource instanceof SupportsPartitionPushDown)) {
+			return false;
+		}
+		CatalogTable catalogTable = tableSourceTable.catalogTable();
+		if (!catalogTable.isPartitioned() || catalogTable.getPartitionKeys().size() == 0) {
+			return false;
+		}
+		return Arrays.stream(tableSourceTable.extraDigests()).noneMatch(digest -> digest.startsWith("partitions=["));
+	}
+
+	@Override
+	public void onMatch(RelOptRuleCall call) {
+		Filter filter = call.rel(0);
+		LogicalTableScan scan = call.rel(1);
+		FlinkContext context = call.getPlanner().getContext().unwrap(FlinkContext.class);
+		TableSourceTable tableSourceTable = scan.getTable().unwrap(TableSourceTable.class);
+		// use new dynamic table source to push down
+		DynamicTableSource dynamicTableSource = tableSourceTable.tableSource().copy();
+		// fields to read partitions from catalog and build new statistic
+		Optional<Catalog> catalogOptional = context.getCatalogManager().getCatalog(tableSourceTable.tableIdentifier().getCatalogName());
+		ObjectIdentifier identifier = tableSourceTable.tableIdentifier();
+		ObjectPath tablePath = identifier.toObjectPath();
+		// fields used to prune
+		RelDataType inputFieldTypes = filter.getInput().getRowType();
+		List<String> inputFieldNames = inputFieldTypes.getFieldNames();
+
+		List<String> partitionFieldNames = tableSourceTable.catalogTable().getPartitionKeys();
+
+		RelBuilder relBuilder = call.builder();
+		RexBuilder rexBuilder = relBuilder.getRexBuilder();
+
+		Tuple2<Seq<RexNode>, Seq<RexNode>> allPredicates = RexNodeExtractor.extractPartitionPredicateList(
+			filter.getCondition(),
+			FlinkRelOptUtil.getMaxCnfNodeCount(scan),
+			inputFieldNames.toArray(new String[0]),
+			rexBuilder,
+			partitionFieldNames.toArray(new String[0])
+			);
+
+		RexNode partitionPredicate = RexUtil.composeConjunction(rexBuilder, JavaConversions.seqAsJavaList(allPredicates._1));
+
+		if (partitionPredicate.isAlwaysTrue()){
+			return;
+		}
+
+		List<LogicalType> partitionFieldTypes = partitionFieldNames.stream().map(name -> {
+			int index = inputFieldNames.indexOf(name);
+			if (index < 0) {
+				throw new TableException(String.format("Partitioned key '%s' isn't found in input columns. " +
+					"Validator should have checked that.", name));
+			}
+			return inputFieldTypes.getFieldList().get(index).getType(); })
+			.map(FlinkTypeFactory::toLogicalType).collect(Collectors.toList());
+
+		// get partitions from table source and prune
+		List<Map<String, String>> remainingPartitions = null;
+		Optional<List<Map<String, String>>> optionalPartitions = null;
+		try {
+			optionalPartitions = ((SupportsPartitionPushDown) dynamicTableSource).listPartitions();
+		} catch (UnsupportedOperationException e) {
+			// read partitions from catalog if table source doesn't support listPartitions operation.
+			if (!catalogOptional.isPresent()){
+				throw new TableException(
+					String.format("Table %s must come from a catalog, but %s is not a catalog",
+						identifier.asSummaryString(), identifier.getCatalogName()), e);
+			}
+		}
+		if (optionalPartitions != null) {
+			if (!optionalPartitions.isPresent() || optionalPartitions.get().size() == 0) {
+				// return if no partitions
+				return;
+			}
+			// get remaining partitions
+			remainingPartitions = internalPrunePartitions(
+				optionalPartitions.get(),
+				inputFieldNames,
+				partitionFieldNames,
+				partitionFieldTypes,
+				partitionPredicate,
+				context.getTableConfig());
+		} else {
+			RexNodeToExpressionConverter converter = new RexNodeToExpressionConverter(
+				inputFieldNames.toArray(new String[0]),
+				context.getFunctionCatalog(),
+				context.getCatalogManager(),
+				TimeZone.getTimeZone(context.getTableConfig().getLocalTimeZone()));
+			ArrayList<Expression> exprs = new ArrayList<>();
+			Option<ResolvedExpression> subExpr;
+			for (RexNode node: JavaConversions.seqAsJavaList(allPredicates._1)) {
+				subExpr = node.accept(converter);
+				if (!subExpr.isEmpty()) {
+					exprs.add(subExpr.get());
+				}
+			}
+			try {
+				if (exprs.size() > 0) {
+					remainingPartitions = catalogOptional.get().listPartitionsByFilter(tablePath, exprs)
+						.stream()
+						.map(CatalogPartitionSpec::getPartitionSpec)
+						.collect(Collectors.toList());
+				} else {
+					// no filter and get all partitions
+					List<Map<String, String>> partitions = catalogOptional.get().listPartitions(tablePath)
+						.stream()
+						.map(CatalogPartitionSpec::getPartitionSpec)
+						.collect(Collectors.toList());
+					// prune partitions
+					if (partitions.size() > 0) {
+						remainingPartitions = internalPrunePartitions(
+							partitions,
+							inputFieldNames,
+							partitionFieldNames,
+							partitionFieldTypes,
+							partitionPredicate,
+							context.getTableConfig());
+					} else {
+						return;
+					}
+				}
+			} catch (TableNotExistException e) {
+				throw new TableException(String.format("Table %s is not found in catalog.", identifier.asSummaryString()), e);
+			} catch (TableNotPartitionedException e) {
+				// no partitions in table source
+				return;
+			}
+		}

Review comment:
       I think it's better to split the logic into different sub-methods and simplify this part as follows (see the sketch below):
   1. get the partitions from the TableSource; if that succeeds, do partition pruning and build the new table scan, otherwise fall back to step 2
   2. check whether the catalog exists; if it does not, return, otherwise go to step 2.1
   2.1. try to get the partitions through the `listPartitionsByFilter` method; if that succeeds, build the new table scan, otherwise go to step 2.2
   2.2. try to get the partitions through the `listPartitions` method; if that fails, return, otherwise do partition pruning and build the new table scan.
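
   A rough sketch of that flow, with hypothetical helper names (`listPartitionsFromSource`, `pruneAndBuildScan`, `listPartitionsByFilterFromCatalog`, `listAllPartitionsFromCatalog`, `prunePartitions` are placeholders for the suggested sub-methods, not actual planner APIs):

   ```java
   // 1. ask the table source for its partitions first
   Optional<List<Map<String, String>>> fromSource = listPartitionsFromSource(dynamicTableSource);
   if (fromSource.isPresent()) {
       pruneAndBuildScan(call, prunePartitions(fromSource.get()));
       return;
   }
   // 2. fall back to the catalog; without one there is nothing more to do
   Optional<Catalog> catalog = context.getCatalogManager().getCatalog(identifier.getCatalogName());
   if (!catalog.isPresent()) {
       return;
   }
   try {
       // 2.1 let the catalog evaluate the partition filters directly
       pruneAndBuildScan(call, listPartitionsByFilterFromCatalog(catalog.get(), tablePath, partitionFilters));
   } catch (UnsupportedOperationException e) {
       // 2.2 read all partitions and prune them in the planner
       Optional<List<Map<String, String>>> all = listAllPartitionsFromCatalog(catalog.get(), tablePath);
       if (all.isPresent()) {
           pruneAndBuildScan(call, prunePartitions(all.get()));
       }
   }
   ```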




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #12966: [FLINK-17427][table sql/planner]Support SupportsPartitionPushDown in …

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #12966:
URL: https://github.com/apache/flink/pull/12966#issuecomment-662861328


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "0f19df4d1fcbb4093b69d772628b67b81ebb443a",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4759",
       "triggerID" : "0f19df4d1fcbb4093b69d772628b67b81ebb443a",
       "triggerType" : "PUSH"
     }, {
       "hash" : "6697110fb14fef778707f74b66227df2953c487c",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5107",
       "triggerID" : "6697110fb14fef778707f74b66227df2953c487c",
       "triggerType" : "PUSH"
     }, {
       "hash" : "1790d93b79cfdfc3f65a7805e444699736f80d93",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5122",
       "triggerID" : "1790d93b79cfdfc3f65a7805e444699736f80d93",
       "triggerType" : "PUSH"
     }, {
       "hash" : "9a05a030c661def4a45b36370dbcfa5e786ed8dc",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5147",
       "triggerID" : "9a05a030c661def4a45b36370dbcfa5e786ed8dc",
       "triggerType" : "PUSH"
     }, {
       "hash" : "97bb191e646a7ddc62d105e08a7473c8e5561160",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5455",
       "triggerID" : "97bb191e646a7ddc62d105e08a7473c8e5561160",
       "triggerType" : "PUSH"
     }, {
       "hash" : "eea5ebcd848bd4e554134096073cbd88d9725395",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5458",
       "triggerID" : "eea5ebcd848bd4e554134096073cbd88d9725395",
       "triggerType" : "PUSH"
     }, {
       "hash" : "50079cc5a0bbe71de746f78a22180febf9a35e57",
       "status" : "FAILURE",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5467",
       "triggerID" : "50079cc5a0bbe71de746f78a22180febf9a35e57",
       "triggerType" : "PUSH"
     }, {
       "hash" : "b1b4efe76749cd0dfa6b4aea94ca2ce9f0d15be1",
       "status" : "UNKNOWN",
       "url" : "TBD",
       "triggerID" : "b1b4efe76749cd0dfa6b4aea94ca2ce9f0d15be1",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * 1790d93b79cfdfc3f65a7805e444699736f80d93 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5122) 
   * 50079cc5a0bbe71de746f78a22180febf9a35e57 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5467) 
   * b1b4efe76749cd0dfa6b4aea94ca2ce9f0d15be1 UNKNOWN
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #12966: [FLINK-17427][table sql/planner]Support SupportsPartitionPushDown in …

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #12966:
URL: https://github.com/apache/flink/pull/12966#issuecomment-662861328


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "0f19df4d1fcbb4093b69d772628b67b81ebb443a",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4759",
       "triggerID" : "0f19df4d1fcbb4093b69d772628b67b81ebb443a",
       "triggerType" : "PUSH"
     }, {
       "hash" : "6697110fb14fef778707f74b66227df2953c487c",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5107",
       "triggerID" : "6697110fb14fef778707f74b66227df2953c487c",
       "triggerType" : "PUSH"
     }, {
       "hash" : "1790d93b79cfdfc3f65a7805e444699736f80d93",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5122",
       "triggerID" : "1790d93b79cfdfc3f65a7805e444699736f80d93",
       "triggerType" : "PUSH"
     }, {
       "hash" : "9a05a030c661def4a45b36370dbcfa5e786ed8dc",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5147",
       "triggerID" : "9a05a030c661def4a45b36370dbcfa5e786ed8dc",
       "triggerType" : "PUSH"
     }, {
       "hash" : "97bb191e646a7ddc62d105e08a7473c8e5561160",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5455",
       "triggerID" : "97bb191e646a7ddc62d105e08a7473c8e5561160",
       "triggerType" : "PUSH"
     }, {
       "hash" : "eea5ebcd848bd4e554134096073cbd88d9725395",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5458",
       "triggerID" : "eea5ebcd848bd4e554134096073cbd88d9725395",
       "triggerType" : "PUSH"
     }, {
       "hash" : "50079cc5a0bbe71de746f78a22180febf9a35e57",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5467",
       "triggerID" : "50079cc5a0bbe71de746f78a22180febf9a35e57",
       "triggerType" : "PUSH"
     }, {
       "hash" : "b1b4efe76749cd0dfa6b4aea94ca2ce9f0d15be1",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5469",
       "triggerID" : "b1b4efe76749cd0dfa6b4aea94ca2ce9f0d15be1",
       "triggerType" : "PUSH"
     }, {
       "hash" : "4642a985cde81555583a17880cd2462399338310",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5503",
       "triggerID" : "4642a985cde81555583a17880cd2462399338310",
       "triggerType" : "PUSH"
     }, {
       "hash" : "984744723761b8124aa003f23e65d4bb484a73c7",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5504",
       "triggerID" : "984744723761b8124aa003f23e65d4bb484a73c7",
       "triggerType" : "PUSH"
     }, {
       "hash" : "e91a84cc20e3655749b8cf9b69ed79d855aaedaf",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5522",
       "triggerID" : "e91a84cc20e3655749b8cf9b69ed79d855aaedaf",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d02272061d2264ee74d67da5d9f0e1524f1c7d52",
       "status" : "FAILURE",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5657",
       "triggerID" : "d02272061d2264ee74d67da5d9f0e1524f1c7d52",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d02272061d2264ee74d67da5d9f0e1524f1c7d52",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5684",
       "triggerID" : "675468974",
       "triggerType" : "MANUAL"
     } ]
   }-->
   ## CI report:
   
   * d02272061d2264ee74d67da5d9f0e1524f1c7d52 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5657) Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5684) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #12966: [FLINK-17427][table sql/planner]Support SupportsPartitionPushDown in …

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #12966:
URL: https://github.com/apache/flink/pull/12966#issuecomment-662861328


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "0f19df4d1fcbb4093b69d772628b67b81ebb443a",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4759",
       "triggerID" : "0f19df4d1fcbb4093b69d772628b67b81ebb443a",
       "triggerType" : "PUSH"
     }, {
       "hash" : "6697110fb14fef778707f74b66227df2953c487c",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5107",
       "triggerID" : "6697110fb14fef778707f74b66227df2953c487c",
       "triggerType" : "PUSH"
     }, {
       "hash" : "1790d93b79cfdfc3f65a7805e444699736f80d93",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5122",
       "triggerID" : "1790d93b79cfdfc3f65a7805e444699736f80d93",
       "triggerType" : "PUSH"
     }, {
       "hash" : "9a05a030c661def4a45b36370dbcfa5e786ed8dc",
       "status" : "SUCCESS",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5147",
       "triggerID" : "9a05a030c661def4a45b36370dbcfa5e786ed8dc",
       "triggerType" : "PUSH"
     }, {
       "hash" : "97bb191e646a7ddc62d105e08a7473c8e5561160",
       "status" : "UNKNOWN",
       "url" : "TBD",
       "triggerID" : "97bb191e646a7ddc62d105e08a7473c8e5561160",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * 1790d93b79cfdfc3f65a7805e444699736f80d93 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5122) 
   * 9a05a030c661def4a45b36370dbcfa5e786ed8dc Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5147) 
   * 97bb191e646a7ddc62d105e08a7473c8e5561160 UNKNOWN
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #12966: [FLINK-17427][table sql/planner]Support SupportsPartitionPushDown in …

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #12966:
URL: https://github.com/apache/flink/pull/12966#issuecomment-662861328


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "0f19df4d1fcbb4093b69d772628b67b81ebb443a",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4759",
       "triggerID" : "0f19df4d1fcbb4093b69d772628b67b81ebb443a",
       "triggerType" : "PUSH"
     }, {
       "hash" : "6697110fb14fef778707f74b66227df2953c487c",
       "status" : "SUCCESS",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5107",
       "triggerID" : "6697110fb14fef778707f74b66227df2953c487c",
       "triggerType" : "PUSH"
     }, {
       "hash" : "1790d93b79cfdfc3f65a7805e444699736f80d93",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5122",
       "triggerID" : "1790d93b79cfdfc3f65a7805e444699736f80d93",
       "triggerType" : "PUSH"
     }, {
       "hash" : "9a05a030c661def4a45b36370dbcfa5e786ed8dc",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5147",
       "triggerID" : "9a05a030c661def4a45b36370dbcfa5e786ed8dc",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * 6697110fb14fef778707f74b66227df2953c487c Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5107) 
   * 1790d93b79cfdfc3f65a7805e444699736f80d93 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5122) 
   * 9a05a030c661def4a45b36370dbcfa5e786ed8dc Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5147) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #12966: [FLINK-17427][table sql/planner]Support SupportsPartitionPushDown in …

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #12966:
URL: https://github.com/apache/flink/pull/12966#issuecomment-662861328


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "0f19df4d1fcbb4093b69d772628b67b81ebb443a",
       "status" : "SUCCESS",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4759",
       "triggerID" : "0f19df4d1fcbb4093b69d772628b67b81ebb443a",
       "triggerType" : "PUSH"
     }, {
       "hash" : "6697110fb14fef778707f74b66227df2953c487c",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5107",
       "triggerID" : "6697110fb14fef778707f74b66227df2953c487c",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * 0f19df4d1fcbb4093b69d772628b67b81ebb443a Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4759) 
   * 6697110fb14fef778707f74b66227df2953c487c Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5107) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #12966: [FLINK-17427][table sql/planner]Support SupportsPartitionPushDown in …

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #12966:
URL: https://github.com/apache/flink/pull/12966#issuecomment-662861328


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "0f19df4d1fcbb4093b69d772628b67b81ebb443a",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4759",
       "triggerID" : "0f19df4d1fcbb4093b69d772628b67b81ebb443a",
       "triggerType" : "PUSH"
     }, {
       "hash" : "6697110fb14fef778707f74b66227df2953c487c",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5107",
       "triggerID" : "6697110fb14fef778707f74b66227df2953c487c",
       "triggerType" : "PUSH"
     }, {
       "hash" : "1790d93b79cfdfc3f65a7805e444699736f80d93",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5122",
       "triggerID" : "1790d93b79cfdfc3f65a7805e444699736f80d93",
       "triggerType" : "PUSH"
     }, {
       "hash" : "9a05a030c661def4a45b36370dbcfa5e786ed8dc",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5147",
       "triggerID" : "9a05a030c661def4a45b36370dbcfa5e786ed8dc",
       "triggerType" : "PUSH"
     }, {
       "hash" : "97bb191e646a7ddc62d105e08a7473c8e5561160",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5455",
       "triggerID" : "97bb191e646a7ddc62d105e08a7473c8e5561160",
       "triggerType" : "PUSH"
     }, {
       "hash" : "eea5ebcd848bd4e554134096073cbd88d9725395",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5458",
       "triggerID" : "eea5ebcd848bd4e554134096073cbd88d9725395",
       "triggerType" : "PUSH"
     }, {
       "hash" : "50079cc5a0bbe71de746f78a22180febf9a35e57",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5467",
       "triggerID" : "50079cc5a0bbe71de746f78a22180febf9a35e57",
       "triggerType" : "PUSH"
     }, {
       "hash" : "b1b4efe76749cd0dfa6b4aea94ca2ce9f0d15be1",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5469",
       "triggerID" : "b1b4efe76749cd0dfa6b4aea94ca2ce9f0d15be1",
       "triggerType" : "PUSH"
     }, {
       "hash" : "4642a985cde81555583a17880cd2462399338310",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5503",
       "triggerID" : "4642a985cde81555583a17880cd2462399338310",
       "triggerType" : "PUSH"
     }, {
       "hash" : "984744723761b8124aa003f23e65d4bb484a73c7",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5504",
       "triggerID" : "984744723761b8124aa003f23e65d4bb484a73c7",
       "triggerType" : "PUSH"
     }, {
       "hash" : "e91a84cc20e3655749b8cf9b69ed79d855aaedaf",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5522",
       "triggerID" : "e91a84cc20e3655749b8cf9b69ed79d855aaedaf",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d02272061d2264ee74d67da5d9f0e1524f1c7d52",
       "status" : "FAILURE",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5657",
       "triggerID" : "d02272061d2264ee74d67da5d9f0e1524f1c7d52",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * d02272061d2264ee74d67da5d9f0e1524f1c7d52 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5657) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #12966: [FLINK-17427][table sql/planner]Support SupportsPartitionPushDown in …

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #12966:
URL: https://github.com/apache/flink/pull/12966#issuecomment-662861328


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "0f19df4d1fcbb4093b69d772628b67b81ebb443a",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4759",
       "triggerID" : "0f19df4d1fcbb4093b69d772628b67b81ebb443a",
       "triggerType" : "PUSH"
     }, {
       "hash" : "6697110fb14fef778707f74b66227df2953c487c",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5107",
       "triggerID" : "6697110fb14fef778707f74b66227df2953c487c",
       "triggerType" : "PUSH"
     }, {
       "hash" : "1790d93b79cfdfc3f65a7805e444699736f80d93",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5122",
       "triggerID" : "1790d93b79cfdfc3f65a7805e444699736f80d93",
       "triggerType" : "PUSH"
     }, {
       "hash" : "9a05a030c661def4a45b36370dbcfa5e786ed8dc",
       "status" : "SUCCESS",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5147",
       "triggerID" : "9a05a030c661def4a45b36370dbcfa5e786ed8dc",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * 1790d93b79cfdfc3f65a7805e444699736f80d93 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5122) 
   * 9a05a030c661def4a45b36370dbcfa5e786ed8dc Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5147) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #12966: [FLINK-17427][table sql/planner]Support SupportsPartitionPushDown in …

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #12966:
URL: https://github.com/apache/flink/pull/12966#issuecomment-662861328


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "0f19df4d1fcbb4093b69d772628b67b81ebb443a",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4759",
       "triggerID" : "0f19df4d1fcbb4093b69d772628b67b81ebb443a",
       "triggerType" : "PUSH"
     }, {
       "hash" : "6697110fb14fef778707f74b66227df2953c487c",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5107",
       "triggerID" : "6697110fb14fef778707f74b66227df2953c487c",
       "triggerType" : "PUSH"
     }, {
       "hash" : "1790d93b79cfdfc3f65a7805e444699736f80d93",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5122",
       "triggerID" : "1790d93b79cfdfc3f65a7805e444699736f80d93",
       "triggerType" : "PUSH"
     }, {
       "hash" : "9a05a030c661def4a45b36370dbcfa5e786ed8dc",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5147",
       "triggerID" : "9a05a030c661def4a45b36370dbcfa5e786ed8dc",
       "triggerType" : "PUSH"
     }, {
       "hash" : "97bb191e646a7ddc62d105e08a7473c8e5561160",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5455",
       "triggerID" : "97bb191e646a7ddc62d105e08a7473c8e5561160",
       "triggerType" : "PUSH"
     }, {
       "hash" : "eea5ebcd848bd4e554134096073cbd88d9725395",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5458",
       "triggerID" : "eea5ebcd848bd4e554134096073cbd88d9725395",
       "triggerType" : "PUSH"
     }, {
       "hash" : "50079cc5a0bbe71de746f78a22180febf9a35e57",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5467",
       "triggerID" : "50079cc5a0bbe71de746f78a22180febf9a35e57",
       "triggerType" : "PUSH"
     }, {
       "hash" : "b1b4efe76749cd0dfa6b4aea94ca2ce9f0d15be1",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5469",
       "triggerID" : "b1b4efe76749cd0dfa6b4aea94ca2ce9f0d15be1",
       "triggerType" : "PUSH"
     }, {
       "hash" : "4642a985cde81555583a17880cd2462399338310",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5503",
       "triggerID" : "4642a985cde81555583a17880cd2462399338310",
       "triggerType" : "PUSH"
     }, {
       "hash" : "984744723761b8124aa003f23e65d4bb484a73c7",
       "status" : "SUCCESS",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5504",
       "triggerID" : "984744723761b8124aa003f23e65d4bb484a73c7",
       "triggerType" : "PUSH"
     }, {
       "hash" : "e91a84cc20e3655749b8cf9b69ed79d855aaedaf",
       "status" : "UNKNOWN",
       "url" : "TBD",
       "triggerID" : "e91a84cc20e3655749b8cf9b69ed79d855aaedaf",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * 1790d93b79cfdfc3f65a7805e444699736f80d93 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5122) 
   * 984744723761b8124aa003f23e65d4bb484a73c7 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5504) 
   * e91a84cc20e3655749b8cf9b69ed79d855aaedaf UNKNOWN
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #12966: [FLINK-17427][table sql/planner]Support SupportsPartitionPushDown in …

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #12966:
URL: https://github.com/apache/flink/pull/12966#issuecomment-662861328


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "0f19df4d1fcbb4093b69d772628b67b81ebb443a",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4759",
       "triggerID" : "0f19df4d1fcbb4093b69d772628b67b81ebb443a",
       "triggerType" : "PUSH"
     }, {
       "hash" : "6697110fb14fef778707f74b66227df2953c487c",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5107",
       "triggerID" : "6697110fb14fef778707f74b66227df2953c487c",
       "triggerType" : "PUSH"
     }, {
       "hash" : "1790d93b79cfdfc3f65a7805e444699736f80d93",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5122",
       "triggerID" : "1790d93b79cfdfc3f65a7805e444699736f80d93",
       "triggerType" : "PUSH"
     }, {
       "hash" : "9a05a030c661def4a45b36370dbcfa5e786ed8dc",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5147",
       "triggerID" : "9a05a030c661def4a45b36370dbcfa5e786ed8dc",
       "triggerType" : "PUSH"
     }, {
       "hash" : "97bb191e646a7ddc62d105e08a7473c8e5561160",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5455",
       "triggerID" : "97bb191e646a7ddc62d105e08a7473c8e5561160",
       "triggerType" : "PUSH"
     }, {
       "hash" : "eea5ebcd848bd4e554134096073cbd88d9725395",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5458",
       "triggerID" : "eea5ebcd848bd4e554134096073cbd88d9725395",
       "triggerType" : "PUSH"
     }, {
       "hash" : "50079cc5a0bbe71de746f78a22180febf9a35e57",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5467",
       "triggerID" : "50079cc5a0bbe71de746f78a22180febf9a35e57",
       "triggerType" : "PUSH"
     }, {
       "hash" : "b1b4efe76749cd0dfa6b4aea94ca2ce9f0d15be1",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5469",
       "triggerID" : "b1b4efe76749cd0dfa6b4aea94ca2ce9f0d15be1",
       "triggerType" : "PUSH"
     }, {
       "hash" : "4642a985cde81555583a17880cd2462399338310",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5503",
       "triggerID" : "4642a985cde81555583a17880cd2462399338310",
       "triggerType" : "PUSH"
     }, {
       "hash" : "984744723761b8124aa003f23e65d4bb484a73c7",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5504",
       "triggerID" : "984744723761b8124aa003f23e65d4bb484a73c7",
       "triggerType" : "PUSH"
     }, {
       "hash" : "e91a84cc20e3655749b8cf9b69ed79d855aaedaf",
       "status" : "SUCCESS",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5522",
       "triggerID" : "e91a84cc20e3655749b8cf9b69ed79d855aaedaf",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * 1790d93b79cfdfc3f65a7805e444699736f80d93 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5122) 
   * e91a84cc20e3655749b8cf9b69ed79d855aaedaf Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5522) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] godfreyhe commented on a change in pull request #12966: [FLINK-17427][table sql/planner]Support SupportsPartitionPushDown in …

Posted by GitBox <gi...@apache.org>.
godfreyhe commented on a change in pull request #12966:
URL: https://github.com/apache/flink/pull/12966#discussion_r470441955



##########
File path: flink-table/flink-table-planner-blink/src/test/scala/org/apache/flink/table/planner/runtime/batch/sql/PartitionableSourceITCase.scala
##########
@@ -0,0 +1,135 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.runtime.batch.sql
+
+import java.util
+
+import org.apache.flink.table.catalog.{CatalogPartitionImpl, CatalogPartitionSpec, ObjectPath}
+import org.apache.flink.table.planner.factories.{TestValuesCatalog, TestValuesTableFactory}
+import org.apache.flink.table.planner.runtime.utils.BatchTestBase
+import org.apache.flink.table.planner.runtime.utils.BatchTestBase.row
+import org.junit.{Before, Test}
+import org.junit.runner.RunWith
+import org.junit.runners.Parameterized
+
+import scala.collection.JavaConversions._
+
+@RunWith(classOf[Parameterized])
+class PartitionableSourceITCase(
+  val sourceFetchPartitions: Boolean,
+  val useCatalogFilter: Boolean) extends BatchTestBase{
+
+  @Before
+  override def before() : Unit = {
+    super.before()
+    env.setParallelism(1) // set sink parallelism to 1
+    val data = Seq(
+      row(1, "ZhangSan", "A", 1),
+      row(2, "LiSi", "A", 1),
+      row(3, "Jack", "A", 2),
+      row(4, "Tom", "B", 3),
+      row(5, "Vivi", "C", 1)
+    )
+    val myTableDataId = TestValuesTableFactory.registerData(data)
+
+    val ddlTemp =
+      s"""
+        |CREATE TABLE MyTable (
+        |  id int,
+        |  name string,
+        |  part1 string,
+        |  part2 int,
+        |  virtualField as part2 + 1)
+        |  partitioned by (part1, part2)
+        |  with (
+        |    'connector' = 'values',
+        |    'data-id' = '$myTableDataId',
+        |    'bounded' = 'true',
+        |    'partition-list' = '%s'
+        |)
+        |""".stripMargin
+
+    if (sourceFetchPartitions) {
+      val partitions = "part1:A,part2:1;part1:A,part2:2;part1:B,part2:3;part1:C,part2:1"
+      tEnv.executeSql(String.format(ddlTemp, partitions))
+    } else {
+      tEnv.executeSql("drop catalog default_catalog")
+      val catalog =
+        new TestValuesCatalog("default_catalog", "default_database", useCatalogFilter);
+      tEnv.registerCatalog("default_catalog", catalog)
+      tEnv.useCatalog("default_catalog")

Review comment:
       Just create a new catalog with a different catalog name and set it as the default, instead of dropping and re-creating `default_catalog`; see the sketch below.
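
   A minimal illustration of that suggestion (shown with the Java Table API for brevity, although the test itself is Scala; the catalog and database names are placeholders, and `tEnv`/`useCatalogFilter` come from the test):

```java
// Register a separately named catalog and switch to it, rather than dropping
// and re-registering "default_catalog".
TestValuesCatalog catalog = new TestValuesCatalog("test_catalog", "test_database", useCatalogFilter);
tEnv.registerCatalog("test_catalog", catalog);
tEnv.useCatalog("test_catalog");
```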




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] godfreyhe commented on a change in pull request #12966: [FLINK-17427][table sql/planner]Support SupportsPartitionPushDown in …

Posted by GitBox <gi...@apache.org>.
godfreyhe commented on a change in pull request #12966:
URL: https://github.com/apache/flink/pull/12966#discussion_r467726763



##########
File path: flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/plan/rules/logical/PushPartitionIntoTableSourceScanRule.java
##########
@@ -0,0 +1,325 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.plan.rules.logical;
+
+import org.apache.flink.table.api.TableException;
+import org.apache.flink.table.catalog.Catalog;
+import org.apache.flink.table.catalog.CatalogPartitionSpec;
+import org.apache.flink.table.catalog.CatalogTable;
+import org.apache.flink.table.catalog.ObjectIdentifier;
+import org.apache.flink.table.catalog.ObjectPath;
+import org.apache.flink.table.catalog.exceptions.CatalogException;
+import org.apache.flink.table.catalog.exceptions.PartitionNotExistException;
+import org.apache.flink.table.catalog.exceptions.TableNotExistException;
+import org.apache.flink.table.catalog.exceptions.TableNotPartitionedException;
+import org.apache.flink.table.catalog.stats.CatalogColumnStatistics;
+import org.apache.flink.table.catalog.stats.CatalogTableStatistics;
+import org.apache.flink.table.connector.source.DynamicTableSource;
+import org.apache.flink.table.connector.source.abilities.SupportsPartitionPushDown;
+import org.apache.flink.table.expressions.Expression;
+import org.apache.flink.table.expressions.ResolvedExpression;
+import org.apache.flink.table.plan.stats.TableStats;
+import org.apache.flink.table.planner.calcite.FlinkContext;
+import org.apache.flink.table.planner.calcite.FlinkTypeFactory;
+import org.apache.flink.table.planner.plan.schema.TableSourceTable;
+import org.apache.flink.table.planner.plan.stats.FlinkStatistic;
+import org.apache.flink.table.planner.plan.utils.FlinkRelOptUtil;
+import org.apache.flink.table.planner.plan.utils.PartitionPruner;
+import org.apache.flink.table.planner.plan.utils.RexNodeExtractor;
+import org.apache.flink.table.planner.plan.utils.RexNodeToExpressionConverter;
+import org.apache.flink.table.planner.utils.CatalogTableStatisticsConverter;
+import org.apache.flink.table.types.logical.LogicalType;
+
+import org.apache.calcite.plan.RelOptRule;
+import org.apache.calcite.plan.RelOptRuleCall;
+import org.apache.calcite.rel.core.Filter;
+import org.apache.calcite.rel.logical.LogicalTableScan;
+import org.apache.calcite.rel.type.RelDataType;
+import org.apache.calcite.rex.RexBuilder;
+import org.apache.calcite.rex.RexInputRef;
+import org.apache.calcite.rex.RexNode;
+import org.apache.calcite.rex.RexShuttle;
+import org.apache.calcite.rex.RexUtil;
+import org.apache.calcite.tools.RelBuilder;
+
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+import java.util.Map;
+import java.util.Optional;
+import java.util.TimeZone;
+import java.util.function.Function;
+import java.util.stream.Collectors;
+
+import scala.Option;
+import scala.Tuple2;
+import scala.collection.JavaConversions;
+import scala.collection.Seq;
+
+/**
+ * Planner rule that tries to push partition evaluated by filter condition into a {@link LogicalTableScan}.
+*/
+public class PushPartitionIntoTableSourceScanRule extends RelOptRule {
+	public static final PushPartitionIntoTableSourceScanRule INSTANCE = new PushPartitionIntoTableSourceScanRule();
+
+	public PushPartitionIntoTableSourceScanRule(){
+		super(operand(Filter.class,
+				operand(LogicalTableScan.class, none())),
+			"PushPartitionTableSourceScanRule");
+	}
+
+	@Override
+	public boolean matches(RelOptRuleCall call) {
+		Filter filter = call.rel(0);
+		if (filter.getCondition() == null) {
+			return false;
+		}
+		TableSourceTable tableSourceTable = call.rel(1).getTable().unwrap(TableSourceTable.class);
+		if (tableSourceTable == null){
+			return false;
+		}
+		DynamicTableSource dynamicTableSource = tableSourceTable.tableSource();
+		if (!(dynamicTableSource instanceof SupportsPartitionPushDown)) {
+			return false;
+		}
+		CatalogTable catalogTable = tableSourceTable.catalogTable();
+		if (!catalogTable.isPartitioned() || catalogTable.getPartitionKeys().isEmpty()) {
+			return false;
+		}
+		return Arrays.stream(tableSourceTable.extraDigests()).noneMatch(digest -> digest.startsWith("partitions=["));
+	}
+
+	@Override
+	public void onMatch(RelOptRuleCall call) {
+		Filter filter = call.rel(0);
+		LogicalTableScan scan = call.rel(1);
+		FlinkContext context = call.getPlanner().getContext().unwrap(FlinkContext.class);
+		TableSourceTable tableSourceTable = scan.getTable().unwrap(TableSourceTable.class);
+
+		// build pruner
+		RelDataType inputFieldTypes = filter.getInput().getRowType();
+		List<String> inputFieldNames = inputFieldTypes.getFieldNames();
+		List<String> partitionFieldNames = tableSourceTable.catalogTable().getPartitionKeys();
+		RelBuilder relBuilder = call.builder();
+		RexBuilder rexBuilder = relBuilder.getRexBuilder();
+		Tuple2<Seq<RexNode>, Seq<RexNode>> allPredicates = RexNodeExtractor.extractPartitionPredicateList(
+			filter.getCondition(),
+			FlinkRelOptUtil.getMaxCnfNodeCount(scan),
+			inputFieldNames.toArray(new String[0]),
+			rexBuilder,
+			partitionFieldNames.toArray(new String[0])
+			);
+		RexNode partitionPredicate = RexUtil.composeConjunction(rexBuilder, JavaConversions.seqAsJavaList(allPredicates._1));
+
+		if (partitionPredicate.isAlwaysTrue()){
+			return;
+		}
+
+		List<LogicalType> partitionFieldTypes = partitionFieldNames.stream().map(name -> {
+			int index  = inputFieldNames.indexOf(name);
+			if (index < 0) {
+				throw new TableException(String.format("Partitioned key '%s' isn't found in input columns. " +
+					"Validator should have checked that.", name));
+			}			return inputFieldTypes.getFieldList().get(index).getType(); })
+			.map(FlinkTypeFactory::toLogicalType).collect(Collectors.toList());
+
+		RexNode finalPartitionPredicate = adjustPartitionPredicate(inputFieldNames, partitionFieldNames, partitionPredicate);
+		Function<List<Map<String, String>>, List<Map<String, String>>> defaultPruner = partitions -> PartitionPruner.prunePartitions(
+			context.getTableConfig(),
+			partitionFieldNames.toArray(new String[0]),
+			partitionFieldTypes.toArray(new LogicalType[0]),
+			partitions,
+			finalPartitionPredicate);
+
+		// get partitions from table/catalog and prune
+		Optional<Catalog> catalogOptional = context.getCatalogManager().getCatalog(tableSourceTable.tableIdentifier().getCatalogName());
+		List<Map<String, String>> remainingPartitions = null;
+		Optional<List<Map<String, String>>> optionalPartitions;
+		// fields to read partitions from catalog and build new statistic
+		DynamicTableSource dynamicTableSource = tableSourceTable.tableSource().copy();
+		ObjectIdentifier identifier = tableSourceTable.tableIdentifier();
+		ObjectPath tablePath = identifier.toObjectPath();
+		try {
+			optionalPartitions = ((SupportsPartitionPushDown) dynamicTableSource).listPartitions();
+			if (optionalPartitions.isPresent() && !optionalPartitions.get().isEmpty()) {
+				remainingPartitions = defaultPruner.apply(optionalPartitions.get());
+			}
+		} catch (UnsupportedOperationException e) {
+			// check catalog whether is available
+			// we will read partitions from catalog if table doesn't support listPartitions.
+			if (!catalogOptional.isPresent()){

Review comment:
       nit: please add a blank between `)` and `{`; there are many similar cases, e.g. line 240, line 250, etc.

##########
File path: flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/plan/rules/logical/PushPartitionIntoTableSourceScanRule.java
##########
@@ -0,0 +1,325 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.plan.rules.logical;
+
+import org.apache.flink.table.api.TableException;
+import org.apache.flink.table.catalog.Catalog;
+import org.apache.flink.table.catalog.CatalogPartitionSpec;
+import org.apache.flink.table.catalog.CatalogTable;
+import org.apache.flink.table.catalog.ObjectIdentifier;
+import org.apache.flink.table.catalog.ObjectPath;
+import org.apache.flink.table.catalog.exceptions.CatalogException;
+import org.apache.flink.table.catalog.exceptions.PartitionNotExistException;
+import org.apache.flink.table.catalog.exceptions.TableNotExistException;
+import org.apache.flink.table.catalog.exceptions.TableNotPartitionedException;
+import org.apache.flink.table.catalog.stats.CatalogColumnStatistics;
+import org.apache.flink.table.catalog.stats.CatalogTableStatistics;
+import org.apache.flink.table.connector.source.DynamicTableSource;
+import org.apache.flink.table.connector.source.abilities.SupportsPartitionPushDown;
+import org.apache.flink.table.expressions.Expression;
+import org.apache.flink.table.expressions.ResolvedExpression;
+import org.apache.flink.table.plan.stats.TableStats;
+import org.apache.flink.table.planner.calcite.FlinkContext;
+import org.apache.flink.table.planner.calcite.FlinkTypeFactory;
+import org.apache.flink.table.planner.plan.schema.TableSourceTable;
+import org.apache.flink.table.planner.plan.stats.FlinkStatistic;
+import org.apache.flink.table.planner.plan.utils.FlinkRelOptUtil;
+import org.apache.flink.table.planner.plan.utils.PartitionPruner;
+import org.apache.flink.table.planner.plan.utils.RexNodeExtractor;
+import org.apache.flink.table.planner.plan.utils.RexNodeToExpressionConverter;
+import org.apache.flink.table.planner.utils.CatalogTableStatisticsConverter;
+import org.apache.flink.table.types.logical.LogicalType;
+
+import org.apache.calcite.plan.RelOptRule;
+import org.apache.calcite.plan.RelOptRuleCall;
+import org.apache.calcite.rel.core.Filter;
+import org.apache.calcite.rel.logical.LogicalTableScan;
+import org.apache.calcite.rel.type.RelDataType;
+import org.apache.calcite.rex.RexBuilder;
+import org.apache.calcite.rex.RexInputRef;
+import org.apache.calcite.rex.RexNode;
+import org.apache.calcite.rex.RexShuttle;
+import org.apache.calcite.rex.RexUtil;
+import org.apache.calcite.tools.RelBuilder;
+
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+import java.util.Map;
+import java.util.Optional;
+import java.util.TimeZone;
+import java.util.function.Function;
+import java.util.stream.Collectors;
+
+import scala.Option;
+import scala.Tuple2;
+import scala.collection.JavaConversions;
+import scala.collection.Seq;
+
+/**
+ * Planner rule that tries to push partition evaluated by filter condition into a {@link LogicalTableScan}.
+*/
+public class PushPartitionIntoTableSourceScanRule extends RelOptRule {
+	public static final PushPartitionIntoTableSourceScanRule INSTANCE = new PushPartitionIntoTableSourceScanRule();
+
+	public PushPartitionIntoTableSourceScanRule(){
+		super(operand(Filter.class,
+				operand(LogicalTableScan.class, none())),
+			"PushPartitionTableSourceScanRule");
+	}
+
+	@Override
+	public boolean matches(RelOptRuleCall call) {
+		Filter filter = call.rel(0);
+		if (filter.getCondition() == null) {
+			return false;
+		}
+		TableSourceTable tableSourceTable = call.rel(1).getTable().unwrap(TableSourceTable.class);
+		if (tableSourceTable == null){
+			return false;
+		}
+		DynamicTableSource dynamicTableSource = tableSourceTable.tableSource();
+		if (!(dynamicTableSource instanceof SupportsPartitionPushDown)) {
+			return false;
+		}
+		CatalogTable catalogTable = tableSourceTable.catalogTable();
+		if (!catalogTable.isPartitioned() || catalogTable.getPartitionKeys().isEmpty()) {
+			return false;
+		}
+		return Arrays.stream(tableSourceTable.extraDigests()).noneMatch(digest -> digest.startsWith("partitions=["));
+	}
+
+	@Override
+	public void onMatch(RelOptRuleCall call) {
+		Filter filter = call.rel(0);
+		LogicalTableScan scan = call.rel(1);
+		FlinkContext context = call.getPlanner().getContext().unwrap(FlinkContext.class);
+		TableSourceTable tableSourceTable = scan.getTable().unwrap(TableSourceTable.class);
+
+		// build pruner
+		RelDataType inputFieldTypes = filter.getInput().getRowType();
+		List<String> inputFieldNames = inputFieldTypes.getFieldNames();
+		List<String> partitionFieldNames = tableSourceTable.catalogTable().getPartitionKeys();
+		RelBuilder relBuilder = call.builder();
+		RexBuilder rexBuilder = relBuilder.getRexBuilder();
+		Tuple2<Seq<RexNode>, Seq<RexNode>> allPredicates = RexNodeExtractor.extractPartitionPredicateList(
+			filter.getCondition(),
+			FlinkRelOptUtil.getMaxCnfNodeCount(scan),
+			inputFieldNames.toArray(new String[0]),
+			rexBuilder,
+			partitionFieldNames.toArray(new String[0])
+			);
+		RexNode partitionPredicate = RexUtil.composeConjunction(rexBuilder, JavaConversions.seqAsJavaList(allPredicates._1));
+
+		if (partitionPredicate.isAlwaysTrue()){
+			return;
+		}
+
+		List<LogicalType> partitionFieldTypes = partitionFieldNames.stream().map(name -> {
+			int index  = inputFieldNames.indexOf(name);
+			if (index < 0) {
+				throw new TableException(String.format("Partitioned key '%s' isn't found in input columns. " +
+					"Validator should have checked that.", name));
+			}			return inputFieldTypes.getFieldList().get(index).getType(); })
+			.map(FlinkTypeFactory::toLogicalType).collect(Collectors.toList());
+
+		RexNode finalPartitionPredicate = adjustPartitionPredicate(inputFieldNames, partitionFieldNames, partitionPredicate);
+		Function<List<Map<String, String>>, List<Map<String, String>>> defaultPruner = partitions -> PartitionPruner.prunePartitions(
+			context.getTableConfig(),
+			partitionFieldNames.toArray(new String[0]),
+			partitionFieldTypes.toArray(new LogicalType[0]),
+			partitions,
+			finalPartitionPredicate);
+
+		// get partitions from table/catalog and prune
+		Optional<Catalog> catalogOptional = context.getCatalogManager().getCatalog(tableSourceTable.tableIdentifier().getCatalogName());
+		List<Map<String, String>> remainingPartitions = null;
+		Optional<List<Map<String, String>>> optionalPartitions;
+		// fields to read partitions from catalog and build new statistic
+		DynamicTableSource dynamicTableSource = tableSourceTable.tableSource().copy();
+		ObjectIdentifier identifier = tableSourceTable.tableIdentifier();
+		ObjectPath tablePath = identifier.toObjectPath();
+		try {
+			optionalPartitions = ((SupportsPartitionPushDown) dynamicTableSource).listPartitions();
+			if (optionalPartitions.isPresent() && !optionalPartitions.get().isEmpty()) {
+				remainingPartitions = defaultPruner.apply(optionalPartitions.get());
+			}
+		} catch (UnsupportedOperationException e) {
+			// check catalog whether is available
+			// we will read partitions from catalog if table doesn't support listPartitions.
+			if (!catalogOptional.isPresent()){
+				throw new TableException(
+					String.format("Table %s must from a catalog, but %s is not a catalog",
+						identifier.asSummaryString(), identifier.getCatalogName()), e);
+			}
+			try {
+				optionalPartitions = readPartitionFromCatalogAndPrune(
+					context,
+					catalogOptional.get(),
+					identifier,
+					inputFieldNames,
+					allPredicates._1(),
+					defaultPruner
+				);
+				if (optionalPartitions.isPresent() && !optionalPartitions.get().isEmpty()) {
+					remainingPartitions = optionalPartitions.get();
+				}
+			} catch (TableNotExistException tableNotExistException) {
+				throw new TableException(String.format("Table %s is not found in catalog.", identifier.asSummaryString()), e);
+			} catch (TableNotPartitionedException tableNotPartitionedException) {
+				remainingPartitions = null;
+			}
+		}
+		if (remainingPartitions != null) {
+			((SupportsPartitionPushDown) dynamicTableSource).applyPartitions(remainingPartitions);
+		}
+
+		// build new statistic
+		TableStats newTableStat = null;
+		Optional<TableStats> partitionStats;
+		if (remainingPartitions != null && catalogOptional.isPresent()) {
+			for (Map<String, String> partition: remainingPartitions) {
+				partitionStats = getPartitionStats(catalogOptional.get(), tablePath, partition);
+				if (!partitionStats.isPresent()) {
+					// clear all information before
+					newTableStat = null;
+					break;
+				} else {
+					newTableStat = newTableStat == null ? partitionStats.get() : newTableStat.merge(partitionStats.get());
+				}
+			}
+		}
+		FlinkStatistic newStatistic = FlinkStatistic.builder()
+			.statistic(tableSourceTable.getStatistic())
+			.tableStats(newTableStat)
+			.build();
+
+		String extraDigest = remainingPartitions == null ? "partitions=[]" :
+			("partitions=[" +
+				String.join(", ", remainingPartitions
+					.stream()
+					.map(Object::toString)
+					.toArray(String[]::new)) +
+				"]");
+		TableSourceTable newTableSourceTable = tableSourceTable.copy(dynamicTableSource, newStatistic, new String[]{extraDigest});
+
+		LogicalTableScan newScan = LogicalTableScan.create(scan.getCluster(), newTableSourceTable, scan.getHints());
+
+		RexNode nonPartitionPredicate = RexUtil.composeConjunction(rexBuilder, JavaConversions.seqAsJavaList(allPredicates._2()));
+		if (nonPartitionPredicate.isAlwaysTrue()) {
+			call.transformTo(newScan);
+		} else {
+			Filter newFilter = filter.copy(filter.getTraitSet(), newScan, nonPartitionPredicate);
+			call.transformTo(newFilter);
+		}
+	}
+
+	/**
+	 * adjust the partition field reference index to evaluate the partition values.
+	 * e.g. the original input fields is: a, b, c, p, and p is partition field. the partition values
+	 * are: List(Map("p"->"1"), Map("p" -> "2"), Map("p" -> "3")). If the original partition
+	 * predicate is $3 > 1. after adjusting, the new predicate is ($0 > 1).
+	 * and use ($0 > 1) to evaluate partition values (row(1), row(2), row(3)).
+	 */
+	private RexNode adjustPartitionPredicate(List<String> inputFieldNames, List<String> partitionFieldNames, RexNode partitionPredicate) {
+		return partitionPredicate.accept(new RexShuttle(){
+			@Override
+			public RexNode visitInputRef(RexInputRef inputRef) {
+				int index = inputRef.getIndex();
+				String fieldName = inputFieldNames.get(index);
+				int newIndex = partitionFieldNames.indexOf(fieldName);
+				if (newIndex < 0) {
+					throw new TableException(String.format("Field name '%s' isn't found in partitioned columns." +
+						" Validator should have checked that.", fieldName));
+				}
+				if (newIndex == index){
+					return inputRef;
+				} else {
+					return new RexInputRef(newIndex, inputRef.getType());
+				}
+			}
+		});
+	}
+
+	private Optional<List<Map<String, String>>> readPartitionFromCatalogAndPrune(
+			FlinkContext context,
+			Catalog catalog,
+			ObjectIdentifier tableIdentifier,
+			List<String> allFieldNames,
+			Seq<RexNode> partitionPredicate,
+			Function<List<Map<String, String>>, List<Map<String, String>>> pruner) throws TableNotExistException, TableNotPartitionedException{
+		RexNodeToExpressionConverter converter = new RexNodeToExpressionConverter(
+			allFieldNames.toArray(new String[0]),
+			context.getFunctionCatalog(),
+			context.getCatalogManager(),
+			TimeZone.getTimeZone(context.getTableConfig().getLocalTimeZone()));
+		ArrayList<Expression> partitionFilters = new ArrayList<>();
+		Option<ResolvedExpression> subExpr;
+		for (RexNode node: JavaConversions.seqAsJavaList(partitionPredicate)) {
+			subExpr = node.accept(converter);
+			if (!subExpr.isEmpty()) {
+				partitionFilters.add(subExpr.get());
+			}
+		}
+		ObjectPath tablePath = tableIdentifier.toObjectPath();
+		if (partitionFilters.size() > 0) {
+			try {
+				List<Map<String, String>> remainingPartitions = catalog.listPartitionsByFilter(tablePath, partitionFilters)
+					.stream()
+					.map(CatalogPartitionSpec::getPartitionSpec)
+					.collect(Collectors.toList());
+				return Optional.of(remainingPartitions);
+			} catch (UnsupportedOperationException e) {
+				return readPartitionFromCatalogWithoutFilterAndPrune(catalog, tablePath, pruner);
+			}
+		} else {
+			return readPartitionFromCatalogWithoutFilterAndPrune(catalog, tablePath, pruner);
+		}
+	}
+
+	private Optional<List<Map<String, String>>> readPartitionFromCatalogWithoutFilterAndPrune(
+			Catalog catalog,
+			ObjectPath tablePath,
+			Function<List<Map<String, String>>, List<Map<String, String>>> pruner) throws TableNotExistException, TableNotPartitionedException, CatalogException {
+		List<Map<String, String>> remainingPartitions;
+		List<Map<String, String>> partitions = catalog.listPartitions(tablePath)
+			.stream()
+			.map(CatalogPartitionSpec::getPartitionSpec)
+			.collect(Collectors.toList());
+		// prune partitions
+		if (partitions.size() > 0) {
+			remainingPartitions = pruner.apply(partitions);
+			return Optional.of(remainingPartitions);
+		} else {
+			return Optional.empty();
+		}
+	}
+
+	private Optional<TableStats> getPartitionStats(Catalog catalog, ObjectPath tablePath, Map<String, String> partition) {
+		try {
+			CatalogPartitionSpec spec = new CatalogPartitionSpec(partition);
+			CatalogTableStatistics partitionStat = catalog.getPartitionStatistics(tablePath, spec);
+			CatalogColumnStatistics	partitionColStat = catalog.getPartitionColumnStatistics(tablePath, spec);
+			TableStats	stats = CatalogTableStatisticsConverter.convertToTableStats(partitionStat, partitionColStat);

Review comment:
       ditto

##########
File path: flink-table/flink-table-planner-blink/src/test/java/org/apache/flink/table/planner/factories/TestValuesTableFactory.java
##########
@@ -328,7 +357,59 @@ public DynamicTableSink createDynamicTableSink(Context context) {
 			RUNTIME_SINK,
 			SINK_EXPECTED_MESSAGES_NUM,
 			NESTED_PROJECTION_SUPPORTED,
-			FILTERABLE_FIELDS));
+			FILTERABLE_FIELDS,
+			USE_PARTITION_PUSH_DOWN,
+			PARTITION_LIST));
+	}
+
+	private List<Map<String, String>> parsePartitionList(String partitionString) {
+		return Arrays.stream(partitionString.split(";")).map(
+			partition -> {
+				Map<String, String> spec = new HashMap<>();
+				Arrays.stream(partition.split(",")).forEach(pair -> {
+					String[] split = pair.split(":");
+					spec.put(split[0].trim(), split[1].trim());
+				});
+				return spec;
+			}
+		).collect(Collectors.toList());
+	}
+
+	private Map<Map<String, String>, Collection<Row>> mapRowsToPartitions(
+			TableSchema schema,
+			Collection<Row> rows,
+			List<Map<String, String>> partitions) {
+		if (!rows.isEmpty() && partitions.isEmpty()) {
+			throw new IllegalArgumentException(
+				"Please add partition list if use partition push down. Currently TestValuesTableSource doesn't support create partition list automatically.");
+		}
+		Map<Map<String, String>, Collection<Row>> map = new HashMap<>();
+		for (Map<String, String> partition: partitions) {
+			map.put(partition, new ArrayList<>());
+		}
+		String[] fieldnames = schema.getFieldNames();
+		boolean match = true;
+		for (Row row: rows) {
+			for (Map<String, String> partition: partitions) {
+				match = true;
+				for (Map.Entry<?, ?> entry: partition.entrySet()) {

Review comment:
       `Map.Entry<?, ?>` -> `Map.Entry<String, String>`

##########
File path: flink-table/flink-table-planner-blink/src/test/java/org/apache/flink/table/planner/plan/rules/logical/PushPartitionIntoTableSourceScanRuleTest.java
##########
@@ -0,0 +1,85 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.plan.rules.logical;
+
+import org.apache.flink.table.planner.calcite.CalciteConfig;
+import org.apache.flink.table.planner.plan.optimize.program.BatchOptimizeContext;
+import org.apache.flink.table.planner.plan.optimize.program.FlinkBatchProgram;
+import org.apache.flink.table.planner.plan.optimize.program.FlinkHepRuleSetProgramBuilder;
+import org.apache.flink.table.planner.plan.optimize.program.HEP_RULES_EXECUTION_TYPE;
+import org.apache.flink.table.planner.utils.TableConfigUtils;
+
+import org.apache.calcite.plan.hep.HepMatchOrder;
+import org.apache.calcite.rel.rules.FilterProjectTransposeRule;
+import org.apache.calcite.tools.RuleSets;
+
+/**
+ * Test for {@link PushPartitionIntoTableSourceScanRule}.
+ */
+public class PushPartitionIntoTableSourceScanRuleTest extends PushPartitionIntoLegacyTableSourceScanRuleTest{

Review comment:
       It seems much of the logic in `PushPartitionIntoTableSourceScanRule` is not covered, e.g. listing partitions by filter, listing partitions without a filter, etc. A rough sketch of one possible additional case follows.
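   A possible plan test, only as a sketch: the `util()` harness call and the `'partition-list'` option key are assumptions here, not names verified against this PR; the partition spec format follows `parsePartitionList`.

   ```java
   @Test
   public void testPartitionPruningViaSourceListPartitions() {
       String ddl =
           "CREATE TABLE PartitionedTable (a INT, name STRING, part INT) PARTITIONED BY (part) WITH (\n"
               + " 'connector' = 'values',\n"
               + " 'bounded' = 'true',\n"
               // "key:value" pairs separated by ';', as parsed by parsePartitionList
               + " 'partition-list' = 'part:1;part:2;part:3'\n"
               + ")";
       util().tableEnv().executeSql(ddl);
       // expect the plan to keep only partitions part=2 and part=3
       util().verifyPlan("SELECT * FROM PartitionedTable WHERE part > 1 AND a > 10");
   }
   ```

   Similar cases could target the catalog paths (`listPartitionsByFilter` and `listPartitions` without a filter) by registering a partitioned catalog table whose source throws `UnsupportedOperationException` from `listPartitions()`.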

##########
File path: flink-table/flink-table-planner-blink/src/test/java/org/apache/flink/table/planner/factories/TestValuesTableFactory.java
##########
@@ -459,7 +547,15 @@ public LookupRuntimeProvider getLookupRuntimeProvider(LookupContext context) {
 				.mapToInt(k -> k[0])
 				.toArray();
 			Map<Row, List<Row>> mapping = new HashMap<>();
-			data.forEach(record -> {
+			Collection<Row> rows;
+			if (allPartitions.equals(Collections.EMPTY_LIST)) {
+				rows = data.getOrDefault(Collections.EMPTY_MAP, Collections.EMPTY_LIST);
+			} else {
+				rows = new ArrayList<>();
+				allPartitions.stream()

Review comment:
       nit: `.stream()` is unnecessary
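   A minimal sketch of the suggestion (context simplified; this assumes the loop only collects the rows of each partition):

   ```java
   Collection<Row> rows = new ArrayList<>();
   allPartitions.forEach(partition ->
       rows.addAll(data.getOrDefault(partition, Collections.emptyList())));
   ```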

##########
File path: flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/plan/rules/logical/PushPartitionIntoTableSourceScanRule.java
##########
@@ -0,0 +1,325 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.plan.rules.logical;
+
+import org.apache.flink.table.api.TableException;
+import org.apache.flink.table.catalog.Catalog;
+import org.apache.flink.table.catalog.CatalogPartitionSpec;
+import org.apache.flink.table.catalog.CatalogTable;
+import org.apache.flink.table.catalog.ObjectIdentifier;
+import org.apache.flink.table.catalog.ObjectPath;
+import org.apache.flink.table.catalog.exceptions.CatalogException;
+import org.apache.flink.table.catalog.exceptions.PartitionNotExistException;
+import org.apache.flink.table.catalog.exceptions.TableNotExistException;
+import org.apache.flink.table.catalog.exceptions.TableNotPartitionedException;
+import org.apache.flink.table.catalog.stats.CatalogColumnStatistics;
+import org.apache.flink.table.catalog.stats.CatalogTableStatistics;
+import org.apache.flink.table.connector.source.DynamicTableSource;
+import org.apache.flink.table.connector.source.abilities.SupportsPartitionPushDown;
+import org.apache.flink.table.expressions.Expression;
+import org.apache.flink.table.expressions.ResolvedExpression;
+import org.apache.flink.table.plan.stats.TableStats;
+import org.apache.flink.table.planner.calcite.FlinkContext;
+import org.apache.flink.table.planner.calcite.FlinkTypeFactory;
+import org.apache.flink.table.planner.plan.schema.TableSourceTable;
+import org.apache.flink.table.planner.plan.stats.FlinkStatistic;
+import org.apache.flink.table.planner.plan.utils.FlinkRelOptUtil;
+import org.apache.flink.table.planner.plan.utils.PartitionPruner;
+import org.apache.flink.table.planner.plan.utils.RexNodeExtractor;
+import org.apache.flink.table.planner.plan.utils.RexNodeToExpressionConverter;
+import org.apache.flink.table.planner.utils.CatalogTableStatisticsConverter;
+import org.apache.flink.table.types.logical.LogicalType;
+
+import org.apache.calcite.plan.RelOptRule;
+import org.apache.calcite.plan.RelOptRuleCall;
+import org.apache.calcite.rel.core.Filter;
+import org.apache.calcite.rel.logical.LogicalTableScan;
+import org.apache.calcite.rel.type.RelDataType;
+import org.apache.calcite.rex.RexBuilder;
+import org.apache.calcite.rex.RexInputRef;
+import org.apache.calcite.rex.RexNode;
+import org.apache.calcite.rex.RexShuttle;
+import org.apache.calcite.rex.RexUtil;
+import org.apache.calcite.tools.RelBuilder;
+
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+import java.util.Map;
+import java.util.Optional;
+import java.util.TimeZone;
+import java.util.function.Function;
+import java.util.stream.Collectors;
+
+import scala.Option;
+import scala.Tuple2;
+import scala.collection.JavaConversions;
+import scala.collection.Seq;
+
+/**
+ * Planner rule that tries to push partition evaluated by filter condition into a {@link LogicalTableScan}.
+*/
+public class PushPartitionIntoTableSourceScanRule extends RelOptRule {
+	public static final PushPartitionIntoTableSourceScanRule INSTANCE = new PushPartitionIntoTableSourceScanRule();
+
+	public PushPartitionIntoTableSourceScanRule(){
+		super(operand(Filter.class,
+				operand(LogicalTableScan.class, none())),
+			"PushPartitionTableSourceScanRule");
+	}
+
+	@Override
+	public boolean matches(RelOptRuleCall call) {
+		Filter filter = call.rel(0);
+		if (filter.getCondition() == null) {
+			return false;
+		}
+		TableSourceTable tableSourceTable = call.rel(1).getTable().unwrap(TableSourceTable.class);
+		if (tableSourceTable == null){
+			return false;
+		}
+		DynamicTableSource dynamicTableSource = tableSourceTable.tableSource();
+		if (!(dynamicTableSource instanceof SupportsPartitionPushDown)) {
+			return false;
+		}
+		CatalogTable catalogTable = tableSourceTable.catalogTable();
+		if (!catalogTable.isPartitioned() || catalogTable.getPartitionKeys().isEmpty()) {
+			return false;
+		}
+		return Arrays.stream(tableSourceTable.extraDigests()).noneMatch(digest -> digest.startsWith("partitions=["));
+	}
+
+	@Override
+	public void onMatch(RelOptRuleCall call) {
+		Filter filter = call.rel(0);
+		LogicalTableScan scan = call.rel(1);
+		FlinkContext context = call.getPlanner().getContext().unwrap(FlinkContext.class);
+		TableSourceTable tableSourceTable = scan.getTable().unwrap(TableSourceTable.class);
+
+		// build pruner
+		RelDataType inputFieldTypes = filter.getInput().getRowType();
+		List<String> inputFieldNames = inputFieldTypes.getFieldNames();
+		List<String> partitionFieldNames = tableSourceTable.catalogTable().getPartitionKeys();
+		RelBuilder relBuilder = call.builder();
+		RexBuilder rexBuilder = relBuilder.getRexBuilder();
+		Tuple2<Seq<RexNode>, Seq<RexNode>> allPredicates = RexNodeExtractor.extractPartitionPredicateList(
+			filter.getCondition(),
+			FlinkRelOptUtil.getMaxCnfNodeCount(scan),
+			inputFieldNames.toArray(new String[0]),
+			rexBuilder,
+			partitionFieldNames.toArray(new String[0])
+			);
+		RexNode partitionPredicate = RexUtil.composeConjunction(rexBuilder, JavaConversions.seqAsJavaList(allPredicates._1));
+
+		if (partitionPredicate.isAlwaysTrue()){
+			return;
+		}
+
+		List<LogicalType> partitionFieldTypes = partitionFieldNames.stream().map(name -> {
+			int index  = inputFieldNames.indexOf(name);
+			if (index < 0) {
+				throw new TableException(String.format("Partitioned key '%s' isn't found in input columns. " +
+					"Validator should have checked that.", name));
+			}			return inputFieldTypes.getFieldList().get(index).getType(); })
+			.map(FlinkTypeFactory::toLogicalType).collect(Collectors.toList());
+
+		RexNode finalPartitionPredicate = adjustPartitionPredicate(inputFieldNames, partitionFieldNames, partitionPredicate);
+		Function<List<Map<String, String>>, List<Map<String, String>>> defaultPruner = partitions -> PartitionPruner.prunePartitions(
+			context.getTableConfig(),
+			partitionFieldNames.toArray(new String[0]),
+			partitionFieldTypes.toArray(new LogicalType[0]),
+			partitions,
+			finalPartitionPredicate);
+
+		// get partitions from table/catalog and prune
+		Optional<Catalog> catalogOptional = context.getCatalogManager().getCatalog(tableSourceTable.tableIdentifier().getCatalogName());
+		List<Map<String, String>> remainingPartitions = null;
+		Optional<List<Map<String, String>>> optionalPartitions;
+		// fields to read partitions from catalog and build new statistic
+		DynamicTableSource dynamicTableSource = tableSourceTable.tableSource().copy();
+		ObjectIdentifier identifier = tableSourceTable.tableIdentifier();
+		ObjectPath tablePath = identifier.toObjectPath();
+		try {
+			optionalPartitions = ((SupportsPartitionPushDown) dynamicTableSource).listPartitions();
+			if (optionalPartitions.isPresent() && !optionalPartitions.get().isEmpty()) {
+				remainingPartitions = defaultPruner.apply(optionalPartitions.get());
+			}
+		} catch (UnsupportedOperationException e) {
+			// check catalog whether is available
+			// we will read partitions from catalog if table doesn't support listPartitions.
+			if (!catalogOptional.isPresent()){
+				throw new TableException(
+					String.format("Table %s must from a catalog, but %s is not a catalog",
+						identifier.asSummaryString(), identifier.getCatalogName()), e);
+			}
+			try {
+				optionalPartitions = readPartitionFromCatalogAndPrune(
+					context,
+					catalogOptional.get(),
+					identifier,
+					inputFieldNames,
+					allPredicates._1(),
+					defaultPruner
+				);
+				if (optionalPartitions.isPresent() && !optionalPartitions.get().isEmpty()) {
+					remainingPartitions = optionalPartitions.get();
+				}
+			} catch (TableNotExistException tableNotExistException) {
+				throw new TableException(String.format("Table %s is not found in catalog.", identifier.asSummaryString()), e);
+			} catch (TableNotPartitionedException tableNotPartitionedException) {
+				remainingPartitions = null;

Review comment:
       we should not get a `TableNotPartitionedException` here, because we have already checked whether the table is partitioned
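   One possible shape for that branch, as a fragment sketch rather than the required fix: since `matches()` already verified the table is partitioned, the exception can be surfaced instead of silently resetting the partitions.

   ```java
   } catch (TableNotPartitionedException tableNotPartitionedException) {
       // matches() already verified that the catalog table is partitioned,
       // so reaching this point indicates an inconsistent catalog state.
       throw new TableException(
           String.format("Table %s is not partitioned in the catalog, although the planner expected a partitioned table.",
               identifier.asSummaryString()),
           tableNotPartitionedException);
   }
   ```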

##########
File path: flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/plan/rules/logical/PushPartitionIntoTableSourceScanRule.java
##########
@@ -0,0 +1,325 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.plan.rules.logical;
+
+import org.apache.flink.table.api.TableException;
+import org.apache.flink.table.catalog.Catalog;
+import org.apache.flink.table.catalog.CatalogPartitionSpec;
+import org.apache.flink.table.catalog.CatalogTable;
+import org.apache.flink.table.catalog.ObjectIdentifier;
+import org.apache.flink.table.catalog.ObjectPath;
+import org.apache.flink.table.catalog.exceptions.CatalogException;
+import org.apache.flink.table.catalog.exceptions.PartitionNotExistException;
+import org.apache.flink.table.catalog.exceptions.TableNotExistException;
+import org.apache.flink.table.catalog.exceptions.TableNotPartitionedException;
+import org.apache.flink.table.catalog.stats.CatalogColumnStatistics;
+import org.apache.flink.table.catalog.stats.CatalogTableStatistics;
+import org.apache.flink.table.connector.source.DynamicTableSource;
+import org.apache.flink.table.connector.source.abilities.SupportsPartitionPushDown;
+import org.apache.flink.table.expressions.Expression;
+import org.apache.flink.table.expressions.ResolvedExpression;
+import org.apache.flink.table.plan.stats.TableStats;
+import org.apache.flink.table.planner.calcite.FlinkContext;
+import org.apache.flink.table.planner.calcite.FlinkTypeFactory;
+import org.apache.flink.table.planner.plan.schema.TableSourceTable;
+import org.apache.flink.table.planner.plan.stats.FlinkStatistic;
+import org.apache.flink.table.planner.plan.utils.FlinkRelOptUtil;
+import org.apache.flink.table.planner.plan.utils.PartitionPruner;
+import org.apache.flink.table.planner.plan.utils.RexNodeExtractor;
+import org.apache.flink.table.planner.plan.utils.RexNodeToExpressionConverter;
+import org.apache.flink.table.planner.utils.CatalogTableStatisticsConverter;
+import org.apache.flink.table.types.logical.LogicalType;
+
+import org.apache.calcite.plan.RelOptRule;
+import org.apache.calcite.plan.RelOptRuleCall;
+import org.apache.calcite.rel.core.Filter;
+import org.apache.calcite.rel.logical.LogicalTableScan;
+import org.apache.calcite.rel.type.RelDataType;
+import org.apache.calcite.rex.RexBuilder;
+import org.apache.calcite.rex.RexInputRef;
+import org.apache.calcite.rex.RexNode;
+import org.apache.calcite.rex.RexShuttle;
+import org.apache.calcite.rex.RexUtil;
+import org.apache.calcite.tools.RelBuilder;
+
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+import java.util.Map;
+import java.util.Optional;
+import java.util.TimeZone;
+import java.util.function.Function;
+import java.util.stream.Collectors;
+
+import scala.Option;
+import scala.Tuple2;
+import scala.collection.JavaConversions;
+import scala.collection.Seq;
+
+/**
+ * Planner rule that tries to push partition evaluated by filter condition into a {@link LogicalTableScan}.
+*/
+public class PushPartitionIntoTableSourceScanRule extends RelOptRule {
+	public static final PushPartitionIntoTableSourceScanRule INSTANCE = new PushPartitionIntoTableSourceScanRule();
+
+	public PushPartitionIntoTableSourceScanRule(){
+		super(operand(Filter.class,
+				operand(LogicalTableScan.class, none())),
+			"PushPartitionTableSourceScanRule");
+	}
+
+	@Override
+	public boolean matches(RelOptRuleCall call) {
+		Filter filter = call.rel(0);
+		if (filter.getCondition() == null) {
+			return false;
+		}
+		TableSourceTable tableSourceTable = call.rel(1).getTable().unwrap(TableSourceTable.class);
+		if (tableSourceTable == null){
+			return false;
+		}
+		DynamicTableSource dynamicTableSource = tableSourceTable.tableSource();
+		if (!(dynamicTableSource instanceof SupportsPartitionPushDown)) {
+			return false;
+		}
+		CatalogTable catalogTable = tableSourceTable.catalogTable();
+		if (!catalogTable.isPartitioned() || catalogTable.getPartitionKeys().isEmpty()) {
+			return false;
+		}
+		return Arrays.stream(tableSourceTable.extraDigests()).noneMatch(digest -> digest.startsWith("partitions=["));
+	}
+
+	@Override
+	public void onMatch(RelOptRuleCall call) {
+		Filter filter = call.rel(0);
+		LogicalTableScan scan = call.rel(1);
+		FlinkContext context = call.getPlanner().getContext().unwrap(FlinkContext.class);
+		TableSourceTable tableSourceTable = scan.getTable().unwrap(TableSourceTable.class);
+
+		// build pruner
+		RelDataType inputFieldTypes = filter.getInput().getRowType();
+		List<String> inputFieldNames = inputFieldTypes.getFieldNames();
+		List<String> partitionFieldNames = tableSourceTable.catalogTable().getPartitionKeys();
+		RelBuilder relBuilder = call.builder();
+		RexBuilder rexBuilder = relBuilder.getRexBuilder();
+		Tuple2<Seq<RexNode>, Seq<RexNode>> allPredicates = RexNodeExtractor.extractPartitionPredicateList(
+			filter.getCondition(),
+			FlinkRelOptUtil.getMaxCnfNodeCount(scan),
+			inputFieldNames.toArray(new String[0]),
+			rexBuilder,
+			partitionFieldNames.toArray(new String[0])
+			);
+		RexNode partitionPredicate = RexUtil.composeConjunction(rexBuilder, JavaConversions.seqAsJavaList(allPredicates._1));
+
+		if (partitionPredicate.isAlwaysTrue()){
+			return;
+		}
+
+		List<LogicalType> partitionFieldTypes = partitionFieldNames.stream().map(name -> {
+			int index  = inputFieldNames.indexOf(name);
+			if (index < 0) {
+				throw new TableException(String.format("Partitioned key '%s' isn't found in input columns. " +
+					"Validator should have checked that.", name));
+			}			return inputFieldTypes.getFieldList().get(index).getType(); })
+			.map(FlinkTypeFactory::toLogicalType).collect(Collectors.toList());
+
+		RexNode finalPartitionPredicate = adjustPartitionPredicate(inputFieldNames, partitionFieldNames, partitionPredicate);
+		Function<List<Map<String, String>>, List<Map<String, String>>> defaultPruner = partitions -> PartitionPruner.prunePartitions(
+			context.getTableConfig(),
+			partitionFieldNames.toArray(new String[0]),
+			partitionFieldTypes.toArray(new LogicalType[0]),
+			partitions,
+			finalPartitionPredicate);
+
+		// get partitions from table/catalog and prune
+		Optional<Catalog> catalogOptional = context.getCatalogManager().getCatalog(tableSourceTable.tableIdentifier().getCatalogName());
+		List<Map<String, String>> remainingPartitions = null;
+		Optional<List<Map<String, String>>> optionalPartitions;
+		// fields to read partitions from catalog and build new statistic
+		DynamicTableSource dynamicTableSource = tableSourceTable.tableSource().copy();
+		ObjectIdentifier identifier = tableSourceTable.tableIdentifier();
+		ObjectPath tablePath = identifier.toObjectPath();
+		try {
+			optionalPartitions = ((SupportsPartitionPushDown) dynamicTableSource).listPartitions();
+			if (optionalPartitions.isPresent() && !optionalPartitions.get().isEmpty()) {
+				remainingPartitions = defaultPruner.apply(optionalPartitions.get());
+			}
+		} catch (UnsupportedOperationException e) {
+			// check catalog whether is available
+			// we will read partitions from catalog if table doesn't support listPartitions.
+			if (!catalogOptional.isPresent()){
+				throw new TableException(
+					String.format("Table %s must from a catalog, but %s is not a catalog",
+						identifier.asSummaryString(), identifier.getCatalogName()), e);
+			}
+			try {
+				optionalPartitions = readPartitionFromCatalogAndPrune(
+					context,
+					catalogOptional.get(),
+					identifier,
+					inputFieldNames,
+					allPredicates._1(),
+					defaultPruner
+				);
+				if (optionalPartitions.isPresent() && !optionalPartitions.get().isEmpty()) {
+					remainingPartitions = optionalPartitions.get();
+				}
+			} catch (TableNotExistException tableNotExistException) {
+				throw new TableException(String.format("Table %s is not found in catalog.", identifier.asSummaryString()), e);
+			} catch (TableNotPartitionedException tableNotPartitionedException) {
+				remainingPartitions = null;
+			}
+		}
+		if (remainingPartitions != null) {
+			((SupportsPartitionPushDown) dynamicTableSource).applyPartitions(remainingPartitions);

Review comment:
       extract this code into methods, then the logic of `onMatch` becomes cleaner, consisting of 4 steps (see the sketch after this list):
   1. extract the partition predicate
   2. prune the partitions and return the remaining ones
   3. re-build the statistic
   4. build the new table scan and transform the result
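   A rough outline of how `onMatch` could read after the extraction (the helper method names are placeholders, not part of this PR):

   ```java
   @Override
   public void onMatch(RelOptRuleCall call) {
       Filter filter = call.rel(0);
       LogicalTableScan scan = call.rel(1);
       TableSourceTable tableSourceTable = scan.getTable().unwrap(TableSourceTable.class);

       // 1. extract the partition predicate
       RexNode partitionPredicate = extractPartitionPredicate(call, filter, scan, tableSourceTable);
       if (partitionPredicate.isAlwaysTrue()) {
           return;
       }
       // 2. prune partitions (source listPartitions() or catalog fallback) and apply them to the copied source
       List<Map<String, String>> remainingPartitions =
           pruneAndApplyPartitions(call, tableSourceTable, partitionPredicate);
       // 3. re-build the statistic from the remaining partitions
       FlinkStatistic newStatistic = rebuildStatistic(tableSourceTable, remainingPartitions);
       // 4. build the new scan plus a filter for the non-partition predicates, then transform
       transformToNewScan(call, filter, scan, tableSourceTable, remainingPartitions, newStatistic);
   }
   ```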
   

##########
File path: flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/plan/rules/logical/PushPartitionIntoTableSourceScanRule.java
##########
@@ -0,0 +1,325 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.plan.rules.logical;
+
+import org.apache.flink.table.api.TableException;
+import org.apache.flink.table.catalog.Catalog;
+import org.apache.flink.table.catalog.CatalogPartitionSpec;
+import org.apache.flink.table.catalog.CatalogTable;
+import org.apache.flink.table.catalog.ObjectIdentifier;
+import org.apache.flink.table.catalog.ObjectPath;
+import org.apache.flink.table.catalog.exceptions.CatalogException;
+import org.apache.flink.table.catalog.exceptions.PartitionNotExistException;
+import org.apache.flink.table.catalog.exceptions.TableNotExistException;
+import org.apache.flink.table.catalog.exceptions.TableNotPartitionedException;
+import org.apache.flink.table.catalog.stats.CatalogColumnStatistics;
+import org.apache.flink.table.catalog.stats.CatalogTableStatistics;
+import org.apache.flink.table.connector.source.DynamicTableSource;
+import org.apache.flink.table.connector.source.abilities.SupportsPartitionPushDown;
+import org.apache.flink.table.expressions.Expression;
+import org.apache.flink.table.expressions.ResolvedExpression;
+import org.apache.flink.table.plan.stats.TableStats;
+import org.apache.flink.table.planner.calcite.FlinkContext;
+import org.apache.flink.table.planner.calcite.FlinkTypeFactory;
+import org.apache.flink.table.planner.plan.schema.TableSourceTable;
+import org.apache.flink.table.planner.plan.stats.FlinkStatistic;
+import org.apache.flink.table.planner.plan.utils.FlinkRelOptUtil;
+import org.apache.flink.table.planner.plan.utils.PartitionPruner;
+import org.apache.flink.table.planner.plan.utils.RexNodeExtractor;
+import org.apache.flink.table.planner.plan.utils.RexNodeToExpressionConverter;
+import org.apache.flink.table.planner.utils.CatalogTableStatisticsConverter;
+import org.apache.flink.table.types.logical.LogicalType;
+
+import org.apache.calcite.plan.RelOptRule;
+import org.apache.calcite.plan.RelOptRuleCall;
+import org.apache.calcite.rel.core.Filter;
+import org.apache.calcite.rel.logical.LogicalTableScan;
+import org.apache.calcite.rel.type.RelDataType;
+import org.apache.calcite.rex.RexBuilder;
+import org.apache.calcite.rex.RexInputRef;
+import org.apache.calcite.rex.RexNode;
+import org.apache.calcite.rex.RexShuttle;
+import org.apache.calcite.rex.RexUtil;
+import org.apache.calcite.tools.RelBuilder;
+
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+import java.util.Map;
+import java.util.Optional;
+import java.util.TimeZone;
+import java.util.function.Function;
+import java.util.stream.Collectors;
+
+import scala.Option;
+import scala.Tuple2;
+import scala.collection.JavaConversions;
+import scala.collection.Seq;
+
+/**
+ * Planner rule that tries to push partition evaluated by filter condition into a {@link LogicalTableScan}.
+*/
+public class PushPartitionIntoTableSourceScanRule extends RelOptRule {
+	public static final PushPartitionIntoTableSourceScanRule INSTANCE = new PushPartitionIntoTableSourceScanRule();
+
+	public PushPartitionIntoTableSourceScanRule(){
+		super(operand(Filter.class,
+				operand(LogicalTableScan.class, none())),
+			"PushPartitionTableSourceScanRule");
+	}
+
+	@Override
+	public boolean matches(RelOptRuleCall call) {
+		Filter filter = call.rel(0);
+		if (filter.getCondition() == null) {
+			return false;
+		}
+		TableSourceTable tableSourceTable = call.rel(1).getTable().unwrap(TableSourceTable.class);
+		if (tableSourceTable == null){
+			return false;
+		}
+		DynamicTableSource dynamicTableSource = tableSourceTable.tableSource();
+		if (!(dynamicTableSource instanceof SupportsPartitionPushDown)) {
+			return false;
+		}
+		CatalogTable catalogTable = tableSourceTable.catalogTable();
+		if (!catalogTable.isPartitioned() || catalogTable.getPartitionKeys().isEmpty()) {
+			return false;
+		}
+		return Arrays.stream(tableSourceTable.extraDigests()).noneMatch(digest -> digest.startsWith("partitions=["));
+	}
+
+	@Override
+	public void onMatch(RelOptRuleCall call) {
+		Filter filter = call.rel(0);
+		LogicalTableScan scan = call.rel(1);
+		FlinkContext context = call.getPlanner().getContext().unwrap(FlinkContext.class);
+		TableSourceTable tableSourceTable = scan.getTable().unwrap(TableSourceTable.class);
+
+		// build pruner
+		RelDataType inputFieldTypes = filter.getInput().getRowType();
+		List<String> inputFieldNames = inputFieldTypes.getFieldNames();
+		List<String> partitionFieldNames = tableSourceTable.catalogTable().getPartitionKeys();
+		RelBuilder relBuilder = call.builder();
+		RexBuilder rexBuilder = relBuilder.getRexBuilder();
+		Tuple2<Seq<RexNode>, Seq<RexNode>> allPredicates = RexNodeExtractor.extractPartitionPredicateList(
+			filter.getCondition(),
+			FlinkRelOptUtil.getMaxCnfNodeCount(scan),
+			inputFieldNames.toArray(new String[0]),
+			rexBuilder,
+			partitionFieldNames.toArray(new String[0])
+			);
+		RexNode partitionPredicate = RexUtil.composeConjunction(rexBuilder, JavaConversions.seqAsJavaList(allPredicates._1));
+
+		if (partitionPredicate.isAlwaysTrue()){
+			return;
+		}
+
+		List<LogicalType> partitionFieldTypes = partitionFieldNames.stream().map(name -> {
+			int index  = inputFieldNames.indexOf(name);
+			if (index < 0) {
+				throw new TableException(String.format("Partitioned key '%s' isn't found in input columns. " +
+					"Validator should have checked that.", name));
+			}			return inputFieldTypes.getFieldList().get(index).getType(); })
+			.map(FlinkTypeFactory::toLogicalType).collect(Collectors.toList());
+
+		RexNode finalPartitionPredicate = adjustPartitionPredicate(inputFieldNames, partitionFieldNames, partitionPredicate);
+		Function<List<Map<String, String>>, List<Map<String, String>>> defaultPruner = partitions -> PartitionPruner.prunePartitions(
+			context.getTableConfig(),
+			partitionFieldNames.toArray(new String[0]),
+			partitionFieldTypes.toArray(new LogicalType[0]),
+			partitions,
+			finalPartitionPredicate);
+
+		// get partitions from table/catalog and prune
+		Optional<Catalog> catalogOptional = context.getCatalogManager().getCatalog(tableSourceTable.tableIdentifier().getCatalogName());
+		List<Map<String, String>> remainingPartitions = null;
+		Optional<List<Map<String, String>>> optionalPartitions;
+		// fields to read partitions from catalog and build new statistic
+		DynamicTableSource dynamicTableSource = tableSourceTable.tableSource().copy();
+		ObjectIdentifier identifier = tableSourceTable.tableIdentifier();
+		ObjectPath tablePath = identifier.toObjectPath();
+		try {
+			optionalPartitions = ((SupportsPartitionPushDown) dynamicTableSource).listPartitions();
+			if (optionalPartitions.isPresent() && !optionalPartitions.get().isEmpty()) {
+				remainingPartitions = defaultPruner.apply(optionalPartitions.get());
+			}
+		} catch (UnsupportedOperationException e) {
+			// check catalog whether is available
+			// we will read partitions from catalog if table doesn't support listPartitions.
+			if (!catalogOptional.isPresent()){
+				throw new TableException(
+					String.format("Table %s must from a catalog, but %s is not a catalog",
+						identifier.asSummaryString(), identifier.getCatalogName()), e);
+			}
+			try {
+				optionalPartitions = readPartitionFromCatalogAndPrune(
+					context,
+					catalogOptional.get(),
+					identifier,
+					inputFieldNames,
+					allPredicates._1(),
+					defaultPruner
+				);
+				if (optionalPartitions.isPresent() && !optionalPartitions.get().isEmpty()) {
+					remainingPartitions = optionalPartitions.get();
+				}
+			} catch (TableNotExistException tableNotExistException) {
+				throw new TableException(String.format("Table %s is not found in catalog.", identifier.asSummaryString()), e);
+			} catch (TableNotPartitionedException tableNotPartitionedException) {
+				remainingPartitions = null;
+			}
+		}
+		if (remainingPartitions != null) {
+			((SupportsPartitionPushDown) dynamicTableSource).applyPartitions(remainingPartitions);
+		}
+
+		// build new statistic
+		TableStats newTableStat = null;
+		Optional<TableStats> partitionStats;
+		if (remainingPartitions != null && catalogOptional.isPresent()) {
+			for (Map<String, String> partition: remainingPartitions) {
+				partitionStats = getPartitionStats(catalogOptional.get(), tablePath, partition);
+				if (!partitionStats.isPresent()) {
+					// clear all information before
+					newTableStat = null;
+					break;
+				} else {
+					newTableStat = newTableStat == null ? partitionStats.get() : newTableStat.merge(partitionStats.get());
+				}
+			}
+		}
+		FlinkStatistic newStatistic = FlinkStatistic.builder()
+			.statistic(tableSourceTable.getStatistic())
+			.tableStats(newTableStat)
+			.build();
+
+		String extraDigest = remainingPartitions == null ? "partitions=[]" :
+			("partitions=[" +
+				String.join(", ", remainingPartitions
+					.stream()
+					.map(Object::toString)
+					.toArray(String[]::new)) +
+				"]");
+		TableSourceTable newTableSourceTable = tableSourceTable.copy(dynamicTableSource, newStatistic, new String[]{extraDigest});
+
+		LogicalTableScan newScan = LogicalTableScan.create(scan.getCluster(), newTableSourceTable, scan.getHints());
+
+		RexNode nonPartitionPredicate = RexUtil.composeConjunction(rexBuilder, JavaConversions.seqAsJavaList(allPredicates._2()));
+		if (nonPartitionPredicate.isAlwaysTrue()) {
+			call.transformTo(newScan);
+		} else {
+			Filter newFilter = filter.copy(filter.getTraitSet(), newScan, nonPartitionPredicate);
+			call.transformTo(newFilter);
+		}
+	}
+
+	/**
+	 * adjust the partition field reference index to evaluate the partition values.
+	 * e.g. the original input fields is: a, b, c, p, and p is partition field. the partition values
+	 * are: List(Map("p"->"1"), Map("p" -> "2"), Map("p" -> "3")). If the original partition
+	 * predicate is $3 > 1. after adjusting, the new predicate is ($0 > 1).
+	 * and use ($0 > 1) to evaluate partition values (row(1), row(2), row(3)).
+	 */
+	private RexNode adjustPartitionPredicate(List<String> inputFieldNames, List<String> partitionFieldNames, RexNode partitionPredicate) {
+		return partitionPredicate.accept(new RexShuttle(){
+			@Override
+			public RexNode visitInputRef(RexInputRef inputRef) {
+				int index = inputRef.getIndex();
+				String fieldName = inputFieldNames.get(index);
+				int newIndex = partitionFieldNames.indexOf(fieldName);
+				if (newIndex < 0) {
+					throw new TableException(String.format("Field name '%s' isn't found in partitioned columns." +
+						" Validator should have checked that.", fieldName));
+				}
+				if (newIndex == index){
+					return inputRef;
+				} else {
+					return new RexInputRef(newIndex, inputRef.getType());
+				}
+			}
+		});
+	}
+
+	private Optional<List<Map<String, String>>> readPartitionFromCatalogAndPrune(
+			FlinkContext context,
+			Catalog catalog,
+			ObjectIdentifier tableIdentifier,
+			List<String> allFieldNames,
+			Seq<RexNode> partitionPredicate,
+			Function<List<Map<String, String>>, List<Map<String, String>>> pruner) throws TableNotExistException, TableNotPartitionedException{
+		RexNodeToExpressionConverter converter = new RexNodeToExpressionConverter(
+			allFieldNames.toArray(new String[0]),
+			context.getFunctionCatalog(),
+			context.getCatalogManager(),
+			TimeZone.getTimeZone(context.getTableConfig().getLocalTimeZone()));
+		ArrayList<Expression> partitionFilters = new ArrayList<>();
+		Option<ResolvedExpression> subExpr;
+		for (RexNode node: JavaConversions.seqAsJavaList(partitionPredicate)) {
+			subExpr = node.accept(converter);
+			if (!subExpr.isEmpty()) {

Review comment:
       what happens if `subExpr` is empty?
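   One option, shown as a fragment sketch: if any partition predicate cannot be converted, fall back to listing all partitions and applying the default pruner, so that no filter is silently dropped before calling `listPartitionsByFilter`.

   ```java
   Option<ResolvedExpression> subExpr = node.accept(converter);
   if (subExpr.isDefined()) {
       partitionFilters.add(subExpr.get());
   } else {
       // this predicate cannot be pushed to the catalog; be conservative
       return readPartitionFromCatalogWithoutFilterAndPrune(catalog, tablePath, pruner);
   }
   ```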

##########
File path: flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/plan/rules/logical/PushPartitionIntoTableSourceScanRule.java
##########
@@ -0,0 +1,325 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.plan.rules.logical;
+
+import org.apache.flink.table.api.TableException;
+import org.apache.flink.table.catalog.Catalog;
+import org.apache.flink.table.catalog.CatalogPartitionSpec;
+import org.apache.flink.table.catalog.CatalogTable;
+import org.apache.flink.table.catalog.ObjectIdentifier;
+import org.apache.flink.table.catalog.ObjectPath;
+import org.apache.flink.table.catalog.exceptions.CatalogException;
+import org.apache.flink.table.catalog.exceptions.PartitionNotExistException;
+import org.apache.flink.table.catalog.exceptions.TableNotExistException;
+import org.apache.flink.table.catalog.exceptions.TableNotPartitionedException;
+import org.apache.flink.table.catalog.stats.CatalogColumnStatistics;
+import org.apache.flink.table.catalog.stats.CatalogTableStatistics;
+import org.apache.flink.table.connector.source.DynamicTableSource;
+import org.apache.flink.table.connector.source.abilities.SupportsPartitionPushDown;
+import org.apache.flink.table.expressions.Expression;
+import org.apache.flink.table.expressions.ResolvedExpression;
+import org.apache.flink.table.plan.stats.TableStats;
+import org.apache.flink.table.planner.calcite.FlinkContext;
+import org.apache.flink.table.planner.calcite.FlinkTypeFactory;
+import org.apache.flink.table.planner.plan.schema.TableSourceTable;
+import org.apache.flink.table.planner.plan.stats.FlinkStatistic;
+import org.apache.flink.table.planner.plan.utils.FlinkRelOptUtil;
+import org.apache.flink.table.planner.plan.utils.PartitionPruner;
+import org.apache.flink.table.planner.plan.utils.RexNodeExtractor;
+import org.apache.flink.table.planner.plan.utils.RexNodeToExpressionConverter;
+import org.apache.flink.table.planner.utils.CatalogTableStatisticsConverter;
+import org.apache.flink.table.types.logical.LogicalType;
+
+import org.apache.calcite.plan.RelOptRule;
+import org.apache.calcite.plan.RelOptRuleCall;
+import org.apache.calcite.rel.core.Filter;
+import org.apache.calcite.rel.logical.LogicalTableScan;
+import org.apache.calcite.rel.type.RelDataType;
+import org.apache.calcite.rex.RexBuilder;
+import org.apache.calcite.rex.RexInputRef;
+import org.apache.calcite.rex.RexNode;
+import org.apache.calcite.rex.RexShuttle;
+import org.apache.calcite.rex.RexUtil;
+import org.apache.calcite.tools.RelBuilder;
+
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+import java.util.Map;
+import java.util.Optional;
+import java.util.TimeZone;
+import java.util.function.Function;
+import java.util.stream.Collectors;
+
+import scala.Option;
+import scala.Tuple2;
+import scala.collection.JavaConversions;
+import scala.collection.Seq;
+
+/**
+ * Planner rule that tries to push partition evaluated by filter condition into a {@link LogicalTableScan}.
+*/
+public class PushPartitionIntoTableSourceScanRule extends RelOptRule {
+	public static final PushPartitionIntoTableSourceScanRule INSTANCE = new PushPartitionIntoTableSourceScanRule();
+
+	public PushPartitionIntoTableSourceScanRule(){
+		super(operand(Filter.class,
+				operand(LogicalTableScan.class, none())),
+			"PushPartitionTableSourceScanRule");
+	}
+
+	@Override
+	public boolean matches(RelOptRuleCall call) {
+		Filter filter = call.rel(0);
+		if (filter.getCondition() == null) {
+			return false;
+		}
+		TableSourceTable tableSourceTable = call.rel(1).getTable().unwrap(TableSourceTable.class);
+		if (tableSourceTable == null){
+			return false;
+		}
+		DynamicTableSource dynamicTableSource = tableSourceTable.tableSource();
+		if (!(dynamicTableSource instanceof SupportsPartitionPushDown)) {
+			return false;
+		}
+		CatalogTable catalogTable = tableSourceTable.catalogTable();
+		if (!catalogTable.isPartitioned() || catalogTable.getPartitionKeys().isEmpty()) {
+			return false;
+		}
+		return Arrays.stream(tableSourceTable.extraDigests()).noneMatch(digest -> digest.startsWith("partitions=["));
+	}
+
+	@Override
+	public void onMatch(RelOptRuleCall call) {
+		Filter filter = call.rel(0);
+		LogicalTableScan scan = call.rel(1);
+		FlinkContext context = call.getPlanner().getContext().unwrap(FlinkContext.class);
+		TableSourceTable tableSourceTable = scan.getTable().unwrap(TableSourceTable.class);
+
+		// build pruner
+		RelDataType inputFieldTypes = filter.getInput().getRowType();
+		List<String> inputFieldNames = inputFieldTypes.getFieldNames();
+		List<String> partitionFieldNames = tableSourceTable.catalogTable().getPartitionKeys();
+		RelBuilder relBuilder = call.builder();
+		RexBuilder rexBuilder = relBuilder.getRexBuilder();
+		Tuple2<Seq<RexNode>, Seq<RexNode>> allPredicates = RexNodeExtractor.extractPartitionPredicateList(
+			filter.getCondition(),
+			FlinkRelOptUtil.getMaxCnfNodeCount(scan),
+			inputFieldNames.toArray(new String[0]),
+			rexBuilder,
+			partitionFieldNames.toArray(new String[0])
+			);
+		RexNode partitionPredicate = RexUtil.composeConjunction(rexBuilder, JavaConversions.seqAsJavaList(allPredicates._1));
+
+		if (partitionPredicate.isAlwaysTrue()){
+			return;
+		}
+
+		List<LogicalType> partitionFieldTypes = partitionFieldNames.stream().map(name -> {

Review comment:
       change `List<LogicalType>` to `LogicalType[]`?
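   For illustration, the same stream with an array result (derived from the code quoted above):

   ```java
   LogicalType[] partitionFieldTypes = partitionFieldNames.stream()
       .map(name -> {
           int index = inputFieldNames.indexOf(name);
           if (index < 0) {
               throw new TableException(String.format(
                   "Partitioned key '%s' isn't found in input columns. Validator should have checked that.", name));
           }
           return inputFieldTypes.getFieldList().get(index).getType();
       })
       .map(FlinkTypeFactory::toLogicalType)
       .toArray(LogicalType[]::new);
   // the later call site can then pass partitionFieldTypes directly instead of
   // partitionFieldTypes.toArray(new LogicalType[0])
   ```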

##########
File path: flink-table/flink-table-planner-blink/src/test/scala/org/apache/flink/table/planner/runtime/batch/sql/TableSourceITCase.scala
##########
@@ -20,14 +20,15 @@ package org.apache.flink.table.planner.runtime.batch.sql
 
 import org.apache.flink.table.planner.factories.TestValuesTableFactory
 import org.apache.flink.table.planner.runtime.utils.BatchTestBase.row
-import org.apache.flink.table.planner.runtime.utils.{BatchTestBase, TestData}
+import org.apache.flink.table.planner.runtime.utils.{BatchTestBase, TestData, TestingAppendSink}
 import org.apache.flink.table.planner.utils._
 import org.apache.flink.types.Row
-
 import org.junit.{Before, Test}
-
 import java.lang.{Boolean => JBool, Integer => JInt, Long => JLong}
 
+import org.apache.flink.table.planner.JString

Review comment:
       unused import

##########
File path: flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/plan/rules/logical/PushPartitionIntoTableSourceScanRule.java
##########
@@ -0,0 +1,325 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.plan.rules.logical;
+
+import org.apache.flink.table.api.TableException;
+import org.apache.flink.table.catalog.Catalog;
+import org.apache.flink.table.catalog.CatalogPartitionSpec;
+import org.apache.flink.table.catalog.CatalogTable;
+import org.apache.flink.table.catalog.ObjectIdentifier;
+import org.apache.flink.table.catalog.ObjectPath;
+import org.apache.flink.table.catalog.exceptions.CatalogException;
+import org.apache.flink.table.catalog.exceptions.PartitionNotExistException;
+import org.apache.flink.table.catalog.exceptions.TableNotExistException;
+import org.apache.flink.table.catalog.exceptions.TableNotPartitionedException;
+import org.apache.flink.table.catalog.stats.CatalogColumnStatistics;
+import org.apache.flink.table.catalog.stats.CatalogTableStatistics;
+import org.apache.flink.table.connector.source.DynamicTableSource;
+import org.apache.flink.table.connector.source.abilities.SupportsPartitionPushDown;
+import org.apache.flink.table.expressions.Expression;
+import org.apache.flink.table.expressions.ResolvedExpression;
+import org.apache.flink.table.plan.stats.TableStats;
+import org.apache.flink.table.planner.calcite.FlinkContext;
+import org.apache.flink.table.planner.calcite.FlinkTypeFactory;
+import org.apache.flink.table.planner.plan.schema.TableSourceTable;
+import org.apache.flink.table.planner.plan.stats.FlinkStatistic;
+import org.apache.flink.table.planner.plan.utils.FlinkRelOptUtil;
+import org.apache.flink.table.planner.plan.utils.PartitionPruner;
+import org.apache.flink.table.planner.plan.utils.RexNodeExtractor;
+import org.apache.flink.table.planner.plan.utils.RexNodeToExpressionConverter;
+import org.apache.flink.table.planner.utils.CatalogTableStatisticsConverter;
+import org.apache.flink.table.types.logical.LogicalType;
+
+import org.apache.calcite.plan.RelOptRule;
+import org.apache.calcite.plan.RelOptRuleCall;
+import org.apache.calcite.rel.core.Filter;
+import org.apache.calcite.rel.logical.LogicalTableScan;
+import org.apache.calcite.rel.type.RelDataType;
+import org.apache.calcite.rex.RexBuilder;
+import org.apache.calcite.rex.RexInputRef;
+import org.apache.calcite.rex.RexNode;
+import org.apache.calcite.rex.RexShuttle;
+import org.apache.calcite.rex.RexUtil;
+import org.apache.calcite.tools.RelBuilder;
+
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+import java.util.Map;
+import java.util.Optional;
+import java.util.TimeZone;
+import java.util.function.Function;
+import java.util.stream.Collectors;
+
+import scala.Option;
+import scala.Tuple2;
+import scala.collection.JavaConversions;
+import scala.collection.Seq;
+
+/**
+ * Planner rule that tries to push partition evaluated by filter condition into a {@link LogicalTableScan}.
+*/
+public class PushPartitionIntoTableSourceScanRule extends RelOptRule {
+	public static final PushPartitionIntoTableSourceScanRule INSTANCE = new PushPartitionIntoTableSourceScanRule();
+
+	public PushPartitionIntoTableSourceScanRule(){
+		super(operand(Filter.class,
+				operand(LogicalTableScan.class, none())),
+			"PushPartitionTableSourceScanRule");
+	}
+
+	@Override
+	public boolean matches(RelOptRuleCall call) {
+		Filter filter = call.rel(0);
+		if (filter.getCondition() == null) {
+			return false;
+		}
+		TableSourceTable tableSourceTable = call.rel(1).getTable().unwrap(TableSourceTable.class);
+		if (tableSourceTable == null){
+			return false;
+		}
+		DynamicTableSource dynamicTableSource = tableSourceTable.tableSource();
+		if (!(dynamicTableSource instanceof SupportsPartitionPushDown)) {
+			return false;
+		}
+		CatalogTable catalogTable = tableSourceTable.catalogTable();
+		if (!catalogTable.isPartitioned() || catalogTable.getPartitionKeys().isEmpty()) {
+			return false;
+		}
+		return Arrays.stream(tableSourceTable.extraDigests()).noneMatch(digest -> digest.startsWith("partitions=["));
+	}
+
+	@Override
+	public void onMatch(RelOptRuleCall call) {
+		Filter filter = call.rel(0);
+		LogicalTableScan scan = call.rel(1);
+		FlinkContext context = call.getPlanner().getContext().unwrap(FlinkContext.class);
+		TableSourceTable tableSourceTable = scan.getTable().unwrap(TableSourceTable.class);
+
+		// build pruner
+		RelDataType inputFieldTypes = filter.getInput().getRowType();
+		List<String> inputFieldNames = inputFieldTypes.getFieldNames();
+		List<String> partitionFieldNames = tableSourceTable.catalogTable().getPartitionKeys();
+		RelBuilder relBuilder = call.builder();
+		RexBuilder rexBuilder = relBuilder.getRexBuilder();
+		Tuple2<Seq<RexNode>, Seq<RexNode>> allPredicates = RexNodeExtractor.extractPartitionPredicateList(
+			filter.getCondition(),
+			FlinkRelOptUtil.getMaxCnfNodeCount(scan),
+			inputFieldNames.toArray(new String[0]),
+			rexBuilder,
+			partitionFieldNames.toArray(new String[0])
+			);
+		RexNode partitionPredicate = RexUtil.composeConjunction(rexBuilder, JavaConversions.seqAsJavaList(allPredicates._1));
+
+		if (partitionPredicate.isAlwaysTrue()){
+			return;
+		}
+
+		List<LogicalType> partitionFieldTypes = partitionFieldNames.stream().map(name -> {
+			int index  = inputFieldNames.indexOf(name);
+			if (index < 0) {
+				throw new TableException(String.format("Partitioned key '%s' isn't found in input columns. " +
+					"Validator should have checked that.", name));
+			}			return inputFieldTypes.getFieldList().get(index).getType(); })
+			.map(FlinkTypeFactory::toLogicalType).collect(Collectors.toList());
+
+		RexNode finalPartitionPredicate = adjustPartitionPredicate(inputFieldNames, partitionFieldNames, partitionPredicate);
+		Function<List<Map<String, String>>, List<Map<String, String>>> defaultPruner = partitions -> PartitionPruner.prunePartitions(
+			context.getTableConfig(),
+			partitionFieldNames.toArray(new String[0]),
+			partitionFieldTypes.toArray(new LogicalType[0]),
+			partitions,
+			finalPartitionPredicate);
+
+		// get partitions from table/catalog and prune
+		Optional<Catalog> catalogOptional = context.getCatalogManager().getCatalog(tableSourceTable.tableIdentifier().getCatalogName());
+		List<Map<String, String>> remainingPartitions = null;
+		Optional<List<Map<String, String>>> optionalPartitions;
+		// fields to read partitions from catalog and build new statistic
+		DynamicTableSource dynamicTableSource = tableSourceTable.tableSource().copy();
+		ObjectIdentifier identifier = tableSourceTable.tableIdentifier();
+		ObjectPath tablePath = identifier.toObjectPath();
+		try {
+			optionalPartitions = ((SupportsPartitionPushDown) dynamicTableSource).listPartitions();
+			if (optionalPartitions.isPresent() && !optionalPartitions.get().isEmpty()) {
+				remainingPartitions = defaultPruner.apply(optionalPartitions.get());
+			}
+		} catch (UnsupportedOperationException e) {
+			// check catalog whether is available
+			// we will read partitions from catalog if table doesn't support listPartitions.
+			if (!catalogOptional.isPresent()){
+				throw new TableException(
+					String.format("Table %s must from a catalog, but %s is not a catalog",
+						identifier.asSummaryString(), identifier.getCatalogName()), e);
+			}
+			try {
+				optionalPartitions = readPartitionFromCatalogAndPrune(
+					context,
+					catalogOptional.get(),
+					identifier,
+					inputFieldNames,
+					allPredicates._1(),
+					defaultPruner
+				);
+				if (optionalPartitions.isPresent() && !optionalPartitions.get().isEmpty()) {
+					remainingPartitions = optionalPartitions.get();
+				}
+			} catch (TableNotExistException tableNotExistException) {
+				throw new TableException(String.format("Table %s is not found in catalog.", identifier.asSummaryString()), e);
+			} catch (TableNotPartitionedException tableNotPartitionedException) {
+				remainingPartitions = null;
+			}
+		}
+		if (remainingPartitions != null) {
+			((SupportsPartitionPushDown) dynamicTableSource).applyPartitions(remainingPartitions);
+		}
+
+		// build new statistic
+		TableStats newTableStat = null;
+		Optional<TableStats> partitionStats;
+		if (remainingPartitions != null && catalogOptional.isPresent()) {
+			for (Map<String, String> partition: remainingPartitions) {
+				partitionStats = getPartitionStats(catalogOptional.get(), tablePath, partition);
+				if (!partitionStats.isPresent()) {
+					// clear all information before
+					newTableStat = null;
+					break;
+				} else {
+					newTableStat = newTableStat == null ? partitionStats.get() : newTableStat.merge(partitionStats.get());
+				}
+			}
+		}
+		FlinkStatistic newStatistic = FlinkStatistic.builder()
+			.statistic(tableSourceTable.getStatistic())
+			.tableStats(newTableStat)
+			.build();
+
+		String extraDigest = remainingPartitions == null ? "partitions=[]" :
+			("partitions=[" +
+				String.join(", ", remainingPartitions
+					.stream()
+					.map(Object::toString)
+					.toArray(String[]::new)) +
+				"]");
+		TableSourceTable newTableSourceTable = tableSourceTable.copy(dynamicTableSource, newStatistic, new String[]{extraDigest});
+
+		LogicalTableScan newScan = LogicalTableScan.create(scan.getCluster(), newTableSourceTable, scan.getHints());
+
+		RexNode nonPartitionPredicate = RexUtil.composeConjunction(rexBuilder, JavaConversions.seqAsJavaList(allPredicates._2()));
+		if (nonPartitionPredicate.isAlwaysTrue()) {
+			call.transformTo(newScan);
+		} else {
+			Filter newFilter = filter.copy(filter.getTraitSet(), newScan, nonPartitionPredicate);
+			call.transformTo(newFilter);
+		}
+	}
+
+	/**
+	 * adjust the partition field reference index to evaluate the partition values.
+	 * e.g. the original input fields is: a, b, c, p, and p is partition field. the partition values
+	 * are: List(Map("p"->"1"), Map("p" -> "2"), Map("p" -> "3")). If the original partition
+	 * predicate is $3 > 1. after adjusting, the new predicate is ($0 > 1).
+	 * and use ($0 > 1) to evaluate partition values (row(1), row(2), row(3)).
+	 */
+	private RexNode adjustPartitionPredicate(List<String> inputFieldNames, List<String> partitionFieldNames, RexNode partitionPredicate) {
+		return partitionPredicate.accept(new RexShuttle(){
+			@Override
+			public RexNode visitInputRef(RexInputRef inputRef) {
+				int index = inputRef.getIndex();
+				String fieldName = inputFieldNames.get(index);
+				int newIndex = partitionFieldNames.indexOf(fieldName);
+				if (newIndex < 0) {
+					throw new TableException(String.format("Field name '%s' isn't found in partitioned columns." +
+						" Validator should have checked that.", fieldName));
+				}
+				if (newIndex == index){
+					return inputRef;
+				} else {
+					return new RexInputRef(newIndex, inputRef.getType());
+				}
+			}
+		});
+	}
+
+	private Optional<List<Map<String, String>>> readPartitionFromCatalogAndPrune(
+			FlinkContext context,
+			Catalog catalog,
+			ObjectIdentifier tableIdentifier,
+			List<String> allFieldNames,
+			Seq<RexNode> partitionPredicate,
+			Function<List<Map<String, String>>, List<Map<String, String>>> pruner) throws TableNotExistException, TableNotPartitionedException{
+		RexNodeToExpressionConverter converter = new RexNodeToExpressionConverter(
+			allFieldNames.toArray(new String[0]),
+			context.getFunctionCatalog(),
+			context.getCatalogManager(),
+			TimeZone.getTimeZone(context.getTableConfig().getLocalTimeZone()));
+		ArrayList<Expression> partitionFilters = new ArrayList<>();
+		Option<ResolvedExpression> subExpr;
+		for (RexNode node: JavaConversions.seqAsJavaList(partitionPredicate)) {
+			subExpr = node.accept(converter);
+			if (!subExpr.isEmpty()) {
+				partitionFilters.add(subExpr.get());
+			}
+		}
+		ObjectPath tablePath = tableIdentifier.toObjectPath();
+		if (partitionFilters.size() > 0) {
+			try {
+				List<Map<String, String>> remainingPartitions = catalog.listPartitionsByFilter(tablePath, partitionFilters)
+					.stream()
+					.map(CatalogPartitionSpec::getPartitionSpec)
+					.collect(Collectors.toList());
+				return Optional.of(remainingPartitions);
+			} catch (UnsupportedOperationException e) {
+				return readPartitionFromCatalogWithoutFilterAndPrune(catalog, tablePath, pruner);
+			}
+		} else {
+			return readPartitionFromCatalogWithoutFilterAndPrune(catalog, tablePath, pruner);
+		}
+	}
+
+	private Optional<List<Map<String, String>>> readPartitionFromCatalogWithoutFilterAndPrune(
+			Catalog catalog,
+			ObjectPath tablePath,
+			Function<List<Map<String, String>>, List<Map<String, String>>> pruner) throws TableNotExistException, TableNotPartitionedException, CatalogException {
+		List<Map<String, String>> remainingPartitions;
+		List<Map<String, String>> partitions = catalog.listPartitions(tablePath)
+			.stream()
+			.map(CatalogPartitionSpec::getPartitionSpec)
+			.collect(Collectors.toList());
+		// prune partitions
+		if (partitions.size() > 0) {
+			remainingPartitions = pruner.apply(partitions);
+			return Optional.of(remainingPartitions);
+		} else {
+			return Optional.empty();
+		}
+	}
+
+	private Optional<TableStats> getPartitionStats(Catalog catalog, ObjectPath tablePath, Map<String, String> partition) {
+		try {
+			CatalogPartitionSpec spec = new CatalogPartitionSpec(partition);
+			CatalogTableStatistics partitionStat = catalog.getPartitionStatistics(tablePath, spec);
+			CatalogColumnStatistics	partitionColStat = catalog.getPartitionColumnStatistics(tablePath, spec);

Review comment:
       nit: replace the tab with a space

##########
File path: flink-table/flink-table-planner-blink/src/test/java/org/apache/flink/table/planner/factories/TestValuesTableFactory.java
##########
@@ -629,35 +728,53 @@ public String asSummaryString() {
 		}
 
 		private Collection<RowData> convertToRowData(
-				Collection<Row> data,
+				Map<Map<String, String>, Collection<Row>> data,
 				int[] projectedFields,
 				DataStructureConverter converter) {
 			List<RowData> result = new ArrayList<>();
-			for (Row value : data) {
-				if (result.size() >= limit) {
-					return result;
-				}
-				if (isRetainedAfterApplyingFilterPredicates(value)) {
-					Row projectedRow;
-					if (projectedFields == null) {
-						projectedRow = value;
-					} else {
-						Object[] newValues = new Object[projectedFields.length];
-						for (int i = 0; i < projectedFields.length; ++i) {
-							newValues[i] = value.getField(projectedFields[i]);
-						}
-						projectedRow = Row.of(newValues);
+			List<Map<String, String>> keys = Collections.EMPTY_LIST.equals(allPartitions) ?

Review comment:
       `Collections.EMPTY_LIST.equals(allPartitions)` -> `allPartitions.isEmpty()` 

##########
File path: flink-table/flink-table-planner-blink/src/test/java/org/apache/flink/table/planner/factories/TestValuesTableFactory.java
##########
@@ -328,7 +357,59 @@ public DynamicTableSink createDynamicTableSink(Context context) {
 			RUNTIME_SINK,
 			SINK_EXPECTED_MESSAGES_NUM,
 			NESTED_PROJECTION_SUPPORTED,
-			FILTERABLE_FIELDS));
+			FILTERABLE_FIELDS,
+			USE_PARTITION_PUSH_DOWN,
+			PARTITION_LIST));
+	}
+
+	private List<Map<String, String>> parsePartitionList(String partitionString) {
+		return Arrays.stream(partitionString.split(";")).map(
+			partition -> {
+				Map<String, String> spec = new HashMap<>();
+				Arrays.stream(partition.split(",")).forEach(pair -> {
+					String[] split = pair.split(":");
+					spec.put(split[0].trim(), split[1].trim());
+				});
+				return spec;
+			}
+		).collect(Collectors.toList());
+	}
+
+	private Map<Map<String, String>, Collection<Row>> mapRowsToPartitions(
+			TableSchema schema,
+			Collection<Row> rows,
+			List<Map<String, String>> partitions) {
+		if (!rows.isEmpty() && partitions.isEmpty()) {
+			throw new IllegalArgumentException(
+				"Please add a partition list if partition push down is used. Currently TestValuesTableSource doesn't support creating the partition list automatically.");
+		}
+		Map<Map<String, String>, Collection<Row>> map = new HashMap<>();
+		for (Map<String, String> partition: partitions) {
+			map.put(partition, new ArrayList<>());
+		}
+		String[] fieldnames = schema.getFieldNames();
+		boolean match = true;

Review comment:
       move this line to line#394

##########
File path: flink-table/flink-table-planner-blink/src/test/java/org/apache/flink/table/planner/factories/TestValuesTableFactory.java
##########
@@ -243,6 +245,19 @@ private static RowKind parseRowKind(String rowKindShortString) {
 		.asList()
 		.noDefaultValue();
 
+	private static final ConfigOption<Boolean> USE_PARTITION_PUSH_DOWN = ConfigOptions

Review comment:
       Could we treat a non-empty (and non-null) `PARTITION_LIST` as meaning `use-partition-push-down=true`? Then this config option could be removed.
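       For example, a rough sketch of what that could look like (the option/variable names below are only assumptions for illustration, not the exact code in this PR):

           // hypothetical sketch: derive the push-down flag from the configured partition list
           // instead of a dedicated boolean option
           List<Map<String, String>> allPartitions = helper.getOptions()
                   .getOptional(PARTITION_LIST)              // assumed to be an optional string option
                   .map(this::parsePartitionList)
                   .orElse(Collections.emptyList());
           // partition push down is considered enabled only when a partition list is configured
           boolean usePartitionPushDown = !allPartitions.isEmpty();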




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #12966: [FLINK-17427][table sql/planner]Support SupportsPartitionPushDown in …

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #12966:
URL: https://github.com/apache/flink/pull/12966#issuecomment-662861328


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "0f19df4d1fcbb4093b69d772628b67b81ebb443a",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4759",
       "triggerID" : "0f19df4d1fcbb4093b69d772628b67b81ebb443a",
       "triggerType" : "PUSH"
     }, {
       "hash" : "6697110fb14fef778707f74b66227df2953c487c",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5107",
       "triggerID" : "6697110fb14fef778707f74b66227df2953c487c",
       "triggerType" : "PUSH"
     }, {
       "hash" : "1790d93b79cfdfc3f65a7805e444699736f80d93",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5122",
       "triggerID" : "1790d93b79cfdfc3f65a7805e444699736f80d93",
       "triggerType" : "PUSH"
     }, {
       "hash" : "9a05a030c661def4a45b36370dbcfa5e786ed8dc",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5147",
       "triggerID" : "9a05a030c661def4a45b36370dbcfa5e786ed8dc",
       "triggerType" : "PUSH"
     }, {
       "hash" : "97bb191e646a7ddc62d105e08a7473c8e5561160",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5455",
       "triggerID" : "97bb191e646a7ddc62d105e08a7473c8e5561160",
       "triggerType" : "PUSH"
     }, {
       "hash" : "eea5ebcd848bd4e554134096073cbd88d9725395",
       "status" : "FAILURE",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5458",
       "triggerID" : "eea5ebcd848bd4e554134096073cbd88d9725395",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * 1790d93b79cfdfc3f65a7805e444699736f80d93 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5122) 
   * eea5ebcd848bd4e554134096073cbd88d9725395 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5458) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #12966: [FLINK-17427][table sql/planner]Support SupportsPartitionPushDown in …

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #12966:
URL: https://github.com/apache/flink/pull/12966#issuecomment-662861328


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "0f19df4d1fcbb4093b69d772628b67b81ebb443a",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4759",
       "triggerID" : "0f19df4d1fcbb4093b69d772628b67b81ebb443a",
       "triggerType" : "PUSH"
     }, {
       "hash" : "6697110fb14fef778707f74b66227df2953c487c",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5107",
       "triggerID" : "6697110fb14fef778707f74b66227df2953c487c",
       "triggerType" : "PUSH"
     }, {
       "hash" : "1790d93b79cfdfc3f65a7805e444699736f80d93",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5122",
       "triggerID" : "1790d93b79cfdfc3f65a7805e444699736f80d93",
       "triggerType" : "PUSH"
     }, {
       "hash" : "9a05a030c661def4a45b36370dbcfa5e786ed8dc",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5147",
       "triggerID" : "9a05a030c661def4a45b36370dbcfa5e786ed8dc",
       "triggerType" : "PUSH"
     }, {
       "hash" : "97bb191e646a7ddc62d105e08a7473c8e5561160",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5455",
       "triggerID" : "97bb191e646a7ddc62d105e08a7473c8e5561160",
       "triggerType" : "PUSH"
     }, {
       "hash" : "eea5ebcd848bd4e554134096073cbd88d9725395",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5458",
       "triggerID" : "eea5ebcd848bd4e554134096073cbd88d9725395",
       "triggerType" : "PUSH"
     }, {
       "hash" : "50079cc5a0bbe71de746f78a22180febf9a35e57",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5467",
       "triggerID" : "50079cc5a0bbe71de746f78a22180febf9a35e57",
       "triggerType" : "PUSH"
     }, {
       "hash" : "b1b4efe76749cd0dfa6b4aea94ca2ce9f0d15be1",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5469",
       "triggerID" : "b1b4efe76749cd0dfa6b4aea94ca2ce9f0d15be1",
       "triggerType" : "PUSH"
     }, {
       "hash" : "4642a985cde81555583a17880cd2462399338310",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5503",
       "triggerID" : "4642a985cde81555583a17880cd2462399338310",
       "triggerType" : "PUSH"
     }, {
       "hash" : "984744723761b8124aa003f23e65d4bb484a73c7",
       "status" : "SUCCESS",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5504",
       "triggerID" : "984744723761b8124aa003f23e65d4bb484a73c7",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * 1790d93b79cfdfc3f65a7805e444699736f80d93 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5122) 
   * 984744723761b8124aa003f23e65d4bb484a73c7 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5504) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot commented on pull request #12966: [FLINK-17427][table sql/planner]Support SupportsPartitionPushDown in …

Posted by GitBox <gi...@apache.org>.
flinkbot commented on pull request #12966:
URL: https://github.com/apache/flink/pull/12966#issuecomment-662853203


   Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress of the review.
   
   
   ## Automated Checks
   Last check on commit 0f19df4d1fcbb4093b69d772628b67b81ebb443a (Thu Jul 23 07:11:13 UTC 2020)
   
   **Warnings:**
    * No documentation files were touched! Remember to keep the Flink docs up to date!
   
   
   <sub>Mention the bot in a comment to re-run the automated checks.</sub>
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process.<details>
    The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`)
    - `@flinkbot approve all` to approve all aspects
    - `@flinkbot approve-until architecture` to approve everything until `architecture`
    - `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention
    - `@flinkbot disapprove architecture` to remove an approval you gave earlier
   </details>


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] godfreyhe commented on a change in pull request #12966: [FLINK-17427][table sql/planner]Support SupportsPartitionPushDown in …

Posted by GitBox <gi...@apache.org>.
godfreyhe commented on a change in pull request #12966:
URL: https://github.com/apache/flink/pull/12966#discussion_r464239138



##########
File path: flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/plan/rules/logical/PushPartitionIntoTableSourceScanRule.java
##########
@@ -0,0 +1,320 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.plan.rules.logical;
+
+import org.apache.flink.table.api.TableConfig;
+import org.apache.flink.table.api.TableException;
+import org.apache.flink.table.catalog.Catalog;
+import org.apache.flink.table.catalog.CatalogPartitionSpec;
+import org.apache.flink.table.catalog.CatalogTable;
+import org.apache.flink.table.catalog.ObjectIdentifier;
+import org.apache.flink.table.catalog.ObjectPath;
+import org.apache.flink.table.catalog.exceptions.PartitionNotExistException;
+import org.apache.flink.table.catalog.exceptions.TableNotExistException;
+import org.apache.flink.table.catalog.exceptions.TableNotPartitionedException;
+import org.apache.flink.table.catalog.stats.CatalogColumnStatistics;
+import org.apache.flink.table.catalog.stats.CatalogTableStatistics;
+import org.apache.flink.table.connector.source.DynamicTableSource;
+import org.apache.flink.table.connector.source.abilities.SupportsPartitionPushDown;
+import org.apache.flink.table.expressions.Expression;
+import org.apache.flink.table.expressions.ResolvedExpression;
+import org.apache.flink.table.plan.stats.TableStats;
+import org.apache.flink.table.planner.calcite.FlinkContext;
+import org.apache.flink.table.planner.calcite.FlinkTypeFactory;
+import org.apache.flink.table.planner.plan.schema.TableSourceTable;
+import org.apache.flink.table.planner.plan.stats.FlinkStatistic;
+import org.apache.flink.table.planner.plan.utils.FlinkRelOptUtil;
+import org.apache.flink.table.planner.plan.utils.PartitionPruner;
+import org.apache.flink.table.planner.plan.utils.RexNodeExtractor;
+import org.apache.flink.table.planner.plan.utils.RexNodeToExpressionConverter;
+import org.apache.flink.table.planner.utils.CatalogTableStatisticsConverter;
+import org.apache.flink.table.types.logical.LogicalType;
+
+import org.apache.calcite.plan.RelOptRule;
+import org.apache.calcite.plan.RelOptRuleCall;
+import org.apache.calcite.rel.core.Filter;
+import org.apache.calcite.rel.logical.LogicalTableScan;
+import org.apache.calcite.rel.type.RelDataType;
+import org.apache.calcite.rex.RexBuilder;
+import org.apache.calcite.rex.RexInputRef;
+import org.apache.calcite.rex.RexNode;
+import org.apache.calcite.rex.RexShuttle;
+import org.apache.calcite.rex.RexUtil;
+import org.apache.calcite.tools.RelBuilder;
+
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+import java.util.Map;
+import java.util.Optional;
+import java.util.TimeZone;
+import java.util.stream.Collectors;
+
+import scala.Option;
+import scala.Tuple2;
+import scala.collection.JavaConversions;
+import scala.collection.Seq;
+
+/**
+ * Planner rule that tries to push partition evaluated by filter condition into a {@link LogicalTableScan}.
+*/
+public class PushPartitionIntoTableSourceScanRule extends RelOptRule {
+	public static final PushPartitionIntoTableSourceScanRule INSTANCE = new PushPartitionIntoTableSourceScanRule();
+
+	public PushPartitionIntoTableSourceScanRule(){
+		super(operand(Filter.class,
+				operand(LogicalTableScan.class, none())),
+			"PushPartitionTableSourceScanRule");
+	}
+
+	@Override
+	public boolean matches(RelOptRuleCall call) {
+		Filter filter = call.rel(0);
+		if (filter.getCondition() == null) {
+			return false;
+		}
+		TableSourceTable tableSourceTable = call.rel(1).getTable().unwrap(TableSourceTable.class);
+		if (tableSourceTable == null){
+			return false;
+		}
+		DynamicTableSource dynamicTableSource = tableSourceTable.tableSource();
+		if (!(dynamicTableSource instanceof SupportsPartitionPushDown)) {
+			return false;
+		}
+		CatalogTable catalogTable = tableSourceTable.catalogTable();
+		if (!catalogTable.isPartitioned() || catalogTable.getPartitionKeys().size() == 0) {
+			return false;
+		}
+		return Arrays.stream(tableSourceTable.extraDigests()).noneMatch(digest -> digest.startsWith("partitions=["));
+	}
+
+	@Override
+	public void onMatch(RelOptRuleCall call) {
+		Filter filter = call.rel(0);
+		LogicalTableScan scan = call.rel(1);
+		FlinkContext context = call.getPlanner().getContext().unwrap(FlinkContext.class);
+		TableSourceTable tableSourceTable = scan.getTable().unwrap(TableSourceTable.class);
+		// use new dynamic table source to push down
+		DynamicTableSource dynamicTableSource = tableSourceTable.tableSource().copy();

Review comment:
       Initialize variables as close as possible to the point where you use them; please refer to [The Principle of Proximity](https://www.approxion.com/the-principle-of-proximity/) for more details.
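       e.g. (just an illustration, not a concrete change request for this exact line): create the copy right next to where the push-down actually happens:

           // create the copy only where it is first needed
           DynamicTableSource dynamicTableSource = tableSourceTable.tableSource().copy();
           ((SupportsPartitionPushDown) dynamicTableSource).applyPartitions(remainingPartitions);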

##########
File path: flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/plan/rules/logical/PushPartitionIntoTableSourceScanRule.java
##########
@@ -0,0 +1,320 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.plan.rules.logical;
+
+import org.apache.flink.table.api.TableConfig;
+import org.apache.flink.table.api.TableException;
+import org.apache.flink.table.catalog.Catalog;
+import org.apache.flink.table.catalog.CatalogPartitionSpec;
+import org.apache.flink.table.catalog.CatalogTable;
+import org.apache.flink.table.catalog.ObjectIdentifier;
+import org.apache.flink.table.catalog.ObjectPath;
+import org.apache.flink.table.catalog.exceptions.PartitionNotExistException;
+import org.apache.flink.table.catalog.exceptions.TableNotExistException;
+import org.apache.flink.table.catalog.exceptions.TableNotPartitionedException;
+import org.apache.flink.table.catalog.stats.CatalogColumnStatistics;
+import org.apache.flink.table.catalog.stats.CatalogTableStatistics;
+import org.apache.flink.table.connector.source.DynamicTableSource;
+import org.apache.flink.table.connector.source.abilities.SupportsPartitionPushDown;
+import org.apache.flink.table.expressions.Expression;
+import org.apache.flink.table.expressions.ResolvedExpression;
+import org.apache.flink.table.plan.stats.TableStats;
+import org.apache.flink.table.planner.calcite.FlinkContext;
+import org.apache.flink.table.planner.calcite.FlinkTypeFactory;
+import org.apache.flink.table.planner.plan.schema.TableSourceTable;
+import org.apache.flink.table.planner.plan.stats.FlinkStatistic;
+import org.apache.flink.table.planner.plan.utils.FlinkRelOptUtil;
+import org.apache.flink.table.planner.plan.utils.PartitionPruner;
+import org.apache.flink.table.planner.plan.utils.RexNodeExtractor;
+import org.apache.flink.table.planner.plan.utils.RexNodeToExpressionConverter;
+import org.apache.flink.table.planner.utils.CatalogTableStatisticsConverter;
+import org.apache.flink.table.types.logical.LogicalType;
+
+import org.apache.calcite.plan.RelOptRule;
+import org.apache.calcite.plan.RelOptRuleCall;
+import org.apache.calcite.rel.core.Filter;
+import org.apache.calcite.rel.logical.LogicalTableScan;
+import org.apache.calcite.rel.type.RelDataType;
+import org.apache.calcite.rex.RexBuilder;
+import org.apache.calcite.rex.RexInputRef;
+import org.apache.calcite.rex.RexNode;
+import org.apache.calcite.rex.RexShuttle;
+import org.apache.calcite.rex.RexUtil;
+import org.apache.calcite.tools.RelBuilder;
+
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+import java.util.Map;
+import java.util.Optional;
+import java.util.TimeZone;
+import java.util.stream.Collectors;
+
+import scala.Option;
+import scala.Tuple2;
+import scala.collection.JavaConversions;
+import scala.collection.Seq;
+
+/**
+ * Planner rule that tries to push partition evaluated by filter condition into a {@link LogicalTableScan}.
+*/
+public class PushPartitionIntoTableSourceScanRule extends RelOptRule {
+	public static final PushPartitionIntoTableSourceScanRule INSTANCE = new PushPartitionIntoTableSourceScanRule();
+
+	public PushPartitionIntoTableSourceScanRule(){
+		super(operand(Filter.class,
+				operand(LogicalTableScan.class, none())),
+			"PushPartitionTableSourceScanRule");
+	}
+
+	@Override
+	public boolean matches(RelOptRuleCall call) {
+		Filter filter = call.rel(0);
+		if (filter.getCondition() == null) {
+			return false;
+		}
+		TableSourceTable tableSourceTable = call.rel(1).getTable().unwrap(TableSourceTable.class);
+		if (tableSourceTable == null){
+			return false;
+		}
+		DynamicTableSource dynamicTableSource = tableSourceTable.tableSource();
+		if (!(dynamicTableSource instanceof SupportsPartitionPushDown)) {
+			return false;
+		}
+		CatalogTable catalogTable = tableSourceTable.catalogTable();
+		if (!catalogTable.isPartitioned() || catalogTable.getPartitionKeys().size() == 0) {

Review comment:
       nit: `catalogTable.getPartitionKeys().isEmpty()` ?

##########
File path: flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/plan/rules/logical/PushPartitionIntoTableSourceScanRule.java
##########
@@ -0,0 +1,320 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.plan.rules.logical;
+
+import org.apache.flink.table.api.TableConfig;
+import org.apache.flink.table.api.TableException;
+import org.apache.flink.table.catalog.Catalog;
+import org.apache.flink.table.catalog.CatalogPartitionSpec;
+import org.apache.flink.table.catalog.CatalogTable;
+import org.apache.flink.table.catalog.ObjectIdentifier;
+import org.apache.flink.table.catalog.ObjectPath;
+import org.apache.flink.table.catalog.exceptions.PartitionNotExistException;
+import org.apache.flink.table.catalog.exceptions.TableNotExistException;
+import org.apache.flink.table.catalog.exceptions.TableNotPartitionedException;
+import org.apache.flink.table.catalog.stats.CatalogColumnStatistics;
+import org.apache.flink.table.catalog.stats.CatalogTableStatistics;
+import org.apache.flink.table.connector.source.DynamicTableSource;
+import org.apache.flink.table.connector.source.abilities.SupportsPartitionPushDown;
+import org.apache.flink.table.expressions.Expression;
+import org.apache.flink.table.expressions.ResolvedExpression;
+import org.apache.flink.table.plan.stats.TableStats;
+import org.apache.flink.table.planner.calcite.FlinkContext;
+import org.apache.flink.table.planner.calcite.FlinkTypeFactory;
+import org.apache.flink.table.planner.plan.schema.TableSourceTable;
+import org.apache.flink.table.planner.plan.stats.FlinkStatistic;
+import org.apache.flink.table.planner.plan.utils.FlinkRelOptUtil;
+import org.apache.flink.table.planner.plan.utils.PartitionPruner;
+import org.apache.flink.table.planner.plan.utils.RexNodeExtractor;
+import org.apache.flink.table.planner.plan.utils.RexNodeToExpressionConverter;
+import org.apache.flink.table.planner.utils.CatalogTableStatisticsConverter;
+import org.apache.flink.table.types.logical.LogicalType;
+
+import org.apache.calcite.plan.RelOptRule;
+import org.apache.calcite.plan.RelOptRuleCall;
+import org.apache.calcite.rel.core.Filter;
+import org.apache.calcite.rel.logical.LogicalTableScan;
+import org.apache.calcite.rel.type.RelDataType;
+import org.apache.calcite.rex.RexBuilder;
+import org.apache.calcite.rex.RexInputRef;
+import org.apache.calcite.rex.RexNode;
+import org.apache.calcite.rex.RexShuttle;
+import org.apache.calcite.rex.RexUtil;
+import org.apache.calcite.tools.RelBuilder;
+
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+import java.util.Map;
+import java.util.Optional;
+import java.util.TimeZone;
+import java.util.stream.Collectors;
+
+import scala.Option;
+import scala.Tuple2;
+import scala.collection.JavaConversions;
+import scala.collection.Seq;
+
+/**
+ * Planner rule that tries to push partition evaluated by filter condition into a {@link LogicalTableScan}.
+*/
+public class PushPartitionIntoTableSourceScanRule extends RelOptRule {
+	public static final PushPartitionIntoTableSourceScanRule INSTANCE = new PushPartitionIntoTableSourceScanRule();
+
+	public PushPartitionIntoTableSourceScanRule(){
+		super(operand(Filter.class,
+				operand(LogicalTableScan.class, none())),
+			"PushPartitionTableSourceScanRule");
+	}
+
+	@Override
+	public boolean matches(RelOptRuleCall call) {
+		Filter filter = call.rel(0);
+		if (filter.getCondition() == null) {
+			return false;
+		}
+		TableSourceTable tableSourceTable = call.rel(1).getTable().unwrap(TableSourceTable.class);
+		if (tableSourceTable == null){
+			return false;
+		}
+		DynamicTableSource dynamicTableSource = tableSourceTable.tableSource();
+		if (!(dynamicTableSource instanceof SupportsPartitionPushDown)) {
+			return false;
+		}
+		CatalogTable catalogTable = tableSourceTable.catalogTable();
+		if (!catalogTable.isPartitioned() || catalogTable.getPartitionKeys().size() == 0) {
+			return false;
+		}
+		return Arrays.stream(tableSourceTable.extraDigests()).noneMatch(digest -> digest.startsWith("partitions=["));
+	}
+
+	@Override
+	public void onMatch(RelOptRuleCall call) {
+		Filter filter = call.rel(0);
+		LogicalTableScan scan = call.rel(1);
+		FlinkContext context = call.getPlanner().getContext().unwrap(FlinkContext.class);
+		TableSourceTable tableSourceTable = scan.getTable().unwrap(TableSourceTable.class);
+		// use new dynamic table source to push down
+		DynamicTableSource dynamicTableSource = tableSourceTable.tableSource().copy();
+		// fields to read partitions from catalog and build new statistic
+		Optional<Catalog> catalogOptional = context.getCatalogManager().getCatalog(tableSourceTable.tableIdentifier().getCatalogName());
+		ObjectIdentifier identifier = tableSourceTable.tableIdentifier();
+		ObjectPath tablePath = identifier.toObjectPath();
+		// fields used to prune
+		RelDataType inputFieldTypes = filter.getInput().getRowType();
+		List<String> inputFieldNames = inputFieldTypes.getFieldNames();
+
+		List<String> partitionFieldNames = tableSourceTable.catalogTable().getPartitionKeys();
+
+		RelBuilder relBuilder = call.builder();
+		RexBuilder rexBuilder = relBuilder.getRexBuilder();
+
+		Tuple2<Seq<RexNode>, Seq<RexNode>> allPredicates = RexNodeExtractor.extractPartitionPredicateList(
+			filter.getCondition(),
+			FlinkRelOptUtil.getMaxCnfNodeCount(scan),
+			inputFieldNames.toArray(new String[0]),
+			rexBuilder,
+			partitionFieldNames.toArray(new String[0])
+			);

Review comment:
       nit: move these onto one line?

##########
File path: flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/plan/rules/logical/PushPartitionIntoTableSourceScanRule.java
##########
@@ -0,0 +1,320 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.plan.rules.logical;
+
+import org.apache.flink.table.api.TableConfig;
+import org.apache.flink.table.api.TableException;
+import org.apache.flink.table.catalog.Catalog;
+import org.apache.flink.table.catalog.CatalogPartitionSpec;
+import org.apache.flink.table.catalog.CatalogTable;
+import org.apache.flink.table.catalog.ObjectIdentifier;
+import org.apache.flink.table.catalog.ObjectPath;
+import org.apache.flink.table.catalog.exceptions.PartitionNotExistException;
+import org.apache.flink.table.catalog.exceptions.TableNotExistException;
+import org.apache.flink.table.catalog.exceptions.TableNotPartitionedException;
+import org.apache.flink.table.catalog.stats.CatalogColumnStatistics;
+import org.apache.flink.table.catalog.stats.CatalogTableStatistics;
+import org.apache.flink.table.connector.source.DynamicTableSource;
+import org.apache.flink.table.connector.source.abilities.SupportsPartitionPushDown;
+import org.apache.flink.table.expressions.Expression;
+import org.apache.flink.table.expressions.ResolvedExpression;
+import org.apache.flink.table.plan.stats.TableStats;
+import org.apache.flink.table.planner.calcite.FlinkContext;
+import org.apache.flink.table.planner.calcite.FlinkTypeFactory;
+import org.apache.flink.table.planner.plan.schema.TableSourceTable;
+import org.apache.flink.table.planner.plan.stats.FlinkStatistic;
+import org.apache.flink.table.planner.plan.utils.FlinkRelOptUtil;
+import org.apache.flink.table.planner.plan.utils.PartitionPruner;
+import org.apache.flink.table.planner.plan.utils.RexNodeExtractor;
+import org.apache.flink.table.planner.plan.utils.RexNodeToExpressionConverter;
+import org.apache.flink.table.planner.utils.CatalogTableStatisticsConverter;
+import org.apache.flink.table.types.logical.LogicalType;
+
+import org.apache.calcite.plan.RelOptRule;
+import org.apache.calcite.plan.RelOptRuleCall;
+import org.apache.calcite.rel.core.Filter;
+import org.apache.calcite.rel.logical.LogicalTableScan;
+import org.apache.calcite.rel.type.RelDataType;
+import org.apache.calcite.rex.RexBuilder;
+import org.apache.calcite.rex.RexInputRef;
+import org.apache.calcite.rex.RexNode;
+import org.apache.calcite.rex.RexShuttle;
+import org.apache.calcite.rex.RexUtil;
+import org.apache.calcite.tools.RelBuilder;
+
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+import java.util.Map;
+import java.util.Optional;
+import java.util.TimeZone;
+import java.util.stream.Collectors;
+
+import scala.Option;
+import scala.Tuple2;
+import scala.collection.JavaConversions;
+import scala.collection.Seq;
+
+/**
+ * Planner rule that tries to push partition evaluated by filter condition into a {@link LogicalTableScan}.
+*/
+public class PushPartitionIntoTableSourceScanRule extends RelOptRule {
+	public static final PushPartitionIntoTableSourceScanRule INSTANCE = new PushPartitionIntoTableSourceScanRule();
+
+	public PushPartitionIntoTableSourceScanRule(){
+		super(operand(Filter.class,
+				operand(LogicalTableScan.class, none())),
+			"PushPartitionTableSourceScanRule");
+	}
+
+	@Override
+	public boolean matches(RelOptRuleCall call) {
+		Filter filter = call.rel(0);
+		if (filter.getCondition() == null) {
+			return false;
+		}
+		TableSourceTable tableSourceTable = call.rel(1).getTable().unwrap(TableSourceTable.class);
+		if (tableSourceTable == null){
+			return false;
+		}
+		DynamicTableSource dynamicTableSource = tableSourceTable.tableSource();
+		if (!(dynamicTableSource instanceof SupportsPartitionPushDown)) {
+			return false;
+		}
+		CatalogTable catalogTable = tableSourceTable.catalogTable();
+		if (!catalogTable.isPartitioned() || catalogTable.getPartitionKeys().size() == 0) {
+			return false;
+		}
+		return Arrays.stream(tableSourceTable.extraDigests()).noneMatch(digest -> digest.startsWith("partitions=["));
+	}
+
+	@Override
+	public void onMatch(RelOptRuleCall call) {
+		Filter filter = call.rel(0);
+		LogicalTableScan scan = call.rel(1);
+		FlinkContext context = call.getPlanner().getContext().unwrap(FlinkContext.class);
+		TableSourceTable tableSourceTable = scan.getTable().unwrap(TableSourceTable.class);
+		// use new dynamic table source to push down
+		DynamicTableSource dynamicTableSource = tableSourceTable.tableSource().copy();
+		// fields to read partitions from catalog and build new statistic
+		Optional<Catalog> catalogOptional = context.getCatalogManager().getCatalog(tableSourceTable.tableIdentifier().getCatalogName());
+		ObjectIdentifier identifier = tableSourceTable.tableIdentifier();
+		ObjectPath tablePath = identifier.toObjectPath();
+		// fields used to prune
+		RelDataType inputFieldTypes = filter.getInput().getRowType();
+		List<String> inputFieldNames = inputFieldTypes.getFieldNames();
+
+		List<String> partitionFieldNames = tableSourceTable.catalogTable().getPartitionKeys();
+
+		RelBuilder relBuilder = call.builder();
+		RexBuilder rexBuilder = relBuilder.getRexBuilder();
+
+		Tuple2<Seq<RexNode>, Seq<RexNode>> allPredicates = RexNodeExtractor.extractPartitionPredicateList(
+			filter.getCondition(),
+			FlinkRelOptUtil.getMaxCnfNodeCount(scan),
+			inputFieldNames.toArray(new String[0]),
+			rexBuilder,
+			partitionFieldNames.toArray(new String[0])
+			);
+
+		RexNode partitionPredicate = RexUtil.composeConjunction(rexBuilder, JavaConversions.seqAsJavaList(allPredicates._1));
+
+		if (partitionPredicate.isAlwaysTrue()){
+			return;
+		}
+
+		List<LogicalType> partitionFieldTypes = partitionFieldNames.stream().map(name -> {
+			int index = inputFieldNames.indexOf(name);
+			if (index < 0) {
+				throw new TableException(String.format("Partitioned key '%s' isn't found in input columns. " +
+					"Validator should have checked that.", name));
+			}
+			return inputFieldTypes.getFieldList().get(index).getType(); })
+			.map(FlinkTypeFactory::toLogicalType).collect(Collectors.toList());
+
+		// get partitions from table source and prune
+		List<Map<String, String>> remainingPartitions = null;
+		Optional<List<Map<String, String>>> optionalPartitions = null;
+		try {
+			optionalPartitions = ((SupportsPartitionPushDown) dynamicTableSource).listPartitions();
+		} catch (UnsupportedOperationException e) {
+			// read partitions from catalog if table source doesn't support listPartitions operation.
+			if (!catalogOptional.isPresent()){
+				throw new TableException(
+					String.format("Table %s must be from a catalog, but %s is not a catalog",
+						identifier.asSummaryString(), identifier.getCatalogName()), e);
+			}
+		}
+		if (optionalPartitions != null) {
+			if (!optionalPartitions.isPresent() || optionalPartitions.get().size() == 0) {
+				// return if no partitions
+				return;
+			}
+			// get remaining partitions
+			remainingPartitions = internalPrunePartitions(
+				optionalPartitions.get(),
+				inputFieldNames,
+				partitionFieldNames,
+				partitionFieldTypes,
+				partitionPredicate,
+				context.getTableConfig());
+		} else {
+			RexNodeToExpressionConverter converter = new RexNodeToExpressionConverter(
+				inputFieldNames.toArray(new String[0]),
+				context.getFunctionCatalog(),
+				context.getCatalogManager(),
+				TimeZone.getTimeZone(context.getTableConfig().getLocalTimeZone()));
+			ArrayList<Expression> exprs = new ArrayList<>();
+			Option<ResolvedExpression> subExpr;
+			for (RexNode node: JavaConversions.seqAsJavaList(allPredicates._1)) {
+				subExpr = node.accept(converter);
+				if (!subExpr.isEmpty()) {
+					exprs.add(subExpr.get());
+				}
+			}
+			try {
+				if (exprs.size() > 0) {
+					remainingPartitions = catalogOptional.get().listPartitionsByFilter(tablePath, exprs)
+						.stream()
+						.map(CatalogPartitionSpec::getPartitionSpec)
+						.collect(Collectors.toList());
+				} else {
+					// no filter and get all partitions
+					List<Map<String, String>> partitions = catalogOptional.get().listPartitions(tablePath)
+						.stream()
+						.map(CatalogPartitionSpec::getPartitionSpec)
+						.collect(Collectors.toList());
+					// prune partitions
+					if (partitions.size() > 0) {
+						remainingPartitions = internalPrunePartitions(
+							partitions,
+							inputFieldNames,
+							partitionFieldNames,
+							partitionFieldTypes,
+							partitionPredicate,
+							context.getTableConfig());
+					} else {
+						return;
+					}
+				}
+			} catch (TableNotExistException e) {
+				throw new TableException(String.format("Table %s is not found in catalog.", identifier.asSummaryString()), e);
+			} catch (TableNotPartitionedException e) {
+				// no partitions in table source
+				return;
+			}
+		}

Review comment:
       I think it's better to split the logic into different sub-methods and simplify this part as follows (see the sketch after this list):
   1. Get the partitions from the TableSource. If that succeeds, do partition pruning and build the new table scan; otherwise fall back to step 2.
   2. Check whether the catalog exists. If it does not exist, return; otherwise go to step 2.1.
   2.1. Try to get the partitions through the `listPartitionsByFilter` method. If that succeeds, build the new table scan; otherwise go to step 2.2.
   2.2. Try to get the partitions through the `listPartitions` method. If that fails, return; otherwise do partition pruning and build the new table scan.
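   In pseudo-code, the suggested structure could look roughly like this (the helper method names are only placeholders, not actual methods in this PR):

       // rough sketch of the suggested control flow
       Optional<List<Map<String, String>>> sourcePartitions = listPartitionsFromSource(dynamicTableSource);
       List<Map<String, String>> remainingPartitions;
       if (sourcePartitions.isPresent()) {
           // 1. partitions come from the TableSource: prune them and build the new scan
           remainingPartitions = prunePartitions(sourcePartitions.get(), partitionPredicate);
       } else {
           // 2. fall back to the catalog
           if (!catalogOptional.isPresent()) {
               return;
           }
           try {
               // 2.1 first try the filter-aware catalog API
               remainingPartitions = listPartitionsByFilter(catalogOptional.get(), tablePath, partitionFilters);
           } catch (UnsupportedOperationException e) {
               // 2.2 otherwise list all partitions and prune them in the planner
               remainingPartitions = prunePartitions(
                   listPartitions(catalogOptional.get(), tablePath), partitionPredicate);
           }
       }
       // finally build the new table scan with the remaining partitions
       buildNewTableScan(call, remainingPartitions);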




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #12966: [FLINK-17427][table sql/planner]Support SupportsPartitionPushDown in …

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #12966:
URL: https://github.com/apache/flink/pull/12966#issuecomment-662861328


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "0f19df4d1fcbb4093b69d772628b67b81ebb443a",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4759",
       "triggerID" : "0f19df4d1fcbb4093b69d772628b67b81ebb443a",
       "triggerType" : "PUSH"
     }, {
       "hash" : "6697110fb14fef778707f74b66227df2953c487c",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5107",
       "triggerID" : "6697110fb14fef778707f74b66227df2953c487c",
       "triggerType" : "PUSH"
     }, {
       "hash" : "1790d93b79cfdfc3f65a7805e444699736f80d93",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5122",
       "triggerID" : "1790d93b79cfdfc3f65a7805e444699736f80d93",
       "triggerType" : "PUSH"
     }, {
       "hash" : "9a05a030c661def4a45b36370dbcfa5e786ed8dc",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5147",
       "triggerID" : "9a05a030c661def4a45b36370dbcfa5e786ed8dc",
       "triggerType" : "PUSH"
     }, {
       "hash" : "97bb191e646a7ddc62d105e08a7473c8e5561160",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5455",
       "triggerID" : "97bb191e646a7ddc62d105e08a7473c8e5561160",
       "triggerType" : "PUSH"
     }, {
       "hash" : "eea5ebcd848bd4e554134096073cbd88d9725395",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5458",
       "triggerID" : "eea5ebcd848bd4e554134096073cbd88d9725395",
       "triggerType" : "PUSH"
     }, {
       "hash" : "50079cc5a0bbe71de746f78a22180febf9a35e57",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5467",
       "triggerID" : "50079cc5a0bbe71de746f78a22180febf9a35e57",
       "triggerType" : "PUSH"
     }, {
       "hash" : "b1b4efe76749cd0dfa6b4aea94ca2ce9f0d15be1",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5469",
       "triggerID" : "b1b4efe76749cd0dfa6b4aea94ca2ce9f0d15be1",
       "triggerType" : "PUSH"
     }, {
       "hash" : "4642a985cde81555583a17880cd2462399338310",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5503",
       "triggerID" : "4642a985cde81555583a17880cd2462399338310",
       "triggerType" : "PUSH"
     }, {
       "hash" : "984744723761b8124aa003f23e65d4bb484a73c7",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5504",
       "triggerID" : "984744723761b8124aa003f23e65d4bb484a73c7",
       "triggerType" : "PUSH"
     }, {
       "hash" : "e91a84cc20e3655749b8cf9b69ed79d855aaedaf",
       "status" : "SUCCESS",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5522",
       "triggerID" : "e91a84cc20e3655749b8cf9b69ed79d855aaedaf",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d02272061d2264ee74d67da5d9f0e1524f1c7d52",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5657",
       "triggerID" : "d02272061d2264ee74d67da5d9f0e1524f1c7d52",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * e91a84cc20e3655749b8cf9b69ed79d855aaedaf Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5522) 
   * d02272061d2264ee74d67da5d9f0e1524f1c7d52 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5657) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #12966: [FLINK-17427][table sql/planner]Support SupportsPartitionPushDown in …

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #12966:
URL: https://github.com/apache/flink/pull/12966#issuecomment-662861328


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "0f19df4d1fcbb4093b69d772628b67b81ebb443a",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4759",
       "triggerID" : "0f19df4d1fcbb4093b69d772628b67b81ebb443a",
       "triggerType" : "PUSH"
     }, {
       "hash" : "6697110fb14fef778707f74b66227df2953c487c",
       "status" : "SUCCESS",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5107",
       "triggerID" : "6697110fb14fef778707f74b66227df2953c487c",
       "triggerType" : "PUSH"
     }, {
       "hash" : "1790d93b79cfdfc3f65a7805e444699736f80d93",
       "status" : "UNKNOWN",
       "url" : "TBD",
       "triggerID" : "1790d93b79cfdfc3f65a7805e444699736f80d93",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * 6697110fb14fef778707f74b66227df2953c487c Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5107) 
   * 1790d93b79cfdfc3f65a7805e444699736f80d93 UNKNOWN
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #12966: [FLINK-17427][table sql/planner]Support SupportsPartitionPushDown in …

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #12966:
URL: https://github.com/apache/flink/pull/12966#issuecomment-662861328


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "0f19df4d1fcbb4093b69d772628b67b81ebb443a",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4759",
       "triggerID" : "0f19df4d1fcbb4093b69d772628b67b81ebb443a",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * 0f19df4d1fcbb4093b69d772628b67b81ebb443a Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4759) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #12966: [FLINK-17427][table sql/planner]Support SupportsPartitionPushDown in …

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #12966:
URL: https://github.com/apache/flink/pull/12966#issuecomment-662861328


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "0f19df4d1fcbb4093b69d772628b67b81ebb443a",
       "status" : "SUCCESS",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4759",
       "triggerID" : "0f19df4d1fcbb4093b69d772628b67b81ebb443a",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * 0f19df4d1fcbb4093b69d772628b67b81ebb443a Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4759) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot commented on pull request #12966: [FLINK-17427][table sql/planner]Support SupportsPartitionPushDown in …

Posted by GitBox <gi...@apache.org>.
flinkbot commented on pull request #12966:
URL: https://github.com/apache/flink/pull/12966#issuecomment-662861328


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "0f19df4d1fcbb4093b69d772628b67b81ebb443a",
       "status" : "UNKNOWN",
       "url" : "TBD",
       "triggerID" : "0f19df4d1fcbb4093b69d772628b67b81ebb443a",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * 0f19df4d1fcbb4093b69d772628b67b81ebb443a UNKNOWN
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #12966: [FLINK-17427][table sql/planner]Support SupportsPartitionPushDown in …

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #12966:
URL: https://github.com/apache/flink/pull/12966#issuecomment-662861328


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "0f19df4d1fcbb4093b69d772628b67b81ebb443a",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4759",
       "triggerID" : "0f19df4d1fcbb4093b69d772628b67b81ebb443a",
       "triggerType" : "PUSH"
     }, {
       "hash" : "6697110fb14fef778707f74b66227df2953c487c",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5107",
       "triggerID" : "6697110fb14fef778707f74b66227df2953c487c",
       "triggerType" : "PUSH"
     }, {
       "hash" : "1790d93b79cfdfc3f65a7805e444699736f80d93",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5122",
       "triggerID" : "1790d93b79cfdfc3f65a7805e444699736f80d93",
       "triggerType" : "PUSH"
     }, {
       "hash" : "9a05a030c661def4a45b36370dbcfa5e786ed8dc",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5147",
       "triggerID" : "9a05a030c661def4a45b36370dbcfa5e786ed8dc",
       "triggerType" : "PUSH"
     }, {
       "hash" : "97bb191e646a7ddc62d105e08a7473c8e5561160",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5455",
       "triggerID" : "97bb191e646a7ddc62d105e08a7473c8e5561160",
       "triggerType" : "PUSH"
     }, {
       "hash" : "eea5ebcd848bd4e554134096073cbd88d9725395",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5458",
       "triggerID" : "eea5ebcd848bd4e554134096073cbd88d9725395",
       "triggerType" : "PUSH"
     }, {
       "hash" : "50079cc5a0bbe71de746f78a22180febf9a35e57",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5467",
       "triggerID" : "50079cc5a0bbe71de746f78a22180febf9a35e57",
       "triggerType" : "PUSH"
     }, {
       "hash" : "b1b4efe76749cd0dfa6b4aea94ca2ce9f0d15be1",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5469",
       "triggerID" : "b1b4efe76749cd0dfa6b4aea94ca2ce9f0d15be1",
       "triggerType" : "PUSH"
     }, {
       "hash" : "4642a985cde81555583a17880cd2462399338310",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5503",
       "triggerID" : "4642a985cde81555583a17880cd2462399338310",
       "triggerType" : "PUSH"
     }, {
       "hash" : "984744723761b8124aa003f23e65d4bb484a73c7",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5504",
       "triggerID" : "984744723761b8124aa003f23e65d4bb484a73c7",
       "triggerType" : "PUSH"
     }, {
       "hash" : "e91a84cc20e3655749b8cf9b69ed79d855aaedaf",
       "status" : "SUCCESS",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5522",
       "triggerID" : "e91a84cc20e3655749b8cf9b69ed79d855aaedaf",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d02272061d2264ee74d67da5d9f0e1524f1c7d52",
       "status" : "UNKNOWN",
       "url" : "TBD",
       "triggerID" : "d02272061d2264ee74d67da5d9f0e1524f1c7d52",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * e91a84cc20e3655749b8cf9b69ed79d855aaedaf Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5522) 
   * d02272061d2264ee74d67da5d9f0e1524f1c7d52 UNKNOWN
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org