Posted to dev@spark.apache.org by GitBox <gi...@apache.org> on 2022/04/03 22:27:35 UTC

[GitHub] [spark] huaxingao commented on a diff in pull request #36043: [SPARK-38768][SQL] Remove `Limit` from plan if complete push down limit to data source.

huaxingao commented on code in PR #36043:
URL: https://github.com/apache/spark/pull/36043#discussion_r841285841


##########
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/v2/V2ScanRelationPushDown.scala:
##########
@@ -380,27 +380,32 @@ object V2ScanRelationPushDown extends Rule[LogicalPlan] with PredicateHelper wit
           sHolder.pushedLimit = Some(limit)
           sHolder.sortOrders = orders
           if (isPartiallyPushed) {
-            s
+            (s, isPartiallyPushed)
           } else {
-            operation
+            (operation, isPartiallyPushed)
           }
         } else {
-          s
+          (s, true)
         }
       } else {
-        s
+        (s, true)
       }
     case p: Project =>
-      val newChild = pushDownLimit(p.child, limit)
-      p.withNewChildren(Seq(newChild))
-    case other => other
+      val (newChild, isPartiallyPushed) = pushDownLimit(p.child, limit)
+      (p.withNewChildren(Seq(newChild)), isPartiallyPushed)
+    case other => (other, true)
   }
 
   def pushDownLimits(plan: LogicalPlan): LogicalPlan = plan.transform {
     case globalLimit @ Limit(IntegerLiteral(limitValue), child) =>
-      val newChild = pushDownLimit(child, limitValue)
-      val newLocalLimit = globalLimit.child.asInstanceOf[LocalLimit].withNewChildren(Seq(newChild))
-      globalLimit.withNewChildren(Seq(newLocalLimit))
+      val (newChild, isPartiallyPushed) = pushDownLimit(child, limitValue)
+      if (isPartiallyPushed) {
+        val newLocalLimit =
+          globalLimit.child.asInstanceOf[LocalLimit].withNewChildren(Seq(newChild))
+        globalLimit.withNewChildren(Seq(newLocalLimit))
+      } else {
+        newChild

Review Comment:
   I think there is a problem here. If `isPartiallyPushed` is false, it is assumed that the `Limit` has been completely pushed down, so Spark no longer applies the `Limit` itself. However, a false `isPartiallyPushed` can also come from the default case in `PushDownUtils.pushLimit`:
   
   ```scala
     def pushLimit(scanBuilder: ScanBuilder, limit: Int): (Boolean, Boolean) = {
       scanBuilder match {
         case s: SupportsPushDownLimit if s.pushLimit(limit) =>
           (true, s.isPartiallyPushed)
         case _ => (false, false)
       }
     }
   ```
   In that case, the `Limit` on the Spark side is removed incorrectly even though the data source never accepted the pushdown.
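   
   A hypothetical sketch of the distinction the caller needs (`canRemoveLimit` is an illustrative helper, not part of this PR): the plan-side `Limit` may only be dropped when the limit was pushed *and* the source applied it fully, so the default case must never look like a complete pushdown:
   
   ```scala
   import org.apache.spark.sql.connector.read.{ScanBuilder, SupportsPushDownLimit}
   
   // Returns true only when it is safe for Spark to drop its own Limit node.
   def canRemoveLimit(scanBuilder: ScanBuilder, limit: Int): Boolean =
     scanBuilder match {
       case s: SupportsPushDownLimit if s.pushLimit(limit) =>
         // Pushed; safe to drop only if the source applied it fully.
         !s.isPartiallyPushed
       case _ =>
         // Nothing was pushed, so Spark must keep its Limit.
         false
     }
   ```
   
   Keyed off a predicate like this, the default case can no longer be mistaken for a complete pushdown.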
   



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscribe@spark.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org

