Posted to github@arrow.apache.org by GitBox <gi...@apache.org> on 2022/05/27 01:40:18 UTC

[GitHub] [arrow-datafusion] ming535 opened a new pull request, #2629: Offset

ming535 opened a new pull request, #2629:
URL: https://github.com/apache/arrow-datafusion/pull/2629

   # Which issue does this PR close?
   
   Closes https://github.com/apache/arrow-datafusion/issues/2551
   
    # Rationale for this change
   
   Support physical plan for `OFFSET`.
   Two test cases for `OFFSET` without `LIMIT` are still not working; this is related to `limit_push_down` https://github.com/apache/arrow-datafusion/issues/2624
   
   # What changes are included in this PR?
   
   Implement the physical plan for `OFFSET`
   
   # Are there any user-facing changes?
   
   # Does this PR break compatibility with Ballista?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: github-unsubscribe@arrow.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


[GitHub] [arrow-datafusion] ming535 commented on pull request #2629: Physical Plan for OFFSET

Posted by GitBox <gi...@apache.org>.
ming535 commented on PR #2629:
URL: https://github.com/apache/arrow-datafusion/pull/2629#issuecomment-1139241758

   I think a better way to solve this issue https://github.com/apache/arrow-datafusion/issues/2624 might be to add another LogicalPlan, for example `LimitWithOffset`. The semantics of these three LogicalPlans differ:
   
   1. `Limit`: only sets an upper bound on the stream's data
   2. `Offset`: only sets a lower bound on the stream's data
   3. `LimitWithOffset`: sets both a lower bound AND an upper bound on the stream's data
   
   As for `limit_push_down`, it won't touch `Offset`.
   It will push down the `limit` of `Limit`.
   It will adjust `LimitWithOffset`'s `limit` based on its offset and push the limit down to TableScan; if the push down happens, we replace it with `Offset`.
   
   As for the physical plan, we need to implement three physical plans accordingly. (Or maybe we should just )
   
   I am not quite sure whether this is a good way to solve this issue, any thoughts?
   @alamb @andygrove @Ted-Jiang 
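
   The intended semantics of the three proposed plans can be sketched over a plain iterator of rows. This is a hypothetical illustration (the `RowBound` type and `apply` function below are made up for this sketch, not DataFusion's actual `LogicalPlan` variants):
   
   ```rust
   /// Hypothetical stand-in for the three proposed LogicalPlan variants.
   enum RowBound {
       /// Upper bound only: keep at most `limit` rows.
       Limit(usize),
       /// Lower bound only: skip the first `offset` rows.
       Offset(usize),
       /// Both bounds: skip `offset` rows, then keep at most `limit` rows.
       LimitWithOffset { offset: usize, limit: usize },
   }
   
   /// Apply a bound to a stream of rows (a Vec stands in for the
   /// RecordBatch stream here).
   fn apply(bound: &RowBound, rows: Vec<i32>) -> Vec<i32> {
       match bound {
           RowBound::Limit(limit) => rows.into_iter().take(*limit).collect(),
           RowBound::Offset(offset) => rows.into_iter().skip(*offset).collect(),
           RowBound::LimitWithOffset { offset, limit } => {
               // Note: a push-down rule would need to fetch
               // `offset + limit` rows from the scan before skipping.
               rows.into_iter().skip(*offset).take(*limit).collect()
           }
       }
   }
   
   fn main() {
       let rows = vec![1, 2, 3, 4, 5];
       assert_eq!(apply(&RowBound::Limit(2), rows.clone()), vec![1, 2]);
       assert_eq!(apply(&RowBound::Offset(2), rows.clone()), vec![3, 4, 5]);
       assert_eq!(
           apply(&RowBound::LimitWithOffset { offset: 1, limit: 2 }, rows),
           vec![2, 3]
       );
   }
   ```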




[GitHub] [arrow-datafusion] ming535 commented on pull request #2629: Physical Plan for OFFSET

Posted by GitBox <gi...@apache.org>.
ming535 commented on PR #2629:
URL: https://github.com/apache/arrow-datafusion/pull/2629#issuecomment-1146661462

   If https://github.com/apache/arrow-datafusion/pull/2694 is ok, then this PR can be closed.




[GitHub] [arrow-datafusion] ming535 closed pull request #2629: Physical Plan for OFFSET

Posted by GitBox <gi...@apache.org>.
ming535 closed pull request #2629: Physical Plan for OFFSET
URL: https://github.com/apache/arrow-datafusion/pull/2629




[GitHub] [arrow-datafusion] ming535 commented on pull request #2629: Physical Plan for OFFSET

Posted by GitBox <gi...@apache.org>.
ming535 commented on PR #2629:
URL: https://github.com/apache/arrow-datafusion/pull/2629#issuecomment-1140276593

   Will update the PR when https://github.com/apache/arrow-datafusion/pull/2638 is reviewed and merged.




[GitHub] [arrow-datafusion] ming535 commented on a diff in pull request #2629: Physical Plan for OFFSET

Posted by GitBox <gi...@apache.org>.
ming535 commented on code in PR #2629:
URL: https://github.com/apache/arrow-datafusion/pull/2629#discussion_r883207529


##########
datafusion/core/tests/sql/offset.rs:
##########
@@ -0,0 +1,77 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+use super::*;
+
+#[tokio::test]
+async fn csv_offset_without_limit() -> Result<()> {

Review Comment:
   This test case is not working right now due to `limit_push_down`.





[GitHub] [arrow-datafusion] alamb commented on a diff in pull request #2629: Physical Plan for OFFSET

Posted by GitBox <gi...@apache.org>.
alamb commented on code in PR #2629:
URL: https://github.com/apache/arrow-datafusion/pull/2629#discussion_r883498432


##########
datafusion/core/src/physical_plan/offset.rs:
##########
@@ -0,0 +1,316 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+//! Defines the OFFSET plan
+
+use crate::execution::context::TaskContext;
+use arrow::array::ArrayRef;
+use arrow::array::UInt64Array;
+use arrow::compute::take;
+use arrow::datatypes::SchemaRef;
+use arrow::error::Result as ArrowResult;
+use arrow::record_batch::RecordBatch;
+
+use datafusion_physical_expr::PhysicalSortExpr;
+use std::any::Any;
+use std::fmt::Formatter;
+use std::pin::Pin;
+use std::sync::Arc;
+use std::task::{Context, Poll};
+
+use futures::stream::{Stream, StreamExt};
+use log::debug;
+
+use super::metrics::ExecutionPlanMetricsSet;
+use crate::error::{DataFusionError, Result};
+use crate::physical_plan::metrics::{BaselineMetrics, MetricsSet};
+use crate::physical_plan::{
+    DisplayFormatType, Distribution, ExecutionPlan, Partitioning, RecordBatchStream,
+    SendableRecordBatchStream, Statistics,
+};
+
+/// Offset execution plan
+#[derive(Debug)]
+pub struct OffsetExec {
+    /// Input execution plan
+    input: Arc<dyn ExecutionPlan>,
+    /// Number of rows to skip
+    offset: usize,
+    /// Execution metrics
+    metrics: ExecutionPlanMetricsSet,
+}
+
+impl OffsetExec {
+    /// Create a new OffsetExec
+    pub fn new(input: Arc<dyn ExecutionPlan>, offset: usize) -> Self {
+        OffsetExec {
+            input,
+            offset,
+            metrics: ExecutionPlanMetricsSet::new(),
+        }
+    }
+
+    /// Input execution plan
+    pub fn input(&self) -> &Arc<dyn ExecutionPlan> {
+        &self.input
+    }
+
+    /// Number of rows to ignore
+    pub fn offset(&self) -> usize {
+        self.offset
+    }
+}
+
+impl ExecutionPlan for OffsetExec {
+    fn as_any(&self) -> &dyn Any {
+        self
+    }
+
+    fn schema(&self) -> SchemaRef {
+        self.input.schema()
+    }
+
+    fn children(&self) -> Vec<Arc<dyn ExecutionPlan>> {
+        vec![self.input.clone()]
+    }
+
+    fn required_child_distribution(&self) -> Distribution {
+        Distribution::SinglePartition
+    }
+
+    /// Get the output partitioning of this plan
+    fn output_partitioning(&self) -> Partitioning {
+        Partitioning::UnknownPartitioning(1)
+    }
+
+    fn relies_on_input_order(&self) -> bool {
+        self.input.output_ordering().is_some()
+    }
+
+    fn maintains_input_order(&self) -> bool {
+        true
+    }
+
+    fn benefits_from_input_partitioning(&self) -> bool {
+        false
+    }
+
+    fn output_ordering(&self) -> Option<&[PhysicalSortExpr]> {
+        self.input.output_ordering()
+    }
+
+    fn with_new_children(
+        self: Arc<Self>,
+        children: Vec<Arc<dyn ExecutionPlan>>,
+    ) -> Result<Arc<dyn ExecutionPlan>> {
+        Ok(Arc::new(OffsetExec::new(children[0].clone(), self.offset)))
+    }
+
+    fn execute(
+        &self,
+        partition: usize,
+        context: Arc<TaskContext>,
+    ) -> Result<SendableRecordBatchStream> {
+        debug!("Start OffsetExec::execute for partition: {}", partition);
+
+        if 0 != partition {
+            return Err(DataFusionError::Internal(format!(
+                "OffsetExec invalid partition {}",
+                partition
+            )));
+        }
+
+        if 1 != self.input.output_partitioning().partition_count() {
+            return Err(DataFusionError::Internal(
+                "OffsetExec requires a single input partition".to_owned(),
+            ));
+        }
+
+        let baseline_metrics = BaselineMetrics::new(&self.metrics, partition);
+        let stream = self.input.execute(0, context)?;
+        Ok(Box::pin(OffsetStream::new(
+            stream,
+            self.offset,
+            baseline_metrics,
+        )))
+    }
+
+    fn fmt_as(&self, t: DisplayFormatType, f: &mut Formatter) -> std::fmt::Result {
+        match t {
+            DisplayFormatType::Default => {
+                write!(f, "OffsetExec: offset={}", self.offset)
+            }
+        }
+    }
+
+    fn metrics(&self) -> Option<MetricsSet> {
+        Some(self.metrics.clone_inner())
+    }
+
+    fn statistics(&self) -> Statistics {
+        let input_stats = self.input.statistics();
+        match input_stats {
+            Statistics {
+                num_rows: Some(nr), ..
+            } => Statistics {
+                num_rows: Some(nr.saturating_sub(self.offset)),
+                is_exact: input_stats.is_exact,
+                ..Default::default()
+            },
+            _ => Statistics::default(),
+        }
+    }
+}
+
+/// An Offset stream skips the first `offset` rows of the input stream.
+pub struct OffsetStream {
+    /// Number of rows to skip before yielding data.
+    offset: usize,
+    input: SendableRecordBatchStream,
+    schema: SchemaRef,
+    /// Number of rows already skipped.
+    skipped: usize,
+    /// Execution time metrics
+    baseline_metrics: BaselineMetrics,
+}
+
+impl OffsetStream {
+    fn new(
+        input: SendableRecordBatchStream,
+        offset: usize,
+        baseline_metrics: BaselineMetrics,
+    ) -> Self {
+        let schema = input.schema();
+        Self {
+            offset,
+            input,
+            schema,
+            skipped: 0,
+            baseline_metrics,
+        }
+    }
+
+    fn poll_and_skip(
+        &mut self,
+        cx: &mut Context<'_>,
+    ) -> Poll<Option<ArrowResult<RecordBatch>>> {
+        loop {
+            let poll = self.input.poll_next_unpin(cx);
+            let poll = poll.map_ok(|batch| {
+                if batch.num_rows() + self.skipped <= self.offset {
+                    self.skipped += batch.num_rows();
+                    RecordBatch::new_empty(self.input.schema())
+                } else {
+                    let new_batch = cut_batch(&batch, self.offset - self.skipped);
+                    self.skipped = self.offset;
+                    new_batch
+                }
+            });
+
+            match &poll {
+                Poll::Ready(Some(Ok(batch)))
+                    if batch.num_rows() > 0 && self.skipped == self.offset =>
+                {
+                    break poll
+                }
+                Poll::Ready(Some(Err(_e))) => break poll,
+                Poll::Ready(None) => break poll,
+                Poll::Pending => break poll,
+                _ => {
+                    // continue to poll input stream
+                }
+            }
+        }
+    }
+}
+
+/// Remove a RecordBatch's first n rows
+pub fn cut_batch(batch: &RecordBatch, n: usize) -> RecordBatch {

Review Comment:
   I think you might be able to use `RecordBatch::slice()` here
   
   https://docs.rs/arrow/14.0.0/arrow/record_batch/struct.RecordBatch.html#method.slice
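
   The suggestion above can be sketched in plain Rust. A `&[i32]` stands in for the `RecordBatch` here; with arrow, the body would be a zero-copy `batch.slice(n, batch.num_rows() - n)` per the linked docs, instead of building a `UInt64Array` of indices and calling `take`:
   
   ```rust
   /// Sketch of slice-based row skipping: drop the first `n` rows.
   /// A plain slice stands in for the RecordBatch in this illustration.
   fn cut_batch(batch: &[i32], n: usize) -> &[i32] {
       // Clamp `n` so skipping more rows than exist yields an empty
       // result instead of panicking (RecordBatch::slice would panic
       // on an out-of-bounds offset).
       &batch[n.min(batch.len())..]
   }
   
   fn main() {
       let batch = [10, 20, 30, 40];
       assert_eq!(cut_batch(&batch, 1), &[20, 30, 40]);
       assert_eq!(cut_batch(&batch, 9), &[] as &[i32]);
   }
   ```
   
   Because `slice` only adjusts offsets into the underlying buffers, it avoids the per-row copy that `take` performs.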






[GitHub] [arrow-datafusion] ming535 commented on a diff in pull request #2629: Physical Plan for OFFSET

Posted by GitBox <gi...@apache.org>.
ming535 commented on code in PR #2629:
URL: https://github.com/apache/arrow-datafusion/pull/2629#discussion_r883208936


##########
datafusion/core/src/optimizer/limit_push_down.rs:
##########
@@ -386,6 +386,19 @@ mod test {
         Ok(())
     }
 
+    #[test]
+    fn limit_pushdown_should_not_pushdown_limit_with_offset_only() -> Result<()> {

Review Comment:
   This test case is not working right now due to `limit_push_down`


