Posted to github@arrow.apache.org by GitBox <gi...@apache.org> on 2022/11/27 12:22:55 UTC

[GitHub] [arrow-datafusion] HaoYang670 opened a new pull request, #4391: Clean the code in `limit.rs`.

HaoYang670 opened a new pull request, #4391:
URL: https://github.com/apache/arrow-datafusion/pull/4391

   Signed-off-by: remzi <13...@gmail.com>
   
   # Which issue does this PR close?
   
   <!--
   We generally require a GitHub issue to be filed for all bug fixes and enhancements and this helps us generate change logs for our releases. You can link an issue to this PR using the GitHub syntax. For example `Closes #123` indicates that this PR will close issue #123.
   -->
   None.
   
   # Rationale for this change
   
   <!--
    Why are you proposing this change? If this is already explained clearly in the issue then this section is not needed.
    Explaining clearly why changes are proposed helps reviewers understand your changes and offer better suggestions for fixes.  
   -->
   
   # What changes are included in this PR?
   
   <!--
   There is no need to duplicate the description in the issue here but it is sometimes worth providing a summary of the individual changes in this PR.
   -->
   1. Remove `current_skip` and `current_fetch` from `LimitStream`, because they are redundant: `skip` and `fetch` now track the remaining counts directly (see the sketch below).
   2. Remove the public function `truncate_batch` and merge its functionality into `stream_limit`, because `truncate_batch` is not performant when `row_num == 0` and it is only used in `stream_limit`.
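
   For context, the idea behind the first change can be shown with a minimal, self-contained sketch. This is illustrative only, not the DataFusion code: it uses plain `Vec<i32>` chunks in place of Arrow `RecordBatch`es, and the `SimpleLimit` type and its `process` method are hypothetical names.

   ```rust
   /// Simplified stand-in for the limit logic: `skip` and `fetch` always hold
   /// the *remaining* number of rows to skip / fetch, so no separate
   /// `current_skip` / `current_fetch` counters are needed.
   struct SimpleLimit {
       skip: usize,
       fetch: usize,
   }

   impl SimpleLimit {
       /// Apply skip/fetch to one incoming "batch" (a plain Vec here).
       /// Returns `None` when the whole batch is skipped.
       fn process(&mut self, batch: Vec<i32>) -> Option<Vec<i32>> {
           if batch.len() <= self.skip {
               // The entire batch is skipped; remember how much skipping is left.
               self.skip -= batch.len();
               return None;
           }
           // Drop the remaining rows to skip from the front of this batch.
           let rest: Vec<i32> = batch.into_iter().skip(self.skip).collect();
           self.skip = 0;

           // Keep at most `fetch` rows and decrement the remaining fetch count.
           let take = rest.len().min(self.fetch);
           self.fetch -= take;
           Some(rest.into_iter().take(take).collect())
       }
   }

   fn main() {
       // Equivalent of OFFSET 3 LIMIT 4, applied across three incoming batches.
       let mut limit = SimpleLimit { skip: 3, fetch: 4 };
       for batch in [vec![1, 2], vec![3, 4, 5], vec![6, 7, 8, 9]] {
           println!("{:?}", limit.process(batch));
       }
       // Prints: None, Some([4, 5]), Some([6, 7])
   }
   ```

   Because `skip` is decremented toward zero, the stream can tell whether skipping is finished by simply checking `skip == 0`, which is the check the rewritten poll logic in this PR uses.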
   
   # Are these changes tested?
   Existing tests cover these changes.
   <!--
   We typically require tests for all PRs in order to:
   1. Prevent the code from being accidentally broken by subsequent changes
   2. Serve as another way to document the expected behavior of the code
   
   If tests are not included in your PR, please explain why (for example, are they covered by existing tests)?
   -->
   
   # Are there any user-facing changes?
   No.
   <!--
   If there are user-facing changes then we may require documentation to be updated before approving the PR.
   -->
   
   <!--
   If there are any breaking changes to public APIs, please add the `api change` label.
   -->


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: github-unsubscribe@arrow.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


[GitHub] [arrow-datafusion] liukun4515 commented on pull request #4391: Clean the code in `limit.rs`.

Posted by GitBox <gi...@apache.org>.
liukun4515 commented on PR #4391:
URL: https://github.com/apache/arrow-datafusion/pull/4391#issuecomment-1328446292

   cc @jackwener 




[GitHub] [arrow-datafusion] alamb commented on a diff in pull request #4391: Clean the code in `limit.rs`.

Posted by GitBox <gi...@apache.org>.
alamb commented on code in PR #4391:
URL: https://github.com/apache/arrow-datafusion/pull/4391#discussion_r1033988402


##########
datafusion/core/src/physical_plan/limit.rs:
##########
@@ -365,30 +365,17 @@ impl ExecutionPlan for LocalLimitExec {
     }
 }
 
-/// Truncate a RecordBatch to maximum of n rows
-pub fn truncate_batch(batch: &RecordBatch, n: usize) -> RecordBatch {
-    let limited_columns: Vec<ArrayRef> = (0..batch.num_columns())
-        .map(|i| limit(batch.column(i), n))
-        .collect();
-
-    RecordBatch::try_new(batch.schema(), limited_columns).unwrap()
-}
-
 /// A Limit stream skips `skip` rows, and then fetch up to `fetch` rows.
 struct LimitStream {
-    /// The number of rows to skip
+    /// The remaining number of rows to skip
     skip: usize,

Review Comment:
   Oh, I see this was not yet done. @HaoYang670, can you please do this as a follow-on PR? If you don't have time, I will do so myself.





[GitHub] [arrow-datafusion] alamb merged pull request #4391: Clean the code in `limit.rs`.

Posted by GitBox <gi...@apache.org>.
alamb merged PR #4391:
URL: https://github.com/apache/arrow-datafusion/pull/4391




[GitHub] [arrow-datafusion] jackwener commented on a diff in pull request #4391: Clean the code in `limit.rs`.

Posted by GitBox <gi...@apache.org>.
jackwener commented on code in PR #4391:
URL: https://github.com/apache/arrow-datafusion/pull/4391#discussion_r1033087940


##########
datafusion/core/src/physical_plan/limit.rs:
##########
@@ -420,47 +405,52 @@ impl LimitStream {
         loop {
             let poll = input.poll_next_unpin(cx);
             let poll = poll.map_ok(|batch| {
-                if batch.num_rows() + self.current_skipped <= self.skip {
-                    self.current_skipped += batch.num_rows();
+                if batch.num_rows() <= self.skip {
+                    self.skip -= batch.num_rows();
                     RecordBatch::new_empty(input.schema())
                 } else {
-                    let offset = self.skip - self.current_skipped;
-                    let new_batch = batch.slice(offset, batch.num_rows() - offset);
-                    self.current_skipped = self.skip;
+                    let new_batch = batch.slice(self.skip, batch.num_rows() - self.skip);
+                    self.skip = 0;
                     new_batch
                 }
             });
 
             match &poll {
-                Poll::Ready(Some(Ok(batch)))
-                    if batch.num_rows() > 0 && self.current_skipped == self.skip =>
-                {
-                    break poll
+                Poll::Ready(Some(Ok(batch))) => {
+                    if batch.num_rows() > 0 && self.skip == 0 {
+                        break poll;
+                    } else {
+                        // continue to poll input stream
+                    }

Review Comment:
   more clear 👍



##########
datafusion/core/src/physical_plan/limit.rs:
##########
@@ -365,30 +365,17 @@ impl ExecutionPlan for LocalLimitExec {
     }
 }
 
-/// Truncate a RecordBatch to maximum of n rows
-pub fn truncate_batch(batch: &RecordBatch, n: usize) -> RecordBatch {
-    let limited_columns: Vec<ArrayRef> = (0..batch.num_columns())
-        .map(|i| limit(batch.column(i), n))
-        .collect();
-
-    RecordBatch::try_new(batch.schema(), limited_columns).unwrap()
-}
-
 /// A Limit stream skips `skip` rows, and then fetch up to `fetch` rows.
 struct LimitStream {
-    /// The number of rows to skip
+    /// The remaining number of rows to skip
     skip: usize,

Review Comment:
   The meaning of `skip` and `fetch` has changed; we should rename them.
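
   For illustration only, the rename could look something like the sketch below. It shows just the two counter fields, and the names `remaining_skip` / `remaining_fetch` are hypothetical suggestions, not necessarily what the follow-up PR will use.

   ```rust
   /// Skips a number of rows, then fetches up to a maximum number of rows.
   /// Both counters are decremented as the input stream is consumed.
   struct LimitStream {
       /// Remaining number of rows to skip
       remaining_skip: usize,
       /// Remaining number of rows to fetch
       remaining_fetch: usize,
   }

   fn main() {
       // e.g. OFFSET 10 LIMIT 5
       let limit = LimitStream { remaining_skip: 10, remaining_fetch: 5 };
       println!("skip {}, then fetch up to {}", limit.remaining_skip, limit.remaining_fetch);
   }
   ```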





[GitHub] [arrow-datafusion] HaoYang670 commented on a diff in pull request #4391: Clean the code in `limit.rs`.

Posted by GitBox <gi...@apache.org>.
HaoYang670 commented on code in PR #4391:
URL: https://github.com/apache/arrow-datafusion/pull/4391#discussion_r1034222322


##########
datafusion/core/src/physical_plan/limit.rs:
##########
@@ -365,30 +365,17 @@ impl ExecutionPlan for LocalLimitExec {
     }
 }
 
-/// Truncate a RecordBatch to maximum of n rows
-pub fn truncate_batch(batch: &RecordBatch, n: usize) -> RecordBatch {
-    let limited_columns: Vec<ArrayRef> = (0..batch.num_columns())
-        .map(|i| limit(batch.column(i), n))
-        .collect();
-
-    RecordBatch::try_new(batch.schema(), limited_columns).unwrap()
-}
-
 /// A Limit stream skips `skip` rows, and then fetch up to `fetch` rows.
 struct LimitStream {
-    /// The number of rows to skip
+    /// The remaining number of rows to skip
     skip: usize,

Review Comment:
   Sure, I'll file a follow-up.





[GitHub] [arrow-datafusion] ursabot commented on pull request #4391: Clean the code in `limit.rs`.

Posted by GitBox <gi...@apache.org>.
ursabot commented on PR #4391:
URL: https://github.com/apache/arrow-datafusion/pull/4391#issuecomment-1329686648

   Benchmark runs are scheduled for baseline = a31b44eae236920d43ddd3ef7fadf8f9abf18976 and contender = 52e198ea456c588cd57d44e9d98b082d0d2bd163. 52e198ea456c588cd57d44e9d98b082d0d2bd163 is a master commit associated with this PR. Results will be available as each benchmark for each run completes.
   Conbench compare runs links:
   [Skipped :warning: Benchmarking of arrow-datafusion-commits is not supported on ec2-t3-xlarge-us-east-2] [ec2-t3-xlarge-us-east-2](https://conbench.ursa.dev/compare/runs/9d1f3af96d724a329fe5ba36dd3fbd28...8a1d6a9e8144471ea160fbf595429b98/)
   [Skipped :warning: Benchmarking of arrow-datafusion-commits is not supported on test-mac-arm] [test-mac-arm](https://conbench.ursa.dev/compare/runs/3c10de84e1f547399dd9edb906337967...1f452b0b9e094beeaf7cb42adf4764b1/)
   [Skipped :warning: Benchmarking of arrow-datafusion-commits is not supported on ursa-i9-9960x] [ursa-i9-9960x](https://conbench.ursa.dev/compare/runs/b5b72ab9db7a470a9958caf1ced535e5...e1dcc9c6029849b0804021114e9c24e3/)
   [Skipped :warning: Benchmarking of arrow-datafusion-commits is not supported on ursa-thinkcentre-m75q] [ursa-thinkcentre-m75q](https://conbench.ursa.dev/compare/runs/1bdd07ba7ade4d328f3d9690df3fe23e...694098fac7f7436ebe9f500c72b861aa/)
   Buildkite builds:
   Supported benchmarks:
   ec2-t3-xlarge-us-east-2: Supported benchmark langs: Python, R. Runs only benchmarks with cloud = True
   test-mac-arm: Supported benchmark langs: C++, Python, R
   ursa-i9-9960x: Supported benchmark langs: Python, R, JavaScript
   ursa-thinkcentre-m75q: Supported benchmark langs: C++, Java
   

