Posted to github@arrow.apache.org by GitBox <gi...@apache.org> on 2021/07/27 23:57:38 UTC

[GitHub] [arrow-datafusion] andygrove opened a new pull request #789: WIP: Implement streaming versions of Dataframe.collect methods

andygrove opened a new pull request #789:
URL: https://github.com/apache/arrow-datafusion/pull/789


   # Which issue does this PR close?
   
   
   Closes #47.
   
    # Rationale for this change
   
    In addition to the existing `collect*` methods, which load all results into memory in a `Vec<RecordBatch>`, this PR adds alternate `collect_stream*` methods that return streams instead, so results do not have to be fully loaded into memory before being processed.
   
   # What changes are included in this PR?
   
   New `collect_stream` and `collect_stream_partitioned` methods on `DataFrame`.
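    
    Not part of the PR description: a minimal sketch of how a caller might consume the new streaming API, assuming the returned `SendableRecordBatchStream` yields `ArrowResult<RecordBatch>` items as elsewhere in DataFusion (the methods were later renamed to `execute_stream*` during review):
    
    ```rust
    use datafusion::error::Result;
    use datafusion::prelude::*;
    use futures::StreamExt;
    
    #[tokio::main]
    async fn main() -> Result<()> {
        let mut ctx = ExecutionContext::new();
        // Same CSV path as the doc examples quoted later in this thread.
        let df = ctx.read_csv("tests/example.csv", CsvReadOptions::new())?;
        // collect_stream (later renamed execute_stream) yields record batches
        // one at a time instead of buffering them all in a Vec<RecordBatch>.
        let mut stream = df.collect_stream().await?;
        while let Some(batch) = stream.next().await {
            // Each item is a Result, so errors surface per batch.
            println!("batch with {} rows", batch?.num_rows());
        }
        Ok(())
    }
    ```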
   
   # Are there any user-facing changes?
   
   Yes, new DataFrame methods.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: github-unsubscribe@arrow.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [arrow-datafusion] andygrove merged pull request #789: Implement streaming versions of Dataframe.collect methods

Posted by GitBox <gi...@apache.org>.
andygrove merged pull request #789:
URL: https://github.com/apache/arrow-datafusion/pull/789


   





[GitHub] [arrow-datafusion] andygrove commented on a change in pull request #789: Implement streaming versions of Dataframe.collect methods

Posted by GitBox <gi...@apache.org>.
andygrove commented on a change in pull request #789:
URL: https://github.com/apache/arrow-datafusion/pull/789#discussion_r678397555



##########
File path: datafusion/src/execution/dataframe_impl.rs
##########
@@ -149,8 +152,19 @@ impl DataFrame for DataFrameImpl {
         Ok(collect(plan).await?)
     }
 
-    // Convert the logical plan represented by this DataFrame into a physical plan and
-    // execute it
+    /// Convert the logical plan represented by this DataFrame into a physical plan and
+    /// execute it, returning a stream over a single partition
+    async fn collect_stream(&self) -> Result<SendableRecordBatchStream> {
+        let state = self.ctx_state.lock().unwrap().clone();
+        let ctx = ExecutionContext::from(Arc::new(Mutex::new(state)));
+        let plan = ctx.optimize(&self.plan)?;
+        let plan = ctx.create_physical_plan(&plan)?;
+        collect_stream(plan).await
+    }
+
+    /// Convert the logical plan represented by this DataFrame into a physical plan and
+    /// execute it, collecting all resulting batches into memory while maintaining
+    /// partitioning
     async fn collect_partitioned(&self) -> Result<Vec<Vec<RecordBatch>>> {
         let state = self.ctx_state.lock().unwrap().clone();

Review comment:
       I've cleaned the code up and removed a fair bit of duplication now.







[GitHub] [arrow-datafusion] alamb commented on a change in pull request #789: WIP: Implement streaming versions of Dataframe.collect methods

Posted by GitBox <gi...@apache.org>.
alamb commented on a change in pull request #789:
URL: https://github.com/apache/arrow-datafusion/pull/789#discussion_r678247263



##########
File path: datafusion/src/dataframe.rs
##########
@@ -222,6 +223,21 @@ pub trait DataFrame: Send + Sync {
     /// ```
     async fn collect(&self) -> Result<Vec<RecordBatch>>;
 
+    /// Executes this DataFrame and returns a stream over a single partition
+    ///
+    /// ```
+    /// # use datafusion::prelude::*;
+    /// # use datafusion::error::Result;
+    /// # #[tokio::main]
+    /// # async fn main() -> Result<()> {
+    /// let mut ctx = ExecutionContext::new();
+    /// let df = ctx.read_csv("tests/example.csv", CsvReadOptions::new())?;
+    /// let stream = df.collect_stream().await?;
+    /// # Ok(())
+    /// # }
+    /// ```
+    async fn collect_stream(&self) -> Result<SendableRecordBatchStream>;

Review comment:
       What if we called this something like `execute` rather than `collect_stream`? 
   
   ```
       async fn execute_stream(&self) -> Result<SendableRecordBatchStream>;
   ```
   
    This would mirror the naming of `ExecutionPlan::execute` and might make it clearer that `collect` means collecting into a `Vec`, while `execute` means getting a stream.

##########
File path: datafusion/src/execution/dataframe_impl.rs
##########
@@ -149,8 +152,19 @@ impl DataFrame for DataFrameImpl {
         Ok(collect(plan).await?)
     }
 
-    // Convert the logical plan represented by this DataFrame into a physical plan and
-    // execute it
+    /// Convert the logical plan represented by this DataFrame into a physical plan and
+    /// execute it, returning a stream over a single partition
+    async fn collect_stream(&self) -> Result<SendableRecordBatchStream> {
+        let state = self.ctx_state.lock().unwrap().clone();
+        let ctx = ExecutionContext::from(Arc::new(Mutex::new(state)));
+        let plan = ctx.optimize(&self.plan)?;
+        let plan = ctx.create_physical_plan(&plan)?;
+        collect_stream(plan).await
+    }
+
+    /// Convert the logical plan represented by this DataFrame into a physical plan and
+    /// execute it, collecting all resulting batches into memory while maintaining
+    /// partitioning
     async fn collect_partitioned(&self) -> Result<Vec<Vec<RecordBatch>>> {
         let state = self.ctx_state.lock().unwrap().clone();

Review comment:
       You could probably rewrite `collect_partitioned` in terms of `collect_stream_partitioned`:
   
   ```rust
   collect(self.collect_stream_partitioned().await?)
   ``` 
   or something like that
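    
    A hedged sketch of that suggestion, written as a standalone helper rather than the actual trait method; the `collect_partitions` name is hypothetical, and it assumes the per-partition streams are `SendableRecordBatchStream`s of `ArrowResult<RecordBatch>` so `futures::TryStreamExt::try_collect` can drain them:
    
    ```rust
    use datafusion::arrow::record_batch::RecordBatch;
    use datafusion::error::Result;
    use datafusion::physical_plan::SendableRecordBatchStream;
    use futures::TryStreamExt;
    
    /// Drain one stream per partition into memory, preserving partition
    /// boundaries, so collect_partitioned could delegate to the streaming API.
    async fn collect_partitions(
        streams: Vec<SendableRecordBatchStream>,
    ) -> Result<Vec<Vec<RecordBatch>>> {
        let mut partitions = Vec::with_capacity(streams.len());
        for stream in streams {
            // try_collect gathers every batch and stops at the first error.
            partitions.push(stream.try_collect::<Vec<_>>().await?);
        }
        Ok(partitions)
    }
    ```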







[GitHub] [arrow-datafusion] andygrove commented on a change in pull request #789: Implement streaming versions of Dataframe.collect methods

Posted by GitBox <gi...@apache.org>.
andygrove commented on a change in pull request #789:
URL: https://github.com/apache/arrow-datafusion/pull/789#discussion_r678396699



##########
File path: datafusion/src/dataframe.rs
##########
@@ -222,6 +223,21 @@ pub trait DataFrame: Send + Sync {
     /// ```
     async fn collect(&self) -> Result<Vec<RecordBatch>>;
 
+    /// Executes this DataFrame and returns a stream over a single partition
+    ///
+    /// ```
+    /// # use datafusion::prelude::*;
+    /// # use datafusion::error::Result;
+    /// # #[tokio::main]
+    /// # async fn main() -> Result<()> {
+    /// let mut ctx = ExecutionContext::new();
+    /// let df = ctx.read_csv("tests/example.csv", CsvReadOptions::new())?;
+    /// let stream = df.collect_stream().await?;
+    /// # Ok(())
+    /// # }
+    /// ```
+    async fn collect_stream(&self) -> Result<SendableRecordBatchStream>;

Review comment:
       Good idea. I renamed these to `execute_stream` and `execute_stream_partitioned`.



