Posted to github@arrow.apache.org by "jiangzhx (via GitHub)" <gi...@apache.org> on 2023/04/07 04:15:50 UTC

[GitHub] [arrow-datafusion] jiangzhx commented on a diff in pull request #5860: when inferring the schema of compressed CSV, decompress before newline-delimited chunking

jiangzhx commented on code in PR #5860:
URL: https://github.com/apache/arrow-datafusion/pull/5860#discussion_r1160422312


##########
datafusion/core/src/datasource/file_format/file_type.rs:
##########
@@ -111,6 +115,58 @@ impl FileCompressionType {
         self.variant.is_compressed()
     }
 
+    /// Given a `Stream`, create a `Stream` whose data is decompressed according to this `FileCompressionType`.
+    pub fn convert_to_compress_stream<
+        T: Stream<Item = Result<Bytes>> + Unpin + Send + 'static,
+    >(
+        &self,
+        s: T,
+    ) -> Result<Box<dyn Stream<Item = Result<Bytes>> + Send + Unpin>> {
+        #[cfg(feature = "compression")]
+        let err_converter = |e: std::io::Error| match e
+            .get_ref()
+            .and_then(|e| e.downcast_ref::<DataFusionError>())
+        {
+            Some(_) => {
+                *(e.into_inner()
+                    .unwrap()
+                    .downcast::<DataFusionError>()
+                    .unwrap())

Review Comment:
   I think it's fine.
   err_converter only accepts std::io::Error as input, and DataFusionError already implements the corresponding conversion into std::io::Error (see the lines linked below), so downcasting the inner error back to a DataFusionError is sound.
   
   https://github.com/apache/arrow-datafusion/blob/fe46a1ed9833f2f9ea4c4ccd4d77718e5c371ab1/datafusion-common/src/error.rs#L78-L82
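   
   For illustration only (not part of this PR): a minimal sketch of the round trip that err_converter relies on, assuming datafusion_common as a dependency and that the linked lines provide the DataFusionError -> std::io::Error conversion. The variable names below are made up for the example.
   
       use datafusion_common::DataFusionError;
   
       fn main() {
           // Wrap a DataFusionError into a std::io::Error via the conversion
           // referenced above (assumed to live at the linked lines).
           let df_err = DataFusionError::Execution("decode failed".to_string());
           let io_err: std::io::Error = df_err.into();
   
           // Recover the original DataFusionError by downcasting the inner
           // error, mirroring the match in err_converter from the diff.
           let recovered: DataFusionError = match io_err
               .get_ref()
               .and_then(|e| e.downcast_ref::<DataFusionError>())
           {
               Some(_) => *(io_err
                   .into_inner()
                   .unwrap()
                   .downcast::<DataFusionError>()
                   .unwrap()),
               // Any other io::Error is wrapped into a DataFusionError instead.
               None => DataFusionError::from(io_err),
           };
   
           println!("recovered: {recovered}");
       }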


