Posted to github@arrow.apache.org by "haohuaijin (via GitHub)" <gi...@apache.org> on 2023/09/19 05:31:18 UTC

[GitHub] [arrow-datafusion] haohuaijin opened a new pull request, #7596: Move `FileCompressionType` out of `common` and into `core`

haohuaijin opened a new pull request, #7596:
URL: https://github.com/apache/arrow-datafusion/pull/7596

   ## Which issue does this PR close?
   
   <!--
   We generally require a GitHub issue to be filed for all bug fixes and enhancements and this helps us generate change logs for our releases. You can link an issue to this PR using the GitHub syntax. For example `Closes #123` indicates that this PR will close issue #123.
   -->
   
   Closes #7516
   
   ## Rationale for this change
   
   <!--
    Why are you proposing this change? If this is already explained clearly in the issue then this section is not needed.
    Explaining clearly why changes are proposed helps reviewers understand your changes and offer better suggestions for fixes.  
   -->
   
   ## What changes are included in this PR?
   
   <!--
   There is no need to duplicate the description in the issue here but it is sometimes worth providing a summary of the individual changes in this PR.
   -->
   - Move `FileCompressionType` out of `common` and into `core` (`datafusion/core/src/datasource/file_format/file_compression_type.rs`).
   - Because `FileType::get_ext_with_compression` depends on `FileCompressionType` and is only used in `core`, also move `get_ext_with_compression` into `core` by adding a `FileTypeExt` trait (a rough sketch follows below).
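
   For illustration only, here is a rough sketch of the shape `FileTypeExt` might take. The trait and method names follow the description above; the exact signature, imports, and match arms are assumptions, not the merged code.

   ```rust
   // Sketch only: assumed to live next to `FileCompressionType` in
   // datafusion/core/src/datasource/file_format/file_compression_type.rs,
   // so `FileCompressionType` is already in scope.
   use datafusion_common::{DataFusionError, FileType, GetExt, Result};

   /// Extension trait so `get_ext_with_compression` can stay in `core`,
   /// keeping `datafusion-common` free of compression dependencies.
   pub trait FileTypeExt {
       /// File extension including the compression suffix, e.g. ".csv.gz".
       fn get_ext_with_compression(&self, c: FileCompressionType) -> Result<String>;
   }

   impl FileTypeExt for FileType {
       fn get_ext_with_compression(&self, c: FileCompressionType) -> Result<String> {
           let ext = self.get_ext();
           match self {
               // Text formats can be wrapped in an outer compression codec.
               FileType::CSV | FileType::JSON => Ok(format!("{ext}{}", c.get_ext())),
               // Other formats are only supported without an outer codec.
               _ if c == FileCompressionType::UNCOMPRESSED => Ok(ext),
               _ => Err(DataFusionError::Internal(
                   "file type does not support an outer compression layer".to_owned(),
               )),
           }
       }
   }
   ```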
   
   ## Are these changes tested?
   
   <!--
   We typically require tests for all PRs in order to:
   1. Prevent the code from being accidentally broken by subsequent changes
   2. Serve as another way to document the expected behavior of the code
   
   If tests are not included in your PR, please explain why (for example, are they covered by existing tests)?
   -->
   
   ## Are there any user-facing changes?
   
   <!--
   If there are user-facing changes then we may require documentation to be updated before approving the PR.
   -->
   
   <!--
   If there are any breaking changes to public APIs, please add the `api change` label.
   -->




[GitHub] [arrow-datafusion] alamb commented on pull request #7596: Move `FileCompressionType` out of `common` and into `core`

Posted by "alamb (via GitHub)" <gi...@apache.org>.
alamb commented on PR #7596:
URL: https://github.com/apache/arrow-datafusion/pull/7596#issuecomment-1728398115

   Thanks again @haohuaijin 




[GitHub] [arrow-datafusion] alamb commented on a diff in pull request #7596: Move `FileCompressionType` out of `common` and into `core`

Posted by "alamb (via GitHub)" <gi...@apache.org>.
alamb commented on code in PR #7596:
URL: https://github.com/apache/arrow-datafusion/pull/7596#discussion_r1330850367


##########
datafusion-cli/Cargo.lock:
##########
@@ -1124,20 +1124,11 @@ version = "31.0.0"
 dependencies = [
  "arrow",
  "arrow-array",
- "async-compression",

Review Comment:
   💯 -- FYI @universalmind303 



##########
datafusion/core/src/datasource/file_format/file_compression_type.rs:
##########
@@ -0,0 +1,333 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+//! File Compression type abstraction
+
+use crate::error::{DataFusionError, Result};
+#[cfg(feature = "compression")]
+use async_compression::tokio::bufread::{
+    BzDecoder as AsyncBzDecoder, BzEncoder as AsyncBzEncoder,
+    GzipDecoder as AsyncGzDecoder, GzipEncoder as AsyncGzEncoder,
+    XzDecoder as AsyncXzDecoder, XzEncoder as AsyncXzEncoder,
+    ZstdDecoder as AsyncZstdDecoder, ZstdEncoder as AsyncZstdEncoder,
+};
+
+#[cfg(feature = "compression")]
+use async_compression::tokio::write::{BzEncoder, GzipEncoder, XzEncoder, ZstdEncoder};
+use bytes::Bytes;
+#[cfg(feature = "compression")]
+use bzip2::read::MultiBzDecoder;
+use datafusion_common::{parsers::CompressionTypeVariant, FileType, GetExt};
+#[cfg(feature = "compression")]
+use flate2::read::MultiGzDecoder;
+
+use futures::stream::BoxStream;
+use futures::StreamExt;
+#[cfg(feature = "compression")]
+use futures::TryStreamExt;
+use std::str::FromStr;
+use tokio::io::AsyncWrite;
+#[cfg(feature = "compression")]
+use tokio_util::io::{ReaderStream, StreamReader};
+#[cfg(feature = "compression")]
+use xz2::read::XzDecoder;
+#[cfg(feature = "compression")]
+use zstd::Decoder as ZstdDecoder;
+use CompressionTypeVariant::*;
+
+/// Readable file compression type
+#[derive(Debug, Clone, Copy, PartialEq, Eq)]
+pub struct FileCompressionType {
+    variant: CompressionTypeVariant,
+}
+
+impl GetExt for FileCompressionType {
+    fn get_ext(&self) -> String {
+        match self.variant {
+            GZIP => ".gz".to_owned(),
+            BZIP2 => ".bz2".to_owned(),
+            XZ => ".xz".to_owned(),
+            ZSTD => ".zst".to_owned(),
+            UNCOMPRESSED => "".to_owned(),
+        }
+    }
+}
+
+impl From<CompressionTypeVariant> for FileCompressionType {
+    fn from(t: CompressionTypeVariant) -> Self {
+        Self { variant: t }
+    }
+}
+
+impl FromStr for FileCompressionType {
+    type Err = DataFusionError;
+
+    fn from_str(s: &str) -> Result<Self> {
+        let variant = CompressionTypeVariant::from_str(s).map_err(|_| {
+            DataFusionError::NotImplemented(format!("Unknown FileCompressionType: {s}"))
+        })?;
+        Ok(Self { variant })
+    }
+}
+
+/// `FileCompressionType` implementation
+impl FileCompressionType {
+    /// Gzip-ed file
+    pub const GZIP: Self = Self { variant: GZIP };
+
+    /// Bzip2-ed file
+    pub const BZIP2: Self = Self { variant: BZIP2 };
+
+    /// Xz-ed file (liblzma)
+    pub const XZ: Self = Self { variant: XZ };
+
+    /// Zstd-ed file
+    pub const ZSTD: Self = Self { variant: ZSTD };
+
+    /// Uncompressed file
+    pub const UNCOMPRESSED: Self = Self {
+        variant: UNCOMPRESSED,
+    };
+
+    /// Whether the file is compressed
+    pub const fn is_compressed(&self) -> bool {
+        self.variant.is_compressed()
+    }
+
+    /// Given a `Stream`, create a `Stream` whose data is compressed with this `FileCompressionType`.
+    pub fn convert_to_compress_stream(
+        &self,
+        s: BoxStream<'static, Result<Bytes>>,
+    ) -> Result<BoxStream<'static, Result<Bytes>>> {
+        Ok(match self.variant {
+            #[cfg(feature = "compression")]
+            GZIP => ReaderStream::new(AsyncGzEncoder::new(StreamReader::new(s)))
+                .map_err(DataFusionError::from)
+                .boxed(),
+            #[cfg(feature = "compression")]
+            BZIP2 => ReaderStream::new(AsyncBzEncoder::new(StreamReader::new(s)))
+                .map_err(DataFusionError::from)
+                .boxed(),
+            #[cfg(feature = "compression")]
+            XZ => ReaderStream::new(AsyncXzEncoder::new(StreamReader::new(s)))
+                .map_err(DataFusionError::from)
+                .boxed(),
+            #[cfg(feature = "compression")]
+            ZSTD => ReaderStream::new(AsyncZstdEncoder::new(StreamReader::new(s)))
+                .map_err(DataFusionError::from)
+                .boxed(),
+            #[cfg(not(feature = "compression"))]
+            GZIP | BZIP2 | XZ | ZSTD => {
+                return Err(DataFusionError::NotImplemented(
+                    "Compression feature is not enabled".to_owned(),
+                ))
+            }
+            UNCOMPRESSED => s.boxed(),
+        })
+    }
+
+    /// Wrap the given `AsyncWrite` so that it performs compressed writes
+    /// according to this `FileCompressionType`.
+    pub fn convert_async_writer(
+        &self,
+        w: Box<dyn AsyncWrite + Send + Unpin>,
+    ) -> Result<Box<dyn AsyncWrite + Send + Unpin>> {
+        Ok(match self.variant {
+            #[cfg(feature = "compression")]
+            GZIP => Box::new(GzipEncoder::new(w)),
+            #[cfg(feature = "compression")]
+            BZIP2 => Box::new(BzEncoder::new(w)),
+            #[cfg(feature = "compression")]
+            XZ => Box::new(XzEncoder::new(w)),
+            #[cfg(feature = "compression")]
+            ZSTD => Box::new(ZstdEncoder::new(w)),
+            #[cfg(not(feature = "compression"))]
+            GZIP | BZIP2 | XZ | ZSTD => {
+                return Err(DataFusionError::NotImplemented(
+                    "Compression feature is not enabled".to_owned(),
+                ))
+            }
+            UNCOMPRESSED => w,
+        })
+    }
+
+    /// Given a `Stream`, create a `Stream` whose data is decompressed with this `FileCompressionType`.
+    pub fn convert_stream(
+        &self,
+        s: BoxStream<'static, Result<Bytes>>,
+    ) -> Result<BoxStream<'static, Result<Bytes>>> {
+        Ok(match self.variant {
+            #[cfg(feature = "compression")]
+            GZIP => ReaderStream::new(AsyncGzDecoder::new(StreamReader::new(s)))
+                .map_err(DataFusionError::from)
+                .boxed(),
+            #[cfg(feature = "compression")]
+            BZIP2 => ReaderStream::new(AsyncBzDecoder::new(StreamReader::new(s)))
+                .map_err(DataFusionError::from)
+                .boxed(),
+            #[cfg(feature = "compression")]
+            XZ => ReaderStream::new(AsyncXzDecoder::new(StreamReader::new(s)))
+                .map_err(DataFusionError::from)
+                .boxed(),
+            #[cfg(feature = "compression")]
+            ZSTD => ReaderStream::new(AsyncZstdDecoder::new(StreamReader::new(s)))
+                .map_err(DataFusionError::from)
+                .boxed(),
+            #[cfg(not(feature = "compression"))]
+            GZIP | BZIP2 | XZ | ZSTD => {
+                return Err(DataFusionError::NotImplemented(
+                    "Compression feature is not enabled".to_owned(),
+                ))
+            }
+            UNCOMPRESSED => s.boxed(),
+        })
+    }
+
+    /// Given a `Read`, create a `Read` whose data is decompressed with this `FileCompressionType`.
+    pub fn convert_read<T: std::io::Read + Send + 'static>(
+        &self,
+        r: T,
+    ) -> Result<Box<dyn std::io::Read + Send>> {
+        Ok(match self.variant {
+            #[cfg(feature = "compression")]
+            GZIP => Box::new(MultiGzDecoder::new(r)),
+            #[cfg(feature = "compression")]
+            BZIP2 => Box::new(MultiBzDecoder::new(r)),
+            #[cfg(feature = "compression")]
+            XZ => Box::new(XzDecoder::new_multi_decoder(r)),
+            #[cfg(feature = "compression")]
+            ZSTD => match ZstdDecoder::new(r) {
+                Ok(decoder) => Box::new(decoder),
+                Err(e) => return Err(DataFusionError::External(Box::new(e))),
+            },
+            #[cfg(not(feature = "compression"))]
+            GZIP | BZIP2 | XZ | ZSTD => {
+                return Err(DataFusionError::NotImplemented(
+                    "Compression feature is not enabled".to_owned(),
+                ))
+            }
+            UNCOMPRESSED => Box::new(r),
+        })
+    }
+}
+
+/// Trait for extending the functionality of the `FileType` enum.

Review Comment:
   this is perfect
   
   FYI @devinjdangelo I think you originally consolidated the FileType enum -- this now splits the dependency-heavy part out into `datasource`
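
   To illustrate the effect of the split, here is an assumed usage sketch (not code from this PR): callers in `core` pull the extension trait from the new `datasource` module, while `datafusion-common` itself no longer needs any compression crates. The module path and re-exports below are assumptions based on the new file location.

   ```rust
   // Assumed usage sketch: the module path mirrors
   // datafusion/core/src/datasource/file_format/file_compression_type.rs;
   // the merged code may re-export these items elsewhere.
   use datafusion::datasource::file_format::file_compression_type::{
       FileCompressionType, FileTypeExt,
   };
   use datafusion_common::{FileType, Result};

   fn gzip_csv_extension() -> Result<String> {
       // Returns something like ".csv.gz", depending on `FileType::get_ext`.
       FileType::CSV.get_ext_with_compression(FileCompressionType::GZIP)
   }
   ```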



##########
datafusion/common/Cargo.toml:
##########
@@ -35,34 +35,19 @@ path = "src/lib.rs"
 [features]
 avro = ["apache-avro"]
 backtrace = []
-compression = ["xz2", "bzip2", "flate2", "zstd", "async-compression"]
-default = ["compression", "parquet"]
+default = ["parquet"]
 pyarrow = ["pyo3", "arrow/pyarrow"]
 
 [dependencies]
 apache-avro = { version = "0.15", default-features = false, features = ["snappy"], optional = true }
 arrow = { workspace = true }
 arrow-array = { workspace = true }
-async-compression = { version = "0.4.0", features = ["bzip2", "gzip", "xz", "zstd", "futures-io", "tokio"], optional = true }

Review Comment:
   This is a very nice reduction in dependencies





[GitHub] [arrow-datafusion] jonmmease commented on pull request #7596: Move `FileCompressionType` out of `common` and into `core`

Posted by "jonmmease (via GitHub)" <gi...@apache.org>.
jonmmease commented on PR #7596:
URL: https://github.com/apache/arrow-datafusion/pull/7596#issuecomment-1731357682

   Thanks for the fix @haohuaijin! The compression dependencies had been preventing `datafusion-common` from compiling to WASM, which VegaFusion depends on.




[GitHub] [arrow-datafusion] alamb merged pull request #7596: Move `FileCompressionType` out of `common` and into `core`

Posted by "alamb (via GitHub)" <gi...@apache.org>.
alamb merged PR #7596:
URL: https://github.com/apache/arrow-datafusion/pull/7596


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: github-unsubscribe@arrow.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org