Posted to github@arrow.apache.org by "suxiaogang223 (via GitHub)" <gi...@apache.org> on 2023/04/30 15:33:17 UTC

[GitHub] [arrow-rs] suxiaogang223 opened a new pull request, #4160: feat: Support compressed

suxiaogang223 opened a new pull request, #4160:
URL: https://github.com/apache/arrow-rs/pull/4160

   # Which issue does this PR close?
   
   Closes #3721 
   
   # Rationale for this change
    
   Allows `parquet-fromcsv` to read compressed CSV input directly, instead of requiring callers to decompress the file first.
   
   # What changes are included in this PR?
   
   Adds a `-C/--csv-compression` argument and wraps the input file in the matching streaming decoder (snappy, gzip, brotli, lz4, or zstd) before it is handed to the CSV reader.
   
   # Are there any user-facing changes?
   Yes: a new `-C/--csv-compression` argument for `parquet-fromcsv` (see the sketch below).
   
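   For context, a minimal sketch of how such a flag might be declared with clap's derive API; the enum, field, and default shown here are illustrative stand-ins, not the PR's exact code:
   
   ```rust
   use clap::{Parser, ValueEnum};
   
   // Illustrative stand-in for the codec choices; the real tool maps
   // its flag onto parquet's `Compression` type.
   #[derive(Clone, Copy, Debug, ValueEnum)]
   enum CsvCompression {
       Uncompressed,
       Snappy,
       Gzip,
       Brotli,
       Lz4,
       Zstd,
   }
   
   #[derive(Parser)]
   struct Args {
       /// Compression applied to the input CSV file
       #[arg(short = 'C', long, value_enum, default_value_t = CsvCompression::Uncompressed)]
       csv_compression: CsvCompression,
   }
   
   fn main() {
       let args = Args::parse();
       println!("decoding input as {:?}", args.csv_compression);
   }
   ```
   
   With something along these lines, `parquet-fromcsv -C gzip ...` would route the input through a gzip decoder.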
   




[GitHub] [arrow-rs] tustvold merged pull request #4160: Support Compression in parquet-fromcsv

Posted by "tustvold (via GitHub)" <gi...@apache.org>.
tustvold merged PR #4160:
URL: https://github.com/apache/arrow-rs/pull/4160




[GitHub] [arrow-rs] suxiaogang223 commented on pull request #4160: Support Compression in parquet-fromcsv

Posted by "suxiaogang223 (via GitHub)" <gi...@apache.org>.
suxiaogang223 commented on PR #4160:
URL: https://github.com/apache/arrow-rs/pull/4160#issuecomment-1531551032

   > Looking good thank you, only thing left I think is some feature flag shenanigans
   > 
   > I believe running with `cargo run --bin parquet-fromcsv --no-default-features --features arrow,cli` will result in compilation errors now
   
   Thanks, I'll fix this soon.




[GitHub] [arrow-rs] tustvold commented on a diff in pull request #4160: Support Compression in parquet-fromcsv

Posted by "tustvold (via GitHub)" <gi...@apache.org>.
tustvold commented on code in PR #4160:
URL: https://github.com/apache/arrow-rs/pull/4160#discussion_r1182476936


##########
parquet/src/bin/parquet-fromcsv.rs:
##########
@@ -626,14 +658,71 @@ mod tests {
         schema.as_file().write_all(schema_text.as_bytes()).unwrap();
 
         let mut input_file = NamedTempFile::new().unwrap();
-        {
-            let csv = input_file.as_file_mut();
+
+        fn wirte_tmp_file<T: Write>(w: &mut T) {

Review Comment:
   ```suggestion
           fn write_tmp_file<T: Write>(w: &mut T) {
   ```



##########
parquet/src/bin/parquet-fromcsv.rs:
##########
@@ -368,9 +376,28 @@ fn convert_csv_to_parquet(args: &Args) -> Result<(), ParquetFromCsvError> {
             &format!("Failed to open input file {:#?}", &args.input_file),
         )
     })?;
+
+    // open input file decoder
+    let input_file_decoder = match args.csv_compression {
+        Compression::UNCOMPRESSED => Box::new(input_file) as Box<dyn Read>,
+        Compression::SNAPPY => Box::new(FrameDecoder::new(input_file)) as Box<dyn Read>,
+        Compression::GZIP(_) => Box::new(GzDecoder::new(input_file)) as Box<dyn Read>,
+        Compression::BROTLI(_) => {
+            Box::new(Decompressor::new(input_file, 0)) as Box<dyn Read>
+        }
+        Compression::LZ4 => Box::new(lz4::Decoder::new(input_file).map_err(|e| {
+            ParquetFromCsvError::with_context(e, "Failed to create lz4::Decoder")
+        })?) as Box<dyn Read>,
+        Compression::ZSTD(_) => Box::new(zstd::Decoder::new(input_file).map_err(|e| {
+            ParquetFromCsvError::with_context(e, "Failed to create zstd::Decoder")
+        })?) as Box<dyn Read>,
+        // TODO: I wonder which crates should i use to decompress lzo and lz4_raw?
+        _ => panic!("compression type not support yet"),

Review Comment:
   ```suggestion
           d => unimplemented!("compression type {d}"),
   ```
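   
   As an aside for readers of this hunk: the surrounding code picks a streaming decoder at runtime and erases it behind `Box<dyn Read>`. A self-contained sketch of that pattern with a single codec (gzip via the `flate2` crate; the helper name is made up):
   
   ```rust
   use std::fs::File;
   use std::io::Read;
   
   use flate2::read::GzDecoder;
   
   // The two arms produce different concrete types, but both unify
   // behind the same `Box<dyn Read>` trait object.
   fn open_input(file: File, gzip: bool) -> Box<dyn Read> {
       if gzip {
           Box::new(GzDecoder::new(file))
       } else {
           Box::new(file)
       }
   }
   
   fn main() -> std::io::Result<()> {
       let mut reader = open_input(File::open("input.csv.gz")?, true);
       let mut contents = String::new();
       reader.read_to_string(&mut contents)?;
       println!("read {} bytes of CSV", contents.len());
       Ok(())
   }
   ```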



##########
parquet/src/bin/parquet-fromcsv.rs:
##########
@@ -368,9 +376,28 @@ fn convert_csv_to_parquet(args: &Args) -> Result<(), ParquetFromCsvError> {
             &format!("Failed to open input file {:#?}", &args.input_file),
         )
     })?;
+
+    // open input file decoder
+    let input_file_decoder = match args.csv_compression {
+        Compression::UNCOMPRESSED => Box::new(input_file) as Box<dyn Read>,
+        Compression::SNAPPY => Box::new(FrameDecoder::new(input_file)) as Box<dyn Read>,
+        Compression::GZIP(_) => Box::new(GzDecoder::new(input_file)) as Box<dyn Read>,
+        Compression::BROTLI(_) => {
+            Box::new(Decompressor::new(input_file, 0)) as Box<dyn Read>
+        }
+        Compression::LZ4 => Box::new(lz4::Decoder::new(input_file).map_err(|e| {
+            ParquetFromCsvError::with_context(e, "Failed to create lz4::Decoder")
+        })?) as Box<dyn Read>,
+        Compression::ZSTD(_) => Box::new(zstd::Decoder::new(input_file).map_err(|e| {
+            ParquetFromCsvError::with_context(e, "Failed to create zstd::Decoder")
+        })?) as Box<dyn Read>,
+        // TODO: I wonder which crates should i use to decompress lzo and lz4_raw?

Review Comment:
   They are codecs that only make sense in the context of parquet's block compression



##########
parquet/src/bin/parquet-fromcsv.rs:
##########
@@ -72,20 +73,24 @@
 use std::{
     fmt::Display,
     fs::{read_to_string, File},
+    io::Read,
     path::{Path, PathBuf},
     sync::Arc,
 };
 
 use arrow_csv::ReaderBuilder;
 use arrow_schema::{ArrowError, Schema};
+use brotli::Decompressor;
 use clap::{Parser, ValueEnum};
+use flate2::read::GzDecoder;

Review Comment:
   I believe the required-features for `parquet-fromcsv` need to be updated to include the various compression codecs, or we need to make these imports gated on these features being enabled
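   
   To illustrate the second option, a hedged sketch of gating both the import and the corresponding match arm on a cargo feature (the `flate2` feature name mirrors the optional codec crate and is an assumption, not checked against parquet's Cargo.toml); the first option would instead list the codec features under the binary's `required-features` section:
   
   ```rust
   use std::fs::File;
   use std::io::Read;
   
   // The import only compiles when the codec feature is enabled.
   #[cfg(feature = "flate2")]
   use flate2::read::GzDecoder;
   
   fn open_input(file: File, gzip: bool) -> Box<dyn Read> {
       match gzip {
           // This arm vanishes when `flate2` is off, so the binary
           // still builds with --no-default-features.
           #[cfg(feature = "flate2")]
           true => Box::new(GzDecoder::new(file)),
           // Sketch only: a real tool should report an error here
           // rather than silently read the bytes uncompressed.
           _ => Box::new(file),
       }
   }
   
   fn main() -> std::io::Result<()> {
       let _reader = open_input(File::open("input.csv")?, false);
       Ok(())
   }
   ```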



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: github-unsubscribe@arrow.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org