Posted to issues@spark.apache.org by "Stanislav Savulchik (Jira)" <ji...@apache.org> on 2021/03/19 07:16:00 UTC
[jira] [Commented] (SPARK-34651) Improve ZSTD support
[ https://issues.apache.org/jira/browse/SPARK-34651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17304676#comment-17304676 ]
Stanislav Savulchik commented on SPARK-34651:
---------------------------------------------
[~dongjoon] I noticed that "zstd" is not supported as a short compression codec name for text files in Spark 3.1.1, though I can use the codec via its full class name {{org.apache.hadoop.io.compress.ZStandardCodec}}:
{code:java}
scala> spark.read.textFile("hdfs://path/to/file.txt").write.option("compression", "zstd").text("hdfs://path/to/file.txt.zst")
java.lang.IllegalArgumentException: Codec [zstd] is not available. Known codecs are bzip2, deflate, uncompressed, lz4, gzip, snappy, none.
at org.apache.spark.sql.catalyst.util.CompressionCodecs$.getCodecClassName(CompressionCodecs.scala:53)
...
scala> spark.read.textFile("hdfs://path/to/file.txt").write.option("compression", "org.apache.hadoop.io.compress.ZStandardCodec").text("hdfs://path/to/file.txt.zst")
// no exceptions{code}
Source code for the stack frame above:
[https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/util/CompressionCodecs.scala#L29]
Should I create a Jira issue to add a short codec name for zstd to the list?
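For illustration, here is a minimal, self-contained sketch of how the short-name lookup could be extended with a "zstd" entry. The object name {{CompressionCodecsSketch}} and the inlined class-name strings are my own assumptions for a standalone example; the real Spark code lives in {{CompressionCodecs.scala}} and builds the map from {{classOf[...]}} references, so this is not the actual patch.
{code:scala}
import java.util.Locale

// Hypothetical sketch (assumption, not the actual Spark source): a
// short-name -> codec-class-name map that additionally recognizes "zstd".
object CompressionCodecsSketch {
  private val shortCompressionCodecNames: Map[String, String] = Map(
    "none" -> null,
    "uncompressed" -> null,
    "bzip2" -> "org.apache.hadoop.io.compress.BZip2Codec",
    "deflate" -> "org.apache.hadoop.io.compress.DeflateCodec",
    "gzip" -> "org.apache.hadoop.io.compress.GzipCodec",
    "lz4" -> "org.apache.hadoop.io.compress.Lz4Codec",
    "snappy" -> "org.apache.hadoop.io.compress.SnappyCodec",
    // Proposed addition: map the short name to Hadoop's zstd codec.
    "zstd" -> "org.apache.hadoop.io.compress.ZStandardCodec")

  // Resolve a short name (case-insensitive) to a codec class name,
  // mirroring the IllegalArgumentException seen in the stack trace above.
  def getCodecClassName(name: String): String = {
    val lower = name.toLowerCase(Locale.ROOT)
    shortCompressionCodecNames.getOrElse(lower,
      throw new IllegalArgumentException(
        s"Codec [$lower] is not available. " +
          s"Known codecs are ${shortCompressionCodecNames.keys.mkString(", ")}."))
  }
}
{code}
With such an entry in place, {{.option("compression", "zstd")}} would resolve to the same class as the full-name workaround shown above.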
> Improve ZSTD support
> --------------------
>
> Key: SPARK-34651
> URL: https://issues.apache.org/jira/browse/SPARK-34651
> Project: Spark
> Issue Type: Umbrella
> Components: Spark Core, SQL
> Affects Versions: 3.2.0
> Reporter: Dongjoon Hyun
> Priority: Major
>
--
This message was sent by Atlassian Jira
(v8.3.4#803005)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org