Posted to dev@parquet.apache.org by "Fokko Driesprong (Jira)" <ji...@apache.org> on 2021/04/19 15:50:00 UTC
[jira] [Updated] (PARQUET-2025) Bump snappy to 1.1.8.3 to support Mac m1
[ https://issues.apache.org/jira/browse/PARQUET-2025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Fokko Driesprong updated PARQUET-2025:
--------------------------------------
Affects Version/s: 1.12.0
> Bump snappy to 1.1.8.3 to support Mac m1
> ----------------------------------------
>
> Key: PARQUET-2025
> URL: https://issues.apache.org/jira/browse/PARQUET-2025
> Project: Parquet
> Issue Type: Bug
> Affects Versions: 1.12.0
> Reporter: Junjie Chen
> Priority: Minor
> Fix For: 1.13.0
>
>
> When running the Iceberg unit tests on a Mac M1, they fail with:
>
> Caused by:
> java.lang.NoClassDefFoundError: Could not initialize class org.xerial.snappy.Snappy
> at org.apache.parquet.hadoop.codec.SnappyCompressor.compress(SnappyCompressor.java:67)
> at org.apache.hadoop.io.compress.CompressorStream.compress(CompressorStream.java:81)
> at org.apache.hadoop.io.compress.CompressorStream.finish(CompressorStream.java:92)
> at org.apache.parquet.hadoop.CodecFactory$HeapBytesCompressor.compress(CodecFactory.java:165)
> at org.apache.parquet.hadoop.ColumnChunkPageWriteStore$ColumnChunkPageWriter.writePage(ColumnChunkPageWriteStore.java:122)
> at org.apache.parquet.column.impl.ColumnWriterV1.writePage(ColumnWriterV1.java:53)
> at org.apache.parquet.column.impl.ColumnWriterBase.writePage(ColumnWriterBase.java:315)
> at org.apache.parquet.column.impl.ColumnWriteStoreBase.flush(ColumnWriteStoreBase.java:152)
> at org.apache.parquet.column.impl.ColumnWriteStoreV1.flush(ColumnWriteStoreV1.java:27)
> at org.apache.parquet.hadoop.InternalParquetRecordWriter.flushRowGroupToStore(InternalParquetRecordWriter.java:172)
> at org.apache.parquet.hadoop.InternalParquetRecordWriter.close(InternalParquetRecordWriter.java:114)
> at org.apache.parquet.hadoop.ParquetRecordWriter.close(ParquetRecordWriter.java:165)
> at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.close(ParquetOutputWriter.scala:42)
> at org.apache.spark.sql.execution.datasources.FileFormatDataWriter.releaseResources(FileFormatDataWriter.scala:57)
> at org.apache.spark.sql.execution.datasources.FileFormatDataWriter.commit(FileFormatDataWriter.scala:74)
> at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:247)
> at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:242)
> at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1394)
> at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:248)
> ... 10 more
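
Until a parquet-mr release ships the bump, a common workaround is to force the newer snappy-java in the consuming build. A minimal sketch for a Maven project (assuming the project manages dependencies via `dependencyManagement`; snappy-java 1.1.8.3 is the version named in this ticket, which includes native libraries for Apple Silicon):

```xml
<!-- Pin snappy-java to 1.1.8.3 so parquet-mr picks up the
     aarch64 macOS native library instead of the older transitive version. -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.xerial.snappy</groupId>
      <artifactId>snappy-java</artifactId>
      <version>1.1.8.3</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```

Verify the override with `mvn dependency:tree -Dincludes=org.xerial.snappy` before re-running the tests.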
--
This message was sent by Atlassian Jira
(v8.3.4#803005)