Posted to issues@spark.apache.org by "icyjhl (Jira)" <ji...@apache.org> on 2022/10/10 12:39:00 UTC

[jira] [Commented] (SPARK-36681) Fail to load Snappy codec

    [ https://issues.apache.org/jira/browse/SPARK-36681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17615087#comment-17615087 ] 

icyjhl commented on SPARK-36681:
--------------------------------

Hi [~viirya], so is this only fixed in 3.3.0 and later?
Is there any workaround for 3.2?

Many Thanks!


> Fail to load Snappy codec
> -------------------------
>
>                 Key: SPARK-36681
>                 URL: https://issues.apache.org/jira/browse/SPARK-36681
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 3.2.0
>            Reporter: L. C. Hsieh
>            Assignee: L. C. Hsieh
>            Priority: Major
>             Fix For: 3.3.0
>
>
> snappy-java, being a native library, should not be relocated in the Hadoop shaded client libraries, which Spark currently uses. If we try to use SnappyCodec to write a SequenceFile, we encounter the following error:
> {code}
> [info]   Cause: java.lang.UnsatisfiedLinkError: org.apache.hadoop.shaded.org.xerial.snappy.SnappyNative.rawCompress(Ljava/nio/ByteBuffer;IILjava/nio/ByteBuffer;I)I
> [info]   at org.apache.hadoop.shaded.org.xerial.snappy.SnappyNative.rawCompress(Native Method)                                                                                                 
> [info]   at org.apache.hadoop.shaded.org.xerial.snappy.Snappy.compress(Snappy.java:151)                                                                                                        
> [info]   at org.apache.hadoop.io.compress.snappy.SnappyCompressor.compressDirectBuf(SnappyCompressor.java:282)
> [info]   at org.apache.hadoop.io.compress.snappy.SnappyCompressor.compress(SnappyCompressor.java:210)
> [info]   at org.apache.hadoop.io.compress.BlockCompressorStream.compress(BlockCompressorStream.java:149)
> [info]   at org.apache.hadoop.io.compress.BlockCompressorStream.finish(BlockCompressorStream.java:142)
> [info]   at org.apache.hadoop.io.SequenceFile$BlockCompressWriter.writeBuffer(SequenceFile.java:1589) 
> [info]   at org.apache.hadoop.io.SequenceFile$BlockCompressWriter.sync(SequenceFile.java:1605)
> [info]   at org.apache.hadoop.io.SequenceFile$BlockCompressWriter.close(SequenceFile.java:1629) 
> {code}
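
For context, the failure quoted above can be reproduced with a short Spark job that writes a SequenceFile through SnappyCodec. This is a sketch; the local-mode setup, app name, and output path are illustrative, not from the original report:

```scala
import org.apache.hadoop.io.compress.SnappyCodec
import org.apache.spark.sql.SparkSession

object SnappySequenceFileRepro {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("snappy-sequencefile-repro") // illustrative name
      .master("local[*]")
      .getOrCreate()
    val sc = spark.sparkContext

    // Writing a SequenceFile with SnappyCodec goes through Hadoop's
    // SnappyCompressor. On affected Spark 3.2.x builds, Hadoop's shaded
    // client relocates snappy-java to
    // org.apache.hadoop.shaded.org.xerial.snappy, so the JNI lookup for
    // SnappyNative.rawCompress fails with the UnsatisfiedLinkError shown
    // in the quoted stack trace.
    val data = sc.parallelize(Seq((1, "a"), (2, "b")))
    data.saveAsSequenceFile("/tmp/snappy-repro", Some(classOf[SnappyCodec]))

    spark.stop()
  }
}
```

On a Spark build with the SPARK-36681 fix (3.3.0 per the Fix Version above), the same job writes the Snappy-compressed SequenceFile without error.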



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org