Posted to dev@kafka.apache.org by "Sasikumar Muthukrishnan Sampath (Jira)" <ji...@apache.org> on 2023/06/16 15:40:00 UTC

[jira] [Created] (KAFKA-15096) CVE-2023-34455 - Vulnerability identified with Apache Kafka

Sasikumar Muthukrishnan Sampath created KAFKA-15096:
-------------------------------------------------------

             Summary: CVE-2023-34455 - Vulnerability identified with Apache Kafka
                 Key: KAFKA-15096
                 URL: https://issues.apache.org/jira/browse/KAFKA-15096
             Project: Kafka
          Issue Type: Bug
            Reporter: Sasikumar Muthukrishnan Sampath


A new vulnerability, CVE-2023-34455, has been identified in the camel-kafka dependencies. The vulnerability comes from snappy-java:1.1.8.4.

Version 1.1.10.1 contains a patch for this issue. Please upgrade the snappy-java version to fix it.
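
If the vulnerable snappy-java 1.1.8.4 arrives transitively (for example via kafka-clients), one way to force the patched version is a dependency constraint. The following is a minimal sketch assuming a Gradle build with the Kotlin DSL; the kafka-clients coordinate and version shown are only illustrative, and Maven builds can achieve the same effect with dependencyManagement.

    // build.gradle.kts (sketch): pin snappy-java to the patched release even
    // when it is pulled in transitively (kafka-clients version is illustrative).
    dependencies {
        implementation("org.apache.kafka:kafka-clients:3.5.0")
        constraints {
            implementation("org.xerial.snappy:snappy-java:1.1.10.1") {
                because("CVE-2023-34455: unchecked chunk length in snappy-java before 1.1.10.1")
            }
        }
    }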

snappy-java is a fast compressor/decompressor for Java. Due to use of an unchecked chunk length, an unrecoverable fatal error can occur in versions prior to 1.1.10.1.
The code in the function hasNextChunk in the file SnappyInputStream.java checks whether a given stream has more chunks to read. It does so by attempting to read 4 bytes. If it was not possible to read the 4 bytes, the function returns false. Otherwise, if 4 bytes were available, the code treats them as the length of the next chunk.
In the case that the `compressed` variable is null, a byte array is allocated with the size given by the input data. Since the code does not validate the `chunkSize` variable, it is possible to pass a negative number (such as 0xFFFFFFFF, which is -1), which causes the code to raise a `java.lang.NegativeArraySizeException`. A worse case occurs when a huge positive value is passed (such as 0x7FFFFFFF), which raises the fatal `java.lang.OutOfMemoryError`.
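
For illustration only, the following is a minimal Kotlin sketch of the unchecked-chunk-length pattern described above. It is not the actual snappy-java source: readNextChunk is a hypothetical helper that only mirrors the idea of treating 4 raw bytes as a chunk length and allocating a buffer from it without validation.

    import java.io.ByteArrayInputStream
    import java.io.DataInputStream
    import java.io.EOFException

    // Hypothetical sketch of the vulnerable pattern (not snappy-java source code).
    fun readNextChunk(input: DataInputStream): ByteArray? {
        val chunkSize = try {
            input.readInt()      // the 4 bytes are treated as the next chunk's length
        } catch (e: EOFException) {
            return null          // fewer than 4 bytes left: no more chunks
        }
        // Vulnerable pattern: chunkSize is never validated.
        //   0xFFFFFFFF (-1) -> java.lang.NegativeArraySizeException on allocation
        //   0x7FFFFFFF      -> a ~2 GiB allocation attempt, which can end in
        //                      java.lang.OutOfMemoryError
        val compressed = ByteArray(chunkSize)
        input.readFully(compressed)
        return compressed
    }

    fun main() {
        // Craft a 4-byte "chunk length" of 0xFFFFFFFF, i.e. -1 as a signed int.
        val crafted = byteArrayOf(-1, -1, -1, -1)
        try {
            readNextChunk(DataInputStream(ByteArrayInputStream(crafted)))
        } catch (e: NegativeArraySizeException) {
            println("Unchecked chunk length triggered: $e")
        }
    }

A hardened reader would reject negative or implausibly large lengths before allocating, which is the kind of validation the 1.1.10.1 patch adds.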



--
This message was sent by Atlassian Jira
(v8.20.10#820010)