Posted to commits@commons.apache.org by bo...@apache.org on 2019/08/18 10:42:28 UTC
[commons-compress] branch master updated: COMPRESS-483 throw exception if stored DD doesn't match what has been read
This is an automated email from the ASF dual-hosted git repository.
bodewig pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/commons-compress.git
The following commit(s) were added to refs/heads/master by this push:
new f5d0bb1 COMPRESS-483 throw exception if stored DD doesn't match what has been read
f5d0bb1 is described below
commit f5d0bb1e287038de05415ca65145d82166a7bf0f
Author: Stefan Bodewig <bo...@apache.org>
AuthorDate: Sun Aug 18 12:41:50 2019 +0200
COMPRESS-483 throw exception if stored DD doesn't match what has been read
---
src/changes/changes.xml | 19 ++++++++++++-
.../archivers/zip/ZipArchiveInputStream.java | 15 ++++++++++-
src/site/xdoc/zip.xml | 8 ++++--
.../archivers/zip/ZipArchiveInputStreamTest.java | 30 +++++++++++++++++++++
.../bla-stored-dd-contradicts-actualsize.zip | Bin 0 -> 1023 bytes
src/test/resources/bla-stored-dd-sizes-differ.zip | Bin 0 -> 1022 bytes
6 files changed, 68 insertions(+), 4 deletions(-)
diff --git a/src/changes/changes.xml b/src/changes/changes.xml
index d18539b..4e381c0 100644
--- a/src/changes/changes.xml
+++ b/src/changes/changes.xml
@@ -127,11 +127,28 @@ when they are read via ZipArchiveInputStream or ZipFile.
parsing process with a new overload of
ZipArchiveEntry#getExtraFields.
</action>
- <action type="update" date="2019-08-18" issue="COMPRESS-482">
+ <action type="fix" date="2019-08-18" issue="COMPRESS-482">
ZipArchiveInputStream failed to read stored entries with a
data descriptor if the data descriptor didn't use the
signature invented by InfoZIP.
</action>
+ <action type="update" date="2019-08-18" issue="COMPRESS-483">
+ ZipArchiveInputStream will now throw an exception if reading a
+ stored entry with a data descriptor and the data descriptor
+ doesn't match what it has actually read.
+
+ The most common case for a situation like this is a stored ZIP
+ archive inside the archive ZipArchiveInputStream is currently
+ reading. In such a case ZipArchiveInputStream would happily
+ extract the contained archive and stop once the central
+ directory of the inner archive has been hit. This is a case
+ where ZipArchiveInputStream simply can not be used and only
+ ZipFile is able to read the archive.
+
+ The only other explanation is a broken archive. So the
+ exception prevents users from thinking they had successfully
+ read the contents of the archive.
+ </action>
</release>
<release version="1.18" date="2018-08-16"
description="Release 1.18">
diff --git a/src/main/java/org/apache/commons/compress/archivers/zip/ZipArchiveInputStream.java b/src/main/java/org/apache/commons/compress/archivers/zip/ZipArchiveInputStream.java
index 229e915..5871598 100644
--- a/src/main/java/org/apache/commons/compress/archivers/zip/ZipArchiveInputStream.java
+++ b/src/main/java/org/apache/commons/compress/archivers/zip/ZipArchiveInputStream.java
@@ -865,6 +865,12 @@ public class ZipArchiveInputStream extends ArchiveInputStream implements InputSt
&& entry.getMethod() == ZipEntry.STORED);
}
+ private static final String USE_ZIPFILE_INSTEAD_OF_STREAM_DISCLAIMER =
+ " while reading a stored entry using data descriptor. Either the archive is broken"
+ + " or it can not be read using ZipArchiveInputStream and you must use ZipFile."
+ + " A common cause for this is a ZIP archive containing a ZIP archive."
+ + " See http://commons.apache.org/proper/commons-compress/zip.html#ZipArchiveInputStream_vs_ZipFile";
+
/**
* Caches a stored entry that uses the data descriptor.
*
@@ -908,8 +914,15 @@ public class ZipArchiveInputStream extends ArchiveInputStream implements InputSt
off = cacheBytesRead(bos, off, r, ddLen);
}
}
-
+ if (current.entry.getCompressedSize() != current.entry.getSize()) {
+ throw new ZipException("compressed and uncompressed size don't match"
+ + USE_ZIPFILE_INSTEAD_OF_STREAM_DISCLAIMER);
+ }
final byte[] b = bos.toByteArray();
+ if (b.length != current.entry.getSize()) {
+ throw new ZipException("actual and claimed size don't match"
+ + USE_ZIPFILE_INSTEAD_OF_STREAM_DISCLAIMER);
+ }
lastStoredEntry = new ByteArrayInputStream(b);
}
diff --git a/src/site/xdoc/zip.xml b/src/site/xdoc/zip.xml
index 855537f..2653431 100644
--- a/src/site/xdoc/zip.xml
+++ b/src/site/xdoc/zip.xml
@@ -104,9 +104,13 @@
entries <code>ZipArchiveInputStream</code> can try to read
ahead until it finds the next entry, but this approach is
not safe and has to be enabled by a constructor argument
- explicitly.</p>
+ explicitly. For example it will completely fail if the
+ stored entry is a ZIP archive itself. Starting with Compress 1.19
+ <code>ZipArchiveInputStream</code> will perform a few sanity
+ checks for STORED entries with data descriptors and throw an
+ exception if they fail.</p>
- <p>If possible, you should always prefer <code>ZipFile</code>
+ <p>If possible, you should <strong>always</strong> prefer <code>ZipFile</code>
over <code>ZipArchiveInputStream</code>.</p>
<p><code>ZipFile</code> requires a
diff --git a/src/test/java/org/apache/commons/compress/archivers/zip/ZipArchiveInputStreamTest.java b/src/test/java/org/apache/commons/compress/archivers/zip/ZipArchiveInputStreamTest.java
index 0392fd5..43a509e 100644
--- a/src/test/java/org/apache/commons/compress/archivers/zip/ZipArchiveInputStreamTest.java
+++ b/src/test/java/org/apache/commons/compress/archivers/zip/ZipArchiveInputStreamTest.java
@@ -566,6 +566,36 @@ public class ZipArchiveInputStreamTest {
}
}
+ @Test
+ public void throwsIfStoredDDIsInconsistent() throws IOException {
+ try (FileInputStream fs = new FileInputStream(getFile("bla-stored-dd-sizes-differ.zip"));
+ ZipArchiveInputStream archive = new ZipArchiveInputStream(fs, "UTF-8", true, true)) {
+ ZipArchiveEntry e = archive.getNextZipEntry();
+ assertNotNull(e);
+ assertEquals("test1.xml", e.getName());
+ assertEquals(-1, e.getCompressedSize());
+ assertEquals(-1, e.getSize());
+ thrown.expect(ZipException.class);
+ thrown.expectMessage("compressed and uncompressed size don't match");
+ byte[] data = IOUtils.toByteArray(archive);
+ }
+ }
+
+ @Test
+ public void throwsIfStoredDDIsDifferentFromLengthRead() throws IOException {
+ try (FileInputStream fs = new FileInputStream(getFile("bla-stored-dd-contradicts-actualsize.zip"));
+ ZipArchiveInputStream archive = new ZipArchiveInputStream(fs, "UTF-8", true, true)) {
+ ZipArchiveEntry e = archive.getNextZipEntry();
+ assertNotNull(e);
+ assertEquals("test1.xml", e.getName());
+ assertEquals(-1, e.getCompressedSize());
+ assertEquals(-1, e.getSize());
+ thrown.expect(ZipException.class);
+ thrown.expectMessage("actual and claimed size don't match");
+ byte[] data = IOUtils.toByteArray(archive);
+ }
+ }
+
private static byte[] readEntry(ZipArchiveInputStream zip, ZipArchiveEntry zae) throws IOException {
final int len = (int)zae.getSize();
final byte[] buff = new byte[len];
diff --git a/src/test/resources/bla-stored-dd-contradicts-actualsize.zip b/src/test/resources/bla-stored-dd-contradicts-actualsize.zip
new file mode 100644
index 0000000..18c0cd6
Binary files /dev/null and b/src/test/resources/bla-stored-dd-contradicts-actualsize.zip differ
diff --git a/src/test/resources/bla-stored-dd-sizes-differ.zip b/src/test/resources/bla-stored-dd-sizes-differ.zip
new file mode 100644
index 0000000..1c03425
Binary files /dev/null and b/src/test/resources/bla-stored-dd-sizes-differ.zip differ
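The reason the docs above prefer ZipFile over ZipArchiveInputStream is that ZipFile seeks to the central directory at the end of the archive, so every entry's size is authoritative before any entry data is read; a stream has to trust (or now, verify) what it finds inline. A minimal sketch of that access pattern, using the JDK's java.util.zip.ZipFile rather than the Commons Compress class (the APIs are closely analogous; the temp file and entry contents here are made up for illustration):

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.zip.ZipEntry;
import java.util.zip.ZipFile;
import java.util.zip.ZipOutputStream;

public class ZipFileDemo {
    public static void main(String[] args) throws IOException {
        // Build a tiny archive to read back (hypothetical temp file).
        Path zip = Files.createTempFile("demo", ".zip");
        try (ZipOutputStream out = new ZipOutputStream(Files.newOutputStream(zip))) {
            out.putNextEntry(new ZipEntry("test1.xml"));
            out.write("<hello/>".getBytes(StandardCharsets.UTF_8));
            out.closeEntry();
        }
        // ZipFile reads the central directory first, so the size reported by
        // getSize() is known before the entry data is touched -- no read-ahead
        // and no guessing, which is what makes it safe for stored entries.
        try (ZipFile zf = new ZipFile(zip.toFile())) {
            ZipEntry e = zf.getEntry("test1.xml");
            try (InputStream in = zf.getInputStream(e)) {
                byte[] data = in.readAllBytes();
                if (data.length != e.getSize()) {
                    throw new AssertionError("central directory size disagrees with data read");
                }
                System.out.println(new String(data, StandardCharsets.UTF_8)); // prints <hello/>
            }
        }
        Files.deleteIfExists(zip);
    }
}
```

With Commons Compress the equivalent is org.apache.commons.compress.archivers.zip.ZipFile, iterating getEntries() and reading each via getInputStream(entry) in the same fashion.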