Posted to issues@commons.apache.org by "Diego Rivera (Commented) (JIRA)" <ji...@apache.org> on 2012/01/23 21:51:41 UTC

[jira] [Commented] (JCS-88) Block cache fails to validate a cache file on startup when it contains elements with more than 2 blocks.

    [ https://issues.apache.org/jira/browse/JCS-88?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13191439#comment-13191439 ] 

Diego Rivera commented on JCS-88:
---------------------------------

diff -ruN jcs-1.3/src/java/org/apache/jcs/auxiliary/disk/block/BlockDisk.java jcs-1.3.new/src/java/org/apache/jcs/auxiliary/disk/block/BlockDisk.java
--- jcs-1.3/src/java/org/apache/jcs/auxiliary/disk/block/BlockDisk.java 2007-05-30 12:23:53.000000000 -0600
+++ jcs-1.3.new/src/java/org/apache/jcs/auxiliary/disk/block/BlockDisk.java     2012-01-23 14:33:07.437164316 -0600
@@ -193,7 +193,7 @@
             {
                 // use the max that can be written to a block or whatever is left in the original
                 // array
-                int chunkSize = Math.min( totalUsed + maxChunkSize, totalBytes - totalUsed );
+                int chunkSize = Math.min( maxChunkSize, totalBytes - totalUsed );
                 byte[] chunk = new byte[chunkSize];
                 // copy from the used position to the chunk size on the complete array to the chunk
                 // array.
                
> Block cache fails to validate a cache file on startup when it contains elements with more than 2 blocks.
> --------------------------------------------------------------------------------------------------------
>
>                 Key: JCS-88
>                 URL: https://issues.apache.org/jira/browse/JCS-88
>             Project: Commons JCS
>          Issue Type: Bug
>    Affects Versions: jcs-1.3
>            Reporter: Diego Rivera
>   Original Estimate: 0h
>  Remaining Estimate: 0h
>
> The arithmetic for calculating block sizes is wrong: the code adds a term that should not be part of the calculation at that point. For each block that needs to be written, the chunk size is currently calculated as:
> int chunkSize = Math.min( totalUsed + maxChunkSize, totalBytes - totalUsed );
> The term "totalUsed" should not be added to maxChunkSize: the intent is to construct a chunk that is either as large as a block allows (maxChunkSize) or as large as the remaining bytes (totalBytes - totalUsed), whichever is smaller. The correct calculation is therefore:
> int chunkSize = Math.min( maxChunkSize, totalBytes - totalUsed );
> The problem occurs in src/java/org/apache/jcs/auxiliary/disk/block/BlockDisk.java, line 196, inside byte[][] getBlockChunks(byte[] complete, int numBlocksNeeded).
> A patch has been devised and will be submitted as a comment (since attachments aren't possible at this point). I still need to devise a unit test for this, since the existing unit tests passed without catching the bug.
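The effect of the fix can be illustrated with a small standalone sketch. The class and method names below are hypothetical (this is not the actual BlockDisk code), and the byte counts are arbitrary demonstration values; only the Math.min arithmetic mirrors the patched line:

```java
import java.util.Arrays;

public class ChunkSizeDemo
{
    /** Corrected arithmetic: each chunk is capped at maxChunkSize,
     *  or takes whatever bytes remain, whichever is smaller. */
    static int[] chunkSizes( int totalBytes, int maxChunkSize )
    {
        int numChunks = ( totalBytes + maxChunkSize - 1 ) / maxChunkSize;
        int[] sizes = new int[numChunks];
        int totalUsed = 0;
        for ( int i = 0; i < numChunks; i++ )
        {
            int chunkSize = Math.min( maxChunkSize, totalBytes - totalUsed );
            sizes[i] = chunkSize;
            totalUsed += chunkSize;
        }
        return sizes;
    }

    public static void main( String[] args )
    {
        // Buggy form: Math.min( totalUsed + maxChunkSize, totalBytes - totalUsed )
        // With totalBytes = 10, maxChunkSize = 4:
        //   chunk 1: totalUsed = 0 -> min( 4, 10 ) = 4  (same as the fix)
        //   chunk 2: totalUsed = 4 -> min( 8, 6 )  = 6  (exceeds the 4-byte block!)
        // The first chunk is always correct because totalUsed is 0, which is
        // consistent with the failure only appearing on multi-block elements.
        System.out.println( Arrays.toString( chunkSizes( 10, 4 ) ) ); // prints [4, 4, 2]
    }
}
```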
