Posted to issues@hbase.apache.org by "Harsh J (Commented) (JIRA)" <ji...@apache.org> on 2012/01/03 13:12:39 UTC

[jira] [Commented] (HBASE-5071) HFile has a possible cast issue.

    [ https://issues.apache.org/jira/browse/HBASE-5071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13178709#comment-13178709 ] 

Harsh J commented on HBASE-5071:
--------------------------------

Hm, given that we have to build an array here (and Java array sizes are ints), I can't really figure out a way to bypass this one.

The following is ugly, but lemme know what you think of it:
{code}
int sizeToLoadOnOpen = (int) Math.min(fileSize - trailer.getLoadOnOpenDataOffset() - trailer.getTrailerSize(), Integer.MAX_VALUE);
// The first argument is computed as a long; we cap it at Integer.MAX_VALUE before casting back to int.
{code}
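To sanity-check the behaviour, here is a standalone sketch (plain Java, not HBase code; the file size and offsets are made-up values, chosen only so that the remaining span exceeds Integer.MAX_VALUE) comparing the raw cast with the capped computation:
{code}
// Standalone sketch, not HBase code. The sizes below are made up, picked only
// so that (fileSize - offsets) is larger than Integer.MAX_VALUE.
public class CapSketch {
  public static void main(String[] args) {
    long fileSize = 3L * 1024 * 1024 * 1024;  // pretend ~3 GB HFile
    long loadOnOpenDataOffset = 1024L;        // pretend load-on-open offset
    long trailerSize = 212L;                  // pretend trailer size

    long span = fileSize - loadOnOpenDataOffset - trailerSize;

    int rawCast = (int) span;                              // plain cast: wraps negative
    int capped = (int) Math.min(span, Integer.MAX_VALUE);  // the cap proposed above

    System.out.println("raw cast: " + rawCast);  // prints a negative number
    System.out.println("capped:   " + capped);   // prints 2147483647
  }
}
{code}
Note the cap only avoids the negative size; for such a file we would still load at most ~2 GB of that section, which is the ugly part I'd like opinions on.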
                
> HFile has a possible cast issue.
> --------------------------------
>
>                 Key: HBASE-5071
>                 URL: https://issues.apache.org/jira/browse/HBASE-5071
>             Project: HBase
>          Issue Type: Bug
>          Components: io
>    Affects Versions: 0.90.0
>            Reporter: Harsh J
>              Labels: hfile
>
> HBASE-3040 introduced this line originally in HFile.Reader#loadFileInfo(...):
> {code}
> int allIndexSize = (int)(this.fileSize - this.trailer.dataIndexOffset - FixedFileTrailer.trailerSize());
> {code}
> Which on trunk today, for HFile v1, is:
> {code}
> int sizeToLoadOnOpen = (int) (fileSize - trailer.getLoadOnOpenDataOffset() -
>         trailer.getTrailerSize());
> {code}
> This computed (and cast) integer is then used to build an array of the same size. But if fileSize is large enough that the remaining span exceeds Integer.MAX_VALUE, the cast can wrap around to a negative value and spew out exceptions such as the following (a standalone sketch of the wrap-around appears after this description):
> {code}
> java.lang.NegativeArraySizeException 
> at org.apache.hadoop.hbase.io.hfile.HFile$Reader.readAllIndex(HFile.java:805) 
> at org.apache.hadoop.hbase.io.hfile.HFile$Reader.loadFileInfo(HFile.java:832) 
> at org.apache.hadoop.hbase.regionserver.StoreFile$Reader.loadFileInfo(StoreFile.java:1003) 
> at org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:382) 
> at org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:438) 
> at org.apache.hadoop.hbase.regionserver.Store.loadStoreFiles(Store.java:267) 
> at org.apache.hadoop.hbase.regionserver.Store.<init>(Store.java:209) 
> at org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:2088) 
> at org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:358) 
> at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:2661) 
> at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:2647) 
> {code}
> Did we accidentally limit single region sizes this way?
> (I'm not yet sure of HFile v2's structure, so I don't know whether v2 has the same issue.)
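To tie this back to the trace: since the cast integer is used to build an array, a wrapped-around negative size is exactly what produces the NegativeArraySizeException above. A minimal sketch of that failure (plain Java, not HBase code; the 3 GB file size is made up):
{code}
// Minimal sketch, not HBase code; 3 GB is a made-up file size, large enough
// that the (int) cast of the remaining span wraps around to a negative value.
public class NegativeSizeSketch {
  public static void main(String[] args) {
    long fileSize = 3L * 1024 * 1024 * 1024;
    long loadOnOpenDataOffset = 1024L;
    long trailerSize = 212L;

    int sizeToLoadOnOpen = (int) (fileSize - loadOnOpenDataOffset - trailerSize);

    // sizeToLoadOnOpen is now negative, so this allocation throws
    // java.lang.NegativeArraySizeException, matching the stack trace above.
    byte[] buf = new byte[sizeToLoadOnOpen];
    System.out.println(buf.length);
  }
}
{code}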

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira