Posted to issues@trafodion.apache.org by "David Wayne Birdsall (JIRA)" <ji...@apache.org> on 2016/06/09 16:44:21 UTC

[jira] [Commented] (TRAFODION-2043) Bulk load may fail if bucket cache is configured and is large

    [ https://issues.apache.org/jira/browse/TRAFODION-2043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15322842#comment-15322842 ] 

David Wayne Birdsall commented on TRAFODION-2043:
-------------------------------------------------

The cause here is similar to that in TRAFODION-2041, and the fix is similar.

> Bulk load may fail if bucket cache is configured and is large
> -------------------------------------------------------------
>
>                 Key: TRAFODION-2043
>                 URL: https://issues.apache.org/jira/browse/TRAFODION-2043
>             Project: Apache Trafodion
>          Issue Type: Bug
>          Components: sql-cmu
>    Affects Versions: 2.0-incubating, 2.1-incubating
>         Environment: Potentially all; this particular example was seen on a 10-node cluster
>            Reporter: David Wayne Birdsall
>            Assignee: David Wayne Birdsall
>             Fix For: 2.1-incubating
>
>
> Bulk load may fail when HBase is configured to use bucket cache. An example: 
> SQL>LOAD WITH CONTINUE ON ERROR INTO TK.DEVICES SELECT * FROM HIVE.TK.DEVICES ;
>  
> UTIL_OUTPUT
> --------------------------------------------------------------------------------------------------------------------------------
> Task: LOAD             Status: Started    Object: TRAFODION.TK.DEVICES                                                          
> Task:  CLEANUP         Status: Started    Object: TRAFODION.TK.DEVICES                                                          
> Task:  CLEANUP         Status: Ended      Object: TRAFODION.TK.DEVICES                                                          
> Task:  PREPARATION     Status: Started    Object: TRAFODION.TK.DEVICES                                                          
> *** ERROR[8448] Unable to access Hbase interface. Call to ExpHbaseInterface::addToHFile returned error HBASE_ADD_TO_HFILE_ERROR(-713). Cause: 
> java.lang.OutOfMemoryError: Direct buffer memory
> java.nio.Bits.reserveMemory(Bits.java:658)
> java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123)
> java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:306)
> org.apache.hadoop.hbase.util.ByteBufferArray.<init>(ByteBufferArray.java:65)
> org.apache.hadoop.hbase.io.hfile.bucket.ByteBufferIOEngine.<init>(ByteBufferIOEngine.java:47)
> org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.getIOEngineFromName(BucketCache.java:307)
> org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.<init>(BucketCache.java:217)
> org.apache.hadoop.hbase.io.hfile.CacheConfig.getBucketCache(CacheConfig.java:614)
> org.apache.hadoop.hbase.io.hfile.CacheConfig.getL2(CacheConfig.java:553)
> org.apache.hadoop.hbase.io.hfile.CacheConfig.instantiateBlockCache(CacheConfig.java:637)
> org.apache.hadoop.hbase.io.hfile.CacheConfig.<init>(CacheConfig.java:231)
> org.trafodion.sql.HBulkLoadClient.doCreateHFile(HBulkLoadClient.java:209)
> org.trafodion.sql.HBulkLoadClient.addToHFile(HBulkLoadClient.java:245)
> . [2016-06-09 00:31:55]
> The failure occurs because the bulk load client code uses a server-side API that requires a CacheConfig object, and that object configures itself from the settings in hbase-site.xml. In particular, if a large bucket cache is configured, constructing the CacheConfig causes the client process to allocate that much direct buffer memory (via ByteBufferIOEngine, as the stack trace shows), which can exceed the memory limit we set for Trafodion client-side processes.
> The fix is to either avoid using a cache at all, or to unset the bucket cache properties before constructing the CacheConfig object; a sketch of the second approach follows.
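> For illustration, here is a minimal sketch of the second approach. It is a hypothetical helper, not the actual patch; the property names are the standard HBase bucket cache settings, and the placement (near HBulkLoadClient.doCreateHFile) is inferred from the stack trace above.
>
>   import org.apache.hadoop.conf.Configuration;
>   import org.apache.hadoop.hbase.HBaseConfiguration;
>   import org.apache.hadoop.hbase.io.hfile.CacheConfig;
>
>   // Hypothetical helper: build a CacheConfig that will not instantiate
>   // a bucket cache in a client-side process.
>   static CacheConfig clientSideCacheConfig() {
>     Configuration conf = HBaseConfiguration.create();
>     // Clear the bucket cache settings inherited from hbase-site.xml so
>     // that constructing the CacheConfig does not allocate a server-sized,
>     // direct-memory bucket cache inside this process.
>     conf.unset("hbase.bucketcache.ioengine");
>     conf.unset("hbase.bucketcache.size");
>     return new CacheConfig(conf);
>   }
>
> With the bucket cache settings cleared, CacheConfig falls back to the ordinary on-heap block cache, so the direct buffer allocation in ByteBufferIOEngine that triggered the OutOfMemoryError never happens.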



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)