Posted to issues@trafodion.apache.org by "Alice Chen (JIRA)" <ji...@apache.org> on 2015/07/22 20:17:51 UTC

[jira] [Created] (TRAFODION-695) LP Bug: 1380826 - Select sees error 8848 java.lang.OutOfMemoryError: GC overhead limit exceeded

Alice Chen created TRAFODION-695:
------------------------------------

             Summary: LP Bug: 1380826 - Select sees error 8848 java.lang.OutOfMemoryError: GC overhead limit exceeded
                 Key: TRAFODION-695
                 URL: https://issues.apache.org/jira/browse/TRAFODION-695
             Project: Apache Trafodion
          Issue Type: Bug
          Components: dtm
            Reporter: Weishiun Tsai
            Assignee: Oliver Bucaojit
            Priority: Blocker
             Fix For: 1.0 (pre-incubation)


When running a select query on a large table, ABASE (32,000,000 rows), the query returns error 8448 indicating an out-of-memory condition:
‘java.lang.OutOfMemoryError: GC overhead limit exceeded’.  When this happens, accompanying Java heap dump files are generated as $SQL_HOME/logs/java_pid<pid>.hprof.
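For context, ‘GC overhead limit exceeded’ is the JVM’s signal that nearly all CPU time is going to garbage collection while very little heap is being reclaimed, which usually means some component is retaining far more objects than the heap can hold. The .hprof files appear because the affected JVMs are running with heap-dump-on-OOM enabled (-XX:+HeapDumpOnOutOfMemoryError). The standalone sketch below is NOT Trafodion code; the class name and dump path are made up purely to show how such a dump gets produced:

// Standalone illustration only -- not Trafodion code.
// Run with something like:
//   java -Xmx256m -XX:+HeapDumpOnOutOfMemoryError \
//        -XX:HeapDumpPath=/tmp/logs OomDemo
// Retaining an ever-growing collection eventually makes the JVM spend
// almost all of its time in GC, and it aborts with either
// "GC overhead limit exceeded" or "Java heap space", writing
// java_pid<pid>.hprof to the configured dump path.
import java.util.ArrayList;
import java.util.List;

public class OomDemo {
    public static void main(String[] args) {
        List<long[]> retained = new ArrayList<long[]>();
        while (true) {
            retained.add(new long[1024]);   // keep every allocation reachable
        }
    }
}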

The initial analysis is that the TM may have an issue handling this large a number of rows.  This is seen on the v1011_0917 build installed on a 4-node Cloudera cluster, Amethyst9.  The problem is reproducible using sqlci, but the queries require the QA g_wisc32 tables to be populated on the system first:

>>set schema g_wisc32;

--- SQL operation complete.
>>select count(*) from abase;

(EXPR)              
--------------------

            32000000

--- 1 row(s) selected.

==================================================

Here are 2 separate sets of queries that can be used to reproduce this problem:

QUERY I

prepare x2 from
create table t032tab store by (unique2) AS (
select * from trafodion.g_wisc32.ABASE
where unique2 = unique1
and stringu1 = stringu2
and unique3 < 3200);

execute x2;

QUERY II

begin work;
select [last 0] * from trafodion.g_wisc32.ABASE;

[Note: the begin work is necessary since that’s when the TransactionScanner code is involved.]
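To make the suspected failure mode concrete, the sketch below contrasts a scan that buffers every row in memory with one that streams results a bounded batch at a time. This is illustrative HBase client code only, NOT the actual TransactionScanner implementation, and the method names are invented; retaining the full 32,000,000-row result set in the way the first method does is the kind of pattern that ends in ‘GC overhead limit exceeded’:

// Illustrative HBase client code only -- NOT the Trafodion
// TransactionScanner implementation.  Method names are assumptions.
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;

public class ScanPatterns {

    // Anti-pattern: materializes the entire table in one list, so every
    // row stays reachable until the scan finishes.
    static List<Result> scanAllBuffered(HTable table) throws IOException {
        List<Result> all = new ArrayList<Result>();
        ResultScanner scanner = table.getScanner(new Scan());
        try {
            for (Result r : scanner) {
                all.add(r);                 // unbounded retention
            }
        } finally {
            scanner.close();
        }
        return all;
    }

    // Streaming pattern: only a small, bounded batch is live at any time.
    static long scanStreaming(HTable table) throws IOException {
        Scan scan = new Scan();
        scan.setCaching(1000);              // rows fetched per RPC, bounded
        ResultScanner scanner = table.getScanner(scan);
        long rows = 0;
        try {
            for (Result r = scanner.next(); r != null; r = scanner.next()) {
                rows++;                     // process and drop each row
            }
        } finally {
            scanner.close();
        }
        return rows;
    }
}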

==================================================

Here are the execution outputs and the error messages for the 2 sets of queries:

QUERY I

>>prepare x2 from
+>create table t032tab store by (unique2) AS (
+>select * from trafodion.g_wisc32.ABASE
+>where unique2 = unique1
+>and stringu1 = stringu2
+>and unique3 < 3200)
+>;

--- SQL command prepared.
>>
>>execute x2;
java.lang.OutOfMemoryError: GC overhead limit exceeded
Dumping heap to /opt/home/trafodion/v1011_0917/logs/java_pid8687.hprof ...
java.lang.OutOfMemoryError: GC overhead limit exceeded
Dumping heap to /opt/home/trafodion/v1011_0917/logs/java_pid8688.hprof ...
java.lang.OutOfMemoryError: GC overhead limit exceeded
Dumping heap to /opt/home/trafodion/v1011_0917/logs/java_pid5345.hprof ...
java.lang.OutOfMemoryError: GC overhead limit exceeded
Dumping heap to /opt/home/trafodion/v1011_0917/logs/java_pid5346.hprof ...
java.lang.OutOfMemoryError: GC overhead limit exceeded
Dumping heap to /opt/home/trafodion/v1011_0917/logs/java_pid20767.hprof ...
java.lang.OutOfMemoryError: GC overhead limit exceeded
Dumping heap to /opt/home/trafodion/v1011_0917/logs/java_pid12601.hprof ...
java.lang.OutOfMemoryError: GC overhead limit exceeded
Dumping heap to /opt/home/trafodion/v1011_0917/logs/java_pid20768.hprof ...
Heap dump file created [1269351108 bytes in 11.544 secs]
Heap dump file created [1268164201 bytes in 11.273 secs]
java.lang.OutOfMemoryError: GC overhead limit exceeded
Dumping heap to /opt/home/trafodion/v1011_0917/logs/java_pid12599.hprof ...
Heap dump file created [1269421982 bytes in 12.786 secs]
Heap dump file created [1269147045 bytes in 11.188 secs]
Heap dump file created [1269363019 bytes in 11.640 secs]
Heap dump file created [1269299969 bytes in 11.506 secs]
Heap dump file created [1268157131 bytes in 10.964 secs]
Heap dump file created [1269387272 bytes in 11.446 secs]

*** ERROR[8448] Unable to access Hbase interface. Call to ExpHbaseInterface::nextRow returned error HBASE_ACCESS_ERROR(-705). Cause: 
java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError: GC overhead limit exceeded
java.util.concurrent.FutureTask.report(FutureTask.java:122)
java.util.concurrent.FutureTask.get(FutureTask.java:188)
org.trafodion.sql.HBaseAccess.HTableClient.fetchRows(HTableClient.java:458)
.

--- 0 row(s) inserted.

QUERY II

>>begin work;

--- SQL operation complete.
>>select [last 0] * from trafodion.g_wisc32.ABASE;
java.lang.OutOfMemoryError: GC overhead limit exceeded
Dumping heap to /opt/home/trafodion/v1011_0917/logs/java_pid9004.hprof ...
Heap dump file created [1269585718 bytes in 11.053 secs]

*** ERROR[8448] Unable to access Hbase interface. Call to ExpHbaseInterface::nextRow returned error HBASE_ACCESS_ERROR(-705). Cause: 
java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError: GC overhead limit exceeded
java.util.concurrent.FutureTask.report(FutureTask.java:122)
java.util.concurrent.FutureTask.get(FutureTask.java:188)
org.trafodion.sql.HBaseAccess.HTableClient.fetchRows(HTableClient.java:458)
.

--- 0 row(s) selected.
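
Reading the trace in both cases: the OutOfMemoryError is raised inside the asynchronous fetch task that HTableClient.fetchRows waits on, so it surfaces wrapped in a java.util.concurrent.ExecutionException when FutureTask.get() is called, and the SQL layer then reports it as HBASE_ACCESS_ERROR(-705) / ERROR[8448]. The mapping to -705 is Trafodion-specific and is only shown by the trace above; the sketch below demonstrates just the generic JDK behavior that an Error thrown inside a submitted task reappears as the cause of an ExecutionException at get():

// Generic JDK behavior demonstrated in isolation -- not Trafodion code.
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class WrappedOomDemo {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        Future<Object> future = pool.submit(new Callable<Object>() {
            public Object call() {
                // Stand-in for the fetch task running out of heap.
                throw new OutOfMemoryError("GC overhead limit exceeded");
            }
        });
        try {
            future.get();                   // the waiting thread sees the failure here
        } catch (ExecutionException e) {
            // e.getCause() is the OutOfMemoryError thrown inside the task,
            // which matches the "Cause:" block in ERROR[8448] above.
            System.out.println("cause: " + e.getCause());
        } finally {
            pool.shutdown();
        }
    }
}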


