Posted to issues@trafodion.apache.org by "David Wayne Birdsall (JIRA)" <ji...@apache.org> on 2015/10/28 19:57:27 UTC
[jira] [Created] (TRAFODION-1566) Ungraceful failure when transaction size limit reached
David Wayne Birdsall created TRAFODION-1566:
-----------------------------------------------
Summary: Ungraceful failure when transaction size limit reached
Key: TRAFODION-1566
URL: https://issues.apache.org/jira/browse/TRAFODION-1566
Project: Apache Trafodion
Issue Type: Bug
Components: dtm, sql-exe
Affects Versions: 1.3-incubating
Reporter: David Wayne Birdsall
Priority: Minor
When a DELETE exceeds the transaction size limit, it fails with a puzzling error message.
The following script reproduces the problem on a workstation (using install_local_hadoop, so the HMaster process is handling all four regions). Results are best if the setup section is run in a separate sqlci session from the deleteTest section (so the prepare sees current statistics):
?section setup
create schema DeleteFailure;
set schema DeleteFailure;
-- create a table saltx with 458752 (= 7*65536) rows, and another table,
-- salty, that is a copy of saltx (the row-count arithmetic is worked
-- out after the script)
CREATE TABLE saltx
(
  A INT NO DEFAULT NOT NULL NOT DROPPABLE SERIALIZED
, B INT NO DEFAULT NOT NULL NOT DROPPABLE SERIALIZED
, C VARCHAR(20) CHARACTER SET ISO88591 COLLATE DEFAULT
    NO DEFAULT NOT NULL NOT DROPPABLE SERIALIZED
, PRIMARY KEY (A ASC, B ASC)
)
SALT USING 4 PARTITIONS
;
insert into saltx values (1,1,'hi there!'),
(2,1,'bye there!'),(3,1,'Happy Tuesday!'),(4,1,'Huckleberry Pie');
insert into saltx select a+4,b,c from saltx;
insert into saltx select a+8,b,c from saltx;
insert into saltx select a+16,b,c from saltx;
insert into saltx select a+32,b,c from saltx;
insert into saltx select a+64,b,c from saltx;
insert into saltx select a+128,b,c from saltx;
insert into saltx select a+256,b,c from saltx;
insert into saltx select a+512,b,c from saltx;
insert into saltx select a+1024,b,c from saltx;
upsert into saltx select a+2048,b,c from saltx;
upsert into saltx select a+4096,b,c from saltx;
upsert into saltx select a+8192,b,c from saltx;
upsert into saltx select a+16384,b,c from saltx;
upsert into saltx select a+32768,b,c from saltx;
upsert using load into saltx select a,b+1,c from saltx;
upsert using load into saltx select a,b+2,c from saltx where b = 1;
upsert using load into saltx select a,b+3,c from saltx where b = 1;
upsert using load into saltx select a,b+4,c from saltx where b = 1;
upsert using load into saltx select a,b+5,c from saltx where b = 1;
upsert using load into saltx select a,b+6,c from saltx where b = 1;
update statistics for table saltx on every column;
create table salty like saltx;
upsert using load into salty select * from saltx;
update statistics for table salty on every column;
?section deleteTest
set schema DeleteFailure;
set param ?b '4'; -- change it to '5' and the delete will succeed
prepare xx from delete from saltx where b > ?b;
explain options 'f' xx;
execute xx; -- fails with ungracious error message
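For reference, the row-count arithmetic behind the script: the first INSERT seeds 4 rows with b = 1, and each of the following 14 insert/upsert-select statements doubles the table, giving 4 * 2^14 = 65536 rows, all with b = 1. The first "upsert using load" adds b = 2 for every row, and the five "where b = 1" upserts add b = 3 through b = 7, so saltx ends up with 7 * 65536 = 458752 rows, 65536 per b value. With ?b = '4' the DELETE therefore targets b IN (5,6,7), i.e. 3 * 65536 = 196608 rows; with ?b = '5' it targets only b IN (6,7), i.e. 131072 rows. Evidently the transaction size limit on this setup is crossed somewhere between those two row counts.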
Here is a log showing the deleteTest section failing:
[birdsall@dev02 IUDCosting]$ sqlci
Apache Trafodion Conversational Interface 1.3.0
Copyright (c) 2015 Apache Software Foundation
>>obey deleteFailure.sql(deleteTest);
>>?section deleteTest
>>
>>set schema DeleteFailure;
--- SQL operation complete.
>>
>>set param ?b '4';
>> -- change it to '5' and the delete will succeed
>>
>>prepare xx from delete from saltx where b > ?b;
--- SQL command prepared.
>>
>>explain options 'f' xx;
LC RC OP OPERATOR OPT DESCRIPTION CARD
---- ---- ---- -------------------- -------- -------------------- ---------
4 . 5 root x 1.52E+005
3 . 4 esp_exchange 1:4(hash2) 1.52E+005
1 2 3 tuple_flow 1.52E+005
. . 2 trafodion_vsbb_delet SALTX 1.00E+000
. . 1 trafodion_scan SALTX 1.52E+005
--- SQL operation complete.
>>
>>execute xx;
*** ERROR[8448] Unable to access Hbase interface. Call to ExpHbaseInterface::nextRow returned error HBASE_ACCESS_ERROR(-706). Cause:
java.util.concurrent.ExecutionException: java.io.IOException: PerformScan error on coprocessor call, scannerID: 14 java.io.IOException: performScan encountered Exception txID: 70081 Exception: org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException: TrxRegionEndpoint coprocessor: getScanner - scanner id 14, Expected nextCallSeq: 8, But the nextCallSeq received from client: 7
java.util.concurrent.FutureTask.report(FutureTask.java:122)
java.util.concurrent.FutureTask.get(FutureTask.java:188)
org.trafodion.sql.HTableClient.fetchRows(HTableClient.java:652)
.
--- 0 row(s) deleted.
>> -- fails with ungracious error message
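Until the failure is reported more gracefully, a workaround sketch (not part of the repro above, and assuming sqlci's default autocommit so each statement runs in its own transaction) is to split the large DELETE into smaller per-b-value statements, keeping each transaction under the size limit:
set schema DeleteFailure;
-- each DELETE below touches 65536 rows in its own transaction,
-- instead of 196608 rows in a single transaction
delete from saltx where b = 5;
delete from saltx where b = 6;
delete from saltx where b = 7;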
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
RE: [jira] [Created] (TRAFODION-1566) Ungraceful failure when transaction size limit reached
Posted by Rohit Jain <ro...@esgyn.com>.
What is the transaction size limit and where is it documented?