Posted to issues@drill.apache.org by "Aman Sinha (JIRA)" <ji...@apache.org> on 2016/06/21 14:12:58 UTC
[jira] [Updated] (DRILL-4734) Query against HBase table on a 5 node
cluster fails with SchemaChangeException
[ https://issues.apache.org/jira/browse/DRILL-4734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Aman Sinha updated DRILL-4734:
------------------------------
Description:
[Creating this JIRA on behalf of Qiang Li]
Let's say I have two tables:
{noformat}
offers_ref0:       rowkey = salt (1 byte) + long uid (8 bytes); family: v, qualifier: v (string)
offers_nation_idx: rowkey = salt (1 byte) + string;             family: v, qualifier: v (long, 8 bytes)
{noformat}
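As an illustration (not part of the original report), the two rowkey layouts above could be built like this in Python; the salt values and helper names are hypothetical:

```python
import struct

def ref0_rowkey(salt: int, uid: int) -> bytes:
    # offers_ref0: 1-byte salt followed by an 8-byte big-endian signed long uid
    return bytes([salt]) + struct.pack(">q", uid)

def nation_idx_rowkey(salt: int, key: str) -> bytes:
    # offers_nation_idx: 1-byte salt followed by a UTF-8 string
    return bytes([salt]) + key.encode("utf-8")

# A ref0 key is always 9 bytes: 1 salt byte + 8 uid bytes
assert len(ref0_rowkey(0, 42)) == 9
```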
Here is the SQL:
{noformat}
SELECT CONVERT_FROM(BYTE_SUBSTR(`ref0`.row_key, -8, 8), 'BIGINT_BE') AS did,
       CONVERT_FROM(`ref0`.`v`.`v`, 'UTF8') AS v
FROM hbase.`offers_nation_idx` AS `nation`
JOIN hbase.offers_ref0 AS `ref0`
  ON CONVERT_FROM(BYTE_SUBSTR(`ref0`.row_key, -8, 8), 'BIGINT_BE') = CONVERT_FROM(nation.`v`.`v`, 'BIGINT_BE')
WHERE `nation`.row_key > '0br' AND `nation`.row_key < '0bs'
LIMIT 10
{noformat}
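To make the join key concrete, here is a small sketch (not from the report) of what `BYTE_SUBSTR(row_key, -8, 8)` followed by `CONVERT_FROM(..., 'BIGINT_BE')` computes, assuming BYTE_SUBSTR uses 1-based offsets with negative offsets counting from the end of the byte array:

```python
import struct

def byte_substr(b: bytes, start: int, length: int) -> bytes:
    # Assumed BYTE_SUBSTR semantics: 1-based start; a negative
    # start is an offset from the end of the byte array.
    if start < 0:
        start = len(b) + start
    else:
        start = start - 1
    return b[start:start + length]

def bigint_be(b: bytes) -> int:
    # CONVERT_FROM(..., 'BIGINT_BE'): 8-byte big-endian signed long
    return struct.unpack(">q", b)[0]

# Last 8 bytes of a salted rowkey decode back to the original uid
row_key = b"\x07" + struct.pack(">q", 123456789)
assert bigint_be(byte_substr(row_key, -8, 8)) == 123456789
```

So both sides of the join condition reduce to the same 8-byte big-endian long: the uid at the tail of the `offers_ref0` rowkey, and the long stored in the `v:v` qualifier of `offers_nation_idx`.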
When I execute the query on a single node, or on fewer than 5 nodes, it works fine. But when I execute it on a cluster of about 14 nodes, it throws an exception.
The first run throws this exception:
*Caused by: java.sql.SQLException: SYSTEM ERROR: SchemaChangeException:
Hash join does not support schema changes*
Then, if I query again, it always throws the exception below:
{noformat}
*Query Failed: An Error Occurred*
*org.apache.drill.common.exceptions.UserRemoteException: SYSTEM ERROR: IllegalStateException: Failure while reading vector. Expected vector class of org.apache.drill.exec.vector.NullableIntVector but was holding vector class org.apache.drill.exec.vector.complex.MapVector, field=v(MAP:REQUIRED)[v(VARBINARY:OPTIONAL)[$bits$(UINT1:REQUIRED),
v(VARBINARY:OPTIONAL)[$offsets$(UINT4:REQUIRED)]]] Fragment 12:4 [Error Id: 06c6eae4-0822-4714-b0bf-a6e04ebfec79 on xxx:31010]*
{noformat}
was:
[Creating this JIRA on behalf of Qiang Li]
Let's say I have two tables:
offers_ref0:       rowkey = salt (1 byte) + long uid (8 bytes); family: v, qualifier: v (string)
offers_nation_idx: rowkey = salt (1 byte) + string;             family: v, qualifier: v (long, 8 bytes)
Here is the SQL:
SELECT CONVERT_FROM(BYTE_SUBSTR(`ref0`.row_key, -8, 8), 'BIGINT_BE') AS uid,
       CONVERT_FROM(`ref0`.`v`.`v`, 'UTF8') AS v
FROM hbase.`offers_nation_idx` AS `nation`
JOIN hbase.offers_ref0 AS `ref0`
  ON CONVERT_FROM(BYTE_SUBSTR(`ref0`.row_key, -8, 8), 'BIGINT_BE') = CONVERT_FROM(nation.`v`.`v`, 'BIGINT_BE')
WHERE `nation`.row_key > '0br' AND `nation`.row_key < '0bs'
LIMIT 10
When I execute the query on a single node, or on fewer than 5 nodes, it works fine. But when I execute it on a cluster of about 14 nodes, it throws an exception.
The first run throws this exception:
*Caused by: java.sql.SQLException: SYSTEM ERROR: SchemaChangeException:
Hash join does not support schema changes*
Then, if I query again, it always throws the exception below:
*Query Failed: An Error Occurred*
*org.apache.drill.common.exceptions.UserRemoteException: SYSTEM ERROR:
IllegalStateException: Failure while reading vector. Expected vector class
of org.apache.drill.exec.vector.NullableIntVector but was holding vector
class org.apache.drill.exec.vector.complex.MapVector, field=
v(MAP:REQUIRED)[v(VARBINARY:OPTIONAL)[$bits$(UINT1:REQUIRED),
v(VARBINARY:OPTIONAL)[$offsets$(UINT4:REQUIRED)]]] Fragment 12:4 [Error Id:
06c6eae4-0822-4714-b0bf-a6e04ebfec79 on xxx:31010]*
It's very strange, and I do not know how to solve it.
I tried adding nodes to the cluster one by one; the problem reproduces once I have added 5 nodes. Can anyone help me solve this issue?
> Query against HBase table on a 5 node cluster fails with SchemaChangeException
> ------------------------------------------------------------------------------
>
> Key: DRILL-4734
> URL: https://issues.apache.org/jira/browse/DRILL-4734
> Project: Apache Drill
> Issue Type: Bug
> Components: Execution - Relational Operators, Storage - HBase
> Affects Versions: 1.6.0
> Reporter: Aman Sinha
>
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)