Posted to dev@phoenix.apache.org by "maghamravikiran (JIRA)" <ji...@apache.org> on 2015/12/21 21:10:46 UTC
[jira] [Commented] (PHOENIX-2540) Same column twice in CREATE TABLE leads to an unusable state of the table
[ https://issues.apache.org/jira/browse/PHOENIX-2540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15067016#comment-15067016 ]
maghamravikiran commented on PHOENIX-2540:
------------------------------------------
[~warwithin] A quick test against version 4.6.0 confirms that duplicate columns aren't allowed in the DDL and that an exception is thrown.
{code}
@Test
public void testDuplicateColumnNames() throws Exception {
    // col2 is intentionally declared twice; the statement should be rejected.
    String ddl = "create table IF NOT EXISTS TEST_DUP_COLUMNS ("
            + " id char(1) NOT NULL,"
            + " col1 integer NOT NULL,"
            + " col2 integer,"
            + " col2 integer, "
            + " CONSTRAINT NAME_PK PRIMARY KEY (id, col1)"
            + " )";
    Connection conn = DriverManager.getConnection(getUrl());
    try {
        conn.createStatement().execute(ddl);
        fail("Duplicate column col2 exists in the ddl");
    } catch (SQLException sqle) {
        // Phoenix reports the duplicate column via COLUMN_EXIST_IN_DEF.
        assertEquals(SQLExceptionCode.COLUMN_EXIST_IN_DEF.getErrorCode(), sqle.getErrorCode());
    }
}
{code}
> Same column twice in CREATE TABLE leads to an unusable state of the table
> --------------------------------------------------------------------------
>
> Key: PHOENIX-2540
> URL: https://issues.apache.org/jira/browse/PHOENIX-2540
> Project: Phoenix
> Issue Type: Bug
> Affects Versions: 4.6.0
> Environment: Phoenix 4.6 and current master branch / HBase 1.1.2
> Reporter: YoungWoo Kim
> Fix For: 4.7.0
>
>
> If users define the same column twice in a table, the table becomes unusable. When I try to drop the table, I get an ArrayIndexOutOfBoundsException, as shown below. To prevent this, CREATE TABLE should check for duplicate columns. E.g.,
> CREATE TABLE tbl (a varchar not null primary key, b bigint, b bigint, c date);
> This DDL works without an error, but it should fail because column 'b' is defined twice.
> {noformat}
> 2015-12-18 12:11:52,171 ERROR [B.defaultRpcServer.handler=46,queue=4,port=16020] coprocessor.MetaDataEndpointImpl: dropTable failed
> java.lang.ArrayIndexOutOfBoundsException: 20
> at org.apache.phoenix.schema.PTableImpl.init(PTableImpl.java:380)
> at org.apache.phoenix.schema.PTableImpl.<init>(PTableImpl.java:301)
> at org.apache.phoenix.schema.PTableImpl.makePTable(PTableImpl.java:290)
> at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:844)
> at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.buildTable(MetaDataEndpointImpl.java:472)
> at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doDropTable(MetaDataEndpointImpl.java:1450)
> at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropTable(MetaDataEndpointImpl.java:1403)
> at org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:11629)
> at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7435)
> at org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:1875)
> at org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1857)
> at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32209)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
> at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
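For reference, here is a minimal sketch of the kind of up-front duplicate-column validation the report asks for. This is not Phoenix's actual implementation; the class and method names below are hypothetical and for illustration only. The real check, as the test above shows, surfaces as SQLExceptionCode.COLUMN_EXIST_IN_DEF.
{code}
import java.sql.SQLException;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class DuplicateColumnCheck {

    /**
     * Rejects a CREATE TABLE column list that names the same column twice.
     * Hypothetical helper for illustration; not part of the Phoenix code base.
     */
    static void assertNoDuplicateColumns(List<String> columnNames) throws SQLException {
        Set<String> seen = new HashSet<>();
        for (String name : columnNames) {
            // Unquoted identifiers are case-insensitive, so normalize before comparing.
            if (!seen.add(name.toUpperCase())) {
                throw new SQLException("Duplicate column in CREATE TABLE: " + name);
            }
        }
    }
}
{code}
Performing such a check while the DDL is parsed, before any metadata is persisted, avoids ever creating the inconsistent table state that later makes DROP TABLE fail in PTableImpl.init.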