Posted to dev@phoenix.apache.org by "Hadoop QA (JIRA)" <ji...@apache.org> on 2016/08/18 17:31:21 UTC
[jira] [Commented] (PHOENIX-930) duplicated columns cause query
exception and drop table exception
[ https://issues.apache.org/jira/browse/PHOENIX-930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15426852#comment-15426852 ]
Hadoop QA commented on PHOENIX-930:
-----------------------------------
{color:red}-1 overall{color}. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12807325/PHOENIX-930-v3.patch
against master branch at commit 386cbbbf7a5dd736888e8e0bfe16e513d54e215c.
ATTACHMENT ID: 12807325
{color:green}+1 @author{color}. The patch does not contain any @author tags.
{color:green}+1 tests included{color}. The patch appears to include 3 new or modified tests.
{color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings.
{color:red}-1 javadoc{color}. The javadoc tool appears to have generated 34 warning messages.
{color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings.
{color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100 characters.
{color:green}+1 core tests{color}. The patch passed unit tests.
Test results: https://builds.apache.org/job/PreCommit-PHOENIX-Build/521//testReport/
Javadoc warnings: https://builds.apache.org/job/PreCommit-PHOENIX-Build/521//artifact/patchprocess/patchJavadocWarnings.txt
Console output: https://builds.apache.org/job/PreCommit-PHOENIX-Build/521//console
This message is automatically generated.
> duplicated columns cause query exception and drop table exception
> -----------------------------------------------------------------
>
> Key: PHOENIX-930
> URL: https://issues.apache.org/jira/browse/PHOENIX-930
> Project: Phoenix
> Issue Type: Bug
> Reporter: wangkai
> Assignee: Junegunn Choi
> Fix For: 4.8.1
>
> Attachments: PHOENIX-930, PHOENIX-930-v2.patch, PHOENIX-930-v3.patch, PHOENIX-930.patch
>
>
> When I create a table like this: "create table test (id varchar not null primary key, f.name varchar, f.email varchar, f.email varchar)", an org.apache.phoenix.schema.ColumnAlreadyExistsException is raised, but the table is still successfully created.
> Then, when I run a query like "select * from test", an exception is thrown:
> Caused by: java.lang.ArrayIndexOutOfBoundsException: 3
> at org.apache.phoenix.schema.PTableImpl.init(PTableImpl.java:283)
> at org.apache.phoenix.schema.PTableImpl.<init>(PTableImpl.java:216)
> at org.apache.phoenix.schema.PTableImpl.makePTable(PTableImpl.java:209)
> at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:443)
> at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.buildTable(MetaDataEndpointImpl.java:254)
> at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doGetTable(MetaDataEndpointImpl.java:1077)
> at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:1023)
> ... 10 more
> Then I try to drop the table with "drop table test", and an exception is also thrown:
> Caused by: java.lang.ArrayIndexOutOfBoundsException: 3
> at org.apache.phoenix.schema.PTableImpl.init(PTableImpl.java:283)
> at org.apache.phoenix.schema.PTableImpl.<init>(PTableImpl.java:216)
> at org.apache.phoenix.schema.PTableImpl.makePTable(PTableImpl.java:209)
> at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:443)
> at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.buildTable(MetaDataEndpointImpl.java:254)
> at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doGetTable(MetaDataEndpointImpl.java:1077)
> at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:1023)
> ... 10 more
> So I have to drop SYSTEM.CATALOG and SYSTEM.SEQUENCE from the hbase shell...
> The ArrayIndexOutOfBoundsException is thrown because the position recorded for the duplicated f.email column in the CATALOG table is incorrect. I think it's better to check for duplicate columns before creating the table.
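The duplicate-column check the description asks for can be sketched as follows. This is a minimal illustration, not Phoenix's actual implementation; the class and method names (DuplicateColumnCheck, findDuplicate) are hypothetical, and a real fix would run such a check against the parsed column definitions before writing any rows to SYSTEM.CATALOG.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class DuplicateColumnCheck {
    // Hypothetical helper: returns the first duplicated column name,
    // or null if all names are unique. HashSet.add returns false when
    // the element is already present, which flags the duplicate.
    static String findDuplicate(List<String> columnNames) {
        Set<String> seen = new HashSet<>();
        for (String name : columnNames) {
            if (!seen.add(name)) {
                return name;
            }
        }
        return null;
    }

    public static void main(String[] args) {
        // Column list from the failing DDL: f.email appears twice.
        List<String> cols = Arrays.asList("id", "f.name", "f.email", "f.email");
        System.out.println(findDuplicate(cols)); // prints "f.email"
    }
}
```

Performing this check up front would let CREATE TABLE fail cleanly with ColumnAlreadyExistsException before any metadata is persisted, avoiding the inconsistent CATALOG state that later breaks SELECT and DROP.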
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)