Posted to dev@phoenix.apache.org by "JeongMin Ju (JIRA)" <ji...@apache.org> on 2017/09/21 08:41:00 UTC
[jira] [Updated] (PHOENIX-4223) PhoenixStorageHandler for Hive is not working properly.
[ https://issues.apache.org/jira/browse/PHOENIX-4223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
JeongMin Ju updated PHOENIX-4223:
---------------------------------
Description:
Continued from [PHOENIX-4222|https://issues.apache.org/jira/browse/PHOENIX-4222].
I ran the query "select * from movies where movie_id < 10;"
and an error occurred.
{noformat}
ERROR mapreduce.PhoenixInputFormat: Failed to get the query plan with error [ERROR 601 (42P00): Syntax error. Encountered "," at line 1, column 25.]
java.lang.RuntimeException: org.apache.phoenix.exception.PhoenixParserException: ERROR 601 (42P00): Syntax error. Encountered "," at line 1, column 25.
17/09/21 17:27:59 ERROR exec.Task: Job Submission failed with exception 'java.lang.RuntimeException(org.apache.phoenix.exception.PhoenixParserException: ERROR 601 (42P00): Syntax error. Encountered "," at line 1, column 25.)'
java.lang.RuntimeException: org.apache.phoenix.exception.PhoenixParserException: ERROR 601 (42P00): Syntax error. Encountered "," at line 1, column 25.
at org.apache.phoenix.hive.mapreduce.PhoenixInputFormat.getQueryPlan(PhoenixInputFormat.java:266)
at org.apache.phoenix.hive.mapreduce.PhoenixInputFormat.getSplits(PhoenixInputFormat.java:131)
at org.apache.hadoop.hive.ql.io.HiveInputFormat.addSplitsForGroup(HiveInputFormat.java:306)
at org.apache.hadoop.hive.ql.io.HiveInputFormat.getSplits(HiveInputFormat.java:408)
at org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getCombineSplits(CombineHiveInputFormat.java:363)
at org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getSplits(CombineHiveInputFormat.java:573)
at org.apache.hadoop.mapreduce.JobSubmitter.writeOldSplits(JobSubmitter.java:332)
at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:324)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:200)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1307)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1304)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1714)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1304)
at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:578)
at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:573)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1714)
at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:573)
at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:564)
at org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:416)
at org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:140)
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:212)
at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100)
at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1969)
at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1682)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1419)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1203)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1193)
at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:220)
at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:172)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:383)
at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:775)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:693)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:628)
Caused by: org.apache.phoenix.exception.PhoenixParserException: ERROR 601 (42P00): Syntax error. Encountered "," at line 1, column 25.
at org.apache.phoenix.exception.PhoenixParserException.newException(PhoenixParserException.java:33)
at org.apache.phoenix.parse.SQLParser.parseStatement(SQLParser.java:111)
at org.apache.phoenix.jdbc.PhoenixStatement$PhoenixStatementParser.parseStatement(PhoenixStatement.java:1547)
at org.apache.phoenix.jdbc.PhoenixStatement.parseStatement(PhoenixStatement.java:1630)
at org.apache.phoenix.jdbc.PhoenixStatement.compileQuery(PhoenixStatement.java:1640)
at org.apache.phoenix.jdbc.PhoenixStatement.optimizeQuery(PhoenixStatement.java:1635)
at org.apache.phoenix.hive.mapreduce.PhoenixInputFormat.getQueryPlan(PhoenixInputFormat.java:260)
... 36 more
Caused by: NoViableAltException(29@[])
at org.apache.phoenix.parse.PhoenixSQLParser.select_list(PhoenixSQLParser.java:5954)
at org.apache.phoenix.parse.PhoenixSQLParser.single_select(PhoenixSQLParser.java:4720)
at org.apache.phoenix.parse.PhoenixSQLParser.unioned_selects(PhoenixSQLParser.java:4837)
at org.apache.phoenix.parse.PhoenixSQLParser.select_node(PhoenixSQLParser.java:4903)
at org.apache.phoenix.parse.PhoenixSQLParser.oneStatement(PhoenixSQLParser.java:817)
at org.apache.phoenix.parse.PhoenixSQLParser.statement(PhoenixSQLParser.java:518)
at org.apache.phoenix.parse.SQLParser.parseStatement(SQLParser.java:108)
... 41 more
{noformat}
The query generated by the Phoenix storage handler is:
select /*+ NO_CACHE */ {color:red},{color}MOVIE_ID,TITLE,GENRES from phoenix.movies where movie_id < 10
A stray comma appears at the start of the select clause's column list, which is what the Phoenix parser rejects at column 25.
PhoenixStorageHandlerUtil > getReadColumnNames must be modified so it does not emit the leading comma.
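A leading comma like this typically comes from prepending the separator before every element, including the first, when concatenating column names. The sketch below is illustrative only (the class and method names are hypothetical, not the actual Phoenix source); it reproduces the bad output and shows the usual fix of joining with the separator only between elements:

```java
import java.util.Arrays;
import java.util.List;

public class ReadColumnNamesSketch {

    // Buggy pattern (hypothetical): appending "," before each name yields
    // ",MOVIE_ID,TITLE,GENRES" because the builder starts out empty.
    static String buildBuggy(List<String> columns) {
        StringBuilder sb = new StringBuilder();
        for (String c : columns) {
            sb.append(',').append(c);
        }
        return sb.toString();
    }

    // Fixed: emit the separator only between elements.
    static String buildFixed(List<String> columns) {
        return String.join(",", columns);
    }

    public static void main(String[] args) {
        List<String> cols = Arrays.asList("MOVIE_ID", "TITLE", "GENRES");
        System.out.println(buildBuggy(cols)); // ,MOVIE_ID,TITLE,GENRES
        System.out.println(buildFixed(cols)); // MOVIE_ID,TITLE,GENRES
    }
}
```

With the buggy variant, the handler's template "select /*+ NO_CACHE */ " + columnList + " from ..." produces exactly the malformed statement in the log above.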
was:
Continued from [PHOENIX-4222|https://issues.apache.org/jira/browse/PHOENIX-4222].
I ran the query "select * from movies where movie_id < 10;"
and an error occurred.
{noformat}
ERROR mapreduce.PhoenixInputFormat: Failed to get the query plan with error [ERROR 601 (42P00): Syntax error. Encountered "," at line 1, column 25.]
java.lang.RuntimeException: org.apache.phoenix.exception.PhoenixParserException: ERROR 601 (42P00): Syntax error. Encountered "," at line 1, column 25.
17/09/21 17:27:59 ERROR exec.Task: Job Submission failed with exception 'java.lang.RuntimeException(org.apache.phoenix.exception.PhoenixParserException: ERROR 601 (42P00): Syntax error. Encountered "," at line 1, column 25.)'
java.lang.RuntimeException: org.apache.phoenix.exception.PhoenixParserException: ERROR 601 (42P00): Syntax error. Encountered "," at line 1, column 25.
at org.apache.phoenix.hive.mapreduce.PhoenixInputFormat.getQueryPlan(PhoenixInputFormat.java:266)
at org.apache.phoenix.hive.mapreduce.PhoenixInputFormat.getSplits(PhoenixInputFormat.java:131)
at org.apache.hadoop.hive.ql.io.HiveInputFormat.addSplitsForGroup(HiveInputFormat.java:306)
at org.apache.hadoop.hive.ql.io.HiveInputFormat.getSplits(HiveInputFormat.java:408)
at org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getCombineSplits(CombineHiveInputFormat.java:363)
at org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getSplits(CombineHiveInputFormat.java:573)
at org.apache.hadoop.mapreduce.JobSubmitter.writeOldSplits(JobSubmitter.java:332)
at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:324)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:200)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1307)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1304)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1714)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1304)
at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:578)
at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:573)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1714)
at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:573)
at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:564)
at org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:416)
at org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:140)
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:212)
at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100)
at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1969)
at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1682)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1419)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1203)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1193)
at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:220)
at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:172)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:383)
at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:775)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:693)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:628)
Caused by: org.apache.phoenix.exception.PhoenixParserException: ERROR 601 (42P00): Syntax error. Encountered "," at line 1, column 25.
at org.apache.phoenix.exception.PhoenixParserException.newException(PhoenixParserException.java:33)
at org.apache.phoenix.parse.SQLParser.parseStatement(SQLParser.java:111)
at org.apache.phoenix.jdbc.PhoenixStatement$PhoenixStatementParser.parseStatement(PhoenixStatement.java:1547)
at org.apache.phoenix.jdbc.PhoenixStatement.parseStatement(PhoenixStatement.java:1630)
at org.apache.phoenix.jdbc.PhoenixStatement.compileQuery(PhoenixStatement.java:1640)
at org.apache.phoenix.jdbc.PhoenixStatement.optimizeQuery(PhoenixStatement.java:1635)
at org.apache.phoenix.hive.mapreduce.PhoenixInputFormat.getQueryPlan(PhoenixInputFormat.java:260)
... 36 more
Caused by: NoViableAltException(29@[])
at org.apache.phoenix.parse.PhoenixSQLParser.select_list(PhoenixSQLParser.java:5954)
at org.apache.phoenix.parse.PhoenixSQLParser.single_select(PhoenixSQLParser.java:4720)
at org.apache.phoenix.parse.PhoenixSQLParser.unioned_selects(PhoenixSQLParser.java:4837)
at org.apache.phoenix.parse.PhoenixSQLParser.select_node(PhoenixSQLParser.java:4903)
at org.apache.phoenix.parse.PhoenixSQLParser.oneStatement(PhoenixSQLParser.java:817)
at org.apache.phoenix.parse.PhoenixSQLParser.statement(PhoenixSQLParser.java:518)
at org.apache.phoenix.parse.SQLParser.parseStatement(SQLParser.java:108)
... 41 more
{noformat}
The query generated by the Phoenix storage handler is:
select /*+ NO_CACHE */ *{color:red},{color}*MOVIE_ID,TITLE,GENRES from phoenix.movies where movie_id < 10
A stray comma appears at the start of the select clause's column list, which is what the Phoenix parser rejects at column 25.
PhoenixStorageHandlerUtil > getReadColumnNames must be modified so it does not emit the leading comma.
> PhoenixStorageHandler for Hive is not working properly.
> -------------------------------------------------------
>
> Key: PHOENIX-4223
> URL: https://issues.apache.org/jira/browse/PHOENIX-4223
> Project: Phoenix
> Issue Type: Bug
> Affects Versions: 4.11.0
> Reporter: JeongMin Ju
>
> Continued from [PHOENIX-4222|https://issues.apache.org/jira/browse/PHOENIX-4222].
> I ran the query "select * from movies where movie_id < 10;"
> and an error occurred.
> {noformat}
> ERROR mapreduce.PhoenixInputFormat: Failed to get the query plan with error [ERROR 601 (42P00): Syntax error. Encountered "," at line 1, column 25.]
> java.lang.RuntimeException: org.apache.phoenix.exception.PhoenixParserException: ERROR 601 (42P00): Syntax error. Encountered "," at line 1, column 25.
> 17/09/21 17:27:59 ERROR exec.Task: Job Submission failed with exception 'java.lang.RuntimeException(org.apache.phoenix.exception.PhoenixParserException: ERROR 601 (42P00): Syntax error. Encountered "," at line 1, column 25.)'
> java.lang.RuntimeException: org.apache.phoenix.exception.PhoenixParserException: ERROR 601 (42P00): Syntax error. Encountered "," at line 1, column 25.
> at org.apache.phoenix.hive.mapreduce.PhoenixInputFormat.getQueryPlan(PhoenixInputFormat.java:266)
> at org.apache.phoenix.hive.mapreduce.PhoenixInputFormat.getSplits(PhoenixInputFormat.java:131)
> at org.apache.hadoop.hive.ql.io.HiveInputFormat.addSplitsForGroup(HiveInputFormat.java:306)
> at org.apache.hadoop.hive.ql.io.HiveInputFormat.getSplits(HiveInputFormat.java:408)
> at org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getCombineSplits(CombineHiveInputFormat.java:363)
> at org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getSplits(CombineHiveInputFormat.java:573)
> at org.apache.hadoop.mapreduce.JobSubmitter.writeOldSplits(JobSubmitter.java:332)
> at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:324)
> at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:200)
> at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1307)
> at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1304)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1714)
> at org.apache.hadoop.mapreduce.Job.submit(Job.java:1304)
> at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:578)
> at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:573)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1714)
> at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:573)
> at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:564)
> at org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:416)
> at org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:140)
> at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:212)
> at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100)
> at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1969)
> at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1682)
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1419)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1203)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1193)
> at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:220)
> at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:172)
> at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:383)
> at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:775)
> at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:693)
> at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:628)
> Caused by: org.apache.phoenix.exception.PhoenixParserException: ERROR 601 (42P00): Syntax error. Encountered "," at line 1, column 25.
> at org.apache.phoenix.exception.PhoenixParserException.newException(PhoenixParserException.java:33)
> at org.apache.phoenix.parse.SQLParser.parseStatement(SQLParser.java:111)
> at org.apache.phoenix.jdbc.PhoenixStatement$PhoenixStatementParser.parseStatement(PhoenixStatement.java:1547)
> at org.apache.phoenix.jdbc.PhoenixStatement.parseStatement(PhoenixStatement.java:1630)
> at org.apache.phoenix.jdbc.PhoenixStatement.compileQuery(PhoenixStatement.java:1640)
> at org.apache.phoenix.jdbc.PhoenixStatement.optimizeQuery(PhoenixStatement.java:1635)
> at org.apache.phoenix.hive.mapreduce.PhoenixInputFormat.getQueryPlan(PhoenixInputFormat.java:260)
> ... 36 more
> Caused by: NoViableAltException(29@[])
> at org.apache.phoenix.parse.PhoenixSQLParser.select_list(PhoenixSQLParser.java:5954)
> at org.apache.phoenix.parse.PhoenixSQLParser.single_select(PhoenixSQLParser.java:4720)
> at org.apache.phoenix.parse.PhoenixSQLParser.unioned_selects(PhoenixSQLParser.java:4837)
> at org.apache.phoenix.parse.PhoenixSQLParser.select_node(PhoenixSQLParser.java:4903)
> at org.apache.phoenix.parse.PhoenixSQLParser.oneStatement(PhoenixSQLParser.java:817)
> at org.apache.phoenix.parse.PhoenixSQLParser.statement(PhoenixSQLParser.java:518)
> at org.apache.phoenix.parse.SQLParser.parseStatement(SQLParser.java:108)
> ... 41 more
> {noformat}
> The query generated by the Phoenix storage handler is:
> select /*+ NO_CACHE */ {color:red},{color}MOVIE_ID,TITLE,GENRES from phoenix.movies where movie_id < 10
> A stray comma appears at the start of the select clause's column list, which is what the Phoenix parser rejects at column 25.
> PhoenixStorageHandlerUtil > getReadColumnNames must be modified so it does not emit the leading comma.
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)