Posted to issues@spark.apache.org by "keliang (JIRA)" <ji...@apache.org> on 2016/07/25 02:47:20 UTC

[jira] [Comment Edited] (SPARK-16603) Spark 2.0 fails to execute SQL statements in which a field name begins with a number, like "d.30_day_loss_user", while Spark 1.6 supports this

    [ https://issues.apache.org/jira/browse/SPARK-16603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15391268#comment-15391268 ] 

keliang edited comment on SPARK-16603 at 7/25/16 2:46 AM:
----------------------------------------------------------

Hi, I tested this feature with spark-2.0.1-SNAPSHOT:
First, create a test table tsp with columns [10_user_age: Int, 20_user_addr: String]; this succeeds:
==========================================================
CREATE TABLE `tsp`(`10_user_age` int, `20_user_addr` string)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe'
WITH SERDEPROPERTIES (
  'serialization.format' = '1'
)
STORED AS
  INPUTFORMAT 'org.apache.hadoop.mapred.TextInputFormat'
  OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
TBLPROPERTIES (
  'transient_lastDdlTime' = '1469418111'
)
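
A quick sanity check (a sketch; it assumes the standard Spark SQL DESCRIBE command, and the expected output below is inferred from the DDL above rather than captured from a run) shows the digit-prefixed names are stored as-is:

DESCRIBE tsp;
-- expected (abbreviated):
-- 10_user_age     int
-- 20_user_addr    string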

Second, query tsp with "select * from tsp where tsp.20_user_addr <10". This fails with:
Error in query:
mismatched input '.20' expecting {<EOF>, '.', '[', 'GROUP', 'ORDER', 'HAVING', 'LIMIT', 'OR', 'AND', 'IN', NOT, 'BETWEEN', 'LIKE', RLIKE, 'IS', 'WINDOW', 'UNION', 'EXCEPT', 'INTERSECT', EQ, '<=>', '<>', '!=', '<', LTE, '>', GTE, '+', '-', '*', '/', '%', 'DIV', '&', '|', '^', 'SORT', 'CLUSTER', 'DISTRIBUTE'}(line 1, pos 27)

== SQL ==
select * from tsp where tsp.20_user_addr <10
---------------------------^^^

The question is: why does a digit-prefixed column name pass the validation rules when the table is created, but fail when the table is queried? Isn't that self-contradictory?
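
A possible workaround (a sketch, untested here; it assumes the query parser accepts backtick-quoted identifiers the same way the CREATE TABLE statement above does) is to quote the digit-prefixed column in the query, so the lexer no longer reads '.20' as the start of a decimal literal:

select * from tsp where tsp.`20_user_addr` < 10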




> Spark 2.0 fails to execute SQL statements in which a field name begins with a number, like "d.30_day_loss_user", while Spark 1.6 supports this
> ------------------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: SPARK-16603
>                 URL: https://issues.apache.org/jira/browse/SPARK-16603
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 2.0.0
>            Reporter: marymwu
>            Priority: Minor
>
> Spark 2.0 fails to execute SQL statements in which a field name begins with a number, like "d.30_day_loss_user", while Spark 1.6 supports this.
> Error: org.apache.spark.sql.catalyst.parser.ParseException: mismatched input '.30' expecting {')', ','}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org