Posted to user@spark.apache.org by cj <12...@qq.com> on 2016/07/26 04:13:35 UTC

Re: read parquetfile in spark-sql error

Thank you. I saw this SQL in the Spark docs: http://spark.apache.org/docs/1.6.1/sql-programming-guide.html
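
For what it's worth, one possible cause (an assumption, not confirmed in this thread) is statement separation: spark-sql -f splits its input on semicolons, so if the CREATE and the SELECT are not each terminated with ';', they reach the parser as a single statement, which falls through to the Hive grammar and fails on USING. A minimal sketch of test1.sql with explicit separators (the table name and path are the guide's own):

 CREATE TEMPORARY TABLE parquetTable
 USING org.apache.spark.sql.parquet
 OPTIONS (
   path "examples/src/main/resources/people.parquet"
 );

 SELECT * FROM parquetTable;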

------------------ Original Message ------------------
From: "Takeshi Yamamuro" <li...@gmail.com>
Sent: Tuesday, July 26, 2016 6:15 AM
To: "cj" <12...@qq.com>
Cc: "user" <us...@spark.apache.org>
Subject: Re: read parquetfile in spark-sql error



Hi,

It seems your query is not consistent with the HQL syntax.
You'd be better off re-checking the definitions: https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL#LanguageManualDDL-CreateTable
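
For example, a Hive-compatible way to expose existing Parquet data is plain CREATE EXTERNAL TABLE DDL, which the HiveQL grammar accepts. A rough sketch (the table name, columns, and HDFS location below are hypothetical):

 CREATE EXTERNAL TABLE people_parquet (name STRING, age INT)
 STORED AS PARQUET
 LOCATION 'hdfs:///path/to/people_parquet';

 SELECT * FROM people_parquet;

Unlike the Spark-specific CREATE TEMPORARY TABLE ... USING form, this stays within the Hive DDL grammar linked above.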


// maropu


On Mon, Jul 25, 2016 at 11:36 PM, Kabeer Ahmed <ka...@outlook.com> wrote:
  I hope the sample below helps you:
 
 // Read the Parquet file into a DataFrame, register it as a temporary
 // table, and query it with SQL (assumes an existing HiveContext).
 val parquetDF = hiveContext.read.parquet("hdfs://<path to file>.parquet")
 parquetDF.registerTempTable("parquetTable")
 hiveContext.sql("SELECT * FROM parquetTable").collect().foreach(println)
 
 
 
 Kabeer.
 Sent from Nylas N1, the extensible, open source mail client.
 
  On Jul 25 2016, at 12:09 pm, cj <12...@qq.com> wrote: 
  Hi all,

 I use Spark 1.6.1 as my work environment.

 When I saved the following content as a test1.sql file:
   
 CREATE TEMPORARY TABLE parquetTable
 USING org.apache.spark.sql.parquet
 OPTIONS (
   path "examples/src/main/resources/people.parquet"
 )
 SELECT * FROM parquetTable
 
 and used bin/spark-sql to run it (/home/bae/dataplatform/spark-1.6.1/bin/spark-sql --properties-file ./spark-dataplatform.conf -f test1.sql), I encountered a grammar error:
 
  SET hive.support.sql11.reserved.keywords=false
 SET spark.sql.hive.version=1.2.1
 SET spark.sql.hive.version=1.2.1
 NoViableAltException(280@[192:1: tableName : (db= identifier DOT tab= identifier -> ^( TOK_TABNAME $db $tab) |tab= identifier -> ^( TOK_TABNAME $tab) );])
         at org.antlr.runtime.DFA.noViableAlt(DFA.java:158)
         at org.antlr.runtime.DFA.predict(DFA.java:116)
         at org.apache.hadoop.hive.ql.parse.HiveParser_FromClauseParser.tableName(HiveParser_FromClauseParser.java:4747)
         at org.apache.hadoop.hive.ql.parse.HiveParser.tableName(HiveParser.java:45918)
         at org.apache.hadoop.hive.ql.parse.HiveParser.createTableStatement(HiveParser.java:5029)
         at org.apache.hadoop.hive.ql.parse.HiveParser.ddlStatement(HiveParser.java:2640)
         at org.apache.hadoop.hive.ql.parse.HiveParser.execStatement(HiveParser.java:1650)
         at org.apache.hadoop.hive.ql.parse.HiveParser.statement(HiveParser.java:1109)
         at org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:202)
         at org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:166)
         at org.apache.spark.sql.hive.HiveQl$.getAst(HiveQl.scala:276)
         at org.apache.spark.sql.hive.HiveQl$.createPlan(HiveQl.scala:303)
         at org.apache.spark.sql.hive.ExtendedHiveQlParser$$anonfun$hiveQl$1.apply(ExtendedHiveQlParser.scala:41)
         at org.apache.spark.sql.hive.ExtendedHiveQlParser$$anonfun$hiveQl$1.apply(ExtendedHiveQlParser.scala:40)
         at scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:136)
         at scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:135)
         at scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
         at scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
         at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
         at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1$$anonfun$apply$2.apply(Parsers.scala:254)
         at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1$$anonfun$apply$2.apply(Parsers.scala:254)
         at scala.util.parsing.combinator.Parsers$Failure.append(Parsers.scala:202)
         at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
         at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
         at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
         at scala.util.parsing.combinator.Parsers$$anon$2$$anonfun$apply$14.apply(Parsers.scala:891)
         at scala.util.parsing.combinator.Parsers$$anon$2$$anonfun$apply$14.apply(Parsers.scala:891)
         at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57)
         at scala.util.parsing.combinator.Parsers$$anon$2.apply(Parsers.scala:890)
         at scala.util.parsing.combinator.PackratParsers$$anon$1.apply(PackratParsers.scala:110)
         at org.apache.spark.sql.catalyst.AbstractSparkSQLParser.parse(AbstractSparkSQLParser.scala:34)
         at org.apache.spark.sql.hive.HiveQl$.parseSql(HiveQl.scala:295)
         at org.apache.spark.sql.hive.HiveQLDialect$$anonfun$parse$1.apply(HiveContext.scala:66)
         at org.apache.spark.sql.hive.HiveQLDialect$$anonfun$parse$1.apply(HiveContext.scala:66)
         at org.apache.spark.sql.hive.client.ClientWrapper$$anonfun$withHiveState$1.apply(ClientWrapper.scala:290)
         at org.apache.spark.sql.hive.client.ClientWrapper.liftedTree1$1(ClientWrapper.scala:237)
         at org.apache.spark.sql.hive.client.ClientWrapper.retryLocked(ClientWrapper.scala:236)
         at org.apache.spark.sql.hive.client.ClientWrapper.withHiveState(ClientWrapper.scala:279)
         at org.apache.spark.sql.hive.HiveQLDialect.parse(HiveContext.scala:65)
         at org.apache.spark.sql.SQLContext$$anonfun$2.apply(SQLContext.scala:211)
         at org.apache.spark.sql.SQLContext$$anonfun$2.apply(SQLContext.scala:211)
         at org.apache.spark.sql.execution.SparkSQLParser$$anonfun$org$apache$spark$sql$execution$SparkSQLParser$$others$1.apply(SparkSQLParser.scala:114)
         at org.apache.spark.sql.execution.SparkSQLParser$$anonfun$org$apache$spark$sql$execution$SparkSQLParser$$others$1.apply(SparkSQLParser.scala:113)
         at scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:136)
         at scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:135)
         at scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
         at scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
         at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
         at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1$$anonfun$apply$2.apply(Parsers.scala:254)
         at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1$$anonfun$apply$2.apply(Parsers.scala:254)
         at scala.util.parsing.combinator.Parsers$Failure.append(Parsers.scala:202)
         at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
         at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
         at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
         at scala.util.parsing.combinator.Parsers$$anon$2$$anonfun$apply$14.apply(Parsers.scala:891)
         at scala.util.parsing.combinator.Parsers$$anon$2$$anonfun$apply$14.apply(Parsers.scala:891)
         at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57)
         at scala.util.parsing.combinator.Parsers$$anon$2.apply(Parsers.scala:890)
         at scala.util.parsing.combinator.PackratParsers$$anon$1.apply(PackratParsers.scala:110)
         at org.apache.spark.sql.catalyst.AbstractSparkSQLParser.parse(AbstractSparkSQLParser.scala:34)
         at org.apache.spark.sql.SQLContext$$anonfun$1.apply(SQLContext.scala:208)
         at org.apache.spark.sql.SQLContext$$anonfun$1.apply(SQLContext.scala:208)
         at org.apache.spark.sql.execution.datasources.DDLParser.parse(DDLParser.scala:43)
         at org.apache.spark.sql.SQLContext.parseSql(SQLContext.scala:231)
         at org.apache.spark.sql.hive.HiveContext.parseSql(HiveContext.scala:331)
         at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:817)
         at org.apache.spark.sql.hive.thriftserver.SparkSQLDriver.run(SparkSQLDriver.scala:63)
         at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.processCmd(SparkSQLCLIDriver.scala:311)
         at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:376)
         at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:311)
         at org.apache.hadoop.hive.cli.CliDriver.processReader(CliDriver.java:409)
         at org.apache.hadoop.hive.cli.CliDriver.processFile(CliDriver.java:425)
         at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver$.main(SparkSQLCLIDriver.scala:166)
         at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.main(SparkSQLCLIDriver.scala)
         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
         at java.lang.reflect.Method.invoke(Method.java:606)
         at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
         at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
         at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
         at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
         at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
 Error in query: cannot recognize input near 'parquetTable' 'USING' 'org' in table name; line 2 pos 0
 
 Am I using it in the wrong way?

 Thanks
  
-- 
---
Takeshi Yamamuro