Posted to reviews@spark.apache.org by wangxiaojing <gi...@git.apache.org> on 2014/10/14 06:05:49 UTC

[GitHub] spark pull request: [spark-3940][sql]sql Print the error code thre...

GitHub user wangxiaojing opened a pull request:

    https://github.com/apache/spark/pull/2790

    [spark-3940][sql]sql Print the error code three times

    If a wrong SQL statement is entered, the console should print the error only once; currently the same error is printed three times.
    e.g.:
    spark-sql> show tabless;
    show tabless;
    14/10/13 21:03:48 INFO ParseDriver: Parsing command: show tabless
    NoViableAltException(26@[598:1: ddlStatement : ( createDatabaseStatement | switchDatabaseStatement | dropDatabaseStatement | createTableStatement | dropTableStatement | truncateTableStatement | alterStatement | descStatement | showStatement | metastoreCheck | createViewStatement | dropViewStatement | createFunctionStatement | createMacroStatement | createIndexStatement | dropIndexStatement | dropFunctionStatement | dropMacroStatement | analyzeStatement | lockStatement | unlockStatement | createRoleStatement | dropRoleStatement | grantPrivileges | revokePrivileges | showGrants | showRoleGrants | grantRole | revokeRole );])
    	at org.antlr.runtime.DFA.noViableAlt(DFA.java:158)
    	at org.antlr.runtime.DFA.predict(DFA.java:144)
    	at org.apache.hadoop.hive.ql.parse.HiveParser.ddlStatement(HiveParser.java:1962)
    	at org.apache.hadoop.hive.ql.parse.HiveParser.execStatement(HiveParser.java:1298)
    	at org.apache.hadoop.hive.ql.parse.HiveParser.statement(HiveParser.java:938)
    	at org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:190)
    	at org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:161)
    	at org.apache.spark.sql.hive.HiveQl$.getAst(HiveQl.scala:218)
    	at org.apache.spark.sql.hive.HiveQl$.createPlan(HiveQl.scala:226)
    	at org.apache.spark.sql.hive.ExtendedHiveQlParser$$anonfun$hiveQl$1.apply(ExtendedHiveQlParser.scala:50)
    	at org.apache.spark.sql.hive.ExtendedHiveQlParser$$anonfun$hiveQl$1.apply(ExtendedHiveQlParser.scala:49)
    	at scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:136)
    	at scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:135)
    	at scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
    	at scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
    	at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
    	at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1$$anonfun$apply$2.apply(Parsers.scala:254)
    	at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1$$anonfun$apply$2.apply(Parsers.scala:254)
    	at scala.util.parsing.combinator.Parsers$Failure.append(Parsers.scala:202)
    	at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
    	at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
    	at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
    	at scala.util.parsing.combinator.Parsers$$anon$2$$anonfun$apply$14.apply(Parsers.scala:891)
    	at scala.util.parsing.combinator.Parsers$$anon$2$$anonfun$apply$14.apply(Parsers.scala:891)
    	at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57)
    	at scala.util.parsing.combinator.Parsers$$anon$2.apply(Parsers.scala:890)
    	at scala.util.parsing.combinator.PackratParsers$$anon$1.apply(PackratParsers.scala:110)
    	at org.apache.spark.sql.catalyst.AbstractSparkSQLParser.apply(SparkSQLParser.scala:31)
    	at org.apache.spark.sql.hive.HiveQl$$anonfun$3.apply(HiveQl.scala:130)
    	at org.apache.spark.sql.hive.HiveQl$$anonfun$3.apply(HiveQl.scala:130)
    	at org.apache.spark.sql.catalyst.SparkSQLParser$$anonfun$org$apache$spark$sql$catalyst$SparkSQLParser$$others$1.apply(SparkSQLParser.scala:184)
    	at org.apache.spark.sql.catalyst.SparkSQLParser$$anonfun$org$apache$spark$sql$catalyst$SparkSQLParser$$others$1.apply(SparkSQLParser.scala:183)
    	at scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:136)
    	at scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:135)
    	at scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
    	at scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
    	at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
    	at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1$$anonfun$apply$2.apply(Parsers.scala:254)
    	at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1$$anonfun$apply$2.apply(Parsers.scala:254)
    	at scala.util.parsing.combinator.Parsers$Failure.append(Parsers.scala:202)
    	at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
    	at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
    	at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
    	at scala.util.parsing.combinator.Parsers$$anon$2$$anonfun$apply$14.apply(Parsers.scala:891)
    	at scala.util.parsing.combinator.Parsers$$anon$2$$anonfun$apply$14.apply(Parsers.scala:891)
    	at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57)
    	at scala.util.parsing.combinator.Parsers$$anon$2.apply(Parsers.scala:890)
    	at scala.util.parsing.combinator.PackratParsers$$anon$1.apply(PackratParsers.scala:110)
    	at org.apache.spark.sql.catalyst.AbstractSparkSQLParser.apply(SparkSQLParser.scala:31)
    	at org.apache.spark.sql.hive.HiveQl$.parseSql(HiveQl.scala:221)
    	at org.apache.spark.sql.hive.HiveContext.sql(HiveContext.scala:98)
    	at org.apache.spark.sql.hive.thriftserver.SparkSQLDriver.run(SparkSQLDriver.scala:58)
    	at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.processCmd(SparkSQLCLIDriver.scala:274)
    	at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:413)
    	at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver$.main(SparkSQLCLIDriver.scala:209)
    	at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.main(SparkSQLCLIDriver.scala)
    14/10/13 21:03:49 ERROR SparkSQLDriver: Failed in [show tabless]
    org.apache.spark.sql.hive.HiveQl$ParseException: Failed to parse: show tabless
    	at org.apache.spark.sql.hive.HiveQl$.createPlan(HiveQl.scala:225)
    	at org.apache.spark.sql.hive.ExtendedHiveQlParser$$anonfun$hiveQl$1.apply(ExtendedHiveQlParser.scala:50)
    	at org.apache.spark.sql.hive.ExtendedHiveQlParser$$anonfun$hiveQl$1.apply(ExtendedHiveQlParser.scala:49)
    	at scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:136)
    	at scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:135)
    	at scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
    	at scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
    	at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
    	at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1$$anonfun$apply$2.apply(Parsers.scala:254)
    	at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1$$anonfun$apply$2.apply(Parsers.scala:254)
    	at scala.util.parsing.combinator.Parsers$Failure.append(Parsers.scala:202)
    	at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
    	at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
    	at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
    	at scala.util.parsing.combinator.Parsers$$anon$2$$anonfun$apply$14.apply(Parsers.scala:891)
    	at scala.util.parsing.combinator.Parsers$$anon$2$$anonfun$apply$14.apply(Parsers.scala:891)
    	at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57)
    	at scala.util.parsing.combinator.Parsers$$anon$2.apply(Parsers.scala:890)
    	at scala.util.parsing.combinator.PackratParsers$$anon$1.apply(PackratParsers.scala:110)
    	at org.apache.spark.sql.catalyst.AbstractSparkSQLParser.apply(SparkSQLParser.scala:31)
    	at org.apache.spark.sql.hive.HiveQl$$anonfun$3.apply(HiveQl.scala:130)
    	at org.apache.spark.sql.hive.HiveQl$$anonfun$3.apply(HiveQl.scala:130)
    	at org.apache.spark.sql.catalyst.SparkSQLParser$$anonfun$org$apache$spark$sql$catalyst$SparkSQLParser$$others$1.apply(SparkSQLParser.scala:184)
    	at org.apache.spark.sql.catalyst.SparkSQLParser$$anonfun$org$apache$spark$sql$catalyst$SparkSQLParser$$others$1.apply(SparkSQLParser.scala:183)
    	at scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:136)
    	at scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:135)
    	at scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
    	at scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
    	at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
    	at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1$$anonfun$apply$2.apply(Parsers.scala:254)
    	at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1$$anonfun$apply$2.apply(Parsers.scala:254)
    	at scala.util.parsing.combinator.Parsers$Failure.append(Parsers.scala:202)
    	at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
    	at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
    	at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
    	at scala.util.parsing.combinator.Parsers$$anon$2$$anonfun$apply$14.apply(Parsers.scala:891)
    	at scala.util.parsing.combinator.Parsers$$anon$2$$anonfun$apply$14.apply(Parsers.scala:891)
    	at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57)
    	at scala.util.parsing.combinator.Parsers$$anon$2.apply(Parsers.scala:890)
    	at scala.util.parsing.combinator.PackratParsers$$anon$1.apply(PackratParsers.scala:110)
    	at org.apache.spark.sql.catalyst.AbstractSparkSQLParser.apply(SparkSQLParser.scala:31)
    	at org.apache.spark.sql.hive.HiveQl$.parseSql(HiveQl.scala:221)
    	at org.apache.spark.sql.hive.HiveContext.sql(HiveContext.scala:98)
    	at org.apache.spark.sql.hive.thriftserver.SparkSQLDriver.run(SparkSQLDriver.scala:58)
    	at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.processCmd(SparkSQLCLIDriver.scala:274)
    	at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:413)
    	at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver$.main(SparkSQLCLIDriver.scala:209)
    	at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.main(SparkSQLCLIDriver.scala)
    Caused by: org.apache.hadoop.hive.ql.parse.ParseException: line 1:5 cannot recognize input near 'show' 'tabless' '<EOF>' in ddl statement
    
    	at org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:193)
    	at org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:161)
    	at org.apache.spark.sql.hive.HiveQl$.getAst(HiveQl.scala:218)
    	at org.apache.spark.sql.hive.HiveQl$.createPlan(HiveQl.scala:226)
    	... 47 more
    Time taken: 4.35 seconds
    14/10/13 21:03:51 INFO CliDriver: Time taken: 4.35 seconds
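    The repetition happens because the same failure is surfaced at more than one layer: SparkSQLDriver logs the stack trace via logError, and a non-zero CommandProcessorResponse code then causes the CLI layer above it to print the error message again. The following standalone sketch uses mock classes (not the real Spark/Hive API; names and signatures are simplified assumptions) to illustrate that control flow and why returning 0 suppresses the re-print:

    ```scala
    // Mock of org.apache.hadoop.hive.ql.processors.CommandProcessorResponse;
    // only the fields needed for this sketch.
    case class CommandProcessorResponse(responseCode: Int, errorMessage: String)

    object ErrorPrintSketch {
      // Mock of SparkSQLDriver.run: logs the failure, then carries the
      // message and code upward in the response.
      def run(command: String, code: Int): CommandProcessorResponse = {
        println(s"ERROR SparkSQLDriver: Failed in [$command]")       // first print
        CommandProcessorResponse(code, s"Failed to parse: $command")
      }

      // Mock of the CLI layer: a non-zero response code makes it print
      // the carried error message a second time.
      def processCmd(command: String, code: Int): Int = {
        val rc = run(command, code)
        if (rc.responseCode != 0) println(rc.errorMessage)           // repeat print
        rc.responseCode
      }

      def main(args: Array[String]): Unit = {
        processCmd("show tabless", -3) // error surfaces twice
        processCmd("show tabless", 0)  // logged once, not re-printed
      }
    }
    ```

    This is only a sketch of the layering; in the real code path the extra prints come from Hive's CliDriver on top of SparkSQLDriver.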

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/wangxiaojing/spark spark-3940

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/2790.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #2790
    
----
commit e2e5c140269cc9271e11ff33ca7f9221f567a89b
Author: wangxiaojing <u9...@gmail.com>
Date:   2014-10-14T04:00:36Z

    sql Print the error code three times

----


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastructure@apache.org or file a JIRA ticket
with INFRA.
---

---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org


[GitHub] spark pull request: [spark-3940][sql]sql Print the error code thre...

Posted by liancheng <gi...@git.apache.org>.
Github user liancheng commented on a diff in the pull request:

    https://github.com/apache/spark/pull/2790#discussion_r18826823
  
    --- Diff: sql/hive-thriftserver/src/main/scala/org/apache/spark/sql/hive/thriftserver/SparkSQLDriver.scala ---
    @@ -62,7 +62,7 @@ private[hive] class SparkSQLDriver(val context: HiveContext = SparkSQLEnv.hiveCo
         } catch {
           case cause: Throwable =>
             logError(s"Failed in [$command]", cause)
    -        new CommandProcessorResponse(-3, ExceptionUtils.getFullStackTrace(cause), null)
    +        new CommandProcessorResponse(0, ExceptionUtils.getFullStackTrace(cause), null)
    --- End diff --
    
    Ah, sorry, I misread the change. It's changing -3 to 0. I get it.



[GitHub] spark pull request: [spark-3940][sql]sql Print the error code thre...

Posted by pwendell <gi...@git.apache.org>.
Github user pwendell commented on the pull request:

    https://github.com/apache/spark/pull/2790#issuecomment-59629160
  
    @wangxiaojing I updated the JIRA title a bit - do you mind updating your pull request?
    
    https://issues.apache.org/jira/browse/SPARK-3940



[GitHub] spark pull request: [spark-3940][sql]sql Print the error code thre...

Posted by SparkQA <gi...@git.apache.org>.
Github user SparkQA commented on the pull request:

    https://github.com/apache/spark/pull/2790#issuecomment-59050958
  
      [QA tests have finished](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/379/consoleFull) for PR 2790 at commit [`e2e5c14`](https://github.com/apache/spark/commit/e2e5c140269cc9271e11ff33ca7f9221f567a89b).
     * This patch **passes all tests**.
     * This patch merges cleanly.
     * This patch adds no public classes.



[GitHub] spark pull request: [spark-3940][sql]sql Print the error code thre...

Posted by liancheng <gi...@git.apache.org>.
Github user liancheng commented on the pull request:

    https://github.com/apache/spark/pull/2790#issuecomment-59039692
  
    ok to test



[GitHub] spark pull request: [spark-3940][sql]sql Print the error code thre...

Posted by liancheng <gi...@git.apache.org>.
Github user liancheng commented on a diff in the pull request:

    https://github.com/apache/spark/pull/2790#discussion_r18826596
  
    --- Diff: sql/hive-thriftserver/src/main/scala/org/apache/spark/sql/hive/thriftserver/SparkSQLDriver.scala ---
    @@ -62,7 +62,7 @@ private[hive] class SparkSQLDriver(val context: HiveContext = SparkSQLEnv.hiveCo
         } catch {
           case cause: Throwable =>
             logError(s"Failed in [$command]", cause)
    -        new CommandProcessorResponse(-3, ExceptionUtils.getFullStackTrace(cause), null)
    +        new CommandProcessorResponse(0, ExceptionUtils.getFullStackTrace(cause), null)
    --- End diff --
    
    Would you mind elaborating on why changing the response code to -3 solves this issue?



[GitHub] spark pull request: [spark-3940][sql]sql Print the error code thre...

Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the pull request:

    https://github.com/apache/spark/pull/2790#issuecomment-58987831
  
    Can one of the admins verify this patch?



[GitHub] spark pull request: [spark-3940][sql]SQL console prints error mess...

Posted by asfgit <gi...@git.apache.org>.
Github user asfgit closed the pull request at:

    https://github.com/apache/spark/pull/2790



[GitHub] spark pull request: [spark-3940][sql]sql Print the error code thre...

Posted by SparkQA <gi...@git.apache.org>.
Github user SparkQA commented on the pull request:

    https://github.com/apache/spark/pull/2790#issuecomment-59041761
  
      [QA tests have started](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/379/consoleFull) for PR 2790 at commit [`e2e5c14`](https://github.com/apache/spark/commit/e2e5c140269cc9271e11ff33ca7f9221f567a89b).
     * This patch merges cleanly.

