Posted to issues@livy.apache.org by "Marco Gaido (JIRA)" <ji...@apache.org> on 2019/07/22 10:59:00 UTC

[jira] [Assigned] (LIVY-571) When an exception happens running a query, we should report it to end user

     [ https://issues.apache.org/jira/browse/LIVY-571?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Marco Gaido reassigned LIVY-571:
--------------------------------

    Assignee: Jeffrey(Xilang) Yan

> When an exception happens running a query, we should report it to end user
> --------------------------------------------------------------------------
>
>                 Key: LIVY-571
>                 URL: https://issues.apache.org/jira/browse/LIVY-571
>             Project: Livy
>          Issue Type: Bug
>          Components: Thriftserver
>            Reporter: Marco Gaido
>            Assignee: Jeffrey(Xilang) Yan
>            Priority: Major
>          Time Spent: 20m
>  Remaining Estimate: 0h
>
> When a query fails with an exception on the Livy thriftserver, a meaningless exception is reported to the end user instead of the real one. E.g., with Hive support not enabled in Spark, the following query causes:
> {code}
> 0: jdbc:hive2://localhost:10090/> create table test as select a.* from (select 1, "2") a;
> Error: java.lang.RuntimeException: java.util.NoSuchElementException: Statement 820bb5c2-018b-46ea-9b7f-b0e3b9c31c46 not found in session acf3712b-1f08-4111-950f-559fc3f3f10c.
> org.apache.livy.thriftserver.session.ThriftSessionState.statementNotFound(ThriftSessionState.java:118)
> org.apache.livy.thriftserver.session.ThriftSessionState.cleanupStatement(ThriftSessionState.java:107)
> org.apache.livy.thriftserver.session.CleanupStatementJob.call(CleanupStatementJob.java:43)
> org.apache.livy.thriftserver.session.CleanupStatementJob.call(CleanupStatementJob.java:26)
> org.apache.livy.rsc.driver.JobWrapper.call(JobWrapper.java:64)
> org.apache.livy.rsc.driver.JobWrapper.call(JobWrapper.java:31)
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> java.lang.Thread.run(Thread.java:748)
> {code}
> Looking at the logs, the real problem is of course:
> {code}
> 19/03/23 10:40:32 ERROR LivyExecuteStatementOperation: Error running hive query: 
> org.apache.hive.service.cli.HiveSQLException: java.util.concurrent.ExecutionException: java.lang.RuntimeException: org.apache.spark.sql.AnalysisException: Hive support is required to CREATE Hive TABLE (AS SELECT);;
> 'CreateTable `test`, org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, ErrorIfExists
> +- Project [1#1, 2#2]
>    +- SubqueryAlias `a`
>       +- Project [1 AS 1#1, 2 AS 2#2]
>          +- OneRowRelation
> org.apache.spark.sql.execution.datasources.HiveOnlyCheck$$anonfun$apply$12.apply(rules.scala:392)
> org.apache.spark.sql.execution.datasources.HiveOnlyCheck$$anonfun$apply$12.apply(rules.scala:390)
> org.apache.spark.sql.catalyst.trees.TreeNode.foreach(TreeNode.scala:117)
> org.apache.spark.sql.execution.datasources.HiveOnlyCheck$.apply(rules.scala:390)
> org.apache.spark.sql.execution.datasources.HiveOnlyCheck$.apply(rules.scala:388)
> org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$2.apply(CheckAnalysis.scala:386)
> org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$2.apply(CheckAnalysis.scala:386)
> scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
> scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
> org.apache.spark.sql.catalyst.analysis.CheckAnalysis$class.checkAnalysis(CheckAnalysis.scala:386)
> org.apache.spark.sql.catalyst.analysis.Analyzer.checkAnalysis(Analyzer.scala:95)
> org.apache.spark.sql.catalyst.analysis.Analyzer$$anonfun$executeAndCheck$1.apply(Analyzer.scala:108)
> org.apache.spark.sql.catalyst.analysis.Analyzer$$anonfun$executeAndCheck$1.apply(Analyzer.scala:105)
> org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.markInAnalyzer(AnalysisHelper.scala:201)
> org.apache.spark.sql.catalyst.analysis.Analyzer.executeAndCheck(Analyzer.scala:105)
> org.apache.spark.sql.execution.QueryExecution.analyzed$lzycompute(QueryExecution.scala:57)
> org.apache.spark.sql.execution.QueryExecution.analyzed(QueryExecution.scala:55)
> org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:47)
> org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:79)
> org.apache.spark.sql.SparkSession.sql(SparkSession.scala:642)
> org.apache.livy.thriftserver.session.SqlJob.executeSql(SqlJob.java:74)
> org.apache.livy.thriftserver.session.SqlJob.call(SqlJob.java:64)
> org.apache.livy.thriftserver.session.SqlJob.call(SqlJob.java:35)
> org.apache.livy.rsc.driver.JobWrapper.call(JobWrapper.java:64)
> org.apache.livy.rsc.driver.JobWrapper.call(JobWrapper.java:31)
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> java.lang.Thread.run(Thread.java:748)
> 	at org.apache.livy.thriftserver.LivyExecuteStatementOperation.execute(LivyExecuteStatementOperation.scala:147)
> 	at org.apache.livy.thriftserver.LivyExecuteStatementOperation$$anon$1$$anon$2.run(LivyExecuteStatementOperation.scala:97)
> 	at org.apache.livy.thriftserver.LivyExecuteStatementOperation$$anon$1$$anon$2.run(LivyExecuteStatementOperation.scala:94)
> 	at java.security.AccessController.doPrivileged(Native Method)
> 	at javax.security.auth.Subject.doAs(Subject.java:422)
> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
> 	at org.apache.livy.thriftserver.LivyExecuteStatementOperation$$anon$1.run(LivyExecuteStatementOperation.scala:107)
> 	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> 	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> 	at java.lang.Thread.run(Thread.java:748)
>  ....
> {code}
> It is this exception that should be reported to the end user.
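> The masking appears to happen because the query job fails first (here with the AnalysisException), and a later cleanup step then throws its own "Statement ... not found" error, which is what reaches the client. A minimal sketch of the intended behavior, with hypothetical names and not the actual Livy code, could keep the first failure and report it instead:
> {code}
> import scala.util.{Failure, Try}
>
> // Hypothetical sketch: run the statement and its cleanup sequentially, but
> // never let a cleanup failure hide the original query error.
> object StatementErrorPropagation {
>   def runStatement(executeSql: () => Unit, cleanup: () => Unit): Unit = {
>     val result  = Try(executeSql())  // e.g. AnalysisException: Hive support is required ...
>     val cleaned = Try(cleanup())     // may fail with "Statement ... not found"
>     (result, cleaned) match {
>       case (Failure(real), _) => throw real  // report the real query failure to the end user
>       case (_, Failure(e))    => throw e     // cleanup errors only matter if the query succeeded
>       case _                  => ()
>     }
>   }
> }
> {code}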



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)