Posted to issues@spark.apache.org by "Jacek Laskowski (JIRA)" <ji...@apache.org> on 2018/01/04 09:18:00 UTC

[jira] [Created] (SPARK-22954) ANALYZE TABLE fails with NoSuchTableException for temporary tables (but should have reported "not supported on views")

Jacek Laskowski created SPARK-22954:
---------------------------------------

             Summary: ANALYZE TABLE fails with NoSuchTableException for temporary tables (but should have reported "not supported on views")
                 Key: SPARK-22954
                 URL: https://issues.apache.org/jira/browse/SPARK-22954
             Project: Spark
          Issue Type: Bug
          Components: SQL
    Affects Versions: 2.3.0
         Environment: {code}
$ ./bin/spark-shell --version
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.3.0-SNAPSHOT
      /_/

Using Scala version 2.11.8, Java HotSpot(TM) 64-Bit Server VM, 1.8.0_152
Branch master
Compiled by user jacek on 2018-01-04T05:44:05Z
Revision 7d045c5f00e2c7c67011830e2169a4e130c3ace8
{code}
            Reporter: Jacek Laskowski
            Priority: Minor


{{ANALYZE TABLE}} fails with {{NoSuchTableException: Table or view 'names' not found in database 'default';}} for temporary tables (views), while the actual reason is that the command only supports permanent tables, which [it could report|https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/execution/command/AnalyzeTableCommand.scala#L38] if it got the chance to check first.

{code}
scala> names.createOrReplaceTempView("names")

scala> sql("ANALYZE TABLE names COMPUTE STATISTICS")
org.apache.spark.sql.catalyst.analysis.NoSuchTableException: Table or view 'names' not found in database 'default';
  at org.apache.spark.sql.catalyst.catalog.SessionCatalog.requireTableExists(SessionCatalog.scala:181)
  at org.apache.spark.sql.catalyst.catalog.SessionCatalog.getTableMetadata(SessionCatalog.scala:398)
  at org.apache.spark.sql.execution.command.AnalyzeTableCommand.run(AnalyzeTableCommand.scala:36)
  at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
  at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
  at org.apache.spark.sql.execution.command.ExecutedCommandExec.executeCollect(commands.scala:79)
  at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:187)
  at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:187)
  at org.apache.spark.sql.Dataset$$anonfun$51.apply(Dataset.scala:3244)
  at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
  at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3243)
  at org.apache.spark.sql.Dataset.<init>(Dataset.scala:187)
  at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:72)
  at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:638)
  ... 50 elided
{code}
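For illustration only, a minimal plain-Scala sketch of the desired order of checks (this is not Spark's actual code; the object, set, and method names below are made up, and the catalog is mocked as two sets):

```scala
// Hypothetical sketch: resolve the identifier against temporary views
// *before* looking up permanent-table metadata, so the error message
// matches the real cause ("not supported on views") instead of
// NoSuchTableException.
object AnalyzeSketch {
  // Mock session state; in Spark this would come from SessionCatalog.
  val tempViews: Set[String] = Set("names")
  val permanentTables: Set[String] = Set("people")

  def analyzeTable(table: String): String =
    if (tempViews.contains(table))
      "ANALYZE TABLE is not supported on views."  // the expected message
    else if (permanentTables.contains(table))
      s"computed statistics for table '$table'"
    else
      s"Table or view '$table' not found in database 'default';"
}
```

With this ordering, {{analyzeTable("names")}} reports the unsupported-on-views message rather than the misleading not-found error.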




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
