Posted to issues@spark.apache.org by "Andreas Maier (JIRA)" <ji...@apache.org> on 2017/11/27 10:51:00 UTC
[jira] [Created] (SPARK-22613) Make UNCACHE TABLE behaviour consistent with CACHE TABLE
Andreas Maier created SPARK-22613:
-------------------------------------
Summary: Make UNCACHE TABLE behaviour consistent with CACHE TABLE
Key: SPARK-22613
URL: https://issues.apache.org/jira/browse/SPARK-22613
Project: Spark
Issue Type: Improvement
Components: Spark Core, SQL
Affects Versions: 2.2.0
Reporter: Andreas Maier
Priority: Minor
The Spark SQL command CACHE TABLE is eager by default, and it offers an optional LAZY keyword for cases where you do not want to cache the complete table immediately (see https://docs.databricks.com/spark/latest/spark-sql/language-manual/cache-table.html). The corresponding command UNCACHE TABLE, however, is lazy by default and offers no EAGER option (see https://docs.databricks.com/spark/latest/spark-sql/language-manual/uncache-table.html and https://stackoverflow.com/questions/47226494/is-uncache-table-a-lazy-operation-in-spark-sql). As a result, there is no way to both cache and uncache a table eagerly using Spark SQL alone.
As a user I want an EAGER option for UNCACHE TABLE. Alternatively, the behaviour of UNCACHE TABLE could be changed to be eager by default (consistent with CACHE TABLE), with a LAZY option then offered for UNCACHE TABLE as well.
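To illustrate the asymmetry, a short sketch of the syntax as it stands today, plus the proposed keyword (the EAGER line is hypothetical syntax suggested by this issue, not something Spark currently accepts):

```sql
-- Existing behaviour: caching is eager unless LAZY is specified
CACHE TABLE events;            -- materializes the cache immediately
CACHE LAZY TABLE events;       -- caches on first access

-- Existing behaviour: uncaching is always lazy
UNCACHE TABLE events;          -- cached blocks are released asynchronously

-- Proposed (hypothetical) syntax from this issue:
-- UNCACHE EAGER TABLE events; -- would block until the cache is released
```

In the meantime, an eager uncache can be approximated from the Dataset API with `spark.table("events").unpersist(blocking = true)`, which blocks until the cached blocks are deleted, though this steps outside pure Spark SQL.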
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)