Posted to commits@spark.apache.org by li...@apache.org on 2017/02/14 06:49:34 UTC

spark git commit: [SPARK-19585][DOC][SQL] Fix the cacheTable and uncacheTable api call in the doc

Repository: spark
Updated Branches:
  refs/heads/master 1ab97310e -> 9b5e460a9


[SPARK-19585][DOC][SQL] Fix the cacheTable and uncacheTable api call in the doc

## What changes were proposed in this pull request?

https://spark.apache.org/docs/latest/sql-programming-guide.html#caching-data-in-memory
In the doc, the calls `spark.cacheTable("tableName")` and `spark.uncacheTable("tableName")` actually need to be `spark.catalog.cacheTable("tableName")` and `spark.catalog.uncacheTable("tableName")`.

## How was this patch tested?
Built the docs and verified the change shows up fine.

Author: Sunitha Kambhampati <sk...@us.ibm.com>

Closes #16919 from skambha/docChange.


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/9b5e460a
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/9b5e460a
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/9b5e460a

Branch: refs/heads/master
Commit: 9b5e460a9168ab78607034434ca45ab6cb51e5a6
Parents: 1ab9731
Author: Sunitha Kambhampati <sk...@us.ibm.com>
Authored: Mon Feb 13 22:49:29 2017 -0800
Committer: Xiao Li <ga...@gmail.com>
Committed: Mon Feb 13 22:49:29 2017 -0800

----------------------------------------------------------------------
 docs/sql-programming-guide.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark/blob/9b5e460a/docs/sql-programming-guide.md
----------------------------------------------------------------------
diff --git a/docs/sql-programming-guide.md b/docs/sql-programming-guide.md
index 9cf480c..235f5ec 100644
--- a/docs/sql-programming-guide.md
+++ b/docs/sql-programming-guide.md
@@ -1272,9 +1272,9 @@ turning on some experimental options.
 
 ## Caching Data In Memory
 
-Spark SQL can cache tables using an in-memory columnar format by calling `spark.cacheTable("tableName")` or `dataFrame.cache()`.
+Spark SQL can cache tables using an in-memory columnar format by calling `spark.catalog.cacheTable("tableName")` or `dataFrame.cache()`.
 Then Spark SQL will scan only required columns and will automatically tune compression to minimize
-memory usage and GC pressure. You can call `spark.uncacheTable("tableName")` to remove the table from memory.
+memory usage and GC pressure. You can call `spark.catalog.uncacheTable("tableName")` to remove the table from memory.
 
 Configuration of in-memory caching can be done using the `setConf` method on `SparkSession` or by running
 `SET key=value` commands using SQL.


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@spark.apache.org
For additional commands, e-mail: commits-help@spark.apache.org