Posted to commits@spark.apache.org by ma...@apache.org on 2014/09/17 21:41:53 UTC

git commit: [SQL][DOCS] Improve table caching section

Repository: spark
Updated Branches:
  refs/heads/master 8fbd5f4a9 -> cbf983bb4


[SQL][DOCS] Improve table caching section

Author: Michael Armbrust <mi...@databricks.com>

Closes #2434 from marmbrus/patch-1 and squashes the following commits:

67215be [Michael Armbrust] [SQL][DOCS] Improve table caching section


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/cbf983bb
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/cbf983bb
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/cbf983bb

Branch: refs/heads/master
Commit: cbf983bb4a550ff26756ed7308fb03db42cffcff
Parents: 8fbd5f4
Author: Michael Armbrust <mi...@databricks.com>
Authored: Wed Sep 17 12:41:49 2014 -0700
Committer: Michael Armbrust <mi...@databricks.com>
Committed: Wed Sep 17 12:41:49 2014 -0700

----------------------------------------------------------------------
 docs/sql-programming-guide.md | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark/blob/cbf983bb/docs/sql-programming-guide.md
----------------------------------------------------------------------
diff --git a/docs/sql-programming-guide.md b/docs/sql-programming-guide.md
index c498b41..5212e19 100644
--- a/docs/sql-programming-guide.md
+++ b/docs/sql-programming-guide.md
@@ -801,12 +801,12 @@ turning on some experimental options.
 
 ## Caching Data In Memory
 
-Spark SQL can cache tables using an in-memory columnar format by calling `cacheTable("tableName")`.
+Spark SQL can cache tables using an in-memory columnar format by calling `sqlContext.cacheTable("tableName")`.
 Then Spark SQL will scan only required columns and will automatically tune compression to minimize
-memory usage and GC pressure. You can call `uncacheTable("tableName")` to remove the table from memory.
+memory usage and GC pressure. You can call `sqlContext.uncacheTable("tableName")` to remove the table from memory.
 
-Note that if you call `cache` rather than `cacheTable`, tables will _not_ be cached using
-the in-memory columnar format, and therefore `cacheTable` is strongly recommended for this use case.
+Note that if you call `schemaRDD.cache()` rather than `sqlContext.cacheTable(...)`, tables will _not_ be cached using
+the in-memory columnar format, and therefore `sqlContext.cacheTable(...)` is strongly recommended for this use case.
 
 Configuration of in-memory caching can be done using the `setConf` method on SQLContext or by running
 `SET key=value` commands using SQL.
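
For readers following along, here is a minimal, self-contained sketch of the workflow the patched section describes, written against the Spark 1.1-era SQLContext/SchemaRDD API. The Person case class, the "people" table name, the sample rows, and the chosen config key are illustrative assumptions and are not part of this patch.

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.SQLContext

    // Hypothetical record type used only for this sketch.
    case class Person(name: String, age: Int)

    object CacheTableSketch {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(
          new SparkConf().setAppName("cache-table-sketch").setMaster("local[*]"))
        val sqlContext = new SQLContext(sc)
        import sqlContext.createSchemaRDD  // implicit RDD[Product] -> SchemaRDD conversion

        // Register a small SchemaRDD as a temporary table so it can be cached by name.
        val people = sc.parallelize(Seq(Person("Alice", 30), Person("Bob", 25)))
        people.registerTempTable("people")

        // One example of tuning in-memory caching via setConf, mirroring `SET key=value` in SQL.
        sqlContext.setConf("spark.sql.inMemoryColumnarStorage.compressed", "true")

        // Cache the table in the in-memory columnar format; subsequent queries scan
        // only the required columns with automatically tuned compression.
        sqlContext.cacheTable("people")
        sqlContext.sql("SELECT name FROM people WHERE age > 26").collect().foreach(println)

        // Remove the table from memory once it is no longer needed.
        sqlContext.uncacheTable("people")

        sc.stop()
      }
    }

Note that calling `people.cache()` on the SchemaRDD itself would cache it as ordinary RDD blocks rather than in the columnar format, which is exactly the distinction the patched wording calls out.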


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@spark.apache.org
For additional commands, e-mail: commits-help@spark.apache.org