Posted to commits@spark.apache.org by rx...@apache.org on 2016/11/11 22:03:39 UTC

[50/51] [partial] spark-website git commit: Add docs for 2.0.2.

http://git-wip-us.apache.org/repos/asf/spark-website/blob/0bd36316/site/docs/2.0.2/README.md
----------------------------------------------------------------------
diff --git a/site/docs/2.0.2/README.md b/site/docs/2.0.2/README.md
new file mode 100644
index 0000000..ffd3b57
--- /dev/null
+++ b/site/docs/2.0.2/README.md
@@ -0,0 +1,72 @@
+Welcome to the Spark documentation!
+
+This README will walk you through navigating and building the Spark documentation, which is included
+here with the Spark source code. You can also find documentation specific to release versions of
+Spark at http://spark.apache.org/documentation.html.
+
+Read on to learn more about viewing the documentation in plain text (i.e., Markdown) or building the
+documentation yourself. Why build it yourself? So that you have the docs that correspond to
+whichever version of Spark you currently have checked out of revision control.
+
+## Prerequisites
+The Spark documentation build uses a number of tools to generate the HTML docs and the API docs for
+Scala, Python, and R.
+
+You need to have [Ruby](https://www.ruby-lang.org/en/documentation/installation/) and
+[Python](https://docs.python.org/2/using/unix.html#getting-and-installing-the-latest-version-of-python)
+installed. Also install the following libraries:
+```sh
+    $ sudo gem install jekyll jekyll-redirect-from pygments.rb
+    $ sudo pip install Pygments
+    # Following is needed only for generating API docs
+    $ sudo pip install sphinx pypandoc
+    $ sudo Rscript -e 'install.packages(c("knitr", "devtools", "roxygen2", "testthat", "rmarkdown"), repos="http://cran.stat.ucla.edu/")'
+```
+(Note: if you are on a system with both Ruby 1.9 and Ruby 2.0, you may need to replace `gem` with `gem2.0`.)
+
+## Generating the Documentation HTML
+
+We include the Spark documentation as part of the source (as opposed to using a hosted wiki, such as
+the GitHub wiki, as the definitive documentation) so that the documentation evolves along with
+the source code and is captured by revision control (currently git). This way the code automatically
+includes the version of the documentation that is relevant regardless of which version or release
+you have checked out or downloaded.
+
+In this directory you will find text files formatted using Markdown, with an `.md` suffix. You can
+read those text files directly if you want. Start with `index.md`.
+
+Execute `jekyll build` from the `docs/` directory to compile the site. Compiling the site with
+Jekyll will create a directory called `_site` containing `index.html` as well as the rest of the
+compiled files.
+
+    $ cd docs
+    $ jekyll build
+
+You can modify the default Jekyll build as follows:
+```sh
+    # Skip generating API docs (which takes a while)
+    $ SKIP_API=1 jekyll build
+    
+    # Serve content locally on port 4000
+    $ jekyll serve --watch
+    
+    # Build the site with extra features used on the live page
+    $ PRODUCTION=1 jekyll build
+```
+
+## API Docs (Scaladoc, Sphinx, roxygen2)
+
+You can build just the Spark scaladoc by running `build/sbt unidoc` from the `SPARK_PROJECT_ROOT` directory.
+
+Similarly, you can build just the PySpark docs by running `make html` from the
+`SPARK_PROJECT_ROOT/python/docs` directory. Documentation is only generated for classes that are listed as
+public in `__init__.py`. The SparkR docs can be built by running `SPARK_PROJECT_ROOT/R/create-docs.sh`.
+
+When you run `jekyll` in the `docs` directory, it will also copy the scaladoc for the various
+Spark subprojects into the `docs` directory (and then also into the `_site` directory). We use a
+Jekyll plugin to run `build/sbt unidoc` before building the site, so if you haven't run it recently
+this may take some time, as it generates all of the scaladoc. The Jekyll plugin also generates the
+PySpark docs using [Sphinx](http://sphinx-doc.org/).
+
+NOTE: To skip the step of building and copying over the Scala, Python, and R API docs, run `SKIP_API=1
+jekyll build`.
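
The three API-doc steps above can be sketched as one small helper script. This is an illustrative sketch only, not part of the Spark build: it assumes it is run from the Spark project root, and the `run`/`DRY_RUN` wrapper is invented here so the steps can be previewed without actually building anything.

```shell
# Sketch: the three API-doc builds described above, run from the Spark
# project root. With DRY_RUN=1 (the default here) each step is only printed.
DRY_RUN=${DRY_RUN:-1}

run() {
    if [ "$DRY_RUN" = 1 ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

run build/sbt unidoc           # Scaladoc/Javadoc for the Scala and Java APIs
run make -C python/docs html   # PySpark docs via Sphinx
run R/create-docs.sh           # SparkR docs via roxygen2/knitr
```

Setting `DRY_RUN=0` would execute the steps for real; note that `build/sbt unidoc` can take a long time on a cold build.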

http://git-wip-us.apache.org/repos/asf/spark-website/blob/0bd36316/site/docs/2.0.2/api.html
----------------------------------------------------------------------
diff --git a/site/docs/2.0.2/api.html b/site/docs/2.0.2/api.html
new file mode 100644
index 0000000..731bd07
--- /dev/null
+++ b/site/docs/2.0.2/api.html
@@ -0,0 +1,178 @@
+
+<!DOCTYPE html>
+<!--[if lt IE 7]>      <html class="no-js lt-ie9 lt-ie8 lt-ie7"> <![endif]-->
+<!--[if IE 7]>         <html class="no-js lt-ie9 lt-ie8"> <![endif]-->
+<!--[if IE 8]>         <html class="no-js lt-ie9"> <![endif]-->
+<!--[if gt IE 8]><!--> <html class="no-js"> <!--<![endif]-->
+    <head>
+        <meta charset="utf-8">
+        <meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1">
+        <title>Spark API Documentation - Spark 2.0.2 Documentation</title>
+        
+
+        
+
+        <link rel="stylesheet" href="css/bootstrap.min.css">
+        <style>
+            body {
+                padding-top: 60px;
+                padding-bottom: 40px;
+            }
+        </style>
+        <meta name="viewport" content="width=device-width">
+        <link rel="stylesheet" href="css/bootstrap-responsive.min.css">
+        <link rel="stylesheet" href="css/main.css">
+
+        <script src="js/vendor/modernizr-2.6.1-respond-1.1.0.min.js"></script>
+
+        <link rel="stylesheet" href="css/pygments-default.css">
+
+        
+        <!-- Google analytics script -->
+        <script type="text/javascript">
+          var _gaq = _gaq || [];
+          _gaq.push(['_setAccount', 'UA-32518208-2']);
+          _gaq.push(['_trackPageview']);
+
+          (function() {
+            var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true;
+            ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
+            var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(ga, s);
+          })();
+        </script>
+        
+
+    </head>
+    <body>
+        <!--[if lt IE 7]>
+            <p class="chromeframe">You are using an outdated browser. <a href="http://browsehappy.com/">Upgrade your browser today</a> or <a href="http://www.google.com/chromeframe/?redirect=true">install Google Chrome Frame</a> to better experience this site.</p>
+        <![endif]-->
+
+        <!-- This code is taken from http://twitter.github.com/bootstrap/examples/hero.html -->
+
+        <div class="navbar navbar-fixed-top" id="topbar">
+            <div class="navbar-inner">
+                <div class="container">
+                    <div class="brand"><a href="index.html">
+                      <img src="img/spark-logo-hd.png" style="height:50px;"/></a><span class="version">2.0.2</span>
+                    </div>
+                    <ul class="nav">
+                        <!--TODO(andyk): Add class="active" attribute to li some how.-->
+                        <li><a href="index.html">Overview</a></li>
+
+                        <li class="dropdown">
+                            <a href="#" class="dropdown-toggle" data-toggle="dropdown">Programming Guides<b class="caret"></b></a>
+                            <ul class="dropdown-menu">
+                                <li><a href="quick-start.html">Quick Start</a></li>
+                                <li><a href="programming-guide.html">Spark Programming Guide</a></li>
+                                <li class="divider"></li>
+                                <li><a href="streaming-programming-guide.html">Spark Streaming</a></li>
+                                <li><a href="sql-programming-guide.html">DataFrames, Datasets and SQL</a></li>
+                                <li><a href="structured-streaming-programming-guide.html">Structured Streaming</a></li>
+                                <li><a href="ml-guide.html">MLlib (Machine Learning)</a></li>
+                                <li><a href="graphx-programming-guide.html">GraphX (Graph Processing)</a></li>
+                                <li><a href="sparkr.html">SparkR (R on Spark)</a></li>
+                            </ul>
+                        </li>
+
+                        <li class="dropdown">
+                            <a href="#" class="dropdown-toggle" data-toggle="dropdown">API Docs<b class="caret"></b></a>
+                            <ul class="dropdown-menu">
+                                <li><a href="api/scala/index.html#org.apache.spark.package">Scala</a></li>
+                                <li><a href="api/java/index.html">Java</a></li>
+                                <li><a href="api/python/index.html">Python</a></li>
+                                <li><a href="api/R/index.html">R</a></li>
+                            </ul>
+                        </li>
+
+                        <li class="dropdown">
+                            <a href="#" class="dropdown-toggle" data-toggle="dropdown">Deploying<b class="caret"></b></a>
+                            <ul class="dropdown-menu">
+                                <li><a href="cluster-overview.html">Overview</a></li>
+                                <li><a href="submitting-applications.html">Submitting Applications</a></li>
+                                <li class="divider"></li>
+                                <li><a href="spark-standalone.html">Spark Standalone</a></li>
+                                <li><a href="running-on-mesos.html">Mesos</a></li>
+                                <li><a href="running-on-yarn.html">YARN</a></li>
+                            </ul>
+                        </li>
+
+                        <li class="dropdown">
+                            <a href="api.html" class="dropdown-toggle" data-toggle="dropdown">More<b class="caret"></b></a>
+                            <ul class="dropdown-menu">
+                                <li><a href="configuration.html">Configuration</a></li>
+                                <li><a href="monitoring.html">Monitoring</a></li>
+                                <li><a href="tuning.html">Tuning Guide</a></li>
+                                <li><a href="job-scheduling.html">Job Scheduling</a></li>
+                                <li><a href="security.html">Security</a></li>
+                                <li><a href="hardware-provisioning.html">Hardware Provisioning</a></li>
+                                <li class="divider"></li>
+                                <li><a href="building-spark.html">Building Spark</a></li>
+                                <li><a href="https://cwiki.apache.org/confluence/display/SPARK/Contributing+to+Spark">Contributing to Spark</a></li>
+                                <li><a href="https://cwiki.apache.org/confluence/display/SPARK/Third+Party+Projects">Third Party Projects</a></li>
+                            </ul>
+                        </li>
+                    </ul>
+                    <!--<p class="navbar-text pull-right"><span class="version-text">v2.0.2</span></p>-->
+                </div>
+            </div>
+        </div>
+
+        <div class="container-wrapper">
+
+            
+                <div class="content" id="content">
+                    
+                        <h1 class="title">Spark API Documentation</h1>
+                    
+
+                    <p>Here you can read API docs for Spark and its submodules.</p>
+
+<ul>
+  <li><a href="api/scala/index.html">Spark Scala API (Scaladoc)</a></li>
+  <li><a href="api/java/index.html">Spark Java API (Javadoc)</a></li>
+  <li><a href="api/python/index.html">Spark Python API (Sphinx)</a></li>
+  <li><a href="api/R/index.html">Spark R API (Roxygen2)</a></li>
+</ul>
+
+
+                </div>
+            
+             <!-- /container -->
+        </div>
+
+        <script src="js/vendor/jquery-1.8.0.min.js"></script>
+        <script src="js/vendor/bootstrap.min.js"></script>
+        <script src="js/vendor/anchor.min.js"></script>
+        <script src="js/main.js"></script>
+
+        <!-- MathJax Section -->
+        <script type="text/x-mathjax-config">
+            MathJax.Hub.Config({
+                TeX: { equationNumbers: { autoNumber: "AMS" } }
+            });
+        </script>
+        <script>
+            // Note that we load MathJax this way to work with local file (file://), HTTP and HTTPS.
+            // We could use "//cdn.mathjax...", but that won't support "file://".
+            (function(d, script) {
+                script = d.createElement('script');
+                script.type = 'text/javascript';
+                script.async = true;
+                script.onload = function(){
+                    MathJax.Hub.Config({
+                        tex2jax: {
+                            inlineMath: [ ["$", "$"], ["\\\\(","\\\\)"] ],
+                            displayMath: [ ["$$","$$"], ["\\[", "\\]"] ],
+                            processEscapes: true,
+                            skipTags: ['script', 'noscript', 'style', 'textarea', 'pre']
+                        }
+                    });
+                };
+                script.src = ('https:' == document.location.protocol ? 'https://' : 'http://') +
+                    'cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML';
+                d.getElementsByTagName('head')[0].appendChild(script);
+            }(document));
+        </script>
+    </body>
+</html>

http://git-wip-us.apache.org/repos/asf/spark-website/blob/0bd36316/site/docs/2.0.2/api/R/00Index.html
----------------------------------------------------------------------
diff --git a/site/docs/2.0.2/api/R/00Index.html b/site/docs/2.0.2/api/R/00Index.html
new file mode 100644
index 0000000..731a957
--- /dev/null
+++ b/site/docs/2.0.2/api/R/00Index.html
@@ -0,0 +1,1432 @@
+<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
+<html><head><title>R: R Frontend for Apache Spark</title>
+<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
+<link rel="stylesheet" type="text/css" href="R.css">
+</head><body>
+<h1> R Frontend for Apache Spark
+<img class="toplogo" src="http://stat.ethz.ch/R-manual/R-devel/doc/html/logo.jpg" alt="[R logo]">
+</h1>
+<hr>
+<div align="center">
+<a href="http://stat.ethz.ch/R-manual/R-devel/doc/html/packages.html"><img src="http://stat.ethz.ch/R-manual/R-devel/doc/html/left.jpg" alt="[Up]" width="30" height="30" border="0"></a>
+<a href="http://stat.ethz.ch/R-manual/R-devel/doc/html/index.html"><img src="http://stat.ethz.ch/R-manual/R-devel/doc/html/up.jpg" alt="[Top]" width="30" height="30" border="0"></a>
+</div><h2>Documentation for package &lsquo;SparkR&rsquo; version 2.0.2</h2>
+
+<ul><li><a href="../DESCRIPTION">DESCRIPTION file</a>.</li>
+</ul>
+
+<h2>Help Pages</h2>
+
+
+<p align="center">
+<a href="#A">A</a>
+<a href="#B">B</a>
+<a href="#C">C</a>
+<a href="#D">D</a>
+<a href="#E">E</a>
+<a href="#F">F</a>
+<a href="#G">G</a>
+<a href="#H">H</a>
+<a href="#I">I</a>
+<a href="#J">J</a>
+<a href="#K">K</a>
+<a href="#L">L</a>
+<a href="#M">M</a>
+<a href="#N">N</a>
+<a href="#O">O</a>
+<a href="#P">P</a>
+<a href="#Q">Q</a>
+<a href="#R">R</a>
+<a href="#S">S</a>
+<a href="#T">T</a>
+<a href="#U">U</a>
+<a href="#V">V</a>
+<a href="#W">W</a>
+<a href="#Y">Y</a>
+<a href="#misc">misc</a>
+</p>
+
+
+<h2><a name="A">-- A --</a></h2>
+
+<table width="100%">
+<tr><td width="25%"><a href="abs.html">abs</a></td>
+<td>abs</td></tr>
+<tr><td width="25%"><a href="abs.html">abs-method</a></td>
+<td>abs</td></tr>
+<tr><td width="25%"><a href="acos.html">acos</a></td>
+<td>acos</td></tr>
+<tr><td width="25%"><a href="acos.html">acos-method</a></td>
+<td>acos</td></tr>
+<tr><td width="25%"><a href="add_months.html">add_months</a></td>
+<td>add_months</td></tr>
+<tr><td width="25%"><a href="add_months.html">add_months-method</a></td>
+<td>add_months</td></tr>
+<tr><td width="25%"><a href="AFTSurvivalRegressionModel-class.html">AFTSurvivalRegressionModel-class</a></td>
+<td>S4 class that represents a AFTSurvivalRegressionModel</td></tr>
+<tr><td width="25%"><a href="summarize.html">agg</a></td>
+<td>Summarize data across columns</td></tr>
+<tr><td width="25%"><a href="summarize.html">agg-method</a></td>
+<td>Summarize data across columns</td></tr>
+<tr><td width="25%"><a href="alias.html">alias</a></td>
+<td>alias</td></tr>
+<tr><td width="25%"><a href="alias.html">alias-method</a></td>
+<td>alias</td></tr>
+<tr><td width="25%"><a href="approxCountDistinct.html">approxCountDistinct</a></td>
+<td>Returns the approximate number of distinct items in a group</td></tr>
+<tr><td width="25%"><a href="approxCountDistinct.html">approxCountDistinct-method</a></td>
+<td>Returns the approximate number of distinct items in a group</td></tr>
+<tr><td width="25%"><a href="approxQuantile.html">approxQuantile</a></td>
+<td>Calculates the approximate quantiles of a numerical column of a SparkDataFrame</td></tr>
+<tr><td width="25%"><a href="approxQuantile.html">approxQuantile-method</a></td>
+<td>Calculates the approximate quantiles of a numerical column of a SparkDataFrame</td></tr>
+<tr><td width="25%"><a href="arrange.html">arrange</a></td>
+<td>Arrange Rows by Variables</td></tr>
+<tr><td width="25%"><a href="arrange.html">arrange-method</a></td>
+<td>Arrange Rows by Variables</td></tr>
+<tr><td width="25%"><a href="array_contains.html">array_contains</a></td>
+<td>array_contains</td></tr>
+<tr><td width="25%"><a href="array_contains.html">array_contains-method</a></td>
+<td>array_contains</td></tr>
+<tr><td width="25%"><a href="as.data.frame.html">as.data.frame</a></td>
+<td>Download data from a SparkDataFrame into a R data.frame</td></tr>
+<tr><td width="25%"><a href="as.data.frame.html">as.data.frame-method</a></td>
+<td>Download data from a SparkDataFrame into a R data.frame</td></tr>
+<tr><td width="25%"><a href="createDataFrame.html">as.DataFrame</a></td>
+<td>Create a SparkDataFrame</td></tr>
+<tr><td width="25%"><a href="createDataFrame.html">as.DataFrame.default</a></td>
+<td>Create a SparkDataFrame</td></tr>
+<tr><td width="25%"><a href="columnfunctions.html">asc</a></td>
+<td>A set of operations working with SparkDataFrame columns</td></tr>
+<tr><td width="25%"><a href="ascii.html">ascii</a></td>
+<td>ascii</td></tr>
+<tr><td width="25%"><a href="ascii.html">ascii-method</a></td>
+<td>ascii</td></tr>
+<tr><td width="25%"><a href="asin.html">asin</a></td>
+<td>asin</td></tr>
+<tr><td width="25%"><a href="asin.html">asin-method</a></td>
+<td>asin</td></tr>
+<tr><td width="25%"><a href="atan.html">atan</a></td>
+<td>atan</td></tr>
+<tr><td width="25%"><a href="atan.html">atan-method</a></td>
+<td>atan</td></tr>
+<tr><td width="25%"><a href="atan2.html">atan2</a></td>
+<td>atan2</td></tr>
+<tr><td width="25%"><a href="atan2.html">atan2-method</a></td>
+<td>atan2</td></tr>
+<tr><td width="25%"><a href="attach.html">attach</a></td>
+<td>Attach SparkDataFrame to R search path</td></tr>
+<tr><td width="25%"><a href="attach.html">attach-method</a></td>
+<td>Attach SparkDataFrame to R search path</td></tr>
+<tr><td width="25%"><a href="avg.html">avg</a></td>
+<td>avg</td></tr>
+<tr><td width="25%"><a href="avg.html">avg-method</a></td>
+<td>avg</td></tr>
+</table>
+
+<h2><a name="B">-- B --</a></h2>
+
+<table width="100%">
+<tr><td width="25%"><a href="base64.html">base64</a></td>
+<td>base64</td></tr>
+<tr><td width="25%"><a href="base64.html">base64-method</a></td>
+<td>base64</td></tr>
+<tr><td width="25%"><a href="between.html">between</a></td>
+<td>between</td></tr>
+<tr><td width="25%"><a href="between.html">between-method</a></td>
+<td>between</td></tr>
+<tr><td width="25%"><a href="bin.html">bin</a></td>
+<td>bin</td></tr>
+<tr><td width="25%"><a href="bin.html">bin-method</a></td>
+<td>bin</td></tr>
+<tr><td width="25%"><a href="bitwiseNOT.html">bitwiseNOT</a></td>
+<td>bitwiseNOT</td></tr>
+<tr><td width="25%"><a href="bitwiseNOT.html">bitwiseNOT-method</a></td>
+<td>bitwiseNOT</td></tr>
+<tr><td width="25%"><a href="bround.html">bround</a></td>
+<td>bround</td></tr>
+<tr><td width="25%"><a href="bround.html">bround-method</a></td>
+<td>bround</td></tr>
+</table>
+
+<h2><a name="C">-- C --</a></h2>
+
+<table width="100%">
+<tr><td width="25%"><a href="cache.html">cache</a></td>
+<td>Cache</td></tr>
+<tr><td width="25%"><a href="cache.html">cache-method</a></td>
+<td>Cache</td></tr>
+<tr><td width="25%"><a href="cacheTable.html">cacheTable</a></td>
+<td>Cache Table</td></tr>
+<tr><td width="25%"><a href="cacheTable.html">cacheTable.default</a></td>
+<td>Cache Table</td></tr>
+<tr><td width="25%"><a href="cancelJobGroup.html">cancelJobGroup</a></td>
+<td>Cancel active jobs for the specified group</td></tr>
+<tr><td width="25%"><a href="cancelJobGroup.html">cancelJobGroup.default</a></td>
+<td>Cancel active jobs for the specified group</td></tr>
+<tr><td width="25%"><a href="cast.html">cast</a></td>
+<td>Casts the column to a different data type.</td></tr>
+<tr><td width="25%"><a href="cast.html">cast-method</a></td>
+<td>Casts the column to a different data type.</td></tr>
+<tr><td width="25%"><a href="cbrt.html">cbrt</a></td>
+<td>cbrt</td></tr>
+<tr><td width="25%"><a href="cbrt.html">cbrt-method</a></td>
+<td>cbrt</td></tr>
+<tr><td width="25%"><a href="ceil.html">ceil</a></td>
+<td>Computes the ceiling of the given value</td></tr>
+<tr><td width="25%"><a href="ceil.html">ceil-method</a></td>
+<td>Computes the ceiling of the given value</td></tr>
+<tr><td width="25%"><a href="ceil.html">ceiling</a></td>
+<td>Computes the ceiling of the given value</td></tr>
+<tr><td width="25%"><a href="ceil.html">ceiling-method</a></td>
+<td>Computes the ceiling of the given value</td></tr>
+<tr><td width="25%"><a href="clearCache.html">clearCache</a></td>
+<td>Clear Cache</td></tr>
+<tr><td width="25%"><a href="clearCache.html">clearCache.default</a></td>
+<td>Clear Cache</td></tr>
+<tr><td width="25%"><a href="clearJobGroup.html">clearJobGroup</a></td>
+<td>Clear current job group ID and its description</td></tr>
+<tr><td width="25%"><a href="clearJobGroup.html">clearJobGroup.default</a></td>
+<td>Clear current job group ID and its description</td></tr>
+<tr><td width="25%"><a href="collect.html">collect</a></td>
+<td>Collects all the elements of a SparkDataFrame and coerces them into an R data.frame.</td></tr>
+<tr><td width="25%"><a href="collect.html">collect-method</a></td>
+<td>Collects all the elements of a SparkDataFrame and coerces them into an R data.frame.</td></tr>
+<tr><td width="25%"><a href="columns.html">colnames</a></td>
+<td>Column Names of SparkDataFrame</td></tr>
+<tr><td width="25%"><a href="columns.html">colnames-method</a></td>
+<td>Column Names of SparkDataFrame</td></tr>
+<tr><td width="25%"><a href="columns.html">colnames&lt;-</a></td>
+<td>Column Names of SparkDataFrame</td></tr>
+<tr><td width="25%"><a href="columns.html">colnames&lt;--method</a></td>
+<td>Column Names of SparkDataFrame</td></tr>
+<tr><td width="25%"><a href="coltypes.html">coltypes</a></td>
+<td>coltypes</td></tr>
+<tr><td width="25%"><a href="coltypes.html">coltypes-method</a></td>
+<td>coltypes</td></tr>
+<tr><td width="25%"><a href="coltypes.html">coltypes&lt;-</a></td>
+<td>coltypes</td></tr>
+<tr><td width="25%"><a href="coltypes.html">coltypes&lt;--method</a></td>
+<td>coltypes</td></tr>
+<tr><td width="25%"><a href="column.html">column</a></td>
+<td>S4 class that represents a SparkDataFrame column</td></tr>
+<tr><td width="25%"><a href="column.html">Column-class</a></td>
+<td>S4 class that represents a SparkDataFrame column</td></tr>
+<tr><td width="25%"><a href="column.html">column-method</a></td>
+<td>S4 class that represents a SparkDataFrame column</td></tr>
+<tr><td width="25%"><a href="columnfunctions.html">columnfunctions</a></td>
+<td>A set of operations working with SparkDataFrame columns</td></tr>
+<tr><td width="25%"><a href="columns.html">columns</a></td>
+<td>Column Names of SparkDataFrame</td></tr>
+<tr><td width="25%"><a href="columns.html">columns-method</a></td>
+<td>Column Names of SparkDataFrame</td></tr>
+<tr><td width="25%"><a href="concat.html">concat</a></td>
+<td>concat</td></tr>
+<tr><td width="25%"><a href="concat.html">concat-method</a></td>
+<td>concat</td></tr>
+<tr><td width="25%"><a href="concat_ws.html">concat_ws</a></td>
+<td>concat_ws</td></tr>
+<tr><td width="25%"><a href="concat_ws.html">concat_ws-method</a></td>
+<td>concat_ws</td></tr>
+<tr><td width="25%"><a href="columnfunctions.html">contains</a></td>
+<td>A set of operations working with SparkDataFrame columns</td></tr>
+<tr><td width="25%"><a href="conv.html">conv</a></td>
+<td>conv</td></tr>
+<tr><td width="25%"><a href="conv.html">conv-method</a></td>
+<td>conv</td></tr>
+<tr><td width="25%"><a href="corr.html">corr</a></td>
+<td>corr</td></tr>
+<tr><td width="25%"><a href="corr.html">corr-method</a></td>
+<td>corr</td></tr>
+<tr><td width="25%"><a href="cos.html">cos</a></td>
+<td>cos</td></tr>
+<tr><td width="25%"><a href="cos.html">cos-method</a></td>
+<td>cos</td></tr>
+<tr><td width="25%"><a href="cosh.html">cosh</a></td>
+<td>cosh</td></tr>
+<tr><td width="25%"><a href="cosh.html">cosh-method</a></td>
+<td>cosh</td></tr>
+<tr><td width="25%"><a href="count.html">count</a></td>
+<td>Returns the number of items in a group</td></tr>
+<tr><td width="25%"><a href="count.html">count-method</a></td>
+<td>Returns the number of items in a group</td></tr>
+<tr><td width="25%"><a href="nrow.html">count-method</a></td>
+<td>Returns the number of rows in a SparkDataFrame</td></tr>
+<tr><td width="25%"><a href="countDistinct.html">countDistinct</a></td>
+<td>Count Distinct Values</td></tr>
+<tr><td width="25%"><a href="countDistinct.html">countDistinct-method</a></td>
+<td>Count Distinct Values</td></tr>
+<tr><td width="25%"><a href="cov.html">cov</a></td>
+<td>cov</td></tr>
+<tr><td width="25%"><a href="cov.html">cov-method</a></td>
+<td>cov</td></tr>
+<tr><td width="25%"><a href="covar_pop.html">covar_pop</a></td>
+<td>covar_pop</td></tr>
+<tr><td width="25%"><a href="covar_pop.html">covar_pop-method</a></td>
+<td>covar_pop</td></tr>
+<tr><td width="25%"><a href="cov.html">covar_samp</a></td>
+<td>cov</td></tr>
+<tr><td width="25%"><a href="cov.html">covar_samp-method</a></td>
+<td>cov</td></tr>
+<tr><td width="25%"><a href="crc32.html">crc32</a></td>
+<td>crc32</td></tr>
+<tr><td width="25%"><a href="crc32.html">crc32-method</a></td>
+<td>crc32</td></tr>
+<tr><td width="25%"><a href="createDataFrame.html">createDataFrame</a></td>
+<td>Create a SparkDataFrame</td></tr>
+<tr><td width="25%"><a href="createDataFrame.html">createDataFrame.default</a></td>
+<td>Create a SparkDataFrame</td></tr>
+<tr><td width="25%"><a href="createExternalTable.html">createExternalTable</a></td>
+<td>Create an external table</td></tr>
+<tr><td width="25%"><a href="createExternalTable.html">createExternalTable.default</a></td>
+<td>Create an external table</td></tr>
+<tr><td width="25%"><a href="createOrReplaceTempView.html">createOrReplaceTempView</a></td>
+<td>Creates a temporary view using the given name.</td></tr>
+<tr><td width="25%"><a href="createOrReplaceTempView.html">createOrReplaceTempView-method</a></td>
+<td>Creates a temporary view using the given name.</td></tr>
+<tr><td width="25%"><a href="crosstab.html">crosstab</a></td>
+<td>Computes a pair-wise frequency table of the given columns</td></tr>
+<tr><td width="25%"><a href="crosstab.html">crosstab-method</a></td>
+<td>Computes a pair-wise frequency table of the given columns</td></tr>
+<tr><td width="25%"><a href="cume_dist.html">cume_dist</a></td>
+<td>cume_dist</td></tr>
+<tr><td width="25%"><a href="cume_dist.html">cume_dist-method</a></td>
+<td>cume_dist</td></tr>
+</table>
+
+<h2><a name="D">-- D --</a></h2>
+
+<table width="100%">
+<tr><td width="25%"><a href="dapply.html">dapply</a></td>
+<td>dapply</td></tr>
+<tr><td width="25%"><a href="dapply.html">dapply-method</a></td>
+<td>dapply</td></tr>
+<tr><td width="25%"><a href="dapplyCollect.html">dapplyCollect</a></td>
+<td>dapplyCollect</td></tr>
+<tr><td width="25%"><a href="dapplyCollect.html">dapplyCollect-method</a></td>
+<td>dapplyCollect</td></tr>
+<tr><td width="25%"><a href="datediff.html">datediff</a></td>
+<td>datediff</td></tr>
+<tr><td width="25%"><a href="datediff.html">datediff-method</a></td>
+<td>datediff</td></tr>
+<tr><td width="25%"><a href="date_add.html">date_add</a></td>
+<td>date_add</td></tr>
+<tr><td width="25%"><a href="date_add.html">date_add-method</a></td>
+<td>date_add</td></tr>
+<tr><td width="25%"><a href="date_format.html">date_format</a></td>
+<td>date_format</td></tr>
+<tr><td width="25%"><a href="date_format.html">date_format-method</a></td>
+<td>date_format</td></tr>
+<tr><td width="25%"><a href="date_sub.html">date_sub</a></td>
+<td>date_sub</td></tr>
+<tr><td width="25%"><a href="date_sub.html">date_sub-method</a></td>
+<td>date_sub</td></tr>
+<tr><td width="25%"><a href="dayofmonth.html">dayofmonth</a></td>
+<td>dayofmonth</td></tr>
+<tr><td width="25%"><a href="dayofmonth.html">dayofmonth-method</a></td>
+<td>dayofmonth</td></tr>
+<tr><td width="25%"><a href="dayofyear.html">dayofyear</a></td>
+<td>dayofyear</td></tr>
+<tr><td width="25%"><a href="dayofyear.html">dayofyear-method</a></td>
+<td>dayofyear</td></tr>
+<tr><td width="25%"><a href="decode.html">decode</a></td>
+<td>decode</td></tr>
+<tr><td width="25%"><a href="decode.html">decode-method</a></td>
+<td>decode</td></tr>
+<tr><td width="25%"><a href="dense_rank.html">dense_rank</a></td>
+<td>dense_rank</td></tr>
+<tr><td width="25%"><a href="dense_rank.html">dense_rank-method</a></td>
+<td>dense_rank</td></tr>
+<tr><td width="25%"><a href="columnfunctions.html">desc</a></td>
+<td>A set of operations working with SparkDataFrame columns</td></tr>
+<tr><td width="25%"><a href="summary.html">describe</a></td>
+<td>summary</td></tr>
+<tr><td width="25%"><a href="summary.html">describe-method</a></td>
+<td>summary</td></tr>
+<tr><td width="25%"><a href="dim.html">dim</a></td>
+<td>Returns the dimensions of SparkDataFrame</td></tr>
+<tr><td width="25%"><a href="dim.html">dim-method</a></td>
+<td>Returns the dimensions of SparkDataFrame</td></tr>
+<tr><td width="25%"><a href="distinct.html">distinct</a></td>
+<td>Distinct</td></tr>
+<tr><td width="25%"><a href="distinct.html">distinct-method</a></td>
+<td>Distinct</td></tr>
+<tr><td width="25%"><a href="drop.html">drop</a></td>
+<td>drop</td></tr>
+<tr><td width="25%"><a href="drop.html">drop-method</a></td>
+<td>drop</td></tr>
+<tr><td width="25%"><a href="dropDuplicates.html">dropDuplicates</a></td>
+<td>dropDuplicates</td></tr>
+<tr><td width="25%"><a href="dropDuplicates.html">dropDuplicates-method</a></td>
+<td>dropDuplicates</td></tr>
+<tr><td width="25%"><a href="nafunctions.html">dropna</a></td>
+<td>A set of SparkDataFrame functions working with NA values</td></tr>
+<tr><td width="25%"><a href="nafunctions.html">dropna-method</a></td>
+<td>A set of SparkDataFrame functions working with NA values</td></tr>
+<tr><td width="25%"><a href="dropTempTable-deprecated.html">dropTempTable</a></td>
+<td>(Deprecated) Drop Temporary Table</td></tr>
+<tr><td width="25%"><a href="dropTempTable-deprecated.html">dropTempTable.default</a></td>
+<td>(Deprecated) Drop Temporary Table</td></tr>
+<tr><td width="25%"><a href="dropTempView.html">dropTempView</a></td>
+<td>Drops the temporary view with the given view name in the catalog.</td></tr>
+<tr><td width="25%"><a href="dtypes.html">dtypes</a></td>
+<td>DataTypes</td></tr>
+<tr><td width="25%"><a href="dtypes.html">dtypes-method</a></td>
+<td>DataTypes</td></tr>
+</table>
+
+<h2><a name="E">-- E --</a></h2>
+
+<table width="100%">
+<tr><td width="25%"><a href="encode.html">encode</a></td>
+<td>encode</td></tr>
+<tr><td width="25%"><a href="encode.html">encode-method</a></td>
+<td>encode</td></tr>
+<tr><td width="25%"><a href="endsWith.html">endsWith</a></td>
+<td>endsWith</td></tr>
+<tr><td width="25%"><a href="endsWith.html">endsWith-method</a></td>
+<td>endsWith</td></tr>
+<tr><td width="25%"><a href="except.html">except</a></td>
+<td>except</td></tr>
+<tr><td width="25%"><a href="except.html">except-method</a></td>
+<td>except</td></tr>
+<tr><td width="25%"><a href="exp.html">exp</a></td>
+<td>exp</td></tr>
+<tr><td width="25%"><a href="exp.html">exp-method</a></td>
+<td>exp</td></tr>
+<tr><td width="25%"><a href="explain.html">explain</a></td>
+<td>Explain</td></tr>
+<tr><td width="25%"><a href="explain.html">explain-method</a></td>
+<td>Explain</td></tr>
+<tr><td width="25%"><a href="explode.html">explode</a></td>
+<td>explode</td></tr>
+<tr><td width="25%"><a href="explode.html">explode-method</a></td>
+<td>explode</td></tr>
+<tr><td width="25%"><a href="expm1.html">expm1</a></td>
+<td>expm1</td></tr>
+<tr><td width="25%"><a href="expm1.html">expm1-method</a></td>
+<td>expm1</td></tr>
+<tr><td width="25%"><a href="expr.html">expr</a></td>
+<td>expr</td></tr>
+<tr><td width="25%"><a href="expr.html">expr-method</a></td>
+<td>expr</td></tr>
+</table>
+
+<h2><a name="F">-- F --</a></h2>
+
+<table width="100%">
+<tr><td width="25%"><a href="factorial.html">factorial</a></td>
+<td>factorial</td></tr>
+<tr><td width="25%"><a href="factorial.html">factorial-method</a></td>
+<td>factorial</td></tr>
+<tr><td width="25%"><a href="nafunctions.html">fillna</a></td>
+<td>A set of SparkDataFrame functions working with NA values</td></tr>
+<tr><td width="25%"><a href="nafunctions.html">fillna-method</a></td>
+<td>A set of SparkDataFrame functions working with NA values</td></tr>
+<tr><td width="25%"><a href="filter.html">filter</a></td>
+<td>Filter</td></tr>
+<tr><td width="25%"><a href="filter.html">filter-method</a></td>
+<td>Filter</td></tr>
+<tr><td width="25%"><a href="first.html">first</a></td>
+<td>Return the first row of a SparkDataFrame</td></tr>
+<tr><td width="25%"><a href="first.html">first-method</a></td>
+<td>Return the first row of a SparkDataFrame</td></tr>
+<tr><td width="25%"><a href="fitted.html">fitted</a></td>
+<td>Get fitted result from a k-means model</td></tr>
+<tr><td width="25%"><a href="fitted.html">fitted-method</a></td>
+<td>Get fitted result from a k-means model</td></tr>
+<tr><td width="25%"><a href="floor.html">floor</a></td>
+<td>floor</td></tr>
+<tr><td width="25%"><a href="floor.html">floor-method</a></td>
+<td>floor</td></tr>
+<tr><td width="25%"><a href="format_number.html">format_number</a></td>
+<td>format_number</td></tr>
+<tr><td width="25%"><a href="format_number.html">format_number-method</a></td>
+<td>format_number</td></tr>
+<tr><td width="25%"><a href="format_string.html">format_string</a></td>
+<td>format_string</td></tr>
+<tr><td width="25%"><a href="format_string.html">format_string-method</a></td>
+<td>format_string</td></tr>
+<tr><td width="25%"><a href="freqItems.html">freqItems</a></td>
+<td>Finding frequent items for columns, possibly with false positives</td></tr>
+<tr><td width="25%"><a href="freqItems.html">freqItems-method</a></td>
+<td>Finding frequent items for columns, possibly with false positives</td></tr>
+<tr><td width="25%"><a href="from_unixtime.html">from_unixtime</a></td>
+<td>from_unixtime</td></tr>
+<tr><td width="25%"><a href="from_unixtime.html">from_unixtime-method</a></td>
+<td>from_unixtime</td></tr>
+<tr><td width="25%"><a href="from_utc_timestamp.html">from_utc_timestamp</a></td>
+<td>from_utc_timestamp</td></tr>
+<tr><td width="25%"><a href="from_utc_timestamp.html">from_utc_timestamp-method</a></td>
+<td>from_utc_timestamp</td></tr>
+</table>
+
+<h2><a name="G">-- G --</a></h2>
+
+<table width="100%">
+<tr><td width="25%"><a href="gapply.html">gapply</a></td>
+<td>gapply</td></tr>
+<tr><td width="25%"><a href="gapply.html">gapply-method</a></td>
+<td>gapply</td></tr>
+<tr><td width="25%"><a href="gapplyCollect.html">gapplyCollect</a></td>
+<td>gapplyCollect</td></tr>
+<tr><td width="25%"><a href="gapplyCollect.html">gapplyCollect-method</a></td>
+<td>gapplyCollect</td></tr>
+<tr><td width="25%"><a href="GeneralizedLinearRegressionModel-class.html">GeneralizedLinearRegressionModel-class</a></td>
+<td>S4 class that represents a generalized linear model</td></tr>
+<tr><td width="25%"><a href="generateAliasesForIntersectedCols.html">generateAliasesForIntersectedCols</a></td>
+<td>Creates a list of columns by replacing the intersected ones with aliases</td></tr>
+<tr><td width="25%"><a href="columnfunctions.html">getField</a></td>
+<td>A set of operations working with SparkDataFrame columns</td></tr>
+<tr><td width="25%"><a href="columnfunctions.html">getItem</a></td>
+<td>A set of operations working with SparkDataFrame columns</td></tr>
+<tr><td width="25%"><a href="glm.html">glm</a></td>
+<td>Generalized Linear Models (R-compliant)</td></tr>
+<tr><td width="25%"><a href="glm.html">glm-method</a></td>
+<td>Generalized Linear Models (R-compliant)</td></tr>
+<tr><td width="25%"><a href="greatest.html">greatest</a></td>
+<td>greatest</td></tr>
+<tr><td width="25%"><a href="greatest.html">greatest-method</a></td>
+<td>greatest</td></tr>
+<tr><td width="25%"><a href="groupBy.html">groupBy</a></td>
+<td>GroupBy</td></tr>
+<tr><td width="25%"><a href="groupBy.html">groupBy-method</a></td>
+<td>GroupBy</td></tr>
+<tr><td width="25%"><a href="GroupedData.html">groupedData</a></td>
+<td>S4 class that represents a GroupedData</td></tr>
+<tr><td width="25%"><a href="GroupedData.html">GroupedData-class</a></td>
+<td>S4 class that represents a GroupedData</td></tr>
+<tr><td width="25%"><a href="groupBy.html">group_by</a></td>
+<td>GroupBy</td></tr>
+<tr><td width="25%"><a href="groupBy.html">group_by-method</a></td>
+<td>GroupBy</td></tr>
+</table>
+
+<h2><a name="H">-- H --</a></h2>
+
+<table width="100%">
+<tr><td width="25%"><a href="hash.html">hash</a></td>
+<td>hash</td></tr>
+<tr><td width="25%"><a href="hash.html">hash-method</a></td>
+<td>hash</td></tr>
+<tr><td width="25%"><a href="hashCode.html">hashCode</a></td>
+<td>Compute the hashCode of an object</td></tr>
+<tr><td width="25%"><a href="head.html">head</a></td>
+<td>Head</td></tr>
+<tr><td width="25%"><a href="head.html">head-method</a></td>
+<td>Head</td></tr>
+<tr><td width="25%"><a href="hex.html">hex</a></td>
+<td>hex</td></tr>
+<tr><td width="25%"><a href="hex.html">hex-method</a></td>
+<td>hex</td></tr>
+<tr><td width="25%"><a href="histogram.html">histogram</a></td>
+<td>Compute histogram statistics for given column</td></tr>
+<tr><td width="25%"><a href="histogram.html">histogram-method</a></td>
+<td>Compute histogram statistics for given column</td></tr>
+<tr><td width="25%"><a href="hour.html">hour</a></td>
+<td>hour</td></tr>
+<tr><td width="25%"><a href="hour.html">hour-method</a></td>
+<td>hour</td></tr>
+<tr><td width="25%"><a href="hypot.html">hypot</a></td>
+<td>hypot</td></tr>
+<tr><td width="25%"><a href="hypot.html">hypot-method</a></td>
+<td>hypot</td></tr>
+</table>
+
+<h2><a name="I">-- I --</a></h2>
+
+<table width="100%">
+<tr><td width="25%"><a href="ifelse.html">ifelse</a></td>
+<td>ifelse</td></tr>
+<tr><td width="25%"><a href="ifelse.html">ifelse-method</a></td>
+<td>ifelse</td></tr>
+<tr><td width="25%"><a href="initcap.html">initcap</a></td>
+<td>initcap</td></tr>
+<tr><td width="25%"><a href="initcap.html">initcap-method</a></td>
+<td>initcap</td></tr>
+<tr><td width="25%"><a href="insertInto.html">insertInto</a></td>
+<td>insertInto</td></tr>
+<tr><td width="25%"><a href="insertInto.html">insertInto-method</a></td>
+<td>insertInto</td></tr>
+<tr><td width="25%"><a href="install.spark.html">install.spark</a></td>
+<td>Download and Install Apache Spark to a Local Directory</td></tr>
+<tr><td width="25%"><a href="instr.html">instr</a></td>
+<td>instr</td></tr>
+<tr><td width="25%"><a href="instr.html">instr-method</a></td>
+<td>instr</td></tr>
+<tr><td width="25%"><a href="intersect.html">intersect</a></td>
+<td>Intersect</td></tr>
+<tr><td width="25%"><a href="intersect.html">intersect-method</a></td>
+<td>Intersect</td></tr>
+<tr><td width="25%"><a href="is.nan.html">is.nan</a></td>
+<td>is.nan</td></tr>
+<tr><td width="25%"><a href="is.nan.html">is.nan-method</a></td>
+<td>is.nan</td></tr>
+<tr><td width="25%"><a href="isLocal.html">isLocal</a></td>
+<td>isLocal</td></tr>
+<tr><td width="25%"><a href="isLocal.html">isLocal-method</a></td>
+<td>isLocal</td></tr>
+<tr><td width="25%"><a href="columnfunctions.html">isNaN</a></td>
+<td>A set of operations working with SparkDataFrame columns</td></tr>
+<tr><td width="25%"><a href="is.nan.html">isnan</a></td>
+<td>is.nan</td></tr>
+<tr><td width="25%"><a href="is.nan.html">isnan-method</a></td>
+<td>is.nan</td></tr>
+<tr><td width="25%"><a href="columnfunctions.html">isNotNull</a></td>
+<td>A set of operations working with SparkDataFrame columns</td></tr>
+<tr><td width="25%"><a href="columnfunctions.html">isNull</a></td>
+<td>A set of operations working with SparkDataFrame columns</td></tr>
+</table>
+
+<h2><a name="J">-- J --</a></h2>
+
+<table width="100%">
+<tr><td width="25%"><a href="join.html">join</a></td>
+<td>Join</td></tr>
+<tr><td width="25%"><a href="join.html">join-method</a></td>
+<td>Join</td></tr>
+<tr><td width="25%"><a href="read.json.html">jsonFile</a></td>
+<td>Create a SparkDataFrame from a JSON file.</td></tr>
+<tr><td width="25%"><a href="read.json.html">jsonFile.default</a></td>
+<td>Create a SparkDataFrame from a JSON file.</td></tr>
+</table>
+
+<h2><a name="K">-- K --</a></h2>
+
+<table width="100%">
+<tr><td width="25%"><a href="KMeansModel-class.html">KMeansModel-class</a></td>
+<td>S4 class that represents a KMeansModel</td></tr>
+<tr><td width="25%"><a href="kurtosis.html">kurtosis</a></td>
+<td>kurtosis</td></tr>
+<tr><td width="25%"><a href="kurtosis.html">kurtosis-method</a></td>
+<td>kurtosis</td></tr>
+</table>
+
+<h2><a name="L">-- L --</a></h2>
+
+<table width="100%">
+<tr><td width="25%"><a href="lag.html">lag</a></td>
+<td>lag</td></tr>
+<tr><td width="25%"><a href="lag.html">lag-method</a></td>
+<td>lag</td></tr>
+<tr><td width="25%"><a href="last.html">last</a></td>
+<td>last</td></tr>
+<tr><td width="25%"><a href="last.html">last-method</a></td>
+<td>last</td></tr>
+<tr><td width="25%"><a href="last_day.html">last_day</a></td>
+<td>last_day</td></tr>
+<tr><td width="25%"><a href="last_day.html">last_day-method</a></td>
+<td>last_day</td></tr>
+<tr><td width="25%"><a href="lead.html">lead</a></td>
+<td>lead</td></tr>
+<tr><td width="25%"><a href="lead.html">lead-method</a></td>
+<td>lead</td></tr>
+<tr><td width="25%"><a href="least.html">least</a></td>
+<td>least</td></tr>
+<tr><td width="25%"><a href="least.html">least-method</a></td>
+<td>least</td></tr>
+<tr><td width="25%"><a href="length.html">length</a></td>
+<td>length</td></tr>
+<tr><td width="25%"><a href="length.html">length-method</a></td>
+<td>length</td></tr>
+<tr><td width="25%"><a href="levenshtein.html">levenshtein</a></td>
+<td>levenshtein</td></tr>
+<tr><td width="25%"><a href="levenshtein.html">levenshtein-method</a></td>
+<td>levenshtein</td></tr>
+<tr><td width="25%"><a href="columnfunctions.html">like</a></td>
+<td>A set of operations working with SparkDataFrame columns</td></tr>
+<tr><td width="25%"><a href="limit.html">limit</a></td>
+<td>Limit</td></tr>
+<tr><td width="25%"><a href="limit.html">limit-method</a></td>
+<td>Limit</td></tr>
+<tr><td width="25%"><a href="lit.html">lit</a></td>
+<td>lit</td></tr>
+<tr><td width="25%"><a href="lit.html">lit-method</a></td>
+<td>lit</td></tr>
+<tr><td width="25%"><a href="read.df.html">loadDF</a></td>
+<td>Load a SparkDataFrame</td></tr>
+<tr><td width="25%"><a href="read.df.html">loadDF.default</a></td>
+<td>Load a SparkDataFrame</td></tr>
+<tr><td width="25%"><a href="locate.html">locate</a></td>
+<td>locate</td></tr>
+<tr><td width="25%"><a href="locate.html">locate-method</a></td>
+<td>locate</td></tr>
+<tr><td width="25%"><a href="log.html">log</a></td>
+<td>log</td></tr>
+<tr><td width="25%"><a href="log.html">log-method</a></td>
+<td>log</td></tr>
+<tr><td width="25%"><a href="log10.html">log10</a></td>
+<td>log10</td></tr>
+<tr><td width="25%"><a href="log10.html">log10-method</a></td>
+<td>log10</td></tr>
+<tr><td width="25%"><a href="log1p.html">log1p</a></td>
+<td>log1p</td></tr>
+<tr><td width="25%"><a href="log1p.html">log1p-method</a></td>
+<td>log1p</td></tr>
+<tr><td width="25%"><a href="log2.html">log2</a></td>
+<td>log2</td></tr>
+<tr><td width="25%"><a href="log2.html">log2-method</a></td>
+<td>log2</td></tr>
+<tr><td width="25%"><a href="lower.html">lower</a></td>
+<td>lower</td></tr>
+<tr><td width="25%"><a href="lower.html">lower-method</a></td>
+<td>lower</td></tr>
+<tr><td width="25%"><a href="lpad.html">lpad</a></td>
+<td>lpad</td></tr>
+<tr><td width="25%"><a href="lpad.html">lpad-method</a></td>
+<td>lpad</td></tr>
+<tr><td width="25%"><a href="ltrim.html">ltrim</a></td>
+<td>ltrim</td></tr>
+<tr><td width="25%"><a href="ltrim.html">ltrim-method</a></td>
+<td>ltrim</td></tr>
+</table>
+
+<h2><a name="M">-- M --</a></h2>
+
+<table width="100%">
+<tr><td width="25%"><a href="max.html">max</a></td>
+<td>max</td></tr>
+<tr><td width="25%"><a href="max.html">max-method</a></td>
+<td>max</td></tr>
+<tr><td width="25%"><a href="md5.html">md5</a></td>
+<td>md5</td></tr>
+<tr><td width="25%"><a href="md5.html">md5-method</a></td>
+<td>md5</td></tr>
+<tr><td width="25%"><a href="mean.html">mean</a></td>
+<td>mean</td></tr>
+<tr><td width="25%"><a href="mean.html">mean-method</a></td>
+<td>mean</td></tr>
+<tr><td width="25%"><a href="merge.html">merge</a></td>
+<td>Merges two data frames</td></tr>
+<tr><td width="25%"><a href="merge.html">merge-method</a></td>
+<td>Merges two data frames</td></tr>
+<tr><td width="25%"><a href="min.html">min</a></td>
+<td>min</td></tr>
+<tr><td width="25%"><a href="min.html">min-method</a></td>
+<td>min</td></tr>
+<tr><td width="25%"><a href="minute.html">minute</a></td>
+<td>minute</td></tr>
+<tr><td width="25%"><a href="minute.html">minute-method</a></td>
+<td>minute</td></tr>
+<tr><td width="25%"><a href="monotonically_increasing_id.html">monotonically_increasing_id</a></td>
+<td>monotonically_increasing_id</td></tr>
+<tr><td width="25%"><a href="monotonically_increasing_id.html">monotonically_increasing_id-method</a></td>
+<td>monotonically_increasing_id</td></tr>
+<tr><td width="25%"><a href="month.html">month</a></td>
+<td>month</td></tr>
+<tr><td width="25%"><a href="month.html">month-method</a></td>
+<td>month</td></tr>
+<tr><td width="25%"><a href="months_between.html">months_between</a></td>
+<td>months_between</td></tr>
+<tr><td width="25%"><a href="months_between.html">months_between-method</a></td>
+<td>months_between</td></tr>
+<tr><td width="25%"><a href="mutate.html">mutate</a></td>
+<td>Mutate</td></tr>
+<tr><td width="25%"><a href="mutate.html">mutate-method</a></td>
+<td>Mutate</td></tr>
+</table>
+
+<h2><a name="N">-- N --</a></h2>
+
+<table width="100%">
+<tr><td width="25%"><a href="count.html">n</a></td>
+<td>Returns the number of items in a group</td></tr>
+<tr><td width="25%"><a href="count.html">n-method</a></td>
+<td>Returns the number of items in a group</td></tr>
+<tr><td width="25%"><a href="nafunctions.html">na.omit</a></td>
+<td>A set of SparkDataFrame functions working with NA values</td></tr>
+<tr><td width="25%"><a href="nafunctions.html">na.omit-method</a></td>
+<td>A set of SparkDataFrame functions working with NA values</td></tr>
+<tr><td width="25%"><a href="NaiveBayesModel-class.html">NaiveBayesModel-class</a></td>
+<td>S4 class that represents a NaiveBayesModel</td></tr>
+<tr><td width="25%"><a href="columns.html">names</a></td>
+<td>Column Names of SparkDataFrame</td></tr>
+<tr><td width="25%"><a href="columns.html">names-method</a></td>
+<td>Column Names of SparkDataFrame</td></tr>
+<tr><td width="25%"><a href="columns.html">names&lt;-</a></td>
+<td>Column Names of SparkDataFrame</td></tr>
+<tr><td width="25%"><a href="columns.html">names&lt;--method</a></td>
+<td>Column Names of SparkDataFrame</td></tr>
+<tr><td width="25%"><a href="nanvl.html">nanvl</a></td>
+<td>nanvl</td></tr>
+<tr><td width="25%"><a href="nanvl.html">nanvl-method</a></td>
+<td>nanvl</td></tr>
+<tr><td width="25%"><a href="ncol.html">ncol</a></td>
+<td>Returns the number of columns in a SparkDataFrame</td></tr>
+<tr><td width="25%"><a href="ncol.html">ncol-method</a></td>
+<td>Returns the number of columns in a SparkDataFrame</td></tr>
+<tr><td width="25%"><a href="negate.html">negate</a></td>
+<td>negate</td></tr>
+<tr><td width="25%"><a href="negate.html">negate-method</a></td>
+<td>negate</td></tr>
+<tr><td width="25%"><a href="next_day.html">next_day</a></td>
+<td>next_day</td></tr>
+<tr><td width="25%"><a href="next_day.html">next_day-method</a></td>
+<td>next_day</td></tr>
+<tr><td width="25%"><a href="nrow.html">nrow</a></td>
+<td>Returns the number of rows in a SparkDataFrame</td></tr>
+<tr><td width="25%"><a href="nrow.html">nrow-method</a></td>
+<td>Returns the number of rows in a SparkDataFrame</td></tr>
+<tr><td width="25%"><a href="ntile.html">ntile</a></td>
+<td>ntile</td></tr>
+<tr><td width="25%"><a href="ntile.html">ntile-method</a></td>
+<td>ntile</td></tr>
+<tr><td width="25%"><a href="countDistinct.html">n_distinct</a></td>
+<td>Count Distinct Values</td></tr>
+<tr><td width="25%"><a href="countDistinct.html">n_distinct-method</a></td>
+<td>Count Distinct Values</td></tr>
+</table>
+
+<h2><a name="O">-- O --</a></h2>
+
+<table width="100%">
+<tr><td width="25%"><a href="orderBy.html">orderBy</a></td>
+<td>Ordering Columns in a WindowSpec</td></tr>
+<tr><td width="25%"><a href="arrange.html">orderBy-method</a></td>
+<td>Arrange Rows by Variables</td></tr>
+<tr><td width="25%"><a href="orderBy.html">orderBy-method</a></td>
+<td>Ordering Columns in a WindowSpec</td></tr>
+<tr><td width="25%"><a href="otherwise.html">otherwise</a></td>
+<td>otherwise</td></tr>
+<tr><td width="25%"><a href="otherwise.html">otherwise-method</a></td>
+<td>otherwise</td></tr>
+<tr><td width="25%"><a href="over.html">over</a></td>
+<td>over</td></tr>
+<tr><td width="25%"><a href="over.html">over-method</a></td>
+<td>over</td></tr>
+</table>
+
+<h2><a name="P">-- P --</a></h2>
+
+<table width="100%">
+<tr><td width="25%"><a href="read.parquet.html">parquetFile</a></td>
+<td>Create a SparkDataFrame from a Parquet file.</td></tr>
+<tr><td width="25%"><a href="read.parquet.html">parquetFile.default</a></td>
+<td>Create a SparkDataFrame from a Parquet file.</td></tr>
+<tr><td width="25%"><a href="partitionBy.html">partitionBy</a></td>
+<td>partitionBy</td></tr>
+<tr><td width="25%"><a href="partitionBy.html">partitionBy-method</a></td>
+<td>partitionBy</td></tr>
+<tr><td width="25%"><a href="percent_rank.html">percent_rank</a></td>
+<td>percent_rank</td></tr>
+<tr><td width="25%"><a href="percent_rank.html">percent_rank-method</a></td>
+<td>percent_rank</td></tr>
+<tr><td width="25%"><a href="persist.html">persist</a></td>
+<td>Persist</td></tr>
+<tr><td width="25%"><a href="persist.html">persist-method</a></td>
+<td>Persist</td></tr>
+<tr><td width="25%"><a href="pivot.html">pivot</a></td>
+<td>Pivot a column of the GroupedData and perform the specified aggregation.</td></tr>
+<tr><td width="25%"><a href="pivot.html">pivot-method</a></td>
+<td>Pivot a column of the GroupedData and perform the specified aggregation.</td></tr>
+<tr><td width="25%"><a href="pmod.html">pmod</a></td>
+<td>pmod</td></tr>
+<tr><td width="25%"><a href="pmod.html">pmod-method</a></td>
+<td>pmod</td></tr>
+<tr><td width="25%"><a href="posexplode.html">posexplode</a></td>
+<td>posexplode</td></tr>
+<tr><td width="25%"><a href="posexplode.html">posexplode-method</a></td>
+<td>posexplode</td></tr>
+<tr><td width="25%"><a href="predict.html">predict</a></td>
+<td>Makes predictions from a MLlib model</td></tr>
+<tr><td width="25%"><a href="spark.glm.html">predict-method</a></td>
+<td>Generalized Linear Models</td></tr>
+<tr><td width="25%"><a href="spark.kmeans.html">predict-method</a></td>
+<td>K-Means Clustering Model</td></tr>
+<tr><td width="25%"><a href="spark.naiveBayes.html">predict-method</a></td>
+<td>Naive Bayes Models</td></tr>
+<tr><td width="25%"><a href="spark.survreg.html">predict-method</a></td>
+<td>Accelerated Failure Time (AFT) Survival Regression Model</td></tr>
+<tr><td width="25%"><a href="print.jobj.html">print.jobj</a></td>
+<td>Print a JVM object reference.</td></tr>
+<tr><td width="25%"><a href="print.structField.html">print.structField</a></td>
+<td>Print a Spark StructField.</td></tr>
+<tr><td width="25%"><a href="print.structType.html">print.structType</a></td>
+<td>Print a Spark StructType.</td></tr>
+<tr><td width="25%"><a href="spark.glm.html">print.summary.GeneralizedLinearRegressionModel</a></td>
+<td>Generalized Linear Models</td></tr>
+<tr><td width="25%"><a href="printSchema.html">printSchema</a></td>
+<td>Print Schema of a SparkDataFrame</td></tr>
+<tr><td width="25%"><a href="printSchema.html">printSchema-method</a></td>
+<td>Print Schema of a SparkDataFrame</td></tr>
+</table>
+
+<h2><a name="Q">-- Q --</a></h2>
+
+<table width="100%">
+<tr><td width="25%"><a href="quarter.html">quarter</a></td>
+<td>quarter</td></tr>
+<tr><td width="25%"><a href="quarter.html">quarter-method</a></td>
+<td>quarter</td></tr>
+</table>
+
+<h2><a name="R">-- R --</a></h2>
+
+<table width="100%">
+<tr><td width="25%"><a href="rand.html">rand</a></td>
+<td>rand</td></tr>
+<tr><td width="25%"><a href="rand.html">rand-method</a></td>
+<td>rand</td></tr>
+<tr><td width="25%"><a href="randn.html">randn</a></td>
+<td>randn</td></tr>
+<tr><td width="25%"><a href="randn.html">randn-method</a></td>
+<td>randn</td></tr>
+<tr><td width="25%"><a href="randomSplit.html">randomSplit</a></td>
+<td>randomSplit</td></tr>
+<tr><td width="25%"><a href="randomSplit.html">randomSplit-method</a></td>
+<td>randomSplit</td></tr>
+<tr><td width="25%"><a href="rangeBetween.html">rangeBetween</a></td>
+<td>rangeBetween</td></tr>
+<tr><td width="25%"><a href="rangeBetween.html">rangeBetween-method</a></td>
+<td>rangeBetween</td></tr>
+<tr><td width="25%"><a href="rank.html">rank</a></td>
+<td>rank</td></tr>
+<tr><td width="25%"><a href="rank.html">rank-method</a></td>
+<td>rank</td></tr>
+<tr><td width="25%"><a href="rbind.html">rbind</a></td>
+<td>Union two or more SparkDataFrames</td></tr>
+<tr><td width="25%"><a href="rbind.html">rbind-method</a></td>
+<td>Union two or more SparkDataFrames</td></tr>
+<tr><td width="25%"><a href="read.df.html">read.df</a></td>
+<td>Load a SparkDataFrame</td></tr>
+<tr><td width="25%"><a href="read.df.html">read.df.default</a></td>
+<td>Load a SparkDataFrame</td></tr>
+<tr><td width="25%"><a href="read.jdbc.html">read.jdbc</a></td>
+<td>Create a SparkDataFrame representing the database table accessible via JDBC URL</td></tr>
+<tr><td width="25%"><a href="read.json.html">read.json</a></td>
+<td>Create a SparkDataFrame from a JSON file.</td></tr>
+<tr><td width="25%"><a href="read.json.html">read.json.default</a></td>
+<td>Create a SparkDataFrame from a JSON file.</td></tr>
+<tr><td width="25%"><a href="read.ml.html">read.ml</a></td>
+<td>Load a fitted MLlib model from the input path.</td></tr>
+<tr><td width="25%"><a href="read.orc.html">read.orc</a></td>
+<td>Create a SparkDataFrame from an ORC file.</td></tr>
+<tr><td width="25%"><a href="read.parquet.html">read.parquet</a></td>
+<td>Create a SparkDataFrame from a Parquet file.</td></tr>
+<tr><td width="25%"><a href="read.parquet.html">read.parquet.default</a></td>
+<td>Create a SparkDataFrame from a Parquet file.</td></tr>
+<tr><td width="25%"><a href="read.text.html">read.text</a></td>
+<td>Create a SparkDataFrame from a text file.</td></tr>
+<tr><td width="25%"><a href="read.text.html">read.text.default</a></td>
+<td>Create a SparkDataFrame from a text file.</td></tr>
+<tr><td width="25%"><a href="regexp_extract.html">regexp_extract</a></td>
+<td>regexp_extract</td></tr>
+<tr><td width="25%"><a href="regexp_extract.html">regexp_extract-method</a></td>
+<td>regexp_extract</td></tr>
+<tr><td width="25%"><a href="regexp_replace.html">regexp_replace</a></td>
+<td>regexp_replace</td></tr>
+<tr><td width="25%"><a href="regexp_replace.html">regexp_replace-method</a></td>
+<td>regexp_replace</td></tr>
+<tr><td width="25%"><a href="registerTempTable-deprecated.html">registerTempTable</a></td>
+<td>(Deprecated) Register Temporary Table</td></tr>
+<tr><td width="25%"><a href="registerTempTable-deprecated.html">registerTempTable-method</a></td>
+<td>(Deprecated) Register Temporary Table</td></tr>
+<tr><td width="25%"><a href="rename.html">rename</a></td>
+<td>rename</td></tr>
+<tr><td width="25%"><a href="rename.html">rename-method</a></td>
+<td>rename</td></tr>
+<tr><td width="25%"><a href="repartition.html">repartition</a></td>
+<td>Repartition</td></tr>
+<tr><td width="25%"><a href="repartition.html">repartition-method</a></td>
+<td>Repartition</td></tr>
+<tr><td width="25%"><a href="reverse.html">reverse</a></td>
+<td>reverse</td></tr>
+<tr><td width="25%"><a href="reverse.html">reverse-method</a></td>
+<td>reverse</td></tr>
+<tr><td width="25%"><a href="rint.html">rint</a></td>
+<td>rint</td></tr>
+<tr><td width="25%"><a href="rint.html">rint-method</a></td>
+<td>rint</td></tr>
+<tr><td width="25%"><a href="columnfunctions.html">rlike</a></td>
+<td>A set of operations working with SparkDataFrame columns</td></tr>
+<tr><td width="25%"><a href="round.html">round</a></td>
+<td>round</td></tr>
+<tr><td width="25%"><a href="round.html">round-method</a></td>
+<td>round</td></tr>
+<tr><td width="25%"><a href="rowsBetween.html">rowsBetween</a></td>
+<td>rowsBetween</td></tr>
+<tr><td width="25%"><a href="rowsBetween.html">rowsBetween-method</a></td>
+<td>rowsBetween</td></tr>
+<tr><td width="25%"><a href="row_number.html">row_number</a></td>
+<td>row_number</td></tr>
+<tr><td width="25%"><a href="row_number.html">row_number-method</a></td>
+<td>row_number</td></tr>
+<tr><td width="25%"><a href="rpad.html">rpad</a></td>
+<td>rpad</td></tr>
+<tr><td width="25%"><a href="rpad.html">rpad-method</a></td>
+<td>rpad</td></tr>
+<tr><td width="25%"><a href="rtrim.html">rtrim</a></td>
+<td>rtrim</td></tr>
+<tr><td width="25%"><a href="rtrim.html">rtrim-method</a></td>
+<td>rtrim</td></tr>
+</table>
+
+<h2><a name="S">-- S --</a></h2>
+
+<table width="100%">
+<tr><td width="25%"><a href="sample.html">sample</a></td>
+<td>Sample</td></tr>
+<tr><td width="25%"><a href="sample.html">sample-method</a></td>
+<td>Sample</td></tr>
+<tr><td width="25%"><a href="sampleBy.html">sampleBy</a></td>
+<td>Returns a stratified sample without replacement</td></tr>
+<tr><td width="25%"><a href="sampleBy.html">sampleBy-method</a></td>
+<td>Returns a stratified sample without replacement</td></tr>
+<tr><td width="25%"><a href="sample.html">sample_frac</a></td>
+<td>Sample</td></tr>
+<tr><td width="25%"><a href="sample.html">sample_frac-method</a></td>
+<td>Sample</td></tr>
+<tr><td width="25%"><a href="write.parquet.html">saveAsParquetFile</a></td>
+<td>Save the contents of SparkDataFrame as a Parquet file, preserving the schema.</td></tr>
+<tr><td width="25%"><a href="write.parquet.html">saveAsParquetFile-method</a></td>
+<td>Save the contents of SparkDataFrame as a Parquet file, preserving the schema.</td></tr>
+<tr><td width="25%"><a href="saveAsTable.html">saveAsTable</a></td>
+<td>Save the contents of the SparkDataFrame to a data source as a table</td></tr>
+<tr><td width="25%"><a href="saveAsTable.html">saveAsTable-method</a></td>
+<td>Save the contents of the SparkDataFrame to a data source as a table</td></tr>
+<tr><td width="25%"><a href="write.df.html">saveDF</a></td>
+<td>Save the contents of SparkDataFrame to a data source.</td></tr>
+<tr><td width="25%"><a href="write.df.html">saveDF-method</a></td>
+<td>Save the contents of SparkDataFrame to a data source.</td></tr>
+<tr><td width="25%"><a href="schema.html">schema</a></td>
+<td>Get schema object</td></tr>
+<tr><td width="25%"><a href="schema.html">schema-method</a></td>
+<td>Get schema object</td></tr>
+<tr><td width="25%"><a href="sd.html">sd</a></td>
+<td>sd</td></tr>
+<tr><td width="25%"><a href="sd.html">sd-method</a></td>
+<td>sd</td></tr>
+<tr><td width="25%"><a href="second.html">second</a></td>
+<td>second</td></tr>
+<tr><td width="25%"><a href="second.html">second-method</a></td>
+<td>second</td></tr>
+<tr><td width="25%"><a href="select.html">select</a></td>
+<td>Select</td></tr>
+<tr><td width="25%"><a href="select.html">select-method</a></td>
+<td>Select</td></tr>
+<tr><td width="25%"><a href="selectExpr.html">selectExpr</a></td>
+<td>SelectExpr</td></tr>
+<tr><td width="25%"><a href="selectExpr.html">selectExpr-method</a></td>
+<td>SelectExpr</td></tr>
+<tr><td width="25%"><a href="setJobGroup.html">setJobGroup</a></td>
+<td>Assigns a group ID to all the jobs started by this thread until the group ID is set to a different value or cleared.</td></tr>
+<tr><td width="25%"><a href="setJobGroup.html">setJobGroup.default</a></td>
+<td>Assigns a group ID to all the jobs started by this thread until the group ID is set to a different value or cleared.</td></tr>
+<tr><td width="25%"><a href="setLogLevel.html">setLogLevel</a></td>
+<td>Set new log level</td></tr>
+<tr><td width="25%"><a href="sha1.html">sha1</a></td>
+<td>sha1</td></tr>
+<tr><td width="25%"><a href="sha1.html">sha1-method</a></td>
+<td>sha1</td></tr>
+<tr><td width="25%"><a href="sha2.html">sha2</a></td>
+<td>sha2</td></tr>
+<tr><td width="25%"><a href="sha2.html">sha2-method</a></td>
+<td>sha2</td></tr>
+<tr><td width="25%"><a href="shiftLeft.html">shiftLeft</a></td>
+<td>shiftLeft</td></tr>
+<tr><td width="25%"><a href="shiftLeft.html">shiftLeft-method</a></td>
+<td>shiftLeft</td></tr>
+<tr><td width="25%"><a href="shiftRight.html">shiftRight</a></td>
+<td>shiftRight</td></tr>
+<tr><td width="25%"><a href="shiftRight.html">shiftRight-method</a></td>
+<td>shiftRight</td></tr>
+<tr><td width="25%"><a href="shiftRightUnsigned.html">shiftRightUnsigned</a></td>
+<td>shiftRightUnsigned</td></tr>
+<tr><td width="25%"><a href="shiftRightUnsigned.html">shiftRightUnsigned-method</a></td>
+<td>shiftRightUnsigned</td></tr>
+<tr><td width="25%"><a href="show.html">show</a></td>
+<td>show</td></tr>
+<tr><td width="25%"><a href="show.html">show-method</a></td>
+<td>show</td></tr>
+<tr><td width="25%"><a href="showDF.html">showDF</a></td>
+<td>showDF</td></tr>
+<tr><td width="25%"><a href="showDF.html">showDF-method</a></td>
+<td>showDF</td></tr>
+<tr><td width="25%"><a href="sign.html">sign</a></td>
+<td>signum</td></tr>
+<tr><td width="25%"><a href="sign.html">sign-method</a></td>
+<td>signum</td></tr>
+<tr><td width="25%"><a href="sign.html">signum</a></td>
+<td>signum</td></tr>
+<tr><td width="25%"><a href="sign.html">signum-method</a></td>
+<td>signum</td></tr>
+<tr><td width="25%"><a href="sin.html">sin</a></td>
+<td>sin</td></tr>
+<tr><td width="25%"><a href="sin.html">sin-method</a></td>
+<td>sin</td></tr>
+<tr><td width="25%"><a href="sinh.html">sinh</a></td>
+<td>sinh</td></tr>
+<tr><td width="25%"><a href="sinh.html">sinh-method</a></td>
+<td>sinh</td></tr>
+<tr><td width="25%"><a href="size.html">size</a></td>
+<td>size</td></tr>
+<tr><td width="25%"><a href="size.html">size-method</a></td>
+<td>size</td></tr>
+<tr><td width="25%"><a href="skewness.html">skewness</a></td>
+<td>skewness</td></tr>
+<tr><td width="25%"><a href="skewness.html">skewness-method</a></td>
+<td>skewness</td></tr>
+<tr><td width="25%"><a href="sort_array.html">sort_array</a></td>
+<td>sort_array</td></tr>
+<tr><td width="25%"><a href="sort_array.html">sort_array-method</a></td>
+<td>sort_array</td></tr>
+<tr><td width="25%"><a href="soundex.html">soundex</a></td>
+<td>soundex</td></tr>
+<tr><td width="25%"><a href="soundex.html">soundex-method</a></td>
+<td>soundex</td></tr>
+<tr><td width="25%"><a href="spark.glm.html">spark.glm</a></td>
+<td>Generalized Linear Models</td></tr>
+<tr><td width="25%"><a href="spark.glm.html">spark.glm-method</a></td>
+<td>Generalized Linear Models</td></tr>
+<tr><td width="25%"><a href="spark.kmeans.html">spark.kmeans</a></td>
+<td>K-Means Clustering Model</td></tr>
+<tr><td width="25%"><a href="spark.kmeans.html">spark.kmeans-method</a></td>
+<td>K-Means Clustering Model</td></tr>
+<tr><td width="25%"><a href="spark.lapply.html">spark.lapply</a></td>
+<td>Run a function over a list of elements, distributing the computations with Spark</td></tr>
+<tr><td width="25%"><a href="spark.naiveBayes.html">spark.naiveBayes</a></td>
+<td>Naive Bayes Models</td></tr>
+<tr><td width="25%"><a href="spark.naiveBayes.html">spark.naiveBayes-method</a></td>
+<td>Naive Bayes Models</td></tr>
+<tr><td width="25%"><a href="spark.survreg.html">spark.survreg</a></td>
+<td>Accelerated Failure Time (AFT) Survival Regression Model</td></tr>
+<tr><td width="25%"><a href="spark.survreg.html">spark.survreg-method</a></td>
+<td>Accelerated Failure Time (AFT) Survival Regression Model</td></tr>
+<tr><td width="25%"><a href="SparkDataFrame.html">SparkDataFrame-class</a></td>
+<td>S4 class that represents a SparkDataFrame</td></tr>
+<tr><td width="25%"><a href="sparkR.callJMethod.html">sparkR.callJMethod</a></td>
+<td>Call Java Methods</td></tr>
+<tr><td width="25%"><a href="sparkR.callJStatic.html">sparkR.callJStatic</a></td>
+<td>Call Static Java Methods</td></tr>
+<tr><td width="25%"><a href="sparkR.conf.html">sparkR.conf</a></td>
+<td>Get Runtime Config from the current active SparkSession</td></tr>
+<tr><td width="25%"><a href="sparkR.init-deprecated.html">sparkR.init</a></td>
+<td>(Deprecated) Initialize a new Spark Context</td></tr>
+<tr><td width="25%"><a href="sparkR.newJObject.html">sparkR.newJObject</a></td>
+<td>Create Java Objects</td></tr>
+<tr><td width="25%"><a href="sparkR.session.html">sparkR.session</a></td>
+<td>Get the existing SparkSession or initialize a new SparkSession.</td></tr>
+<tr><td width="25%"><a href="sparkR.session.stop.html">sparkR.session.stop</a></td>
+<td>Stop the Spark Session and Spark Context</td></tr>
+<tr><td width="25%"><a href="sparkR.session.stop.html">sparkR.stop</a></td>
+<td>Stop the Spark Session and Spark Context</td></tr>
+<tr><td width="25%"><a href="sparkR.version.html">sparkR.version</a></td>
+<td>Get version of Spark on which this application is running</td></tr>
+<tr><td width="25%"><a href="sparkRHive.init-deprecated.html">sparkRHive.init</a></td>
+<td>(Deprecated) Initialize a new HiveContext</td></tr>
+<tr><td width="25%"><a href="sparkRSQL.init-deprecated.html">sparkRSQL.init</a></td>
+<td>(Deprecated) Initialize a new SQLContext</td></tr>
+<tr><td width="25%"><a href="spark_partition_id.html">spark_partition_id</a></td>
+<td>Return the partition ID as a column</td></tr>
+<tr><td width="25%"><a href="spark_partition_id.html">spark_partition_id-method</a></td>
+<td>Return the partition ID as a column</td></tr>
+<tr><td width="25%"><a href="sql.html">sql</a></td>
+<td>SQL Query</td></tr>
+<tr><td width="25%"><a href="sql.html">sql.default</a></td>
+<td>SQL Query</td></tr>
+<tr><td width="25%"><a href="sqrt.html">sqrt</a></td>
+<td>sqrt</td></tr>
+<tr><td width="25%"><a href="sqrt.html">sqrt-method</a></td>
+<td>sqrt</td></tr>
+<tr><td width="25%"><a href="startsWith.html">startsWith</a></td>
+<td>startsWith</td></tr>
+<tr><td width="25%"><a href="startsWith.html">startsWith-method</a></td>
+<td>startsWith</td></tr>
+<tr><td width="25%"><a href="sd.html">stddev</a></td>
+<td>sd</td></tr>
+<tr><td width="25%"><a href="sd.html">stddev-method</a></td>
+<td>sd</td></tr>
+<tr><td width="25%"><a href="stddev_pop.html">stddev_pop</a></td>
+<td>stddev_pop</td></tr>
+<tr><td width="25%"><a href="stddev_pop.html">stddev_pop-method</a></td>
+<td>stddev_pop</td></tr>
+<tr><td width="25%"><a href="stddev_samp.html">stddev_samp</a></td>
+<td>stddev_samp</td></tr>
+<tr><td width="25%"><a href="stddev_samp.html">stddev_samp-method</a></td>
+<td>stddev_samp</td></tr>
+<tr><td width="25%"><a href="str.html">str</a></td>
+<td>Compactly display the structure of a dataset</td></tr>
+<tr><td width="25%"><a href="str.html">str-method</a></td>
+<td>Compactly display the structure of a dataset</td></tr>
+<tr><td width="25%"><a href="struct.html">struct</a></td>
+<td>struct</td></tr>
+<tr><td width="25%"><a href="struct.html">struct-method</a></td>
+<td>struct</td></tr>
+<tr><td width="25%"><a href="structField.html">structField</a></td>
+<td>structField</td></tr>
+<tr><td width="25%"><a href="structField.html">structField.character</a></td>
+<td>structField</td></tr>
+<tr><td width="25%"><a href="structField.html">structField.jobj</a></td>
+<td>structField</td></tr>
+<tr><td width="25%"><a href="structType.html">structType</a></td>
+<td>structType</td></tr>
+<tr><td width="25%"><a href="structType.html">structType.jobj</a></td>
+<td>structType</td></tr>
+<tr><td width="25%"><a href="structType.html">structType.structField</a></td>
+<td>structType</td></tr>
+<tr><td width="25%"><a href="subset.html">subset</a></td>
+<td>Subset</td></tr>
+<tr><td width="25%"><a href="subset.html">subset-method</a></td>
+<td>Subset</td></tr>
+<tr><td width="25%"><a href="substr.html">substr</a></td>
+<td>substr</td></tr>
+<tr><td width="25%"><a href="substr.html">substr-method</a></td>
+<td>substr</td></tr>
+<tr><td width="25%"><a href="substring_index.html">substring_index</a></td>
+<td>substring_index</td></tr>
+<tr><td width="25%"><a href="substring_index.html">substring_index-method</a></td>
+<td>substring_index</td></tr>
+<tr><td width="25%"><a href="sum.html">sum</a></td>
+<td>sum</td></tr>
+<tr><td width="25%"><a href="sum.html">sum-method</a></td>
+<td>sum</td></tr>
+<tr><td width="25%"><a href="sumDistinct.html">sumDistinct</a></td>
+<td>sumDistinct</td></tr>
+<tr><td width="25%"><a href="sumDistinct.html">sumDistinct-method</a></td>
+<td>sumDistinct</td></tr>
+<tr><td width="25%"><a href="summarize.html">summarize</a></td>
+<td>Summarize data across columns</td></tr>
+<tr><td width="25%"><a href="summarize.html">summarize-method</a></td>
+<td>Summarize data across columns</td></tr>
+<tr><td width="25%"><a href="summary.html">summary</a></td>
+<td>summary</td></tr>
+<tr><td width="25%"><a href="spark.glm.html">summary-method</a></td>
+<td>Generalized Linear Models</td></tr>
+<tr><td width="25%"><a href="spark.kmeans.html">summary-method</a></td>
+<td>K-Means Clustering Model</td></tr>
+<tr><td width="25%"><a href="spark.naiveBayes.html">summary-method</a></td>
+<td>Naive Bayes Models</td></tr>
+<tr><td width="25%"><a href="spark.survreg.html">summary-method</a></td>
+<td>Accelerated Failure Time (AFT) Survival Regression Model</td></tr>
+<tr><td width="25%"><a href="summary.html">summary-method</a></td>
+<td>summary</td></tr>
+</table>
+
+<h2><a name="T">-- T --</a></h2>
+
+<table width="100%">
+<tr><td width="25%"><a href="tableNames.html">tableNames</a></td>
+<td>Table Names</td></tr>
+<tr><td width="25%"><a href="tableNames.html">tableNames.default</a></td>
+<td>Table Names</td></tr>
+<tr><td width="25%"><a href="tables.html">tables</a></td>
+<td>Tables</td></tr>
+<tr><td width="25%"><a href="tables.html">tables.default</a></td>
+<td>Tables</td></tr>
+<tr><td width="25%"><a href="tableToDF.html">tableToDF</a></td>
+<td>Create a SparkDataFrame from a SparkSQL Table</td></tr>
+<tr><td width="25%"><a href="take.html">take</a></td>
+<td>Take the first NUM rows of a SparkDataFrame and return the results as an R data.frame</td></tr>
+<tr><td width="25%"><a href="take.html">take-method</a></td>
+<td>Take the first NUM rows of a SparkDataFrame and return the results as an R data.frame</td></tr>
+<tr><td width="25%"><a href="tan.html">tan</a></td>
+<td>tan</td></tr>
+<tr><td width="25%"><a href="tan.html">tan-method</a></td>
+<td>tan</td></tr>
+<tr><td width="25%"><a href="tanh.html">tanh</a></td>
+<td>tanh</td></tr>
+<tr><td width="25%"><a href="tanh.html">tanh-method</a></td>
+<td>tanh</td></tr>
+<tr><td width="25%"><a href="toDegrees.html">toDegrees</a></td>
+<td>toDegrees</td></tr>
+<tr><td width="25%"><a href="toDegrees.html">toDegrees-method</a></td>
+<td>toDegrees</td></tr>
+<tr><td width="25%"><a href="toRadians.html">toRadians</a></td>
+<td>toRadians</td></tr>
+<tr><td width="25%"><a href="toRadians.html">toRadians-method</a></td>
+<td>toRadians</td></tr>
+<tr><td width="25%"><a href="to_date.html">to_date</a></td>
+<td>to_date</td></tr>
+<tr><td width="25%"><a href="to_date.html">to_date-method</a></td>
+<td>to_date</td></tr>
+<tr><td width="25%"><a href="to_utc_timestamp.html">to_utc_timestamp</a></td>
+<td>to_utc_timestamp</td></tr>
+<tr><td width="25%"><a href="to_utc_timestamp.html">to_utc_timestamp-method</a></td>
+<td>to_utc_timestamp</td></tr>
+<tr><td width="25%"><a href="mutate.html">transform</a></td>
+<td>Mutate</td></tr>
+<tr><td width="25%"><a href="mutate.html">transform-method</a></td>
+<td>Mutate</td></tr>
+<tr><td width="25%"><a href="translate.html">translate</a></td>
+<td>translate</td></tr>
+<tr><td width="25%"><a href="translate.html">translate-method</a></td>
+<td>translate</td></tr>
+<tr><td width="25%"><a href="trim.html">trim</a></td>
+<td>trim</td></tr>
+<tr><td width="25%"><a href="trim.html">trim-method</a></td>
+<td>trim</td></tr>
+</table>
+
+<h2><a name="U">-- U --</a></h2>
+
+<table width="100%">
+<tr><td width="25%"><a href="unbase64.html">unbase64</a></td>
+<td>unbase64</td></tr>
+<tr><td width="25%"><a href="unbase64.html">unbase64-method</a></td>
+<td>unbase64</td></tr>
+<tr><td width="25%"><a href="uncacheTable.html">uncacheTable</a></td>
+<td>Uncache Table</td></tr>
+<tr><td width="25%"><a href="uncacheTable.html">uncacheTable.default</a></td>
+<td>Uncache Table</td></tr>
+<tr><td width="25%"><a href="unhex.html">unhex</a></td>
+<td>unhex</td></tr>
+<tr><td width="25%"><a href="unhex.html">unhex-method</a></td>
+<td>unhex</td></tr>
+<tr><td width="25%"><a href="union.html">union</a></td>
+<td>Return a new SparkDataFrame containing the union of rows</td></tr>
+<tr><td width="25%"><a href="union.html">union-method</a></td>
+<td>Return a new SparkDataFrame containing the union of rows</td></tr>
+<tr><td width="25%"><a href="union.html">unionAll</a></td>
+<td>Return a new SparkDataFrame containing the union of rows</td></tr>
+<tr><td width="25%"><a href="union.html">unionAll-method</a></td>
+<td>Return a new SparkDataFrame containing the union of rows</td></tr>
+<tr><td width="25%"><a href="distinct.html">unique</a></td>
+<td>Distinct</td></tr>
+<tr><td width="25%"><a href="distinct.html">unique-method</a></td>
+<td>Distinct</td></tr>
+<tr><td width="25%"><a href="unix_timestamp.html">unix_timestamp</a></td>
+<td>unix_timestamp</td></tr>
+<tr><td width="25%"><a href="unix_timestamp.html">unix_timestamp-method</a></td>
+<td>unix_timestamp</td></tr>
+<tr><td width="25%"><a href="unpersist-methods.html">unpersist</a></td>
+<td>Unpersist</td></tr>
+<tr><td width="25%"><a href="unpersist-methods.html">unpersist-method</a></td>
+<td>Unpersist</td></tr>
+<tr><td width="25%"><a href="upper.html">upper</a></td>
+<td>upper</td></tr>
+<tr><td width="25%"><a href="upper.html">upper-method</a></td>
+<td>upper</td></tr>
+</table>
+
+<h2><a name="V">-- V --</a></h2>
+
+<table width="100%">
+<tr><td width="25%"><a href="var.html">var</a></td>
+<td>var</td></tr>
+<tr><td width="25%"><a href="var.html">var-method</a></td>
+<td>var</td></tr>
+<tr><td width="25%"><a href="var.html">variance</a></td>
+<td>var</td></tr>
+<tr><td width="25%"><a href="var.html">variance-method</a></td>
+<td>var</td></tr>
+<tr><td width="25%"><a href="var_pop.html">var_pop</a></td>
+<td>var_pop</td></tr>
+<tr><td width="25%"><a href="var_pop.html">var_pop-method</a></td>
+<td>var_pop</td></tr>
+<tr><td width="25%"><a href="var_samp.html">var_samp</a></td>
+<td>var_samp</td></tr>
+<tr><td width="25%"><a href="var_samp.html">var_samp-method</a></td>
+<td>var_samp</td></tr>
+</table>
+
+<h2><a name="W">-- W --</a></h2>
+
+<table width="100%">
+<tr><td width="25%"><a href="weekofyear.html">weekofyear</a></td>
+<td>weekofyear</td></tr>
+<tr><td width="25%"><a href="weekofyear.html">weekofyear-method</a></td>
+<td>weekofyear</td></tr>
+<tr><td width="25%"><a href="when.html">when</a></td>
+<td>when</td></tr>
+<tr><td width="25%"><a href="when.html">when-method</a></td>
+<td>when</td></tr>
+<tr><td width="25%"><a href="filter.html">where</a></td>
+<td>Filter</td></tr>
+<tr><td width="25%"><a href="filter.html">where-method</a></td>
+<td>Filter</td></tr>
+<tr><td width="25%"><a href="window.html">window</a></td>
+<td>window</td></tr>
+<tr><td width="25%"><a href="window.html">window-method</a></td>
+<td>window</td></tr>
+<tr><td width="25%"><a href="windowOrderBy.html">windowOrderBy</a></td>
+<td>windowOrderBy</td></tr>
+<tr><td width="25%"><a href="windowOrderBy.html">windowOrderBy-method</a></td>
+<td>windowOrderBy</td></tr>
+<tr><td width="25%"><a href="windowPartitionBy.html">windowPartitionBy</a></td>
+<td>windowPartitionBy</td></tr>
+<tr><td width="25%"><a href="windowPartitionBy.html">windowPartitionBy-method</a></td>
+<td>windowPartitionBy</td></tr>
+<tr><td width="25%"><a href="WindowSpec.html">WindowSpec-class</a></td>
+<td>S4 class that represents a WindowSpec</td></tr>
+<tr><td width="25%"><a href="with.html">with</a></td>
+<td>Evaluate an R expression in an environment constructed from a SparkDataFrame</td></tr>
+<tr><td width="25%"><a href="with.html">with-method</a></td>
+<td>Evaluate an R expression in an environment constructed from a SparkDataFrame</td></tr>
+<tr><td width="25%"><a href="withColumn.html">withColumn</a></td>
+<td>WithColumn</td></tr>
+<tr><td width="25%"><a href="withColumn.html">withColumn-method</a></td>
+<td>WithColumn</td></tr>
+<tr><td width="25%"><a href="rename.html">withColumnRenamed</a></td>
+<td>rename</td></tr>
+<tr><td width="25%"><a href="rename.html">withColumnRenamed-method</a></td>
+<td>rename</td></tr>
+<tr><td width="25%"><a href="write.df.html">write.df</a></td>
+<td>Save the contents of SparkDataFrame to a data source.</td></tr>
+<tr><td width="25%"><a href="write.df.html">write.df-method</a></td>
+<td>Save the contents of SparkDataFrame to a data source.</td></tr>
+<tr><td width="25%"><a href="write.jdbc.html">write.jdbc</a></td>
+<td>Save the content of SparkDataFrame to an external database table via JDBC.</td></tr>
+<tr><td width="25%"><a href="write.jdbc.html">write.jdbc-method</a></td>
+<td>Save the content of SparkDataFrame to an external database table via JDBC.</td></tr>
+<tr><td width="25%"><a href="write.json.html">write.json</a></td>
+<td>Save the contents of SparkDataFrame as a JSON file</td></tr>
+<tr><td width="25%"><a href="write.json.html">write.json-method</a></td>
+<td>Save the contents of SparkDataFrame as a JSON file</td></tr>
+<tr><td width="25%"><a href="write.ml.html">write.ml</a></td>
+<td>Saves the MLlib model to the input path</td></tr>
+<tr><td width="25%"><a href="spark.glm.html">write.ml-method</a></td>
+<td>Generalized Linear Models</td></tr>
+<tr><td width="25%"><a href="spark.kmeans.html">write.ml-method</a></td>
+<td>K-Means Clustering Model</td></tr>
+<tr><td width="25%"><a href="spark.naiveBayes.html">write.ml-method</a></td>
+<td>Naive Bayes Models</td></tr>
+<tr><td width="25%"><a href="spark.survreg.html">write.ml-method</a></td>
+<td>Accelerated Failure Time (AFT) Survival Regression Model</td></tr>
+<tr><td width="25%"><a href="write.orc.html">write.orc</a></td>
+<td>Save the contents of SparkDataFrame as an ORC file, preserving the schema.</td></tr>
+<tr><td width="25%"><a href="write.orc.html">write.orc-method</a></td>
+<td>Save the contents of SparkDataFrame as an ORC file, preserving the schema.</td></tr>
+<tr><td width="25%"><a href="write.parquet.html">write.parquet</a></td>
+<td>Save the contents of SparkDataFrame as a Parquet file, preserving the schema.</td></tr>
+<tr><td width="25%"><a href="write.parquet.html">write.parquet-method</a></td>
+<td>Save the contents of SparkDataFrame as a Parquet file, preserving the schema.</td></tr>
+<tr><td width="25%"><a href="write.text.html">write.text</a></td>
+<td>Save the content of SparkDataFrame in a text file at the specified path.</td></tr>
+<tr><td width="25%"><a href="write.text.html">write.text-method</a></td>
+<td>Save the content of SparkDataFrame in a text file at the specified path.</td></tr>
+</table>
+
+<h2><a name="Y">-- Y --</a></h2>
+
+<table width="100%">
+<tr><td width="25%"><a href="year.html">year</a></td>
+<td>year</td></tr>
+<tr><td width="25%"><a href="year.html">year-method</a></td>
+<td>year</td></tr>
+</table>
+
+<h2><a name="misc">-- misc --</a></h2>
+
+<table width="100%">
+<tr><td width="25%"><a href="select.html">$</a></td>
+<td>Select</td></tr>
+<tr><td width="25%"><a href="select.html">$-method</a></td>
+<td>Select</td></tr>
+<tr><td width="25%"><a href="select.html">$&lt;-</a></td>
+<td>Select</td></tr>
+<tr><td width="25%"><a href="select.html">$&lt;--method</a></td>
+<td>Select</td></tr>
+<tr><td width="25%"><a href="match.html">%in%</a></td>
+<td>Match a column with given values.</td></tr>
+<tr><td width="25%"><a href="match.html">%in%-method</a></td>
+<td>Match a column with given values.</td></tr>
+<tr><td width="25%"><a href="subset.html">[</a></td>
+<td>Subset</td></tr>
+<tr><td width="25%"><a href="subset.html">[-method</a></td>
+<td>Subset</td></tr>
+<tr><td width="25%"><a href="subset.html">[[</a></td>
+<td>Subset</td></tr>
+<tr><td width="25%"><a href="subset.html">[[-method</a></td>
+<td>Subset</td></tr>
+</table>
+</body></html>