Posted to commits@spark.apache.org by ma...@apache.org on 2017/07/11 22:36:28 UTC

[23/56] [partial] spark-website git commit: Add Spark 2.2.0 docs

http://git-wip-us.apache.org/repos/asf/spark-website/blob/f7ec1155/site/docs/2.2.0/api/R/skewness.html
----------------------------------------------------------------------
diff --git a/site/docs/2.2.0/api/R/skewness.html b/site/docs/2.2.0/api/R/skewness.html
new file mode 100644
index 0000000..f84aa10
--- /dev/null
+++ b/site/docs/2.2.0/api/R/skewness.html
@@ -0,0 +1,105 @@
+<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
+<html><head><title>R: skewness</title>
+<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
+<link rel="stylesheet" type="text/css" href="R.css">
+
+<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/styles/github.min.css">
+<script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/highlight.min.js"></script>
+<script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/languages/r.min.js"></script>
+<script>hljs.initHighlightingOnLoad();</script>
+</head><body>
+
+<table width="100%" summary="page for skewness {SparkR}"><tr><td>skewness {SparkR}</td><td align="right">R Documentation</td></tr></table>
+
+<h2>skewness</h2>
+
+<h3>Description</h3>
+
+<p>Aggregate function: returns the skewness of the values in a group.
+</p>
+
+
+<h3>Usage</h3>
+
+<pre>
+## S4 method for signature 'Column'
+skewness(x)
+
+skewness(x)
+</pre>
+
+
+<h3>Arguments</h3>
+
+<table summary="R argblock">
+<tr valign="top"><td><code>x</code></td>
+<td>
+<p>Column to compute on.</p>
+</td></tr>
+</table>
+
+
+<h3>Note</h3>
+
+<p>skewness since 1.6.0
+</p>
+
+
+<h3>See Also</h3>
+
+<p>Other agg_funcs: <code><a href="summarize.html">agg</a></code>, <code><a href="summarize.html">agg</a></code>,
+<code><a href="summarize.html">agg</a></code>, <code><a href="summarize.html">agg,GroupedData-method</a></code>,
+<code><a href="summarize.html">agg,SparkDataFrame-method</a></code>,
+<code><a href="summarize.html">summarize</a></code>, <code><a href="summarize.html">summarize</a></code>,
+<code><a href="summarize.html">summarize</a></code>,
+<code><a href="summarize.html">summarize,GroupedData-method</a></code>,
+<code><a href="summarize.html">summarize,SparkDataFrame-method</a></code>;
+<code><a href="avg.html">avg</a></code>, <code><a href="avg.html">avg</a></code>,
+<code><a href="avg.html">avg,Column-method</a></code>;
+<code><a href="countDistinct.html">countDistinct</a></code>, <code><a href="countDistinct.html">countDistinct</a></code>,
+<code><a href="countDistinct.html">countDistinct,Column-method</a></code>,
+<code><a href="countDistinct.html">n_distinct</a></code>, <code><a href="countDistinct.html">n_distinct</a></code>,
+<code><a href="countDistinct.html">n_distinct,Column-method</a></code>;
+<code><a href="count.html">count</a></code>, <code><a href="count.html">count</a></code>,
+<code><a href="count.html">count,Column-method</a></code>,
+<code><a href="count.html">count,GroupedData-method</a></code>, <code><a href="count.html">n</a></code>,
+<code><a href="count.html">n</a></code>, <code><a href="count.html">n,Column-method</a></code>;
+<code><a href="first.html">first</a></code>, <code><a href="first.html">first</a></code>,
+<code><a href="first.html">first</a></code>,
+<code><a href="first.html">first,SparkDataFrame-method</a></code>,
+<code><a href="first.html">first,characterOrColumn-method</a></code>;
+<code><a href="kurtosis.html">kurtosis</a></code>, <code><a href="kurtosis.html">kurtosis</a></code>,
+<code><a href="kurtosis.html">kurtosis,Column-method</a></code>; <code><a href="last.html">last</a></code>,
+<code><a href="last.html">last</a></code>,
+<code><a href="last.html">last,characterOrColumn-method</a></code>;
+<code><a href="max.html">max</a></code>, <code><a href="max.html">max,Column-method</a></code>;
+<code><a href="mean.html">mean</a></code>, <code><a href="mean.html">mean,Column-method</a></code>;
+<code><a href="min.html">min</a></code>, <code><a href="min.html">min,Column-method</a></code>;
+<code><a href="sd.html">sd</a></code>, <code><a href="sd.html">sd</a></code>,
+<code><a href="sd.html">sd,Column-method</a></code>, <code><a href="sd.html">stddev</a></code>,
+<code><a href="sd.html">stddev</a></code>, <code><a href="sd.html">stddev,Column-method</a></code>;
+<code><a href="stddev_pop.html">stddev_pop</a></code>, <code><a href="stddev_pop.html">stddev_pop</a></code>,
+<code><a href="stddev_pop.html">stddev_pop,Column-method</a></code>;
+<code><a href="stddev_samp.html">stddev_samp</a></code>, <code><a href="stddev_samp.html">stddev_samp</a></code>,
+<code><a href="stddev_samp.html">stddev_samp,Column-method</a></code>;
+<code><a href="sumDistinct.html">sumDistinct</a></code>, <code><a href="sumDistinct.html">sumDistinct</a></code>,
+<code><a href="sumDistinct.html">sumDistinct,Column-method</a></code>;
+<code><a href="sum.html">sum</a></code>, <code><a href="sum.html">sum,Column-method</a></code>;
+<code><a href="var_pop.html">var_pop</a></code>, <code><a href="var_pop.html">var_pop</a></code>,
+<code><a href="var_pop.html">var_pop,Column-method</a></code>;
+<code><a href="var_samp.html">var_samp</a></code>, <code><a href="var_samp.html">var_samp</a></code>,
+<code><a href="var_samp.html">var_samp,Column-method</a></code>; <code><a href="var.html">var</a></code>,
+<code><a href="var.html">var</a></code>, <code><a href="var.html">var,Column-method</a></code>,
+<code><a href="var.html">variance</a></code>, <code><a href="var.html">variance</a></code>,
+<code><a href="var.html">variance,Column-method</a></code>
+</p>
+
+
+<h3>Examples</h3>
+
+<pre><code class="r">## Not run: skewness(df$c)
+</code></pre>
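+
+<p>A minimal additional sketch, assuming an active SparkR session and using the built-in <code>faithful</code> dataset: like other aggregate functions, <code>skewness</code> can be used inside <code>agg</code>.</p>
+
+<pre><code class="r">## Not run: 
+##D df &lt;- createDataFrame(faithful)
+##D # skewness of the eruptions column over the whole frame
+##D head(agg(df, skewness(df$eruptions)))
+## End(Not run)
+</code></pre>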
+
+
+<hr><div align="center">[Package <em>SparkR</em> version 2.2.0 <a href="00Index.html">Index</a>]</div>
+</body></html>

http://git-wip-us.apache.org/repos/asf/spark-website/blob/f7ec1155/site/docs/2.2.0/api/R/sort_array.html
----------------------------------------------------------------------
diff --git a/site/docs/2.2.0/api/R/sort_array.html b/site/docs/2.2.0/api/R/sort_array.html
new file mode 100644
index 0000000..ecf9e86
--- /dev/null
+++ b/site/docs/2.2.0/api/R/sort_array.html
@@ -0,0 +1,79 @@
+<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
+<html><head><title>R: sort_array</title>
+<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
+<link rel="stylesheet" type="text/css" href="R.css">
+
+<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/styles/github.min.css">
+<script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/highlight.min.js"></script>
+<script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/languages/r.min.js"></script>
+<script>hljs.initHighlightingOnLoad();</script>
+</head><body>
+
+<table width="100%" summary="page for sort_array {SparkR}"><tr><td>sort_array {SparkR}</td><td align="right">R Documentation</td></tr></table>
+
+<h2>sort_array</h2>
+
+<h3>Description</h3>
+
+<p>Sorts the input array in ascending or descending order according
+to the natural ordering of the array elements.
+</p>
+
+
+<h3>Usage</h3>
+
+<pre>
+## S4 method for signature 'Column'
+sort_array(x, asc = TRUE)
+
+sort_array(x, asc = TRUE)
+</pre>
+
+
+<h3>Arguments</h3>
+
+<table summary="R argblock">
+<tr valign="top"><td><code>x</code></td>
+<td>
+<p>A Column to sort</p>
+</td></tr>
+<tr valign="top"><td><code>asc</code></td>
+<td>
+<p>A logical flag indicating the sorting order:
+TRUE sorts in ascending order,
+FALSE in descending order.</p>
+</td></tr>
+</table>
+
+
+<h3>Note</h3>
+
+<p>sort_array since 1.6.0
+</p>
+
+
+<h3>See Also</h3>
+
+<p>Other collection_funcs: <code><a href="array_contains.html">array_contains</a></code>,
+<code><a href="array_contains.html">array_contains</a></code>,
+<code><a href="array_contains.html">array_contains,Column-method</a></code>;
+<code><a href="explode.html">explode</a></code>, <code><a href="explode.html">explode</a></code>,
+<code><a href="explode.html">explode,Column-method</a></code>;
+<code><a href="posexplode.html">posexplode</a></code>, <code><a href="posexplode.html">posexplode</a></code>,
+<code><a href="posexplode.html">posexplode,Column-method</a></code>;
+<code><a href="size.html">size</a></code>, <code><a href="size.html">size</a></code>,
+<code><a href="size.html">size,Column-method</a></code>
+</p>
+
+
+<h3>Examples</h3>
+
+<pre><code class="r">## Not run: 
+##D sort_array(df$c)
+##D sort_array(df$c, FALSE)
+## End(Not run)
+</code></pre>
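+
+<p>A fuller sketch, assuming a hypothetical frame whose string column is split into an array column with <code>selectExpr</code>:</p>
+
+<pre><code class="r">## Not run: 
+##D df &lt;- createDataFrame(data.frame(csv = &quot;b,a,c&quot;))
+##D arr &lt;- selectExpr(df, &quot;split(csv, &#39;,&#39;) as items&quot;)
+##D head(select(arr, sort_array(arr$items)))         # ascending: a, b, c
+##D head(select(arr, sort_array(arr$items, FALSE)))  # descending: c, b, a
+## End(Not run)
+</code></pre>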
+
+
+<hr><div align="center">[Package <em>SparkR</em> version 2.2.0 <a href="00Index.html">Index</a>]</div>
+</body></html>

http://git-wip-us.apache.org/repos/asf/spark-website/blob/f7ec1155/site/docs/2.2.0/api/R/soundex.html
----------------------------------------------------------------------
diff --git a/site/docs/2.2.0/api/R/soundex.html b/site/docs/2.2.0/api/R/soundex.html
new file mode 100644
index 0000000..cae90af
--- /dev/null
+++ b/site/docs/2.2.0/api/R/soundex.html
@@ -0,0 +1,114 @@
+<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
+<html><head><title>R: soundex</title>
+<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
+<link rel="stylesheet" type="text/css" href="R.css">
+
+<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/styles/github.min.css">
+<script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/highlight.min.js"></script>
+<script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/languages/r.min.js"></script>
+<script>hljs.initHighlightingOnLoad();</script>
+</head><body>
+
+<table width="100%" summary="page for soundex {SparkR}"><tr><td>soundex {SparkR}</td><td align="right">R Documentation</td></tr></table>
+
+<h2>soundex</h2>
+
+<h3>Description</h3>
+
+<p>Returns the soundex code for the specified expression.
+</p>
+
+
+<h3>Usage</h3>
+
+<pre>
+## S4 method for signature 'Column'
+soundex(x)
+
+soundex(x)
+</pre>
+
+
+<h3>Arguments</h3>
+
+<table summary="R argblock">
+<tr valign="top"><td><code>x</code></td>
+<td>
+<p>Column to compute on.</p>
+</td></tr>
+</table>
+
+
+<h3>Note</h3>
+
+<p>soundex since 1.5.0
+</p>
+
+
+<h3>See Also</h3>
+
+<p>Other string_funcs: <code><a href="ascii.html">ascii</a></code>,
+<code><a href="ascii.html">ascii</a></code>, <code><a href="ascii.html">ascii,Column-method</a></code>;
+<code><a href="base64.html">base64</a></code>, <code><a href="base64.html">base64</a></code>,
+<code><a href="base64.html">base64,Column-method</a></code>;
+<code><a href="concat_ws.html">concat_ws</a></code>, <code><a href="concat_ws.html">concat_ws</a></code>,
+<code><a href="concat_ws.html">concat_ws,character,Column-method</a></code>;
+<code><a href="concat.html">concat</a></code>, <code><a href="concat.html">concat</a></code>,
+<code><a href="concat.html">concat,Column-method</a></code>; <code><a href="decode.html">decode</a></code>,
+<code><a href="decode.html">decode</a></code>,
+<code><a href="decode.html">decode,Column,character-method</a></code>;
+<code><a href="encode.html">encode</a></code>, <code><a href="encode.html">encode</a></code>,
+<code><a href="encode.html">encode,Column,character-method</a></code>;
+<code><a href="format_number.html">format_number</a></code>, <code><a href="format_number.html">format_number</a></code>,
+<code><a href="format_number.html">format_number,Column,numeric-method</a></code>;
+<code><a href="format_string.html">format_string</a></code>, <code><a href="format_string.html">format_string</a></code>,
+<code><a href="format_string.html">format_string,character,Column-method</a></code>;
+<code><a href="initcap.html">initcap</a></code>, <code><a href="initcap.html">initcap</a></code>,
+<code><a href="initcap.html">initcap,Column-method</a></code>; <code><a href="instr.html">instr</a></code>,
+<code><a href="instr.html">instr</a></code>,
+<code><a href="instr.html">instr,Column,character-method</a></code>;
+<code><a href="length.html">length</a></code>, <code><a href="length.html">length,Column-method</a></code>;
+<code><a href="levenshtein.html">levenshtein</a></code>, <code><a href="levenshtein.html">levenshtein</a></code>,
+<code><a href="levenshtein.html">levenshtein,Column-method</a></code>;
+<code><a href="locate.html">locate</a></code>, <code><a href="locate.html">locate</a></code>,
+<code><a href="locate.html">locate,character,Column-method</a></code>;
+<code><a href="lower.html">lower</a></code>, <code><a href="lower.html">lower</a></code>,
+<code><a href="lower.html">lower,Column-method</a></code>; <code><a href="lpad.html">lpad</a></code>,
+<code><a href="lpad.html">lpad</a></code>,
+<code><a href="lpad.html">lpad,Column,numeric,character-method</a></code>;
+<code><a href="ltrim.html">ltrim</a></code>, <code><a href="ltrim.html">ltrim</a></code>,
+<code><a href="ltrim.html">ltrim,Column-method</a></code>;
+<code><a href="regexp_extract.html">regexp_extract</a></code>,
+<code><a href="regexp_extract.html">regexp_extract</a></code>,
+<code><a href="regexp_extract.html">regexp_extract,Column,character,numeric-method</a></code>;
+<code><a href="regexp_replace.html">regexp_replace</a></code>,
+<code><a href="regexp_replace.html">regexp_replace</a></code>,
+<code><a href="regexp_replace.html">regexp_replace,Column,character,character-method</a></code>;
+<code><a href="reverse.html">reverse</a></code>, <code><a href="reverse.html">reverse</a></code>,
+<code><a href="reverse.html">reverse,Column-method</a></code>; <code><a href="rpad.html">rpad</a></code>,
+<code><a href="rpad.html">rpad</a></code>,
+<code><a href="rpad.html">rpad,Column,numeric,character-method</a></code>;
+<code><a href="rtrim.html">rtrim</a></code>, <code><a href="rtrim.html">rtrim</a></code>,
+<code><a href="rtrim.html">rtrim,Column-method</a></code>;
+<code><a href="substring_index.html">substring_index</a></code>,
+<code><a href="substring_index.html">substring_index</a></code>,
+<code><a href="substring_index.html">substring_index,Column,character,numeric-method</a></code>;
+<code><a href="translate.html">translate</a></code>, <code><a href="translate.html">translate</a></code>,
+<code><a href="translate.html">translate,Column,character,character-method</a></code>;
+<code><a href="trim.html">trim</a></code>, <code><a href="trim.html">trim</a></code>,
+<code><a href="trim.html">trim,Column-method</a></code>; <code><a href="unbase64.html">unbase64</a></code>,
+<code><a href="unbase64.html">unbase64</a></code>,
+<code><a href="unbase64.html">unbase64,Column-method</a></code>;
+<code><a href="upper.html">upper</a></code>, <code><a href="upper.html">upper</a></code>,
+<code><a href="upper.html">upper,Column-method</a></code>
+</p>
+
+
+<h3>Examples</h3>
+
+<pre><code class="r">## Not run: soundex(df$c)
+</code></pre>
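+
+<p>A minimal sketch, assuming a frame with a character column; phonetically similar names map to the same code:</p>
+
+<pre><code class="r">## Not run: 
+##D df &lt;- createDataFrame(data.frame(name = c(&quot;Robert&quot;, &quot;Rupert&quot;)))
+##D # both names encode to the soundex code R163
+##D head(select(df, df$name, soundex(df$name)))
+## End(Not run)
+</code></pre>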
+
+
+<hr><div align="center">[Package <em>SparkR</em> version 2.2.0 <a href="00Index.html">Index</a>]</div>
+</body></html>

http://git-wip-us.apache.org/repos/asf/spark-website/blob/f7ec1155/site/docs/2.2.0/api/R/spark.addFile.html
----------------------------------------------------------------------
diff --git a/site/docs/2.2.0/api/R/spark.addFile.html b/site/docs/2.2.0/api/R/spark.addFile.html
new file mode 100644
index 0000000..33bf289
--- /dev/null
+++ b/site/docs/2.2.0/api/R/spark.addFile.html
@@ -0,0 +1,68 @@
+<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
+<html><head><title>R: Add a file or directory to be downloaded with this Spark job...</title>
+<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
+<link rel="stylesheet" type="text/css" href="R.css">
+
+<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/styles/github.min.css">
+<script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/highlight.min.js"></script>
+<script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/languages/r.min.js"></script>
+<script>hljs.initHighlightingOnLoad();</script>
+</head><body>
+
+<table width="100%" summary="page for spark.addFile {SparkR}"><tr><td>spark.addFile {SparkR}</td><td align="right">R Documentation</td></tr></table>
+
+<h2>Add a file or directory to be downloaded with this Spark job on every node.</h2>
+
+<h3>Description</h3>
+
+<p>The path passed can be either a local file, a file in HDFS (or other Hadoop-supported
+filesystems), or an HTTP, HTTPS or FTP URI. To access the file in Spark jobs,
+use spark.getSparkFiles(fileName) to find its download location.
+</p>
+
+
+<h3>Usage</h3>
+
+<pre>
+spark.addFile(path, recursive = FALSE)
+</pre>
+
+
+<h3>Arguments</h3>
+
+<table summary="R argblock">
+<tr valign="top"><td><code>path</code></td>
+<td>
+<p>The path of the file to be added.</p>
+</td></tr>
+<tr valign="top"><td><code>recursive</code></td>
+<td>
+<p>Whether to add files recursively from the path. Default is FALSE.</p>
+</td></tr>
+</table>
+
+
+<h3>Details</h3>
+
+<p>A directory can be given if the recursive option is set to TRUE.
+Currently, directories are only supported for Hadoop-supported filesystems.
+See the list of Hadoop-supported filesystems at <a href="https://wiki.apache.org/hadoop/HCFS">https://wiki.apache.org/hadoop/HCFS</a>.
+</p>
+
+
+<h3>Note</h3>
+
+<p>spark.addFile since 2.1.0
+</p>
+
+
+<h3>Examples</h3>
+
+<pre><code class="r">## Not run: 
+##D spark.addFile(&quot;~/myfile&quot;)
+## End(Not run)
+</code></pre>
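+
+<p>A slightly fuller sketch, assuming a local file <code>/tmp/data.csv</code> exists on the driver; it pairs <code>spark.addFile</code> with <code>spark.getSparkFiles</code>, as the description suggests:</p>
+
+<pre><code class="r">## Not run: 
+##D spark.addFile(&quot;/tmp/data.csv&quot;)
+##D # resolve the download location by file name, e.g. inside tasks
+##D spark.getSparkFiles(&quot;data.csv&quot;)
+##D 
+##D # add a whole directory (Hadoop-supported filesystems only)
+##D spark.addFile(&quot;/tmp/mydir&quot;, recursive = TRUE)
+## End(Not run)
+</code></pre>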
+
+
+<hr><div align="center">[Package <em>SparkR</em> version 2.2.0 <a href="00Index.html">Index</a>]</div>
+</body></html>

http://git-wip-us.apache.org/repos/asf/spark-website/blob/f7ec1155/site/docs/2.2.0/api/R/spark.als.html
----------------------------------------------------------------------
diff --git a/site/docs/2.2.0/api/R/spark.als.html b/site/docs/2.2.0/api/R/spark.als.html
new file mode 100644
index 0000000..47617de
--- /dev/null
+++ b/site/docs/2.2.0/api/R/spark.als.html
@@ -0,0 +1,203 @@
+<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
+<html><head><title>R: Alternating Least Squares (ALS) for Collaborative Filtering</title>
+<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
+<link rel="stylesheet" type="text/css" href="R.css">
+
+<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/styles/github.min.css">
+<script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/highlight.min.js"></script>
+<script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/languages/r.min.js"></script>
+<script>hljs.initHighlightingOnLoad();</script>
+</head><body>
+
+<table width="100%" summary="page for spark.als {SparkR}"><tr><td>spark.als {SparkR}</td><td align="right">R Documentation</td></tr></table>
+
+<h2>Alternating Least Squares (ALS) for Collaborative Filtering</h2>
+
+<h3>Description</h3>
+
+<p><code>spark.als</code> learns latent factors in collaborative filtering via alternating least
+squares. Users can call <code>summary</code> to obtain fitted latent factors, <code>predict</code>
+to make predictions on new data, and <code>write.ml</code>/<code>read.ml</code> to save/load fitted models.
+</p>
+
+
+<h3>Usage</h3>
+
+<pre>
+spark.als(data, ...)
+
+## S4 method for signature 'SparkDataFrame'
+spark.als(data, ratingCol = "rating",
+  userCol = "user", itemCol = "item", rank = 10, regParam = 0.1,
+  maxIter = 10, nonnegative = FALSE, implicitPrefs = FALSE, alpha = 1,
+  numUserBlocks = 10, numItemBlocks = 10, checkpointInterval = 10,
+  seed = 0)
+
+## S4 method for signature 'ALSModel'
+summary(object)
+
+## S4 method for signature 'ALSModel'
+predict(object, newData)
+
+## S4 method for signature 'ALSModel,character'
+write.ml(object, path, overwrite = FALSE)
+</pre>
+
+
+<h3>Arguments</h3>
+
+<table summary="R argblock">
+<tr valign="top"><td><code>data</code></td>
+<td>
+<p>a SparkDataFrame for training.</p>
+</td></tr>
+<tr valign="top"><td><code>...</code></td>
+<td>
+<p>additional argument(s) passed to the method.</p>
+</td></tr>
+<tr valign="top"><td><code>ratingCol</code></td>
+<td>
+<p>column name for ratings.</p>
+</td></tr>
+<tr valign="top"><td><code>userCol</code></td>
+<td>
+<p>column name for user ids. Ids must be (or can be coerced into) integers.</p>
+</td></tr>
+<tr valign="top"><td><code>itemCol</code></td>
+<td>
+<p>column name for item ids. Ids must be (or can be coerced into) integers.</p>
+</td></tr>
+<tr valign="top"><td><code>rank</code></td>
+<td>
+<p>rank of the matrix factorization (&gt; 0).</p>
+</td></tr>
+<tr valign="top"><td><code>regParam</code></td>
+<td>
+<p>regularization parameter (&gt;= 0).</p>
+</td></tr>
+<tr valign="top"><td><code>maxIter</code></td>
+<td>
+<p>maximum number of iterations (&gt;= 0).</p>
+</td></tr>
+<tr valign="top"><td><code>nonnegative</code></td>
+<td>
+<p>logical value indicating whether to apply nonnegativity constraints.</p>
+</td></tr>
+<tr valign="top"><td><code>implicitPrefs</code></td>
+<td>
+<p>logical value indicating whether to use implicit preference.</p>
+</td></tr>
+<tr valign="top"><td><code>alpha</code></td>
+<td>
+<p>alpha parameter in the implicit preference formulation (&gt;= 0).</p>
+</td></tr>
+<tr valign="top"><td><code>numUserBlocks</code></td>
+<td>
+<p>number of user blocks used to parallelize computation (&gt; 0).</p>
+</td></tr>
+<tr valign="top"><td><code>numItemBlocks</code></td>
+<td>
+<p>number of item blocks used to parallelize computation (&gt; 0).</p>
+</td></tr>
+<tr valign="top"><td><code>checkpointInterval</code></td>
+<td>
+<p>checkpoint interval (&gt;= 1), or -1 to disable checkpointing.</p>
+</td></tr>
+<tr valign="top"><td><code>seed</code></td>
+<td>
+<p>integer seed for random number generation.</p>
+</td></tr>
+<tr valign="top"><td><code>object</code></td>
+<td>
+<p>a fitted ALS model.</p>
+</td></tr>
+<tr valign="top"><td><code>newData</code></td>
+<td>
+<p>a SparkDataFrame for testing.</p>
+</td></tr>
+<tr valign="top"><td><code>path</code></td>
+<td>
+<p>the directory where the model is saved.</p>
+</td></tr>
+<tr valign="top"><td><code>overwrite</code></td>
+<td>
+<p>logical value indicating whether to overwrite if the output path
+already exists. Default is FALSE, which means an exception is thrown
+if the output path exists.</p>
+</td></tr>
+</table>
+
+
+<h3>Details</h3>
+
+<p>For more details, see
+<a href="http://spark.apache.org/docs/latest/ml-collaborative-filtering.html">MLlib:
+Collaborative Filtering</a>.
+</p>
+
+
+<h3>Value</h3>
+
+<p><code>spark.als</code> returns a fitted ALS model.
+</p>
+<p><code>summary</code> returns summary information of the fitted model, which is a list.
+The list includes <code>user</code> (the name of the user column),
+<code>item</code> (the name of the item column), <code>rating</code> (the name of the rating column), <code>userFactors</code>
+(the estimated user factors), <code>itemFactors</code> (the estimated item factors),
+and <code>rank</code> (rank of the matrix factorization model).
+</p>
+<p><code>predict</code> returns a SparkDataFrame containing predicted values.
+</p>
+
+
+<h3>Note</h3>
+
+<p>spark.als since 2.1.0
+</p>
+<p>summary(ALSModel) since 2.1.0
+</p>
+<p>predict(ALSModel) since 2.1.0
+</p>
+<p>write.ml(ALSModel, character) since 2.1.0
+</p>
+
+
+<h3>See Also</h3>
+
+<p><a href="read.ml.html">read.ml</a>
+</p>
+
+
+<h3>Examples</h3>
+
+<pre><code class="r">## Not run: 
+##D ratings &lt;- list(list(0, 0, 4.0), list(0, 1, 2.0), list(1, 1, 3.0), list(1, 2, 4.0),
+##D                 list(2, 1, 1.0), list(2, 2, 5.0))
+##D df &lt;- createDataFrame(ratings, c(&quot;user&quot;, &quot;item&quot;, &quot;rating&quot;))
+##D model &lt;- spark.als(df, &quot;rating&quot;, &quot;user&quot;, &quot;item&quot;)
+##D 
+##D # extract latent factors
+##D stats &lt;- summary(model)
+##D userFactors &lt;- stats$userFactors
+##D itemFactors &lt;- stats$itemFactors
+##D 
+##D # make predictions
+##D predicted &lt;- predict(model, df)
+##D showDF(predicted)
+##D 
+##D # save and load the model
+##D path &lt;- &quot;path/to/model&quot;
+##D write.ml(model, path)
+##D savedModel &lt;- read.ml(path)
+##D summary(savedModel)
+##D 
+##D # set other arguments
+##D modelS &lt;- spark.als(df, &quot;rating&quot;, &quot;user&quot;, &quot;item&quot;, rank = 20,
+##D                     regParam = 0.1, nonnegative = TRUE)
+##D statsS &lt;- summary(modelS)
+## End(Not run)
+</code></pre>
+
+
+<hr><div align="center">[Package <em>SparkR</em> version 2.2.0 <a href="00Index.html">Index</a>]</div>
+</body></html>

http://git-wip-us.apache.org/repos/asf/spark-website/blob/f7ec1155/site/docs/2.2.0/api/R/spark.bisectingKmeans.html
----------------------------------------------------------------------
diff --git a/site/docs/2.2.0/api/R/spark.bisectingKmeans.html b/site/docs/2.2.0/api/R/spark.bisectingKmeans.html
new file mode 100644
index 0000000..ad43bb0
--- /dev/null
+++ b/site/docs/2.2.0/api/R/spark.bisectingKmeans.html
@@ -0,0 +1,180 @@
+<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
+<html><head><title>R: Bisecting K-Means Clustering Model</title>
+<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
+<link rel="stylesheet" type="text/css" href="R.css">
+
+<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/styles/github.min.css">
+<script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/highlight.min.js"></script>
+<script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/languages/r.min.js"></script>
+<script>hljs.initHighlightingOnLoad();</script>
+</head><body>
+
+<table width="100%" summary="page for spark.bisectingKmeans {SparkR}"><tr><td>spark.bisectingKmeans {SparkR}</td><td align="right">R Documentation</td></tr></table>
+
+<h2>Bisecting K-Means Clustering Model</h2>
+
+<h3>Description</h3>
+
+<p>Fits a bisecting k-means clustering model against a SparkDataFrame.
+Users can call <code>summary</code> to print a summary of the fitted model, <code>predict</code> to make
+predictions on new data, and <code>write.ml</code>/<code>read.ml</code> to save/load fitted models.
+</p>
+<p>Get the fitted result from a bisecting k-means model.
+Note: a model that has been saved and then loaded back does not support this method.
+</p>
+
+
+<h3>Usage</h3>
+
+<pre>
+spark.bisectingKmeans(data, formula, ...)
+
+## S4 method for signature 'SparkDataFrame,formula'
+spark.bisectingKmeans(data, formula, k = 4,
+  maxIter = 20, seed = NULL, minDivisibleClusterSize = 1)
+
+## S4 method for signature 'BisectingKMeansModel'
+summary(object)
+
+## S4 method for signature 'BisectingKMeansModel'
+predict(object, newData)
+
+## S4 method for signature 'BisectingKMeansModel'
+fitted(object, method = c("centers",
+  "classes"))
+
+## S4 method for signature 'BisectingKMeansModel,character'
+write.ml(object, path,
+  overwrite = FALSE)
+</pre>
+
+
+<h3>Arguments</h3>
+
+<table summary="R argblock">
+<tr valign="top"><td><code>data</code></td>
+<td>
+<p>a SparkDataFrame for training.</p>
+</td></tr>
+<tr valign="top"><td><code>formula</code></td>
+<td>
+<p>a symbolic description of the model to be fitted. Currently only a few formula
+operators are supported, including '~', '.', ':', '+', and '-'.
+Note that the response variable of formula is empty in spark.bisectingKmeans.</p>
+</td></tr>
+<tr valign="top"><td><code>...</code></td>
+<td>
+<p>additional argument(s) passed to the method.</p>
+</td></tr>
+<tr valign="top"><td><code>k</code></td>
+<td>
+<p>the desired number of leaf clusters. Must be &gt; 1.
+The actual number could be smaller if there are no divisible leaf clusters.</p>
+</td></tr>
+<tr valign="top"><td><code>maxIter</code></td>
+<td>
+<p>maximum iteration number.</p>
+</td></tr>
+<tr valign="top"><td><code>seed</code></td>
+<td>
+<p>the random seed.</p>
+</td></tr>
+<tr valign="top"><td><code>minDivisibleClusterSize</code></td>
+<td>
+<p>The minimum number of points (if greater than or equal to 1.0)
+or the minimum proportion of points (if less than 1.0) of a divisible cluster.
+Note that it is an expert parameter. The default value should be good enough
+for most cases.</p>
+</td></tr>
+<tr valign="top"><td><code>object</code></td>
+<td>
+<p>a fitted bisecting k-means model.</p>
+</td></tr>
+<tr valign="top"><td><code>newData</code></td>
+<td>
+<p>a SparkDataFrame for testing.</p>
+</td></tr>
+<tr valign="top"><td><code>method</code></td>
+<td>
+<p>type of fitted results, <code>"centers"</code> for cluster centers
+or <code>"classes"</code> for assigned classes.</p>
+</td></tr>
+<tr valign="top"><td><code>path</code></td>
+<td>
+<p>the directory where the model is saved.</p>
+</td></tr>
+<tr valign="top"><td><code>overwrite</code></td>
+<td>
+<p>logical value indicating whether to overwrite if the output path already exists.
+Default is FALSE, which means an exception is thrown if the output path exists.</p>
+</td></tr>
+</table>
+
+
+<h3>Value</h3>
+
+<p><code>spark.bisectingKmeans</code> returns a fitted bisecting k-means model.
+</p>
+<p><code>summary</code> returns summary information of the fitted model, which is a list.
+The list includes the model's <code>k</code> (number of cluster centers),
+<code>coefficients</code> (model cluster centers),
+<code>size</code> (number of data points in each cluster), <code>cluster</code>
+(cluster centers of the transformed data; cluster is NULL if is.loaded is TRUE),
+and <code>is.loaded</code> (whether the model is loaded from a saved file).
+</p>
+<p><code>predict</code> returns the predicted values based on a bisecting k-means model.
+</p>
+<p><code>fitted</code> returns a SparkDataFrame containing fitted values.
+</p>
+
+
+<h3>Note</h3>
+
+<p>spark.bisectingKmeans since 2.2.0
+</p>
+<p>summary(BisectingKMeansModel) since 2.2.0
+</p>
+<p>predict(BisectingKMeansModel) since 2.2.0
+</p>
+<p>fitted since 2.2.0
+</p>
+<p>write.ml(BisectingKMeansModel, character) since 2.2.0
+</p>
+
+
+<h3>See Also</h3>
+
+<p><a href="predict.html">predict</a>, <a href="read.ml.html">read.ml</a>, <a href="write.ml.html">write.ml</a>
+</p>
+
+
+<h3>Examples</h3>
+
+<pre><code class="r">## Not run: 
+##D sparkR.session()
+##D t &lt;- as.data.frame(Titanic)
+##D df &lt;- createDataFrame(t)
+##D model &lt;- spark.bisectingKmeans(df, Class ~ Survived, k = 4)
+##D summary(model)
+##D 
+##D # get fitted result from a bisecting k-means model
+##D fitted.model &lt;- fitted(model, &quot;centers&quot;)
+##D showDF(fitted.model)
+##D 
+##D # fitted values on training data
+##D fitted &lt;- predict(model, df)
+##D head(select(fitted, &quot;Class&quot;, &quot;prediction&quot;))
+##D 
+##D # save fitted model to input path
+##D path &lt;- &quot;path/to/model&quot;
+##D write.ml(model, path)
+##D 
+##D # can also read back the saved model and print
+##D savedModel &lt;- read.ml(path)
+##D summary(savedModel)
+## End(Not run)
+</code></pre>
+
+
+<hr><div align="center">[Package <em>SparkR</em> version 2.2.0 <a href="00Index.html">Index</a>]</div>
+</body></html>

http://git-wip-us.apache.org/repos/asf/spark-website/blob/f7ec1155/site/docs/2.2.0/api/R/spark.fpGrowth.html
----------------------------------------------------------------------
diff --git a/site/docs/2.2.0/api/R/spark.fpGrowth.html b/site/docs/2.2.0/api/R/spark.fpGrowth.html
new file mode 100644
index 0000000..0a23b82
--- /dev/null
+++ b/site/docs/2.2.0/api/R/spark.fpGrowth.html
@@ -0,0 +1,181 @@
+<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
+<html><head><title>R: FP-growth</title>
+<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
+<link rel="stylesheet" type="text/css" href="R.css">
+
+<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/styles/github.min.css">
+<script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/highlight.min.js"></script>
+<script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/languages/r.min.js"></script>
+<script>hljs.initHighlightingOnLoad();</script>
+</head><body>
+
+<table width="100%" summary="page for spark.fpGrowth {SparkR}"><tr><td>spark.fpGrowth {SparkR}</td><td align="right">R Documentation</td></tr></table>
+
+<h2>FP-growth</h2>
+
+<h3>Description</h3>
+
+<p>A parallel FP-growth algorithm to mine frequent itemsets.
+<code>spark.fpGrowth</code> fits an FP-growth model on a SparkDataFrame. Users can call
+<code>spark.freqItemsets</code> to get frequent itemsets, <code>spark.associationRules</code> to get
+association rules, <code>predict</code> to make predictions on new data based on generated association
+rules, and <code>write.ml</code>/<code>read.ml</code> to save/load fitted models.
+For more details, see
+<a href="https://spark.apache.org/docs/latest/mllib-frequent-pattern-mining.html#fp-growth">
+FP-growth</a>.
+</p>
+
+
+<h3>Usage</h3>
+
+<pre>
+spark.fpGrowth(data, ...)
+
+spark.freqItemsets(object)
+
+spark.associationRules(object)
+
+## S4 method for signature 'SparkDataFrame'
+spark.fpGrowth(data, minSupport = 0.3,
+  minConfidence = 0.8, itemsCol = "items", numPartitions = NULL)
+
+## S4 method for signature 'FPGrowthModel'
+spark.freqItemsets(object)
+
+## S4 method for signature 'FPGrowthModel'
+spark.associationRules(object)
+
+## S4 method for signature 'FPGrowthModel'
+predict(object, newData)
+
+## S4 method for signature 'FPGrowthModel,character'
+write.ml(object, path, overwrite = FALSE)
+</pre>
+
+
+<h3>Arguments</h3>
+
+<table summary="R argblock">
+<tr valign="top"><td><code>data</code></td>
+<td>
+<p>A SparkDataFrame for training.</p>
+</td></tr>
+<tr valign="top"><td><code>...</code></td>
+<td>
+<p>additional argument(s) passed to the method.</p>
+</td></tr>
+<tr valign="top"><td><code>object</code></td>
+<td>
+<p>a fitted FPGrowth model.</p>
+</td></tr>
+<tr valign="top"><td><code>minSupport</code></td>
+<td>
+<p>Minimum support level.</p>
+</td></tr>
+<tr valign="top"><td><code>minConfidence</code></td>
+<td>
+<p>Minimum confidence level.</p>
+</td></tr>
+<tr valign="top"><td><code>itemsCol</code></td>
+<td>
+<p>Features column name.</p>
+</td></tr>
+<tr valign="top"><td><code>numPartitions</code></td>
+<td>
+<p>Number of partitions used for fitting.</p>
+</td></tr>
+<tr valign="top"><td><code>newData</code></td>
+<td>
+<p>a SparkDataFrame for testing.</p>
+</td></tr>
+<tr valign="top"><td><code>path</code></td>
+<td>
+<p>the directory where the model is saved.</p>
+</td></tr>
+<tr valign="top"><td><code>overwrite</code></td>
+<td>
+<p>logical value indicating whether to overwrite if the output path
+already exists. Default is FALSE, which means an exception is thrown
+if the output path exists.</p>
+</td></tr>
+</table>
+
+
+<h3>Value</h3>
+
+<p><code>spark.fpGrowth</code> returns a fitted FPGrowth model.
+</p>
+<p><code>spark.freqItemsets</code> returns a <code>SparkDataFrame</code> with frequent itemsets.
+The <code>SparkDataFrame</code> contains two columns:
+<code>items</code> (an array of the same type as the input column)
+and <code>freq</code> (frequency of the itemset).
+</p>
+<p><code>spark.associationRules</code> returns a <code>SparkDataFrame</code> with association rules.
+The <code>SparkDataFrame</code> contains three columns:
+<code>antecedent</code> (an array of the same type as the input column),
+<code>consequent</code> (an array of the same type as the input column),
+and <code>confidence</code> (the confidence of the rule).
+</p>
+<p><code>predict</code> returns a SparkDataFrame containing predicted values.
+</p>
+
+
+<h3>Note</h3>
+
+<p>spark.fpGrowth since 2.2.0
+</p>
+<p>spark.freqItemsets(FPGrowthModel) since 2.2.0
+</p>
+<p>spark.associationRules(FPGrowthModel) since 2.2.0
+</p>
+<p>predict(FPGrowthModel) since 2.2.0
+</p>
+<p>write.ml(FPGrowthModel, character) since 2.2.0
+</p>
+
+
+<h3>See Also</h3>
+
+<p><a href="read.ml.html">read.ml</a>
+</p>
+
+
+<h3>Examples</h3>
+
+<pre><code class="r">## Not run: 
+##D raw_data &lt;- read.df(
+##D   &quot;data/mllib/sample_fpgrowth.txt&quot;,
+##D   source = &quot;csv&quot;,
+##D   schema = structType(structField(&quot;raw_items&quot;, &quot;string&quot;)))
+##D 
+##D data &lt;- selectExpr(raw_data, &quot;split(raw_items, &#39; &#39;) as items&quot;)
+##D model &lt;- spark.fpGrowth(data)
+##D 
+##D # Show frequent itemsets
+##D frequent_itemsets &lt;- spark.freqItemsets(model)
+##D showDF(frequent_itemsets)
+##D 
+##D # Show association rules
+##D association_rules &lt;- spark.associationRules(model)
+##D showDF(association_rules)
+##D 
+##D # Predict on new data
+##D new_itemsets &lt;- data.frame(items = c(&quot;t&quot;, &quot;t,s&quot;))
+##D new_data &lt;- selectExpr(createDataFrame(new_itemsets), &quot;split(items, &#39;,&#39;) as items&quot;)
+##D predict(model, new_data)
+##D 
+##D # Save and load model
+##D path &lt;- &quot;/path/to/model&quot;
+##D write.ml(model, path)
+##D read.ml(path)
+##D 
+##D # Optional arguments
+##D baskets_data &lt;- selectExpr(createDataFrame(itemsets), &quot;split(items, &#39;,&#39;) as baskets&quot;)
+##D another_model &lt;- spark.fpGrowth(data, minSupport = 0.1, minConfidence = 0.5,
+##D                                 itemsCol = &quot;baskets&quot;, numPartitions = 10)
+## End(Not run)
+</code></pre>
+
+
+<hr><div align="center">[Package <em>SparkR</em> version 2.2.0 <a href="00Index.html">Index</a>]</div>
+</body></html>

http://git-wip-us.apache.org/repos/asf/spark-website/blob/f7ec1155/site/docs/2.2.0/api/R/spark.gaussianMixture.html
----------------------------------------------------------------------
diff --git a/site/docs/2.2.0/api/R/spark.gaussianMixture.html b/site/docs/2.2.0/api/R/spark.gaussianMixture.html
new file mode 100644
index 0000000..7ed710b
--- /dev/null
+++ b/site/docs/2.2.0/api/R/spark.gaussianMixture.html
@@ -0,0 +1,157 @@
+<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
+<html><head><title>R: Multivariate Gaussian Mixture Model (GMM)</title>
+<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
+<link rel="stylesheet" type="text/css" href="R.css">
+
+<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/styles/github.min.css">
+<script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/highlight.min.js"></script>
+<script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/languages/r.min.js"></script>
+<script>hljs.initHighlightingOnLoad();</script>
+</head><body>
+
+<table width="100%" summary="page for spark.gaussianMixture {SparkR}"><tr><td>spark.gaussianMixture {SparkR}</td><td align="right">R Documentation</td></tr></table>
+
+<h2>Multivariate Gaussian Mixture Model (GMM)</h2>
+
+<h3>Description</h3>
+
+<p>Fits a multivariate Gaussian mixture model against a SparkDataFrame, similarly to R's
+mvnormalmixEM(). Users can call <code>summary</code> to print a summary of the fitted model,
+<code>predict</code> to make predictions on new data, and <code>write.ml</code>/<code>read.ml</code>
+to save/load fitted models.
+</p>
+
+
+<h3>Usage</h3>
+
+<pre>
+spark.gaussianMixture(data, formula, ...)
+
+## S4 method for signature 'SparkDataFrame,formula'
+spark.gaussianMixture(data, formula, k = 2,
+  maxIter = 100, tol = 0.01)
+
+## S4 method for signature 'GaussianMixtureModel'
+summary(object)
+
+## S4 method for signature 'GaussianMixtureModel'
+predict(object, newData)
+
+## S4 method for signature 'GaussianMixtureModel,character'
+write.ml(object, path,
+  overwrite = FALSE)
+</pre>
+
+
+<h3>Arguments</h3>
+
+<table summary="R argblock">
+<tr valign="top"><td><code>data</code></td>
+<td>
+<p>a SparkDataFrame for training.</p>
+</td></tr>
+<tr valign="top"><td><code>formula</code></td>
+<td>
+<p>a symbolic description of the model to be fitted. Currently only a few formula
+operators are supported, including '~', '.', ':', '+', and '-'.
+Note that the response variable of formula is empty in spark.gaussianMixture.</p>
+</td></tr>
+<tr valign="top"><td><code>...</code></td>
+<td>
+<p>additional arguments passed to the method.</p>
+</td></tr>
+<tr valign="top"><td><code>k</code></td>
+<td>
+<p>number of independent Gaussians in the mixture model.</p>
+</td></tr>
+<tr valign="top"><td><code>maxIter</code></td>
+<td>
+<p>maximum iteration number.</p>
+</td></tr>
+<tr valign="top"><td><code>tol</code></td>
+<td>
+<p>the convergence tolerance.</p>
+</td></tr>
+<tr valign="top"><td><code>object</code></td>
+<td>
+<p>a fitted Gaussian mixture model.</p>
+</td></tr>
+<tr valign="top"><td><code>newData</code></td>
+<td>
+<p>a SparkDataFrame for testing.</p>
+</td></tr>
+<tr valign="top"><td><code>path</code></td>
+<td>
+<p>the directory where the model is saved.</p>
+</td></tr>
+<tr valign="top"><td><code>overwrite</code></td>
+<td>
+<p>logical value indicating whether to overwrite if the output path already exists.
+Default is FALSE, which means an exception is thrown if the output path exists.</p>
+</td></tr>
+</table>
+
+
+<h3>Value</h3>
+
+<p><code>spark.gaussianMixture</code> returns a fitted multivariate Gaussian mixture model.
+</p>
+<p><code>summary</code> returns a summary of the fitted model, which is a list.
+The list includes the model's <code>lambda</code> (mixing weights), <code>mu</code> (component means),
+<code>sigma</code> (component covariance matrices), <code>loglik</code> (log-likelihood), and <code>posterior</code> (posterior probabilities).
+</p>
+<p><code>predict</code> returns a SparkDataFrame containing predicted labels in a column named
+&quot;prediction&quot;.
+</p>
+
+
+<h3>Note</h3>
+
+<p>spark.gaussianMixture since 2.1.0
+</p>
+<p>summary(GaussianMixtureModel) since 2.1.0
+</p>
+<p>predict(GaussianMixtureModel) since 2.1.0
+</p>
+<p>write.ml(GaussianMixtureModel, character) since 2.1.0
+</p>
+
+
+<h3>See Also</h3>
+
+<p>mixtools: <a href="https://cran.r-project.org/package=mixtools">https://cran.r-project.org/package=mixtools</a>
+</p>
+<p><a href="predict.html">predict</a>, <a href="read.ml.html">read.ml</a>, <a href="write.ml.html">write.ml</a>
+</p>
+
+
+<h3>Examples</h3>
+
+<pre><code class="r">## Not run: 
+##D sparkR.session()
+##D library(mvtnorm)
+##D set.seed(100)
+##D a &lt;- rmvnorm(4, c(0, 0))
+##D b &lt;- rmvnorm(6, c(3, 4))
+##D data &lt;- rbind(a, b)
+##D df &lt;- createDataFrame(as.data.frame(data))
+##D model &lt;- spark.gaussianMixture(df, ~ V1 + V2, k = 2)
+##D summary(model)
+##D 
+##D # fitted values on training data
+##D fitted &lt;- predict(model, df)
+##D head(select(fitted, &quot;V1&quot;, &quot;prediction&quot;))
+##D 
+##D # save fitted model to input path
+##D path &lt;- &quot;path/to/model&quot;
+##D write.ml(model, path)
+##D 
+##D # can also read back the saved model and print
+##D savedModel &lt;- read.ml(path)
+##D summary(savedModel)
+## End(Not run)
+</code></pre>
+
+
+<hr><div align="center">[Package <em>SparkR</em> version 2.2.0 <a href="00Index.html">Index</a>]</div>
+</body></html>

http://git-wip-us.apache.org/repos/asf/spark-website/blob/f7ec1155/site/docs/2.2.0/api/R/spark.gbt.html
----------------------------------------------------------------------
diff --git a/site/docs/2.2.0/api/R/spark.gbt.html b/site/docs/2.2.0/api/R/spark.gbt.html
new file mode 100644
index 0000000..06befc6
--- /dev/null
+++ b/site/docs/2.2.0/api/R/spark.gbt.html
@@ -0,0 +1,246 @@
+<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
+<html><head><title>R: Gradient Boosted Tree Model for Regression and Classification</title>
+<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
+<link rel="stylesheet" type="text/css" href="R.css">
+
+<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/styles/github.min.css">
+<script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/highlight.min.js"></script>
+<script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/languages/r.min.js"></script>
+<script>hljs.initHighlightingOnLoad();</script>
+</head><body>
+
+<table width="100%" summary="page for spark.gbt {SparkR}"><tr><td>spark.gbt {SparkR}</td><td align="right">R Documentation</td></tr></table>
+
+<h2>Gradient Boosted Tree Model for Regression and Classification</h2>
+
+<h3>Description</h3>
+
+<p><code>spark.gbt</code> fits a Gradient Boosted Tree Regression model or Classification model on a
+SparkDataFrame. Users can call <code>summary</code> to get a summary of the fitted
+Gradient Boosted Tree model, <code>predict</code> to make predictions on new data, and
+<code>write.ml</code>/<code>read.ml</code> to save/load fitted models.
+For more details, see
+<a href="http://spark.apache.org/docs/latest/ml-classification-regression.html#gradient-boosted-tree-regression">
+GBT Regression</a> and
+<a href="http://spark.apache.org/docs/latest/ml-classification-regression.html#gradient-boosted-tree-classifier">
+GBT Classification</a>.
+</p>
+
+
+<h3>Usage</h3>
+
+<pre>
+spark.gbt(data, formula, ...)
+
+## S4 method for signature 'SparkDataFrame,formula'
+spark.gbt(data, formula,
+  type = c("regression", "classification"), maxDepth = 5, maxBins = 32,
+  maxIter = 20, stepSize = 0.1, lossType = NULL, seed = NULL,
+  subsamplingRate = 1, minInstancesPerNode = 1, minInfoGain = 0,
+  checkpointInterval = 10, maxMemoryInMB = 256, cacheNodeIds = FALSE)
+
+## S4 method for signature 'GBTRegressionModel'
+summary(object)
+
+## S3 method for class 'summary.GBTRegressionModel'
+print(x, ...)
+
+## S4 method for signature 'GBTClassificationModel'
+summary(object)
+
+## S3 method for class 'summary.GBTClassificationModel'
+print(x, ...)
+
+## S4 method for signature 'GBTRegressionModel'
+predict(object, newData)
+
+## S4 method for signature 'GBTClassificationModel'
+predict(object, newData)
+
+## S4 method for signature 'GBTRegressionModel,character'
+write.ml(object, path,
+  overwrite = FALSE)
+
+## S4 method for signature 'GBTClassificationModel,character'
+write.ml(object, path,
+  overwrite = FALSE)
+</pre>
+
+
+<h3>Arguments</h3>
+
+<table summary="R argblock">
+<tr valign="top"><td><code>data</code></td>
+<td>
+<p>a SparkDataFrame for training.</p>
+</td></tr>
+<tr valign="top"><td><code>formula</code></td>
+<td>
+<p>a symbolic description of the model to be fitted. Currently only a few formula
+operators are supported, including '~', ':', '+', and '-'.</p>
+</td></tr>
+<tr valign="top"><td><code>...</code></td>
+<td>
+<p>additional arguments passed to the method.</p>
+</td></tr>
+<tr valign="top"><td><code>type</code></td>
+<td>
+<p>type of model to fit, one of &quot;regression&quot; or &quot;classification&quot;.</p>
+</td></tr>
+<tr valign="top"><td><code>maxDepth</code></td>
+<td>
+<p>Maximum depth of the tree (&gt;= 0).</p>
+</td></tr>
+<tr valign="top"><td><code>maxBins</code></td>
+<td>
+<p>Maximum number of bins used for discretizing continuous features and for choosing
+how to split on features at each node. More bins give higher granularity. Must be
+&gt;= 2 and &gt;= number of categories in any categorical feature.</p>
+</td></tr>
+<tr valign="top"><td><code>maxIter</code></td>
+<td>
+<p>Maximum number of iterations (&gt;= 0).</p>
+</td></tr>
+<tr valign="top"><td><code>stepSize</code></td>
+<td>
+<p>Step size to be used for each iteration of optimization.</p>
+</td></tr>
+<tr valign="top"><td><code>lossType</code></td>
+<td>
+<p>Loss function which GBT tries to minimize.
+For classification, must be &quot;logistic&quot;. For regression, must be one of
+&quot;squared&quot; (L2) or &quot;absolute&quot; (L1); default is &quot;squared&quot;.</p>
+</td></tr>
+<tr valign="top"><td><code>seed</code></td>
+<td>
+<p>integer seed for random number generation.</p>
+</td></tr>
+<tr valign="top"><td><code>subsamplingRate</code></td>
+<td>
+<p>Fraction of the training data used for learning each decision tree, in
+range (0, 1].</p>
+</td></tr>
+<tr valign="top"><td><code>minInstancesPerNode</code></td>
+<td>
+<p>Minimum number of instances each child must have after a split. If a
+split causes the left or right child to have fewer than
+minInstancesPerNode, the split will be discarded as invalid. Should be
+&gt;= 1.</p>
+</td></tr>
+<tr valign="top"><td><code>minInfoGain</code></td>
+<td>
+<p>Minimum information gain for a split to be considered at a tree node.</p>
+</td></tr>
+<tr valign="top"><td><code>checkpointInterval</code></td>
+<td>
+<p>Checkpoint interval (&gt;= 1), or -1 to disable checkpointing.</p>
+</td></tr>
+<tr valign="top"><td><code>maxMemoryInMB</code></td>
+<td>
+<p>Maximum memory in MB allocated to histogram aggregation.</p>
+</td></tr>
+<tr valign="top"><td><code>cacheNodeIds</code></td>
+<td>
+<p>If FALSE, the algorithm will pass trees to executors to match instances with
+nodes. If TRUE, the algorithm will cache node IDs for each instance. Caching
+can speed up training of deeper trees. Users can set how often the
+cache should be checkpointed, or disable it, by setting checkpointInterval.</p>
+</td></tr>
+<tr valign="top"><td><code>object</code></td>
+<td>
+<p>A fitted Gradient Boosted Tree regression model or classification model.</p>
+</td></tr>
+<tr valign="top"><td><code>x</code></td>
+<td>
+<p>summary object of Gradient Boosted Tree regression model or classification model
+returned by <code>summary</code>.</p>
+</td></tr>
+<tr valign="top"><td><code>newData</code></td>
+<td>
+<p>a SparkDataFrame for testing.</p>
+</td></tr>
+<tr valign="top"><td><code>path</code></td>
+<td>
+<p>The directory where the model is saved.</p>
+</td></tr>
+<tr valign="top"><td><code>overwrite</code></td>
+<td>
+<p>Logical value indicating whether to overwrite if the output path already exists.
+Default is FALSE, which means an exception is thrown if the output path exists.</p>
+</td></tr>
+</table>
+
+
+<h3>Value</h3>
+
+<p><code>spark.gbt</code> returns a fitted Gradient Boosted Tree model.
+</p>
+<p><code>summary</code> returns summary information of the fitted model, which is a list.
+The list of components includes <code>formula</code> (formula),
+<code>numFeatures</code> (number of features), <code>features</code> (list of features),
+<code>featureImportances</code> (feature importances), <code>maxDepth</code> (max depth of trees),
+<code>numTrees</code> (number of trees), and <code>treeWeights</code> (tree weights).
+</p>
+<p><code>predict</code> returns a SparkDataFrame containing predicted labels in a column named
+&quot;prediction&quot;.
+</p>
+
+
+<h3>Note</h3>
+
+<p>spark.gbt since 2.1.0
+</p>
+<p>summary(GBTRegressionModel) since 2.1.0
+</p>
+<p>print.summary.GBTRegressionModel since 2.1.0
+</p>
+<p>summary(GBTClassificationModel) since 2.1.0
+</p>
+<p>print.summary.GBTClassificationModel since 2.1.0
+</p>
+<p>predict(GBTRegressionModel) since 2.1.0
+</p>
+<p>predict(GBTClassificationModel) since 2.1.0
+</p>
+<p>write.ml(GBTRegressionModel, character) since 2.1.0
+</p>
+<p>write.ml(GBTClassificationModel, character) since 2.1.0
+</p>
+
+
+<h3>Examples</h3>
+
+<pre><code class="r">## Not run: 
+##D # fit a Gradient Boosted Tree Regression Model
+##D df &lt;- createDataFrame(longley)
+##D model &lt;- spark.gbt(df, Employed ~ ., type = &quot;regression&quot;, maxDepth = 5, maxBins = 16)
+##D 
+##D # get the summary of the model
+##D summary(model)
+##D 
+##D # make predictions
+##D predictions &lt;- predict(model, df)
+##D 
+##D # save and load the model
+##D path &lt;- &quot;path/to/model&quot;
+##D write.ml(model, path)
+##D savedModel &lt;- read.ml(path)
+##D summary(savedModel)
+##D 
+##D # fit a Gradient Boosted Tree Classification Model
+##D # label must be binary - Only binary classification is supported for GBT.
+##D t &lt;- as.data.frame(Titanic)
+##D df &lt;- createDataFrame(t)
+##D model &lt;- spark.gbt(df, Survived ~ Age + Freq, &quot;classification&quot;)
+##D 
+##D # numeric label is also supported
+##D t2 &lt;- as.data.frame(Titanic)
+##D t2$NumericGender &lt;- ifelse(t2$Sex == &quot;Male&quot;, 0, 1)
+##D df &lt;- createDataFrame(t2)
+##D model &lt;- spark.gbt(df, NumericGender ~ ., type = &quot;classification&quot;)
+## End(Not run)
+</code></pre>
+
+
+<hr><div align="center">[Package <em>SparkR</em> version 2.2.0 <a href="00Index.html">Index</a>]</div>
+</body></html>

http://git-wip-us.apache.org/repos/asf/spark-website/blob/f7ec1155/site/docs/2.2.0/api/R/spark.getSparkFiles.html
----------------------------------------------------------------------
diff --git a/site/docs/2.2.0/api/R/spark.getSparkFiles.html b/site/docs/2.2.0/api/R/spark.getSparkFiles.html
new file mode 100644
index 0000000..142455e
--- /dev/null
+++ b/site/docs/2.2.0/api/R/spark.getSparkFiles.html
@@ -0,0 +1,60 @@
+<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
+<html><head><title>R: Get the absolute path of a file added through spark.addFile.</title>
+<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
+<link rel="stylesheet" type="text/css" href="R.css">
+
+<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/styles/github.min.css">
+<script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/highlight.min.js"></script>
+<script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/languages/r.min.js"></script>
+<script>hljs.initHighlightingOnLoad();</script>
+</head><body>
+
+<table width="100%" summary="page for spark.getSparkFiles {SparkR}"><tr><td>spark.getSparkFiles {SparkR}</td><td align="right">R Documentation</td></tr></table>
+
+<h2>Get the absolute path of a file added through spark.addFile.</h2>
+
+<h3>Description</h3>
+
+<p>Get the absolute path of a file added through spark.addFile.
+</p>
+
+
+<h3>Usage</h3>
+
+<pre>
+spark.getSparkFiles(fileName)
+</pre>
+
+
+<h3>Arguments</h3>
+
+<table summary="R argblock">
+<tr valign="top"><td><code>fileName</code></td>
+<td>
+<p>The name of the file added through spark.addFile.</p>
+</td></tr>
+</table>
+
+
+<h3>Value</h3>
+
+<p>the absolute path of a file added through spark.addFile.
+</p>
+
+
+<h3>Note</h3>
+
+<p>spark.getSparkFiles since 2.1.0
+</p>
+
+
+<h3>Examples</h3>
+
+<pre><code class="r">## Not run: 
+##D spark.getSparkFiles(&quot;myfile&quot;)
+## End(Not run)
+</code></pre>
+
+
+<hr><div align="center">[Package <em>SparkR</em> version 2.2.0 <a href="00Index.html">Index</a>]</div>
+</body></html>

http://git-wip-us.apache.org/repos/asf/spark-website/blob/f7ec1155/site/docs/2.2.0/api/R/spark.getSparkFilesRootDirectory.html
----------------------------------------------------------------------
diff --git a/site/docs/2.2.0/api/R/spark.getSparkFilesRootDirectory.html b/site/docs/2.2.0/api/R/spark.getSparkFilesRootDirectory.html
new file mode 100644
index 0000000..9868ad0
--- /dev/null
+++ b/site/docs/2.2.0/api/R/spark.getSparkFilesRootDirectory.html
@@ -0,0 +1,50 @@
+<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
+<html><head><title>R: Get the root directory that contains files added through...</title>
+<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
+<link rel="stylesheet" type="text/css" href="R.css">
+
+<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/styles/github.min.css">
+<script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/highlight.min.js"></script>
+<script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/languages/r.min.js"></script>
+<script>hljs.initHighlightingOnLoad();</script>
+</head><body>
+
+<table width="100%" summary="page for spark.getSparkFilesRootDirectory {SparkR}"><tr><td>spark.getSparkFilesRootDirectory {SparkR}</td><td align="right">R Documentation</td></tr></table>
+
+<h2>Get the root directory that contains files added through spark.addFile.</h2>
+
+<h3>Description</h3>
+
+<p>Get the root directory that contains files added through spark.addFile.
+</p>
+
+
+<h3>Usage</h3>
+
+<pre>
+spark.getSparkFilesRootDirectory()
+</pre>
+
+
+<h3>Value</h3>
+
+<p>the root directory that contains files added through spark.addFile.
+</p>
+
+
+<h3>Note</h3>
+
+<p>spark.getSparkFilesRootDirectory since 2.1.0
+</p>
+
+
+<h3>Examples</h3>
+
+<pre><code class="r">## Not run: 
+##D spark.getSparkFilesRootDirectory()
+## End(Not run)
+</code></pre>
+
+
+<hr><div align="center">[Package <em>SparkR</em> version 2.2.0 <a href="00Index.html">Index</a>]</div>
+</body></html>

http://git-wip-us.apache.org/repos/asf/spark-website/blob/f7ec1155/site/docs/2.2.0/api/R/spark.glm.html
----------------------------------------------------------------------
diff --git a/site/docs/2.2.0/api/R/spark.glm.html b/site/docs/2.2.0/api/R/spark.glm.html
new file mode 100644
index 0000000..9a1036e
--- /dev/null
+++ b/site/docs/2.2.0/api/R/spark.glm.html
@@ -0,0 +1,208 @@
+<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
+<html><head><title>R: Generalized Linear Models</title>
+<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
+<link rel="stylesheet" type="text/css" href="R.css">
+
+<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/styles/github.min.css">
+<script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/highlight.min.js"></script>
+<script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/languages/r.min.js"></script>
+<script>hljs.initHighlightingOnLoad();</script>
+</head><body>
+
+<table width="100%" summary="page for spark.glm {SparkR}"><tr><td>spark.glm {SparkR}</td><td align="right">R Documentation</td></tr></table>
+
+<h2>Generalized Linear Models</h2>
+
+<h3>Description</h3>
+
+<p>Fits a generalized linear model against a SparkDataFrame.
+Users can call <code>summary</code> to print a summary of the fitted model, <code>predict</code> to make
+predictions on new data, and <code>write.ml</code>/<code>read.ml</code> to save/load fitted models.
+</p>
+
+
+<h3>Usage</h3>
+
+<pre>
+spark.glm(data, formula, ...)
+
+## S4 method for signature 'SparkDataFrame,formula'
+spark.glm(data, formula, family = gaussian,
+  tol = 1e-06, maxIter = 25, weightCol = NULL, regParam = 0,
+  var.power = 0, link.power = 1 - var.power)
+
+## S4 method for signature 'GeneralizedLinearRegressionModel'
+summary(object)
+
+## S3 method for class 'summary.GeneralizedLinearRegressionModel'
+print(x, ...)
+
+## S4 method for signature 'GeneralizedLinearRegressionModel'
+predict(object, newData)
+
+## S4 method for signature 'GeneralizedLinearRegressionModel,character'
+write.ml(object, path,
+  overwrite = FALSE)
+</pre>
+
+
+<h3>Arguments</h3>
+
+<table summary="R argblock">
+<tr valign="top"><td><code>data</code></td>
+<td>
+<p>a SparkDataFrame for training.</p>
+</td></tr>
+<tr valign="top"><td><code>formula</code></td>
+<td>
+<p>a symbolic description of the model to be fitted. Currently only a few formula
+operators are supported, including '~', '.', ':', '+', and '-'.</p>
+</td></tr>
+<tr valign="top"><td><code>...</code></td>
+<td>
+<p>additional arguments passed to the method.</p>
+</td></tr>
+<tr valign="top"><td><code>family</code></td>
+<td>
+<p>a description of the error distribution and link function to be used in the model.
+This can be a character string naming a family function, a family function, or
+the result of a call to a family function. Refer to R's family documentation at
+<a href="https://stat.ethz.ch/R-manual/R-devel/library/stats/html/family.html">https://stat.ethz.ch/R-manual/R-devel/library/stats/html/family.html</a>.
+Currently these families are supported: <code>binomial</code>, <code>gaussian</code>,
+<code>Gamma</code>, <code>poisson</code> and <code>tweedie</code>.
+</p>
+<p>Note that there are two ways to specify the tweedie family.
+</p>
+
+<ul>
+<li><p> Set <code>family = "tweedie"</code> and specify <code>var.power</code> and <code>link.power</code>;
+</p>
+</li>
+<li><p> When package <code>statmod</code> is loaded, the tweedie family is specified using the
+family definition therein, i.e., <code>tweedie(var.power, link.power)</code>.
+</p>
+</li></ul>
+</td></tr>
+<tr valign="top"><td><code>tol</code></td>
+<td>
+<p>positive convergence tolerance of iterations.</p>
+</td></tr>
+<tr valign="top"><td><code>maxIter</code></td>
+<td>
+<p>integer giving the maximal number of IRLS iterations.</p>
+</td></tr>
+<tr valign="top"><td><code>weightCol</code></td>
+<td>
+<p>the weight column name. If this is not set or <code>NULL</code>, we treat all instance
+weights as 1.0.</p>
+</td></tr>
+<tr valign="top"><td><code>regParam</code></td>
+<td>
+<p>regularization parameter for L2 regularization.</p>
+</td></tr>
+<tr valign="top"><td><code>var.power</code></td>
+<td>
+<p>the power in the variance function of the Tweedie distribution, which provides
+the relationship between the variance and mean of the distribution. Only
+applicable to the Tweedie family.</p>
+</td></tr>
+<tr valign="top"><td><code>link.power</code></td>
+<td>
+<p>the index in the power link function. Only applicable to the Tweedie family.</p>
+</td></tr>
+<tr valign="top"><td><code>object</code></td>
+<td>
+<p>a fitted generalized linear model.</p>
+</td></tr>
+<tr valign="top"><td><code>x</code></td>
+<td>
+<p>summary object of fitted generalized linear model returned by <code>summary</code> function.</p>
+</td></tr>
+<tr valign="top"><td><code>newData</code></td>
+<td>
+<p>a SparkDataFrame for testing.</p>
+</td></tr>
+<tr valign="top"><td><code>path</code></td>
+<td>
+<p>the directory where the model is saved.</p>
+</td></tr>
+<tr valign="top"><td><code>overwrite</code></td>
+<td>
+<p>whether to overwrite the output path if it already exists. Default is FALSE,
+which means an exception is thrown if the output path exists.</p>
+</td></tr>
+</table>
+
+
+<h3>Value</h3>
+
+<p><code>spark.glm</code> returns a fitted generalized linear model.
+</p>
+<p><code>summary</code> returns summary information of the fitted model, which is a list.
+The list of components includes at least the <code>coefficients</code> (a coefficients matrix, which includes
+the estimates, standard errors, t values and p values of the coefficients),
+<code>null.deviance</code> (the null deviance), <code>aic</code> (AIC)
+and <code>iter</code> (the number of IRLS iterations). If there are collinear columns in the data,
+the coefficients matrix provides only the coefficient estimates.
+</p>
+<p><code>predict</code> returns a SparkDataFrame containing predicted labels in a column named
+&quot;prediction&quot;.
+</p>
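+
+<p>As a sketch of how the summary list is typically inspected (assuming <code>model</code>
+is a model fitted with <code>spark.glm</code>):</p>
+
+<pre><code class="r">s &lt;- summary(model)
+s$coefficients  # estimates, standard errors, t values, p values
+s$aic           # Akaike information criterion
+s$iter          # number of IRLS iterations
+</code></pre>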
+
+
+<h3>Note</h3>
+
+<p>spark.glm since 2.0.0
+</p>
+<p>summary(GeneralizedLinearRegressionModel) since 2.0.0
+</p>
+<p>print.summary.GeneralizedLinearRegressionModel since 2.0.0
+</p>
+<p>predict(GeneralizedLinearRegressionModel) since 1.5.0
+</p>
+<p>write.ml(GeneralizedLinearRegressionModel, character) since 2.0.0
+</p>
+
+
+<h3>See Also</h3>
+
+<p><a href="glm.html">glm</a>, <a href="read.ml.html">read.ml</a>
+</p>
+
+
+<h3>Examples</h3>
+
+<pre><code class="r">## Not run: 
+##D sparkR.session()
+##D t &lt;- as.data.frame(Titanic)
+##D df &lt;- createDataFrame(t)
+##D model &lt;- spark.glm(df, Freq ~ Sex + Age, family = &quot;gaussian&quot;)
+##D summary(model)
+##D 
+##D # fitted values on training data
+##D fitted &lt;- predict(model, df)
+##D head(select(fitted, &quot;Freq&quot;, &quot;prediction&quot;))
+##D 
+##D # save fitted model to input path
+##D path &lt;- &quot;path/to/model&quot;
+##D write.ml(model, path)
+##D 
+##D # can also read back the saved model and print
+##D savedModel &lt;- read.ml(path)
+##D summary(savedModel)
+##D 
+##D # fit tweedie model
+##D model &lt;- spark.glm(df, Freq ~ Sex + Age, family = &quot;tweedie&quot;,
+##D                    var.power = 1.2, link.power = 0)
+##D summary(model)
+##D 
+##D # use the tweedie family from statmod
+##D library(statmod)
+##D model &lt;- spark.glm(df, Freq ~ Sex + Age, family = tweedie(1.2, 0))
+##D summary(model)
+## End(Not run)
+</code></pre>
+
+
+<hr><div align="center">[Package <em>SparkR</em> version 2.2.0 <a href="00Index.html">Index</a>]</div>
+</body></html>

http://git-wip-us.apache.org/repos/asf/spark-website/blob/f7ec1155/site/docs/2.2.0/api/R/spark.isoreg.html
----------------------------------------------------------------------
diff --git a/site/docs/2.2.0/api/R/spark.isoreg.html b/site/docs/2.2.0/api/R/spark.isoreg.html
new file mode 100644
index 0000000..2b6c0e4
--- /dev/null
+++ b/site/docs/2.2.0/api/R/spark.isoreg.html
@@ -0,0 +1,147 @@
+<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
+<html><head><title>R: Isotonic Regression Model</title>
+<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
+<link rel="stylesheet" type="text/css" href="R.css">
+
+<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/styles/github.min.css">
+<script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/highlight.min.js"></script>
+<script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/languages/r.min.js"></script>
+<script>hljs.initHighlightingOnLoad();</script>
+</head><body>
+
+<table width="100%" summary="page for spark.isoreg {SparkR}"><tr><td>spark.isoreg {SparkR}</td><td align="right">R Documentation</td></tr></table>
+
+<h2>Isotonic Regression Model</h2>
+
+<h3>Description</h3>
+
+<p>Fits an Isotonic Regression model against a SparkDataFrame, similarly to R's isoreg().
+Users can print the model, make predictions with it, and save the model to a given path.
+</p>
+
+
+<h3>Usage</h3>
+
+<pre>
+spark.isoreg(data, formula, ...)
+
+## S4 method for signature 'SparkDataFrame,formula'
+spark.isoreg(data, formula,
+  isotonic = TRUE, featureIndex = 0, weightCol = NULL)
+
+## S4 method for signature 'IsotonicRegressionModel'
+summary(object)
+
+## S4 method for signature 'IsotonicRegressionModel'
+predict(object, newData)
+
+## S4 method for signature 'IsotonicRegressionModel,character'
+write.ml(object, path,
+  overwrite = FALSE)
+</pre>
+
+
+<h3>Arguments</h3>
+
+<table summary="R argblock">
+<tr valign="top"><td><code>data</code></td>
+<td>
+<p>SparkDataFrame for training.</p>
+</td></tr>
+<tr valign="top"><td><code>formula</code></td>
+<td>
+<p>A symbolic description of the model to be fitted. Currently only a few formula
+operators are supported, including '~', '.', ':', '+', and '-'.</p>
+</td></tr>
+<tr valign="top"><td><code>...</code></td>
+<td>
+<p>additional arguments passed to the method.</p>
+</td></tr>
+<tr valign="top"><td><code>isotonic</code></td>
+<td>
+<p>Whether the output sequence should be isotonic/increasing (TRUE) or
+antitonic/decreasing (FALSE).</p>
+</td></tr>
+<tr valign="top"><td><code>featureIndex</code></td>
+<td>
+<p>The index of the feature if <code>featuresCol</code> is a vector column
+(default: 0), no effect otherwise.</p>
+</td></tr>
+<tr valign="top"><td><code>weightCol</code></td>
+<td>
+<p>The weight column name.</p>
+</td></tr>
+<tr valign="top"><td><code>object</code></td>
+<td>
+<p>a fitted IsotonicRegressionModel.</p>
+</td></tr>
+<tr valign="top"><td><code>newData</code></td>
+<td>
+<p>SparkDataFrame for testing.</p>
+</td></tr>
+<tr valign="top"><td><code>path</code></td>
+<td>
+<p>The directory where the model is saved.</p>
+</td></tr>
+<tr valign="top"><td><code>overwrite</code></td>
+<td>
+<p>Whether to overwrite the output path if it already exists. Default is FALSE,
+which means an exception is thrown if the output path exists.</p>
+</td></tr>
+</table>
+
+
+<h3>Value</h3>
+
+<p><code>spark.isoreg</code> returns a fitted Isotonic Regression model.
+</p>
+<p><code>summary</code> returns summary information of the fitted model, which is a list.
+The list includes model's <code>boundaries</code> (boundaries in increasing order)
+and <code>predictions</code> (predictions associated with the boundaries at the same index).
+</p>
+<p><code>predict</code> returns a SparkDataFrame containing predicted values.
+</p>
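+
+<p>As a sketch of accessing the summary components (assuming <code>model</code> is a
+fitted isotonic regression model):</p>
+
+<pre><code class="r">s &lt;- summary(model)
+s$boundaries   # boundaries in increasing order
+s$predictions  # predictions at the corresponding boundaries
+</code></pre>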
+
+
+<h3>Note</h3>
+
+<p>spark.isoreg since 2.1.0
+</p>
+<p>summary(IsotonicRegressionModel) since 2.1.0
+</p>
+<p>predict(IsotonicRegressionModel) since 2.1.0
+</p>
+<p>write.ml(IsotonicRegression, character) since 2.1.0
+</p>
+
+
+<h3>Examples</h3>
+
+<pre><code class="r">## Not run: 
+##D sparkR.session()
+##D data &lt;- list(list(7.0, 0.0), list(5.0, 1.0), list(3.0, 2.0),
+##D         list(5.0, 3.0), list(1.0, 4.0))
+##D df &lt;- createDataFrame(data, c(&quot;label&quot;, &quot;feature&quot;))
+##D model &lt;- spark.isoreg(df, label ~ feature, isotonic = FALSE)
+##D # return model boundaries and predictions as lists
+##D result &lt;- summary(model)
+##D # prediction based on fitted model
+##D predict_data &lt;- list(list(-2.0), list(-1.0), list(0.5),
+##D                 list(0.75), list(1.0), list(2.0), list(9.0))
+##D predict_df &lt;- createDataFrame(predict_data, c(&quot;feature&quot;))
+##D # get prediction column
+##D predict_result &lt;- collect(select(predict(model, predict_df), &quot;prediction&quot;))
+##D 
+##D # save fitted model to input path
+##D path &lt;- &quot;path/to/model&quot;
+##D write.ml(model, path)
+##D 
+##D # can also read back the saved model and print
+##D savedModel &lt;- read.ml(path)
+##D summary(savedModel)
+## End(Not run)
+</code></pre>
+
+
+<hr><div align="center">[Package <em>SparkR</em> version 2.2.0 <a href="00Index.html">Index</a>]</div>
+</body></html>

http://git-wip-us.apache.org/repos/asf/spark-website/blob/f7ec1155/site/docs/2.2.0/api/R/spark.kmeans.html
----------------------------------------------------------------------
diff --git a/site/docs/2.2.0/api/R/spark.kmeans.html b/site/docs/2.2.0/api/R/spark.kmeans.html
new file mode 100644
index 0000000..0d8276a
--- /dev/null
+++ b/site/docs/2.2.0/api/R/spark.kmeans.html
@@ -0,0 +1,167 @@
+<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
+<html><head><title>R: K-Means Clustering Model</title>
+<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
+<link rel="stylesheet" type="text/css" href="R.css">
+
+<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/styles/github.min.css">
+<script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/highlight.min.js"></script>
+<script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/languages/r.min.js"></script>
+<script>hljs.initHighlightingOnLoad();</script>
+</head><body>
+
+<table width="100%" summary="page for spark.kmeans {SparkR}"><tr><td>spark.kmeans {SparkR}</td><td align="right">R Documentation</td></tr></table>
+
+<h2>K-Means Clustering Model</h2>
+
+<h3>Description</h3>
+
+<p>Fits a k-means clustering model against a SparkDataFrame, similarly to R's kmeans().
+Users can call <code>summary</code> to print a summary of the fitted model, <code>predict</code> to make
+predictions on new data, and <code>write.ml</code>/<code>read.ml</code> to save/load fitted models.
+</p>
+
+
+<h3>Usage</h3>
+
+<pre>
+spark.kmeans(data, formula, ...)
+
+## S4 method for signature 'SparkDataFrame,formula'
+spark.kmeans(data, formula, k = 2,
+  maxIter = 20, initMode = c("k-means||", "random"), seed = NULL,
+  initSteps = 2, tol = 1e-04)
+
+## S4 method for signature 'KMeansModel'
+summary(object)
+
+## S4 method for signature 'KMeansModel'
+predict(object, newData)
+
+## S4 method for signature 'KMeansModel,character'
+write.ml(object, path, overwrite = FALSE)
+</pre>
+
+
+<h3>Arguments</h3>
+
+<table summary="R argblock">
+<tr valign="top"><td><code>data</code></td>
+<td>
+<p>a SparkDataFrame for training.</p>
+</td></tr>
+<tr valign="top"><td><code>formula</code></td>
+<td>
+<p>a symbolic description of the model to be fitted. Currently only a few formula
+operators are supported, including '~', '.', ':', '+', and '-'.
+Note that the response variable of formula is empty in spark.kmeans.</p>
+</td></tr>
+<tr valign="top"><td><code>...</code></td>
+<td>
+<p>additional argument(s) passed to the method.</p>
+</td></tr>
+<tr valign="top"><td><code>k</code></td>
+<td>
+<p>number of centers.</p>
+</td></tr>
+<tr valign="top"><td><code>maxIter</code></td>
+<td>
+<p>maximum iteration number.</p>
+</td></tr>
+<tr valign="top"><td><code>initMode</code></td>
+<td>
+<p>the initialization algorithm chosen to fit the model.</p>
+</td></tr>
+<tr valign="top"><td><code>seed</code></td>
+<td>
+<p>the random seed for cluster initialization.</p>
+</td></tr>
+<tr valign="top"><td><code>initSteps</code></td>
+<td>
+<p>the number of steps for the k-means|| initialization mode.
+This is an advanced setting; the default of 2 is almost always sufficient. Must be &gt; 0.</p>
+</td></tr>
+<tr valign="top"><td><code>tol</code></td>
+<td>
+<p>convergence tolerance of iterations.</p>
+</td></tr>
+<tr valign="top"><td><code>object</code></td>
+<td>
+<p>a fitted k-means model.</p>
+</td></tr>
+<tr valign="top"><td><code>newData</code></td>
+<td>
+<p>a SparkDataFrame for testing.</p>
+</td></tr>
+<tr valign="top"><td><code>path</code></td>
+<td>
+<p>the directory where the model is saved.</p>
+</td></tr>
+<tr valign="top"><td><code>overwrite</code></td>
+<td>
+<p>whether to overwrite the output path if it already exists. Default is FALSE,
+which means an exception is thrown if the output path exists.</p>
+</td></tr>
+</table>
+
+
+<h3>Value</h3>
+
+<p><code>spark.kmeans</code> returns a fitted k-means model.
+</p>
+<p><code>summary</code> returns summary information of the fitted model, which is a list.
+The list includes the model's <code>k</code> (the configured number of cluster centers),
+<code>coefficients</code> (model cluster centers),
+<code>size</code> (number of data points in each cluster), <code>cluster</code>
+(cluster centers of the transformed data), <code>is.loaded</code> (whether the model is loaded
+from a saved file), and <code>clusterSize</code>
+(the actual number of cluster centers; when using initMode = &quot;random&quot;,
+<code>clusterSize</code> may not equal <code>k</code>).
+</p>
+<p><code>predict</code> returns the predicted values based on a k-means model.
+</p>
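+
+<p>As a sketch of inspecting a fitted model (assuming <code>model</code> is a fitted
+k-means model):</p>
+
+<pre><code class="r">s &lt;- summary(model)
+s$k             # configured number of centers
+s$coefficients  # fitted cluster centers
+s$size          # number of points in each cluster
+</code></pre>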
+
+
+<h3>Note</h3>
+
+<p>spark.kmeans since 2.0.0
+</p>
+<p>summary(KMeansModel) since 2.0.0
+</p>
+<p>predict(KMeansModel) since 2.0.0
+</p>
+<p>write.ml(KMeansModel, character) since 2.0.0
+</p>
+
+
+<h3>See Also</h3>
+
+<p><a href="predict.html">predict</a>, <a href="read.ml.html">read.ml</a>, <a href="write.ml.html">write.ml</a>
+</p>
+
+
+<h3>Examples</h3>
+
+<pre><code class="r">## Not run: 
+##D sparkR.session()
+##D t &lt;- as.data.frame(Titanic)
+##D df &lt;- createDataFrame(t)
+##D model &lt;- spark.kmeans(df, Class ~ Survived, k = 4, initMode = &quot;random&quot;)
+##D summary(model)
+##D 
+##D # fitted values on training data
+##D fitted &lt;- predict(model, df)
+##D head(select(fitted, &quot;Class&quot;, &quot;prediction&quot;))
+##D 
+##D # save fitted model to input path
+##D path &lt;- &quot;path/to/model&quot;
+##D write.ml(model, path)
+##D 
+##D # can also read back the saved model and print
+##D savedModel &lt;- read.ml(path)
+##D summary(savedModel)
+## End(Not run)
+</code></pre>
+
+
+<hr><div align="center">[Package <em>SparkR</em> version 2.2.0 <a href="00Index.html">Index</a>]</div>
+</body></html>

http://git-wip-us.apache.org/repos/asf/spark-website/blob/f7ec1155/site/docs/2.2.0/api/R/spark.kstest.html
----------------------------------------------------------------------
diff --git a/site/docs/2.2.0/api/R/spark.kstest.html b/site/docs/2.2.0/api/R/spark.kstest.html
new file mode 100644
index 0000000..4cb9b99
--- /dev/null
+++ b/site/docs/2.2.0/api/R/spark.kstest.html
@@ -0,0 +1,131 @@
+<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
+<html><head><title>R: (One-Sample) Kolmogorov-Smirnov Test</title>
+<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
+<link rel="stylesheet" type="text/css" href="R.css">
+
+<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/styles/github.min.css">
+<script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/highlight.min.js"></script>
+<script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/languages/r.min.js"></script>
+<script>hljs.initHighlightingOnLoad();</script>
+</head><body>
+
+<table width="100%" summary="page for spark.kstest {SparkR}"><tr><td>spark.kstest {SparkR}</td><td align="right">R Documentation</td></tr></table>
+
+<h2>(One-Sample) Kolmogorov-Smirnov Test</h2>
+
+<h3>Description</h3>
+
+<p><code>spark.kstest</code> conducts the two-sided Kolmogorov-Smirnov (KS) test for data sampled from a
+continuous distribution.
+</p>
+<p>By comparing the largest difference between the empirical cumulative
+distribution of the sample data and the theoretical distribution, we can test
+the null hypothesis that the sample data comes from that theoretical distribution.
+</p>
+<p>Users can call <code>summary</code> to obtain a summary of the test, and <code>print.summary.KSTest</code>
+to print out a summary result.
+</p>
+
+
+<h3>Usage</h3>
+
+<pre>
+spark.kstest(data, ...)
+
+## S4 method for signature 'SparkDataFrame'
+spark.kstest(data, testCol = "test",
+  nullHypothesis = c("norm"), distParams = c(0, 1))
+
+## S4 method for signature 'KSTest'
+summary(object)
+
+## S3 method for class 'summary.KSTest'
+print(x, ...)
+</pre>
+
+
+<h3>Arguments</h3>
+
+<table summary="R argblock">
+<tr valign="top"><td><code>data</code></td>
+<td>
+<p>a SparkDataFrame of user data.</p>
+</td></tr>
+<tr valign="top"><td><code>...</code></td>
+<td>
+<p>additional argument(s) passed to the method.</p>
+</td></tr>
+<tr valign="top"><td><code>testCol</code></td>
+<td>
+<p>name of the column containing the test data. It should be a column of double type.</p>
+</td></tr>
+<tr valign="top"><td><code>nullHypothesis</code></td>
+<td>
+<p>name of the theoretical distribution tested against. Currently only
+<code>"norm"</code> for normal distribution is supported.</p>
+</td></tr>
+<tr valign="top"><td><code>distParams</code></td>
+<td>
+<p>parameter(s) of the distribution. For <code>nullHypothesis = "norm"</code>,
+the mean and standard deviation of the distribution can be provided as a vector.
+If none is provided, the standard normal distribution is used.
+If only one value is provided, the standard deviation is set to one.</p>
+</td></tr>
+<tr valign="top"><td><code>object</code></td>
+<td>
+<p>test result object of KSTest by <code>spark.kstest</code>.</p>
+</td></tr>
+<tr valign="top"><td><code>x</code></td>
+<td>
+<p>summary object of KSTest returned by <code>summary</code>.</p>
+</td></tr>
+</table>
+
+
+<h3>Value</h3>
+
+<p><code>spark.kstest</code> returns a test result object.
+</p>
+<p><code>summary</code> returns summary information of KSTest object, which is a list.
+The list includes the <code>p.value</code> (p-value), <code>statistic</code> (test statistic
+computed for the test), <code>nullHypothesis</code> (the null hypothesis with its
+parameters tested against) and <code>degreesOfFreedom</code> (degrees of freedom of the test).
+</p>
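+
+<p>As a sketch of reading the components of the test summary (assuming <code>test</code>
+was returned by <code>spark.kstest</code>):</p>
+
+<pre><code class="r">testSummary &lt;- summary(test)
+testSummary$p.value    # p-value of the test
+testSummary$statistic  # KS test statistic
+</code></pre>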
+
+
+<h3>Note</h3>
+
+<p>spark.kstest since 2.1.0
+</p>
+<p>summary(KSTest) since 2.1.0
+</p>
+<p>print.summary.KSTest since 2.1.0
+</p>
+
+
+<h3>See Also</h3>
+
+<p><a href="http://spark.apache.org/docs/latest/mllib-statistics.html#hypothesis-testing">
+MLlib: Hypothesis Testing</a>
+</p>
+
+
+<h3>Examples</h3>
+
+<pre><code class="r">## Not run: 
+##D data &lt;- data.frame(test = c(0.1, 0.15, 0.2, 0.3, 0.25))
+##D df &lt;- createDataFrame(data)
+##D test &lt;- spark.kstest(df, &quot;test&quot;, &quot;norm&quot;, c(0, 1))
+##D 
+##D # get a summary of the test result
+##D testSummary &lt;- summary(test)
+##D testSummary
+##D 
+##D # print out the summary in an organized way
+##D print.summary.KSTest(testSummary)
+## End(Not run)
+</code></pre>
+
+
+<hr><div align="center">[Package <em>SparkR</em> version 2.2.0 <a href="00Index.html">Index</a>]</div>
+</body></html>

http://git-wip-us.apache.org/repos/asf/spark-website/blob/f7ec1155/site/docs/2.2.0/api/R/spark.lapply.html
----------------------------------------------------------------------
diff --git a/site/docs/2.2.0/api/R/spark.lapply.html b/site/docs/2.2.0/api/R/spark.lapply.html
new file mode 100644
index 0000000..ee1aab9
--- /dev/null
+++ b/site/docs/2.2.0/api/R/spark.lapply.html
@@ -0,0 +1,96 @@
+<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
+<html><head><title>R: Run a function over a list of elements, distributing the...</title>
+<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
+<link rel="stylesheet" type="text/css" href="R.css">
+
+<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/styles/github.min.css">
+<script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/highlight.min.js"></script>
+<script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/languages/r.min.js"></script>
+<script>hljs.initHighlightingOnLoad();</script>
+</head><body>
+
+<table width="100%" summary="page for spark.lapply {SparkR}"><tr><td>spark.lapply {SparkR}</td><td align="right">R Documentation</td></tr></table>
+
+<h2>Run a function over a list of elements, distributing the computations with Spark</h2>
+
+<h3>Description</h3>
+
+<p>Run a function over a list of elements, distributing the computations with Spark. Applies a
+function to the elements of a list in a manner similar to doParallel or lapply,
+with the computations distributed using Spark. It is conceptually equivalent to
+<code>lapply(list, func)</code>.
+</p>
+
+
+<h3>Usage</h3>
+
+<pre>
+spark.lapply(list, func)
+</pre>
+
+
+<h3>Arguments</h3>
+
+<table summary="R argblock">
+<tr valign="top"><td><code>list</code></td>
+<td>
+<p>the list of elements.</p>
+</td></tr>
+<tr valign="top"><td><code>func</code></td>
+<td>
+<p>a function that takes one argument.</p>
+</td></tr>
+</table>
+
+
+<h3>Details</h3>
+
+<p>Known limitations:
+</p>
+
+<ul>
+<li><p> variable scoping and capture: compared to R's rich support for variable resolution,
+the distributed nature of SparkR limits how variables are resolved at runtime. All the
+variables that are available through lexical scoping are embedded in the closure of the
+function and available as read-only variables within the function. Environment variables
+should be stored in temporary variables outside the function, and not accessed directly
+within the function.
+</p>
+</li>
+<li><p> loading external packages: in order to use a package, you need to load it inside the
+closure. For example, if you rely on the MASS package, here is how you would use it
+(a usage sketch follows this list):
+</p>
+<PRE>
+    train &lt;- function(hyperparam) {
+      # attach the package inside the closure so it is loaded on the workers
+      library(MASS)
+      model &lt;- lm.ridge(y ~ x + z, data, lambda = hyperparam)
+      model
+    }
+  </PRE>
+</li></ul>
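+
+<p>Given a function like <code>train</code> above, a hyperparameter sweep can then be
+distributed as in the following sketch (the lambda values are hypothetical, and
+<code>data</code> is assumed to be captured through the closure):</p>
+
+<pre><code class="r">lambdas &lt;- c(0.01, 0.1, 1)
+models &lt;- spark.lapply(lambdas, train)
+</code></pre>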
+
+
+
+<h3>Value</h3>
+
+<p>a list of results (the exact type being determined by the function)
+</p>
+
+
+<h3>Note</h3>
+
+<p>spark.lapply since 2.0.0
+</p>
+
+
+<h3>Examples</h3>
+
+<pre><code class="r">## Not run: 
+##D sparkR.session()
+##D doubled &lt;- spark.lapply(1:10, function(x){2 * x})
+## End(Not run)
+</code></pre>
+
+
+<hr><div align="center">[Package <em>SparkR</em> version 2.2.0 <a href="00Index.html">Index</a>]</div>
+</body></html>


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@spark.apache.org
For additional commands, e-mail: commits-help@spark.apache.org