Posted to commits@spark.apache.org by tg...@apache.org on 2018/07/06 14:53:29 UTC

[36/51] [partial] spark-website git commit: Spark 2.2.2 docs

http://git-wip-us.apache.org/repos/asf/spark-website/blob/e1001463/site/docs/2.2.2/api/R/spark.randomForest.html
----------------------------------------------------------------------
diff --git a/site/docs/2.2.2/api/R/spark.randomForest.html b/site/docs/2.2.2/api/R/spark.randomForest.html
new file mode 100644
index 0000000..5d43b44
--- /dev/null
+++ b/site/docs/2.2.2/api/R/spark.randomForest.html
@@ -0,0 +1,237 @@
+<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"><html xmlns="http://www.w3.org/1999/xhtml"><head><title>R: Random Forest Model for Regression and Classification</title>
+<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
+<link rel="stylesheet" type="text/css" href="R.css" />
+
+<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/styles/github.min.css">
+<script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/highlight.min.js"></script>
+<script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/languages/r.min.js"></script>
+<script>hljs.initHighlightingOnLoad();</script>
+</head><body>
+
+<table width="100%" summary="page for spark.randomForest {SparkR}"><tr><td>spark.randomForest {SparkR}</td><td style="text-align: right;">R Documentation</td></tr></table>
+
+<h2>Random Forest Model for Regression and Classification</h2>
+
+<h3>Description</h3>
+
+<p><code>spark.randomForest</code> fits a Random Forest Regression model or Classification model on
+a SparkDataFrame. Users can call <code>summary</code> to get a summary of the fitted Random Forest
+model, <code>predict</code> to make predictions on new data, and <code>write.ml</code>/<code>read.ml</code> to
+save/load fitted models.
+For more details, see
+<a href="http://spark.apache.org/docs/latest/ml-classification-regression.html#random-forest-regression">
+Random Forest Regression</a> and
+<a href="http://spark.apache.org/docs/latest/ml-classification-regression.html#random-forest-classifier">
+Random Forest Classification</a>
+</p>
+
+
+<h3>Usage</h3>
+
+<pre>
+spark.randomForest(data, formula, ...)
+
+## S4 method for signature 'SparkDataFrame,formula'
+spark.randomForest(data, formula,
+  type = c("regression", "classification"), maxDepth = 5, maxBins = 32,
+  numTrees = 20, impurity = NULL, featureSubsetStrategy = "auto",
+  seed = NULL, subsamplingRate = 1, minInstancesPerNode = 1,
+  minInfoGain = 0, checkpointInterval = 10, maxMemoryInMB = 256,
+  cacheNodeIds = FALSE)
+
+## S4 method for signature 'RandomForestRegressionModel'
+summary(object)
+
+## S3 method for class 'summary.RandomForestRegressionModel'
+print(x, ...)
+
+## S4 method for signature 'RandomForestClassificationModel'
+summary(object)
+
+## S3 method for class 'summary.RandomForestClassificationModel'
+print(x, ...)
+
+## S4 method for signature 'RandomForestRegressionModel'
+predict(object, newData)
+
+## S4 method for signature 'RandomForestClassificationModel'
+predict(object, newData)
+
+## S4 method for signature 'RandomForestRegressionModel,character'
+write.ml(object, path,
+  overwrite = FALSE)
+
+## S4 method for signature 'RandomForestClassificationModel,character'
+write.ml(object, path,
+  overwrite = FALSE)
+</pre>
+
+
+<h3>Arguments</h3>
+
+<table summary="R argblock">
+<tr valign="top"><td><code>data</code></td>
+<td>
+<p>a SparkDataFrame for training.</p>
+</td></tr>
+<tr valign="top"><td><code>formula</code></td>
+<td>
+<p>a symbolic description of the model to be fitted. Currently only a few formula
+operators are supported, including '~', ':', '+', and '-'.</p>
+</td></tr>
+<tr valign="top"><td><code>...</code></td>
+<td>
+<p>additional arguments passed to the method.</p>
+</td></tr>
+<tr valign="top"><td><code>type</code></td>
+<td>
+<p>type of model, one of &quot;regression&quot; or &quot;classification&quot;, to fit</p>
+</td></tr>
+<tr valign="top"><td><code>maxDepth</code></td>
+<td>
+<p>Maximum depth of the tree (&gt;= 0).</p>
+</td></tr>
+<tr valign="top"><td><code>maxBins</code></td>
+<td>
+<p>Maximum number of bins used for discretizing continuous features and for choosing
+how to split on features at each node. More bins give higher granularity. Must be
+&gt;= 2 and &gt;= number of categories in any categorical feature.</p>
+</td></tr>
+<tr valign="top"><td><code>numTrees</code></td>
+<td>
+<p>Number of trees to train (&gt;= 1).</p>
+</td></tr>
+<tr valign="top"><td><code>impurity</code></td>
+<td>
+<p>Criterion used for information gain calculation.
+For regression, must be &quot;variance&quot;. For classification, must be one of
+&quot;entropy&quot; or &quot;gini&quot;; the default is &quot;gini&quot;.</p>
+</td></tr>
+<tr valign="top"><td><code>featureSubsetStrategy</code></td>
+<td>
+<p>The number of features to consider for splits at each tree node.
+Supported options: &quot;auto&quot;, &quot;all&quot;, &quot;onethird&quot;, &quot;sqrt&quot;, &quot;log2&quot;, (0.0-1.0], [1-n].</p>
+</td></tr>
+<tr valign="top"><td><code>seed</code></td>
+<td>
+<p>integer seed for random number generation.</p>
+</td></tr>
+<tr valign="top"><td><code>subsamplingRate</code></td>
+<td>
+<p>Fraction of the training data used for learning each decision tree, in
+range (0, 1].</p>
+</td></tr>
+<tr valign="top"><td><code>minInstancesPerNode</code></td>
+<td>
+<p>Minimum number of instances each child must have after split.</p>
+</td></tr>
+<tr valign="top"><td><code>minInfoGain</code></td>
+<td>
+<p>Minimum information gain for a split to be considered at a tree node.</p>
+</td></tr>
+<tr valign="top"><td><code>checkpointInterval</code></td>
+<td>
+<p>Param for setting the checkpoint interval (&gt;= 1) or disabling checkpointing (-1).</p>
+</td></tr>
+<tr valign="top"><td><code>maxMemoryInMB</code></td>
+<td>
+<p>Maximum memory in MB allocated to histogram aggregation.</p>
+</td></tr>
+<tr valign="top"><td><code>cacheNodeIds</code></td>
+<td>
+<p>If FALSE, the algorithm will pass trees to executors to match instances with
+nodes. If TRUE, the algorithm will cache node IDs for each instance. Caching
+can speed up training of deeper trees. Users can set how often the cache
+should be checkpointed, or disable it, by setting checkpointInterval.</p>
+</td></tr>
+<tr valign="top"><td><code>object</code></td>
+<td>
+<p>A fitted Random Forest regression model or classification model.</p>
+</td></tr>
+<tr valign="top"><td><code>x</code></td>
+<td>
+<p>summary object of Random Forest regression model or classification model
+returned by <code>summary</code>.</p>
+</td></tr>
+<tr valign="top"><td><code>newData</code></td>
+<td>
+<p>a SparkDataFrame for testing.</p>
+</td></tr>
+<tr valign="top"><td><code>path</code></td>
+<td>
+<p>The directory where the model is saved.</p>
+</td></tr>
+<tr valign="top"><td><code>overwrite</code></td>
+<td>
+<p>Whether to overwrite if the output path already exists. Default is FALSE,
+which means an exception is thrown if the output path exists.</p>
+</td></tr>
+</table>
+
+
+<h3>Value</h3>
+
+<p><code>spark.randomForest</code> returns a fitted Random Forest model.
+</p>
+<p><code>summary</code> returns summary information of the fitted model, which is a list.
+The list of components includes <code>formula</code> (formula),
+<code>numFeatures</code> (number of features), <code>features</code> (list of features),
+<code>featureImportances</code> (feature importances), <code>maxDepth</code> (max depth of trees),
+<code>numTrees</code> (number of trees), and <code>treeWeights</code> (tree weights).
+</p>
+<p><code>predict</code> returns a SparkDataFrame containing predicted labels in a column named
+&quot;prediction&quot;.
+</p>
+
+
+<h3>Note</h3>
+
+<p>spark.randomForest since 2.1.0
+</p>
+<p>summary(RandomForestRegressionModel) since 2.1.0
+</p>
+<p>print.summary.RandomForestRegressionModel since 2.1.0
+</p>
+<p>summary(RandomForestClassificationModel) since 2.1.0
+</p>
+<p>print.summary.RandomForestClassificationModel since 2.1.0
+</p>
+<p>predict(RandomForestRegressionModel) since 2.1.0
+</p>
+<p>predict(RandomForestClassificationModel) since 2.1.0
+</p>
+<p>write.ml(RandomForestRegressionModel, character) since 2.1.0
+</p>
+<p>write.ml(RandomForestClassificationModel, character) since 2.1.0
+</p>
+
+
+<h3>Examples</h3>
+
+<pre><code class="r">## Not run: 
+##D # fit a Random Forest Regression Model
+##D df &lt;- createDataFrame(longley)
+##D model &lt;- spark.randomForest(df, Employed ~ ., type = &quot;regression&quot;, maxDepth = 5, maxBins = 16)
+##D 
+##D # get the summary of the model
+##D summary(model)
+##D 
+##D # make predictions
+##D predictions &lt;- predict(model, df)
+##D 
+##D # save and load the model
+##D path &lt;- &quot;path/to/model&quot;
+##D write.ml(model, path)
+##D savedModel &lt;- read.ml(path)
+##D summary(savedModel)
+##D 
+##D # fit a Random Forest Classification Model
+##D t &lt;- as.data.frame(Titanic)
+##D df &lt;- createDataFrame(t)
+##D model &lt;- spark.randomForest(df, Survived ~ Freq + Age, &quot;classification&quot;)
+## End(Not run)
+</code></pre>
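+
+<p>As a hedged addition (not part of the generated Rd examples), the list components documented
+in the Value section could be read from the model summary; this assumes the <code>model</code>
+fitted in the example above:</p>
+
+<pre><code class="r">## Not run: 
+##D s &lt;- summary(model)
+##D s$numTrees             # number of trees
+##D s$featureImportances   # feature importances
+##D s$treeWeights          # tree weights
+## End(Not run)
+</code></pre>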
+
+
+<hr /><div style="text-align: center;">[Package <em>SparkR</em> version 2.2.2 <a href="00Index.html">Index</a>]</div>
+</body></html>

http://git-wip-us.apache.org/repos/asf/spark-website/blob/e1001463/site/docs/2.2.2/api/R/spark.survreg.html
----------------------------------------------------------------------
diff --git a/site/docs/2.2.2/api/R/spark.survreg.html b/site/docs/2.2.2/api/R/spark.survreg.html
new file mode 100644
index 0000000..8b2fc71
--- /dev/null
+++ b/site/docs/2.2.2/api/R/spark.survreg.html
@@ -0,0 +1,144 @@
+<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"><html xmlns="http://www.w3.org/1999/xhtml"><head><title>R: Accelerated Failure Time (AFT) Survival Regression Model</title>
+<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
+<link rel="stylesheet" type="text/css" href="R.css" />
+
+<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/styles/github.min.css">
+<script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/highlight.min.js"></script>
+<script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/languages/r.min.js"></script>
+<script>hljs.initHighlightingOnLoad();</script>
+</head><body>
+
+<table width="100%" summary="page for spark.survreg {SparkR}"><tr><td>spark.survreg {SparkR}</td><td style="text-align: right;">R Documentation</td></tr></table>
+
+<h2>Accelerated Failure Time (AFT) Survival Regression Model</h2>
+
+<h3>Description</h3>
+
+<p><code>spark.survreg</code> fits an accelerated failure time (AFT) survival regression model on
+a SparkDataFrame. Users can call <code>summary</code> to get a summary of the fitted AFT model,
+<code>predict</code> to make predictions on new data, and <code>write.ml</code>/<code>read.ml</code> to
+save/load fitted models.
+</p>
+
+
+<h3>Usage</h3>
+
+<pre>
+spark.survreg(data, formula, ...)
+
+## S4 method for signature 'SparkDataFrame,formula'
+spark.survreg(data, formula,
+  aggregationDepth = 2)
+
+## S4 method for signature 'AFTSurvivalRegressionModel'
+summary(object)
+
+## S4 method for signature 'AFTSurvivalRegressionModel'
+predict(object, newData)
+
+## S4 method for signature 'AFTSurvivalRegressionModel,character'
+write.ml(object, path,
+  overwrite = FALSE)
+</pre>
+
+
+<h3>Arguments</h3>
+
+<table summary="R argblock">
+<tr valign="top"><td><code>data</code></td>
+<td>
+<p>a SparkDataFrame for training.</p>
+</td></tr>
+<tr valign="top"><td><code>formula</code></td>
+<td>
+<p>a symbolic description of the model to be fitted. Currently only a few formula
+operators are supported, including '~', ':', '+', and '-'.
+Note that operator '.' is not supported currently.</p>
+</td></tr>
+<tr valign="top"><td><code>...</code></td>
+<td>
+<p>additional arguments passed to the method.</p>
+</td></tr>
+<tr valign="top"><td><code>aggregationDepth</code></td>
+<td>
+<p>The depth for treeAggregate (greater than or equal to 2). If the dimensions of features
+or the number of partitions are large, this param could be adjusted to a larger size.
+This is an expert parameter. Default value should be good for most cases.</p>
+</td></tr>
+<tr valign="top"><td><code>object</code></td>
+<td>
+<p>a fitted AFT survival regression model.</p>
+</td></tr>
+<tr valign="top"><td><code>newData</code></td>
+<td>
+<p>a SparkDataFrame for testing.</p>
+</td></tr>
+<tr valign="top"><td><code>path</code></td>
+<td>
+<p>the directory where the model is saved.</p>
+</td></tr>
+<tr valign="top"><td><code>overwrite</code></td>
+<td>
+<p>whether to overwrite if the output path already exists. Default is FALSE,
+which means an exception is thrown if the output path exists.</p>
+</td></tr>
+</table>
+
+
+<h3>Value</h3>
+
+<p><code>spark.survreg</code> returns a fitted AFT survival regression model.
+</p>
+<p><code>summary</code> returns summary information of the fitted model, which is a list.
+The list includes the model's <code>coefficients</code> (features, coefficients,
+intercept and log(scale)).
+</p>
+<p><code>predict</code> returns a SparkDataFrame containing predicted values
+on the original scale of the data (mean predicted value at scale = 1.0).
+</p>
+
+
+<h3>Note</h3>
+
+<p>spark.survreg since 2.0.0
+</p>
+<p>summary(AFTSurvivalRegressionModel) since 2.0.0
+</p>
+<p>predict(AFTSurvivalRegressionModel) since 2.0.0
+</p>
+<p>write.ml(AFTSurvivalRegressionModel, character) since 2.0.0
+</p>
+
+
+<h3>See Also</h3>
+
+<p>survival: <a href="https://cran.r-project.org/package=survival">https://cran.r-project.org/package=survival</a>
+</p>
+<p><a href="write.ml.html">write.ml</a>
+</p>
+
+
+<h3>Examples</h3>
+
+<pre><code class="r">## Not run: 
+##D df &lt;- createDataFrame(ovarian)
+##D model &lt;- spark.survreg(df, Surv(futime, fustat) ~ ecog_ps + rx)
+##D 
+##D # get a summary of the model
+##D summary(model)
+##D 
+##D # make predictions
+##D predicted &lt;- predict(model, df)
+##D showDF(predicted)
+##D 
+##D # save and load the model
+##D path &lt;- &quot;path/to/model&quot;
+##D write.ml(model, path)
+##D savedModel &lt;- read.ml(path)
+##D summary(savedModel)
+## End(Not run)
+</code></pre>
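+
+<p>A small, hedged addition (not from the Rd sources): the coefficients described in the Value
+section could be inspected from the summary of the <code>model</code> fitted above:</p>
+
+<pre><code class="r">## Not run: 
+##D s &lt;- summary(model)
+##D s$coefficients   # features, coefficients, intercept and log(scale)
+## End(Not run)
+</code></pre>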
+
+
+<hr /><div style="text-align: center;">[Package <em>SparkR</em> version 2.2.2 <a href="00Index.html">Index</a>]</div>
+</body></html>

http://git-wip-us.apache.org/repos/asf/spark-website/blob/e1001463/site/docs/2.2.2/api/R/spark.svmLinear.html
----------------------------------------------------------------------
diff --git a/site/docs/2.2.2/api/R/spark.svmLinear.html b/site/docs/2.2.2/api/R/spark.svmLinear.html
new file mode 100644
index 0000000..3cd8e41
--- /dev/null
+++ b/site/docs/2.2.2/api/R/spark.svmLinear.html
@@ -0,0 +1,164 @@
+<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"><html xmlns="http://www.w3.org/1999/xhtml"><head><title>R: Linear SVM Model</title>
+<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
+<link rel="stylesheet" type="text/css" href="R.css" />
+
+<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/styles/github.min.css">
+<script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/highlight.min.js"></script>
+<script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/languages/r.min.js"></script>
+<script>hljs.initHighlightingOnLoad();</script>
+</head><body>
+
+<table width="100%" summary="page for spark.svmLinear {SparkR}"><tr><td>spark.svmLinear {SparkR}</td><td style="text-align: right;">R Documentation</td></tr></table>
+
+<h2>Linear SVM Model</h2>
+
+<h3>Description</h3>
+
+<p>Fits a linear SVM model against a SparkDataFrame, similar to svm in the e1071 package.
+Currently only binary classification with a linear kernel is supported.
+Users can print the model, make predictions with it, and save the model to a path.
+</p>
+
+
+<h3>Usage</h3>
+
+<pre>
+spark.svmLinear(data, formula, ...)
+
+## S4 method for signature 'SparkDataFrame,formula'
+spark.svmLinear(data, formula,
+  regParam = 0, maxIter = 100, tol = 1e-06, standardization = TRUE,
+  threshold = 0, weightCol = NULL, aggregationDepth = 2)
+
+## S4 method for signature 'LinearSVCModel'
+predict(object, newData)
+
+## S4 method for signature 'LinearSVCModel'
+summary(object)
+
+## S4 method for signature 'LinearSVCModel,character'
+write.ml(object, path, overwrite = FALSE)
+</pre>
+
+
+<h3>Arguments</h3>
+
+<table summary="R argblock">
+<tr valign="top"><td><code>data</code></td>
+<td>
+<p>SparkDataFrame for training.</p>
+</td></tr>
+<tr valign="top"><td><code>formula</code></td>
+<td>
+<p>A symbolic description of the model to be fitted. Currently only a few formula
+operators are supported, including '~', '.', ':', '+', and '-'.</p>
+</td></tr>
+<tr valign="top"><td><code>...</code></td>
+<td>
+<p>additional arguments passed to the method.</p>
+</td></tr>
+<tr valign="top"><td><code>regParam</code></td>
+<td>
+<p>The regularization parameter. Only supports L2 regularization currently.</p>
+</td></tr>
+<tr valign="top"><td><code>maxIter</code></td>
+<td>
+<p>Maximum iteration number.</p>
+</td></tr>
+<tr valign="top"><td><code>tol</code></td>
+<td>
+<p>Convergence tolerance of iterations.</p>
+</td></tr>
+<tr valign="top"><td><code>standardization</code></td>
+<td>
+<p>Whether to standardize the training features before fitting the model. The coefficients
+of models will always be returned on the original scale, so it will be transparent for
+users. Note that with or without standardization, the models should always converge
+to the same solution when no regularization is applied.</p>
+</td></tr>
+<tr valign="top"><td><code>threshold</code></td>
+<td>
+<p>The threshold in binary classification applied to the linear model prediction.
+This threshold can be any real number, where Inf will make all predictions 0.0
+and -Inf will make all predictions 1.0.</p>
+</td></tr>
+<tr valign="top"><td><code>weightCol</code></td>
+<td>
+<p>The weight column name.</p>
+</td></tr>
+<tr valign="top"><td><code>aggregationDepth</code></td>
+<td>
+<p>The depth for treeAggregate (greater than or equal to 2). If the dimensions of features
+or the number of partitions are large, this param could be adjusted to a larger size.
+This is an expert parameter. Default value should be good for most cases.</p>
+</td></tr>
+<tr valign="top"><td><code>object</code></td>
+<td>
+<p>a LinearSVCModel fitted by <code>spark.svmLinear</code>.</p>
+</td></tr>
+<tr valign="top"><td><code>newData</code></td>
+<td>
+<p>a SparkDataFrame for testing.</p>
+</td></tr>
+<tr valign="top"><td><code>path</code></td>
+<td>
+<p>The directory where the model is saved.</p>
+</td></tr>
+<tr valign="top"><td><code>overwrite</code></td>
+<td>
+<p>Whether to overwrite if the output path already exists. Default is FALSE,
+which means an exception is thrown if the output path exists.</p>
+</td></tr>
+</table>
+
+
+<h3>Value</h3>
+
+<p><code>spark.svmLinear</code> returns a fitted linear SVM model.
+</p>
+<p><code>predict</code> returns the predicted values based on a LinearSVCModel.
+</p>
+<p><code>summary</code> returns summary information of the fitted model, which is a list.
+The list includes <code>coefficients</code> (coefficients of the fitted model),
+<code>numClasses</code> (number of classes), <code>numFeatures</code> (number of features).
+</p>
+
+
+<h3>Note</h3>
+
+<p>spark.svmLinear since 2.2.0
+</p>
+<p>predict(LinearSVCModel) since 2.2.0
+</p>
+<p>summary(LinearSVCModel) since 2.2.0
+</p>
+<p>write.ml(LinearSVCModel, character) since 2.2.0
+</p>
+
+
+<h3>Examples</h3>
+
+<pre><code class="r">## Not run: 
+##D sparkR.session()
+##D t &lt;- as.data.frame(Titanic)
+##D training &lt;- createDataFrame(t)
+##D model &lt;- spark.svmLinear(training, Survived ~ ., regParam = 0.5)
+##D summary &lt;- summary(model)
+##D 
+##D # fitted values on training data
+##D fitted &lt;- predict(model, training)
+##D 
+##D # save fitted model to input path
+##D path &lt;- &quot;path/to/model&quot;
+##D write.ml(model, path)
+##D 
+##D # can also read back the saved model and predict
+##D # Note that summary does not work on a loaded model
+##D savedModel &lt;- read.ml(path)
+##D summary(savedModel)
+## End(Not run)
+</code></pre>
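+
+<p>As another hedged sketch (an addition, not generated from the Rd sources), the components
+listed in the Value section could be read from the <code>summary</code> list created in the
+example above:</p>
+
+<pre><code class="r">## Not run: 
+##D summary$coefficients   # coefficients of the fitted model
+##D summary$numClasses     # number of classes
+##D summary$numFeatures    # number of features
+## End(Not run)
+</code></pre>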
+
+
+<hr /><div style="text-align: center;">[Package <em>SparkR</em> version 2.2.2 <a href="00Index.html">Index</a>]</div>
+</body></html>

http://git-wip-us.apache.org/repos/asf/spark-website/blob/e1001463/site/docs/2.2.2/api/R/sparkR.callJMethod.html
----------------------------------------------------------------------
diff --git a/site/docs/2.2.2/api/R/sparkR.callJMethod.html b/site/docs/2.2.2/api/R/sparkR.callJMethod.html
new file mode 100644
index 0000000..0061a48
--- /dev/null
+++ b/site/docs/2.2.2/api/R/sparkR.callJMethod.html
@@ -0,0 +1,90 @@
+<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"><html xmlns="http://www.w3.org/1999/xhtml"><head><title>R: Call Java Methods</title>
+<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
+<link rel="stylesheet" type="text/css" href="R.css" />
+
+<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/styles/github.min.css">
+<script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/highlight.min.js"></script>
+<script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/languages/r.min.js"></script>
+<script>hljs.initHighlightingOnLoad();</script>
+</head><body>
+
+<table width="100%" summary="page for sparkR.callJMethod {SparkR}"><tr><td>sparkR.callJMethod {SparkR}</td><td style="text-align: right;">R Documentation</td></tr></table>
+
+<h2>Call Java Methods</h2>
+
+<h3>Description</h3>
+
+<p>Call a Java method in the JVM running the Spark driver. The return
+values are automatically converted to R objects for simple objects. Other
+values are returned as &quot;jobj&quot;, which are references to objects on the JVM.
+</p>
+
+
+<h3>Usage</h3>
+
+<pre>
+sparkR.callJMethod(x, methodName, ...)
+</pre>
+
+
+<h3>Arguments</h3>
+
+<table summary="R argblock">
+<tr valign="top"><td><code>x</code></td>
+<td>
+<p>object to invoke the method on. Should be a &quot;jobj&quot; created by newJObject.</p>
+</td></tr>
+<tr valign="top"><td><code>methodName</code></td>
+<td>
+<p>method name to call.</p>
+</td></tr>
+<tr valign="top"><td><code>...</code></td>
+<td>
+<p>parameters to pass to the Java method.</p>
+</td></tr>
+</table>
+
+
+<h3>Details</h3>
+
+<p>This is a low level function to access the JVM directly and should only be used
+for advanced use cases. The arguments and return values that are primitive R
+types (like integer, numeric, character, lists) are automatically translated to/from
+Java types (like Integer, Double, String, Array). A full list can be found in
+serialize.R and deserialize.R in the Apache Spark code base.
+</p>
+
+
+<h3>Value</h3>
+
+<p>the return value of the Java method, either returned as an R object
+if it can be deserialized or returned as a &quot;jobj&quot;. See the Details section for more.
+</p>
+
+
+<h3>Note</h3>
+
+<p>sparkR.callJMethod since 2.0.1
+</p>
+
+
+<h3>See Also</h3>
+
+<p><a href="sparkR.callJStatic.html">sparkR.callJStatic</a>, <a href="sparkR.newJObject.html">sparkR.newJObject</a>
+</p>
+
+
+<h3>Examples</h3>
+
+<pre><code class="r">## Not run: 
+##D sparkR.session() # Need to have a Spark JVM running before calling newJObject
+##D # Create a Java ArrayList and populate it
+##D jarray &lt;- sparkR.newJObject(&quot;java.util.ArrayList&quot;)
+##D sparkR.callJMethod(jarray, &quot;add&quot;, 42L)
+##D sparkR.callJMethod(jarray, &quot;get&quot;, 0L) # Will print 42
+## End(Not run)
+</code></pre>
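+
+<p>A further hedged sketch (added here, not generated from the Rd sources) illustrating the
+automatic conversion described in Details: primitive Java return values come back as plain R
+values. It assumes the <code>jarray</code> created in the example above:</p>
+
+<pre><code class="r">## Not run: 
+##D sparkR.callJMethod(jarray, &quot;size&quot;)      # Java int is returned as an R integer (1L)
+##D sparkR.callJMethod(jarray, &quot;toString&quot;)  # Java String is returned as an R character (&quot;[42]&quot;)
+## End(Not run)
+</code></pre>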
+
+
+<hr /><div style="text-align: center;">[Package <em>SparkR</em> version 2.2.2 <a href="00Index.html">Index</a>]</div>
+</body></html>

http://git-wip-us.apache.org/repos/asf/spark-website/blob/e1001463/site/docs/2.2.2/api/R/sparkR.callJStatic.html
----------------------------------------------------------------------
diff --git a/site/docs/2.2.2/api/R/sparkR.callJStatic.html b/site/docs/2.2.2/api/R/sparkR.callJStatic.html
new file mode 100644
index 0000000..5fe94e6
--- /dev/null
+++ b/site/docs/2.2.2/api/R/sparkR.callJStatic.html
@@ -0,0 +1,88 @@
+<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"><html xmlns="http://www.w3.org/1999/xhtml"><head><title>R: Call Static Java Methods</title>
+<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
+<link rel="stylesheet" type="text/css" href="R.css" />
+
+<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/styles/github.min.css">
+<script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/highlight.min.js"></script>
+<script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/languages/r.min.js"></script>
+<script>hljs.initHighlightingOnLoad();</script>
+</head><body>
+
+<table width="100%" summary="page for sparkR.callJStatic {SparkR}"><tr><td>sparkR.callJStatic {SparkR}</td><td style="text-align: right;">R Documentation</td></tr></table>
+
+<h2>Call Static Java Methods</h2>
+
+<h3>Description</h3>
+
+<p>Call a static method in the JVM running the Spark driver. The return
+value is automatically converted to an R object for simple objects. Other
+values are returned as &quot;jobj&quot;, which are references to objects on the JVM.
+</p>
+
+
+<h3>Usage</h3>
+
+<pre>
+sparkR.callJStatic(x, methodName, ...)
+</pre>
+
+
+<h3>Arguments</h3>
+
+<table summary="R argblock">
+<tr valign="top"><td><code>x</code></td>
+<td>
+<p>fully qualified Java class name that contains the static method to invoke.</p>
+</td></tr>
+<tr valign="top"><td><code>methodName</code></td>
+<td>
+<p>name of static method to invoke.</p>
+</td></tr>
+<tr valign="top"><td><code>...</code></td>
+<td>
+<p>parameters to pass to the Java method.</p>
+</td></tr>
+</table>
+
+
+<h3>Details</h3>
+
+<p>This is a low level function to access the JVM directly and should only be used
+for advanced use cases. The arguments and return values that are primitive R
+types (like integer, numeric, character, lists) are automatically translated to/from
+Java types (like Integer, Double, String, Array). A full list can be found in
+serialize.R and deserialize.R in the Apache Spark code base.
+</p>
+
+
+<h3>Value</h3>
+
+<p>the return value of the Java method, either returned as an R object
+if it can be deserialized or returned as a &quot;jobj&quot;. See the Details section for more.
+</p>
+
+
+<h3>Note</h3>
+
+<p>sparkR.callJStatic since 2.0.1
+</p>
+
+
+<h3>See Also</h3>
+
+<p><a href="sparkR.callJMethod.html">sparkR.callJMethod</a>, <a href="sparkR.newJObject.html">sparkR.newJObject</a>
+</p>
+
+
+<h3>Examples</h3>
+
+<pre><code class="r">## Not run: 
+##D sparkR.session() # Need to have a Spark JVM running before calling callJStatic
+##D sparkR.callJStatic(&quot;java.lang.System&quot;, &quot;currentTimeMillis&quot;)
+##D sparkR.callJStatic(&quot;java.lang.System&quot;, &quot;getProperty&quot;, &quot;java.home&quot;)
+## End(Not run)
+</code></pre>
+
+
+<hr /><div style="text-align: center;">[Package <em>SparkR</em> version 2.2.2 <a href="00Index.html">Index</a>]</div>
+</body></html>

http://git-wip-us.apache.org/repos/asf/spark-website/blob/e1001463/site/docs/2.2.2/api/R/sparkR.conf.html
----------------------------------------------------------------------
diff --git a/site/docs/2.2.2/api/R/sparkR.conf.html b/site/docs/2.2.2/api/R/sparkR.conf.html
new file mode 100644
index 0000000..a75f07e
--- /dev/null
+++ b/site/docs/2.2.2/api/R/sparkR.conf.html
@@ -0,0 +1,68 @@
+<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"><html xmlns="http://www.w3.org/1999/xhtml"><head><title>R: Get Runtime Config from the current active SparkSession</title>
+<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
+<link rel="stylesheet" type="text/css" href="R.css" />
+
+<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/styles/github.min.css">
+<script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/highlight.min.js"></script>
+<script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/languages/r.min.js"></script>
+<script>hljs.initHighlightingOnLoad();</script>
+</head><body>
+
+<table width="100%" summary="page for sparkR.conf {SparkR}"><tr><td>sparkR.conf {SparkR}</td><td style="text-align: right;">R Documentation</td></tr></table>
+
+<h2>Get Runtime Config from the current active SparkSession</h2>
+
+<h3>Description</h3>
+
+<p>Get Runtime Config from the current active SparkSession.
+To change SparkSession Runtime Config, please see <code>sparkR.session()</code>.
+</p>
+
+
+<h3>Usage</h3>
+
+<pre>
+sparkR.conf(key, defaultValue)
+</pre>
+
+
+<h3>Arguments</h3>
+
+<table summary="R argblock">
+<tr valign="top"><td><code>key</code></td>
+<td>
+<p>(optional) The key of the config to get; if omitted, all config values are returned</p>
+</td></tr>
+<tr valign="top"><td><code>defaultValue</code></td>
+<td>
+<p>(optional) The default value of the config to return if the config is not
+set; if omitted, the call fails if the config key is not set</p>
+</td></tr>
+</table>
+
+
+<h3>Value</h3>
+
+<p>a list of config values with keys as their names
+</p>
+
+
+<h3>Note</h3>
+
+<p>sparkR.conf since 2.0.0
+</p>
+
+
+<h3>Examples</h3>
+
+<pre><code class="r">## Not run: 
+##D sparkR.session()
+##D allConfigs &lt;- sparkR.conf()
+##D masterValue &lt;- unlist(sparkR.conf(&quot;spark.master&quot;))
+##D namedConfig &lt;- sparkR.conf(&quot;spark.executor.memory&quot;, &quot;0g&quot;)
+## End(Not run)
+</code></pre>
+
+
+<hr /><div style="text-align: center;">[Package <em>SparkR</em> version 2.2.2 <a href="00Index.html">Index</a>]</div>
+</body></html>

http://git-wip-us.apache.org/repos/asf/spark-website/blob/e1001463/site/docs/2.2.2/api/R/sparkR.init-deprecated.html
----------------------------------------------------------------------
diff --git a/site/docs/2.2.2/api/R/sparkR.init-deprecated.html b/site/docs/2.2.2/api/R/sparkR.init-deprecated.html
new file mode 100644
index 0000000..1a00811
--- /dev/null
+++ b/site/docs/2.2.2/api/R/sparkR.init-deprecated.html
@@ -0,0 +1,92 @@
+<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"><html xmlns="http://www.w3.org/1999/xhtml"><head><title>R: (Deprecated) Initialize a new Spark Context</title>
+<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
+<link rel="stylesheet" type="text/css" href="R.css" />
+
+<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/styles/github.min.css">
+<script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/highlight.min.js"></script>
+<script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/languages/r.min.js"></script>
+<script>hljs.initHighlightingOnLoad();</script>
+</head><body>
+
+<table width="100%" summary="page for sparkR.init {SparkR}"><tr><td>sparkR.init {SparkR}</td><td style="text-align: right;">R Documentation</td></tr></table>
+
+<h2>(Deprecated) Initialize a new Spark Context</h2>
+
+<h3>Description</h3>
+
+<p>This function initializes a new SparkContext.
+</p>
+
+
+<h3>Usage</h3>
+
+<pre>
+sparkR.init(master = "", appName = "SparkR",
+  sparkHome = Sys.getenv("SPARK_HOME"), sparkEnvir = list(),
+  sparkExecutorEnv = list(), sparkJars = "", sparkPackages = "")
+</pre>
+
+
+<h3>Arguments</h3>
+
+<table summary="R argblock">
+<tr valign="top"><td><code>master</code></td>
+<td>
+<p>The Spark master URL</p>
+</td></tr>
+<tr valign="top"><td><code>appName</code></td>
+<td>
+<p>Application name to register with cluster manager</p>
+</td></tr>
+<tr valign="top"><td><code>sparkHome</code></td>
+<td>
+<p>Spark Home directory</p>
+</td></tr>
+<tr valign="top"><td><code>sparkEnvir</code></td>
+<td>
+<p>Named list of environment variables to set on worker nodes</p>
+</td></tr>
+<tr valign="top"><td><code>sparkExecutorEnv</code></td>
+<td>
+<p>Named list of environment variables to be used when launching executors</p>
+</td></tr>
+<tr valign="top"><td><code>sparkJars</code></td>
+<td>
+<p>Character vector of jar files to pass to the worker nodes</p>
+</td></tr>
+<tr valign="top"><td><code>sparkPackages</code></td>
+<td>
+<p>Character vector of package coordinates</p>
+</td></tr>
+</table>
+
+
+<h3>Note</h3>
+
+<p>sparkR.init since 1.4.0
+</p>
+
+
+<h3>See Also</h3>
+
+<p><a href="sparkR.session.html">sparkR.session</a>
+</p>
+
+
+<h3>Examples</h3>
+
+<pre><code class="r">## Not run: 
+##D sc &lt;- sparkR.init(&quot;local[2]&quot;, &quot;SparkR&quot;, &quot;/home/spark&quot;)
+##D sc &lt;- sparkR.init(&quot;local[2]&quot;, &quot;SparkR&quot;, &quot;/home/spark&quot;,
+##D                  list(spark.executor.memory=&quot;1g&quot;))
+##D sc &lt;- sparkR.init(&quot;yarn-client&quot;, &quot;SparkR&quot;, &quot;/home/spark&quot;,
+##D                  list(spark.executor.memory=&quot;4g&quot;),
+##D                  list(LD_LIBRARY_PATH=&quot;/directory of JVM libraries (libjvm.so) on workers/&quot;),
+##D                  c(&quot;one.jar&quot;, &quot;two.jar&quot;, &quot;three.jar&quot;),
+##D                  c(&quot;com.databricks:spark-avro_2.10:2.0.1&quot;))
+## End(Not run)
+</code></pre>
+
+
+<hr /><div style="text-align: center;">[Package <em>SparkR</em> version 2.2.2 <a href="00Index.html">Index</a>]</div>
+</body></html>

http://git-wip-us.apache.org/repos/asf/spark-website/blob/e1001463/site/docs/2.2.2/api/R/sparkR.newJObject.html
----------------------------------------------------------------------
diff --git a/site/docs/2.2.2/api/R/sparkR.newJObject.html b/site/docs/2.2.2/api/R/sparkR.newJObject.html
new file mode 100644
index 0000000..8acedd9
--- /dev/null
+++ b/site/docs/2.2.2/api/R/sparkR.newJObject.html
@@ -0,0 +1,86 @@
+<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"><html xmlns="http://www.w3.org/1999/xhtml"><head><title>R: Create Java Objects</title>
+<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
+<link rel="stylesheet" type="text/css" href="R.css" />
+
+<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/styles/github.min.css">
+<script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/highlight.min.js"></script>
+<script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/languages/r.min.js"></script>
+<script>hljs.initHighlightingOnLoad();</script>
+</head><body>
+
+<table width="100%" summary="page for sparkR.newJObject {SparkR}"><tr><td>sparkR.newJObject {SparkR}</td><td style="text-align: right;">R Documentation</td></tr></table>
+
+<h2>Create Java Objects</h2>
+
+<h3>Description</h3>
+
+<p>Create a new Java object in the JVM running the Spark driver. The return
+value is automatically converted to an R object for simple objects. Other
+values are returned as a &quot;jobj&quot;, which is a reference to an object on the JVM.
+</p>
+
+
+<h3>Usage</h3>
+
+<pre>
+sparkR.newJObject(x, ...)
+</pre>
+
+
+<h3>Arguments</h3>
+
+<table summary="R argblock">
+<tr valign="top"><td><code>x</code></td>
+<td>
+<p>fully qualified Java class name.</p>
+</td></tr>
+<tr valign="top"><td><code>...</code></td>
+<td>
+<p>arguments to be passed to the constructor.</p>
+</td></tr>
+</table>
+
+
+<h3>Details</h3>
+
+<p>This is a low level function to access the JVM directly and should only be used
+for advanced use cases. The arguments and return values that are primitive R
+types (like integer, numeric, character, lists) are automatically translated to/from
+Java types (like Integer, Double, String, Array). A full list can be found in
+serialize.R and deserialize.R in the Apache Spark code base.
+</p>
+
+
+<h3>Value</h3>
+
+<p>the object created, either returned as an R object
+if it can be deserialized or returned as a &quot;jobj&quot;. See the Details section for more.
+</p>
+
+
+<h3>Note</h3>
+
+<p>sparkR.newJObject since 2.0.1
+</p>
+
+
+<h3>See Also</h3>
+
+<p><a href="sparkR.callJMethod.html">sparkR.callJMethod</a>, <a href="sparkR.callJStatic.html">sparkR.callJStatic</a>
+</p>
+
+
+<h3>Examples</h3>
+
+<pre><code class="r">## Not run: 
+##D sparkR.session() # Need to have a Spark JVM running before calling newJObject
+##D # Create a Java ArrayList and populate it
+##D jarray &lt;- sparkR.newJObject(&quot;java.util.ArrayList&quot;)
+##D sparkR.callJMethod(jarray, &quot;add&quot;, 42L)
+##D sparkR.callJMethod(jarray, &quot;get&quot;, 0L) # Will print 42
+## End(Not run)
+</code></pre>
+
+
+<hr /><div style="text-align: center;">[Package <em>SparkR</em> version 2.2.2 <a href="00Index.html">Index</a>]</div>
+</body></html>

http://git-wip-us.apache.org/repos/asf/spark-website/blob/e1001463/site/docs/2.2.2/api/R/sparkR.session.html
----------------------------------------------------------------------
diff --git a/site/docs/2.2.2/api/R/sparkR.session.html b/site/docs/2.2.2/api/R/sparkR.session.html
new file mode 100644
index 0000000..4abbb49
--- /dev/null
+++ b/site/docs/2.2.2/api/R/sparkR.session.html
@@ -0,0 +1,114 @@
+<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"><html xmlns="http://www.w3.org/1999/xhtml"><head><title>R: Get the existing SparkSession or initialize a new...</title>
+<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
+<link rel="stylesheet" type="text/css" href="R.css" />
+
+<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/styles/github.min.css">
+<script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/highlight.min.js"></script>
+<script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/languages/r.min.js"></script>
+<script>hljs.initHighlightingOnLoad();</script>
+</head><body>
+
+<table width="100%" summary="page for sparkR.session {SparkR}"><tr><td>sparkR.session {SparkR}</td><td style="text-align: right;">R Documentation</td></tr></table>
+
+<h2>Get the existing SparkSession or initialize a new SparkSession.</h2>
+
+<h3>Description</h3>
+
+<p>SparkSession is the entry point into SparkR. <code>sparkR.session</code> gets the existing
+SparkSession or initializes a new SparkSession.
+Additional Spark properties can be set in <code>...</code>, and these named parameters take priority
+over values in <code>master</code>, <code>appName</code>, and the named list <code>sparkConfig</code>.
+</p>
+
+
+<h3>Usage</h3>
+
+<pre>
+sparkR.session(master = "", appName = "SparkR",
+  sparkHome = Sys.getenv("SPARK_HOME"), sparkConfig = list(),
+  sparkJars = "", sparkPackages = "", enableHiveSupport = TRUE, ...)
+</pre>
+
+
+<h3>Arguments</h3>
+
+<table summary="R argblock">
+<tr valign="top"><td><code>master</code></td>
+<td>
+<p>the Spark master URL.</p>
+</td></tr>
+<tr valign="top"><td><code>appName</code></td>
+<td>
+<p>application name to register with cluster manager.</p>
+</td></tr>
+<tr valign="top"><td><code>sparkHome</code></td>
+<td>
+<p>Spark Home directory.</p>
+</td></tr>
+<tr valign="top"><td><code>sparkConfig</code></td>
+<td>
+<p>named list of Spark configuration to set on worker nodes.</p>
+</td></tr>
+<tr valign="top"><td><code>sparkJars</code></td>
+<td>
+<p>character vector of jar files to pass to the worker nodes.</p>
+</td></tr>
+<tr valign="top"><td><code>sparkPackages</code></td>
+<td>
+<p>character vector of package coordinates</p>
+</td></tr>
+<tr valign="top"><td><code>enableHiveSupport</code></td>
+<td>
+<p>enable support for Hive, falling back if Spark is not built with Hive support; once
+set, this cannot be turned off on an existing session</p>
+</td></tr>
+<tr valign="top"><td><code>...</code></td>
+<td>
+<p>named Spark properties passed to the method.</p>
+</td></tr>
+</table>
+
+
+<h3>Details</h3>
+
+<p>When called in an interactive session, this method checks for the Spark installation, and, if not
+found, Spark will be downloaded and cached automatically. Alternatively, <code>install.spark</code> can
+be called manually.
+</p>
+<p>A default warehouse is created automatically in the current directory when a managed table is
+created via a <code>sql</code> statement such as <code>CREATE TABLE</code>. To change the location of the
+warehouse, set the named parameter <code>spark.sql.warehouse.dir</code> on the SparkSession. Along with
+the warehouse, an accompanying metastore may also be automatically created in the current
+directory when a new SparkSession is initialized with <code>enableHiveSupport</code> set to
+<code>TRUE</code>, which is the default. For more details, refer to Hive configuration at
+<a href="http://spark.apache.org/docs/latest/sql-programming-guide.html#hive-tables">http://spark.apache.org/docs/latest/sql-programming-guide.html#hive-tables</a>.
+</p>
+<p>For details on how to initialize and use SparkR, refer to SparkR programming guide at
+<a href="http://spark.apache.org/docs/latest/sparkr.html#starting-up-sparksession">http://spark.apache.org/docs/latest/sparkr.html#starting-up-sparksession</a>.
+</p>
+
+
+<h3>Note</h3>
+
+<p>sparkR.session since 2.0.0
+</p>
+
+
+<h3>Examples</h3>
+
+<pre><code class="r">## Not run: 
+##D sparkR.session()
+##D df &lt;- read.json(path)
+##D 
+##D sparkR.session(&quot;local[2]&quot;, &quot;SparkR&quot;, &quot;/home/spark&quot;)
+##D sparkR.session(&quot;yarn-client&quot;, &quot;SparkR&quot;, &quot;/home/spark&quot;,
+##D                list(spark.executor.memory=&quot;4g&quot;),
+##D                c(&quot;one.jar&quot;, &quot;two.jar&quot;, &quot;three.jar&quot;),
+##D                c(&quot;com.databricks:spark-avro_2.10:2.0.1&quot;))
+##D sparkR.session(spark.master = &quot;yarn-client&quot;, spark.executor.memory = &quot;4g&quot;)
+## End(Not run)
+</code></pre>
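+
+<p>A hedged sketch (not part of the generated examples) of changing the warehouse location
+mentioned in Details by passing <code>spark.sql.warehouse.dir</code> as a named Spark property;
+the path used here is only illustrative:</p>
+
+<pre><code class="r">## Not run: 
+##D sparkR.session(spark.sql.warehouse.dir = &quot;/tmp/spark-warehouse&quot;,
+##D                enableHiveSupport = FALSE)
+## End(Not run)
+</code></pre>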
+
+
+<hr /><div style="text-align: center;">[Package <em>SparkR</em> version 2.2.2 <a href="00Index.html">Index</a>]</div>
+</body></html>

http://git-wip-us.apache.org/repos/asf/spark-website/blob/e1001463/site/docs/2.2.2/api/R/sparkR.session.stop.html
----------------------------------------------------------------------
diff --git a/site/docs/2.2.2/api/R/sparkR.session.stop.html b/site/docs/2.2.2/api/R/sparkR.session.stop.html
new file mode 100644
index 0000000..33c4a92
--- /dev/null
+++ b/site/docs/2.2.2/api/R/sparkR.session.stop.html
@@ -0,0 +1,39 @@
+<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"><html xmlns="http://www.w3.org/1999/xhtml"><head><title>R: Stop the Spark Session and Spark Context</title>
+<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
+<link rel="stylesheet" type="text/css" href="R.css" />
+</head><body>
+
+<table width="100%" summary="page for sparkR.session.stop {SparkR}"><tr><td>sparkR.session.stop {SparkR}</td><td style="text-align: right;">R Documentation</td></tr></table>
+
+<h2>Stop the Spark Session and Spark Context</h2>
+
+<h3>Description</h3>
+
+<p>Stop the Spark Session and Spark Context.
+</p>
+
+
+<h3>Usage</h3>
+
+<pre>
+sparkR.session.stop()
+
+sparkR.stop()
+</pre>
+
+
+<h3>Details</h3>
+
+<p>Also terminates the backend this R session is connected to.
+</p>
+
+
+<h3>Note</h3>
+
+<p>sparkR.session.stop since 2.0.0
+</p>
+<p>sparkR.stop since 1.4.0
+</p>
+
+<hr /><div style="text-align: center;">[Package <em>SparkR</em> version 2.2.2 <a href="00Index.html">Index</a>]</div>
+</body></html>

http://git-wip-us.apache.org/repos/asf/spark-website/blob/e1001463/site/docs/2.2.2/api/R/sparkR.uiWebUrl.html
----------------------------------------------------------------------
diff --git a/site/docs/2.2.2/api/R/sparkR.uiWebUrl.html b/site/docs/2.2.2/api/R/sparkR.uiWebUrl.html
new file mode 100644
index 0000000..01aa392
--- /dev/null
+++ b/site/docs/2.2.2/api/R/sparkR.uiWebUrl.html
@@ -0,0 +1,50 @@
+<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"><html xmlns="http://www.w3.org/1999/xhtml"><head><title>R: Get the URL of the SparkUI instance for the current active...</title>
+<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
+<link rel="stylesheet" type="text/css" href="R.css" />
+
+<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/styles/github.min.css">
+<script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/highlight.min.js"></script>
+<script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/languages/r.min.js"></script>
+<script>hljs.initHighlightingOnLoad();</script>
+</head><body>
+
+<table width="100%" summary="page for sparkR.uiWebUrl {SparkR}"><tr><td>sparkR.uiWebUrl {SparkR}</td><td style="text-align: right;">R Documentation</td></tr></table>
+
+<h2>Get the URL of the SparkUI instance for the current active SparkSession</h2>
+
+<h3>Description</h3>
+
+<p>Get the URL of the SparkUI instance for the current active SparkSession.
+</p>
+
+
+<h3>Usage</h3>
+
+<pre>
+sparkR.uiWebUrl()
+</pre>
+
+
+<h3>Value</h3>
+
+<p>the SparkUI URL, or NA if it is disabled or not started.
+</p>
+
+
+<h3>Note</h3>
+
+<p>sparkR.uiWebUrl since 2.1.1
+</p>
+
+
+<h3>Examples</h3>
+
+<pre><code class="r">## Not run: 
+##D sparkR.session()
+##D url &lt;- sparkR.uiWebUrl()
+## End(Not run)
+</code></pre>
+
+
+<hr /><div style="text-align: center;">[Package <em>SparkR</em> version 2.2.2 <a href="00Index.html">Index</a>]</div>
+</body></html>

http://git-wip-us.apache.org/repos/asf/spark-website/blob/e1001463/site/docs/2.2.2/api/R/sparkR.version.html
----------------------------------------------------------------------
diff --git a/site/docs/2.2.2/api/R/sparkR.version.html b/site/docs/2.2.2/api/R/sparkR.version.html
new file mode 100644
index 0000000..dcfd577
--- /dev/null
+++ b/site/docs/2.2.2/api/R/sparkR.version.html
@@ -0,0 +1,50 @@
+<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"><html xmlns="http://www.w3.org/1999/xhtml"><head><title>R: Get version of Spark on which this application is running</title>
+<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
+<link rel="stylesheet" type="text/css" href="R.css" />
+
+<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/styles/github.min.css">
+<script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/highlight.min.js"></script>
+<script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/languages/r.min.js"></script>
+<script>hljs.initHighlightingOnLoad();</script>
+</head><body>
+
+<table width="100%" summary="page for sparkR.version {SparkR}"><tr><td>sparkR.version {SparkR}</td><td style="text-align: right;">R Documentation</td></tr></table>
+
+<h2>Get version of Spark on which this application is running</h2>
+
+<h3>Description</h3>
+
+<p>Get version of Spark on which this application is running.
+</p>
+
+
+<h3>Usage</h3>
+
+<pre>
+sparkR.version()
+</pre>
+
+
+<h3>Value</h3>
+
+<p>a character string of the Spark version
+</p>
+
+
+<h3>Note</h3>
+
+<p>sparkR.version since 2.0.1
+</p>
+
+
+<h3>Examples</h3>
+
+<pre><code class="r">## Not run: 
+##D sparkR.session()
+##D version &lt;- sparkR.version()
+## End(Not run)
+</code></pre>
+
+
+<hr /><div style="text-align: center;">[Package <em>SparkR</em> version 2.2.2 <a href="00Index.html">Index</a>]</div>
+</body></html>

http://git-wip-us.apache.org/repos/asf/spark-website/blob/e1001463/site/docs/2.2.2/api/R/sparkRHive.init-deprecated.html
----------------------------------------------------------------------
diff --git a/site/docs/2.2.2/api/R/sparkRHive.init-deprecated.html b/site/docs/2.2.2/api/R/sparkRHive.init-deprecated.html
new file mode 100644
index 0000000..1d7566d
--- /dev/null
+++ b/site/docs/2.2.2/api/R/sparkRHive.init-deprecated.html
@@ -0,0 +1,67 @@
+<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"><html xmlns="http://www.w3.org/1999/xhtml"><head><title>R: (Deprecated) Initialize a new HiveContext</title>
+<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
+<link rel="stylesheet" type="text/css" href="R.css" />
+
+<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/styles/github.min.css">
+<script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/highlight.min.js"></script>
+<script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/languages/r.min.js"></script>
+<script>hljs.initHighlightingOnLoad();</script>
+</head><body>
+
+<table width="100%" summary="page for sparkRHive.init {SparkR}"><tr><td>sparkRHive.init {SparkR}</td><td style="text-align: right;">R Documentation</td></tr></table>
+
+<h2>(Deprecated) Initialize a new HiveContext</h2>
+
+<h3>Description</h3>
+
+<p>This function creates a HiveContext from an existing JavaSparkContext.
+</p>
+
+
+<h3>Usage</h3>
+
+<pre>
+sparkRHive.init(jsc = NULL)
+</pre>
+
+
+<h3>Arguments</h3>
+
+<table summary="R argblock">
+<tr valign="top"><td><code>jsc</code></td>
+<td>
+<p>The existing JavaSparkContext created with SparkR.init()</p>
+</td></tr>
+</table>
+
+
+<h3>Details</h3>
+
+<p>Starting with SparkR 2.0, a SparkSession is initialized and returned instead.
+This API is deprecated and kept for backward compatibility only.
+</p>
+
+
+<h3>Note</h3>
+
+<p>sparkRHive.init since 1.4.0
+</p>
+
+
+<h3>See Also</h3>
+
+<p><a href="sparkR.session.html">sparkR.session</a>
+</p>
+
+
+<h3>Examples</h3>
+
+<pre><code class="r">## Not run: 
+##D sc &lt;- sparkR.init()
+##D sqlContext &lt;- sparkRHive.init(sc)
+## End(Not run)
+</code></pre>
+
+
+<hr /><div style="text-align: center;">[Package <em>SparkR</em> version 2.2.2 <a href="00Index.html">Index</a>]</div>
+</body></html>

http://git-wip-us.apache.org/repos/asf/spark-website/blob/e1001463/site/docs/2.2.2/api/R/sparkRSQL.init-deprecated.html
----------------------------------------------------------------------
diff --git a/site/docs/2.2.2/api/R/sparkRSQL.init-deprecated.html b/site/docs/2.2.2/api/R/sparkRSQL.init-deprecated.html
new file mode 100644
index 0000000..ea99eed
--- /dev/null
+++ b/site/docs/2.2.2/api/R/sparkRSQL.init-deprecated.html
@@ -0,0 +1,68 @@
+<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"><html xmlns="http://www.w3.org/1999/xhtml"><head><title>R: (Deprecated) Initialize a new SQLContext</title>
+<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
+<link rel="stylesheet" type="text/css" href="R.css" />
+
+<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/styles/github.min.css">
+<script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/highlight.min.js"></script>
+<script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/languages/r.min.js"></script>
+<script>hljs.initHighlightingOnLoad();</script>
+</head><body>
+
+<table width="100%" summary="page for sparkRSQL.init {SparkR}"><tr><td>sparkRSQL.init {SparkR}</td><td style="text-align: right;">R Documentation</td></tr></table>
+
+<h2>(Deprecated) Initialize a new SQLContext</h2>
+
+<h3>Description</h3>
+
+<p>This function creates a SparkContext from an existing JavaSparkContext and
+then uses it to initialize a new SQLContext.
+</p>
+
+
+<h3>Usage</h3>
+
+<pre>
+sparkRSQL.init(jsc = NULL)
+</pre>
+
+
+<h3>Arguments</h3>
+
+<table summary="R argblock">
+<tr valign="top"><td><code>jsc</code></td>
+<td>
+<p>The existing JavaSparkContext created with SparkR.init()</p>
+</td></tr>
+</table>
+
+
+<h3>Details</h3>
+
+<p>Starting with SparkR 2.0, a SparkSession is initialized and returned instead.
+This API is deprecated and kept for backward compatibility only.
+</p>
+
+
+<h3>Note</h3>
+
+<p>sparkRSQL.init since 1.4.0
+</p>
+
+
+<h3>See Also</h3>
+
+<p><a href="sparkR.session.html">sparkR.session</a>
+</p>
+
+
+<h3>Examples</h3>
+
+<pre><code class="r">## Not run: 
+##D sc &lt;- sparkR.init()
+##D sqlContext &lt;- sparkRSQL.init(sc)
+## End(Not run)
+</code></pre>
+
+
+<hr /><div style="text-align: center;">[Package <em>SparkR</em> version 2.2.2 <a href="00Index.html">Index</a>]</div>
+</body></html>

http://git-wip-us.apache.org/repos/asf/spark-website/blob/e1001463/site/docs/2.2.2/api/R/spark_partition_id.html
----------------------------------------------------------------------
diff --git a/site/docs/2.2.2/api/R/spark_partition_id.html b/site/docs/2.2.2/api/R/spark_partition_id.html
new file mode 100644
index 0000000..033f1a5
--- /dev/null
+++ b/site/docs/2.2.2/api/R/spark_partition_id.html
@@ -0,0 +1,62 @@
+<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"><html xmlns="http://www.w3.org/1999/xhtml"><head><title>R: Return the partition ID as a column</title>
+<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
+<link rel="stylesheet" type="text/css" href="R.css" />
+
+<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/styles/github.min.css">
+<script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/highlight.min.js"></script>
+<script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/languages/r.min.js"></script>
+<script>hljs.initHighlightingOnLoad();</script>
+</head><body>
+
+<table width="100%" summary="page for spark_partition_id {SparkR}"><tr><td>spark_partition_id {SparkR}</td><td style="text-align: right;">R Documentation</td></tr></table>
+
+<h2>Return the partition ID as a column</h2>
+
+<h3>Description</h3>
+
+<p>Return the partition ID as a SparkDataFrame column.
+Note that this is nondeterministic because it depends on data partitioning and
+task scheduling.
+</p>
+
+
+<h3>Usage</h3>
+
+<pre>
+spark_partition_id(x = "missing")
+
+## S4 method for signature 'missing'
+spark_partition_id()
+</pre>
+
+
+<h3>Arguments</h3>
+
+<table summary="R argblock">
+<tr valign="top"><td><code>x</code></td>
+<td>
+<p>empty. Should be used with no argument.</p>
+</td></tr>
+</table>
+
+
+<h3>Details</h3>
+
+<p>This is equivalent to the SPARK_PARTITION_ID function in SQL.
+</p>
+
+
+<h3>Note</h3>
+
+<p>spark_partition_id since 2.0.0
+</p>
+
+
+<h3>Examples</h3>
+
+<pre><code class="r">## Not run: select(df, spark_partition_id())
+</code></pre>
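+
+<p>A slightly fuller, hedged sketch (an addition to the generated example), assuming an active
+SparkR session and a small illustrative SparkDataFrame:</p>
+
+<pre><code class="r">## Not run: 
+##D df &lt;- createDataFrame(mtcars)
+##D head(select(df, spark_partition_id()))
+## End(Not run)
+</code></pre>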
+
+
+<hr /><div style="text-align: center;">[Package <em>SparkR</em> version 2.2.2 <a href="00Index.html">Index</a>]</div>
+</body></html>

http://git-wip-us.apache.org/repos/asf/spark-website/blob/e1001463/site/docs/2.2.2/api/R/sql.html
----------------------------------------------------------------------
diff --git a/site/docs/2.2.2/api/R/sql.html b/site/docs/2.2.2/api/R/sql.html
new file mode 100644
index 0000000..44dce98
--- /dev/null
+++ b/site/docs/2.2.2/api/R/sql.html
@@ -0,0 +1,64 @@
+<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"><html xmlns="http://www.w3.org/1999/xhtml"><head><title>R: SQL Query</title>
+<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
+<link rel="stylesheet" type="text/css" href="R.css" />
+
+<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/styles/github.min.css">
+<script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/highlight.min.js"></script>
+<script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/languages/r.min.js"></script>
+<script>hljs.initHighlightingOnLoad();</script>
+</head><body>
+
+<table width="100%" summary="page for sql {SparkR}"><tr><td>sql {SparkR}</td><td style="text-align: right;">R Documentation</td></tr></table>
+
+<h2>SQL Query</h2>
+
+<h3>Description</h3>
+
+<p>Executes a SQL query using Spark, returning the result as a SparkDataFrame.
+</p>
+
+
+<h3>Usage</h3>
+
+<pre>
+## Default S3 method:
+sql(sqlQuery)
+</pre>
+
+
+<h3>Arguments</h3>
+
+<table summary="R argblock">
+<tr valign="top"><td><code>sqlQuery</code></td>
+<td>
+<p>A character vector containing the SQL query</p>
+</td></tr>
+</table>
+
+
+<h3>Value</h3>
+
+<p>SparkDataFrame
+</p>
+
+
+<h3>Note</h3>
+
+<p>sql since 1.4.0
+</p>
+
+
+<h3>Examples</h3>
+
+<pre><code class="r">## Not run: 
+##D sparkR.session()
+##D path &lt;- &quot;path/to/file.json&quot;
+##D df &lt;- read.json(path)
+##D createOrReplaceTempView(df, &quot;table&quot;)
+##D new_df &lt;- sql(&quot;SELECT * FROM table&quot;)
+## End(Not run)
+</code></pre>
+
+
+<hr /><div style="text-align: center;">[Package <em>SparkR</em> version 2.2.2 <a href="00Index.html">Index</a>]</div>
+</body></html>

http://git-wip-us.apache.org/repos/asf/spark-website/blob/e1001463/site/docs/2.2.2/api/R/sqrt.html
----------------------------------------------------------------------
diff --git a/site/docs/2.2.2/api/R/sqrt.html b/site/docs/2.2.2/api/R/sqrt.html
new file mode 100644
index 0000000..1d43653
--- /dev/null
+++ b/site/docs/2.2.2/api/R/sqrt.html
@@ -0,0 +1,77 @@
+<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"><html xmlns="http://www.w3.org/1999/xhtml"><head><title>R: sqrt</title>
+<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
+<link rel="stylesheet" type="text/css" href="R.css" />
+
+<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/styles/github.min.css">
+<script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/highlight.min.js"></script>
+<script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/languages/r.min.js"></script>
+<script>hljs.initHighlightingOnLoad();</script>
+</head><body>
+
+<table width="100%" summary="page for sqrt {SparkR}"><tr><td>sqrt {SparkR}</td><td style="text-align: right;">R Documentation</td></tr></table>
+
+<h2>sqrt</h2>
+
+<h3>Description</h3>
+
+<p>Computes the square root of the specified float value.
+</p>
+
+
+<h3>Usage</h3>
+
+<pre>
+## S4 method for signature 'Column'
+sqrt(x)
+</pre>
+
+
+<h3>Arguments</h3>
+
+<table summary="R argblock">
+<tr valign="top"><td><code>x</code></td>
+<td>
+<p>Column to compute on.</p>
+</td></tr>
+</table>
+
+
+<h3>Note</h3>
+
+<p>sqrt since 1.5.0
+</p>
+
+
+<h3>See Also</h3>
+
+<p>Other math_funcs: <code><a href="acos.html">acos</a></code>, <code><a href="asin.html">asin</a></code>,
+<code><a href="atan2.html">atan2</a></code>, <code><a href="atan.html">atan</a></code>,
+<code><a href="bin.html">bin</a></code>, <code><a href="bround.html">bround</a></code>,
+<code><a href="cbrt.html">cbrt</a></code>, <code><a href="ceil.html">ceil</a></code>,
+<code><a href="conv.html">conv</a></code>, <code><a href="corr.html">corr</a></code>,
+<code><a href="cosh.html">cosh</a></code>, <code><a href="cos.html">cos</a></code>,
+<code><a href="covar_pop.html">covar_pop</a></code>, <code><a href="cov.html">cov</a></code>,
+<code><a href="expm1.html">expm1</a></code>, <code><a href="exp.html">exp</a></code>,
+<code><a href="factorial.html">factorial</a></code>, <code><a href="floor.html">floor</a></code>,
+<code><a href="hex.html">hex</a></code>, <code><a href="hypot.html">hypot</a></code>,
+<code><a href="log10.html">log10</a></code>, <code><a href="log1p.html">log1p</a></code>,
+<code><a href="log2.html">log2</a></code>, <code><a href="log.html">log</a></code>,
+<code><a href="pmod.html">pmod</a></code>, <code><a href="rint.html">rint</a></code>,
+<code><a href="round.html">round</a></code>, <code><a href="shiftLeft.html">shiftLeft</a></code>,
+<code><a href="shiftRightUnsigned.html">shiftRightUnsigned</a></code>,
+<code><a href="shiftRight.html">shiftRight</a></code>, <code><a href="sign.html">signum</a></code>,
+<code><a href="sinh.html">sinh</a></code>, <code><a href="sin.html">sin</a></code>,
+<code><a href="tanh.html">tanh</a></code>, <code><a href="tan.html">tan</a></code>,
+<code><a href="toDegrees.html">toDegrees</a></code>, <code><a href="toRadians.html">toRadians</a></code>,
+<code><a href="unhex.html">unhex</a></code>
+</p>
+
+
+<h3>Examples</h3>
+
+<pre><code class="r">## Not run: sqrt(df$c)
+</code></pre>
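+
+<p>A minimal end-to-end sketch, assuming an active Spark session; the input data are
+illustrative only:</p>
+
+<pre><code class="r">## Not run: 
+##D sparkR.session()
+##D df &lt;- createDataFrame(data.frame(c = c(1, 4, 9, 16)))
+##D # compute the square root of column c
+##D head(select(df, sqrt(df$c)))
+## End(Not run)
+</code></pre>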
+
+
+<hr /><div style="text-align: center;">[Package <em>SparkR</em> version 2.2.2 <a href="00Index.html">Index</a>]</div>
+</body></html>

http://git-wip-us.apache.org/repos/asf/spark-website/blob/e1001463/site/docs/2.2.2/api/R/startsWith.html
----------------------------------------------------------------------
diff --git a/site/docs/2.2.2/api/R/startsWith.html b/site/docs/2.2.2/api/R/startsWith.html
new file mode 100644
index 0000000..63c3bce
--- /dev/null
+++ b/site/docs/2.2.2/api/R/startsWith.html
@@ -0,0 +1,56 @@
+<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"><html xmlns="http://www.w3.org/1999/xhtml"><head><title>R: startsWith</title>
+<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
+<link rel="stylesheet" type="text/css" href="R.css" />
+</head><body>
+
+<table width="100%" summary="page for startsWith {SparkR}"><tr><td>startsWith {SparkR}</td><td style="text-align: right;">R Documentation</td></tr></table>
+
+<h2>startsWith</h2>
+
+<h3>Description</h3>
+
+<p>Determines if entries of x start with the corresponding entries of prefix,
+where strings are recycled to a common length.
+</p>
+
+
+<h3>Usage</h3>
+
+<pre>
+startsWith(x, prefix)
+
+## S4 method for signature 'Column'
+startsWith(x, prefix)
+</pre>
+
+
+<h3>Arguments</h3>
+
+<table summary="R argblock">
+<tr valign="top"><td><code>x</code></td>
+<td>
+<p>vector of character strings whose &quot;starts&quot; are considered.</p>
+</td></tr>
+<tr valign="top"><td><code>prefix</code></td>
+<td>
+<p>character vector (often of length one)</p>
+</td></tr>
+</table>
+
+
+<h3>Note</h3>
+
+<p>startsWith since 1.4.0
+</p>
+
+
+<h3>See Also</h3>
+
+<p>Other colum_func: <code><a href="alias.html">alias</a></code>,
+<code><a href="between.html">between</a></code>, <code><a href="cast.html">cast</a></code>,
+<code><a href="endsWith.html">endsWith</a></code>, <code><a href="otherwise.html">otherwise</a></code>,
+<code><a href="over.html">over</a></code>, <code><a href="substr.html">substr</a></code>
+</p>
+
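+
+<h3>Examples</h3>
+
+<p>A minimal sketch, assuming an active Spark session; the data and the prefix are
+illustrative only:</p>
+
+<pre><code class="r">## Not run: 
+##D sparkR.session()
+##D df &lt;- createDataFrame(data.frame(name = c(&quot;Alice&quot;, &quot;Bob&quot;, &quot;Alfred&quot;)))
+##D # keep only the rows whose name starts with &quot;Al&quot;
+##D head(filter(df, startsWith(df$name, &quot;Al&quot;)))
+## End(Not run)
+</code></pre>
+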
+<hr /><div style="text-align: center;">[Package <em>SparkR</em> version 2.2.2 <a href="00Index.html">Index</a>]</div>
+</body></html>

http://git-wip-us.apache.org/repos/asf/spark-website/blob/e1001463/site/docs/2.2.2/api/R/status.html
----------------------------------------------------------------------
diff --git a/site/docs/2.2.2/api/R/status.html b/site/docs/2.2.2/api/R/status.html
new file mode 100644
index 0000000..915772f
--- /dev/null
+++ b/site/docs/2.2.2/api/R/status.html
@@ -0,0 +1,65 @@
+<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"><html xmlns="http://www.w3.org/1999/xhtml"><head><title>R: status</title>
+<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
+<link rel="stylesheet" type="text/css" href="R.css" />
+
+<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/styles/github.min.css">
+<script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/highlight.min.js"></script>
+<script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/languages/r.min.js"></script>
+<script>hljs.initHighlightingOnLoad();</script>
+</head><body>
+
+<table width="100%" summary="page for status {SparkR}"><tr><td>status {SparkR}</td><td style="text-align: right;">R Documentation</td></tr></table>
+
+<h2>status</h2>
+
+<h3>Description</h3>
+
+<p>Prints the current status of the query in JSON format.
+</p>
+
+
+<h3>Usage</h3>
+
+<pre>
+status(x)
+
+## S4 method for signature 'StreamingQuery'
+status(x)
+</pre>
+
+
+<h3>Arguments</h3>
+
+<table summary="R argblock">
+<tr valign="top"><td><code>x</code></td>
+<td>
+<p>a StreamingQuery.</p>
+</td></tr>
+</table>
+
+
+<h3>Note</h3>
+
+<p>status(StreamingQuery) since 2.2.0
+</p>
+<p>experimental
+</p>
+
+
+<h3>See Also</h3>
+
+<p>Other StreamingQuery methods: <code><a href="awaitTermination.html">awaitTermination</a></code>,
+<code><a href="explain.html">explain</a></code>, <code><a href="isActive.html">isActive</a></code>,
+<code><a href="lastProgress.html">lastProgress</a></code>, <code><a href="queryName.html">queryName</a></code>,
+<code><a href="stopQuery.html">stopQuery</a></code>
+</p>
+
+
+<h3>Examples</h3>
+
+<pre><code class="r">## Not run:  status(sq) 
+</code></pre>
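+
+<p>A slightly fuller sketch, assuming an active Spark session and a socket source on
+localhost:9999; the query itself is illustrative only:</p>
+
+<pre><code class="r">## Not run: 
+##D sparkR.session()
+##D lines &lt;- read.stream(&quot;socket&quot;, host = &quot;localhost&quot;, port = 9999)
+##D counts &lt;- count(group_by(lines, &quot;value&quot;))
+##D sq &lt;- write.stream(counts, &quot;console&quot;, outputMode = &quot;complete&quot;)
+##D # inspect the current status of the streaming query as JSON
+##D status(sq)
+## End(Not run)
+</code></pre>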
+
+
+<hr /><div style="text-align: center;">[Package <em>SparkR</em> version 2.2.2 <a href="00Index.html">Index</a>]</div>
+</body></html>

http://git-wip-us.apache.org/repos/asf/spark-website/blob/e1001463/site/docs/2.2.2/api/R/stddev_pop.html
----------------------------------------------------------------------
diff --git a/site/docs/2.2.2/api/R/stddev_pop.html b/site/docs/2.2.2/api/R/stddev_pop.html
new file mode 100644
index 0000000..f7f25a8
--- /dev/null
+++ b/site/docs/2.2.2/api/R/stddev_pop.html
@@ -0,0 +1,70 @@
+<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"><html xmlns="http://www.w3.org/1999/xhtml"><head><title>R: stddev_pop</title>
+<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
+<link rel="stylesheet" type="text/css" href="R.css" />
+
+<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/styles/github.min.css">
+<script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/highlight.min.js"></script>
+<script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/languages/r.min.js"></script>
+<script>hljs.initHighlightingOnLoad();</script>
+</head><body>
+
+<table width="100%" summary="page for stddev_pop {SparkR}"><tr><td>stddev_pop {SparkR}</td><td style="text-align: right;">R Documentation</td></tr></table>
+
+<h2>stddev_pop</h2>
+
+<h3>Description</h3>
+
+<p>Aggregate function: returns the population standard deviation of the expression in a group.
+</p>
+
+
+<h3>Usage</h3>
+
+<pre>
+stddev_pop(x)
+
+## S4 method for signature 'Column'
+stddev_pop(x)
+</pre>
+
+
+<h3>Arguments</h3>
+
+<table summary="R argblock">
+<tr valign="top"><td><code>x</code></td>
+<td>
+<p>Column to compute on.</p>
+</td></tr>
+</table>
+
+
+<h3>Note</h3>
+
+<p>stddev_pop since 1.6.0
+</p>
+
+
+<h3>See Also</h3>
+
+<p><a href="sd.html">sd</a>, <a href="stddev_samp.html">stddev_samp</a>
+</p>
+<p>Other agg_funcs: <code><a href="summarize.html">agg</a></code>, <code><a href="avg.html">avg</a></code>,
+<code><a href="countDistinct.html">countDistinct</a></code>, <code><a href="count.html">count</a></code>,
+<code><a href="first.html">first</a></code>, <code><a href="kurtosis.html">kurtosis</a></code>,
+<code><a href="last.html">last</a></code>, <code><a href="max.html">max</a></code>,
+<code><a href="mean.html">mean</a></code>, <code><a href="min.html">min</a></code>, <code><a href="sd.html">sd</a></code>,
+<code><a href="skewness.html">skewness</a></code>, <code><a href="stddev_samp.html">stddev_samp</a></code>,
+<code><a href="sumDistinct.html">sumDistinct</a></code>, <code><a href="sum.html">sum</a></code>,
+<code><a href="var_pop.html">var_pop</a></code>, <code><a href="var_samp.html">var_samp</a></code>,
+<code><a href="var.html">var</a></code>
+</p>
+
+
+<h3>Examples</h3>
+
+<pre><code class="r">## Not run: stddev_pop(df$c)
+</code></pre>
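+
+<p>A minimal sketch, assuming an active Spark session; the data set is illustrative only:</p>
+
+<pre><code class="r">## Not run: 
+##D sparkR.session()
+##D df &lt;- createDataFrame(mtcars)
+##D # population standard deviation of mpg over the whole SparkDataFrame
+##D head(agg(df, stddev_pop(df$mpg)))
+## End(Not run)
+</code></pre>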
+
+
+<hr /><div style="text-align: center;">[Package <em>SparkR</em> version 2.2.2 <a href="00Index.html">Index</a>]</div>
+</body></html>

http://git-wip-us.apache.org/repos/asf/spark-website/blob/e1001463/site/docs/2.2.2/api/R/stddev_samp.html
----------------------------------------------------------------------
diff --git a/site/docs/2.2.2/api/R/stddev_samp.html b/site/docs/2.2.2/api/R/stddev_samp.html
new file mode 100644
index 0000000..01326c5
--- /dev/null
+++ b/site/docs/2.2.2/api/R/stddev_samp.html
@@ -0,0 +1,70 @@
+<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"><html xmlns="http://www.w3.org/1999/xhtml"><head><title>R: stddev_samp</title>
+<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
+<link rel="stylesheet" type="text/css" href="R.css" />
+
+<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/styles/github.min.css">
+<script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/highlight.min.js"></script>
+<script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/languages/r.min.js"></script>
+<script>hljs.initHighlightingOnLoad();</script>
+</head><body>
+
+<table width="100%" summary="page for stddev_samp {SparkR}"><tr><td>stddev_samp {SparkR}</td><td style="text-align: right;">R Documentation</td></tr></table>
+
+<h2>stddev_samp</h2>
+
+<h3>Description</h3>
+
+<p>Aggregate function: returns the unbiased sample standard deviation of the expression in a group.
+</p>
+
+
+<h3>Usage</h3>
+
+<pre>
+stddev_samp(x)
+
+## S4 method for signature 'Column'
+stddev_samp(x)
+</pre>
+
+
+<h3>Arguments</h3>
+
+<table summary="R argblock">
+<tr valign="top"><td><code>x</code></td>
+<td>
+<p>Column to compute on.</p>
+</td></tr>
+</table>
+
+
+<h3>Note</h3>
+
+<p>stddev_samp since 1.6.0
+</p>
+
+
+<h3>See Also</h3>
+
+<p><a href="stddev_pop.html">stddev_pop</a>, <a href="sd.html">sd</a>
+</p>
+<p>Other agg_funcs: <code><a href="summarize.html">agg</a></code>, <code><a href="avg.html">avg</a></code>,
+<code><a href="countDistinct.html">countDistinct</a></code>, <code><a href="count.html">count</a></code>,
+<code><a href="first.html">first</a></code>, <code><a href="kurtosis.html">kurtosis</a></code>,
+<code><a href="last.html">last</a></code>, <code><a href="max.html">max</a></code>,
+<code><a href="mean.html">mean</a></code>, <code><a href="min.html">min</a></code>, <code><a href="sd.html">sd</a></code>,
+<code><a href="skewness.html">skewness</a></code>, <code><a href="stddev_pop.html">stddev_pop</a></code>,
+<code><a href="sumDistinct.html">sumDistinct</a></code>, <code><a href="sum.html">sum</a></code>,
+<code><a href="var_pop.html">var_pop</a></code>, <code><a href="var_samp.html">var_samp</a></code>,
+<code><a href="var.html">var</a></code>
+</p>
+
+
+<h3>Examples</h3>
+
+<pre><code class="r">## Not run: stddev_samp(df$c)
+</code></pre>
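+
+<p>A minimal sketch, assuming an active Spark session; the data set and grouping are
+illustrative only:</p>
+
+<pre><code class="r">## Not run: 
+##D sparkR.session()
+##D df &lt;- createDataFrame(mtcars)
+##D # sample standard deviation of mpg within each number-of-cylinders group
+##D head(agg(group_by(df, &quot;cyl&quot;), stddev_samp(df$mpg)))
+## End(Not run)
+</code></pre>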
+
+
+<hr /><div style="text-align: center;">[Package <em>SparkR</em> version 2.2.2 <a href="00Index.html">Index</a>]</div>
+</body></html>

http://git-wip-us.apache.org/repos/asf/spark-website/blob/e1001463/site/docs/2.2.2/api/R/stopQuery.html
----------------------------------------------------------------------
diff --git a/site/docs/2.2.2/api/R/stopQuery.html b/site/docs/2.2.2/api/R/stopQuery.html
new file mode 100644
index 0000000..c21f9b7
--- /dev/null
+++ b/site/docs/2.2.2/api/R/stopQuery.html
@@ -0,0 +1,66 @@
+<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"><html xmlns="http://www.w3.org/1999/xhtml"><head><title>R: stopQuery</title>
+<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
+<link rel="stylesheet" type="text/css" href="R.css" />
+
+<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/styles/github.min.css">
+<script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/highlight.min.js"></script>
+<script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/languages/r.min.js"></script>
+<script>hljs.initHighlightingOnLoad();</script>
+</head><body>
+
+<table width="100%" summary="page for stopQuery {SparkR}"><tr><td>stopQuery {SparkR}</td><td style="text-align: right;">R Documentation</td></tr></table>
+
+<h2>stopQuery</h2>
+
+<h3>Description</h3>
+
+<p>Stops the execution of this query if it is running. This method blocks until the execution is
+stopped.
+</p>
+
+
+<h3>Usage</h3>
+
+<pre>
+stopQuery(x)
+
+## S4 method for signature 'StreamingQuery'
+stopQuery(x)
+</pre>
+
+
+<h3>Arguments</h3>
+
+<table summary="R argblock">
+<tr valign="top"><td><code>x</code></td>
+<td>
+<p>a StreamingQuery.</p>
+</td></tr>
+</table>
+
+
+<h3>Note</h3>
+
+<p>stopQuery(StreamingQuery) since 2.2.0
+</p>
+<p>experimental
+</p>
+
+
+<h3>See Also</h3>
+
+<p>Other StreamingQuery methods: <code><a href="awaitTermination.html">awaitTermination</a></code>,
+<code><a href="explain.html">explain</a></code>, <code><a href="isActive.html">isActive</a></code>,
+<code><a href="lastProgress.html">lastProgress</a></code>, <code><a href="queryName.html">queryName</a></code>,
+<code><a href="status.html">status</a></code>
+</p>
+
+
+<h3>Examples</h3>
+
+<pre><code class="r">## Not run:  stopQuery(sq) 
+</code></pre>
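+
+<p>A slightly fuller sketch, assuming an active Spark session and a socket source on
+localhost:9999; the query itself is illustrative only:</p>
+
+<pre><code class="r">## Not run: 
+##D sparkR.session()
+##D lines &lt;- read.stream(&quot;socket&quot;, host = &quot;localhost&quot;, port = 9999)
+##D sq &lt;- write.stream(lines, &quot;console&quot;, outputMode = &quot;append&quot;)
+##D # later, stop the query; this call blocks until execution has stopped
+##D stopQuery(sq)
+## End(Not run)
+</code></pre>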
+
+
+<hr /><div style="text-align: center;">[Package <em>SparkR</em> version 2.2.2 <a href="00Index.html">Index</a>]</div>
+</body></html>

http://git-wip-us.apache.org/repos/asf/spark-website/blob/e1001463/site/docs/2.2.2/api/R/storageLevel.html
----------------------------------------------------------------------
diff --git a/site/docs/2.2.2/api/R/storageLevel.html b/site/docs/2.2.2/api/R/storageLevel.html
new file mode 100644
index 0000000..026dd99
--- /dev/null
+++ b/site/docs/2.2.2/api/R/storageLevel.html
@@ -0,0 +1,101 @@
+<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"><html xmlns="http://www.w3.org/1999/xhtml"><head><title>R: StorageLevel</title>
+<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
+<link rel="stylesheet" type="text/css" href="R.css" />
+
+<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/styles/github.min.css">
+<script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/highlight.min.js"></script>
+<script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/languages/r.min.js"></script>
+<script>hljs.initHighlightingOnLoad();</script>
+</head><body>
+
+<table width="100%" summary="page for storageLevel {SparkR}"><tr><td>storageLevel {SparkR}</td><td style="text-align: right;">R Documentation</td></tr></table>
+
+<h2>StorageLevel</h2>
+
+<h3>Description</h3>
+
+<p>Get the storage level of this SparkDataFrame.
+</p>
+
+
+<h3>Usage</h3>
+
+<pre>
+## S4 method for signature 'SparkDataFrame'
+storageLevel(x)
+</pre>
+
+
+<h3>Arguments</h3>
+
+<table summary="R argblock">
+<tr valign="top"><td><code>x</code></td>
+<td>
+<p>the SparkDataFrame to get the storageLevel.</p>
+</td></tr>
+</table>
+
+
+<h3>Note</h3>
+
+<p>storageLevel since 2.1.0
+</p>
+
+
+<h3>See Also</h3>
+
+<p>Other SparkDataFrame functions: <code><a href="SparkDataFrame.html">SparkDataFrame-class</a></code>,
+<code><a href="summarize.html">agg</a></code>, <code><a href="arrange.html">arrange</a></code>,
+<code><a href="as.data.frame.html">as.data.frame</a></code>,
+<code><a href="attach.html">attach,SparkDataFrame-method</a></code>,
+<code><a href="cache.html">cache</a></code>, <code><a href="checkpoint.html">checkpoint</a></code>,
+<code><a href="coalesce.html">coalesce</a></code>, <code><a href="collect.html">collect</a></code>,
+<code><a href="columns.html">colnames</a></code>, <code><a href="coltypes.html">coltypes</a></code>,
+<code><a href="createOrReplaceTempView.html">createOrReplaceTempView</a></code>,
+<code><a href="crossJoin.html">crossJoin</a></code>, <code><a href="dapplyCollect.html">dapplyCollect</a></code>,
+<code><a href="dapply.html">dapply</a></code>, <code><a href="summary.html">describe</a></code>,
+<code><a href="dim.html">dim</a></code>, <code><a href="distinct.html">distinct</a></code>,
+<code><a href="dropDuplicates.html">dropDuplicates</a></code>, <code><a href="nafunctions.html">dropna</a></code>,
+<code><a href="drop.html">drop</a></code>, <code><a href="dtypes.html">dtypes</a></code>,
+<code><a href="except.html">except</a></code>, <code><a href="explain.html">explain</a></code>,
+<code><a href="filter.html">filter</a></code>, <code><a href="first.html">first</a></code>,
+<code><a href="gapplyCollect.html">gapplyCollect</a></code>, <code><a href="gapply.html">gapply</a></code>,
+<code><a href="getNumPartitions.html">getNumPartitions</a></code>, <code><a href="groupBy.html">group_by</a></code>,
+<code><a href="head.html">head</a></code>, <code><a href="hint.html">hint</a></code>,
+<code><a href="histogram.html">histogram</a></code>, <code><a href="insertInto.html">insertInto</a></code>,
+<code><a href="intersect.html">intersect</a></code>, <code><a href="isLocal.html">isLocal</a></code>,
+<code><a href="isStreaming.html">isStreaming</a></code>, <code><a href="join.html">join</a></code>,
+<code><a href="limit.html">limit</a></code>, <code><a href="merge.html">merge</a></code>,
+<code><a href="mutate.html">mutate</a></code>, <code><a href="ncol.html">ncol</a></code>,
+<code><a href="nrow.html">nrow</a></code>, <code><a href="persist.html">persist</a></code>,
+<code><a href="printSchema.html">printSchema</a></code>, <code><a href="randomSplit.html">randomSplit</a></code>,
+<code><a href="rbind.html">rbind</a></code>, <code><a href="registerTempTable-deprecated.html">registerTempTable</a></code>,
+<code><a href="rename.html">rename</a></code>, <code><a href="repartition.html">repartition</a></code>,
+<code><a href="sample.html">sample</a></code>, <code><a href="saveAsTable.html">saveAsTable</a></code>,
+<code><a href="schema.html">schema</a></code>, <code><a href="selectExpr.html">selectExpr</a></code>,
+<code><a href="select.html">select</a></code>, <code><a href="showDF.html">showDF</a></code>,
+<code><a href="show.html">show</a></code>, <code><a href="str.html">str</a></code>,
+<code><a href="subset.html">subset</a></code>, <code><a href="take.html">take</a></code>,
+<code><a href="toJSON.html">toJSON</a></code>, <code><a href="union.html">union</a></code>,
+<code><a href="unpersist.html">unpersist</a></code>, <code><a href="withColumn.html">withColumn</a></code>,
+<code><a href="with.html">with</a></code>, <code><a href="write.df.html">write.df</a></code>,
+<code><a href="write.jdbc.html">write.jdbc</a></code>, <code><a href="write.json.html">write.json</a></code>,
+<code><a href="write.orc.html">write.orc</a></code>, <code><a href="write.parquet.html">write.parquet</a></code>,
+<code><a href="write.stream.html">write.stream</a></code>, <code><a href="write.text.html">write.text</a></code>
+</p>
+
+
+<h3>Examples</h3>
+
+<pre><code class="r">## Not run: 
+##D sparkR.session()
+##D path &lt;- &quot;path/to/file.json&quot;
+##D df &lt;- read.json(path)
+##D persist(df, &quot;MEMORY_AND_DISK&quot;)
+##D storageLevel(df)
+## End(Not run)
+</code></pre>
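+
+<p>A small variation on the example above, assuming an active Spark session, showing how
+the reported level changes after <code>cache</code> and <code>unpersist</code>:</p>
+
+<pre><code class="r">## Not run: 
+##D sparkR.session()
+##D df &lt;- createDataFrame(faithful)
+##D cache(df)
+##D # level after caching
+##D storageLevel(df)
+##D unpersist(df)
+##D # level after the data have been unpersisted
+##D storageLevel(df)
+## End(Not run)
+</code></pre>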
+
+
+<hr /><div style="text-align: center;">[Package <em>SparkR</em> version 2.2.2 <a href="00Index.html">Index</a>]</div>
+</body></html>

http://git-wip-us.apache.org/repos/asf/spark-website/blob/e1001463/site/docs/2.2.2/api/R/str.html
----------------------------------------------------------------------
diff --git a/site/docs/2.2.2/api/R/str.html b/site/docs/2.2.2/api/R/str.html
new file mode 100644
index 0000000..a5e2afe
--- /dev/null
+++ b/site/docs/2.2.2/api/R/str.html
@@ -0,0 +1,102 @@
+<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"><html xmlns="http://www.w3.org/1999/xhtml"><head><title>R: Compactly display the structure of a dataset</title>
+<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
+<link rel="stylesheet" type="text/css" href="R.css" />
+
+<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/styles/github.min.css">
+<script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/highlight.min.js"></script>
+<script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/languages/r.min.js"></script>
+<script>hljs.initHighlightingOnLoad();</script>
+</head><body>
+
+<table width="100%" summary="page for str {SparkR}"><tr><td>str {SparkR}</td><td style="text-align: right;">R Documentation</td></tr></table>
+
+<h2>Compactly display the structure of a dataset</h2>
+
+<h3>Description</h3>
+
+<p>Display the structure of a SparkDataFrame, including column names, column types, as well as
+a small sample of rows.
+</p>
+
+
+<h3>Usage</h3>
+
+<pre>
+## S4 method for signature 'SparkDataFrame'
+str(object)
+</pre>
+
+
+<h3>Arguments</h3>
+
+<table summary="R argblock">
+<tr valign="top"><td><code>object</code></td>
+<td>
+<p>a SparkDataFrame</p>
+</td></tr>
+</table>
+
+
+<h3>Note</h3>
+
+<p>str since 1.6.1
+</p>
+
+
+<h3>See Also</h3>
+
+<p>Other SparkDataFrame functions: <code><a href="SparkDataFrame.html">SparkDataFrame-class</a></code>,
+<code><a href="summarize.html">agg</a></code>, <code><a href="arrange.html">arrange</a></code>,
+<code><a href="as.data.frame.html">as.data.frame</a></code>,
+<code><a href="attach.html">attach,SparkDataFrame-method</a></code>,
+<code><a href="cache.html">cache</a></code>, <code><a href="checkpoint.html">checkpoint</a></code>,
+<code><a href="coalesce.html">coalesce</a></code>, <code><a href="collect.html">collect</a></code>,
+<code><a href="columns.html">colnames</a></code>, <code><a href="coltypes.html">coltypes</a></code>,
+<code><a href="createOrReplaceTempView.html">createOrReplaceTempView</a></code>,
+<code><a href="crossJoin.html">crossJoin</a></code>, <code><a href="dapplyCollect.html">dapplyCollect</a></code>,
+<code><a href="dapply.html">dapply</a></code>, <code><a href="summary.html">describe</a></code>,
+<code><a href="dim.html">dim</a></code>, <code><a href="distinct.html">distinct</a></code>,
+<code><a href="dropDuplicates.html">dropDuplicates</a></code>, <code><a href="nafunctions.html">dropna</a></code>,
+<code><a href="drop.html">drop</a></code>, <code><a href="dtypes.html">dtypes</a></code>,
+<code><a href="except.html">except</a></code>, <code><a href="explain.html">explain</a></code>,
+<code><a href="filter.html">filter</a></code>, <code><a href="first.html">first</a></code>,
+<code><a href="gapplyCollect.html">gapplyCollect</a></code>, <code><a href="gapply.html">gapply</a></code>,
+<code><a href="getNumPartitions.html">getNumPartitions</a></code>, <code><a href="groupBy.html">group_by</a></code>,
+<code><a href="head.html">head</a></code>, <code><a href="hint.html">hint</a></code>,
+<code><a href="histogram.html">histogram</a></code>, <code><a href="insertInto.html">insertInto</a></code>,
+<code><a href="intersect.html">intersect</a></code>, <code><a href="isLocal.html">isLocal</a></code>,
+<code><a href="isStreaming.html">isStreaming</a></code>, <code><a href="join.html">join</a></code>,
+<code><a href="limit.html">limit</a></code>, <code><a href="merge.html">merge</a></code>,
+<code><a href="mutate.html">mutate</a></code>, <code><a href="ncol.html">ncol</a></code>,
+<code><a href="nrow.html">nrow</a></code>, <code><a href="persist.html">persist</a></code>,
+<code><a href="printSchema.html">printSchema</a></code>, <code><a href="randomSplit.html">randomSplit</a></code>,
+<code><a href="rbind.html">rbind</a></code>, <code><a href="registerTempTable-deprecated.html">registerTempTable</a></code>,
+<code><a href="rename.html">rename</a></code>, <code><a href="repartition.html">repartition</a></code>,
+<code><a href="sample.html">sample</a></code>, <code><a href="saveAsTable.html">saveAsTable</a></code>,
+<code><a href="schema.html">schema</a></code>, <code><a href="selectExpr.html">selectExpr</a></code>,
+<code><a href="select.html">select</a></code>, <code><a href="showDF.html">showDF</a></code>,
+<code><a href="show.html">show</a></code>, <code><a href="storageLevel.html">storageLevel</a></code>,
+<code><a href="subset.html">subset</a></code>, <code><a href="take.html">take</a></code>,
+<code><a href="toJSON.html">toJSON</a></code>, <code><a href="union.html">union</a></code>,
+<code><a href="unpersist.html">unpersist</a></code>, <code><a href="withColumn.html">withColumn</a></code>,
+<code><a href="with.html">with</a></code>, <code><a href="write.df.html">write.df</a></code>,
+<code><a href="write.jdbc.html">write.jdbc</a></code>, <code><a href="write.json.html">write.json</a></code>,
+<code><a href="write.orc.html">write.orc</a></code>, <code><a href="write.parquet.html">write.parquet</a></code>,
+<code><a href="write.stream.html">write.stream</a></code>, <code><a href="write.text.html">write.text</a></code>
+</p>
+
+
+<h3>Examples</h3>
+
+<pre><code class="r">## Not run: 
+##D # Create a SparkDataFrame from the Iris dataset
+##D irisDF &lt;- createDataFrame(iris)
+##D 
+##D # Show the structure of the SparkDataFrame
+##D str(irisDF)
+## End(Not run)
+</code></pre>
+
+
+<hr /><div style="text-align: center;">[Package <em>SparkR</em> version 2.2.2 <a href="00Index.html">Index</a>]</div>
+</body></html>

http://git-wip-us.apache.org/repos/asf/spark-website/blob/e1001463/site/docs/2.2.2/api/R/struct.html
----------------------------------------------------------------------
diff --git a/site/docs/2.2.2/api/R/struct.html b/site/docs/2.2.2/api/R/struct.html
new file mode 100644
index 0000000..1823418
--- /dev/null
+++ b/site/docs/2.2.2/api/R/struct.html
@@ -0,0 +1,75 @@
+<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"><html xmlns="http://www.w3.org/1999/xhtml"><head><title>R: struct</title>
+<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
+<link rel="stylesheet" type="text/css" href="R.css" />
+
+<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/styles/github.min.css">
+<script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/highlight.min.js"></script>
+<script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/languages/r.min.js"></script>
+<script>hljs.initHighlightingOnLoad();</script>
+</head><body>
+
+<table width="100%" summary="page for struct {SparkR}"><tr><td>struct {SparkR}</td><td style="text-align: right;">R Documentation</td></tr></table>
+
+<h2>struct</h2>
+
+<h3>Description</h3>
+
+<p>Creates a new struct column that composes multiple input columns.
+</p>
+
+
+<h3>Usage</h3>
+
+<pre>
+struct(x, ...)
+
+## S4 method for signature 'characterOrColumn'
+struct(x, ...)
+</pre>
+
+
+<h3>Arguments</h3>
+
+<table summary="R argblock">
+<tr valign="top"><td><code>x</code></td>
+<td>
+<p>a column to compute on.</p>
+</td></tr>
+<tr valign="top"><td><code>...</code></td>
+<td>
+<p>optional column(s) to be included.</p>
+</td></tr>
+</table>
+
+
+<h3>Note</h3>
+
+<p>struct since 1.6.0
+</p>
+
+
+<h3>See Also</h3>
+
+<p>Other normal_funcs: <code><a href="abs.html">abs</a></code>,
+<code><a href="bitwiseNOT.html">bitwiseNOT</a></code>, <code><a href="coalesce.html">coalesce</a></code>,
+<code><a href="column.html">column</a></code>, <code><a href="expr.html">expr</a></code>,
+<code><a href="from_json.html">from_json</a></code>, <code><a href="greatest.html">greatest</a></code>,
+<code><a href="ifelse.html">ifelse</a></code>, <code><a href="is.nan.html">isnan</a></code>,
+<code><a href="least.html">least</a></code>, <code><a href="lit.html">lit</a></code>,
+<code><a href="nanvl.html">nanvl</a></code>, <code><a href="negate.html">negate</a></code>,
+<code><a href="randn.html">randn</a></code>, <code><a href="rand.html">rand</a></code>,
+<code><a href="to_json.html">to_json</a></code>, <code><a href="when.html">when</a></code>
+</p>
+
+
+<h3>Examples</h3>
+
+<pre><code class="r">## Not run: 
+##D struct(df$c, df$d)
+##D struct(&quot;col1&quot;, &quot;col2&quot;)
+## End(Not run)
+</code></pre>
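+
+<p>A fuller sketch, assuming an active Spark session; the data and the column name
+&quot;ab&quot; are illustrative only:</p>
+
+<pre><code class="r">## Not run: 
+##D sparkR.session()
+##D df &lt;- createDataFrame(data.frame(a = 1:3, b = c(&quot;x&quot;, &quot;y&quot;, &quot;z&quot;)))
+##D # combine columns a and b into a single struct column named &quot;ab&quot;
+##D df2 &lt;- withColumn(df, &quot;ab&quot;, struct(df$a, df$b))
+##D printSchema(df2)
+## End(Not run)
+</code></pre>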
+
+
+<hr /><div style="text-align: center;">[Package <em>SparkR</em> version 2.2.2 <a href="00Index.html">Index</a>]</div>
+</body></html>


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@spark.apache.org
For additional commands, e-mail: commits-help@spark.apache.org