Posted to commits@madlib.apache.org by ri...@apache.org on 2017/05/16 20:29:38 UTC

[23/51] [partial] incubator-madlib-site git commit: Add v1.11 docs

http://git-wip-us.apache.org/repos/asf/incubator-madlib-site/blob/b5b51c69/docs/v1.11/group__grp__lmf.html
----------------------------------------------------------------------
diff --git a/docs/v1.11/group__grp__lmf.html b/docs/v1.11/group__grp__lmf.html
new file mode 100644
index 0000000..7f7d90d
--- /dev/null
+++ b/docs/v1.11/group__grp__lmf.html
@@ -0,0 +1,271 @@
+<!-- HTML header for doxygen 1.8.4-->
+<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
+<html xmlns="http://www.w3.org/1999/xhtml">
+<head>
+<meta http-equiv="Content-Type" content="text/xhtml;charset=UTF-8"/>
+<meta http-equiv="X-UA-Compatible" content="IE=9"/>
+<meta name="generator" content="Doxygen 1.8.13"/>
+<meta name="keywords" content="madlib,postgres,greenplum,machine learning,data mining,deep learning,ensemble methods,data science,market basket analysis,affinity analysis,pca,lda,regression,elastic net,huber white,proportional hazards,k-means,latent dirichlet allocation,bayes,support vector machines,svm"/>
+<title>MADlib: Low-rank Matrix Factorization</title>
+<link href="tabs.css" rel="stylesheet" type="text/css"/>
+<script type="text/javascript" src="jquery.js"></script>
+<script type="text/javascript" src="dynsections.js"></script>
+<link href="navtree.css" rel="stylesheet" type="text/css"/>
+<script type="text/javascript" src="resize.js"></script>
+<script type="text/javascript" src="navtreedata.js"></script>
+<script type="text/javascript" src="navtree.js"></script>
+<script type="text/javascript">
+  $(document).ready(initResizable);
+</script>
+<link href="search/search.css" rel="stylesheet" type="text/css"/>
+<script type="text/javascript" src="search/searchdata.js"></script>
+<script type="text/javascript" src="search/search.js"></script>
+<script type="text/javascript">
+  $(document).ready(function() { init_search(); });
+</script>
+<!-- hack in the navigation tree -->
+<script type="text/javascript" src="eigen_navtree_hacks.js"></script>
+<link href="doxygen.css" rel="stylesheet" type="text/css" />
+<link href="madlib_extra.css" rel="stylesheet" type="text/css"/>
+<!-- google analytics -->
+<script>
+  (function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
+  (i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
+  m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
+  })(window,document,'script','//www.google-analytics.com/analytics.js','ga');
+  ga('create', 'UA-45382226-1', 'madlib.incubator.apache.org');
+  ga('send', 'pageview');
+</script>
+</head>
+<body>
+<div id="top"><!-- do not remove this div, it is closed by doxygen! -->
+<div id="titlearea">
+<table cellspacing="0" cellpadding="0">
+ <tbody>
+ <tr style="height: 56px;">
+  <td id="projectlogo"><a href="http://madlib.incubator.apache.org"><img alt="Logo" src="madlib.png" height="50" style="padding-left:0.5em;" border="0"/ ></a></td>
+  <td style="padding-left: 0.5em;">
+   <div id="projectname">
+   <span id="projectnumber">1.11</span>
+   </div>
+   <div id="projectbrief">User Documentation for MADlib</div>
+  </td>
+   <td>        <div id="MSearchBox" class="MSearchBoxInactive">
+        <span class="left">
+          <img id="MSearchSelect" src="search/mag_sel.png"
+               onmouseover="return searchBox.OnSearchSelectShow()"
+               onmouseout="return searchBox.OnSearchSelectHide()"
+               alt=""/>
+          <input type="text" id="MSearchField" value="Search" accesskey="S"
+               onfocus="searchBox.OnSearchFieldFocus(true)" 
+               onblur="searchBox.OnSearchFieldFocus(false)" 
+               onkeyup="searchBox.OnSearchFieldChange(event)"/>
+          </span><span class="right">
+            <a id="MSearchClose" href="javascript:searchBox.CloseResultsWindow()"><img id="MSearchCloseImg" border="0" src="search/close.png" alt=""/></a>
+          </span>
+        </div>
+</td>
+ </tr>
+ </tbody>
+</table>
+</div>
+<!-- end header part -->
+<!-- Generated by Doxygen 1.8.13 -->
+<script type="text/javascript">
+var searchBox = new SearchBox("searchBox", "search",false,'Search');
+</script>
+</div><!-- top -->
+<div id="side-nav" class="ui-resizable side-nav-resizable">
+  <div id="nav-tree">
+    <div id="nav-tree-contents">
+      <div id="nav-sync" class="sync"></div>
+    </div>
+  </div>
+  <div id="splitbar" style="-moz-user-select:none;" 
+       class="ui-resizable-handle">
+  </div>
+</div>
+<script type="text/javascript">
+$(document).ready(function(){initNavTree('group__grp__lmf.html','');});
+</script>
+<div id="doc-content">
+<!-- window showing the filter options -->
+<div id="MSearchSelectWindow"
+     onmouseover="return searchBox.OnSearchSelectShow()"
+     onmouseout="return searchBox.OnSearchSelectHide()"
+     onkeydown="return searchBox.OnSearchSelectKey(event)">
+</div>
+
+<!-- iframe showing the search results (closed by default) -->
+<div id="MSearchResultsWindow">
+<iframe src="javascript:void(0)" frameborder="0" 
+        name="MSearchResults" id="MSearchResults">
+</iframe>
+</div>
+
+<div class="header">
+  <div class="headertitle">
+<div class="title">Low-rank Matrix Factorization<div class="ingroups"><a class="el" href="group__grp__datatrans.html">Data Types and Transformations</a> &raquo; <a class="el" href="group__grp__arraysmatrix.html">Arrays and Matrices</a> &raquo; <a class="el" href="group__grp__matrix__factorization.html">Matrix Factorization</a></div></div>  </div>
+</div><!--header-->
+<div class="contents">
+<div class="toc"><b>Contents</b> <ul>
+<li>
+<a href="#syntax">Function Syntax</a> </li>
+<li>
+<a href="#examples">Examples</a> </li>
+<li>
+<a href="#literature">Literature</a> </li>
+</ul>
+</div><p>This module implements a "factor model" for representing an incomplete matrix using a low-rank approximation [1]. Mathematically, for any given incomplete matrix A, this model seeks matrices U and V (also referred to as factors) that minimize:</p>
+<p class="formulaDsp">
+<img class="formulaDsp" alt="\[ \|\boldsymbol A - \boldsymbol UV^{T} \|_2 \]" src="form_47.png"/>
+</p>
+<p>subject to <img class="formulaInl" alt="$rank(\boldsymbol UV^{T}) \leq r$" src="form_48.png"/>, where <img class="formulaInl" alt="$\|\cdot\|_2$" src="form_49.png"/> denotes the Frobenius norm. Let <img class="formulaInl" alt="$A$" src="form_42.png"/> be an <img class="formulaInl" alt="$m \times n$" src="form_50.png"/> matrix; then <img class="formulaInl" alt="$U$" src="form_51.png"/> has dimension <img class="formulaInl" alt="$m \times r$" src="form_52.png"/>, <img class="formulaInl" alt="$V$" src="form_53.png"/> has dimension <img class="formulaInl" alt="$n \times r$" src="form_54.png"/>, and <img class="formulaInl" alt="$1 \leq r \ll \min(m, n)$" src="form_55.png"/>. For example, the 999 x 10000 matrix used in the examples below, factorized at rank 3, yields a 999 x 3 factor U and a 10000 x 3 factor V. This model is not intended to perform the full decomposition, or to be used as part of an inverse procedure. It has been widely used in recommendation systems (e.g., Netflix [2]) and feature selection (e.g., image processing [3]).</p>
+<p><a class="anchor" id="syntax"></a></p><dl class="section user"><dt>Function Syntax</dt><dd></dd></dl>
+<p>Low-rank matrix factorization of an incomplete matrix into two factors.</p>
+<pre class="syntax">
+lmf_igd_run( rel_output,
+             rel_source,
+             col_row,
+             col_column,
+             col_value,
+             row_dim,
+             column_dim,
+             max_rank,
+             stepsize,
+             scale_factor,
+             num_iterations,
+             tolerance
+           )
+</pre><p> <b>Arguments</b> </p><dl class="arglist">
+<dt>rel_output </dt>
+<dd><p class="startdd">TEXT. The name of the table to receive the output.</p>
+<p>Output factors matrix U and V are in a flattened format. </p><pre>RESULT AS (
+        matrix_u    DOUBLE PRECISION[],
+        matrix_v    DOUBLE PRECISION[],
+        rmse        DOUBLE PRECISION
+);</pre><p class="enddd">Features correspond to row i is <code>matrix_u[i:i][1:r]</code>. Features correspond to column j is <code>matrix_v[j:j][1:r]</code>.  </p>
+</dd>
+<dt>rel_source </dt>
+<dd><p class="startdd">TEXT. The name of the table containing the input data.</p>
+<p>The input matrix is expected to be of the following form: </p><pre>{TABLE|VIEW} <em>input_table</em> (
+    <em>row</em>    INTEGER,
+    <em>col</em>    INTEGER,
+    <em>value</em>  DOUBLE PRECISION
+)</pre><p class="enddd">Input is contained in a table that describes an incomplete matrix, with available entries specified as (row, column, value). The input matrix should be 1-based, which means row &gt;= 1, and col &gt;= 1. NULL values are not expected.  </p>
+</dd>
+<dt>col_row </dt>
+<dd>TEXT. The name of the column containing the row number. </dd>
+<dt>col_column </dt>
+<dd>TEXT. The name of the column containing the column number. </dd>
+<dt>col_value </dt>
+<dd>DOUBLE PRECISION. The value at (row, col). </dd>
+<dt>row_dim (optional) </dt>
+<dd>INTEGER, default: "SELECT max(col_row) FROM rel_source". The number of rows in the matrix. </dd>
+<dt>column_dim (optional) </dt>
+<dd>INTEGER, default: "SELECT max(col_column) FROM rel_source". The number of columns in the matrix. </dd>
+<dt>max_rank </dt>
+<dd>INTEGER, default: 20. The rank of the desired approximation. </dd>
+<dt>stepsize (optional) </dt>
+<dd>DOUBLE PRECISION, default: 0.01. Hyper-parameter that decides how aggressive the gradient steps are.  </dd>
+<dt>scale_factor (optional) </dt>
+<dd>DOUBLE PRECISION, default: 0.1. Hyper-parameter that decides the scale of the initial factors. </dd>
+<dt>num_iterations (optional) </dt>
+<dd>INTEGER, default: 10. Maximum number of iterations to perform, regardless of convergence. </dd>
+<dt>tolerance (optional) </dt>
+<dd>DOUBLE PRECISION, default: 0.0001. Acceptable level of error in convergence. </dd>
+</dl>
+<p><a class="anchor" id="examples"></a></p><dl class="section user"><dt>Examples</dt><dd></dd></dl>
+<ol type="1">
+<li>Prepare an input table/view: <pre class="example">
+DROP TABLE IF EXISTS lmf_data;
+CREATE TABLE lmf_data (
+ row INT,
+ col INT,
+ val FLOAT8
+);
+</pre></li>
+<li>Populate the input table with some data. <pre class="example">
+INSERT INTO lmf_data VALUES (1, 1, 5.0);
+INSERT INTO lmf_data VALUES (3, 100, 1.0);
+INSERT INTO lmf_data VALUES (999, 10000, 2.0);
+</pre></li>
+<li>Call the <a class="el" href="lmf_8sql__in.html#ac1acb1f0e1f7008118f21c83546a4602" title="Low-rank matrix factorization of a incomplete matrix into two factors. ">lmf_igd_run()</a> stored procedure. <pre class="example">
+DROP TABLE IF EXISTS lmf_model;
+SELECT madlib.lmf_igd_run( 'lmf_model',
+                           'lmf_data',
+                           'row',
+                           'col',
+                           'val',
+                           999,
+                           10000,
+                           3,
+                           0.1,
+                           2,
+                           10,
+                           1e-9
+                         );
+</pre> Example result (the exact result may not be the same). <pre class="result">
+NOTICE:
+Finished low-rank matrix factorization using incremental gradient
+DETAIL:
+   table : lmf_data (row, col, val)
+Results:
+   RMSE = 0.0145966345300041
+Output:
+   view : SELECT * FROM lmf_model WHERE id = 1
+ lmf_igd_run
+&#160;-----------
+           1
+ (1 row)
+</pre></li>
+<li>Sanity-check the result. Use the model id returned by <a class="el" href="lmf_8sql__in.html#ac1acb1f0e1f7008118f21c83546a4602" title="Low-rank matrix factorization of a incomplete matrix into two factors. ">lmf_igd_run()</a> (also reported in the notice above), assumed to be 1 here: <pre class="example">
+SELECT array_dims(matrix_u) AS u_dims, array_dims(matrix_v) AS v_dims
+FROM lmf_model
+WHERE id = 1;
+</pre> Result: <pre class="result">
+     u_dims    |     v_dims
+ --------------+----------------
+  [1:999][1:3] | [1:10000][1:3]
+ (1 row)
+</pre></li>
+<li>Query the result value. <pre class="example">
+SELECT matrix_u[2:2][1:3] AS row_2_features
+FROM lmf_model
+WHERE id = 1;
+</pre> Example output (the exact result may not be the same): <pre class="result">
+                       row_2_features
+&#160;---------------------------------------------------------
+  {{1.12030523084104,0.522217971272767,0.0264869043603539}}
+ (1 row)
+</pre></li>
+<li>Predict a missing entry (row=2, col=7654). <pre class="example">
+SELECT madlib.array_dot(
+    matrix_u[2:2][1:3],
+    matrix_v[7654:7654][1:3]
+    ) AS row_2_col_7654
+FROM lmf_model
+WHERE id = 1;
+</pre> Example output (the exact result may not be the same due to the randomness of the algorithm): <pre class="result">
+   row_2_col_7654
+&#160;------------------
+  1.3201582940851
+ (1 row)
+</pre></li>
+</ol>
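+<p>As an additional check (not part of the module itself), the factors can be used to reconstruct every observed entry and recompute the training RMSE reported in the notice above. The following is a minimal sketch, assuming the <code>lmf_data</code> and <code>lmf_model</code> tables from the examples, model id 1, and rank 3; the exact value will differ between runs because the factors are initialized randomly. </p><pre class="example">
+-- Reconstruct each observed entry from the factors and compare to the observed value
+SELECT sqrt( avg( power( d.val - madlib.array_dot(
+                     m.matrix_u[d.row:d.row][1:3],
+                     m.matrix_v[d.col:d.col][1:3] ), 2 ) ) ) AS training_rmse
+FROM lmf_data d, lmf_model m
+WHERE m.id = 1;
+</pre>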
+<p><a class="anchor" id="literature"></a></p><dl class="section user"><dt>Literature</dt><dd></dd></dl>
+<p>[1] N. Srebro and T. Jaakkola. “Weighted Low-Rank Approximations.” In: ICML. Ed. by T. Fawcett and N. Mishra. AAAI Press, 2003, pp. 720–727. isbn: 1-57735-189-4.</p>
+<p>[2] Simon Funk, Netflix Update: Try This at Home, December 11 2006, <a href="http://sifter.org/~simon/journal/20061211.html">http://sifter.org/~simon/journal/20061211.html</a></p>
+<p>[3] J. Wright, A. Ganesh, S. Rao, Y. Peng, and Y. Ma. “Robust Principal Component Analysis: Exact Recovery of Corrupted Low-Rank Matrices via Convex Optimization.” In: NIPS. Ed. by Y. Bengio, D. Schuurmans, J. D. Lafferty, C. K. I. Williams, and A. Culotta. Curran Associates, Inc., 2009, pp. 2080–2088. isbn: 9781615679119. </p>
+</div><!-- contents -->
+</div><!-- doc-content -->
+<!-- start footer part -->
+<div id="nav-path" class="navpath"><!-- id is needed for treeview function! -->
+  <ul>
+    <li class="footer">Generated on Tue May 16 2017 13:24:38 for MADlib by
+    <a href="http://www.doxygen.org/index.html">
+    <img class="footer" src="doxygen.png" alt="doxygen"/></a> 1.8.13 </li>
+  </ul>
+</div>
+</body>
+</html>

http://git-wip-us.apache.org/repos/asf/incubator-madlib-site/blob/b5b51c69/docs/v1.11/group__grp__logreg.html
----------------------------------------------------------------------
diff --git a/docs/v1.11/group__grp__logreg.html b/docs/v1.11/group__grp__logreg.html
new file mode 100644
index 0000000..d1549fb
--- /dev/null
+++ b/docs/v1.11/group__grp__logreg.html
@@ -0,0 +1,418 @@
+<!-- HTML header for doxygen 1.8.4-->
+<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
+<html xmlns="http://www.w3.org/1999/xhtml">
+<head>
+<meta http-equiv="Content-Type" content="text/xhtml;charset=UTF-8"/>
+<meta http-equiv="X-UA-Compatible" content="IE=9"/>
+<meta name="generator" content="Doxygen 1.8.13"/>
+<meta name="keywords" content="madlib,postgres,greenplum,machine learning,data mining,deep learning,ensemble methods,data science,market basket analysis,affinity analysis,pca,lda,regression,elastic net,huber white,proportional hazards,k-means,latent dirichlet allocation,bayes,support vector machines,svm"/>
+<title>MADlib: Logistic Regression</title>
+<link href="tabs.css" rel="stylesheet" type="text/css"/>
+<script type="text/javascript" src="jquery.js"></script>
+<script type="text/javascript" src="dynsections.js"></script>
+<link href="navtree.css" rel="stylesheet" type="text/css"/>
+<script type="text/javascript" src="resize.js"></script>
+<script type="text/javascript" src="navtreedata.js"></script>
+<script type="text/javascript" src="navtree.js"></script>
+<script type="text/javascript">
+  $(document).ready(initResizable);
+</script>
+<link href="search/search.css" rel="stylesheet" type="text/css"/>
+<script type="text/javascript" src="search/searchdata.js"></script>
+<script type="text/javascript" src="search/search.js"></script>
+<script type="text/javascript">
+  $(document).ready(function() { init_search(); });
+</script>
+<!-- hack in the navigation tree -->
+<script type="text/javascript" src="eigen_navtree_hacks.js"></script>
+<link href="doxygen.css" rel="stylesheet" type="text/css" />
+<link href="madlib_extra.css" rel="stylesheet" type="text/css"/>
+<!-- google analytics -->
+<script>
+  (function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
+  (i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
+  m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
+  })(window,document,'script','//www.google-analytics.com/analytics.js','ga');
+  ga('create', 'UA-45382226-1', 'madlib.incubator.apache.org');
+  ga('send', 'pageview');
+</script>
+</head>
+<body>
+<div id="top"><!-- do not remove this div, it is closed by doxygen! -->
+<div id="titlearea">
+<table cellspacing="0" cellpadding="0">
+ <tbody>
+ <tr style="height: 56px;">
+  <td id="projectlogo"><a href="http://madlib.incubator.apache.org"><img alt="Logo" src="madlib.png" height="50" style="padding-left:0.5em;" border="0"/ ></a></td>
+  <td style="padding-left: 0.5em;">
+   <div id="projectname">
+   <span id="projectnumber">1.11</span>
+   </div>
+   <div id="projectbrief">User Documentation for MADlib</div>
+  </td>
+   <td>        <div id="MSearchBox" class="MSearchBoxInactive">
+        <span class="left">
+          <img id="MSearchSelect" src="search/mag_sel.png"
+               onmouseover="return searchBox.OnSearchSelectShow()"
+               onmouseout="return searchBox.OnSearchSelectHide()"
+               alt=""/>
+          <input type="text" id="MSearchField" value="Search" accesskey="S"
+               onfocus="searchBox.OnSearchFieldFocus(true)" 
+               onblur="searchBox.OnSearchFieldFocus(false)" 
+               onkeyup="searchBox.OnSearchFieldChange(event)"/>
+          </span><span class="right">
+            <a id="MSearchClose" href="javascript:searchBox.CloseResultsWindow()"><img id="MSearchCloseImg" border="0" src="search/close.png" alt=""/></a>
+          </span>
+        </div>
+</td>
+ </tr>
+ </tbody>
+</table>
+</div>
+<!-- end header part -->
+<!-- Generated by Doxygen 1.8.13 -->
+<script type="text/javascript">
+var searchBox = new SearchBox("searchBox", "search",false,'Search');
+</script>
+</div><!-- top -->
+<div id="side-nav" class="ui-resizable side-nav-resizable">
+  <div id="nav-tree">
+    <div id="nav-tree-contents">
+      <div id="nav-sync" class="sync"></div>
+    </div>
+  </div>
+  <div id="splitbar" style="-moz-user-select:none;" 
+       class="ui-resizable-handle">
+  </div>
+</div>
+<script type="text/javascript">
+$(document).ready(function(){initNavTree('group__grp__logreg.html','');});
+</script>
+<div id="doc-content">
+<!-- window showing the filter options -->
+<div id="MSearchSelectWindow"
+     onmouseover="return searchBox.OnSearchSelectShow()"
+     onmouseout="return searchBox.OnSearchSelectHide()"
+     onkeydown="return searchBox.OnSearchSelectKey(event)">
+</div>
+
+<!-- iframe showing the search results (closed by default) -->
+<div id="MSearchResultsWindow">
+<iframe src="javascript:void(0)" frameborder="0" 
+        name="MSearchResults" id="MSearchResults">
+</iframe>
+</div>
+
+<div class="header">
+  <div class="headertitle">
+<div class="title">Logistic Regression<div class="ingroups"><a class="el" href="group__grp__super.html">Supervised Learning</a> &raquo; <a class="el" href="group__grp__regml.html">Regression Models</a></div></div>  </div>
+</div><!--header-->
+<div class="contents">
+<div class="toc"><b>Contents</b><ul>
+<li class="level1">
+<a href="#train">Training Function</a> </li>
+<li class="level1">
+<a href="#predict">Prediction Function</a> </li>
+<li class="level1">
+<a href="#examples">Examples</a> </li>
+<li class="level1">
+<a href="#background">Technical Background</a> </li>
+<li class="level1">
+<a href="#literature">Literature</a> </li>
+<li class="level1">
+<a href="#related">Related Topics</a> </li>
+</ul>
+</div><p>Binomial logistic regression models the relationship between a dichotomous dependent variable and one or more predictor variables. The dependent variable may be a Boolean value or a categorical variable that can be represented with a Boolean expression. The probabilities describing the possible outcomes of a single trial are modeled, as a function of the predictor variables, using a logistic function.</p>
+<p><a class="anchor" id="train"></a></p><dl class="section user"><dt>Training Function</dt><dd>The logistic regression training function has the following format: <pre class="syntax">
+logregr_train( source_table,
+               out_table,
+               dependent_varname,
+               independent_varname,
+               grouping_cols,
+               max_iter,
+               optimizer,
+               tolerance,
+               verbose
+             )
+</pre> <b>Arguments</b> <dl class="arglist">
+<dt>source_table </dt>
+<dd><p class="startdd">TEXT. The name of the table containing the training data.</p>
+<p class="enddd"></p>
+</dd>
+<dt>out_table </dt>
+<dd><p class="startdd">TEXT. Name of the generated table containing the output model.</p>
+<p>The output table produced by the logistic regression training function contains the following columns:</p>
+<table class="output">
+<tr>
+<th>&lt;...&gt; </th><td><p class="starttd">TEXT. Grouping columns, if provided in the input. This could be multiple columns depending on the <code>grouping_cols</code> input. </p>
+<p class="endtd"></p>
+</td></tr>
+<tr>
+<th>coef </th><td><p class="starttd">FLOAT8[]. Vector of the coefficients of the regression. </p>
+<p class="endtd"></p>
+</td></tr>
+<tr>
+<th>log_likelihood </th><td><p class="starttd">FLOAT8. The log-likelihood <img class="formulaInl" alt="$ l(\boldsymbol c) $" src="form_80.png"/>. </p>
+<p class="endtd"></p>
+</td></tr>
+<tr>
+<th>std_err </th><td><p class="starttd">FLOAT8[]. Vector of the standard error of the coefficients. </p>
+<p class="endtd"></p>
+</td></tr>
+<tr>
+<th>z_stats </th><td><p class="starttd">FLOAT8[]. Vector of the z-statistics of the coefficients. </p>
+<p class="endtd"></p>
+</td></tr>
+<tr>
+<th>p_values </th><td><p class="starttd">FLOAT8[]. Vector of the p-values of the coefficients. </p>
+<p class="endtd"></p>
+</td></tr>
+<tr>
+<th>odds_ratios </th><td><p class="starttd">FLOAT8[]. The odds ratio, <img class="formulaInl" alt="$ \exp(c_i) $" src="form_116.png"/>. </p>
+<p class="endtd"></p>
+</td></tr>
+<tr>
+<th>condition_no </th><td><p class="starttd">FLOAT8. The condition number of the <img class="formulaInl" alt="$X^{*}X$" src="form_325.png"/> matrix. A high condition number usually indicates that there may be some numerical instability in the result, yielding a less reliable model. A high condition number often results when there is a significant amount of collinearity in the underlying design matrix, in which case other regression techniques may be more appropriate. </p>
+<p class="endtd"></p>
+</td></tr>
+<tr>
+<th>num_iterations </th><td>INTEGER. The number of iterations actually completed. This would be different from the <code>max_iter</code> argument if a <code>tolerance</code> parameter is provided and the algorithm converges before all iterations are completed.  </td></tr>
+<tr>
+<th>num_rows_processed </th><td>INTEGER. The number of rows actually processed, which is equal to the total number of rows in the source table minus the number of skipped rows.  </td></tr>
+<tr>
+<th>num_missing_rows_skipped </th><td>INTEGER. The number of rows skipped during the training. A row will be skipped if the independent_varname is NULL or contains NULL values.  </td></tr>
+</table>
+<p>A summary table named &lt;out_table&gt;_summary is also created at the same time, which has the following columns: </p><table class="output">
+<tr>
+<th>source_table </th><td><p class="starttd">The data source table name. </p>
+<p class="endtd"></p>
+</td></tr>
+<tr>
+<th>out_table </th><td><p class="starttd">The output table name. </p>
+<p class="endtd"></p>
+</td></tr>
+<tr>
+<th>dependent_varname </th><td><p class="starttd">The dependent variable. </p>
+<p class="endtd"></p>
+</td></tr>
+<tr>
+<th>independent_varname </th><td><p class="starttd">The independent variables. </p>
+<p class="endtd"></p>
+</td></tr>
+<tr>
+<th>optimizer_params </th><td><p class="starttd">A string that contains all the optimizer parameters, and has the form of 'optimizer=..., max_iter=..., tolerance=...' </p>
+<p class="endtd"></p>
+</td></tr>
+<tr>
+<th>num_all_groups </th><td><p class="starttd">How many groups of data were fit by the logistic model. </p>
+<p class="endtd"></p>
+</td></tr>
+<tr>
+<th>num_failed_groups </th><td><p class="starttd">How many groups' fitting processes failed. </p>
+<p class="endtd"></p>
+</td></tr>
+<tr>
+<th>num_rows_processed </th><td><p class="starttd">The total number of rows used in the computation. </p>
+<p class="endtd"></p>
+</td></tr>
+<tr>
+<th>num_missing_rows_skipped </th><td>The total number of rows skipped.  </td></tr>
+</table>
+<p class="enddd"></p>
+</dd>
+<dt>dependent_varname </dt>
+<dd><p class="startdd">TEXT. Name of the dependent variable column (of type BOOLEAN) in the training data or an expression evaluating to a BOOLEAN.</p>
+<p class="enddd"></p>
+</dd>
+<dt>independent_varname </dt>
+<dd><p class="startdd">TEXT. Expression list to evaluate for the independent variables. An intercept variable is not assumed. It is common to provide an explicit intercept term by including a single constant <code>1</code> term in the independent variable list.</p>
+<p class="enddd"></p>
+</dd>
+<dt>grouping_cols (optional) </dt>
+<dd><p class="startdd">TEXT, default: NULL. An expression list used to group the input dataset into discrete groups, running one regression per group. Similar to the SQL "GROUP BY" clause. When this value is NULL, no grouping is used and a single result model is generated.</p>
+<p class="enddd"></p>
+</dd>
+<dt>max_iter (optional) </dt>
+<dd><p class="startdd">INTEGER, default: 20. The maximum number of iterations that are allowed.</p>
+<p class="enddd"></p>
+</dd>
+<dt>optimizer (optional) </dt>
+<dd><p class="startdd">TEXT, default: 'irls'. The name of the optimizer to use: </p><table class="output">
+<tr>
+<th>'newton' or 'irls' </th><td>Iteratively reweighted least squares  </td></tr>
+<tr>
+<th>'cg' </th><td>conjugate gradient  </td></tr>
+<tr>
+<th>'igd' </th><td>incremental gradient descent.  </td></tr>
+</table>
+<p class="enddd"></p>
+</dd>
+<dt>tolerance (optional) </dt>
+<dd><p class="startdd">FLOAT8, default: 0.0001. The difference between log-likelihood values in successive iterations that should indicate convergence. A zero disables the convergence criterion, so that execution stops after <code>n</code> iterations have completed.</p>
+<p class="enddd"></p>
+</dd>
+<dt>verbose (optional) </dt>
+<dd>BOOLEAN, default: FALSE. Provides verbose output of the results of training. </dd>
+</dl>
+</dd></dl>
+<dl class="section note"><dt>Note</dt><dd>For p-values, we just return the computation result directly. Other statistical packages, like 'R', produce the same result, but on printing the result to screen, another format function is used and any p-value that is smaller than the machine epsilon (the smallest positive floating-point number 'x' such that '1 + x != 1') will be printed on screen as "&lt; xxx" (xxx is the value of the machine epsilon). Although the result may look different, they are in fact the same.</dd></dl>
+<p><a class="anchor" id="predict"></a></p><dl class="section user"><dt>Prediction Function</dt><dd>Two prediction functions are provided to either predict the boolean value of the dependent variable or the probability of the value of dependent variable being 'True', both functions using the same syntax.</dd></dl>
+<p>The function to predict the boolean value (True/False) of the dependent variable has the following syntax: </p><pre class="syntax">
+logregr_predict(coefficients,
+                ind_var
+               )
+</pre><p>The function to predict the probability of the dependent variable being True has the following syntax: </p><pre class="syntax">
+logregr_predict_prob(coefficients,
+                     ind_var
+                    )
+</pre><p><b>Arguments</b> </p><dl class="arglist">
+<dt>coefficients </dt>
+<dd><p class="startdd">DOUBLE PRECISION[]. Model coefficients obtained from <a class="el" href="logistic_8sql__in.html#a74210a7ef513dfcbdfdd9f3b37bfe428">logregr_train()</a>.</p>
+<p class="enddd"></p>
+</dd>
+<dt>ind_var </dt>
+<dd>Independent variables, as a DOUBLE array. This should be the same length as the array obtained by evaluation of the 'independent_varname' argument in <a class="el" href="logistic_8sql__in.html#a74210a7ef513dfcbdfdd9f3b37bfe428">logregr_train()</a>. </dd>
+</dl>
+<p><a class="anchor" id="examples"></a></p><dl class="section user"><dt>Examples</dt><dd><ol type="1">
+<li>Create the training data table. <pre class="example">
+CREATE TABLE patients( id INTEGER NOT NULL,
+                       second_attack INTEGER,
+                       treatment INTEGER,
+                       trait_anxiety INTEGER);
+COPY patients FROM STDIN WITH DELIMITER '|';
+  1 |             1 |         1 |            70
+  3 |             1 |         1 |            50
+  5 |             1 |         0 |            40
+  7 |             1 |         0 |            75
+  9 |             1 |         0 |            70
+ 11 |             0 |         1 |            65
+ 13 |             0 |         1 |            45
+ 15 |             0 |         1 |            40
+ 17 |             0 |         0 |            55
+ 19 |             0 |         0 |            50
+  2 |             1 |         1 |            80
+  4 |             1 |         0 |            60
+  6 |             1 |         0 |            65
+  8 |             1 |         0 |            80
+ 10 |             1 |         0 |            60
+ 12 |             0 |         1 |            50
+ 14 |             0 |         1 |            35
+ 16 |             0 |         1 |            50
+ 18 |             0 |         0 |            45
+ 20 |             0 |         0 |            60
+\.
+</pre></li>
+<li>Train a regression model. <pre class="example">
+SELECT madlib.logregr_train( 'patients',
+                             'patients_logregr',
+                             'second_attack',
+                             'ARRAY[1, treatment, trait_anxiety]',
+                             NULL,
+                             20,
+                             'irls'
+                           );
+</pre> (Note that in this example we are dynamically creating the array of independent variables from column names. If you have large numbers of independent variables beyond the PostgreSQL limit of maximum columns per table, you would pre-build the arrays and store them in a single column; a sketch of this approach follows these examples.)</li>
+<li>View the regression results. <pre class="example">
+-- Set extended display on for easier reading of output
+\x on
+SELECT * from patients_logregr;
+</pre> Result: <pre class="result">
+coef           | {5.59049410898112,2.11077546770772,-0.237276684606453}
+log_likelihood | -467.214718489873
+std_err        | {0.318943457652178,0.101518723785383,0.294509929481773}
+z_stats        | {17.5281667482197,20.7919819024719,-0.805666162169712}
+p_values       | {8.73403463417837e-69,5.11539430631541e-96,0.420435365338518}
+odds_ratios    | {267.867942976278,8.2546400100702,0.788773016471171}
+condition_no   | 179.186118573205
+num_iterations | 9
+</pre></li>
+<li>Alternatively, unnest the arrays in the results for easier reading of output: <pre class="example">
+\x off
+SELECT unnest(array['intercept', 'treatment', 'trait_anxiety']) as attribute,
+       unnest(coef) as coefficient,
+       unnest(std_err) as standard_error,
+       unnest(z_stats) as z_stat,
+       unnest(p_values) as pvalue,
+       unnest(odds_ratios) as odds_ratio
+    FROM patients_logregr;
+</pre></li>
+<li>Predict the dependent variable using the logistic regression model. (This example uses the original data table to perform the prediction. Typically a different test dataset with the same features as the original training dataset would be used for prediction.) <pre class="example">
+\x off
+-- Display prediction value along with the original value
+SELECT p.id, madlib.logregr_predict(coef, ARRAY[1, treatment, trait_anxiety]),
+       p.second_attack
+FROM patients p, patients_logregr m
+ORDER BY p.id;
+</pre></li>
+<li>Predict the probability of the dependent variable being TRUE. <pre class="example">
+\x off
+-- Display prediction value along with the original value
+SELECT p.id, madlib.logregr_predict_prob(coef, ARRAY[1, treatment, trait_anxiety])
+FROM patients p, patients_logregr m
+ORDER BY p.id;
+</pre></li>
+</ol>
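+<p>A minimal sketch of the pre-built array approach mentioned in step 2; the table name <code>patients_arr</code> and the column name <code>x</code> are illustrative assumptions, not MADlib conventions: </p><pre class="example">
+-- Precompute the independent-variable array once and store it in a single column
+DROP TABLE IF EXISTS patients_arr, patients_arr_logregr, patients_arr_logregr_summary;
+CREATE TABLE patients_arr AS
+    SELECT id,
+           second_attack,
+           ARRAY[1, treatment, trait_anxiety]::float8[] AS x
+    FROM patients;
+-- Train on the array column instead of an inline ARRAY[...] expression
+SELECT madlib.logregr_train( 'patients_arr',
+                             'patients_arr_logregr',
+                             'second_attack',
+                             'x'
+                           );
+</pre>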
+</dd></dl>
+<p><a class="anchor" id="notes"></a></p><dl class="section user"><dt>Notes</dt><dd>All table names can be optionally schema qualified (current_schemas() would be searched if a schema name is not provided) and all table and column names should follow case-sensitivity and quoting rules per the database. (For instance, 'mytable' and 'MyTable' both resolve to the same entity, i.e. 'mytable'. If mixed-case or multi-byte characters are desired for entity names then the string should be double-quoted; in this case the input would be '"MyTable"').</dd></dl>
+<p><a class="anchor" id="background"></a></p><dl class="section user"><dt>Technical Background</dt><dd></dd></dl>
+<p>(Binomial) logistic regression refers to a stochastic model in which the conditional mean of the dependent dichotomous variable (usually denoted <img class="formulaInl" alt="$ Y \in \{ 0,1 \} $" src="form_354.png"/>) is the logistic function of an affine function of the vector of independent variables (usually denoted <img class="formulaInl" alt="$ \boldsymbol x $" src="form_59.png"/>). That is, </p><p class="formulaDsp">
+<img class="formulaDsp" alt="\[ E[Y \mid \boldsymbol x] = \sigma(\boldsymbol c^T \boldsymbol x) \]" src="form_95.png"/>
+</p>
+<p> for some unknown vector of coefficients <img class="formulaInl" alt="$ \boldsymbol c $" src="form_79.png"/> and where <img class="formulaInl" alt="$ \sigma(x) = \frac{1}{1 + \exp(-x)} $" src="form_96.png"/> is the logistic function. Logistic regression finds the vector of coefficients <img class="formulaInl" alt="$ \boldsymbol c $" src="form_79.png"/> that maximizes the likelihood of the observations.</p>
+<p>Let</p><ul>
+<li><img class="formulaInl" alt="$ \boldsymbol y \in \{ 0,1 \}^n $" src="form_355.png"/> denote the vector of observed dependent variables, with <img class="formulaInl" alt="$ n $" src="form_11.png"/> rows, containing the observed values of the dependent variable,</li>
+<li><img class="formulaInl" alt="$ X \in \mathbf R^{n \times k} $" src="form_99.png"/> denote the design matrix with <img class="formulaInl" alt="$ k $" src="form_98.png"/> columns and <img class="formulaInl" alt="$ n $" src="form_11.png"/> rows, containing all observed vectors of independent variables <img class="formulaInl" alt="$ \boldsymbol x_i $" src="form_100.png"/> as rows.</li>
+</ul>
+<p>By definition, </p><p class="formulaDsp">
+<img class="formulaDsp" alt="\[ P[Y = y_i | \boldsymbol x_i] = \sigma((-1)^{(1 - y_i)} \cdot \boldsymbol c^T \boldsymbol x_i) \,. \]" src="form_356.png"/>
+</p>
+<p> Maximizing the likelihood <img class="formulaInl" alt="$ \prod_{i=1}^n \Pr(Y = y_i \mid \boldsymbol x_i) $" src="form_102.png"/> is equivalent to maximizing the log-likelihood <img class="formulaInl" alt="$ \sum_{i=1}^n \log \Pr(Y = y_i \mid \boldsymbol x_i) $" src="form_103.png"/>, which simplifies to </p><p class="formulaDsp">
+<img class="formulaDsp" alt="\[ l(\boldsymbol c) = -\sum_{i=1}^n \log(1 + \exp((-1)^{(1 - y_i)} \cdot \boldsymbol c^T \boldsymbol x_i)) \,. \]" src="form_357.png"/>
+</p>
+<p> The Hessian of this objective is <img class="formulaInl" alt="$ H = -X^T A X $" src="form_105.png"/> where <img class="formulaInl" alt="$ A = \text{diag}(a_1, \dots, a_n) $" src="form_106.png"/> is the diagonal matrix with <img class="formulaInl" alt="$ a_i = \sigma(\boldsymbol c^T \boldsymbol x) \cdot \sigma(-\boldsymbol c^T \boldsymbol x) \,. $" src="form_107.png"/> Since <img class="formulaInl" alt="$ H $" src="form_108.png"/> is non-positive definite, <img class="formulaInl" alt="$ l(\boldsymbol c) $" src="form_80.png"/> is concave, so maximizing it is a convex optimization problem. There are many techniques for solving convex optimization problems. Currently, logistic regression in MADlib can use one of three algorithms:</p><ul>
+<li>Iteratively Reweighted Least Squares</li>
+<li>A conjugate-gradient approach, also known as the Fletcher-Reeves method in the literature, using the Hestenes-Stiefel rule to calculate the step size.</li>
+<li>Incremental gradient descent, also known as incremental gradient methods or stochastic gradient descent in the literature.</li>
+</ul>
+<p>We estimate the standard error for coefficient <img class="formulaInl" alt="$ i $" src="form_33.png"/> as </p><p class="formulaDsp">
+<img class="formulaDsp" alt="\[ \mathit{se}(c_i) = \left( (X^T A X)^{-1} \right)_{ii} \,. \]" src="form_109.png"/>
+</p>
+<p> The Wald z-statistic is </p><p class="formulaDsp">
+<img class="formulaDsp" alt="\[ z_i = \frac{c_i}{\mathit{se}(c_i)} \,. \]" src="form_110.png"/>
+</p>
+<p>The Wald <img class="formulaInl" alt="$ p $" src="form_111.png"/>-value for coefficient <img class="formulaInl" alt="$ i $" src="form_33.png"/> gives the probability (under the assumptions inherent in the Wald test) of seeing a value at least as extreme as the one observed, provided that the null hypothesis ( <img class="formulaInl" alt="$ c_i = 0 $" src="form_112.png"/>) is true. Letting <img class="formulaInl" alt="$ F $" src="form_113.png"/> denote the cumulative distribution function of a standard normal distribution, the Wald <img class="formulaInl" alt="$ p $" src="form_111.png"/>-value for coefficient <img class="formulaInl" alt="$ i $" src="form_33.png"/> is therefore </p><p class="formulaDsp">
+<img class="formulaDsp" alt="\[ p_i = \Pr(|Z| \geq |z_i|) = 2 \cdot (1 - F( |z_i| )) \]" src="form_114.png"/>
+</p>
+<p> where <img class="formulaInl" alt="$ Z $" src="form_115.png"/> is a standard normally distributed random variable.</p>
+<p>The odds ratio for coefficient <img class="formulaInl" alt="$ i $" src="form_33.png"/> is estimated as <img class="formulaInl" alt="$ \exp(c_i) $" src="form_116.png"/>.</p>
+<p>The condition number is computed as <img class="formulaInl" alt="$ \kappa(X^T A X) $" src="form_117.png"/> during the iteration immediately <em>preceding</em> convergence (i.e., <img class="formulaInl" alt="$ A $" src="form_14.png"/> is computed using the coefficients of the previous iteration). A large condition number (say, more than 1000) indicates the presence of significant multicollinearity.</p>
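+<p>As a quick diagnostic, the condition number stored in the model output table can be inspected directly. The following is a minimal sketch against the <code>patients_logregr</code> table from the examples above; the threshold of 1000 is the rule of thumb mentioned here, not a MADlib constant: </p><pre class="example">
+-- Flag a possibly ill-conditioned design matrix
+SELECT condition_no,
+       condition_no &gt; 1000 AS possible_multicollinearity
+FROM patients_logregr;
+</pre>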
+<p><a class="anchor" id="literature"></a></p><dl class="section user"><dt>Literature</dt><dd></dd></dl>
+<p>A somewhat random selection of nice write-ups, with valuable pointers into further literature.</p>
+<p>[1] Cosma Shalizi: Statistics 36-350: Data Mining, Lecture Notes, 18 November 2009, <a href="http://www.stat.cmu.edu/~cshalizi/350/lectures/26/lecture-26.pdf">http://www.stat.cmu.edu/~cshalizi/350/lectures/26/lecture-26.pdf</a></p>
+<p>[2] Thomas P. Minka: A comparison of numerical optimizers for logistic regression, 2003 (revised Mar 26, 2007), <a href="http://research.microsoft.com/en-us/um/people/minka/papers/logreg/minka-logreg.pdf">http://research.microsoft.com/en-us/um/people/minka/papers/logreg/minka-logreg.pdf</a></p>
+<p>[3] Paul Komarek, Andrew W. Moore: Making Logistic Regression A Core Data Mining Tool With TR-IRLS, IEEE International Conference on Data Mining 2005, pp. 685-688, <a href="http://komarix.org/ac/papers/tr-irls.short.pdf">http://komarix.org/ac/papers/tr-irls.short.pdf</a></p>
+<p>[4] D. P. Bertsekas: Incremental gradient, subgradient, and proximal methods for convex optimization: a survey, Technical report, Laboratory for Information and Decision Systems, 2010, <a href="http://web.mit.edu/dimitrib/www/Incremental_Survey_LIDS.pdf">http://web.mit.edu/dimitrib/www/Incremental_Survey_LIDS.pdf</a></p>
+<p>[5] A. Nemirovski, A. Juditsky, G. Lan, and A. Shapiro: Robust stochastic approximation approach to stochastic programming, SIAM Journal on Optimization, 19(4), 2009, <a href="http://www2.isye.gatech.edu/~nemirovs/SIOPT_RSA_2009.pdf">http://www2.isye.gatech.edu/~nemirovs/SIOPT_RSA_2009.pdf</a></p>
+<p><a class="anchor" id="related"></a></p><dl class="section user"><dt>Related Topics</dt><dd></dd></dl>
+<p>File <a class="el" href="logistic_8sql__in.html" title="SQL functions for logistic regression. ">logistic.sql_in</a> documenting the training function</p>
+<p><a class="el" href="logistic_8sql__in.html#a74210a7ef513dfcbdfdd9f3b37bfe428" title="Compute logistic-regression coefficients and diagnostic statistics. ">logregr_train()</a></p>
+<p><a class="el" href="elastic__net_8sql__in.html#a735038a5090c112505c740a90a203e83" title="Interface for elastic net. ">elastic_net_train()</a></p>
+<p><a class="el" href="group__grp__linreg.html">Linear Regression</a></p>
+<p><a class="el" href="group__grp__multinom.html">Multinomial Regression</a></p>
+<p><a class="el" href="group__grp__ordinal.html">Ordinal Regression</a></p>
+<p><a class="el" href="group__grp__robust.html">Robust Variance</a></p>
+<p><a class="el" href="group__grp__clustered__errors.html">Clustered Variance</a></p>
+<p><a class="el" href="group__grp__validation.html">Cross Validation</a></p>
+<p><a class="el" href="group__grp__marginal.html">Marginal Effects</a></p>
+</div><!-- contents -->
+</div><!-- doc-content -->
+<!-- start footer part -->
+<div id="nav-path" class="navpath"><!-- id is needed for treeview function! -->
+  <ul>
+    <li class="footer">Generated on Tue May 16 2017 13:24:38 for MADlib by
+    <a href="http://www.doxygen.org/index.html">
+    <img class="footer" src="doxygen.png" alt="doxygen"/></a> 1.8.13 </li>
+  </ul>
+</div>
+</body>
+</html>

http://git-wip-us.apache.org/repos/asf/incubator-madlib-site/blob/b5b51c69/docs/v1.11/group__grp__marginal.html
----------------------------------------------------------------------
diff --git a/docs/v1.11/group__grp__marginal.html b/docs/v1.11/group__grp__marginal.html
new file mode 100644
index 0000000..a2faca3
--- /dev/null
+++ b/docs/v1.11/group__grp__marginal.html
@@ -0,0 +1,427 @@
+<!-- HTML header for doxygen 1.8.4-->
+<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
+<html xmlns="http://www.w3.org/1999/xhtml">
+<head>
+<meta http-equiv="Content-Type" content="text/xhtml;charset=UTF-8"/>
+<meta http-equiv="X-UA-Compatible" content="IE=9"/>
+<meta name="generator" content="Doxygen 1.8.13"/>
+<meta name="keywords" content="madlib,postgres,greenplum,machine learning,data mining,deep learning,ensemble methods,data science,market basket analysis,affinity analysis,pca,lda,regression,elastic net,huber white,proportional hazards,k-means,latent dirichlet allocation,bayes,support vector machines,svm"/>
+<title>MADlib: Marginal Effects</title>
+<link href="tabs.css" rel="stylesheet" type="text/css"/>
+<script type="text/javascript" src="jquery.js"></script>
+<script type="text/javascript" src="dynsections.js"></script>
+<link href="navtree.css" rel="stylesheet" type="text/css"/>
+<script type="text/javascript" src="resize.js"></script>
+<script type="text/javascript" src="navtreedata.js"></script>
+<script type="text/javascript" src="navtree.js"></script>
+<script type="text/javascript">
+  $(document).ready(initResizable);
+</script>
+<link href="search/search.css" rel="stylesheet" type="text/css"/>
+<script type="text/javascript" src="search/searchdata.js"></script>
+<script type="text/javascript" src="search/search.js"></script>
+<script type="text/javascript">
+  $(document).ready(function() { init_search(); });
+</script>
+<!-- hack in the navigation tree -->
+<script type="text/javascript" src="eigen_navtree_hacks.js"></script>
+<link href="doxygen.css" rel="stylesheet" type="text/css" />
+<link href="madlib_extra.css" rel="stylesheet" type="text/css"/>
+<!-- google analytics -->
+<script>
+  (function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
+  (i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
+  m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
+  })(window,document,'script','//www.google-analytics.com/analytics.js','ga');
+  ga('create', 'UA-45382226-1', 'madlib.incubator.apache.org');
+  ga('send', 'pageview');
+</script>
+</head>
+<body>
+<div id="top"><!-- do not remove this div, it is closed by doxygen! -->
+<div id="titlearea">
+<table cellspacing="0" cellpadding="0">
+ <tbody>
+ <tr style="height: 56px;">
+  <td id="projectlogo"><a href="http://madlib.incubator.apache.org"><img alt="Logo" src="madlib.png" height="50" style="padding-left:0.5em;" border="0"/ ></a></td>
+  <td style="padding-left: 0.5em;">
+   <div id="projectname">
+   <span id="projectnumber">1.11</span>
+   </div>
+   <div id="projectbrief">User Documentation for MADlib</div>
+  </td>
+   <td>        <div id="MSearchBox" class="MSearchBoxInactive">
+        <span class="left">
+          <img id="MSearchSelect" src="search/mag_sel.png"
+               onmouseover="return searchBox.OnSearchSelectShow()"
+               onmouseout="return searchBox.OnSearchSelectHide()"
+               alt=""/>
+          <input type="text" id="MSearchField" value="Search" accesskey="S"
+               onfocus="searchBox.OnSearchFieldFocus(true)" 
+               onblur="searchBox.OnSearchFieldFocus(false)" 
+               onkeyup="searchBox.OnSearchFieldChange(event)"/>
+          </span><span class="right">
+            <a id="MSearchClose" href="javascript:searchBox.CloseResultsWindow()"><img id="MSearchCloseImg" border="0" src="search/close.png" alt=""/></a>
+          </span>
+        </div>
+</td>
+ </tr>
+ </tbody>
+</table>
+</div>
+<!-- end header part -->
+<!-- Generated by Doxygen 1.8.13 -->
+<script type="text/javascript">
+var searchBox = new SearchBox("searchBox", "search",false,'Search');
+</script>
+</div><!-- top -->
+<div id="side-nav" class="ui-resizable side-nav-resizable">
+  <div id="nav-tree">
+    <div id="nav-tree-contents">
+      <div id="nav-sync" class="sync"></div>
+    </div>
+  </div>
+  <div id="splitbar" style="-moz-user-select:none;" 
+       class="ui-resizable-handle">
+  </div>
+</div>
+<script type="text/javascript">
+$(document).ready(function(){initNavTree('group__grp__marginal.html','');});
+</script>
+<div id="doc-content">
+<!-- window showing the filter options -->
+<div id="MSearchSelectWindow"
+     onmouseover="return searchBox.OnSearchSelectShow()"
+     onmouseout="return searchBox.OnSearchSelectHide()"
+     onkeydown="return searchBox.OnSearchSelectKey(event)">
+</div>
+
+<!-- iframe showing the search results (closed by default) -->
+<div id="MSearchResultsWindow">
+<iframe src="javascript:void(0)" frameborder="0" 
+        name="MSearchResults" id="MSearchResults">
+</iframe>
+</div>
+
+<div class="header">
+  <div class="headertitle">
+<div class="title">Marginal Effects<div class="ingroups"><a class="el" href="group__grp__super.html">Supervised Learning</a> &raquo; <a class="el" href="group__grp__regml.html">Regression Models</a></div></div>  </div>
+</div><!--header-->
+<div class="contents">
+<div class="toc"><b>Contents</b> <ul>
+<li>
+<a href="#margins">Marginal Effects with Interaction Terms</a> </li>
+<li>
+<a href="#examples">Examples</a> </li>
+<li>
+<a href="#notes">Notes</a> </li>
+<li>
+<a href="#background">Technical Background</a> </li>
+<li>
+<a href="#literature">Literature</a> </li>
+<li>
+<a href="#related">Related Topics</a> </li>
+</ul>
+</div><p>A marginal effect (ME) or partial effect measures the effect on the conditional mean of <img class="formulaInl" alt="$ y $" src="form_324.png"/> for a change in one of the regressors, say <img class="formulaInl" alt="$X_k$" src="form_367.png"/>. In the linear regression model, the ME equals the relevant slope coefficient, greatly simplifying analysis. For nonlinear models, specialized algorithms are required for calculating ME. The marginal effect computed is the average of the marginal effect at every data point present in the source table.</p>
+<p>MADlib provides marginal effects regression functions for linear, logistic and multinomial logistic regressions.</p>
+<dl class="section warning"><dt>Warning</dt><dd>The <a class="el" href="marginal_8sql__in.html#a9517d679ee4209126895445cbed51fe3">margins_logregr()</a> and <a class="el" href="marginal_8sql__in.html#ae39ad0e1beca060fd153dba35901a4e7">margins_mlogregr()</a> functions have been deprecated in favor of the <a class="el" href="marginal_8sql__in.html#a36fcae5245ca31517723fce38b183c90" title="Marginal effects with default variable_names. ">margins()</a> function.</dd></dl>
+<p><a class="anchor" id="margins"></a></p><dl class="section user"><dt>Marginal Effects with Interaction Terms</dt><dd><pre class="syntax">
+margins( model_table,
+         output_table,
+         x_design,
+         source_table,
+         marginal_vars
+       )
+</pre> <b>Arguments</b> <dl class="arglist">
+<dt>model_table </dt>
+<dd>VARCHAR. The name of the model table, which is the output of <a class="el" href="logistic_8sql__in.html#a74210a7ef513dfcbdfdd9f3b37bfe428" title="Compute logistic-regression coefficients and diagnostic statistics. ">logregr_train()</a> or <a class="el" href="multilogistic_8sql__in.html#aedc13474e6abbc88451d120ad97e44d4" title="Compute multinomial logistic regression coefficients. ">mlogregr_train()</a>. </dd>
+<dt>output_table </dt>
+<dd>VARCHAR. The name of the result table. The output table has the following columns. <table class="output">
+<tr>
+<th>variables </th><td>INTEGER[]. The indices of the basis variables.  </td></tr>
+<tr>
+<th>margins </th><td>DOUBLE PRECISION[]. The marginal effects.  </td></tr>
+<tr>
+<th>std_err </th><td>DOUBLE PRECISION[]. An array of the standard errors, computed using the delta method.  </td></tr>
+<tr>
+<th>z_stats </th><td>DOUBLE PRECISION[]. An array of the z-stats of the marginal effects.  </td></tr>
+<tr>
+<th>p_values </th><td>DOUBLE PRECISION[]. An array of the Wald p-values of the marginal effects.  </td></tr>
+</table>
+</dd>
+<dt>x_design (optional) </dt>
+<dd><p class="startdd">VARCHAR, default: NULL. The design of independent variables, necessary only if interaction term or indicator (categorical) terms are present. This parameter is necessary since the independent variables in the underlying regression is not parsed to extract the relationship between variables.</p>
+<p>Example: The <em>independent_varname</em> in the regression method can be specified in either of the following ways:</p><ul>
+<li><code> ‘array[1, color_blue, color_green, gender_female, gpa, gpa^2, gender_female*gpa, gender_female*gpa^2, weight]’ </code></li>
+<li><code> ‘x’ </code></li>
+</ul>
+<p>In the second version, the column <em>x</em> is an array containing data identical to that expressed in the first version, computed in a prior data preparation step. Supply an <em>x_design</em> argument to the <a class="el" href="marginal_8sql__in.html#a36fcae5245ca31517723fce38b183c90" title="Marginal effects with default variable_names. ">margins()</a> function in the following way:</p><ul>
+<li><code> ‘1, i.color_blue.color, i.color_green.color, i.gender_female, gpa, gpa^2, gender_female*gpa, gender_female*gpa^2, weight’</code></li>
+</ul>
+<p>The variable names (<em>'gpa', 'weight', ...</em>), referred to here as <em>identifiers</em>, should be unique for each basis variable and need not be the same as the original variable name in <em>independent_varname</em>. They should, however, be in the same order as the corresponding variables in <em>independent_varname</em>. The length of <em>x_design</em> is expected to be the same as the length of <em>independent_varname</em>. Each <em>identifier</em> name can contain only alphanumeric characters and the underscore.</p>
+<p>Indicator (dummy) variables are prefixed with an 'i.' (This is only necessary for the basis term; it is not needed in the interaction terms.) Indicator variables that are obtained from the same categorical variable (for example, 'color_blue' and 'color_green') need to have a common and unique suffix (for example, '.color'). The '.' is used to add the prefix and suffix. If a reference indicator variable is present, it should contain the prefix 'ir.'.</p>
+<p>An identifier may contain alphanumeric characters and underscores. To include other characters, the string must be double-quoted. Escape-characters are not currently supported. </p>
+<p class="enddd"></p>
+</dd>
+<dt>source_table (optional) </dt>
+<dd><p class="startdd">VARCHAR, default: NULL. Name of the data table to apply marginal effects on. If not provided or NULL then the marginal effects are computed on the training table.</p>
+<p class="enddd"></p>
+</dd>
+<dt>marginal_vars (optional) </dt>
+<dd>VARCHAR, default: NULL. Comma-separated string containing specific variable identifiers to calculate marginal effects on. When NULL, marginal effects for all variables are returned. </dd>
+</dl>
+</dd></dl>
+<dl class="section note"><dt>Note</dt><dd>No output will be provided for the reference indicator variable, since the marginal effect for that variable is undefined. If a reference variable is included in the independent variables and <em>marginal_vars</em>, the <a class="el" href="marginal_8sql__in.html#a36fcae5245ca31517723fce38b183c90" title="Marginal effects with default variable_names. ">margins()</a> function will ignore that variable for the output. The variable can still be included in the regression and margins, since it will affect the values for other related indicator variables.</dd></dl>
+<p><a class="anchor" id="logregr_train"></a></p><dl class="section user"><dt>Marginal Effects for Logistic Regression</dt><dd></dd></dl>
+<dl class="section warning"><dt>Warning</dt><dd>This function has been deprecated in favor of the <a class="el" href="marginal_8sql__in.html#a36fcae5245ca31517723fce38b183c90" title="Marginal effects with default variable_names. ">margins()</a> function.</dd></dl>
+<pre class="syntax">
+margins_logregr( source_table,
+                 output_table,
+                 dependent_variable,
+                 independent_variable,
+                 grouping_cols,
+                 marginal_vars,
+                 max_iter,
+                 optimizer,
+                 tolerance,
+                 verbose_mode
+               )
+</pre><p> <b>Arguments</b> </p><dl class="arglist">
+<dt>source_table </dt>
+<dd>VARCHAR. The name of the data table. </dd>
+<dt>output_table </dt>
+<dd><p class="startdd">VARCHAR. The name of the result table. The output table has the following columns. </p><table class="output">
+<tr>
+<th>margins </th><td>DOUBLE PRECISION[]. The marginal effects.  </td></tr>
+<tr>
+<th>std_err </th><td>DOUBLE PRECISION[]. An array of the standard errors, using the delta method.  </td></tr>
+<tr>
+<th>z_stats </th><td>DOUBLE PRECISION[]. An array of the z-stats of the marginal effects.  </td></tr>
+<tr>
+<th>p_values </th><td>DOUBLE PRECISION[]. An array of the Wald p-values of the marginal effects.  </td></tr>
+</table>
+<p>A summary table named &lt;output_table&gt;_summary is also created, which is the same as the summary table created by <a class="el" href="logistic_8sql__in.html#a74210a7ef513dfcbdfdd9f3b37bfe428" title="Compute logistic-regression coefficients and diagnostic statistics. ">logregr_train()</a> function. Refer to the documentation for logistic regression for details.</p>
+<p class="enddd"></p>
+</dd>
+<dt>dependent_variable </dt>
+<dd>VARCHAR. The name of the column for dependent variables. </dd>
+<dt>independent_variable </dt>
+<dd>VARCHAR. The name of the column for independent variables. Can be any SQL expression that evaluates to an array. </dd>
+<dt>grouping_cols (optional) </dt>
+<dd>VARCHAR, default: NULL. <em>Not currently implemented. Any non-NULL value is ignored.</em> An expression list used to group the input dataset into discrete groups, running one regression per group. Similar to the SQL "GROUP BY" clause. When this value is NULL, no grouping is used and a single result model is generated. </dd>
+<dt>marginal_vars (optional) </dt>
+<dd>INTEGER[], default: NULL. An index list (base 1) representing the independent variables to compute marginal effects on. When NULL, computes marginal effects on all variables. </dd>
+<dt>max_iter (optional) </dt>
+<dd>INTEGER, default: 20. The maximum number of iterations for the logistic regression. </dd>
+<dt>optimizer (optional) </dt>
+<dd>VARCHAR, default: 'irls'. The optimizer to use for the underlying logistic regression: 'newton'/'irls' (iteratively reweighted least squares), 'cg' (conjugate gradient), or 'igd' (incremental gradient descent). </dd>
+<dt>tolerance (optional) </dt>
+<dd>DOUBLE PRECISION, default: 1e-4. Termination criterion for logistic regression (relative). </dd>
+<dt>verbose_mode (optional) </dt>
+<dd>BOOLEAN, default: FALSE. When TRUE, provides verbose output of the results of training.  </dd>
+</dl>
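+<p>For reference, a minimal sketch of a call to this deprecated interface is shown below. It reuses the <code>patients</code> table from the examples later on this page; the output table name is illustrative, and the trailing optional arguments are assumed to take their defaults.</p>
+<pre class="example">
+DROP TABLE IF EXISTS margins_logregr_out;
+DROP TABLE IF EXISTS margins_logregr_out_summary;
+SELECT madlib.margins_logregr( 'patients',
+                               'margins_logregr_out',
+                               'second_attack',
+                               'ARRAY[1, treatment, trait_anxiety]'
+                             );
+SELECT * FROM margins_logregr_out;
+</pre>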
+<p><a class="anchor" id="mlogregr_train"></a></p><dl class="section user"><dt>Marginal Effects for Multinomial Logistic Regression</dt><dd></dd></dl>
+<dl class="section warning"><dt>Warning</dt><dd>This function has been deprecated in favor of the <a class="el" href="marginal_8sql__in.html#a36fcae5245ca31517723fce38b183c90" title="Marginal effects with default variable_names. ">margins()</a> function.</dd></dl>
+<pre class="syntax">
+margins_mlogregr( source_table,
+                  out_table,
+                  dependent_varname,
+                  independent_varname,
+                  ref_category,
+                  grouping_cols,
+                  marginal_vars,
+                  optimizer_params,
+                  verbose_mode
+                )
+</pre><p> <b>Arguments</b> </p><dl class="arglist">
+<dt>source_table </dt>
+<dd>VARCHAR. The name of the data table. </dd>
+<dt>out_table </dt>
+<dd><p class="startdd">VARCHAR. The name of result table. The output table has the following columns. </p><table class="output">
+<tr>
+<th>category </th><td>The category of the dependent variable for which the marginal effects are computed.  </td></tr>
+<tr>
+<th>ref_category </th><td>The reference category used for modeling.  </td></tr>
+<tr>
+<th>margins </th><td>DOUBLE PRECISION[]. The marginal effects.  </td></tr>
+<tr>
+<th>std_err </th><td>DOUBLE PRECISION[]. An array of the standard errors, using the delta method.  </td></tr>
+<tr>
+<th>z_stats </th><td>DOUBLE PRECISION[]. An array of the z-stats of the marginal effects.  </td></tr>
+<tr>
+<th>p_values </th><td>DOUBLE PRECISION[]. An array of the Wald p-values of the marginal effects.  </td></tr>
+</table>
+<p>A summary table named &lt;out_table&gt;_summary is also created, which is the same as the summary table created by the <a class="el" href="multilogistic_8sql__in.html#aedc13474e6abbc88451d120ad97e44d4" title="Compute multinomial logistic regression coefficients. ">mlogregr_train()</a> function. Refer to the documentation for multinomial logistic regression for details.</p>
+<p class="enddd"></p>
+</dd>
+<dt>dependent_varname </dt>
+<dd>VARCHAR. The name of the column for dependent variables. </dd>
+<dt>independent_varname </dt>
+<dd>VARCHAR. The name of the column for independent variables. Can be any SQL expression that evaluates to an array. </dd>
+<dt>ref_category (optional) </dt>
+<dd>INTEGER, default: 0. Reference category for the multinomial logistic regression. </dd>
+<dt>grouping_cols (optional) </dt>
+<dd>VARCHAR, default: NULL. <em>Not currently implemented. Any non-NULL value is ignored.</em> An expression list used to group the input dataset into discrete groups, running one regression per group. Similar to the SQL "GROUP BY" clause. When this value is NULL, no grouping is used and a single result model is generated. </dd>
+<dt>marginal_vars (optional) </dt>
+<dd>INTEGER[], default: NULL. An index list (base 1) representing the independent variables to compute marginal effects on. When NULL, computes marginal effects on all variables. </dd>
+<dt>optimizer_params (optional) </dt>
+<dd>TEXT, default: NULL, which uses the default optimizer parameters: max_iter=20, optimizer='newton', tolerance=1e-4. Must be a string of comma-separated 'key=value' pairs. </dd>
+<dt>verbose_mode (optional) </dt>
+<dd>BOOLEAN, default: FALSE. When TRUE, provides verbose output of the results of training.  </dd>
+</dl>
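+<p>For reference, a minimal sketch of a call to this deprecated interface is shown below. It reuses the <code>test3</code> table from the examples later on this page; the output table name is illustrative, and the trailing optional arguments are assumed to take their defaults.</p>
+<pre class="example">
+DROP TABLE IF EXISTS margins_mlogregr_out;
+DROP TABLE IF EXISTS margins_mlogregr_out_summary;
+SELECT madlib.margins_mlogregr( 'test3',
+                                'margins_mlogregr_out',
+                                'cat',
+                                'ARRAY[1, feat1, feat2]'
+                              );
+SELECT * FROM margins_mlogregr_out;
+</pre>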
+<p><a class="anchor" id="examples"></a></p><dl class="section user"><dt>Examples</dt><dd></dd></dl>
+<ol type="1">
+<li>View online help for the marginal effects function. <pre class="example">
+SELECT madlib.margins();
+</pre></li>
+<li>Create the sample data set. Use the <code>patients</code> dataset from the <a href="group__grp__logreg.html#examples">Logistic Regression examples</a>. <pre class="example">
+SELECT * FROM patients;
+</pre> Result: <pre class="result">
+ id | second_attack | treatment | trait_anxiety
+&#160;---+---------------+-----------+---------------
+  1 |             1 |         1 |            70
+  3 |             1 |         1 |            50
+  5 |             1 |         0 |            40
+  7 |             1 |         0 |            75
+  9 |             1 |         0 |            70
+ 11 |             0 |         1 |            65
+ 13 |             0 |         1 |            45
+ 15 |             0 |         1 |            40
+ 17 |             0 |         0 |            55
+ 19 |             0 |         0 |            50
+  2 |             1 |         1 |            80
+  4 |             1 |         0 |            60
+  6 |             1 |         0 |            65
+  8 |             1 |         0 |            80
+ 10 |             1 |         0 |            60
+ 12 |             0 |         1 |            50
+ 14 |             0 |         1 |            35
+ 16 |             0 |         1 |            50
+ 18 |             0 |         0 |            45
+ 20 |             0 |         0 |            60
+</pre></li>
+<li>Run logistic regression to get the model, compute the marginal effects of all variables, and view the results. <pre class="example">
+DROP TABLE IF EXISTS model_table;
+DROP TABLE IF EXISTS model_table_summary;
+DROP TABLE IF EXISTS margins_table;
+SELECT madlib.logregr_train( 'patients',
+                             'model_table',
+                             'second_attack',
+                             'ARRAY[1, treatment, trait_anxiety, treatment^2, treatment * trait_anxiety]'
+                           );
+SELECT madlib.margins( 'model_table',
+                       'margins_table',
+                       'intercept, treatment, trait_anxiety, treatment^2, treatment*trait_anxiety',
+                       NULL,
+                       NULL
+                     );
+\x ON
+SELECT * FROM margins_table;
+</pre> Result: <pre class="result">
+variables | {intercept, treatment, trait_anxiety}
+margins   | {-0.876046514609573,-0.0648833521465306,0.0177196513589633}
+std_err   | {0.551714275062467,0.373592457067442,0.00458001207971933}
+z_stats   | {-1.58786269307674,-0.173674149247659,3.86890930646828}
+p_values  | {0.112317391159946,0.862121554662231,0.000109323294026272}
+</pre></li>
+<li>Compute the marginal effects of the first variable using the previous model and view the results (using different names in 'x_design'). <pre class="example">
+DROP TABLE IF EXISTS result_table;
+SELECT madlib.margins( 'model_table',
+                       'result_table',
+                       'i, tre, tra, tre^2, tre*tra',
+                       NULL,
+                       'tre'
+                     );
+SELECT * FROM result_table;
+</pre> Result: <pre class="result">
+-[ RECORD 1 ]-------------------
+variables | {tre}
+margins   | {-0.110453283517281}
+std_err   | {0.228981529064089}
+z_stats   | {-0.482367656329023}
+p_values  | {0.629544793219806}
+</pre></li>
+<li>Create a sample data set for multinomial logistic regression. (The full dataset has three categories.) Use the dataset from the <a href="group__grp__mlogreg.html#examples">Multinomial Regression example</a>. <pre class="example">
+\x OFF
+SELECT * FROM test3;
+</pre> Result: <pre class="result">
+ feat1 | feat2 | cat
+-------+-------+-----
+     2 |    33 |   0
+     2 |    31 |   1
+     2 |    36 |   1
+     2 |    31 |   1
+     2 |    41 |   1
+     2 |    37 |   1
+     2 |    44 |   1
+     2 |    46 |   1
+     2 |    46 |   2
+     2 |    39 |   0
+     2 |    44 |   1
+     2 |    44 |   0
+     2 |    67 |   2
+     2 |    59 |   2
+     2 |    59 |   0
+...
+</pre></li>
+<li>Run the regression function and then compute the marginal effects of all variables in the regression. <pre class="example">
+DROP TABLE IF EXISTS model_table;
+DROP TABLE IF EXISTS model_table_summary;
+DROP TABLE IF EXISTS result_table;
+SELECT madlib.mlogregr_train('test3', 'model_table', 'cat',
+                             'ARRAY[1, feat1, feat2, feat1*feat2]',
+                             0);
+SELECT madlib.margins('model_table',
+                      'result_table',
+                      'intercept, feat1, feat2, feat1*feat2');
+\x ON
+SELECT * FROM result_table;
+</pre> Result: <pre class="result">
+-[ RECORD 1 ]+-------------------------------------------------------------
+category     | 1
+ref_category | 0
+variables    | {intercept,feat1,feat2}
+margins      | {2.38176571752675,-0.0545733108729351,-0.0147264917310351}
+std_err      | {0.851299967007829,0.0697049196489632,0.00374946341567828}
+z_stats      | {2.79779843748643,-0.782919070099622,-3.92762646235104}
+p_values     | {0.00514522099923651,0.43367463815468,8.57883141882439e-05}
+-[ RECORD 2 ]+-------------------------------------------------------------
+category     | 2
+ref_category | 0
+variables    | {intercept,feat1,feat2}
+margins      | {-1.99279068434949,0.0922540608068343,0.0168049205501686}
+std_err      | {0.742790306495022,0.0690712705200096,0.00202015384479213}
+z_stats      | {-2.68284422524683,1.33563578767686,8.31863404536785}
+p_values     | {0.00729989838349161,0.181668346802398,8.89828265128986e-17}
+</pre></li>
+</ol>
+<p><a class="anchor" id="notes"></a> </p><dl class="section note"><dt>Note</dt><dd>The <em>marginal_vars</em> argument is a list with the names matching those in 'x_design'. If no 'x_design' is present (i.e. no interaction and no indicator variables), then <em>marginal_vars</em> must be the indices (base 1) of variables in 'independent_varname'. Use <em>NULL</em> to use all independent variables. It is important to note that the <em>independent_varname</em> array in the underlying regression is assumed to start with a lower bound index of 1. Arrays that don't follow this would result in an incorrect solution.</dd></dl>
+<p><a class="anchor" id="background"></a></p><dl class="section user"><dt>Technical Background</dt><dd></dd></dl>
+<p>The standard approach to modeling dichotomous/binary variables (so <img class="formulaInl" alt="$y \in \{0, 1\} $" src="form_368.png"/>) is to estimate a generalized linear model under the assumption that <img class="formulaInl" alt="$ y $" src="form_324.png"/> follows some form of Bernoulli distribution. Thus the expected value of <img class="formulaInl" alt="$ y $" src="form_324.png"/> becomes, </p><p class="formulaDsp">
+<img class="formulaDsp" alt="\[ y = G(X' \beta), \]" src="form_369.png"/>
+</p>
+<p>where G is the chosen link function, typically the cumulative distribution function of an appropriate distribution. For logistic regression, the function <img class="formulaInl" alt="$ G $" src="form_370.png"/> is the inverse logit (logistic) function.</p>
+<p>In logistic regression: </p><p class="formulaDsp">
+<img class="formulaDsp" alt="\[ P = \frac{1}{1 + e^{-(\beta_0 + \beta_1 x_1 + \dots \beta_j x_j)}} = \frac{1}{1 + e^{-z}} \implies \frac{\partial P}{\partial X_k} = \beta_k \cdot \frac{1}{1 + e^{-z}} \cdot \frac{e^{-z}}{1 + e^{-z}} \\ = \beta_k \cdot P \cdot (1-P) \]" src="form_371.png"/>
+</p>
+<p>There are several methods for calculating marginal effects when the dependent variable is dichotomous. This package reports, for each variable, the average of the per-observation marginal effects over all sample observations.</p>
+<p>This is calculated as follows: </p><p class="formulaDsp">
+<img class="formulaDsp" alt="\[ \frac{\partial y}{\partial x_k} = \beta_k \frac{\sum_{i=1}^n P(y_i = 1)(1-P(y_i = 1))}{n}, \\ \text{where}, P(y_i=1) = g(X^{(i)}\beta) \]" src="form_372.png"/>
+</p>
+<p>We use the delta method for calculating standard errors on the marginal effects.</p>
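+<p>To make the averaging concrete, the following sketch (table names are illustrative) reproduces the average marginal effect of one coefficient by hand for a logistic model without interaction terms; the result can be compared against the output of <a class="el" href="marginal_8sql__in.html#a36fcae5245ca31517723fce38b183c90" title="Marginal effects with default variable_names. ">margins()</a> for the same model.</p>
+<pre class="example">
+DROP TABLE IF EXISTS simple_model;
+DROP TABLE IF EXISTS simple_model_summary;
+SELECT madlib.logregr_train( 'patients',
+                             'simple_model',
+                             'second_attack',
+                             'ARRAY[1, treatment, trait_anxiety]'
+                           );
+-- beta_3 times the average of P*(1-P): the marginal effect of trait_anxiety
+SELECT (coef)[3] * avg(pr * (1 - pr)) AS margin_trait_anxiety
+FROM (
+    SELECT m.coef,
+           1.0 / (1.0 + exp(-( (m.coef)[1]
+                             + (m.coef)[2] * p.treatment
+                             + (m.coef)[3] * p.trait_anxiety ))) AS pr
+    FROM patients p CROSS JOIN simple_model m
+) t
+GROUP BY coef;
+</pre>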
+<p><a class="anchor" id="literature"></a></p><dl class="section user"><dt>Literature</dt><dd></dd></dl>
+<p>[1] mfx function in STATA: <a href="http://www.stata.com/help.cgi?mfx_option">http://www.stata.com/help.cgi?mfx_option</a></p>
+<p><a class="anchor" id="related"></a></p><dl class="section user"><dt>Related Topics</dt><dd></dd></dl>
+<p>File <a class="el" href="marginal_8sql__in.html" title="SQL functions for linear regression. ">marginal.sql_in</a> documenting the SQL functions.</p>
+</div><!-- contents -->
+</div><!-- doc-content -->
+<!-- start footer part -->
+<div id="nav-path" class="navpath"><!-- id is needed for treeview function! -->
+  <ul>
+    <li class="footer">Generated on Tue May 16 2017 13:24:38 for MADlib by
+    <a href="http://www.doxygen.org/index.html">
+    <img class="footer" src="doxygen.png" alt="doxygen"/></a> 1.8.13 </li>
+  </ul>
+</div>
+</body>
+</html>