Posted to commits@madlib.apache.org by ri...@apache.org on 2017/05/16 20:29:36 UTC

[21/51] [partial] incubator-madlib-site git commit: Add v1.11 docs

http://git-wip-us.apache.org/repos/asf/incubator-madlib-site/blob/b5b51c69/docs/v1.11/group__grp__mlogreg.html
----------------------------------------------------------------------
diff --git a/docs/v1.11/group__grp__mlogreg.html b/docs/v1.11/group__grp__mlogreg.html
new file mode 100644
index 0000000..887829f
--- /dev/null
+++ b/docs/v1.11/group__grp__mlogreg.html
@@ -0,0 +1,410 @@
+<!-- HTML header for doxygen 1.8.4-->
+<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
+<html xmlns="http://www.w3.org/1999/xhtml">
+<head>
+<meta http-equiv="Content-Type" content="text/xhtml;charset=UTF-8"/>
+<meta http-equiv="X-UA-Compatible" content="IE=9"/>
+<meta name="generator" content="Doxygen 1.8.13"/>
+<meta name="keywords" content="madlib,postgres,greenplum,machine learning,data mining,deep learning,ensemble methods,data science,market basket analysis,affinity analysis,pca,lda,regression,elastic net,huber white,proportional hazards,k-means,latent dirichlet allocation,bayes,support vector machines,svm"/>
+<title>MADlib: Multinomial Logistic Regression</title>
+<link href="tabs.css" rel="stylesheet" type="text/css"/>
+<script type="text/javascript" src="jquery.js"></script>
+<script type="text/javascript" src="dynsections.js"></script>
+<link href="navtree.css" rel="stylesheet" type="text/css"/>
+<script type="text/javascript" src="resize.js"></script>
+<script type="text/javascript" src="navtreedata.js"></script>
+<script type="text/javascript" src="navtree.js"></script>
+<script type="text/javascript">
+  $(document).ready(initResizable);
+</script>
+<link href="search/search.css" rel="stylesheet" type="text/css"/>
+<script type="text/javascript" src="search/searchdata.js"></script>
+<script type="text/javascript" src="search/search.js"></script>
+<script type="text/javascript">
+  $(document).ready(function() { init_search(); });
+</script>
+<!-- hack in the navigation tree -->
+<script type="text/javascript" src="eigen_navtree_hacks.js"></script>
+<link href="doxygen.css" rel="stylesheet" type="text/css" />
+<link href="madlib_extra.css" rel="stylesheet" type="text/css"/>
+<!-- google analytics -->
+<script>
+  (function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
+  (i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
+  m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
+  })(window,document,'script','//www.google-analytics.com/analytics.js','ga');
+  ga('create', 'UA-45382226-1', 'madlib.incubator.apache.org');
+  ga('send', 'pageview');
+</script>
+</head>
+<body>
+<div id="top"><!-- do not remove this div, it is closed by doxygen! -->
+<div id="titlearea">
+<table cellspacing="0" cellpadding="0">
+ <tbody>
+ <tr style="height: 56px;">
+  <td id="projectlogo"><a href="http://madlib.incubator.apache.org"><img alt="Logo" src="madlib.png" height="50" style="padding-left:0.5em;" border="0"/ ></a></td>
+  <td style="padding-left: 0.5em;">
+   <div id="projectname">
+   <span id="projectnumber">1.11</span>
+   </div>
+   <div id="projectbrief">User Documentation for MADlib</div>
+  </td>
+   <td>        <div id="MSearchBox" class="MSearchBoxInactive">
+        <span class="left">
+          <img id="MSearchSelect" src="search/mag_sel.png"
+               onmouseover="return searchBox.OnSearchSelectShow()"
+               onmouseout="return searchBox.OnSearchSelectHide()"
+               alt=""/>
+          <input type="text" id="MSearchField" value="Search" accesskey="S"
+               onfocus="searchBox.OnSearchFieldFocus(true)" 
+               onblur="searchBox.OnSearchFieldFocus(false)" 
+               onkeyup="searchBox.OnSearchFieldChange(event)"/>
+          </span><span class="right">
+            <a id="MSearchClose" href="javascript:searchBox.CloseResultsWindow()"><img id="MSearchCloseImg" border="0" src="search/close.png" alt=""/></a>
+          </span>
+        </div>
+</td>
+ </tr>
+ </tbody>
+</table>
+</div>
+<!-- end header part -->
+<!-- Generated by Doxygen 1.8.13 -->
+<script type="text/javascript">
+var searchBox = new SearchBox("searchBox", "search",false,'Search');
+</script>
+</div><!-- top -->
+<div id="side-nav" class="ui-resizable side-nav-resizable">
+  <div id="nav-tree">
+    <div id="nav-tree-contents">
+      <div id="nav-sync" class="sync"></div>
+    </div>
+  </div>
+  <div id="splitbar" style="-moz-user-select:none;" 
+       class="ui-resizable-handle">
+  </div>
+</div>
+<script type="text/javascript">
+$(document).ready(function(){initNavTree('group__grp__mlogreg.html','');});
+</script>
+<div id="doc-content">
+<!-- window showing the filter options -->
+<div id="MSearchSelectWindow"
+     onmouseover="return searchBox.OnSearchSelectShow()"
+     onmouseout="return searchBox.OnSearchSelectHide()"
+     onkeydown="return searchBox.OnSearchSelectKey(event)">
+</div>
+
+<!-- iframe showing the search results (closed by default) -->
+<div id="MSearchResultsWindow">
+<iframe src="javascript:void(0)" frameborder="0" 
+        name="MSearchResults" id="MSearchResults">
+</iframe>
+</div>
+
+<div class="header">
+  <div class="headertitle">
+<div class="title">Multinomial Logistic Regression<div class="ingroups"><a class="el" href="group__grp__deprecated.html">Deprecated Modules</a></div></div>  </div>
+</div><!--header-->
+<div class="contents">
+<dl class="section warning"><dt>Warning</dt><dd><em> This is an old implementation of multinomial logistic regression. A replacement for this function is available in the <a class="el" href="group__grp__multinom.html">Multinomial Regression</a> module.</em></dd></dl>
+<div class="toc"><b>Contents</b> <ul>
+<li class="level1">
+<a href="#train">Training Function</a> </li>
+<li class="level1">
+<a href="#predict">Prediction Function</a> </li>
+<li class="level1">
+<a href="#examples">Examples</a> </li>
+<li class="level1">
+<a href="#background">Technical Background</a> </li>
+<li class="level1">
+<a href="#literature">Literature</a> </li>
+<li class="level1">
+<a href="#related">Related Topics</a> </li>
+</ul>
+</div><p>Multinomial logistic regression is a widely used regression analysis tool that models the outcomes of categorical dependent random variables. The model assumes that the conditional mean of the dependent categorical variables is the logistic function of an affine combination of independent variables. Multinomial logistic regression finds the vector of coefficients that maximizes the likelihood of the observations.</p>
+<p><a class="anchor" id="train"></a></p><dl class="section user"><dt>Training Function</dt><dd>The multinomial logistic regression training function has the following syntax: <pre class="syntax">
+mlogregr_train(source_table,
+               output_table,
+               dependent_varname,
+               independent_varname,
+               ref_category,
+               optimizer_params
+              )
+</pre> <b>Arguments</b> <dl class="arglist">
+<dt>source_table </dt>
+<dd><p class="startdd">TEXT. The name of the table containing the input data.</p>
+<p></p>
+<p class="enddd"></p>
+</dd>
+<dt>output_table </dt>
+<dd><p class="startdd">TEXT. The name of the generated table containing the output model. The output table produced by the multinomial logistic regression training function contains the following columns: </p><table class="output">
+<tr>
+<th>category </th><td>INTEGER. The category. Categories are encoded as integers with values from {0, 1, 2,..., <em>numCategories</em> &ndash; 1}  </td></tr>
+<tr>
+<th>ref_category </th><td>INTEGER. The reference category. Categories are encoded as integers with values from {0, 1, 2,..., <em>numCategories</em> &ndash; 1}  </td></tr>
+<tr>
+<th>coef </th><td>FLOAT8[]. An array of coefficients, <img class="formulaInl" alt="$ \boldsymbol c $" src="form_79.png"/>.   </td></tr>
+<tr>
+<th>log_likelihood </th><td>FLOAT8. The log-likelihood, <img class="formulaInl" alt="$ l(\boldsymbol c) $" src="form_80.png"/>.  </td></tr>
+<tr>
+<th>std_err </th><td>FLOAT8[]. An array of the standard errors.  </td></tr>
+<tr>
+<th>z_stats </th><td>FLOAT8[]. An array of the Wald z-statistics.  </td></tr>
+<tr>
+<th>p_values </th><td>FLOAT8[]. An array of the Wald p-values.  </td></tr>
+<tr>
+<th>odds_ratios </th><td>FLOAT8[]. An array of the odds ratios.  </td></tr>
+<tr>
+<th>condition_no </th><td>FLOAT8. The condition number of the matrix, computed using the coefficients of the iteration immediately preceding convergence.  </td></tr>
+<tr>
+<th>num_iterations </th><td>INTEGER. The number of iterations executed before the algorithm completed.  </td></tr>
+</table>
+<p>A summary table named &lt;out_table&gt;_summary is also created at the same time, and it contains the following columns:</p>
+<table class="output">
+<tr>
+<th>source_table </th><td>The data source table name.  </td></tr>
+<tr>
+<th>out_table </th><td>The output table name.  </td></tr>
+<tr>
+<th>dependent_varname </th><td>The dependent variable.  </td></tr>
+<tr>
+<th>independent_varname </th><td>The independent variables.  </td></tr>
+<tr>
+<th>optimizer_params </th><td>The optimizer parameters. It is a copy of the optimizer_params in the training function's arguments.  </td></tr>
+<tr>
+<th>ref_category </th><td>An integer, the value of reference category used.  </td></tr>
+<tr>
+<th>num_rows_processed </th><td>INTEGER. The number of rows actually processed, which is equal to the total number of rows in the source table minus the number of skipped rows.  </td></tr>
+<tr>
+<th>num_missing_rows_skipped </th><td>INTEGER. The number of rows skipped during the training. A row will be skipped if the ind_col is NULL or contains NULL values.  </td></tr>
+</table>
+<p class="enddd"></p>
+</dd>
+<dt>dependent_varname </dt>
+<dd><p class="startdd">TEXT. The name of the column containing the dependent variable.</p>
+<p class="enddd"></p>
+</dd>
+<dt>independent_varname </dt>
+<dd><p class="startdd">TEXT. Expression list to evaluate for the independent variables. An intercept variable is not assumed. The number of independent variables cannot exceed 65535.</p>
+<p class="enddd"></p>
+</dd>
+<dt>ref_category (optional) </dt>
+<dd><p class="startdd">INTEGER, default: 0. The reference category ranges from [0, <em>numCategories</em> &ndash; 1].</p>
+<p class="enddd"></p>
+</dd>
+<dt>optimizer_params (optional) </dt>
+<dd>VARCHAR, default: NULL, which uses the default values of optimizer parameters. It should be a string that contains pairs of 'key=value' separated by commas. Supported parameters with their default values: max_iter=20, optimizer='irls', precision=1e-4. Currently, only 'irls' and 'newton' are allowed for 'optimizer'.  </dd>
+</dl>
+</dd></dl>
+<dl class="section note"><dt>Note</dt><dd>Table names can be optionally schema qualified and table and column names should follow the same case-sensitivity and quoting rules as in the database.</dd></dl>
+<p><a class="anchor" id="predict"></a></p><dl class="section user"><dt>Prediction Function</dt><dd>The prediction function is provided to estimate the conditional mean given a new predictor. It has the following syntax: <pre class="syntax">
+mlogregr_predict(
+    model_table,
+    new_data_table,
+    id_col_name,
+    output_table,
+    type)
+</pre></dd></dl>
+<p><b>Arguments</b> </p><dl class="arglist">
+<dt>model_table </dt>
+<dd><p class="startdd">TEXT. Name of the table containing the multilogistic model. This should be the output table returned from <em>mlogregr_train</em>.</p>
+<p class="enddd"></p>
+</dd>
+<dt>new_data_table </dt>
+<dd><p class="startdd">TEXT. Name of the table containing prediction data. This table is expected to contain the same features that were used during training. The table should also contain <em>id_col_name</em> used for identifying each row.</p>
+<p class="enddd"></p>
+</dd>
+<dt>id_col_name </dt>
+<dd><p class="startdd">TEXT. Name of the column containing id information in the source data. This is a mandatory argument and is used for correlating prediction table rows with the source. The values of this column are expected to be unique for each tuple. </p>
+<p class="enddd"></p>
+</dd>
+<dt>output_table </dt>
+<dd><p class="startdd">TEXT. Name of the table to output prediction results to. If this table already exists then an error is returned. This output table contains the <em>id_col_name</em> column giving the 'id' for each prediction.</p>
+<p>If <em>type</em> = 'response', then the table has a single additional column with the prediction value of the response. The type of this column depends on the type of the response variable used during training.</p>
+<p>If <em>type</em> = 'prob', then the table has multiple additional columns, one for each possible category. The columns are labeled as 'estimated_prob_<em>category_value</em>', where <em>category_value</em> represents the values of categories (0 to K-1).</p>
+<p class="enddd"></p>
+</dd>
+<dt>type </dt>
+<dd><p class="startdd">TEXT, optional, default: 'response'.</p>
+<p>When <em>type</em> = 'prob', the probabilities of each category (including the reference category) are given.</p>
+<p class="enddd">When <em>type</em> = 'response', a single output is provided, which is the predicted category for each tuple, i.e., the category with the highest probability.  </p>
+</dd>
+</dl>
+<p><a class="anchor" id="examples"></a></p><dl class="section user"><dt>Examples</dt><dd></dd></dl>
+<ol type="1">
+<li>Create the training data table. <pre class="example">
+DROP TABLE IF EXISTS test3;
+CREATE TABLE test3 (
+    feat1 INTEGER,
+    feat2 INTEGER,
+    cat INTEGER
+);
+INSERT INTO test3(feat1, feat2, cat) VALUES
+(1,35,1),
+(2,33,0),
+(3,39,1),
+(1,37,1),
+(2,31,1),
+(3,36,0),
+(2,36,1),
+(2,31,1),
+(2,41,1),
+(2,37,1),
+(1,44,1),
+(3,33,2),
+(1,31,1),
+(2,44,1),
+(1,35,1),
+(1,44,0),
+(1,46,0),
+(2,46,1),
+(2,46,2),
+(3,49,1),
+(2,39,0),
+(2,44,1),
+(1,47,1),
+(1,44,1),
+(1,37,2),
+(3,38,2),
+(1,49,0),
+(2,44,0),
+(3,61,2),
+(1,65,2),
+(3,67,1),
+(3,65,2),
+(1,65,2),
+(2,67,2),
+(1,65,2),
+(1,62,2),
+(3,52,2),
+(3,63,2),
+(2,59,2),
+(3,65,2),
+(2,59,0),
+(3,67,2),
+(3,67,2),
+(3,60,2),
+(3,67,2),
+(3,62,2),
+(2,54,2),
+(3,65,2),
+(3,62,2),
+(2,59,2),
+(3,60,2),
+(3,63,2),
+(3,65,2),
+(2,63,1),
+(2,67,2),
+(2,65,2),
+(2,62,2);
+</pre></li>
+<li>Run the multilogistic regression function. <pre class="example">
+DROP TABLE IF EXISTS test3_output;
+DROP TABLE IF EXISTS test3_output_summary;
+SELECT madlib.mlogregr_train('test3',
+                             'test3_output',
+                             'cat',
+                             'ARRAY[1, feat1, feat2]',
+                             0,
+                             'max_iter=20, optimizer=irls, precision=0.0001'
+                             );
+</pre></li>
+<li>View the result: <pre class="example">
+-- Set extended display on for easier reading of output
+\x on
+SELECT * FROM test3_output;
+</pre> Results: <pre class="result">
+-[ RECORD 1 ]--+------------------------------------------------------------
+category       | 1
+ref_category   | 0
+coef           | {1.45474045211601,0.0849956182104023,-0.0172383499601956}
+loglikelihood  | -39.14759930999
+std_err        | {2.13085072854143,0.585021661344715,0.0431487356292144}
+z_stats        | {0.682704063982831,0.145286275409074,-0.39950996729842}
+p_values       | {0.494793861210936,0.884484850387893,0.689517480964129}
+odd_ratios     | {4.28337158128448,1.08871229617973,0.982909380301134}
+condition_no   | 280069.034217586
+num_iterations | 5
+-[ RECORD 2 ]--+------------------------------------------------------------
+category       | 2
+ref_category   | 0
+coef           | {-7.12908167688326,0.87648787696783,0.127886153027713}
+loglikelihood  | -39.14759930999
+std_err        | {2.52104008297868,0.639575886323862,0.0445757462972303}
+z_stats        | {-2.82783352990566,1.37042045472615,2.86896269049475}
+p_values       | {0.00468641692252239,0.170555690550421,0.00411820373218956}
+odd_ratios     | {0.000801455044349486,2.40244718187161,1.13642361694409}
+condition_no   | 280069.034217586
+num_iterations | 5
+</pre></li>
+<li>View all parameters used during the training. <pre class="example">
+\x on
+SELECT * FROM test3_output_summary;
+</pre> Results: <pre class="result">
+-[ RECORD 1 ]------------+--------------------------------------------------
+method                   | mlogregr
+source_table             | test3
+out_table                | test3_output
+dependent_varname        | cat
+independent_varname      | ARRAY[1, feat1, feat2]
+optimizer_params         | max_iter=20, optimizer=irls, precision=0.0001
+ref_category             | 0
+num_categories           | 3
+num_rows_processed       | 57
+num_missing_rows_skipped | 0
+variance_covariance      | {{4.54052482732554,3.01080140927409,-0.551901021610841,-0.380754019900586,-0.0784151362989211,-0.0510014701718268},{3.01080140927409,6.35564309998514,-0.351902272617974,-0.766730342510818,-0.051877550252329,-0.0954432017695571},{-0.551901021610841,-0.351902272617974,0.34225034424253,0.231740815080827,-0.00117521831508331,-0.00114043921343171},{-0.380754019900586,-0.766730342510818,0.231740815080827,0.409057314366954,-0.000556498286025567,-0.000404735750986327},{-0.0784151362989211,-0.051877550252329,-0.00117521831508331,-0.000556498286025569,0.00186181338639984,0.00121080293928445},{-0.0510014701718268,-0.0954432017695571,-0.00114043921343171,-0.000404735750986325,0.00121080293928446,0.00198699715795504}}
+coef                     | {{1.45474045211601,0.0849956182104023,-0.0172383499601956},{-7.12908167688326,0.87648787696783,0.127886153027713}}
+</pre></li>
+</ol>
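+<p>For further illustration, the trained model from the previous step can be passed to <em>mlogregr_predict</em>, as described in the Prediction Function section above. The following is a minimal sketch, not part of the original example set; the <em>id</em> column and the output table name <em>test3_prd_prob</em> are assumed:</p>
+<pre class="example">
+-- Add an id column, since mlogregr_predict requires a unique row identifier
+ALTER TABLE test3 ADD COLUMN id SERIAL;
+-- Predict the probability of every category for each row
+SELECT madlib.mlogregr_predict('test3_output',
+                               'test3',
+                               'id',
+                               'test3_prd_prob',
+                               'prob');
+SELECT * FROM test3_prd_prob;
+</pre>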
+<p><a class="anchor" id="background"></a></p><dl class="section user"><dt>Technical Background</dt><dd>Multinomial logistic regression models the outcomes of categorical dependent random variables (denoted <img class="formulaInl" alt="$ Y \in \{ 0,1,2 \ldots k \} $" src="form_94.png"/>). The model assumes that the conditional mean of the dependent categorical variables is the logistic function of an affine combination of independent variables (usually denoted <img class="formulaInl" alt="$ \boldsymbol x $" src="form_59.png"/>). That is, <p class="formulaDsp">
+<img class="formulaDsp" alt="\[ E[Y \mid \boldsymbol x] = \sigma(\boldsymbol c^T \boldsymbol x) \]" src="form_95.png"/>
+</p>
+ for some unknown vector of coefficients <img class="formulaInl" alt="$ \boldsymbol c $" src="form_79.png"/> and where <img class="formulaInl" alt="$ \sigma(x) = \frac{1}{1 + \exp(-x)} $" src="form_96.png"/> is the logistic function. Multinomial logistic regression finds the vector of coefficients <img class="formulaInl" alt="$ \boldsymbol c $" src="form_79.png"/> that maximizes the likelihood of the observations.</dd></dl>
+<p>Let</p><ul>
+<li><img class="formulaInl" alt="$ \boldsymbol y \in \{ 0,1 \}^{n \times k} $" src="form_97.png"/> denote the vector of observed dependent variables, with <img class="formulaInl" alt="$ n $" src="form_11.png"/> rows and <img class="formulaInl" alt="$ k $" src="form_98.png"/> columns, containing the observed values of the dependent variable,</li>
+<li><img class="formulaInl" alt="$ X \in \mathbf R^{n \times k} $" src="form_99.png"/> denote the design matrix with <img class="formulaInl" alt="$ k $" src="form_98.png"/> columns and <img class="formulaInl" alt="$ n $" src="form_11.png"/> rows, containing all observed vectors of independent variables <img class="formulaInl" alt="$ \boldsymbol x_i $" src="form_100.png"/> as rows.</li>
+</ul>
+<p>By definition, </p><p class="formulaDsp">
+<img class="formulaDsp" alt="\[ P[Y = y_i | \boldsymbol x_i] = \sigma((-1)^{y_i} \cdot \boldsymbol c^T \boldsymbol x_i) \,. \]" src="form_101.png"/>
+</p>
+<p> Maximizing the likelihood <img class="formulaInl" alt="$ \prod_{i=1}^n \Pr(Y = y_i \mid \boldsymbol x_i) $" src="form_102.png"/> is equivalent to maximizing the log-likelihood <img class="formulaInl" alt="$ \sum_{i=1}^n \log \Pr(Y = y_i \mid \boldsymbol x_i) $" src="form_103.png"/>, which simplifies to </p><p class="formulaDsp">
+<img class="formulaDsp" alt="\[ l(\boldsymbol c) = -\sum_{i=1}^n \log(1 + \exp((-1)^{y_i} \cdot \boldsymbol c^T \boldsymbol x_i)) \,. \]" src="form_104.png"/>
+</p>
+<p> The Hessian of this objective is <img class="formulaInl" alt="$ H = -X^T A X $" src="form_105.png"/> where <img class="formulaInl" alt="$ A = \text{diag}(a_1, \dots, a_n) $" src="form_106.png"/> is the diagonal matrix with <img class="formulaInl" alt="$ a_i = \sigma(\boldsymbol c^T \boldsymbol x) \cdot \sigma(-\boldsymbol c^T \boldsymbol x) \,. $" src="form_107.png"/> Since <img class="formulaInl" alt="$ H $" src="form_108.png"/> is non-positive definite, <img class="formulaInl" alt="$ l(\boldsymbol c) $" src="form_80.png"/> is concave, so maximizing it is a convex optimization problem. There are many techniques for solving convex optimization problems. Currently, logistic regression in MADlib can use:</p><ul>
+<li>Iteratively Reweighted Least Squares</li>
+</ul>
+<p>We estimate the standard error for coefficient <img class="formulaInl" alt="$ i $" src="form_33.png"/> as </p><p class="formulaDsp">
+<img class="formulaDsp" alt="\[ \mathit{se}(c_i) = \left( (X^T A X)^{-1} \right)_{ii} \,. \]" src="form_109.png"/>
+</p>
+<p> The Wald z-statistic is </p><p class="formulaDsp">
+<img class="formulaDsp" alt="\[ z_i = \frac{c_i}{\mathit{se}(c_i)} \,. \]" src="form_110.png"/>
+</p>
+<p>The Wald <img class="formulaInl" alt="$ p $" src="form_111.png"/>-value for coefficient <img class="formulaInl" alt="$ i $" src="form_33.png"/> gives the probability (under the assumptions inherent in the Wald test) of seeing a value at least as extreme as the one observed, provided that the null hypothesis ( <img class="formulaInl" alt="$ c_i = 0 $" src="form_112.png"/>) is true. Letting <img class="formulaInl" alt="$ F $" src="form_113.png"/> denote the cumulative distribution function of a standard normal distribution, the Wald <img class="formulaInl" alt="$ p $" src="form_111.png"/>-value for coefficient <img class="formulaInl" alt="$ i $" src="form_33.png"/> is therefore </p><p class="formulaDsp">
+<img class="formulaDsp" alt="\[ p_i = \Pr(|Z| \geq |z_i|) = 2 \cdot (1 - F( |z_i| )) \]" src="form_114.png"/>
+</p>
+<p> where <img class="formulaInl" alt="$ Z $" src="form_115.png"/> is a standard normally distributed random variable.</p>
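+<p>For illustration, the third coefficient of category 2 in the example above has z = 2.869, giving p = 2 &middot; (1 &minus; F(2.869)) &asymp; 0.0041, which matches the third entry of the reported <em>p_values</em>.</p>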
+<p>The odds ratio for coefficient <img class="formulaInl" alt="$ i $" src="form_33.png"/> is estimated as <img class="formulaInl" alt="$ \exp(c_i) $" src="form_116.png"/>.</p>
+<p>The condition number is computed as <img class="formulaInl" alt="$ \kappa(X^T A X) $" src="form_117.png"/> during the iteration immediately <em>preceding</em> convergence (i.e., <img class="formulaInl" alt="$ A $" src="form_14.png"/> is computed using the coefficients of the previous iteration). A large condition number (say, more than 1000) indicates the presence of significant multicollinearity.</p>
+<p>The multinomial logistic regression uses a default reference category of zero, and the regression coefficients in the output are in the order described below. For a problem with <img class="formulaInl" alt="$ K $" src="form_118.png"/> dependent variables <img class="formulaInl" alt="$ (1, ..., K) $" src="form_119.png"/> and <img class="formulaInl" alt="$ J $" src="form_120.png"/> categories <img class="formulaInl" alt="$ (0, ..., J-1) $" src="form_121.png"/>, let <img class="formulaInl" alt="$ {m_{k,j}} $" src="form_122.png"/> denote the coefficient for dependent variable <img class="formulaInl" alt="$ k $" src="form_98.png"/> and category <img class="formulaInl" alt="$ j $" src="form_123.png"/>. The output is <img class="formulaInl" alt="$ {m_{k_1, j_0}, m_{k_1, j_1} \ldots m_{k_1, j_{J-1}}, m_{k_2, j_0}, m_{k_2, j_1}, \ldots m_{k_2, j_{J-1}} \ldots m_{k_K, j_{J-1}}} $" src="form_124.png"/>. The order is NOT CONSISTENT with the multinomial regression marginal effect calculation with function <em>marginal_mlogregr</em>. This is deliberate because the interfaces of all multinomial regressions (robust, clustered, ...) will be moved to match that used in marginal.</p>
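+<p>For example, with <em>K</em> = 2 variables and <em>J</em> = 3 categories the coefficients are reported in the order <em>m</em><sub>1,0</sub>, <em>m</em><sub>1,1</sub>, <em>m</em><sub>1,2</sub>, <em>m</em><sub>2,0</sub>, <em>m</em><sub>2,1</sub>, <em>m</em><sub>2,2</sub>: all categories for the first variable, followed by all categories for the second variable.</p>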
+<p><a class="anchor" id="literature"></a></p><dl class="section user"><dt>Literature</dt><dd></dd></dl>
+<p>A collection of nice write-ups, with valuable pointers into further literature:</p>
+<p>[1] Annette J. Dobson: An Introduction to Generalized Linear Models, Second Edition. Nov 2001</p>
+<p>[2] Cosma Shalizi: Statistics 36-350: Data Mining, Lecture Notes, 18 November 2009, <a href="http://www.stat.cmu.edu/~cshalizi/350/lectures/26/lecture-26.pdf">http://www.stat.cmu.edu/~cshalizi/350/lectures/26/lecture-26.pdf</a></p>
+<p>[3] Scott A. Czepiel: Maximum Likelihood Estimation of Logistic Regression Models: Theory and Implementation, Retrieved Jul 12 2012, <a href="http://czep.net/stat/mlelr.pdf">http://czep.net/stat/mlelr.pdf</a></p>
+<p><a class="anchor" id="related"></a></p><dl class="section user"><dt>Related Topics</dt><dd></dd></dl>
+<p>File <a class="el" href="multilogistic_8sql__in.html" title="SQL functions for multinomial logistic regression. ">multilogistic.sql_in</a> documenting the multinomial logistic regression functions</p>
+<p><a class="el" href="group__grp__logreg.html">Logistic Regression</a></p>
+</div><!-- contents -->
+</div><!-- doc-content -->
+<!-- start footer part -->
+<div id="nav-path" class="navpath"><!-- id is needed for treeview function! -->
+  <ul>
+    <li class="footer">Generated on Tue May 16 2017 13:24:39 for MADlib by
+    <a href="http://www.doxygen.org/index.html">
+    <img class="footer" src="doxygen.png" alt="doxygen"/></a> 1.8.13 </li>
+  </ul>
+</div>
+</body>
+</html>

http://git-wip-us.apache.org/repos/asf/incubator-madlib-site/blob/b5b51c69/docs/v1.11/group__grp__multinom.html
----------------------------------------------------------------------
diff --git a/docs/v1.11/group__grp__multinom.html b/docs/v1.11/group__grp__multinom.html
new file mode 100644
index 0000000..cc50f5a
--- /dev/null
+++ b/docs/v1.11/group__grp__multinom.html
@@ -0,0 +1,481 @@
+<!-- HTML header for doxygen 1.8.4-->
+<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
+<html xmlns="http://www.w3.org/1999/xhtml">
+<head>
+<meta http-equiv="Content-Type" content="text/xhtml;charset=UTF-8"/>
+<meta http-equiv="X-UA-Compatible" content="IE=9"/>
+<meta name="generator" content="Doxygen 1.8.13"/>
+<meta name="keywords" content="madlib,postgres,greenplum,machine learning,data mining,deep learning,ensemble methods,data science,market basket analysis,affinity analysis,pca,lda,regression,elastic net,huber white,proportional hazards,k-means,latent dirichlet allocation,bayes,support vector machines,svm"/>
+<title>MADlib: Multinomial Regression</title>
+<link href="tabs.css" rel="stylesheet" type="text/css"/>
+<script type="text/javascript" src="jquery.js"></script>
+<script type="text/javascript" src="dynsections.js"></script>
+<link href="navtree.css" rel="stylesheet" type="text/css"/>
+<script type="text/javascript" src="resize.js"></script>
+<script type="text/javascript" src="navtreedata.js"></script>
+<script type="text/javascript" src="navtree.js"></script>
+<script type="text/javascript">
+  $(document).ready(initResizable);
+</script>
+<link href="search/search.css" rel="stylesheet" type="text/css"/>
+<script type="text/javascript" src="search/searchdata.js"></script>
+<script type="text/javascript" src="search/search.js"></script>
+<script type="text/javascript">
+  $(document).ready(function() { init_search(); });
+</script>
+<!-- hack in the navigation tree -->
+<script type="text/javascript" src="eigen_navtree_hacks.js"></script>
+<link href="doxygen.css" rel="stylesheet" type="text/css" />
+<link href="madlib_extra.css" rel="stylesheet" type="text/css"/>
+<!-- google analytics -->
+<script>
+  (function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
+  (i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
+  m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
+  })(window,document,'script','//www.google-analytics.com/analytics.js','ga');
+  ga('create', 'UA-45382226-1', 'madlib.incubator.apache.org');
+  ga('send', 'pageview');
+</script>
+</head>
+<body>
+<div id="top"><!-- do not remove this div, it is closed by doxygen! -->
+<div id="titlearea">
+<table cellspacing="0" cellpadding="0">
+ <tbody>
+ <tr style="height: 56px;">
+  <td id="projectlogo"><a href="http://madlib.incubator.apache.org"><img alt="Logo" src="madlib.png" height="50" style="padding-left:0.5em;" border="0"/ ></a></td>
+  <td style="padding-left: 0.5em;">
+   <div id="projectname">
+   <span id="projectnumber">1.11</span>
+   </div>
+   <div id="projectbrief">User Documentation for MADlib</div>
+  </td>
+   <td>        <div id="MSearchBox" class="MSearchBoxInactive">
+        <span class="left">
+          <img id="MSearchSelect" src="search/mag_sel.png"
+               onmouseover="return searchBox.OnSearchSelectShow()"
+               onmouseout="return searchBox.OnSearchSelectHide()"
+               alt=""/>
+          <input type="text" id="MSearchField" value="Search" accesskey="S"
+               onfocus="searchBox.OnSearchFieldFocus(true)" 
+               onblur="searchBox.OnSearchFieldFocus(false)" 
+               onkeyup="searchBox.OnSearchFieldChange(event)"/>
+          </span><span class="right">
+            <a id="MSearchClose" href="javascript:searchBox.CloseResultsWindow()"><img id="MSearchCloseImg" border="0" src="search/close.png" alt=""/></a>
+          </span>
+        </div>
+</td>
+ </tr>
+ </tbody>
+</table>
+</div>
+<!-- end header part -->
+<!-- Generated by Doxygen 1.8.13 -->
+<script type="text/javascript">
+var searchBox = new SearchBox("searchBox", "search",false,'Search');
+</script>
+</div><!-- top -->
+<div id="side-nav" class="ui-resizable side-nav-resizable">
+  <div id="nav-tree">
+    <div id="nav-tree-contents">
+      <div id="nav-sync" class="sync"></div>
+    </div>
+  </div>
+  <div id="splitbar" style="-moz-user-select:none;" 
+       class="ui-resizable-handle">
+  </div>
+</div>
+<script type="text/javascript">
+$(document).ready(function(){initNavTree('group__grp__multinom.html','');});
+</script>
+<div id="doc-content">
+<!-- window showing the filter options -->
+<div id="MSearchSelectWindow"
+     onmouseover="return searchBox.OnSearchSelectShow()"
+     onmouseout="return searchBox.OnSearchSelectHide()"
+     onkeydown="return searchBox.OnSearchSelectKey(event)">
+</div>
+
+<!-- iframe showing the search results (closed by default) -->
+<div id="MSearchResultsWindow">
+<iframe src="javascript:void(0)" frameborder="0" 
+        name="MSearchResults" id="MSearchResults">
+</iframe>
+</div>
+
+<div class="header">
+  <div class="headertitle">
+<div class="title">Multinomial Regression<div class="ingroups"><a class="el" href="group__grp__super.html">Supervised Learning</a> &raquo; <a class="el" href="group__grp__regml.html">Regression Models</a></div></div>  </div>
+</div><!--header-->
+<div class="contents">
+<div class="toc"><b>Contents</b> <ul>
+<li class="level1">
+<a href="#train">Training Function</a> </li>
+<li class="level1">
+<a href="#predict">Prediction Function</a> </li>
+<li class="level1">
+<a href="#examples">Examples</a> </li>
+<li class="level1">
+<a href="#background">Technical Background</a> </li>
+<li class="level1">
+<a href="#literature">Literature</a> </li>
+<li class="level1">
+<a href="#related">Related Topics</a> </li>
+</ul>
+</div><p>In statistics, multinomial regression is a classification method that generalizes binomial regression to multiclass problems, i.e. with more than two possible discrete outcomes. That is, it is a model that is used to predict the probabilities of the different possible outcomes of a categorically distributed dependent variable, given a set of independent variables (which may be real-valued, binary-valued, categorical-valued, etc.).</p>
+<p><a class="anchor" id="train"></a></p><dl class="section user"><dt>Training Function</dt><dd>The multinomial regression training function has the following syntax: <pre class="syntax">
+multinom(source_table,
+         model_table,
+         dependent_varname,
+         independent_varname,
+         ref_category,
+         link_func,
+         grouping_col,
+         optim_params,
+         verbose
+        )
+</pre></dd></dl>
+<p><b>Arguments</b> </p><dl class="arglist">
+<dt>source_table </dt>
+<dd><p class="startdd">VARCHAR. Name of the table containing the training data.</p>
+<p class="enddd"></p>
+</dd>
+<dt>model_table </dt>
+<dd><p class="startdd">VARCHAR. Name of the generated table containing the model.</p>
+<p>The model table produced by multinom() contains the following columns:</p>
+<table class="output">
+<tr>
+<th>&lt;...&gt; </th><td><p class="starttd">Grouping columns, if provided in input. This could be multiple columns depending on the <code>grouping_col</code> input. </p>
+<p class="endtd"></p>
+</td></tr>
+<tr>
+<th>category </th><td><p class="starttd">VARCHAR. String representation of category value. </p>
+<p class="endtd"></p>
+</td></tr>
+<tr>
+<th>coef </th><td><p class="starttd">FLOAT8[]. Vector of the coefficients in linear predictor. </p>
+<p class="endtd"></p>
+</td></tr>
+<tr>
+<th>log_likelihood </th><td><p class="starttd">FLOAT8. The log-likelihood <img class="formulaInl" alt="$ l(\boldsymbol \beta) $" src="form_93.png"/>. The value will be the same across categories within the same group. </p>
+<p class="endtd"></p>
+</td></tr>
+<tr>
+<th>std_err </th><td><p class="starttd">FLOAT8[]. Vector of the standard errors of the coefficients. </p>
+<p class="endtd"></p>
+</td></tr>
+<tr>
+<th>z_stats </th><td><p class="starttd">FLOAT8[]. Vector of the z-statistics of the coefficients. </p>
+<p class="endtd"></p>
+</td></tr>
+<tr>
+<th>p_values </th><td><p class="starttd">FLOAT8[]. Vector of the p-values of the coefficients. </p>
+<p class="endtd"></p>
+</td></tr>
+<tr>
+<th>num_rows_processed </th><td><p class="starttd">BIGINT. Number of rows processed. </p>
+<p class="endtd"></p>
+</td></tr>
+<tr>
+<th>num_rows_skipped </th><td><p class="starttd">BIGINT. Number of rows skipped due to missing values or failures. </p>
+<p class="endtd"></p>
+</td></tr>
+<tr>
+<th>num_iterations </th><td>INTEGER. Number of iterations actually completed. This would be different from the <code>nIterations</code> argument if a <code>tolerance</code> parameter is provided and the algorithm converges before all iterations are completed.  </td></tr>
+</table>
+<p>A summary table named &lt;model_table&gt;_summary is also created at the same time, which has the following columns: </p><table class="output">
+<tr>
+<th>method </th><td><p class="starttd">VARCHAR. String that describes the model: 'multinom'. </p>
+<p class="endtd"></p>
+</td></tr>
+<tr>
+<th>source_table </th><td><p class="starttd">VARCHAR. Data source table name. </p>
+<p class="endtd"></p>
+</td></tr>
+<tr>
+<th>model_table </th><td><p class="starttd">VARCHAR. Model table name. </p>
+<p class="endtd"></p>
+</td></tr>
+<tr>
+<th>dependent_varname </th><td><p class="starttd">VARCHAR. Expression for dependent variable. </p>
+<p class="endtd"></p>
+</td></tr>
+<tr>
+<th>independent_varname </th><td><p class="starttd">VARCHAR. Expression for independent variables. </p>
+<p class="endtd"></p>
+</td></tr>
+<tr>
+<th>ref_category </th><td><p class="starttd">VARCHAR. String representation of reference category. </p>
+<p class="endtd"></p>
+</td></tr>
+<tr>
+<th>link_func </th><td><p class="starttd">VARCHAR. String that contains the link function parameter; only 'logit' is currently implemented. </p>
+<p class="endtd"></p>
+</td></tr>
+<tr>
+<th>grouping_col </th><td><p class="starttd">VARCHAR. String representation of grouping columns. </p>
+<p class="endtd"></p>
+</td></tr>
+<tr>
+<th>optimizer_params </th><td><p class="starttd">VARCHAR. String that contains optimizer parameters, and has the form of 'optimizer=..., max_iter=..., tolerance=...'. </p>
+<p class="endtd"></p>
+</td></tr>
+<tr>
+<th>num_all_groups </th><td><p class="starttd">INTEGER. Number of groups in glm training. </p>
+<p class="endtd"></p>
+</td></tr>
+<tr>
+<th>num_failed_groups </th><td><p class="starttd">INTEGER. Number of failed groups in glm training. </p>
+<p class="endtd"></p>
+</td></tr>
+<tr>
+<th>total_rows_processed </th><td><p class="starttd">BIGINT. Total number of rows processed in all groups. </p>
+<p class="endtd"></p>
+</td></tr>
+<tr>
+<th>total_rows_skipped </th><td><p class="starttd">BIGINT. Total number of rows skipped in all groups due to missing values or failures. </p>
+<p class="endtd"></p>
+</td></tr>
+</table>
+<p class="enddd"></p>
+</dd>
+<dt>dependent_varname </dt>
+<dd><p class="startdd">VARCHAR. Name of the dependent variable column.</p>
+<p class="enddd"></p>
+</dd>
+<dt>independent_varname </dt>
+<dd><p class="startdd">VARCHAR. Expression list to evaluate for the independent variables. An intercept variable is not assumed. It is common to provide an explicit intercept term by including a single constant <code>1</code> term in the independent variable list.</p>
+<p class="enddd"></p>
+</dd>
+<dt>link_function (optional) </dt>
+<dd><p class="startdd">VARCHAR, default: 'logit'. Parameters for link function. Currently, we support logit. </p>
+<p class="enddd"></p>
+</dd>
+<dt>ref_category (optional) </dt>
+<dd><p class="startdd">VARCHAR, default: '0'. Parameters to specify the reference category. </p>
+<p class="enddd"></p>
+</dd>
+<dt>grouping_col (optional) </dt>
+<dd><p class="startdd">VARCHAR, default: NULL. An expression list used to group the input dataset into discrete groups, running one regression per group. Similar to the SQL "GROUP BY" clause. When this value is NULL, no grouping is used and a single model is generated.</p>
+<p class="enddd"></p>
+</dd>
+<dt>optim_params (optional) </dt>
+<dd><p class="startdd">VARCHAR, default: 'max_iter=100,optimizer=irls,tolerance=1e-6'. Parameters for optimizer. Currently, we support tolerance=[tolerance for relative error between log-likelihoods], max_iter=[maximum iterations to run], optimizer=irls.</p>
+<p class="enddd"></p>
+</dd>
+<dt>verbose (optional) </dt>
+<dd>BOOLEAN, default: FALSE. Provides verbose output of the results of training. </dd>
+</dl>
+<dl class="section note"><dt>Note</dt><dd>For p-values, we just return the computation result directly. Other statistical packages, like 'R', produce the same result, but on printing the result to screen, another format function is used and any p-value that is smaller than the machine epsilon (the smallest positive floating-point number 'x' such that '1 + x != 1') will be printed on screen as "&lt; xxx" (xxx is the value of the machine epsilon). Although the result may look different, they are in fact the same. </dd></dl>
+<p><a class="anchor" id="predict"></a></p><dl class="section user"><dt>Prediction Function</dt><dd>Multinomial regression prediction function has the following format: <pre class="syntax">
+multinom_predict(model_table,
+                 predict_table_input,
+                 output_table,
+                 predict_type,
+                 verbose,
+                 id_column
+                )
+</pre> <b>Arguments</b> <dl class="arglist">
+<dt>model_table </dt>
+<dd><p class="startdd">TEXT. Name of the generated table containing the model, which is the output table from multinom().</p>
+<p class="enddd"></p>
+</dd>
+<dt>predict_table_input </dt>
+<dd><p class="startdd">TEXT. The name of the table containing the data to predict on. The table must contain id column as the primary key.</p>
+<p class="enddd"></p>
+</dd>
+<dt>output_table </dt>
+<dd><p class="startdd">TEXT. Name of the generated table containing the predicted values.</p>
+<p>The model table produced by multinom_predict contains the following columns:</p>
+<table class="output">
+<tr>
+<th>id </th><td><p class="starttd">SERIAL. Column to identify the predicted value. </p>
+<p class="endtd"></p>
+</td></tr>
+<tr>
+<th>category </th><td><p class="starttd">TEXT. Available if <em>predict_type</em> = 'response'. Contains the predicted categories. </p>
+<p class="endtd"></p>
+</td></tr>
+<tr>
+<th>category_value </th><td>FLOAT8. The predicted probability for the specific category_value.  </td></tr>
+</table>
+<p class="enddd"></p>
+</dd>
+<dt>predict_type </dt>
+<dd>TEXT. Either 'response' or 'probability'. Using 'response' gives the predicted category with the largest probability. Using 'probability' gives the predicted probabilities for all categories. </dd>
+<dt>verbose </dt>
+<dd><p class="startdd">BOOLEAN. Control whether verbose is displayed. The default is FALSE. </p>
+<p class="enddd"></p>
+</dd>
+<dt>id_column </dt>
+<dd>TEXT. The name of the id column in the input table. </dd>
+</dl>
+</dd></dl>
+<p><a class="anchor" id="examples"></a></p><dl class="section user"><dt>Examples</dt><dd></dd></dl>
+<ol type="1">
+<li>Create the training data table. <pre class="example">
+DROP TABLE IF EXISTS test3;
+CREATE TABLE test3 (
+    feat1 INTEGER,
+    feat2 INTEGER,
+    cat INTEGER
+);
+INSERT INTO test3(feat1, feat2, cat) VALUES
+(1,35,1),
+(2,33,0),
+(3,39,1),
+(1,37,1),
+(2,31,1),
+(3,36,0),
+(2,36,1),
+(2,31,1),
+(2,41,1),
+(2,37,1),
+(1,44,1),
+(3,33,2),
+(1,31,1),
+(2,44,1),
+(1,35,1),
+(1,44,0),
+(1,46,0),
+(2,46,1),
+(2,46,2),
+(3,49,1),
+(2,39,0),
+(2,44,1),
+(1,47,1),
+(1,44,1),
+(1,37,2),
+(3,38,2),
+(1,49,0),
+(2,44,0),
+(3,61,2),
+(1,65,2),
+(3,67,1),
+(3,65,2),
+(1,65,2),
+(2,67,2),
+(1,65,2),
+(1,62,2),
+(3,52,2),
+(3,63,2),
+(2,59,2),
+(3,65,2),
+(2,59,0),
+(3,67,2),
+(3,67,2),
+(3,60,2),
+(3,67,2),
+(3,62,2),
+(2,54,2),
+(3,65,2),
+(3,62,2),
+(2,59,2),
+(3,60,2),
+(3,63,2),
+(3,65,2),
+(2,63,1),
+(2,67,2),
+(2,65,2),
+(2,62,2);
+</pre></li>
+<li>Run the multinomial regression function. <pre class="example">
+DROP TABLE IF EXISTS test3_output;
+DROP TABLE IF EXISTS test3_output_summary;
+SELECT madlib.multinom('test3',
+                       'test3_output',
+                       'cat',
+                       'ARRAY[1, feat1, feat2]',
+                       '0',
+                       'logit'
+                       );
+</pre></li>
+<li>View the regression results. <pre class="example">
+-- Set extended display on for easier reading of output
+\x on
+SELECT * FROM test3_output;
+</pre></li>
+</ol>
+<p>Result: </p><pre class="result">
+-[ RECORD 1 ]------+------------------------------------------------------------
+category           | 1
+coef               | {1.45474045165731,0.084995618282504,-0.0172383499512136}
+log_likelihood     | -39.1475993094045
+std_err            | {2.13085878785549,0.585023211942952,0.0431489262260687}
+z_stats            | {0.682701481650677,0.145285890452484,-0.399508202380224}
+p_values           | {0.494795493298706,0.884485154314181,0.689518781152604}
+num_rows_processed | 57
+num_rows_skipped   | 0
+iteration          | 6
+-[ RECORD 2 ]------+------------------------------------------------------------
+category           | 2
+coef               | {-7.1290816775109,0.876487877074751,0.127886153038661}
+log_likelihood     | -39.1475993094045
+std_err            | {2.52105418324135,0.639578886139654,0.0445760103748678}
+z_stats            | {-2.82781771407425,1.37041402721253,2.86894569440347}
+p_values           | {0.00468664844488755,0.170557695812408,0.00411842502754068}
+num_rows_processed | 57
+num_rows_skipped   | 0
+iteration          | 6
+</pre><ol type="1">
+<li>Predict the dependent variable using the multinomial model. (This example uses the original data table to perform the prediction. Typically a different test dataset with the same features as the original training dataset would be used for prediction.)</li>
+</ol>
+<pre class="example">
+\x off
+-- Add the id column for prediction function
+ALTER TABLE test3 ADD COLUMN id SERIAL;
+-- Predict probabilities for all categories using the original data
+SELECT madlib.multinom_predict('test3_output', 'test3', 'test3_prd_prob', 'probability');
+-- Display the predicted value
+SELECT * FROM test3_prd_prob;
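+-- A minimal additional sketch (the output table name test3_prd_resp is assumed):
+-- predict only the single most likely category for each row
+SELECT madlib.multinom_predict('test3_output', 'test3', 'test3_prd_resp', 'response');
+SELECT * FROM test3_prd_resp;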
+</pre><p><a class="anchor" id="background"></a></p><dl class="section user"><dt>Technical Background</dt><dd>When link = 'logit', multinomial logistic regression models the outcomes of categorical dependent random variables (denoted <img class="formulaInl" alt="$ Y \in \{ 0,1,2 \ldots k \} $" src="form_94.png"/>). The model assumes that the conditional mean of the dependent categorical variables is the logistic function of an affine combination of independent variables (usually denoted <img class="formulaInl" alt="$ \boldsymbol x $" src="form_59.png"/>). That is, <p class="formulaDsp">
+<img class="formulaDsp" alt="\[ E[Y \mid \boldsymbol x] = \sigma(\boldsymbol c^T \boldsymbol x) \]" src="form_95.png"/>
+</p>
+ for some unknown vector of coefficients <img class="formulaInl" alt="$ \boldsymbol c $" src="form_79.png"/> and where <img class="formulaInl" alt="$ \sigma(x) = \frac{1}{1 + \exp(-x)} $" src="form_96.png"/> is the logistic function. Multinomial logistic regression finds the vector of coefficients <img class="formulaInl" alt="$ \boldsymbol c $" src="form_79.png"/> that maximizes the likelihood of the observations.</dd></dl>
+<p>Let</p><ul>
+<li><img class="formulaInl" alt="$ \boldsymbol y \in \{ 0,1 \}^{n \times k} $" src="form_97.png"/> denote the vector of observed dependent variables, with <img class="formulaInl" alt="$ n $" src="form_11.png"/> rows and <img class="formulaInl" alt="$ k $" src="form_98.png"/> columns, containing the observed values of the dependent variable,</li>
+<li><img class="formulaInl" alt="$ X \in \mathbf R^{n \times k} $" src="form_99.png"/> denote the design matrix with <img class="formulaInl" alt="$ k $" src="form_98.png"/> columns and <img class="formulaInl" alt="$ n $" src="form_11.png"/> rows, containing all observed vectors of independent variables <img class="formulaInl" alt="$ \boldsymbol x_i $" src="form_100.png"/> as rows.</li>
+</ul>
+<p>By definition, </p><p class="formulaDsp">
+<img class="formulaDsp" alt="\[ P[Y = y_i | \boldsymbol x_i] = \sigma((-1)^{y_i} \cdot \boldsymbol c^T \boldsymbol x_i) \,. \]" src="form_101.png"/>
+</p>
+<p> Maximizing the likelihood <img class="formulaInl" alt="$ \prod_{i=1}^n \Pr(Y = y_i \mid \boldsymbol x_i) $" src="form_102.png"/> is equivalent to maximizing the log-likelihood <img class="formulaInl" alt="$ \sum_{i=1}^n \log \Pr(Y = y_i \mid \boldsymbol x_i) $" src="form_103.png"/>, which simplifies to </p><p class="formulaDsp">
+<img class="formulaDsp" alt="\[ l(\boldsymbol c) = -\sum_{i=1}^n \log(1 + \exp((-1)^{y_i} \cdot \boldsymbol c^T \boldsymbol x_i)) \,. \]" src="form_104.png"/>
+</p>
+<p> The Hessian of this objective is <img class="formulaInl" alt="$ H = -X^T A X $" src="form_105.png"/> where <img class="formulaInl" alt="$ A = \text{diag}(a_1, \dots, a_n) $" src="form_106.png"/> is the diagonal matrix with <img class="formulaInl" alt="$ a_i = \sigma(\boldsymbol c^T \boldsymbol x) \cdot \sigma(-\boldsymbol c^T \boldsymbol x) \,. $" src="form_107.png"/> Since <img class="formulaInl" alt="$ H $" src="form_108.png"/> is non-positive definite, <img class="formulaInl" alt="$ l(\boldsymbol c) $" src="form_80.png"/> is concave, so maximizing it is a convex optimization problem. There are many techniques for solving convex optimization problems. Currently, logistic regression in MADlib can use:</p><ul>
+<li>Iteratively Reweighted Least Squares</li>
+</ul>
+<p>We estimate the standard error for coefficient <img class="formulaInl" alt="$ i $" src="form_33.png"/> as </p><p class="formulaDsp">
+<img class="formulaDsp" alt="\[ \mathit{se}(c_i) = \left( (X^T A X)^{-1} \right)_{ii} \,. \]" src="form_109.png"/>
+</p>
+<p> The Wald z-statistic is </p><p class="formulaDsp">
+<img class="formulaDsp" alt="\[ z_i = \frac{c_i}{\mathit{se}(c_i)} \,. \]" src="form_110.png"/>
+</p>
+<p>The Wald <img class="formulaInl" alt="$ p $" src="form_111.png"/>-value for coefficient <img class="formulaInl" alt="$ i $" src="form_33.png"/> gives the probability (under the assumptions inherent in the Wald test) of seeing a value at least as extreme as the one observed, provided that the null hypothesis ( <img class="formulaInl" alt="$ c_i = 0 $" src="form_112.png"/>) is true. Letting <img class="formulaInl" alt="$ F $" src="form_113.png"/> denote the cumulative distribution function of a standard normal distribution, the Wald <img class="formulaInl" alt="$ p $" src="form_111.png"/>-value for coefficient <img class="formulaInl" alt="$ i $" src="form_33.png"/> is therefore </p><p class="formulaDsp">
+<img class="formulaDsp" alt="\[ p_i = \Pr(|Z| \geq |z_i|) = 2 \cdot (1 - F( |z_i| )) \]" src="form_114.png"/>
+</p>
+<p> where <img class="formulaInl" alt="$ Z $" src="form_115.png"/> is a standard normally distributed random variable.</p>
+<p>The odds ratio for coefficient <img class="formulaInl" alt="$ i $" src="form_33.png"/> is estimated as <img class="formulaInl" alt="$ \exp(c_i) $" src="form_116.png"/>.</p>
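+<p>The <em>multinom()</em> model table does not report odds ratios directly, but they can be derived from the coefficients. A minimal sketch against the <em>test3_output</em> table from the example above (the output column aliases are assumed):</p>
+<pre class="example">
+SELECT category,
+       exp(coef[1]) AS or_intercept,  -- odds ratio for the constant term
+       exp(coef[2]) AS or_feat1,      -- odds ratio for feat1
+       exp(coef[3]) AS or_feat2       -- odds ratio for feat2
+FROM test3_output;
+</pre>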
+<p>The condition number is computed as <img class="formulaInl" alt="$ \kappa(X^T A X) $" src="form_117.png"/> during the iteration immediately <em>preceding</em> convergence (i.e., <img class="formulaInl" alt="$ A $" src="form_14.png"/> is computed using the coefficients of the previous iteration). A large condition number (say, more than 1000) indicates the presence of significant multicollinearity.</p>
+<p>The multinomial logistic regression uses a default reference category of zero, and the regression coefficients in the output are in the order described below. For a problem with <img class="formulaInl" alt="$ K $" src="form_118.png"/> dependent variables <img class="formulaInl" alt="$ (1, ..., K) $" src="form_119.png"/> and <img class="formulaInl" alt="$ J $" src="form_120.png"/> categories <img class="formulaInl" alt="$ (0, ..., J-1) $" src="form_121.png"/>, let <img class="formulaInl" alt="$ {m_{k,j}} $" src="form_122.png"/> denote the coefficient for dependent variable <img class="formulaInl" alt="$ k $" src="form_98.png"/> and category <img class="formulaInl" alt="$ j $" src="form_123.png"/>. The output is <img class="formulaInl" alt="$ {m_{k_1, j_0}, m_{k_1, j_1} \ldots m_{k_1, j_{J-1}}, m_{k_2, j_0}, m_{k_2, j_1}, \ldots m_{k_2, j_{J-1}} \ldots m_{k_K, j_{J-1}}} $" src="form_124.png"/>. The order is NOT CONSISTENT with the multinomial regression marginal effect calculation with function <em>marginal_mlogregr</em>. This is deliberate because the interfaces of all multinomial regressions (robust, clustered, ...) will be moved to match that used in marginal.</p>
+<p><a class="anchor" id="literature"></a></p><dl class="section user"><dt>Literature</dt><dd></dd></dl>
+<p>A collection of nice write-ups, with valuable pointers into further literature:</p>
+<p>[1] Annette J. Dobson: An Introduction to Generalized Linear Models, Second Edition. Nov 2001</p>
+<p>[2] Cosma Shalizi: Statistics 36-350: Data Mining, Lecture Notes, 18 November 2009, <a href="http://www.stat.cmu.edu/~cshalizi/350/lectures/26/lecture-26.pdf">http://www.stat.cmu.edu/~cshalizi/350/lectures/26/lecture-26.pdf</a></p>
+<p>[3] Scott A. Czepiel: Maximum Likelihood Estimation of Logistic Regression Models: Theory and Implementation, Retrieved Jul 12 2012, <a href="http://czep.net/stat/mlelr.pdf">http://czep.net/stat/mlelr.pdf</a></p>
+<p><a class="anchor" id="related"></a></p><dl class="section user"><dt>Related Topics</dt><dd></dd></dl>
+<p>File <a class="el" href="multiresponseglm_8sql__in.html" title="SQL functions for multinomial regression. ">multiresponseglm.sql_in</a> documenting the multinomial regression functions</p>
+<p><a class="el" href="group__grp__logreg.html">Logistic Regression</a></p>
+<p><a class="el" href="group__grp__ordinal.html">Ordinal Regression</a></p>
+</div><!-- contents -->
+</div><!-- doc-content -->
+<!-- start footer part -->
+<div id="nav-path" class="navpath"><!-- id is needed for treeview function! -->
+  <ul>
+    <li class="footer">Generated on Tue May 16 2017 13:24:38 for MADlib by
+    <a href="http://www.doxygen.org/index.html">
+    <img class="footer" src="doxygen.png" alt="doxygen"/></a> 1.8.13 </li>
+  </ul>
+</div>
+</body>
+</html>

http://git-wip-us.apache.org/repos/asf/incubator-madlib-site/blob/b5b51c69/docs/v1.11/group__grp__nene.html
----------------------------------------------------------------------
diff --git a/docs/v1.11/group__grp__nene.html b/docs/v1.11/group__grp__nene.html
new file mode 100644
index 0000000..6b6ed21
--- /dev/null
+++ b/docs/v1.11/group__grp__nene.html
@@ -0,0 +1,133 @@
+<!-- HTML header for doxygen 1.8.4-->
+<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
+<html xmlns="http://www.w3.org/1999/xhtml">
+<head>
+<meta http-equiv="Content-Type" content="text/xhtml;charset=UTF-8"/>
+<meta http-equiv="X-UA-Compatible" content="IE=9"/>
+<meta name="generator" content="Doxygen 1.8.13"/>
+<meta name="keywords" content="madlib,postgres,greenplum,machine learning,data mining,deep learning,ensemble methods,data science,market basket analysis,affinity analysis,pca,lda,regression,elastic net,huber white,proportional hazards,k-means,latent dirichlet allocation,bayes,support vector machines,svm"/>
+<title>MADlib: Nearest Neighbors</title>
+<link href="tabs.css" rel="stylesheet" type="text/css"/>
+<script type="text/javascript" src="jquery.js"></script>
+<script type="text/javascript" src="dynsections.js"></script>
+<link href="navtree.css" rel="stylesheet" type="text/css"/>
+<script type="text/javascript" src="resize.js"></script>
+<script type="text/javascript" src="navtreedata.js"></script>
+<script type="text/javascript" src="navtree.js"></script>
+<script type="text/javascript">
+  $(document).ready(initResizable);
+</script>
+<link href="search/search.css" rel="stylesheet" type="text/css"/>
+<script type="text/javascript" src="search/searchdata.js"></script>
+<script type="text/javascript" src="search/search.js"></script>
+<script type="text/javascript">
+  $(document).ready(function() { init_search(); });
+</script>
+<!-- hack in the navigation tree -->
+<script type="text/javascript" src="eigen_navtree_hacks.js"></script>
+<link href="doxygen.css" rel="stylesheet" type="text/css" />
+<link href="madlib_extra.css" rel="stylesheet" type="text/css"/>
+<!-- google analytics -->
+<script>
+  (function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
+  (i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
+  m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
+  })(window,document,'script','//www.google-analytics.com/analytics.js','ga');
+  ga('create', 'UA-45382226-1', 'madlib.incubator.apache.org');
+  ga('send', 'pageview');
+</script>
+</head>
+<body>
+<div id="top"><!-- do not remove this div, it is closed by doxygen! -->
+<div id="titlearea">
+<table cellspacing="0" cellpadding="0">
+ <tbody>
+ <tr style="height: 56px;">
+  <td id="projectlogo"><a href="http://madlib.incubator.apache.org"><img alt="Logo" src="madlib.png" height="50" style="padding-left:0.5em;" border="0"/ ></a></td>
+  <td style="padding-left: 0.5em;">
+   <div id="projectname">
+   <span id="projectnumber">1.11</span>
+   </div>
+   <div id="projectbrief">User Documentation for MADlib</div>
+  </td>
+   <td>        <div id="MSearchBox" class="MSearchBoxInactive">
+        <span class="left">
+          <img id="MSearchSelect" src="search/mag_sel.png"
+               onmouseover="return searchBox.OnSearchSelectShow()"
+               onmouseout="return searchBox.OnSearchSelectHide()"
+               alt=""/>
+          <input type="text" id="MSearchField" value="Search" accesskey="S"
+               onfocus="searchBox.OnSearchFieldFocus(true)" 
+               onblur="searchBox.OnSearchFieldFocus(false)" 
+               onkeyup="searchBox.OnSearchFieldChange(event)"/>
+          </span><span class="right">
+            <a id="MSearchClose" href="javascript:searchBox.CloseResultsWindow()"><img id="MSearchCloseImg" border="0" src="search/close.png" alt=""/></a>
+          </span>
+        </div>
+</td>
+ </tr>
+ </tbody>
+</table>
+</div>
+<!-- end header part -->
+<!-- Generated by Doxygen 1.8.13 -->
+<script type="text/javascript">
+var searchBox = new SearchBox("searchBox", "search",false,'Search');
+</script>
+</div><!-- top -->
+<div id="side-nav" class="ui-resizable side-nav-resizable">
+  <div id="nav-tree">
+    <div id="nav-tree-contents">
+      <div id="nav-sync" class="sync"></div>
+    </div>
+  </div>
+  <div id="splitbar" style="-moz-user-select:none;" 
+       class="ui-resizable-handle">
+  </div>
+</div>
+<script type="text/javascript">
+$(document).ready(function(){initNavTree('group__grp__nene.html','');});
+</script>
+<div id="doc-content">
+<!-- window showing the filter options -->
+<div id="MSearchSelectWindow"
+     onmouseover="return searchBox.OnSearchSelectShow()"
+     onmouseout="return searchBox.OnSearchSelectHide()"
+     onkeydown="return searchBox.OnSearchSelectKey(event)">
+</div>
+
+<!-- iframe showing the search results (closed by default) -->
+<div id="MSearchResultsWindow">
+<iframe src="javascript:void(0)" frameborder="0" 
+        name="MSearchResults" id="MSearchResults">
+</iframe>
+</div>
+
+<div class="header">
+  <div class="summary">
+<a href="#groups">Modules</a>  </div>
+  <div class="headertitle">
+<div class="title">Nearest Neighbors<div class="ingroups"><a class="el" href="group__grp__early__stage.html">Early Stage Development</a></div></div>  </div>
+</div><!--header-->
+<div class="contents">
+<a name="details" id="details"></a><h2 class="groupheader">Detailed Description</h2>
+<p>A collection of methods to create nearest neighbor-based models. </p>
+<table class="memberdecls">
+<tr class="heading"><td colspan="2"><h2 class="groupheader"><a name="groups"></a>
+Modules</h2></td></tr>
+<tr class="memitem:group__grp__knn"><td class="memItemLeft" align="right" valign="top">&#160;</td><td class="memItemRight" valign="bottom"><a class="el" href="group__grp__knn.html">k-Nearest Neighbors</a></td></tr>
+<tr class="memdesc:group__grp__knn"><td class="mdescLeft">&#160;</td><td class="mdescRight">Finds k nearest data points to the given data point and outputs majority vote value of output classes for classification, and average value of target values for regression. <br /></td></tr>
+<tr class="separator:"><td class="memSeparator" colspan="2">&#160;</td></tr>
+</table>
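+<p>As a rough sketch of the idea only (this is not the module's interface; see the <a class="el" href="group__grp__knn.html">k-Nearest Neighbors</a> page for the actual functions), the majority-vote rule can be written in plain SQL, assuming hypothetical <code>train_pts(x, y, label)</code> and <code>test_pts(id, x, y)</code> tables:</p>
+<pre class="example">
+-- Hypothetical sketch: for each test point, rank training points by
+-- squared Euclidean distance and take a majority vote over the k = 3 nearest.
+WITH neighbors AS (
+    SELECT t.id AS test_id,
+           p.label,
+           ROW_NUMBER() OVER (
+               PARTITION BY t.id
+               ORDER BY (t.x - p.x)^2 + (t.y - p.y)^2
+           ) AS nn_rank
+    FROM test_pts t
+    CROSS JOIN train_pts p
+)
+SELECT test_id,
+       MODE() WITHIN GROUP (ORDER BY label) AS predicted_label  -- majority vote
+FROM neighbors
+WHERE nn_rank &lt;= 3
+GROUP BY test_id;
+-- For regression, AVG(target) over the same k neighbors replaces the vote.
+</pre>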
+</div><!-- contents -->
+</div><!-- doc-content -->
+<!-- start footer part -->
+<div id="nav-path" class="navpath"><!-- id is needed for treeview function! -->
+  <ul>
+    <li class="footer">Generated on Tue May 16 2017 13:24:39 for MADlib by
+    <a href="http://www.doxygen.org/index.html">
+    <img class="footer" src="doxygen.png" alt="doxygen"/></a> 1.8.13 </li>
+  </ul>
+</div>
+</body>
+</html>

http://git-wip-us.apache.org/repos/asf/incubator-madlib-site/blob/b5b51c69/docs/v1.11/group__grp__nene.js
----------------------------------------------------------------------
diff --git a/docs/v1.11/group__grp__nene.js b/docs/v1.11/group__grp__nene.js
new file mode 100644
index 0000000..54ffcd8
--- /dev/null
+++ b/docs/v1.11/group__grp__nene.js
@@ -0,0 +1,4 @@
+var group__grp__nene =
+[
+    [ "k-Nearest Neighbors", "group__grp__knn.html", null ]
+];
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-madlib-site/blob/b5b51c69/docs/v1.11/group__grp__ordinal.html
----------------------------------------------------------------------
diff --git a/docs/v1.11/group__grp__ordinal.html b/docs/v1.11/group__grp__ordinal.html
new file mode 100644
index 0000000..8ce5526
--- /dev/null
+++ b/docs/v1.11/group__grp__ordinal.html
@@ -0,0 +1,464 @@
+<!-- HTML header for doxygen 1.8.4-->
+<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
+<html xmlns="http://www.w3.org/1999/xhtml">
+<head>
+<meta http-equiv="Content-Type" content="text/xhtml;charset=UTF-8"/>
+<meta http-equiv="X-UA-Compatible" content="IE=9"/>
+<meta name="generator" content="Doxygen 1.8.13"/>
+<meta name="keywords" content="madlib,postgres,greenplum,machine learning,data mining,deep learning,ensemble methods,data science,market basket analysis,affinity analysis,pca,lda,regression,elastic net,huber white,proportional hazards,k-means,latent dirichlet allocation,bayes,support vector machines,svm"/>
+<title>MADlib: Ordinal Regression</title>
+<link href="tabs.css" rel="stylesheet" type="text/css"/>
+<script type="text/javascript" src="jquery.js"></script>
+<script type="text/javascript" src="dynsections.js"></script>
+<link href="navtree.css" rel="stylesheet" type="text/css"/>
+<script type="text/javascript" src="resize.js"></script>
+<script type="text/javascript" src="navtreedata.js"></script>
+<script type="text/javascript" src="navtree.js"></script>
+<script type="text/javascript">
+  $(document).ready(initResizable);
+</script>
+<link href="search/search.css" rel="stylesheet" type="text/css"/>
+<script type="text/javascript" src="search/searchdata.js"></script>
+<script type="text/javascript" src="search/search.js"></script>
+<script type="text/javascript">
+  $(document).ready(function() { init_search(); });
+</script>
+<!-- hack in the navigation tree -->
+<script type="text/javascript" src="eigen_navtree_hacks.js"></script>
+<link href="doxygen.css" rel="stylesheet" type="text/css" />
+<link href="madlib_extra.css" rel="stylesheet" type="text/css"/>
+<!-- google analytics -->
+<script>
+  (function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
+  (i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
+  m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
+  })(window,document,'script','//www.google-analytics.com/analytics.js','ga');
+  ga('create', 'UA-45382226-1', 'madlib.incubator.apache.org');
+  ga('send', 'pageview');
+</script>
+</head>
+<body>
+<div id="top"><!-- do not remove this div, it is closed by doxygen! -->
+<div id="titlearea">
+<table cellspacing="0" cellpadding="0">
+ <tbody>
+ <tr style="height: 56px;">
+  <td id="projectlogo"><a href="http://madlib.incubator.apache.org"><img alt="Logo" src="madlib.png" height="50" style="padding-left:0.5em;" border="0"/ ></a></td>
+  <td style="padding-left: 0.5em;">
+   <div id="projectname">
+   <span id="projectnumber">1.11</span>
+   </div>
+   <div id="projectbrief">User Documentation for MADlib</div>
+  </td>
+   <td>        <div id="MSearchBox" class="MSearchBoxInactive">
+        <span class="left">
+          <img id="MSearchSelect" src="search/mag_sel.png"
+               onmouseover="return searchBox.OnSearchSelectShow()"
+               onmouseout="return searchBox.OnSearchSelectHide()"
+               alt=""/>
+          <input type="text" id="MSearchField" value="Search" accesskey="S"
+               onfocus="searchBox.OnSearchFieldFocus(true)" 
+               onblur="searchBox.OnSearchFieldFocus(false)" 
+               onkeyup="searchBox.OnSearchFieldChange(event)"/>
+          </span><span class="right">
+            <a id="MSearchClose" href="javascript:searchBox.CloseResultsWindow()"><img id="MSearchCloseImg" border="0" src="search/close.png" alt=""/></a>
+          </span>
+        </div>
+</td>
+ </tr>
+ </tbody>
+</table>
+</div>
+<!-- end header part -->
+<!-- Generated by Doxygen 1.8.13 -->
+<script type="text/javascript">
+var searchBox = new SearchBox("searchBox", "search",false,'Search');
+</script>
+</div><!-- top -->
+<div id="side-nav" class="ui-resizable side-nav-resizable">
+  <div id="nav-tree">
+    <div id="nav-tree-contents">
+      <div id="nav-sync" class="sync"></div>
+    </div>
+  </div>
+  <div id="splitbar" style="-moz-user-select:none;" 
+       class="ui-resizable-handle">
+  </div>
+</div>
+<script type="text/javascript">
+$(document).ready(function(){initNavTree('group__grp__ordinal.html','');});
+</script>
+<div id="doc-content">
+<!-- window showing the filter options -->
+<div id="MSearchSelectWindow"
+     onmouseover="return searchBox.OnSearchSelectShow()"
+     onmouseout="return searchBox.OnSearchSelectHide()"
+     onkeydown="return searchBox.OnSearchSelectKey(event)">
+</div>
+
+<!-- iframe showing the search results (closed by default) -->
+<div id="MSearchResultsWindow">
+<iframe src="javascript:void(0)" frameborder="0" 
+        name="MSearchResults" id="MSearchResults">
+</iframe>
+</div>
+
+<div class="header">
+  <div class="headertitle">
+<div class="title">Ordinal Regression<div class="ingroups"><a class="el" href="group__grp__super.html">Supervised Learning</a> &raquo; <a class="el" href="group__grp__regml.html">Regression Models</a></div></div>  </div>
+</div><!--header-->
+<div class="contents">
+<div class="toc"><b>Contents</b> <ul>
+<li class="level1">
+<a href="#train">Training Function</a> </li>
+<li class="level1">
+<a href="#predict">Prediction Function</a> </li>
+<li class="level1">
+<a href="#examples">Examples</a> </li>
+<li class="level1">
+<a href="#background">Model Details</a> </li>
+<li class="level1">
+<a href="#literature">Literature</a> </li>
+<li class="level1">
+<a href="#related">Related Topics</a> </li>
+</ul>
+</div><p>In statistics, ordinal regression is a type of regression analysis used for predicting an ordinal variable, i.e. a variable whose value exists on an arbitrary scale where only the relative ordering between different values is significant. The two most common types of ordinal regression models are ordered logit, which applies to data that meet the proportional odds assumption, and ordered probit.</p>
+<p><a class="anchor" id="train"></a></p><dl class="section user"><dt>Training Function</dt><dd>The ordinal regression training function has the following syntax: <pre class="syntax">
+ordinal(source_table,
+         model_table,
+         dependent_varname,
+         independent_varname,
+         cat_order,
+         link_func,
+         grouping_col,
+         optim_params,
+         verbose
+        )
+</pre></dd></dl>
+<p><b>Arguments</b> </p><dl class="arglist">
+<dt>source_table </dt>
+<dd><p class="startdd">VARCHAR. Name of the table containing the training data.</p>
+<p class="enddd"></p>
+</dd>
+<dt>model_table </dt>
+<dd><p class="startdd">VARCHAR. Name of the generated table containing the model.</p>
+<p>The model table produced by ordinal() contains the following columns:</p>
+<table class="output">
+<tr>
+<th>&lt;...&gt; </th><td><p class="starttd">Grouping columns, if provided in input. This could be multiple columns depending on the <code>grouping_col</code> input. </p>
+<p class="endtd"></p>
+</td></tr>
+<tr>
+<th>coef_threshold </th><td><p class="starttd">FLOAT8[]. Vector of the threshold coefficients in the linear predictor. The threshold coefficients are the intercepts specific to each categorical level. </p>
+<p class="endtd"></p>
+</td></tr>
+<tr>
+<th>std_err_threshold </th><td><p class="starttd">FLOAT8[]. Vector of the standard errors of the threshold coefficients. </p>
+<p class="endtd"></p>
+</td></tr>
+<tr>
+<th>z_stats_threshold </th><td><p class="starttd">FLOAT8[]. Vector of the z-statistics of the threshold coefficients. </p>
+<p class="endtd"></p>
+</td></tr>
+<tr>
+<th>p_values_threshold </th><td><p class="starttd">FLOAT8[]. Vector of the p-values of the threshold coefficients. </p>
+<p class="endtd"></p>
+</td></tr>
+<tr>
+<th>log_likelihood </th><td><p class="starttd">FLOAT8. The log-likelihood <img class="formulaInl" alt="$ l(\boldsymbol \beta) $" src="form_93.png"/>. The value will be the same across categories within the same group. </p>
+<p class="endtd"></p>
+</td></tr>
+<tr>
+<th>coef_feature </th><td><p class="starttd">FLOAT8[]. Vector of the feature coefficients in the linear predictor. The feature coefficients are the coefficients for the independent variables and are the same across categories. </p>
+<p class="endtd"></p>
+</td></tr>
+<tr>
+<th>std_err_feature </th><td><p class="starttd">FLOAT8[]. Vector of the standard errors of the feature coefficients. </p>
+<p class="endtd"></p>
+</td></tr>
+<tr>
+<th>z_stats_feature </th><td><p class="starttd">FLOAT8[]. Vector of the z-statistics of the feature coefficients. </p>
+<p class="endtd"></p>
+</td></tr>
+<tr>
+<th>p_values_feature </th><td><p class="starttd">FLOAT8[]. Vector of the p-values of the feature coefficients. </p>
+<p class="endtd"></p>
+</td></tr>
+<tr>
+<th>num_rows_processed </th><td><p class="starttd">BIGINT. Number of rows processed. </p>
+<p class="endtd"></p>
+</td></tr>
+<tr>
+<th>num_rows_skipped </th><td><p class="starttd">BIGINT. Number of rows skipped due to missing values or failures. </p>
+<p class="endtd"></p>
+</td></tr>
+<tr>
+<th>num_iterations </th><td>INTEGER. Number of iterations actually completed. This may be less than the <code>max_iter</code> value in <code>optim_params</code> if the <code>tolerance</code> criterion is met before the maximum number of iterations is reached.  </td></tr>
+</table>
+<p>A summary table named &lt;model_table&gt;_summary is also created at the same time, which has the following columns: </p><table class="output">
+<tr>
+<th>method </th><td><p class="starttd">VARCHAR. String that describes the model: 'ordinal'. </p>
+<p class="endtd"></p>
+</td></tr>
+<tr>
+<th>source_table </th><td><p class="starttd">VARCHAR. Data source table name. </p>
+<p class="endtd"></p>
+</td></tr>
+<tr>
+<th>model_table </th><td><p class="starttd">VARCHAR. Model table name. </p>
+<p class="endtd"></p>
+</td></tr>
+<tr>
+<th>dependent_varname </th><td><p class="starttd">VARCHAR. Expression for dependent variable. </p>
+<p class="endtd"></p>
+</td></tr>
+<tr>
+<th>independent_varname </th><td><p class="starttd">VARCHAR. Expression for the independent variables. The independent variables should not include an intercept term; otherwise an error message indicating that the Hessian matrix is not finite is raised, in which case the intercept should be dropped and the function rerun. </p>
+<p class="endtd"></p>
+</td></tr>
+<tr>
+<th>cat_order </th><td><p class="starttd">VARCHAR. String representation of the category order. The default is the categories sorted using Python's default sort order. </p>
+<p class="endtd"></p>
+</td></tr>
+<tr>
+<th>link_func </th><td><p class="starttd">VARCHAR. The link function used; the 'logit' and 'probit' links are currently implemented. </p>
+<p class="endtd"></p>
+</td></tr>
+<tr>
+<th>grouping_col </th><td><p class="starttd">VARCHAR. String representation of grouping columns. </p>
+<p class="endtd"></p>
+</td></tr>
+<tr>
+<th>optimizer_params </th><td><p class="starttd">VARCHAR. String that contains optimizer parameters, and has the form of 'optimizer=..., max_iter=..., tolerance=...'. </p>
+<p class="endtd"></p>
+</td></tr>
+<tr>
+<th>num_all_groups </th><td><p class="starttd">INTEGER. Number of groups in ordinal regression training. </p>
+<p class="endtd"></p>
+</td></tr>
+<tr>
+<th>num_failed_groups </th><td><p class="starttd">INTEGER. Number of failed groups in ordinal regression training. </p>
+<p class="endtd"></p>
+</td></tr>
+<tr>
+<th>total_rows_processed </th><td><p class="starttd">BIGINT. Total number of rows processed in all groups. </p>
+<p class="endtd"></p>
+</td></tr>
+<tr>
+<th>total_rows_skipped </th><td><p class="starttd">BIGINT. Total number of rows skipped in all groups due to missing values or failures. </p>
+<p class="endtd"></p>
+</td></tr>
+</table>
+<p class="enddd"></p>
+</dd>
+<dt>dependent_varname </dt>
+<dd><p class="startdd">VARCHAR. Name of the dependent variable column.</p>
+<p class="enddd"></p>
+</dd>
+<dt>independent_varname </dt>
+<dd><p class="startdd">VARCHAR. Expression list to evaluate for the independent variables. The intercept should not be included here since the cumulative probability force to have intercepts for each category level.</p>
+<p class="enddd"></p>
+</dd>
+<dt>cat_order </dt>
+<dd><p class="startdd">VARCHAR, String that represents the order of category. The order is specified by charactor '&lt;'. </p>
+<p class="enddd"></p>
+</dd>
+<dt>link_function (optional) </dt>
+<dd><p class="startdd">VARCHAR, default: 'logit'. Parameters for link function. Currently, we support logit and probit. </p>
+<p class="enddd"></p>
+</dd>
+<dt>grouping_col (optional) </dt>
+<dd><p class="startdd">VARCHAR, default: NULL. An expression list used to group the input dataset into discrete groups, running one regression per group. Similar to the SQL "GROUP BY" clause. When this value is NULL, no grouping is used and a single model is generated.</p>
+<p class="enddd"></p>
+</dd>
+<dt>optim_params (optional) </dt>
+<dd><p class="startdd">VARCHAR, default: 'max_iter=100,optimizer=irls,tolerance=1e-6'. Parameters for optimizer. Currently, we support tolerance=[tolerance for relative error between log-likelihoods], max_iter=[maximum iterations to run], optimizer=irls.</p>
+<p class="enddd"></p>
+</dd>
+<dt>verbose (optional) </dt>
+<dd>BOOLEAN, default: FALSE. Provides verbose output of the results of training. </dd>
+</dl>
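+<p>For illustration, a call that also exercises the optional arguments might look like the sketch below (the table and column names are hypothetical; the Examples section gives a complete end-to-end run):</p>
+<pre class="example">
+SELECT madlib.ordinal('survey_data',         -- source_table (hypothetical)
+                      'survey_model',        -- model_table
+                      'rating',              -- dependent_varname, coded 1, 2, 3
+                      'ARRAY[age, income]',  -- independent_varname
+                      '1&lt;2&lt;3',               -- cat_order
+                      'probit',              -- link_func
+                      'region',              -- grouping_col: one model per region
+                      'max_iter=50, optimizer=irls, tolerance=1e-6',
+                      FALSE                  -- verbose
+                     );
+</pre>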
+<dl class="section note"><dt>Note</dt><dd>To calculate the standard error the coefficient, we are using the square root of the diagnal elements of the expected Fisher information matrix, which is a by-product of iteratively reweighted least square. This method is used in the original ordinal regression paper by McCullagh(1980). In some software like Stata, the standard error is calculated by the observed information matrix, which is supported by Efron and Hinkley (1978). In R, polr() uses the approximated observed information matrix while the optimization is achieved by first order optimization method. Therefore, there will be some difference on standard error, z-stats and p-value from other software.</dd></dl>
+<p><a class="anchor" id="predict"></a></p><dl class="section user"><dt>Prediction Function</dt><dd>Ordinal regression prediction function has the following format: <pre class="syntax">
+ordinal_predict(
+                    model_table,
+                    predict_table_input,
+                    output_table,
+                    predict_type,
+                    verbose
+               )
+</pre> <b>Arguments</b> <dl class="arglist">
+<dt>model_table </dt>
+<dd><p class="startdd">TEXT. Name of the generated table containing the model, which is the output table from ordinal().</p>
+<p class="enddd"></p>
+</dd>
+<dt>predict_table_input </dt>
+<dd><p class="startdd">TEXT. The name of the table containing the data to predict on. The table must contain id column as the primary key.</p>
+<p class="enddd"></p>
+</dd>
+<dt>output_table </dt>
+<dd><p class="startdd">TEXT. Name of the generated table containing the predicted values.</p>
+<p>The output table produced by ordinal_predict contains the following columns:</p>
+<table class="output">
+<tr>
+<th>id </th><td><p class="starttd">SERIAL. Column to identify the predicted value. </p>
+<p class="endtd"></p>
+</td></tr>
+<tr>
+<th>category </th><td><p class="starttd">TEXT. Available if predict_type = 'response'. Contains the predicted categories. </p>
+<p class="endtd"></p>
+</td></tr>
+<tr>
+<th>category_value </th><td>FLOAT8. The predicted probability for the specific category_value.  </td></tr>
+</table>
+<p class="enddd"></p>
+</dd>
+<dt>predict_type </dt>
+<dd><p class="startdd">TEXT. Either 'response' or 'probability'. Using 'response' will give the predicted category with the largest probability. Using probability will give the predicted probabilities for all categories</p>
+<p class="enddd"></p>
+</dd>
+<dt>verbose </dt>
+<dd>BOOLEAN. Whether verbose output is displayed. </dd>
+</dl>
+</dd></dl>
+<p><a class="anchor" id="examples"></a></p><dl class="section user"><dt>Examples</dt><dd></dd></dl>
+<ol type="1">
+<li>Create the training data table. <pre class="example">
+DROP TABLE IF EXISTS test3;
+CREATE TABLE test3 (
+    feat1 INTEGER,
+    feat2 INTEGER,
+    cat INTEGER
+);
+INSERT INTO test3(feat1, feat2, cat) VALUES
+(1,35,1),
+(2,33,0),
+(3,39,1),
+(1,37,1),
+(2,31,1),
+(3,36,0),
+(2,36,1),
+(2,31,1),
+(2,41,1),
+(2,37,1),
+(1,44,1),
+(3,33,2),
+(1,31,1),
+(2,44,1),
+(1,35,1),
+(1,44,0),
+(1,46,0),
+(2,46,1),
+(2,46,2),
+(3,49,1),
+(2,39,0),
+(2,44,1),
+(1,47,1),
+(1,44,1),
+(1,37,2),
+(3,38,2),
+(1,49,0),
+(2,44,0),
+(3,61,2),
+(1,65,2),
+(3,67,1),
+(3,65,2),
+(1,65,2),
+(2,67,2),
+(1,65,2),
+(1,62,2),
+(3,52,2),
+(3,63,2),
+(2,59,2),
+(3,65,2),
+(2,59,0),
+(3,67,2),
+(3,67,2),
+(3,60,2),
+(3,67,2),
+(3,62,2),
+(2,54,2),
+(3,65,2),
+(3,62,2),
+(2,59,2),
+(3,60,2),
+(3,63,2),
+(3,65,2),
+(2,63,1),
+(2,67,2),
+(2,65,2),
+(2,62,2);
+</pre></li>
+<li>Run the ordinal regression function. <pre class="example">
+DROP TABLE IF EXISTS test3_output;
+DROP TABLE IF EXISTS test3_output_summary;
+SELECT madlib.ordinal('test3',
+                       'test3_output',
+                       'cat',
+                       'ARRAY[feat1, feat2]',
+                       '0&lt;1&lt;2',
+                       'logit'
+                       );
+</pre></li>
+<li>View the regression results. <pre class="example">
+-- Set extended display on for easier reading of output
+\x on
+SELECT * FROM test3_output;
+</pre></li>
+</ol>
+<p>Result: </p><pre class="result">
+-[ RECORD 1 ]------+-------------------------------------------
+coef_threshold     | {4.12831944358935,6.55999442887089}
+std_err_threshold  | {1.3603408170882,1.54843501580999}
+z_stats_threshold  | {3.03476848722806,4.23653195768075}
+p_values_threshold | {0.00240720390579325,2.26998625331282e-05}
+log_likelihood     | -42.1390192418541
+coef_feature       | {0.574822563129293,0.108115645059558}
+std_err_feature    | {0.394064908788145,0.0276025960683975}
+z_stats_feature    | {1.45870020473791,3.91686509456046}
+p_values_feature   | {0.144647639733733,8.9707915817562e-05}
+num_rows_processed | 57
+num_rows_skipped   | 0
+iteration          | 7
+</pre><ol type="1">
+<li>Predict the dependent variable using the ordinal model. (This example uses the original data table to perform the prediction. Typically a different test dataset with the same features as the training dataset would be used.)</li>
+</ol>
+<pre class="example">
+\x off
+-- Add the id column for prediction function
+ALTER TABLE test3 ADD COLUMN id SERIAL;
+-- Predict probabilities for all categories using the original data 
+SELECT ordinal_predict('test3_output','test3', 'test3_prd_prob', 'probability');
+-- Display the predicted value
+SELECT * FROM test3_prd_prob;
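+-- Alternatively, predict only the most likely category for each row
+-- ('test3_prd_resp' is simply another output table name chosen here)
+SELECT ordinal_predict('test3_output','test3', 'test3_prd_resp', 'response');
+SELECT * FROM test3_prd_resp;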
+</pre><p><a class="anchor" id="background"></a></p><dl class="section user"><dt>Model Details</dt><dd></dd></dl>
+<p>The function ordinal() fits the ordinal response model using a cumulative link model. The ordinal response variable, denoted by <img class="formulaInl" alt="$ Y_i $" src="form_125.png"/>, can fall in <img class="formulaInl" alt="$ j = 1,.. , J$" src="form_126.png"/> categories. Then <img class="formulaInl" alt="$ Y_i $" src="form_125.png"/> follows a multinomial distribution with parameter <img class="formulaInl" alt="$\pi$" src="form_127.png"/> where <img class="formulaInl" alt="$\pi_{ij}$" src="form_128.png"/> denotes the probability that the <img class="formulaInl" alt="$i$" src="form_129.png"/>th observation falls in response category <img class="formulaInl" alt="$j$" src="form_130.png"/>. We define the cumulative probabilities as </p><p class="formulaDsp">
+<img class="formulaDsp" alt="\[ \gamma_{ij} = \Pr(Y_i \le j)= \pi_{i1} +...+ \pi_{ij} . \]" src="form_131.png"/>
+</p>
+<p> Next we will consider the logit link for illustration purposes. The logit function is defined as <img class="formulaInl" alt="$ \mbox{logit}(\pi) = \log[\pi/(1-\pi)] $" src="form_132.png"/> and cumulative logits are defined as: </p><p class="formulaDsp">
+<img class="formulaDsp" alt="\[ \mbox{logit}(\gamma_{ij})=\mbox{logit}(\Pr(Y_i \le j))=\log \frac{\Pr(Y_i \le j)}{1-\Pr(Y_i\le j)}, j=1,...,Jāˆ’1 \]" src="form_133.png"/>
+</p>
+<p> so that the cumulative logits are defined for all but the last category.</p>
+<p>A cumulative link model with a logit link, or simply cumulative logit model is a regression model for cumulative logits: </p><p class="formulaDsp">
+<img class="formulaDsp" alt="\[ \mbox{logit}(\gamma_{ij}) = \theta_j - x^T_i \beta \]" src="form_134.png"/>
+</p>
+<p> where <img class="formulaInl" alt="$x_i$" src="form_135.png"/> is a vector of explanatory variables for the <img class="formulaInl" alt="$i$" src="form_129.png"/>th observation and <img class="formulaInl" alt="$\beta$" src="form_136.png"/> is the corresponding set of regression parameters. The <img class="formulaInl" alt="$\{\theta_j\}$" src="form_137.png"/> parameters provide each cumulative logit (for each <img class="formulaInl" alt="$j$" src="form_130.png"/>) with its own intercept. A key point is that the regression part <img class="formulaInl" alt="$x^T_i\beta$" src="form_138.png"/> is independent of <img class="formulaInl" alt="$j$" src="form_130.png"/>, so <img class="formulaInl" alt="$\beta$" src="form_136.png"/> has the same effect for each of the J - 1 cumulative logits. Note that <img class="formulaInl" alt="$x^T_i\beta$" src="form_138.png"/> does not contain an intercept, since the <img class="formulaInl" alt="$\{\theta_j\}$" src="form_137.png"/> act as intercepts. For small values of <img class="formulaInl" alt="$x^T_i\beta$" src="form_138.png"/> the response is likely to fall in the first category and for large values of <img class="formulaInl" alt="$x^T_i\beta$" src="form_138.png"/> the response is likely to fall in the last category. The horizontal displacements of the curves are given by the values of <img class="formulaInl" alt="$\{\theta_j\}$" src="form_137.png"/>.</p>
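+<p>As a rough illustration of this formula against the example above (assuming, as the displayed output suggests, that the first entry of <code>coef_threshold</code> corresponds to the first category in <code>cat_order</code>), the cumulative probability Pr(cat &le; 0) for one row of <code>test3</code> can be reconstructed directly from the fitted coefficients:</p>
+<pre class="example">
+-- gamma_1 = 1 / (1 + exp(-(theta_1 - x'beta))) for the row with id = 1
+SELECT 1.0 / (1.0 + exp(-( (m.coef_threshold)[1]
+                           - ( (m.coef_feature)[1] * t.feat1
+                             + (m.coef_feature)[2] * t.feat2 ) )))
+       AS prob_cat_le_0
+FROM test3_output m, test3 t
+WHERE t.id = 1;
+</pre>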
+<p><a class="anchor" id="literature"></a></p><dl class="section user"><dt>Literature</dt><dd></dd></dl>
+<p>A collection of nice write-ups, with valuable pointers into further literature:</p>
+<p>[1] Peter McCullagh: Regression Models for Ordinal Data, Journal of the Royal Statistical Society. Series B (Methodological), Volume 42, Issue 2 (1980), 109-142</p>
+<p>[2] Rune Haubo B Christensen: Analysis of ordinal data with cumulative link models &ndash; estimation with the R-package ordinal. cran.r-project.org/web/packages/ordinal/vignettes/clm_intro.pdf</p>
+<p><a class="anchor" id="related"></a></p><dl class="section user"><dt>Related Topics</dt><dd></dd></dl>
+<p>File <a class="el" href="ordinal_8sql__in.html" title="SQL functions for ordinal regression. ">ordinal.sql_in</a> documenting the ordinal regression functions</p>
+<p><a class="el" href="group__grp__multinom.html">Multinomial Regression</a></p>
+</div><!-- contents -->
+</div><!-- doc-content -->
+<!-- start footer part -->
+<div id="nav-path" class="navpath"><!-- id is needed for treeview function! -->
+  <ul>
+    <li class="footer">Generated on Tue May 16 2017 13:24:38 for MADlib by
+    <a href="http://www.doxygen.org/index.html">
+    <img class="footer" src="doxygen.png" alt="doxygen"/></a> 1.8.13 </li>
+  </ul>
+</div>
+</body>
+</html>