Posted to issues@madlib.apache.org by "Frank McQuillan (JIRA)" <ji...@apache.org> on 2017/06/19 18:33:00 UTC

[jira] [Comment Edited] (MADLIB-413) Neural Networks - MLP

    [ https://issues.apache.org/jira/browse/MADLIB-413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16054543#comment-16054543 ] 

Frank McQuillan edited comment on MADLIB-413 at 6/19/17 6:32 PM:
-----------------------------------------------------------------

1) Proposed name change:

Change
n_neurons_per_hidden_layer
to
hidden_layer_sizes

which is shorter and still descriptive. I updated the description above.
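
A minimal sketch of what a call could look like with the proposed name (assuming the usual madlib schema; the table and column names below are purely illustrative, not part of the spec):

{code}
SELECT madlib.mlp_classification(
    'iris_data',                 -- source_table (hypothetical example table)
    'mlp_model',                 -- output_table
    'attributes',                -- independent_varname: real-valued array column
    'class',                     -- dependent_varname
    ARRAY[5, 5],                 -- hidden_layer_sizes: two hidden layers of 5 units each
    'step_size=0.5, n_tries=5',  -- optimizer_params (example values from the spec)
    NULL,                        -- weights (no per-row loss weighting)
    'tanh',                      -- activation_function for the hidden layers
    NULL                         -- grouping_cols
);
{code}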

2)
activation_function
is for the hidden layers. The output/final layer will use:

* softmax for classification
* identity for regression
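
For reference, the standard definitions of these two output-layer functions (plain notation):

{code}
softmax:  p_i = exp(z_i) / sum_j exp(z_j)   -- maps raw outputs z to class probabilities
identity: f(z) = z                          -- passes the raw output through for regression
{code}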





> Neural Networks - MLP
> ---------------------
>
>                 Key: MADLIB-413
>                 URL: https://issues.apache.org/jira/browse/MADLIB-413
>             Project: Apache MADlib
>          Issue Type: New Feature
>          Components: Module: Neural Networks
>            Reporter: Caleb Welton
>            Assignee: Cooper Sloan
>             Fix For: v1.12
>
>
> Multilayer perceptron with backpropagation
> Modules:
> * mlp_classification
> * mlp_regression
> Interface
> {code}
> mlp_classification( -- mlp_regression takes the same arguments
> source_table VARCHAR, -- Name of the table containing the training data
> output_table VARCHAR, -- Name of the table in which to store the model
> independent_varname VARCHAR, -- Column name for input features, should be a real-valued array
> dependent_varname VARCHAR, -- Column name for target values, should be a real-valued array of size 1 or greater
> n_neurons_per_hidden_layer INTEGER[], -- Number of units per hidden layer (can be empty or NULL, in which case there are no hidden layers)
> optimizer_params VARCHAR, -- Specified below
> weights VARCHAR, -- Column name for weights, which scale the loss for each input vector; column should contain positive real values
> activation_function VARCHAR, -- One of 'sigmoid' (default), 'tanh', 'relu', or any prefix (e.g. 't', 's')
> grouping_cols -- Columns to group the training data by
> )
> {code}
> where
> {code}
> optimizer_params: -- e.g. "step_size=0.5, n_tries=5"
> {
> step_size DOUBLE PRECISION, -- Learning rate
> n_iterations INTEGER, -- Number of iterations per try
> n_tries INTEGER, -- Total number of training cycles, with random initializations to avoid local minima.
> tolerance DOUBLE PRECISION -- Training stops when the distance between successive weight updates falls below this value (or when n_iterations is reached)
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)